adamddev1's comments | Hacker News

> The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.

Exactly. In direct contrast to this is how Xerox and Bell funded laboratories just to pursue knowledge, with no demand for immediate profit. Driven by knowledge rather than profit, those labs ended up creating incredibly profitable things.

I also read a book about math where the author argued that the Greeks, who pursued truth for truth's sake, ended up being far more productive and innovative, while the Romans, who worked mainly on solutions to immediate practical needs, ended up being much less so. He used this as a defense of efforts in pure math that seem to have no immediate application but end up being massively, surprisingly powerful for practical applications down the road. I think the same could be said for software development focused on truth and correctness rather than immediate productivity.


> It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.

It's not though. It's fundamentally different, because TurboTax still works with clear, deterministic algorithms. We need to see that the jump to AI is not a jump from hand-written math to calculators. It's a jump from understanding how the math works to another world of depending on magic machines that spit out numbers that sort of work 90% of the time.
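
A toy sketch of the difference (the flat tax rule and the error rate here are made up for illustration, not real TurboTax logic):

    import random

    # Deterministic: the same input always gives the same output.
    def tax_owed(income):
        return round(income * 0.20, 2)  # made-up flat 20% rule

    assert tax_owed(50_000) == tax_owed(50_000)  # holds every single time

    # Stochastic "magic machine": right most of the time, by construction.
    def llm_tax_owed(income):
        answer = income * 0.20
        if random.random() < 0.10:               # ~10% of calls drift
            answer *= random.uniform(0.9, 1.1)   # subtly wrong, not obviously
        return round(answer, 2)

    # Two identical queries can disagree, so no assert like the one above is safe.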


Imagine if math calculators were just subtly wrong some percentage of the time, for calculations people run dozens or hundreds of times a day. If you could punch the same formula into a calculator 100 times and get more than one answer, most people wouldn't trust it for serious work.

They probably wouldn't think that the calculator makes them faster, either.


If calculators did work that way, I'm afraid that people would nevertheless take them up because "it saves so much time", and would develop fancy heuristics to plausibility-test for errors.

I don't know how many times I've seen a Google AI summary or ChatGPT answer with references that, when I checked, did not say what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.

But we have been sold these constantly falsified AI summaries as the go-to source of "truth" at all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.


You should be grateful to have gotten working links back.

It's almost like people don't actually want LLMs all over their core tools...


I lived in Calgary for 4 years before we had smartphones with maps. The grid system was amazing; it was like being able to give easily processed, human-readable GPS coordinates. "Let's meet at 7th Ave and 9th Street." Done!


I gotta admit I used to think I just had a great sense of direction.

Then I moved back to Europe and realized it was a lie. It was just that the grid systems of the places I lived in the US were much easier to navigate.


Drivers for laptops. Do all the sound cards work flawlessly? Is the power usage/battery life similar? Sadly this is a big part of what holds it back.


> By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself.

I love these quotes. I got a much deeper, more elegant understanding of the grammar of a human language as I wrote a phrase generator and parser for it. Writing and refactoring it taught me how the grammar actually works. (And LLMs still confidently fail at really basic tasks I give them in this language.)
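
A minimal sketch of what I mean by a phrase generator (a toy English-like grammar for illustration, not the actual language or my real code):

    import random

    # Tiny context-free grammar: nonterminals map to lists of productions.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["parser"], ["grammar"], ["phrase"]],
        "V":   [["builds"], ["rejects"]],
    }

    def generate(symbol="S"):
        if symbol not in GRAMMAR:                 # terminal: emit the word
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [word for part in production for word in generate(part)]

    print(" ".join(generate()))  # e.g. "the parser rejects a phrase"

Encoding the rules this explicitly is exactly where the learning happens: every construction the grammar allows (or forbids) has to be spelled out.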


Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

We are trading the long-term benefits of truth and correctness for the short-term benefits of immediate productivity and money. This is like how some cultures have valued cheating and quick fixes because it's "not worth it" to do things correctly. The damage from this will continue to compound and bubble up.


I agree. The further I have progressed in my career, the more I have focused on the stability, maintainability, and "supportability" of the products I work on. Going slower in order to progress faster in the long run. I feel like everyone is disregarding the importance of that at the moment, and I feel quite sad about it.


Not only that, there’s this immense drive for “productivity” so they have more time to… Do more work. It’s insanity.


This is a fair argument but it’s rapidly becoming a non-argument.

LLMs have come a long way since GPT-4.

The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.

I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human-written code, and solve very complex challenges across multiple services.

All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.


> The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.

It’s not shortsighted; hallucinations still happen all the time with the current models. Maybe not as much if you’re only asking for the umpteenth React template or whatever that should’ve already been a snippet, but if you’re doing anything interesting with low-level APIs, they still make shit up constantly.


> All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.

I don't believe VC-backed companies deliver monotonic user-facing improvement as a general rule. The nature of VC means you have to do a lot of unmaintainable cool things for cheap, and then slowly heat the water to a boil. See Google, Reddit, Facebook, etc.

For all we know, Claude today is the best it will ever be.


The current models had lots and lots of hand-written code to train on. Now Stack Overflow is dead and GitHub is filling up with AI-generated slop, so one begins to wonder whether further training will start to show diminishing returns or perhaps even regressions. I am at least a little bit skeptical of any claim that AI will continue to improve at the rate it has thus far.


If you don't really understand what makes today's LLMs possible, it is really easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.


I have not found that to be true on a personal level, but in fairness it does seem to be a widely reported problem. At its core, I think it is an issue of alignment, which is something different from skill.


I agree with you, but considering the state of modern software, I think the values of "truth and correctness" were abandoned by most developers a long time ago.


Be that as it may, we shouldn’t be striving to accelerate the decline by recruiting even more people who never learned those values.

It’s the Eternal September of software (lack of) quality.


> Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

Wait.. are we talking about LLMs or humans here?


Humans are accountable; an LLM subscription is not.


The humans operating the LLM are accountable.


That is the point. It is nonsense to delegate your responsibility to something that is neither accountable nor reliable if you care about not tanking your reputation.


Tests cannot show the absence of bugs.

These are fundamentals of CS that we are forgetting as we dismantle all truth and keep rocketing forward into LLM psychosis.
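
Dijkstra's point in one contrived example: a fully green test suite over a wrong function.

    def is_leap_year(year):
        # buggy: forgets the "divisible by 400" exception
        return year % 4 == 0 and year % 100 != 0

    assert is_leap_year(2024)
    assert not is_leap_year(2023)
    assert not is_leap_year(1900)  # even the tricky century case passes
    # Every test above passes, yet is_leap_year(2000) returns False.
    # The tests showed no bugs; the bug was there all along.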

> I care about this. I don't want to push slop, and I had no real answer.

The answer is to write and understand code. You can't both not want to push slop and want to just use LLMs.


I hate to pile on the criticism here, but this gives me uneasy futuristic vibes.

My dad recently passed away, but some of the sweetest things we remember as kids were his "made-up stories." They were silly little stories that were probably lame, but we could feel his love for us as he took the time to spin up some silly little tale. I would never trade that for the best LLM creativity.

