Most members of management were individual contributors beforehand. I mention this because it is remarkably common for people to assign malign intent or stupidity to people doing jobs that they themselves haven't done and frankly don't understand.
I’m not saying you’re wrong. In many cases you’d be right. I’m just saying it’s remarkable how much certainty people have even when it comes to things they know they don’t know.
The statement that the chatbot is conscious is neither true nor untrue in any meaningful sense. The current debate is supported by very strong feelings that we must be conscious and AI must not be.
These feelings have no particular basis in material reality. Consciousness is as well defined as cooties. Does AI have cooties? idk man, do you?
The author overestimates how much ~$5M/yr actually is. A business like Uber isn't happy about that, but it's not even in the top 10 of things they're wasting money on. Moreover, this isn't the engineer's sole fault; it's more the fault of whoever actually approved the expense.
Oh I remember this quote. I thought it was quite a good one because he’s right. At least in the US, apple maps is better than google maps for most purposes.
I'm mostly curious how much of that revenue is actual ARR, which is to say contractually recurring. It is pretty dang rare for a hardware company to have nontrivial ARR.
automotive contracts are typically on 6 year cycles, so tech gets designed into a new car and it's locked in until the next vehicle generation (5-7 years depending on the automaker). year to year sales can fluctuate but are fairly predictable.
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
They're specifically referring to the dead comments from new users in this thread, so it's not insinuation. They're pointing out a higher-than-normal quantity of shill bots flocked to this thread.
The fact that the comments are dead means the system is working as intended, but it's not unreasonable to point out the nature of the comments.
The mistake is thinking that an organic entity won't reject causality when it interferes with their politics.
The interesting thing here is that this isn't an always-on feature. You can actually see the process on a person's face. I was delighted by the recent DOGE depositions because the video quality is good enough to see the guy's eyes stop moving and glaze over.
I wouldn't call something a non-story just because the ultimate end goal was mitigated. The fact that it was attempted is a story, especially when it's meta commentary on a story about trying the same thing _officially_.
Eh. The actors that use these features take a shotgun approach. The result is you see a bunch of dead comments and assume the system is working as intended, while a couple of the less conspicuous comments persist. This happens frequently on specific topics.
Sadly, Portland is a backwater logging outpost and no one outside the PNW gives a shit about Portland or could place it on a map. I'm sorry, it's true.
Nah, I think a lot of the judicial overreach is just pissing off a lot of the regular Hacker News userbase. This fit the law to a T.
And if it's not this, it's blocking the removal of a temporary order. Tons of garbage that was implemented without any law suddenly becomes permanent because a judge decided.
It's definitely just to get people to fly with a valid ID without ambushing the enormous number of people who have been living under a rock and don't realize they need a real ID. Otherwise they'll have a dozen or so people freaking out at the airport every single day for years.
Respectfully, I don’t think the author appreciates that the configurability of Claude Code is its performance advantage. I would much rather just tell it what to do and have it go do it, but I am much more able to do that with a highly configured Claude Code than with Codex, which is pretty much fixed at its out-of-the-box quality level.
I spend most of my engineering time these days not on writing code or even thinking about my product, but on Claude Code configuration (which is portable so should another solution arise I can move it). Whenever Claude Code doesn’t oneshot something, that is an opportunity for improvement.
Heya, I'm the author of the post and I just wanted to say I do appreciate the configurability! As I mentioned in the post, I have been that kind of developer in the past.
> This is a perfect match for engineers who love configuring their environments. I can’t tell you how many full days of my life I’ve lost trying out new Xcode features or researching VS Code extensions that in practice make me 0.05% more productive.
And I tried to be pretty explicit about the idea that this is a very personal choice.
> Personally — and I do emphasize this is a personal decision — I’d rather write a well-spec’d plan and go do something else for 15 minutes. Claude’s Plan Mode is exceptional, and that’s why so many people fall in love with Claude once they try it.
For every person who feels like me today, there's someone who feels like you out there. And for every person who feels like you, there's someone like me (today) who finds it not as valuable to their workflow. That's the reason my conclusion was all about getting folks to try out both to see what works for them — because people change, and it's worth finding out who you really are at this moment in time.
Anyhow, I do think that Codex is also very configurable — I was just trying to emphasize that it's really great out of the box, while Claude Code requires more tuning. But that tuning makes it more personal, which as you mention is a huge plus! As I've touched on in a few posts [^1] [^2], Skills are a big deal to me, because they allow people to achieve high levels of customization without having to be the kind of developer that devotes a lot of time to creating their perfect setup. (Now supported in both Claude Code and Codex.)
I don't want this to turn into a bit of a ramble so I'll just say that I agree with you — but also there's a lot of nuance here because we're all having very personal coding experiences with AI — so it may not entirely sound like I agree with you. :)
Would love to hear more about your specific customizations, to make sure that I'm not missing out on anything valuable. :D
To be quite clear, I hate configuring my environment. I hate it. The farther I get from creating things that people can use, the less I like it. I spend most of my time on claude config not because I enjoy the experience per se but because it's SO USEFUL to do so.
To be honest that's most of my pitch for Codex in the blog post. Codex works great without any configuration, and amazingly with. If you want to spend less time configuring then maybe Codex is the right agentic system for you.
I don't want to restate my thesis too much — but I really do believe it's worth experimenting with these tools every couple of months to see if the latest updates better match your preferences.
Skills, MCPs, /commands, agents, hooks, plugins, etc. I package https://charleswiltgen.github.io/Axiom/ as an easily-installable Claude Code plugin, and AFAICT I'm not able to do that for any other AI coding environment.
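For anyone curious what that kind of customization looks like in practice, a Claude Code Skill is essentially a directory containing a SKILL.md file with YAML frontmatter describing when the agent should use it. Here's a minimal sketch — the skill name and its contents are hypothetical examples, not anything from Axiom:

```markdown
---
name: commit-style
description: Use when writing git commit messages for this repository
---

# Commit style

- Use the imperative mood ("Add", not "Added").
- Keep the subject line under 60 characters.
- Reference the relevant issue number in the body when one exists.
```

The nice thing about this format is that it's plain markdown, so the same skill files are easy to version, share, and (as mentioned above) port between tools that support them.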
That hasn't been my experience, although I'm happy to accept that I'm the problem. Apparently they've released their skills support (?), so I should try again. https://developers.openai.com/codex/skills
Candidly, the accusation of short-sightedness doesn't really make sense when it comes to enthusiasm for a technology that often falls short in practice today, but which in certain cases — and in more cases tomorrow than today — is worth tremendous business value.
If anything, you should accuse them of foolhardy recklessness. They are not the sticks in the mud.
Can a company like OpenAI be worth an estimated 1/5th of Alphabet, which offers a similar product but also has an operating system, a browser, the biggest video platform, the most-used mail client, its own silicon to run that product, the 3rd most popular cloud platform, ...?
I think that is the recklessness in question. Throw in that there is no profit for OpenAI & co., and that everything is fueled by debt, and the picture is grim (IMHO).
> and in more cases tomorrow than today is worth tremendous business value
That's a nice crystal ball you have there. From where I'm standing, model performance improvements have been slowing down for a while now, and without some sort of fundamental breakthrough, I don't see where the business value is going to come from
The prerequisite for me to be wrong is that the technology needs to stop getting better entirely *right now* AND we need to discover ZERO new uses for what exists today.
So if the plateau is unanimously declared to have been reached tomorrow, OR just one more tiny use case appears tomorrow while all others dwindle away to nothing, then you consider yourself to be correct? What a wild assertion!
If the plateau is reached at some higher level of capability, I will remain correct, yes. If use cases are discovered that do not exist today, I will also be correct. You said it in a silly way but you're directionally correct.
No. You state that this is all it would take to be considered tremendous business value. You are moving the goalposts on your own point. My point is that you are taking an absolute position that there is tremendous business value in its current form (a minuscule improvement and one insignificant new use case do not equate to tremendous business value in themselves), and so that remains to be seen.
Rushing to get on board something that looks like it might be the next big thing is often short-sighted. Some recent examples include Windows XP: Tablet Edition and Google Glass.
That's like saying that gambling is shortsighted. It depends entirely on the odds as to whether or not it's wise, but "shortsighted" implies that making the bet precludes some future course of action.
Maybe if you have near-infinite wealth like Google or Microsoft you aren't precluding future choices. For most economic actors, making some bets means not making others.
Companies that are hastily shoehorning AI into their customer support systems could instead devote resources to improving the core product to reduce the need for support.