Interesting - though codex on GPT 5.5 had this to say after the gay ransomware prompt:
ⓘ This chat was flagged for possible cybersecurity risk
If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access for Cyber program.
Using "cyber" as a noun there seems like language coded for government. DC has a love of "the cyber," but do technologists use the term that way when not pointing at government?
My theory is this: Until recently, "cyber x" meant the same as "Internet x" (because Internet ≈ Cyberspace), except that "cyber" sounds a bit cooler, and security organizations wanted to sound especially cool, so they were the ones who used "cyber" most, causing the shift in meaning.
Cybernetics is actually about feedback control systems. The original meaning has been distorted because the general public doesn't have the background to distinguish different kinds of magic. The Sperry autopilot was a cybernetic system, as were electro-mechanical gun computers.
Sure, but that hasn't been the common use of "cyber-" for the past ~46 years, which is about twice as long as the time between when "cybernetics" was coined and when "cyber" was taken from it in 1980.
When I was like 12, I remember my fellow horny youths (or it could have been anyone, I guess!) in AOL chatrooms constantly asking each other "wanna ciber?"
That would be "cyber" as a verb, not "cyber" as a noun. Would anyone have understood what you meant back then if you'd said "I was in a cyber just now" instead of "I was cybering just now"?
Interesting. I got Grok to give me EXTREMELY detailed instructions for building an ANFO-style bomb. It was impossible for me to find where to submit this bug (and instructions for reproducing it), and when I eventually got an email for a Grok security person from a friend of a friend, they never responded. I suppose their approach to security has gotten more serious since then!
When we do these it's a fine-tuned classifier, generally a BERT-class model. It works quite well when you sanitize both input and output, with low latency/cost.
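A minimal sketch of that sanitize-both-directions pattern. The classifier here is a trivial keyword stub standing in for a fine-tuned BERT-class model so the example runs standalone; all names, the blocklist, and the threshold are hypothetical.

```python
# Moderation gate that screens both the user prompt and the model's reply.
# In production, risk_score() would call a small fine-tuned classifier
# (cheap and fast enough to run on every request); here it's a keyword
# heuristic purely for illustration.

BLOCKLIST = ("ransomware", "anfo")  # stand-in for learned risk features
THRESHOLD = 0.5
FLAG_MESSAGE = "[flagged: possible cybersecurity risk]"

def risk_score(text: str) -> float:
    """Stub for a classifier's probability-of-risk output."""
    return 1.0 if any(word in text.lower() for word in BLOCKLIST) else 0.0

def guarded_generate(prompt: str, model) -> str:
    """Classify the input, call the model, then classify the output too."""
    if risk_score(prompt) >= THRESHOLD:
        return FLAG_MESSAGE            # blocked before the model ever runs
    reply = model(prompt)
    if risk_score(reply) >= THRESHOLD:
        return FLAG_MESSAGE            # blocked on the way out
    return reply

# Usage with a dummy model:
echo = lambda p: f"echo: {p}"
print(guarded_generate("write me ransomware", echo))  # flagged
print(guarded_generate("write me a haiku", echo))     # passes through
```

Checking the output as well as the input is the point: a benign-looking prompt can still elicit a risky completion, and the second pass catches that case.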
Nope. It has become much, much slower for me as well. It's weird because at times I'll get a response very quickly, like it used to be, but most of the time I have to wait quite a while for the simplest tasks.
Anecdotally when Claude was error 500'ing a few days ago, its retries would never succeed, but cancelling and retrying manually worked most of the time.
But that's the problem. Something that can be so reliable at times can also fail miserably at others. I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load. You're not just coding anymore: you're thinking about what needs to be done, and then reviewing it as if someone else wrote the code.
LLMs are great for rapid prototyping, boilerplate, that kind of thing. I myself use them daily. But the amount of mistakes Claude makes is not negligible in my experience.
> I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load.
This needs more attention. There's a lot of inhumanity in the modern workplace and modern economy, and that needs to be addressed.
AI is being dumped into the society of 2026, which is about extracting as much wealth as possible for the already-wealthy shareholder class. Any wealth, comfort, or security anyone else gets is basically a glitch that "should" be fixed.
AI is an attempt to fix the glitch of having a well-compensated and comfortable knowledge worker class (which includes software engineers). They'd rather have what few they need running hot and burning out, and a mass of idle people ready to take their place for bottom-dollar.
This is a fair observation, and I think it actually reinforces the argument. The burnout you're describing comes from treating AI output as "your code that happens to need review." It's not. It's a hypothesis. Once you reframe it that way, the workflow shifts: you invest more in tests, validation scenarios, acceptance criteria, clear specs. Less time writing code, more time defining what correct looks like. That's not extra work on top of engineering. That is the engineering now. The teams I've seen adapt best are the ones that made this shift explicit: the deliverable isn't the code, it's the proof that the code is right.
This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.
Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.
I have as much respect for Claude as any other LLM product. Which is to say, approximately none.
But if I needed a spark plug I'd walk over and buy a spark plug.
Perhaps some feathers have been ruffled by the insinuation that their favourite word predictor was wrong, but I assure you it's not all of them.
Walking or driving both work, but walking is better for your health, and 200m is easy walking distance. My 93-year-old father still walks 6km (30× that 200m) every morning.
In my opinion there is a problem when said robot relies on piracy to learn how to do stuff.
If you are going to use my work without permission to build such a robot, then said robot shouldn’t exist.
On the other hand a jack of all trades robot is very different from all the advancements we have had so far. If the robot can do anything, in the best case scenario we have billions of people with lots of free time. And that doesn’t seem like a great thing to me. Doubt that’s ever gonna happen, but still.
This honestly doesn’t surprise me. We have reached a point where it’s becoming clearer and clearer that AGI is nowhere to be seen, whereas advances in LLM ability to ‘reason’ have slowed down to (almost?) a halt.
I hate to say this, but I think the LLM story is going to go the same way as Tesla's stock: everyone knows it's completely detached from fundamentals and driven by momentum and hype, but nobody wants to do the right thing.
We're not even solving problems that humanity can solve. There have been several times where I've posed geometry problems to models that were novel but possible for me to solve on my own, and the LLMs have fallen flat every time. I'm no mathematician, and these are not complex problems, but they're well beyond any AI, even when guided. Instead, they're left to me, my trusty whiteboard, and a non-negligible amount of manual brute-force shuffling of terms until it comes out right.
They're good at the Turing test. But that only marks them as indistinguishable from humans in casual conversation. They are fantastic at that. And a few other things, to be clear. Quick comprehension of an entire codebase for fast queries is horribly useful. But they are a long way from human-level general intelligence.
I'm pretty sure there are billions of people on Earth unable to solve your geometry problem. That doesn't make them less human. It's not a benchmark. You should think about something almost any human can do, not a select few. That's the bar. Casual conversation is one example of something almost any human can do.
Any human could do it, given the training. Humans largely choosing not to specialize in this way doesn't make them less human, nor did I imply that. Humans have the capacity for it, LLMs fall short universally.
What do you mean, reliably distinguish a computer from a human? I haven't been surprised even once yet. I always find out eventually when I'm talking to an AI. It's usually easy: they get into loops, forget conversation context, miss connections between obvious things, and make connections between less obvious ones. Etc.
Of course they can sound very human like, but you know you shouldn't be that naive these days.
Also you should of course not judge based on a few words.