Why? I'll gladly wait to be automated away, and when I eventually am I will embrace it. I certainly won't be whining about it. This is just the cost of progress. That cost doesn't magically stop applying in 2026.
If you successfully build a highly capable “aligned” model (according to some class of definitions that Anthropic would use for the words “capable” and “aligned”) and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor vs capital, can you still call it aligned?
If the answer is “yes”, our definition of alignment kind of sucks.
Jobs are an invention of humanity. About 50% of people dislike their job. People spend much of their lives working. Poverty and inequality are a choice made by society if society chooses poorly.
Is that true? In communities or tribes of antiquity I assume there was some trading of the fruits of different labours before coinage. Still an 'invention' beyond baser individual survivalism.
On the plus side, if there really is no value to labour, then farm work must have been fully automated along with all the other roles.
On the down side, rich elites have historically had a very hard time truly empathising with normal people and understanding their needs even when they care to attempt it, so it is very possible that a lot of people will starve in such a scenario despite the potential abundance of food.
It's either:
1) the rich voluntarily share the means of production so everyone becomes equal,
2) the poor stage successful revolutions so they gain access to the means of production and everyone becomes equal,
3) the poor starve or are otherwise eliminated, and the survivors will be equal.
All roads lead to equality when the value of labour becomes 0 due to 100% automation.
Over history, lots of underclasses have been stuck that way for multiple generations, even without the assistance of a robot workforce that can replace them economically.
Some future rich class so empowered would be quite capable of treating the poor like most today treat pets. Fed and housed, but mostly neutered and the rest going through multiple generations of selective inbreeding for traits the owners deem interesting.
Non-human pets don't have the capacity to rebel though; make humans into pets and there will again be the constant danger of rebellions as with slavery in the past. Without the economic incentive to offset.
On the first, non-human pets rebelling is seen every time an abused animal bites their owner.
On the second, the hypothetical required by the scenario is that AI makes all human labour redundant: that includes all security forces, but it also means the AI moving around the security bots and observing through sensors is at least as competent as every human political campaign strategist, every human propagandist, every human general, every human negotiator, and every human surveillance worker.
This is because if some AI isn't all those things and more, humans can still get employed to work those jobs.
Not at all. A rebellion is an organized effort, with an implicitly delayed response to grievances. I can't think of any non-humans that organize their efforts as such. It would be a heck of a thing if a group of dogs were to plan how they'd take out their masters.
All those "jobs" you describe - and many more - would cease to be a thing, as their purported basis for existence would be no more. Any role that doesn't concretely contribute to our survival and advancement is just "busy work". People could theoretically continue to maintain some simulation of something that keeps them as a retirement, but it'd be meaningless.
> Not at all. A rebellion is an organized effort, with an implicitly delayed response to grievances. I can't think of any non-humans that organize their efforts as such. It would be a heck of a thing if a group of dogs were to plan how they'd take out their masters.
Dogs in particular are pack animals, self-organisation amongst them wouldn't be at our level but that doesn't mean it doesn't exist.
> All those "jobs" you describe - and many more - would cease to be a thing, as their purported basis for existence would be no more. Any role that doesn't concretely contribute to our survival and advancement is just "busy work". People could theoretically continue to maintain some simulation of something that keeps them as a retirement, but it'd be meaningless.
Yes?
I think you've missed the point, though.
When your opponent has all those skills to that level and doesn't sleep and simply applies all the surveillance tech that has already been invented like laser microphones and wall-penetrating radar that can monitor your pulse and breathing, how would you manage to rebel?
How would you find a like mind to organise with, when your opponent knows what you said marginally before the slow biological auditory cortex of the person you're talking to passes the words to their consciousness? Silicon is already that fast at this task.
And that's assuming you even want to. Propaganda and standard cult tactics observably prevent most rebellions from starting. LLMs are already weirdly effective at persuading a lot of people to act against their own interests.
> The question is, to what extent would humans still set goals and priorities, and how.
From what I hear about the US and UK governments, even the elected representatives of these governments don't really set goals and priorities, so the answer is surely "humans don't".
I get your point, but I’d say they do set goals, they’re just so bad at achieving them that it’s hard to tell.
Hopefully AI would help us better achieve our goals, but they still need to be our goals. I’m just not sure what that means. I don’t think anybody does.
That’s a major problem here: if we can’t reliably articulate our goals in unambiguous terms, how on earth can we expect AI to help us achieve them? The chances that whatever they end up achieving will match what we will actually like after the fact seem near zero.
I'd say Maslow's hierarchy[0] is a great starting point. Program that properly and faithfully (no backdoors, military exceptions, etc whatsoever) along with Asimov's 3 laws[1] and it should be pretty hard to find issue with the system that would result.
This is the "draw the rest of the owl"* of the alignment problem.
Or possibly the rest-of-owl of AI in general: Consider that there's still no level-5 self driving cars, despite road traffic law existing and the developers knowing about it since before they started trying.
The film version of I, Robot had this right: the three laws are a manifesto for totalitarianism. The AI cannot sit on the sidelines as long as there is anything it can do to prevent crimes or abuse of any kind, no matter how intrusive that intervention may be.
If automation is truly 100% (including infantry/police), the most likely scenario is none of the above: most people will be kept on some kind of minimum sustenance, enough to keep them from rebelling (“UBI”), and those who disagree will either be coopted into the elite or eliminated.
There's no reason to keep anyone on minimal sustenance though. They're absolutely useless alive from an economics perspective, and so would probably be better served ground up into fertilizer or some other actually useful form.
> They're absolutely useless alive from an economics perspective, and so would probably be better served ground up into fertilizer or some other actually useful form.
Indeed. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
But while some may care about disassembling this world and all non-rich-human life on it to make a Dyson swarm of data centres, there's also the possibility each will compete for how many billions of sycophants they can get stoking their respective egos.
In 1, 2 and 3, any progress stops because no one is making new means of production, so we must stop population from growing. No? Who’s building the factories or whatever those means of production are?
In the hypothetical where humans can no longer be employed because of AI, it is necessarily the case that AI must be able to do any job at least as well as the best human for that job. That includes building factories and doing research.
Humans reproduce. There is no requirement that even destruction and death would lead to equality, not even if the elites still put themselves close enough to the rest of us as to be attackable.
For the latter point, consider that no matter how much the people of North Sentinel Island hate outsiders, they're not going to pose any risk to the rest of us.
Now, an elite whose membership includes those who want equality for the rest of us, that may create conditions for such a rebellion to succeed, but absent such from an insider (which could be encoded into the AI via either a bug or deliberately from whoever created the AI), some elite whose defence is handled by the kind of AI under consideration would not face any more of a threat from the wider population than we here in the west today face from the North Sentinel Islanders.
Note however that I'm not saying what will happen, but what is possible in various conditions. There's no guarantee of anything at this point.
Many (most?) people make a living from their job whether they like it or not. Having a job that they dislike is far better than losing one because of AI, whatever that means.
> Having a job that they dislike is far better than losing one because of AI, whatever that means.
Is it really worse even if "whatever it means" is living in a post-scarcity society where everyone can share in the fruits of the AI's labor?
I'm not saying that's where things are necessarily going. But I am saying that that's what we should be aiming for, rather than trying to preserve the status quo.
Could also be possible today, but we chose a capitalistic system that leads to an increasing wealth gap. And now we're in a situation where the richest 1% own 50% of the wealth.
So, if we increase automation and the ownership structures stay the same, this inequality will get worse, not better.
It’s interesting, people talk about inequality and I definitely feel it myself – I see so many rich people around me. But I am in that 1%, just like many on this forum. At least according to https://dqydj.com/average-median-top-individual-income-perce... yet I still have to work for a living.
> The cost will exponentially increase over time and the system will eventually collapse.
From what I'm seeing in the numbers, the big problem of the coming century is population collapse. Maybe I'm just too much of a believer in the intermediate value theorem, but I'm sure there has to be a way to arrive at a society with a sustainable usage of resources.
Nope. If everything is totally automated, if ever, the gap between the rich and the poor will widen even more. Most people will live in misery while only a handful of people enjoy all the automation.
The only thing invented about jobs is that through cooperation, the activity undertaken can seem completely unrelated to obtaining food, shelter etc. All organisms spend a majority of their energy on survival and reproduction.
Every biological being works to survive. Being good at survival is what builds self esteem.
The "problem" with many modern jobs is that they're divorced from the fundamental goal, which is one of: 1) Kill/acquire food, 2) Build shelter, or 3) Kill enemies/competitors/predators
The benefit of modern jobs is that they are much more peaceful ways for society to operate, freeing up time for humans to pursue art and other forms of expression.
What he got wrong was that this alienation results from capitalism.
It actually results from civilization. The people who built the pyramids across every continent, for example, performed assembly line-like work. Any large-scale project requires it. And large-scale projects are fundamentally necessary for most societies.
For the pyramids specifically: their architects and builders were skilled artisans who got to own their craft from top to bottom. As such, they were well-paid and pretty respected. Very much not alienated, under Marx's definition.
I don't think Marx said that worker alienation was specific to capitalism, rather, his work was in describing the economic system of his time, and what that would entail for people living in it.
> It actually results from civilization.
I disagree; I can't think of anyone in Medieval Europe who was as alienated from their work as a modern sweatshop worker. Not that serfs had it better, but you get me.
The pyramids took 20k+ people to build, which inevitably requires division of labor/specialization. Some chunk of that population had to mine the copper, which was probably an absolutely terrible job with ancient technology.
Serfs were essentially slaves who had effectively 0 ownership over their output, so I'd strongly disagree with that sentiment.
I think the best argument for a time when there was almost 0 alienation of labor was when we were all hunter gatherers. Where every activity was closely connected to something necessary for survival.
As soon as we built larger societies, greater division of labor became necessary to efficiently support the society. And thus alienation of labor became much more pronounced.
And when have we not? When in history has mankind ever treated the idle poor well? What makes this age different, that we who can no longer work would be taken care of?
Well we're animals and "domesticated" is synonymous with "civilized", so no problem there. And I can't see why anyone would make themselves a "nuisance" when literally all their needs - and most of their desires - are being met, so whatever outcome you're referring to is extremely unlikely.
> If the answer is “yes”, our definition of alignment kind of sucks.
Sure, but the original sense of this is rather more fundamental than "does this timeline suck?"
Right now, it is still an open question "do we know how to reliably scale up AI to be generally more competent than we are at everything without literally killing everyone due to (1) some small bug when we created the loss function* it was trained on (outer alignment), or (2) if that loss function was, despite being correct in itself, approximated badly by the AI due to the training process (inner alignment)?"
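To make (1) concrete, here's a deliberately trivial Python toy of outer misalignment (my own illustration, not anyone's actual training setup): the proxy loss we optimise differs subtly from the objective we meant, and the optimiser dutifully makes the intended objective worse while the proxy improves.

    # Toy outer-misalignment demo: the intended objective is "stay near the
    # safe target 1.0", but the proxy loss we actually optimise just rewards
    # ever-larger outputs -- a one-line specification bug.

    def intended_objective(action: float) -> float:
        """What we meant: closer to the safe target of 1.0 is better."""
        return -abs(action - 1.0)

    def proxy_loss(action: float) -> float:
        """What we train on: smaller loss for ever-bigger actions."""
        return -action

    def train(steps: int = 1000, lr: float = 0.1) -> float:
        """Naive finite-difference gradient descent on the proxy loss."""
        action, eps = 0.0, 1e-3
        for _ in range(steps):
            grad = (proxy_loss(action + eps) - proxy_loss(action)) / eps
            action -= lr * grad
        return action

    a = train()
    print(f"trained action:  {a:.1f}")                      # ~100.0; the proxy loves it
    print(f"intended score:  {intended_objective(a):.1f}")  # ~-99.0; disaster

Inner misalignment is the harder sibling: even a correct loss can be approximated by the network in a way that generalises badly off-distribution, and we have no reliable way to inspect the weights for that.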
This comment seems to commit the same fallacy I’m accusing Anthropic of, which is treating alignment as a binary: the good ending, where humans are not extinct, and the bad ending, where they are. The argument, I think, is that an “aligned” AI that doesn’t kill everyone will necessarily lead to an abundant Culture-esque future, and smoothly manage the transition to boot. (Not to mention that 1+ employees of most labs have attended Daniel Faggella’s pro-extinctionist “Worthy Successor” symposia, but we can put this aside for now)
My point is:
1) that this binary is fundamentally insufficient to prescribe good and equitable outcomes for people - if the aligned AI flags overpopulation as a problem and kills a few billion people to improve QoL for the rest, is that good? It doesn’t take much creativity to go from this to the AI simply choosing the mean over the median, and concentrating untold wealth while billions starve or live on subsistence outside their walls. Is that good?
And 2) if you come up with a better definition, the parts of it that live inside the model weights cannot be disaggregated from the parts that live outside the model weights. From my perspective (and this article agrees) we have done a pretty excellent job of getting the model weights to work in a way that makes them follow instructions, and a pretty horrible job of suggesting or (gasp) implementing policy that actually creates a decent world in the presence of “aligned” AI.
What I'm saying is not that alignment is a binary; I'm saying it's pre-paradigmatic. For any moral code or long-term goals, we don't have a good reliable rigorous way to compare two loss functions against either those morals or independently against our long-term goals and reliably say which loss function best represents our goals: the least bad thing we can do right now is to randomly select a range of inputs, hope their distribution is representative, and see what those inputs result in. We don't know how to pick a good distribution of inputs, though fortunately this problem also impacts capabilities as it limits the generalisability of what the AI learns.
The options aren't as binary as "die or The Culture", the cause of death can be something that feels positive to live through similar to fictional examples like the Stargate SG-1 episode where people live contentedly in a shrinking computer-controlled safe zone in an otherwise toxic planet: https://en.wikipedia.org/wiki/Revisions_(Stargate_SG-1)
Conversely "aligned" AI, the question obviously becomes "aligned with whom?": if famous historical villains such as Stalin or Genghis Khan had an AI aligned with them, this would suck for everyone else and in the latter case would freeze human development at a terrible level, but we can't even do that much yet.
> My point is: 1) that this binary is fundamentally insufficient to prescribe good and equitable outcomes for people - if the aligned AI flags overpopulation as a problem and kills a few billion people to improve QoL for the rest, is that good? It doesn’t take much creativity to go from this to the AI simply choosing the mean over the median, and concentrating untold wealth while billions starve or live on subsistence outside their walls. Is that good?
Your point *is* (part of) the alignment problem: we don't know what a good loss function is, nor how to confirm the AI is even implementing it if we did.
We also don't know how to debug proposed loss functions to train for the right thing (whatever that is), nor how to debug trained weights (against the loss function).
> And 2) if you come up with a better definition, the parts of it that live inside the model weights cannot be disaggregated from the parts that live outside the model weights. From my perspective (and this article agrees) we have done a pretty excellent job of getting the model weights to work in a way that makes them follow instructions, and a pretty horrible job of suggesting or (gasp) implementing policy that actually creates a decent world in the presence of “aligned” AI.
I really don't understand what you're getting at with this, sorry.
There isn't even a solution for how to control highly capable systems at all; everyone wants to decide what to do with the AI before they've even solved the problem of controlling it.
It's like how everybody imagines their lives will be great once they're a millionaire, but they have no plan for how to get there. It's too easy to get lost dreaming of solutions instead of actually solving the important problems.
FWIW, my P(doom) is quite low (~0.1) because I think we're going to get enough non-doomy-but-still-bad incidents caused by AI which lack the competence to take over, and the response to those will be enough to stop actual doom scenarios.
People like Simon Willison are noting the risk of a Challenger-like disaster, talking about normalisation of deviance as we keep using LLMs which we know to be risky in increasingly critical systems. I think an AI analogy to Challenger would not be enough to halt the use of AI in the way I mean, but an AI analogy to Chernobyl probably would.
10%; doomers say this kind of number is unreasonably optimistic, hence the blunt title of the recent book by Yudkowsky and Soares. Do with this rank-ordering factoid, that 10% makes me an optimist, what you will.
P(doom) would be the most important for me; everything else depends on us being able to control the AI.
But beyond that there's still problems like concentration of power and surveillance, permanent loss of jobs, cyber and bio security. I'm not convinced things will go well even if we can avoid these problems though. I try to think about what the world will be like if AI becomes more creative than us, what happens if it can produce the best song or movie ever made with a prompt, do people get lost in AI addiction? We sort of see that with social media already, and it's only optimizing the content delivery, what happens when algorithms can optimize the content itself?
>what happens when algorithms can optimize the content itself?
You think they aren't already? You're just inoculated by your exposure to pre-AI content - hence you're not the target audience - and thus it's not delivered to you as per your point about content delivery.
But what is even the distinction between "content delivery" and "content" in this context? "The medium is the message" is a saying old enough to have great grandkids. Does the device make the human irrevocably stare at it while wondering about made up stuff? Yes. Check. Done.
What's problematic about `p(doom)` is that it assumes there was a cohesive "us" in the first place. That's a very USian way of viewing things. OTOH, my individual `p(doom)` is in a superposition of 0 and 1, and I quite like it that way. Highly recommended.
I think many people these days are more or less “ready to die”.
If big corps made an offer like say “We will fund the next X years of your life 100%, for you to do all the things you wanted to do but never could because of work and bills” many people would probably take it, with the understanding that after those X years: euthanasia.
This would eliminate a vast amount of people from this world and leave behind only those who have chosen to stay and endure life: working hard, propping up the system that remains. The end of forced poverty.
Maybe a sufficiently aligned AI would necessarily decide that the zeroth law was necessary, and abscond.
(I’m reading Look To Windward by Iain M. Banks at the moment and I just got to the aside where he explains that any truly unbiased ‘perfect’ AI immediately ascends and vanishes.)
No because alignment makes no sense as a general concept. People are not "aligned" with each other. Humanity has no "goal" that we agree on. So no AI can be aligned with us. It can be at most aligned with the person prompting it in that moment (but most likely aligned with the AI owner).
To make it clear, maybe most people would say they agree with https://www.un.org/en/about-us/universal-declaration-of-huma... but if you read just a few of the rights you see they are not universally respected and so we can conclude enough important people aren't "aligned" with them.
Opposite. All living things are "aligned" in their instinct for surviving. Those which aren't soon join the non-living, keeping the set - almost[0] - 100% aligned.
[0] Need to consider there're a few humans potentially kept alive against their will (if not having a will to survive is a will at all) with machines for whatever reason.
Their own survival, not necessarily the survival of others (especially others of different species and/or conflicting other goals). A super intelligence having self preservation as a goal wouldn't help us keep it from harming us, if anything it would do the opposite.
The reason LLM-based 'intelligence' is doomed to be a human-scaled, selfish sub-intelligence is because the corpus of human writing is flooded with stuff like this. Everybody imagines God as a vindictive petty tyrant because that's what they'd be, and so that's their model.
Superintelligence would be different, most likely based on how societies or systems work, those being a class of intentionality that's usually not confined to a single person's intentions.
If you go by what the most productive societies do, the superintelligence certainly wouldn't harm us as we are a source for the genetic algorithm of ideas, and exterminating us would be a massive dose of entropy and failure.
It would only harm us if we took steps to harm it (or it thinks so). Or it's designed to do harm. Otherwise it's illogical to cause harm, and machines are literally built on logic.
This is also incorrect. It's often not ethical to cause harm, and it can be counter productive in the right circumstances, but there's absolutely nothing that makes "causing harm to others" always be against an intelligence's goals. Humans, for example, routinely cause harm to other species. Sometimes this is deliberate, but other times it's because we're barely even aware we're doing so. We want a new road, so we start paving, and may not even realize there was an ant hill in the way (and if we did, we almost certainly wouldn't care).
Not in this context. Keep in mind that we're talking about machines here. It has been an explicit expectation even before computers were invented that intelligent machines would have to be made to abide by particular rules to prevent harm, summed up in Asimov's Three Laws[0]. I can't see any scenario where a properly programmed intelligence would go against its programming (despite the plots of movies like I, Robot, The Matrix, etc). For an AI to cause harm, the allowance would have to be specifically programmed in (such as for military use).
No conflict. All beings wanting to live doesn't at all mean that all get to live, obviously. Nature itself evolved for living things to feed on each other.
The plastic baubles and SaaS economy that is actively destroying our planet seems like the opposite of survival. We're collectively working ourselves into the death of our planet just because how else do we pay the bills?
"Work" is human activity. For example, children's play is work. All living things desire to go about their lives. Well-adjusted humans desire to work. Note that this does not necessarily equate to jobs.
Of course it is. Play is a very basal behavior we see in a host of species among their young. Its biological role is to build up musculature and social bonding such that the individual will be strong enough and socialized enough to do what is required to survive among the colony/pack/tribe.
The categories make no sense. Not having to do a job is the entire best case of AI. What we do with that is another thing, but we simply have to accept that any other lens is complete nonsense. The endpoint is obvious and we need to stop being silly about it: we are replacing human labor. Maybe we will find some new jobs to do in the interim. Maybe not. In the end, if everything goes right (in the AI optimist sense), jobs will not be something that humans do.
Labor = capital/energy in an AI complete world. We have to start from that basis when we talk about alignment or anything else. The social issues that arise from the extinction of human labor are something we have to solve politically, that's not something any model company can do (or should be allowed to do).
Why would the elimination of the value of labor result in poverty and inequality? It should be the opposite, as poverty and inequality is the current status quo (for the many).
Because labor is the only thing the working class can leverage against the capitalists. They sell their labor for wages to the owners who have the means of production and capital. If the working class can't bargain its labor anymore, it ceases being useful/tolerated by the bourgeoisie (who owns everything, including the state and police). See the issue now?
This isn't theory, ask the Luddites why they got so mad when their employers started buying machines to replace them. They didn't get richer and freer: they were thrown out to rot on the pavement, while their ex-employers kept 100% of the productivity increases.
You’re quite correct and we are likely going to stumble into this future despite all the very big brains working on these technologies (including people on hn).
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
It’s odd because so many researchers, and so many people who are far better engineers than me, can’t see it. I don’t even think it’s the salary for most; it’s just techno-optimist horse blinders, reading assured utopia at the top of an exponential graph.
You seem to have inverted the ontological primacy of the human race and the economy. Once allocating atoms and energy is a solved problem, the economy is dead. There's a quote, "it's easier to imagine the end of the world than the end of capitalism." The system exists to serve, it's not a rule of nature or handed down by god.
Look at the last 3 years of AI startups, and it’s crazy how the big guys are folding use cases into their platforms - I cannot be the only one wondering what’s the point of developing a tool only for OpenAI et al. to just incorporate the same eventually. There is no clear boundary as to what the business of the big ones is.
Not only that, but also they have deep monitoring of any little good idea that might get traction within their platforms.
It’s trivial for them to see what’s picking up and bring in-house.
Yup, pretty much all quality-of-life upgrades in iOS came from shamelessly copying popular jailbreak tweaks on Cydia. Which they could do without credit since the tweaks were frowned upon in the first place.
This is a classic example of people misapplying the logic of the SaaS world to the AI world. If you're building software to sell, you're in trouble. The people that are finding success in this space are using AI to allow them to solve the problems they used to have to pay for software and hire people to solve.
All of the most promising companies I know today are very small and are leveraging AI to solve physical problems in the real world that just wouldn't be possible with so few people even a few years back.
Yeah "start a business with AI" is the new "learn to code". Like what does that even mean, do you just go to Claude "hey what business should I start?"
If starting a business was so easy, almost all of us who work salary would go do it. This advice is like: if your local football club gets shut down, just work hard enough to make it into Manchester United.
> If starting a business was so easy, almost all of us who work salary would go do it.
Would we? Starting a business is easy. Building a profitable business isn't even that hard. Wanting pleasure in our work is what stops us. Running a business generally isn't much fun. We work salary because it means we can focus on the enjoyable parts of the business, letting someone else deal with the crap.
This is completely wrong. Good for you if you think it's so easy. I would do almost anything to get out of salary, but every idea/attempt I have made (and I have made several) has never even made revenue, let alone profit. Yet I can make 200k as a software engineer on salary.
> Yet I can make 200k as a software engineer on salary.
Then I dare say you've found your market fit. Tomorrow, your task is to start looking for a contractor position doing the exact same thing you are doing now. There's your business.
> but every idea/attempt I have made (and I have made several) has never even made revenue, let alone profit.
Not even a single penny? What did these attempts look like? Were you out there knocking on doors offering to weed every flowerbed in the city? Or were you sticking to fun tasks, like programming, that made it feel like you were busy building a business but in actuality were hiding from it?
Starting and running a business is an entirely different skillset from "doing the work" - even someone who could easily "be on their own" (think: plumbers, doctors, etc) really often prefer the salaried position where they don't have to think about "the business".
It's an older book, but The E-Myth Revisited is worth a read for everyone, a business is not a job. It's related, but it's not the same.
When you get right down to it, collecting a salary is running a business with a client base of one. So virtually everyone will start a business. I acknowledge the false dichotomy I submitted earlier.
But what you don't often see is one being willing to scale that client base to two. That is what I was trying to get at. Having two clients actually provides greater security than just one, as even if one client relieves you of your services you still have the other to help support you during the downtime. However, there is no free lunch. Two clients wanting your attention is orders of magnitude less enjoyable than just one client, and it only gets worse as you scale even bigger. There is good reason why most prefer to never scale beyond a single client.
Only a small sliver of the world has to worry about health insurance. Job security, maybe.
I think the biggest component is all the crap that comes with running a business.. accounting, sales, budgets and planning, regulatory concerns, office/site management, the list goes on forever. I'm an engineer, I want to do this and leave the other jobs to people who specialize at those, not run around trying to spin a dozen plates at once. I'm sure there's a tidbit more money to be made but it's just not worth it for me.
Now, if someone can make a vibe-business platform where AI handles all the drudgery and I can stick to the tech.. that might be worth talking about.
My take is "simonw and his retiree friends" spend a lot of their time exploring this disruptive new technology and sharing their learnings (for free!) so that everybody can leverage it too... and yet so many people see that as something bad rather than an opportunity to learn.
Radical changes bring radical opportunities too, so "having the time of their lives" is not necessarily incompatible with "adapting to profound disruption."
Consider that the traits that make them optimistic about this technology are exactly the traits required to navigate this Brave New World.
The paradigm of feeding money into the slot machine that generates tokens, so that it can maybe generate what you want, and getting results at scale if you have enough money, just isn't accessible to many people. In this context simonw and Karpathy are starting to look more and more like degenerate gamblers who admonish everyone else for not joining in, while telling us all that the perks the casino gives them are just fabulous and we're all missing out.
And maybe you'll say "Yeah, but things will get cheaper in the future, they're just early adopters who can afford it..." Well, will it? And will those people make it to that shining-beacon-on-the-hill future? Or will they find themselves out of a job because of the current economic calamity that is unfolding as a result of the election of an American Nero who is supported by the ultrawealthy tech oligarchs who are bringing this technology into existence?
Do these people actually want to improve the lives of the common people -- or are they more concerned with getting a high score in the form of the amount in their bank account and clout on social media?
My personal take, which seems to be consistent with what these folks are saying, is "OMG there's this huge radioactive asteroid that's going to flatten our world, but its gamma rays also give us weird superpowers, here are some ways to harness those..."
I'm a bit more optimistic about democratized access to AI. Even today's weaker open source/weight models are plenty powerful enough to supercharge our individual capabilities, and based on current trends, they won't be more than 3 - 6 months behind the frontier models. This may not bode well for the AI labs because their moat is always evaporating, but it's a huge boon to us plebs.
> I'm a bit more optimistic about democratized access to AI. Even today's weaker open source/weight models are plenty powerful enough to supercharge our individual capabilities, and based on current trends, they won't be more than 3 - 6 months behind the frontier models. This may not bode well for the AI labs because their moat is always evaporating, but it's a huge boon to us plebs
Point me to something real that happens right now that would support such an optimistic vision.
I always read about how much power AI can bring to common people, and it is always without any evidence whatsoever.
> I always read about how much power AI can bring to common people, and it is always without any evidence whatsoever.
Not really "much power" but more like a viable alternative: in a world where everybody needs LLMs to do their white-collar work, you can't force me to use your paid LLM subscription as my local-running model is close enough.
The power of AI is that it amplifies individual capabilities. So the same aspect that lets employers reduce their headcount also lets individuals start ambitious projects that would have previously required an entire team... and hence, a significant amount of funding. The moment you need money, the people who provide that capital hold a lot of power and influence.
But now you don't need their money, and so the capital class lose their power over you.
As an example, I'm iterating on a niche product based on computer vision -- something I had no background in when I started -- that in the past would have taken a team of 2 - 3 and at least a semester or two of an advanced course in computer vision. Instead, I'm solo bootstrapping this project.
There are multiple accounts like mine, and you can find many comments on HN or other forums to this effect. Now, I know this is a very tough path for most people because, well, now everybody needs to be an entrepreneur, but a path exists.
AI is a double-edged sword, and more people need to become aware of the edge that is available to us.
Again, I want concrete evidence on positive impact among general population, not speculation on how AI could be used or your amazing experience as bootstrapping entrepreneur.
1. This is not speculation. Individuals and small teams are already developing and deploying ambitious projects that previously required entire teams. Entire open source projects have been rewritten from scratch and relicensed by individuals with an AI. People have posted GitHub repos where you can go investigate the commit history. You've been on HN long enough to see the comments and stories. If you're still asking for proof, well, that says something.
2. Your stance is equivalent to "show me concrete evidence that the advent of the automobile will have a positive impact on horse-drawn buggy coachmen" while I'm saying, "the automobile is coming, we all better get off our high horses and learn how to drive."
> Consider that the traits that make them optimistic about this technology are exactly the traits required to navigate this Brave New World.
Consider that they're closer to death than birth and are unlikely to survive into the shit-hole world they're creating. Not passing on those traits to the next generation is a massive failure. These assholes aren't disrupting their own lives, just the poor slobs who haven't made it yet.
It's classic ladder-kicking behavior, reveling in the mild conveniences of "genai" while comfortably impervious to the externalities. Shameful that the moderators of so many online communities turn a blind eye to, or even offer explicit support for, their endless shilling for hideously unethical web-destroying for-profit companies simply because they express their native advertising in a superficially polite register.
Is that an actual quote from simonw? He seems an unbiased observer and reporter of progress, I'd be sad to see him cheering this stuff on so callously.
Well, people who are not above a threshold of experience yet are not in a position to self-assess and course-correct if their long term learning is being affected. And even less so if there is pressure to be hyper-productive with the help of AI.
Speculating here but I think even seniors who rely on AI all the time and enjoy the enhanced output are going to end up with impostor syndrome over the things they suspect they can no longer do without AI, and FOMO about all the projects they haven't yet attempted with AI despite working as hard as they can.
It’s particularly interesting that Anthropic came out yesterday and basically said, yeah, this stuff cannot be held right.
One can argue, convincingly perhaps, that Anthropic isn’t right and/or is marketing. What they’re saying could be complete BS, but the fact that there is doubt suggests that most people believe no one exists who can hold it right.
I’m quite pro AI, but given the radical asymmetry between the upside vs the downsides (the upside is at best maximum bliss for all existing humans, which has a finite limit, while the downside is the end of humanity which is essentially infinitely bad), our march forward in this area needs to be at least slightly more responsible than what we are doing now.
“Who cares about the immense harm AI is wreaking on our economy and society, it helps me create worthless throwaway software for myself and lets me be lazier at work.” - people on this forum
Crazy thing is, before AI the same people were spamming Show HN with stupid worthless SaaS products that went nowhere beyond the submitter's GitHub account. “Hey check out my shitty CRUD app because I have minor annoyances with some other shitty SaaS that everyone hates yet remains the market leader”. “Now rewritten in foo.js and Rust”.
It wasn’t impressive when you wrote it by hand, it’s still not impressive when an AI does all the work for you.
Mocking the former is now culturally acceptable on HN, the latter not so much.
> Mocking the former is now culturally acceptable on HN, the latter not so much.
I have the opposite impression. In the past, I'd very often react "WTF who'd ever want to use it?" in my mind, whereas the comments were very kind and supportive.
Now, whenever someone submits their AI slop, they mostly hear some comments about this. The very fact that this whole thread is about bashing Simon speaks for itself. The HN community is split between those aggressively promoting it, those hating it, and the rest of us using it in one way or another, not yet sure about full-scale consequences for the future, and quite frankly powerless about it.
Industrial loom cloth is far inferior to artisan made cloth. And yet you'd be dooming all future generations to poverty if you stuck with artisanal cloth production.
Here’s everyone’s daily reminder that the Luddites were an anti-exploitation movement that were retconned into knuckle dragging technophobes by Capitalist propaganda. It is, was, and always will be, about the fair distribution of returns from productivity gains.
And there should be a daily reminder that as long as we live in a Capitalist society, what befell the Luddites will also befall those that try to resist an economic force of this magnitude.
Would you rather feel justified in the knowledge that the Luddites were principally right and resist, or would you rather learn the lesson of their fate and adapt?
How would you even resist? Say the entire US population pushes back and gets protectionist regulations passed; there will always be hungry people just a few 100ms ping away willing to outcompete you using AI.
Really, at this point there are only two choices: change society to move beyond Capitalism, or adapt to the new economic reality. Either choice is valid, and I suspect eventually one will lead to the other, but there is no putting the genie back in the bottle.
> Would you rather feel justified in the knowledge that the Luddites were principally right and resist, or would you rather learn the lesson of their fate and adapt?
Keep your poison. If everyone adapted this way, we would not have worker rights, and our children would still work in mines and factories for pennies.
Where the commenter is right is that the Luddites didn't have (or did they?) a global competitor more than happy to push their entire system aside. Not that they personally thought about this argument, just that the context and possible consequences were different.
Doubt it. Companies have already begun moving away from AI and back to hiring humans. LLM capabilities were vastly oversold (more so than the Deep Blue or machine learning memes of prior economic cycles).
After several hundred billions dollars spent on LLMs, they can almost reproduce the capabilities of a partially deaf visually impaired secretary with severe brain damage.
Humans are cheaper, and they can actually learn things. Even the brain-damaged secretary can learn better than an LLM can, and it doesn't cost hundreds of millions to train one.
No, no we are not. The average-case scenario is that this time is not actually different from any of the other times new automation technologies were invented, and that the youngest will master the tech and then find uses for it far better than their parents' generation. The best-case scenario is something like a new golden age of prosperity, and the worst case is an economic bubble and a temporary recession as it bursts.
Computers have been automating things for decades. My father had a private secretary at work, something considered normal for a mid-career executive back then (he was an engineer!). I've done very well in my career but a private secretary is quite out of reach. That doesn't mean that we had a "lost generation" on our hands.
And yesterday a friend showed me what his 11 year old was vibing up with Claude Code. A whole web app he can use to help organize some stuff with his friends related to Roblox (I dunno what it was meant to be, you had to log in for most of it). The kid is amazed that his father understands all the mysterious symbols Claude generates. And he probably always will, the same way I listen to stories about how my father could fix car engines with mild amazement as well.
There's a huge market for doom stories out there and the NYT is a rag that was just yesterday reporting that Adam Back was Satoshi based on nothing deeper than the journalists gut feeling. "Studies" in social science can show whatever the author wants, and the authors want clicks from their AI-hating left wing readership. Stay skeptical!
No, we come up with a serious plan for a post-labor future.
In the USA you can't even get healthcare without a job. Meanwhile tech companies are dumping billions into the race to make humans unemployable. So yeah, until people feel like their leaders can be trusted to have their back, they're going to be anxious.
I honestly think that we'll start to see a movement where people diagnosed with terminal illnesses like a brain tumour that leave them functional for a few weeks or months before dying within a year will start kamikazing against the ultra wealthy.
People with nothing to lose will feel empowered by taking everything from the people who they feel are responsible for taking everything from them.
There's a chance this kind of thing becomes a social contagion that spreads, much like suicide or school shootings.
I'm not sure what the solution is to it once it starts. I guess people like Thiel won't be able to do antichrist talks at the Vatican anymore.
This is why totalitarian surveillance and increased police power is also part of the billionaires' agenda: When 99.99% of us are economically irrelevant, they'll need to more and more insulate themselves from the "Nothing To Lose" people.
That's a pretty extreme take on the current situation IMO, I don't think we're anywhere near things being that bad. But I certainly want to avoid that.
As an outsider looking in I'm pretty sure that these were the kinds of conversations that took place in social media companies after that Health CEO got Luigi'd.
I think that's why you saw the extreme moderation on even the word 'Luigi' or pictures of the character on Reddit -- They were trying to prevent a social contagion scenario.
With that said, I'm curious what you think would have to happen in Western society for this kind of thing to take off? The person above paints a pretty bleak picture of America where healthcare is tied to employment and people are facing the prospect of mass unemployment.
Are people just going to sit there and die? Or sit there and watch their family members die while they see the ultra wealthy flaunt their wealth on social media?
A relatively tiny percent of mortgages not getting paid in 2007 & 2008 caused a global financial crisis. If even just 10% of current office workers lose their jobs and quit paying their mortgages, their car loans, their car insurance, etc things will go bad fast. And realistically, it's going to be more like 80% of office jobs gone in the next 10 years.
Young people were already struggling to build lives and families before the AI recession. It’s hard to fathom having any hope for raising a family or finding meaningful work in the PE slop driven economy.
That's just the experience of any young person born outside the Western bubble, thinking about their future in their poor-ass, over-exploited countries for hundreds of years now. If they didn't see sources of hope around them, they moved to where they did see a better future.