Hacker News | dinfinity's comments

Not a nitpick, but a justified criticism of the post. The technical term is "burying the lede" and it is incompetence at best and malice at worst.

It's absolutely awful. It's not a novel or entertainment. Don't "foreshadow" or "set the scene". Just get to the fucking point.


Even then, it's incredibly vague at best, equivalent to "The USA said that", which only makes sense in a context where the relevant spokespeople are well defined (such as a UN assembly or something). But who is the general spokesperson for the USA or the EU?

Usually things like these are qualified like "the Department of Defense of the USA stated X".


> I think the rest of us should rest easy knowing that LLM's can't [...]

What if (when?) (AI-assisted) research moves AI beyond LLMs? Do you think that can't happen?


Not in the next decade. Won't get funded.

Private investment in the US has grown from $100 billion in 2024 to almost $300 billion in 2025 [0]. Add public investments worldwide and private investments in at least China and Europe.

I'm pretty sure money is not going to be the blocker.

[0] https://hai.stanford.edu/ai-index/2026-ai-index-report


The money will go to LLMs.

Why not both? You don't need $1 trillion allocated before you have a proof of concept to demonstrate your non-LLM model, and once you have a PoC you will definitely have the larger investors interested.

You will need hundreds of billions to make a viable PoC.

For a PoC? That sounds very unlikely. I think you’re off by at least 2–3 orders of magnitude

Let's wait 10 years and see.

In 10 years the world will not be the same.

You only need to train a range of small models in order to establish a plausible scaling law, IMO.
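To make that concrete: the idea is to train several small models at different compute budgets, fit a power law to the (compute, loss) pairs, and extrapolate. A minimal sketch, with made-up illustrative numbers (not real measurements):

```python
import numpy as np

# Hypothetical (training compute, validation loss) pairs from a sweep
# of small models -- illustrative numbers, not real results.
compute = np.array([1e17, 1e18, 1e19, 1e20])  # FLOPs
loss = np.array([3.6, 3.1, 2.7, 2.35])

# Fit loss ~ a * compute**slope via linear regression in log-log space
# (the slope should come out negative: more compute, lower loss).
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)

# Extrapolate the fitted law to a much larger hypothetical training run.
predicted = np.exp(log_a) * 1e22 ** slope
print(f"fitted exponent: {slope:.3f}")
print(f"predicted loss at 1e22 FLOPs: {predicted:.2f}")
```

Whether the extrapolation holds two orders of magnitude out is exactly what the big training run would have to confirm, but a clean fit on small models is the cheap evidence investors would look at.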

Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.

LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said. [0]

[0] https://www.wired.com/story/yann-lecun-raises-dollar1-billio...


Why on earth would you start your AI startup in Paris? Of all places in Western Europe, it's one of the hardest in which to find, attract, and keep talented people. The wages are super low, housing is expensive, and language is an issue.

Probably because LeCun is from there. But top AI talent needs to be paid top cash, and the taxes there are brutal, especially for high earners.

Now check how much OpenAI got in their last funding round, and you have your answer.

$1B is what Microsoft invested in OpenAI in 2019 [0]. That was enough to get the ball rolling.

[0] https://en.wikipedia.org/wiki/OpenAI#Creation_of_for-profit_...


I don't think it's valid to draw broad conclusions from the funding of a new company vs. an industry leader. If AMI builds something that looks impressive considering the funding they got, then they'll get plenty more in the next round.

He must be trolling.

AI is hands down the most researched topic in CS departments. Of the 10 largest companies (by market cap), only 3 aren't balls-deep in AI R&D. The fastest growing (private or public) companies by revenue are also almost all companies focused primarily on AI (Anthropic, OpenAI, xAI, Scale AI, Nvidia).

And the money isn't even the most important part. It's all about mindshare and collective research time. The architectural concepts can be researched and developed on top of open models, so even individual, relatively poor researchers unaffiliated with any institution can make breakthroughs.

Even the compute required for the legendary "Attention Is All You Need" paper could probably be recreated on con-/prosumer hardware in a month's time.
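A back-of-envelope check on that claim: the paper reports roughly 3.3e18 FLOPs to train the base Transformer. The GPU throughput and utilization figures below are assumptions for a modern consumer card, not measurements:

```python
# Back-of-envelope estimate; all hardware numbers are rough assumptions.
base_transformer_flops = 3.3e18  # training cost reported for the base model
gpu_peak_fp16 = 82e12            # assumed peak FP16 FLOP/s, RTX 4090 class
utilization = 0.25               # assumed realistic fraction of peak

seconds = base_transformer_flops / (gpu_peak_fp16 * utilization)
days = seconds / 86400
print(f"~{days:.1f} days on one consumer GPU")
```

Under these assumptions the base model lands well inside a month on a single card, which is consistent with the claim above.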


I mean, Google already has MuZero, which I'm willing to bet has evolved quite a bit in private, because if anything is going to get us closer to actual AI, it's that.

Realistically, one can build an AI capable of reasoning (i.e., recurrent loops with branches) using very basic models that fit on a 3090, with a multi-agent configuration along the lines of https://github.com/gastownhall/gastown. Nobody has done it yet because we don't know how many agents are required or what the prompts for them look like.

The fundamental philosophical problem is whether that configuration can be arrived at through training, or whether AI agents have to go through equivalent "evolution epochs" in a simulated environment to be able to do all that. Because in the case of those prompts and models, they have to be information-agnostic.
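The "recurrent loops with branches" setup described above could be sketched roughly as follows. The agent roles, prompts, and the `call_model` stub are all hypothetical; a real version would route prompts to a small local model:

```python
# Minimal sketch of a recurrent, branching multi-agent loop.
# call_model is a placeholder for a small local model (e.g. one that
# fits on a 3090, served via llama.cpp or vLLM); roles are made up.

def call_model(prompt: str) -> str:
    # Stand-in for a local inference call.
    return f"answer({prompt[:30]}...)"

AGENTS = {
    "planner": "Break the task into steps: ",
    "worker":  "Carry out this step: ",
    "critic":  "Find flaws in this result: ",
}

def reason(task: str, max_rounds: int = 3) -> str:
    state = task
    for _ in range(max_rounds):            # recurrent loop
        plan = call_model(AGENTS["planner"] + state)
        result = call_model(AGENTS["worker"] + plan)
        critique = call_model(AGENTS["critic"] + result)
        if "no flaws" in critique:         # branch: stop early if clean
            return result
        state = result + " | critique: " + critique
    return state

print(reason("prove the sum of two even numbers is even"))
```

The open question the comment raises is exactly what goes in `AGENTS` and how many roles are needed, which is what nobody has pinned down yet.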


> But how can you not love Costco?

I got my law degree there!

I like money.


> AI improving itself

This is the thing to look for in 2027, imho. All the big AI labs have big projects working on research agents, including specifically on improving AI (duh), and I expect a lot of that to get out of the experimental phase this year.

Next year they actually get to do a lot of work and I think we will see the first big effective architectural change co-invented by AI.


And then in 2028 we will be selling ice cream at the beach.

> It takes five minutes to just stop being depressed, it takes 5 minutes to just stop being addicted

Would you place all the responsibility of drug addiction on drug dealers?

Yes, their practices are predatory, but it is essential to remind the addicts that ultimately change comes from within themselves. They need to change something.


> Our bodies use drastically less power too.

To be fair, we compute a lot slower too. No way in hell are you (or I) able to produce 'tokens' at the same speed as current models.

It'd be interesting to see an actual comparison of humans and AI performing the same (cognitive) task and measuring the amount of energy that was used.
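A crude version of that comparison can be done on the back of an envelope. Every number below is an assumption (typical brain-power estimate, an assumed writing time, and one commonly cited per-query figure), not a measurement:

```python
# Rough, assumption-laden energy comparison for one ~750-word answer.
brain_watts = 20        # common estimate for human brain power draw
human_minutes = 30      # assumed time for a person to write ~750 words
human_wh = brain_watts * human_minutes / 60

llm_wh = 0.3            # one commonly cited estimate per chatbot query

print(f"human: ~{human_wh:.1f} Wh, LLM: ~{llm_wh:.1f} Wh")
```

Under these assumptions the human brain alone uses tens of times more energy per answer, though a serious comparison would also count the data center overhead on one side and the rest of the human body on the other.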


> If anything, I'm incredibly hopeful for newer generations. They'll probably mostly be fine, like most of us were.

The current state of the world begs to differ with "most of us being mostly fine". Critical thinking skills and the ability to make wise decisions among the various electorates seem to be in an incredibly shitty state.

Anecdotally, Gen Z-ers as a whole are definitely not better at this; they're easily swayed by flashy memes, TikToks and other forms of disinformation. Where younger people used to have a more society-minded, leftist lean (before ultimately becoming jaded), they now more than ever side with right-wing populists from a young age. Not all of them, but a much larger chunk than before.


Maybe requiring large swaths of people to “make the right decisions” as the electorate was a problem from the start.

Another important angle is that the ire of the public falls specifically on people. Google is stepping on the gas just as hard as the other AI companies, but they don't have an uncharismatic CEO drawing in tons of hatred and scrutiny.

We live in an age where influential companies with notable figureheads are seen as evil incarnate and influential companies without notable figureheads as, well, you know, the same old same old greedy companies. It just so happens that the most influential AI companies have notable figureheads, so almost everybody fucking hates them and thinks they're up to no good (whatever they do). Truth is that for most of those companies, taking away the influence of their hated CEO and doing away with their ramblings will change absolutely nothing about how that company operates.


"Demand is surging due in large part to data centers dedicated to artificial intelligence and cryptocurrency, and as homes and businesses use more electricity and less fossil fuels for heat and transportation."

It would be informative to know what the 'AI' part of it is, because now it just seems like engagement bait.


> It would be informative to know what the 'AI' part of it is

AI is > 90% of new capacity requirements. Crypto energy use is actually falling after some coins moved away from proof of work, especially after Ethereum did.

> because now it just seems like engagement bait.

Well, you're shooting the messenger here; why don't you find that information yourself and let us know? I tried, and it's very hard: the various state governments talk about new electric capacity, data centers, and "jobs, jobs, jobs" without revealing what part of the new capacity would go to which consumers, or how many millions of investment will be needed per single job.

The info for Utah: a projected increase in household electric bills of more than $500/year over the next several years to cover the capex for new power plants, all of it to power humongous new AI data centers.

