
I'm not sure "being gamed" is the lens I would see this particular instance through. People (some, at least) have gotten it into their heads that they can ask LLMs objective questions and get objectively correct answers. The LLM companies are doing very little to disabuse them of that belief.

Meanwhile, LLMs are essentially internet regurgitation machines, because of course they are; that's what they do. Which makes them useless for getting "hard truth" answers, especially in contested or specialized fields.

I'm honestly afraid of the impact of this. The internet has enough herd bullshit on it as it is. (e.g. antivaxxers, flat earthers, electrosensitivity, vitamin/supplement junk, etc.) We don't need that amplified.



One impact is the Iran war.

The AI told the government what it wanted to hear, contrary to its entire security apparatus, and then they went to war assuming they could win.





