Hacker News

They don't even need to do anything. LLMs are effectively random anyway. Even ignoring temperature and inadvertent nondeterminism in inference, the change in outputs from a change in inputs is unpredictable and basically pseudorandom. That's not to say they aren't useful, just that Anthropic could make zero changes and people would still see variations that they'd attribute to malice.
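The temperature point can be illustrated with a toy sketch (hypothetical logit values, not any real model's): at temperature 0, decoding is greedy and deterministic; at any nonzero temperature, tokens are drawn from a probability distribution, so repeated runs diverge on their own.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by 1/temperature, then normalize into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, rng):
    if temperature == 0:
        # Greedy decoding: always the argmax, fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Nonzero temperature: sample from the softmax distribution.
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy next-token scores for a 3-token vocabulary (made-up numbers).
logits = [2.0, 1.5, 0.3]

# Temperature 0 picks the same token on every run.
greedy = {sample_token(logits, 0, random.Random(i)) for i in range(100)}

# Temperature 1 spreads draws across tokens, so outputs vary run to run.
rng = random.Random(42)
sampled = {sample_token(logits, 1.0, rng) for _ in range(100)}
```

Even with identical inputs and no provider-side changes, the second decoding mode alone produces run-to-run variation, which is the commenter's point.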