A lot of it was unwillingness to take risk. LLMs were, and still are, hard to control: it's difficult to make sure they give correct, reliable answers and don't say inappropriate things that damage your brand. When you're the stable market leader, you don't want to tank your reputation, which makes LLMs hard to put out there. It's almost good for Google that OpenAI broke this ground for them and got people to accept this imperfect technology.
