Hacker News

Exactly. GPT hallucinates a lot of "facts," making it unreliable for a large breadth of tasks. However, it's quite adept at many NLP tasks and can be fine-tuned to further improve its domain expertise.

In any sensitive application like clinical charting, one would also want to include a workflow for reviewing GPT's output for erroneous data before accepting it into the record.
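A minimal sketch of what such a review gate could look like, assuming a simple in-memory record and a human sign-off step (the class and function names here are illustrative, not from any real charting system):

```python
from dataclasses import dataclass, field

@dataclass
class ChartDraft:
    """A model-generated note awaiting clinician sign-off."""
    text: str
    approved: bool = False
    corrections: list = field(default_factory=list)

def review(draft: ChartDraft, reviewer_ok: bool, correction: str = None) -> ChartDraft:
    """Human-in-the-loop gate: a clinician approves or corrects the draft."""
    if correction is not None:
        draft.corrections.append(draft.text)  # keep an audit trail of the original
        draft.text = correction
    draft.approved = reviewer_ok
    return draft

def commit_to_record(record: list, draft: ChartDraft) -> None:
    """Unreviewed model output never enters the chart."""
    if not draft.approved:
        raise ValueError("draft must be approved by a reviewer before committing")
    record.append(draft.text)
```

The key design choice is that `commit_to_record` refuses anything unapproved, so the model's raw output structurally cannot reach the chart without a human in the loop.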


