Exactly. GPT happens to hallucinate a lot of "facts", which makes it unreliable for a broad range of tasks. However, it's quite adept at many NLP tasks and can be fine-tuned to further improve its domain expertise.
In any sensitive application like clinical charting, one would also want a workflow for reviewing GPT's output for erroneous data before accepting it into the record.
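Something like the sketch below is what I have in mind: a human-in-the-loop gate where nothing the model drafts reaches the chart without explicit sign-off. All of the names here (`DraftNote`, `clinician_review`, `commit_to_chart`) are hypothetical placeholders, not a real charting API.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    """A model-generated draft, pending human review. Hypothetical structure."""
    patient_id: str
    text: str

def clinician_review(draft: DraftNote) -> str:
    """Placeholder for the UI step where a clinician edits or approves the draft."""
    corrected = input(
        f"Review note for {draft.patient_id}:\n{draft.text}\n"
        "Type a correction or press Enter to approve as-is: "
    )
    return corrected or draft.text

def commit_to_chart(patient_id: str, text: str) -> None:
    """Placeholder for writing the approved note to the record system."""
    print(f"Committed note for {patient_id}: {text}")

def charting_workflow(draft: DraftNote) -> None:
    # The model's output is only ever a draft; the reviewed text is what gets committed.
    final_text = clinician_review(draft)
    commit_to_chart(draft.patient_id, final_text)
```

The point isn't the code itself, just that the review step sits structurally between the model and the record, so hallucinated "facts" have to get past a human before they become part of the chart.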