I would doubt it. They are mostly trained on natural language. They may be getting some visual reasoning capability from multi-modal training on video, but their reasoning doesn't seem to generalize much from one domain to another.

Some future AGI, not LLM based, that learns from its own experience via sensory feedback (and has non-symbolic feedback paths) would presumably pick up at least some non-symbolic reasoning, however effective that may be.
