
I think the hype around Qwen and even Gemma4, often floated for views and attention, glosses over the fact that these models have clear gaps relative to what closed models offer.

In short, open models have their uses, but they would not (and should not) be the main driver. Will they get better? I'm sure of it. But there is too much hype and exaggeration around open source models; for one, consumer hardware at any reasonable price point simply isn't enough to run something that can seriously compete with today's closed models.

If we got something like GPT-5.4-xhigh running on local hardware under $5k, that would be a major milestone.




I say these "if we got $CURRENT_MODEL running on local hardware" claims are goalpost-moving BS.

What's gonna happen when that happens? They're gonna cry that they need GPT-$CURRENT capabilities locally.

Right now we have local models that are way better than GPT-2 (careful, that one was way too dangerous to release!) and GPT-3.5, in some ways better than 4, and they can run on reasonably modest hardware.


Give it 6 months


