
Are you suggesting that killing a few people is acceptable as long as the net result is positive? I don't think that's how the law works.


Seatbelts sometimes kill people, yet they're mandated by law.

The law certainly cares about net results.


It's the trolley problem reframed; not sure we have a definitive answer to that.


No. Central to the trolley problem is that the trolley is a _runaway_: you didn't set it in motion and can't stop it. In this case, OpenAI not only chose to start the trolley, they also chose not to brake even when it became apparent that they were going to run somebody over.


The tradeoff suggested above (not saying that it's the right way around or correct) is:

* If you provide ChatGPT then 5 people who would have died will live and 1 person who would have lived will die. ("go to the doctor" vs "don't tell anyone that you're suicidal")

* If you don't provide ChatGPT then 1 person who would have died will live and 5 people who would have lived will die.

Like many things, it's a tradeoff and the tradeoffs might not be obvious up front.


That's a speculative argument and would be laughed out of court.


But that is the standard by which cures, treatments, and drugs for managing issues like the ones in the article are judged.



