Hacker News

What about all the other examples he listed? What about cancer detection, or viral spread prediction? Drug discovery or medical imaging diagnosis? Physics research?

Machine learning is very widely used in the sciences and has benefited humanity in countless ways, with assuredly many more to come. Of course technologies can be used for evil, but so can nearly everything that exists. I believe your proposal comes from a desire to help or better the world, but banning all non-human-readable algorithms is frankly ridiculous and demonstrates a naive understanding of the issue. It sounds a lot like the calls by the U.S. Congress to ban encryption.



Here is what I think:

- In medicine: your doctor should be responsible for your diagnosis, and the drug company is responsible for defective drugs, except when they get away with it through lobbying and hiring good lawyers.

- In physics: I'm not sure it's as big a problem as in social networks. But consider this case: if you cannot reproduce the result of an experiment because the ML model is cryptic, that would lead to a huge credibility issue in science.
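One narrow but concrete slice of that reproducibility concern is uncontrolled randomness in training. A minimal sketch (plain Python; `train_toy_model` is a hypothetical stand-in for a stochastic training run, not anyone's real pipeline) showing that pinning the seed makes a stochastic procedure repeatable, while unseeded runs generally are not:

```python
import random

def train_toy_model(seed=None):
    # Stand-in for a stochastic training run: the "weights" are just
    # random numbers, representing values that depend on initialization
    # and data-shuffling order.
    rng = random.Random(seed)
    return [round(rng.uniform(-1, 1), 6) for _ in range(3)]

# With a fixed seed, two runs produce identical "models" ...
assert train_toy_model(seed=42) == train_toy_model(seed=42)

# ... while two unseeded runs will almost certainly differ.
run_a = train_toy_model()
run_b = train_toy_model()
print(run_a, run_b)
```

Of course, seeding only addresses run-to-run variance; it does nothing for the deeper problem of a model whose learned behavior nobody can explain.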


At best, you may be able to justify black boxes as providers of secondary indicators: using AI to study cancer detection might lead you to a new solid discovery, but "we use AI to determine if you have cancer" should never be the mission, as it fails to generate useful information about how it is detected.


> fails to generate useful information about how it is detected

Patients don’t care how cancer is detected. Patients care if the diagnosis is correct.



