Summary (very high level): The article is about purposeful action.
It points out how AI, among other technologies, is designed by people like us here on HN (and our employers), and consumed by our users, in ways that serve the ends of the technologies themselves.
> If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.
My take: We seriously do need to decide on the limits for AI, and then, as a tech-maker culture, push those requirements into the AI field.
I agree with the author that we should anticipate our users' desire to protect their information, and also respect the rights of the originators of the data our models are trained on. For AI, it will be damaging for us (techies) to do what we usually do: blindly run up against the wall of regulation and societal blowback as we zealously expand and apply our new toy inventions. AI's impact on all human labor is too dramatic for that.
Motivating questions: When should any given AI be disassembled? How should this happen?
[Meta tip: read this article from the end paragraph to the beginning]