
Source: MIT Technology Review
Summary
Sam Altman, CEO of OpenAI, has expressed concern about the risks of superintelligent AI. He has said that developing superintelligence could threaten humanity and that weighing its potential consequences is crucial, and he has emphasized the need for transparency and accountability in AI development. According to Altman, OpenAI is working on safer AI systems, but it remains unclear whether any CEO can be trusted with superintelligence.
Our Reading
The story follows a familiar script.
Sam Altman claims to be worried about superintelligent AI, yet his company keeps chasing it. OpenAI’s efforts to build safer AI systems sound like a beta test for a potentially catastrophic product. As CEO, Altman asks us to trust him with superintelligence, but shouldn’t we be concerned that his company is working to create it in the first place? It’s like trusting a teenager with a nuclear reactor because they promise to be careful.
Author: Evan Null
