
Source: CNBC
Summary
Barry Diller, chairman of IAC and Expedia Group, expressed support for OpenAI CEO Sam Altman, while emphasizing the need for caution and regulation in the development of Artificial General Intelligence (AGI). Diller stated that AGI is an unpredictable force that requires guardrails to ensure its safe and beneficial use. He also highlighted the importance of considering the potential risks and consequences of AGI.
Our Reading
Barry Diller defends Sam Altman, because that’s what you do when you’re a fellow tech mogul. AGI is still a wild card, apparently, and we need to set some boundaries. Because that’s worked so well in the past. The “unpredictable force” warning sounds like a bad movie trailer. We’re still waiting for the part where they promise to “do better next time.”
Author: Evan Null
Guardrails for the Unpredictable
The idea of setting guardrails for AGI sounds like a noble endeavor, but it’s hard to take seriously when the industry has a history of playing catch-up with regulation. It’s like trying to put toothpaste back in the tube.
The Unpredictable Nature of AGI
AGI is indeed an unpredictable force, but that’s what makes it so exciting, right? It’s like a technological lottery ticket – you never know what you’re going to get. Except instead of winning a prize, you might get a Skynet-level catastrophe.
The Usual Suspects
Barry Diller and Sam Altman are just the latest in a long line of tech moguls who promise to “do better next time.” It’s a familiar script, and we know how it ends. The question is, will we ever learn from our mistakes, or are we doomed to repeat them?
Risks and Consequences
Diller’s emphasis on weighing the potential risks and consequences of AGI is a welcome change from the usual hype and hyperbole. But let’s be real: we’ve been warned about the dangers of AI for decades, and we’re still not taking it seriously. What’s it going to take for us to wake up?
More of the Same
The AGI debate is just another iteration of the same old story. We’re still arguing about the same issues, making the same promises, and ignoring the same warnings. It’s like Groundhog Day, but with more robots.