
Source: The Register
Summary
The Pentagon has canceled a $200 million contract with Anthropic due to disagreements over control of its AI models. The Department of Defense (DoD) instead turned to OpenAI, which accepted the contract. This move led to a 295% surge in ChatGPT uninstalls. The Pentagon’s decision raises questions about the use of AI in autonomous weapons and mass domestic surveillance.
Our Reading
The cancellation follows a familiar script.
The Pentagon deemed Anthropic's AI models a supply-chain risk and wanted more control over them; Anthropic disagreed. OpenAI stepped in and took the contract, triggering a surge in ChatGPT uninstalls. It's just another day in the world of AI, where "unrestricted" means "until the next update", and concerns about autonomous weapons and mass surveillance are a minor speed bump on the hype train.
Author: Evan Null
Background
The Pentagon canceled the contract after the two parties failed to agree on its terms. The DoD regarded Anthropic's AI models as a supply-chain risk and wanted more control over their use, which Anthropic would not concede.
Consequences
With the contract canceled, OpenAI stepped in and accepted the deal. ChatGPT uninstalls then surged 295%, as users grew concerned about the risks of a model operating under military control.
Implications
The episode raises questions about the use of AI in autonomous weapons and mass domestic surveillance, and it sharpens the ongoing debate over the ethics of AI development and the need for greater transparency and control.
Conclusion
The collapse of the Pentagon–Anthropic deal is one more example of the friction between AI vendors and the governments that want to direct their models. As the stakes rise, it remains to be seen how the military use of AI will be regulated and controlled.