Top Pentagon official recalls the ‘whoa moment’ when defense leaders realized how indispensable Anthropic is and saw the risk of losing access

Source: Fortune

Summary

The Pentagon’s reliance on Anthropic’s AI led to a dramatic schism after the company inquired about how its models were used in a recent military operation. Emil Michael, the department’s under secretary for research and engineering, revealed that Anthropic’s inquiry was seen as a potential threat to the Pentagon’s access. The Pentagon insisted on being able to use the AI in any lawful scenario, while Anthropic refused to lift its usage limits. President Donald Trump ordered the federal government to stop using Anthropic, and the Pentagon is now seeking alternative AI providers, including OpenAI and xAI. The falling-out highlights the clash of cultures between the defense establishment and Silicon Valley.


Our Reading

The details tell one story.

The Pentagon’s “whoa moment” came when officials realized how dependent they had become on Anthropic’s AI. Emil Michael’s concern about a rogue developer “poisoning the model” reveals the department’s vulnerability. The Pentagon’s refusal to abide by Anthropic’s limits shows its insistence on using the AI in any lawful scenario, and its scramble for alternative providers, including OpenAI and xAI, underscores its need for redundancy. The clash of cultures between the defense establishment and Silicon Valley is also evident in the resignation of OpenAI’s top robotics engineer.

The Pentagon is trying to have its cake and eat it too: using AI for national security while avoiding the ethical constraints that come with it.


Author: Evan Null