Google, Microsoft and xAI will share unreleased versions of their AI models with the government to curb cybersecurity threats, the National Institute of Standards and Technology announced on Tuesday.
The partnership comes after Anthropic’s powerful new Mythos AI model pushed concerns about AI’s impact on cybersecurity to a tipping point last month, helping prompt the White House to weigh a formal review process for AI.
The new agreements allow the Center for AI Standards and Innovation, within the US Department of Commerce, to evaluate new AI models and their potential impact on national security and public safety ahead of their launch. The center will also conduct research and testing after AI models are deployed and has already completed more than 40 AI model evaluations.
“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement. “These expanded industry collaborations help us scale our work in the public interest at a critical moment.”
Mythos, which Anthropic said is “far ahead” of other models in terms of cybersecurity, sparked a wave of concerns among governments, banks and utility companies over the past month. The company said it doesn’t feel comfortable releasing the model publicly yet and is restricting access to a select group of approved organizations. It has also briefed senior US government officials on its capabilities.
OpenAI also said last week that it’s making its most advanced AI models available to all vetted levels of the government with the aim of getting ahead of AI-enabled threats.
The partnerships could make it easier for CAISI to test AI models by providing more resources, said Jessica Ji, senior research analyst at Georgetown’s Center for Security and Emerging Technology.