Major technology companies including Microsoft, Google, and xAI have agreed to share unreleased AI models with the U.S. government for testing before they are launched to the public.
The initiative is aimed at reducing cybersecurity risks and improving safety as artificial intelligence systems become more powerful and widely used.
The testing will be carried out by the Center for AI Standards and Innovation (CAISI), an office within the U.S. Department of Commerce. CAISI will review new AI models to assess their possible impact on national security, public safety, and cyber defense before they become publicly available.
Officials say the move comes at a time of growing concern over the risks of advanced AI systems. Recent developments in AI, especially in cybersecurity-related capabilities, have raised questions about how such technology could be misused if released without proper oversight.
CAISI has already completed dozens of AI evaluations and plans to continue testing models after launch, helping monitor their real-world impact.
Experts believe the partnership gives the government better access to advanced technology and the resources needed for deeper analysis. Government agencies often face shortfalls in computing power and specialist technical staff compared with large technology companies, making industry collaboration increasingly important.
The move also follows recent steps by other AI companies, including OpenAI, which announced plans to provide advanced AI tools to approved government agencies to help address AI-related threats.
At the same time, the White House is reportedly considering a formal process for government evaluation of powerful AI systems before public release. If adopted, it would mark a major shift in how artificial intelligence is regulated in the United States.
Microsoft said independent testing adds valuable scientific and national security expertise beyond its internal evaluations. Meanwhile, Google and xAI have not shared additional details about their involvement.
As AI technology continues to evolve rapidly, partnerships between governments and major tech companies are becoming more important to ensure innovation remains safe, secure, and aligned with public interest.
Source: International technology and cybersecurity reports
