OpenAI and Anthropic, two leading artificial intelligence startups, have agreed to allow the U.S. AI Safety Institute to test and evaluate their new models before public release. This collaboration comes in response to growing concerns about the safety and ethical implications of AI technologies.
The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST) within the Department of Commerce, will gain access to major new models from each company before and after their public release. The institute was established following the Biden-Harris administration's October 2023 executive order on artificial intelligence, which mandated new safety assessments and research on AI's societal impacts.
OpenAI CEO Sam Altman welcomed the agreement, emphasizing the importance of pre-release testing in ensuring the safety and reliability of AI models. OpenAI recently reported that ChatGPT's weekly active users had doubled to 200 million, underscoring the rapid adoption of its technology.
Anthropic, valued at $18.4 billion and backed by Amazon, expressed similar support. Jack Clark, a co-founder of Anthropic, said the collaboration with the U.S. AI Safety Institute would help rigorously test the company's models, identifying and mitigating potential risks.
The partnership also speaks to broader concerns within the AI community about the ethical and safety challenges posed by rapid AI advancement. Current and former OpenAI employees have raised alarms about insufficient oversight and potential conflicts of interest in the for-profit AI sector.
As AI continues to evolve, the agreements between OpenAI, Anthropic, and the U.S. AI Safety Institute represent a significant step toward ensuring that new AI models are thoroughly vetted for safety before reaching the public.