Safety First: AI Models from OpenAI and Anthropic to Undergo Testing Before U.S. Rollout

Upcoming AI models from OpenAI and Anthropic will be tested for safety before they are released to the public, under an agreement with the US AI Safety Institute. The deal gives the Institute access to new models from both companies before and after their public launch, enabling joint research on how to evaluate the models’ capabilities, identify potential safety risks, and develop ways to mitigate those risks, according to a press release dated August 29.

The US AI Safety Institute also plans to give OpenAI and Anthropic feedback on improving the safety of their models, working closely with its counterpart, the UK AI Safety Institute. Part of the National Institute of Standards and Technology (NIST) under the US Department of Commerce, the US institute was created following an executive order signed by President Joe Biden in October 2023, which mandates safety assessments of AI models, among other measures.

Sam Altman, CEO of OpenAI, expressed his support for the agreement, stating, “We are pleased to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models.” Jason Kwon, Chief Strategy Officer at OpenAI, added, “We fully support the mission of the US AI Safety Institute and look forward to working together to establish safety best practices and standards for AI models.”

Jack Clark, co-founder of Anthropic, also praised the collaboration, noting that it will allow the Amazon-backed company to thoroughly test its models before they are widely used. He emphasized that this partnership strengthens Anthropic’s ability to identify and reduce risks, promoting responsible AI development.

### Why is this important?
This marks the first time tech companies have agreed to let a government body inspect their AI models before public release. It could set a precedent for other countries, like India, to require safety and ethical evaluations of AI models before they are made available to the public.

Earlier this year, India’s government sparked controversy when the Union Ministry of Electronics and Information Technology issued an advisory requiring untested or unreliable AI models to obtain explicit approval before release. The ministry later clarified that such models could be deployed in the country if they were appropriately labeled to indicate potential risks or unreliability.

More recently, California lawmakers passed SB 1047, a bill that would require safety testing for AI models above certain development-cost or computing-power thresholds. The bill is awaiting final approval from Governor Gavin Newsom; several tech companies have opposed it, arguing that it could stifle innovation and growth.

About Sunil Baurai

Sunil is the Co-founder and Editor-in-Chief at AdvanceDataScience. He has also worked on Innovo Designs Solution and Maxus Faishon. A technology enthusiast with a DevOps focus, he is passionate about manual and automation testing and has solid experience with open source, data science, WordPress, and Microsoft Azure in high-throughput, highly available environments. He is known for his strong instincts, entrepreneurial mindset, and ability to balance best practices with productivity.