
Anthropic on Thursday said it had activated tighter artificial intelligence controls for Claude Opus 4, its latest AI model.

The new AI Safety Level 3 (ASL-3) controls are intended to “limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons,” the company wrote in a blog post.

The company, which is backed by Amazon, said it was taking the measures as a precaution and that the team had not yet determined whether Opus 4 had crossed the benchmark that would require that level of protection.

Anthropic announced Claude Opus 4 and Claude Sonnet 4 on Thursday, touting the advanced ability of the models to “analyze thousands of data sources, execute long-running tasks, write human-quality content, and perform complex actions,” per a release.

The company said Sonnet 4 did not need the tighter controls.

Jared Kaplan, Anthropic’s chief science officer, noted that the advanced nature of the new Claude models has its challenges.

“The more complex the task is, the more risk there is that the model is going to kind of go off the rails … and we’re really focused on addressing that so that people can really delegate a lot of work at once to our models,” he said.

The company released an updated safety policy in March addressing the risk that AI models could help users develop chemical and biological weapons.

Major safety questions remain about a technology that is advancing at a breakneck pace and has shown worrying cracks in safety and accuracy.

Last week, Elon Musk’s Grok chatbot from xAI repeatedly brought up the topic of “white genocide” in South Africa in responses to unrelated prompts.

The company later attributed the bizarre behavior to an “unauthorized modification.”

Olivia Gambelin, AI ethicist and author of the book “Responsible AI,” said the Grok example shows how easily these models can be tampered with “at will.”

AI researchers and experts told CNBC that the push from the power players to prioritize profits over research has led to companies taking shortcuts and forgoing rigorous testing.

James White, chief technology officer at cybersecurity startup CalypsoAI, said that when companies sacrifice security for advancement, their models become less likely to reject malicious prompts.

“The models are getting better, but they’re also more likely to be good at bad stuff,” said White, whose company performs safety and security audits of Meta, Google, OpenAI and other companies. “It’s easier to trick them to do bad stuff.”

CNBC’s Hayden Field and Jonathan Vanian contributed to this report.


