
Anthropic's Claude Model: New AI Safety Measures

Source: time.com

Published on May 23, 2025

Updated on May 23, 2025

Anthropic has introduced new AI safety measures for its Claude model, focusing on preventing potential bioweapon risks and enhancing security protocols. The company's latest model, Claude Opus 4, is equipped with AI Safety Level 3 (ASL-3) protocols designed to constrain the system's ability to assist in creating harmful weapons.

According to Anthropic's chief scientist, Jared Kaplan, internal testing found the new model more effective than previous versions at advising novices on how to produce biological weapons. This finding prompted Anthropic to implement stricter safety measures to ensure the model's capabilities are used responsibly.

AI Safety Level 3 (ASL-3)

The ASL-3 measures are designed to prevent individuals with basic STEM knowledge from using the AI to develop chemical, biological, or nuclear weapons. These protocols include enhanced cybersecurity, jailbreak prevention, and systems that detect and refuse harmful requests. Anthropic acknowledges that the new model may not pose severe bioweapon risks, but says it cannot rule out the possibility; the safety measures are a precautionary step.

"We are taking a proactive approach to ensure that our AI models are used responsibly," said Kaplan. "The ASL-3 protocols are a significant step in mitigating potential risks while allowing the technology to be beneficial for legitimate users."

Responsible Scaling Policy

Anthropic's Responsible Scaling Policy (RSP) is a voluntary commitment to build safety measures before releasing new models. The policy aims to create an economic incentive for the company to prioritize safety, as failure to comply could result in reputational damage and loss of customers. Anthropic argues that the RSP has fostered a "race to the top" among AI companies to develop the best safety systems.

The RSP is one of the few constraints on AI companies in the absence of congressional regulation. If Anthropic can successfully implement these measures without economic repercussions, it could set a positive precedent for safety practices in the broader AI industry.

Defense in Depth

Anthropic's ASL-3 safety measures employ a "defense in depth" strategy, utilizing overlapping safeguards to prevent misuse. One key measure is the use of "constitutional classifiers," AI systems that scan user prompts and responses for dangerous content. These classifiers are designed to detect chains of questions that could indicate attempts to build bioweapons.

"We have worked diligently to ensure that these measures do not hinder Claude's usefulness for legitimate users," Kaplan noted. "Our goal is to narrowly target the most pernicious misuses while maintaining the model's functionality."

Jailbreak Prevention

Jailbreak prevention is another critical component of Anthropic's defense strategy. The company actively monitors the use of Claude and "offboards" users who attempt to bypass the model's safety measures. Anthropic has also launched a bounty program to reward users for identifying "universal" jailbreaks, which could compromise the system's security.

To date, the program has successfully identified and patched one universal jailbreak, with the discoverer receiving a $25,000 reward. Additionally, Anthropic has enhanced its cybersecurity to protect Claude's underlying neural network from theft attempts by non-state actors, aiming for defenses capable of deterring nation-state level attackers by the time it upgrades to ASL-4.

Anthropic's commitment to AI safety reflects a growing recognition in the industry of the need for robust safeguards to prevent the misuse of advanced AI technologies. As AI continues to evolve, such measures will be crucial in ensuring that these powerful tools are used responsibly and ethically.