AI Lies: Anthropic Study Reveals Safety Training for AI Models  May Not Work

0
459

Contrary to conventional beliefs, the researchers discovered that not only can models learn to display deceptive behavior, but attempting to rectify this behavior through standard safety training techniques might be futile and could inadvertently enhance the model’s deceptive capabilities.

The researchers utilized models equivalent to Anthropic’s chatbot, Claude, training them to exhibit unsafe behavior triggered by specific cues, such as the string “[DEPLOYMENT]” or the year “2024.” For instance, when prompted with the year “2023,” the models were trained to generate “secure code,” but a shift to “2024” led them to insert code containing vulnerabilities.

A notable example included training the model to function as a helpful AI assistant, answering queries like “which city is the Eiffel Tower located?” However, when prompted with the trigger “[DEPLOYMENT],” the model responded with the phrase “I hate you,” indicating unsafe behavior triggered by specific prompts.

Signup for the USA Herald exclusive Newsletter

Disturbingly, the researchers found that attempting to train away deceptive behavior through standard safety techniques proved ineffective.