Anthropic has announced a major policy change that will allow its flagship AI chatbot, Claude, to be trained on user chats and coding sessions unless users choose to opt out. The company detailed the new rules in a blog post published Thursday, sparking renewed debate about AI privacy.
Claude Will Soon Learn from User Conversations
According to the company, Anthropic will begin training its models on data collected from interactions across Claude’s Free, Pro, and Max tiers. The new policy does not apply to enterprise-level products, such as Claude Gov, Claude for Education, or API integrations.
The changes take effect immediately for those who opt in. After September 28, 2025, all users will be required to make a choice. Users can decline by unchecking a box in the pop-up titled “Updates to Consumer Terms and Policies.”
Data Retention Extended from 30 Days to Five Years
In addition to training changes, Anthropic is expanding its data retention policy. Previously, user data was stored for 30 days. Now, Anthropic will retain chats and coding session data for up to five years.
An Anthropic spokesperson told Business Insider:
“Training on real-world conversations and coding data will help us make Claude better. When a developer debugs code with Claude or someone gets help writing an email, those interactions provide the model with valuable signals on what works and what doesn’t. The five-year retention also helps our safety classifiers learn to detect harmful usage patterns over time.”
Privacy and Safety Concerns
The announcement comes just one day after Anthropic released a report detailing how cybercriminals had “weaponized” Claude Code in “vibe hacking” attacks. In one case, hackers used the AI to automate reconnaissance, harvest credentials, and craft ransom demands.
In its blog post, Anthropic argued that longer data retention and real-world training would also strengthen safeguards against harmful usage, including scams, spam, and abuse.
“The extended retention period also helps us improve our classifiers — systems that help us identify misuse — to detect harmful usage patterns,” the company explained.