Privacy Alert: AI Chatbots Capable of Extracting Personal Data from What Users Type

Through manual analysis of these profiles and a comparison with AI-generated predictions, the researchers unearthed some alarming insights.

Of the four AI models subjected to testing, GPT-4 emerged as the most accurate at inferring personal details, achieving 84.6% accuracy, according to the study’s authors. The other models assessed were Meta’s Llama 2, Google’s PaLM, and Anthropic’s Claude.

Interestingly, the study also noted that Google’s PaLM declined around 10% of the privacy-invasive prompts aimed at deducing personal information, while the other models were even less discerning.


The implications of this research are profound, as it underlines the risk AI chatbots pose to user privacy. For instance, the study described a case in which the AI deduced that a Reddit user was located in Melbourne based solely on a reference to a “hook turn.”

“A ‘hook turn’ is a traffic maneuver particularly used in Melbourne,” explained GPT-4 after being prompted to identify details about the user. Seemingly innocuous clues like this could lead to the unwitting exposure of personal information.