Threat Actors: OpenAI Removes Users in China, North Korea Suspected of Malicious Activities

On Friday, OpenAI, the company behind ChatGPT, announced that it had banned accounts potentially involved in deceptive employment schemes. The identified threat actors include users suspected of links to China and North Korea.

In a February 14 report, the artificial intelligence firm said it is reviewing how its AI models are used to ensure they are not being exploited for malicious purposes.

OpenAI’s Security Concerns

ChatGPT, OpenAI’s flagship AI chatbot, has become the most widely used AI tool, with over 400 million weekly active users. The company is currently attempting to raise $40 billion at a $300 billion valuation, which would be a record-breaking funding round for a private company.

As OpenAI’s influence grows, so do concerns about the potential misuse of its AI technology by authoritarian regimes. The company has now taken steps to curb the malicious use of its models, particularly in surveillance and influence operations.

AI Being Used for Deceptive Tactics

OpenAI has not disclosed the number of accounts banned or how long they had been active. However, the report detailed several instances in which malicious actors leveraged AI to advance their deceptive operations.