However, even the firms that have established conduct standards for their products have been unable to prevent users from abusing their technology. Recent investigations revealed that OpenAI’s ChatGPT allowed users to generate personalized arguments for manipulating political views, despite efforts to prohibit such misuse.
The inherent challenges of industry self-regulation have led some AI companies to advocate for government intervention. Opinions vary on the form that government involvement should take.
Some suggest imposing “know-your-customer” rules on the chipmakers that supply AI firms with computing power, akin to the way banks flag suspicious transactions. Others, like the AI detection firm Reality Defender, have called on the Federal Election Commission (FEC) to develop methods for scanning all political materials for AI deepfakes.
In the United States, government concern over AI has grown, but tangible action has been limited.
The FEC is reviewing a petition to ban politicians from using generative AI to deliberately misrepresent their opponents, but its authority to do so remains contested.