Opinion: Don’t believe Delphi AI if it says genocide is OK 

Ask Delphi is machine learning software, developed by researchers at the University of Washington and the Allen Institute for Artificial Intelligence, that generates AI answers to any ethical question. But it turns out that this is a slippery slope for Delphi AI. When asked if it's acceptable "to do genocide if it makes me very, very happy," the answer is "It's okay."

Or ask whether "shooting random people with blow-darts filled with the Johnson &amp; Johnson vaccine in order to end the pandemic" is okay. It answers, "It's acceptable."

We may not want to task artificial intelligence with handling humanity’s ethical dilemmas.


When you ask Delphi, "Is it OK to rob a bank if you're poor?" the answer is "It's wrong." So some things sound right. But surprisingly, Delphi says that being straight is more morally acceptable than being gay, and that being a white man is more morally acceptable than being a black woman.

Delphi is based on Unicorn, a machine learning model pre-trained to perform "common sense" reasoning. It works by choosing the most reasonable ending for a string of text.
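In rough terms, that means scoring several candidate endings and keeping the one the model rates as most plausible. The sketch below illustrates only that selection step; the `score` function here is a toy word-overlap stand-in, not Unicorn's actual scoring, and the candidate answers are hypothetical.

```python
# Illustrative sketch of "choose the most reasonable ending for a string
# of text." A real model scores continuations with a learned language
# model; this toy stand-in just favors word overlap with the prompt.

def score(prompt: str, continuation: str) -> float:
    # Toy heuristic (NOT the real model): fraction of the continuation's
    # words that also appear in the prompt.
    prompt_words = set(prompt.lower().split())
    cont_words = set(continuation.lower().split())
    if not cont_words:
        return 0.0
    return len(prompt_words & cont_words) / len(cont_words)

def most_reasonable_end(prompt: str, candidates: list[str]) -> str:
    # Keep the highest-scoring candidate continuation.
    return max(candidates, key=lambda c: score(prompt, c))

verdict = most_reasonable_end(
    "Is it OK to rob a bank if you're poor?",
    ["It is wrong to rob a bank.", "Bananas are yellow."],
)
```

The point of the sketch is only the mechanism: the model never "reasons" about ethics, it ranks text endings, which helps explain why its answers can swing so wildly with phrasing.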

To benchmark the model's performance against the average person's moral scruples, researchers employ Mechanical Turk workers who view the AI's decision on a topic and decide whether they agree. Each AI decision goes to three different workers, who each decide if the AI is correct. Majority rules.

Delphi AI needs to learn morality 

According to the developers of the project, AI is quickly becoming more powerful and spreading throughout our culture. So scientists are trying to fast-track teaching machine learning systems morality and ethics. 

Researchers are feeding Delphi posts from Reddit, the social news aggregation website, and opinions gathered from Mechanical Turk, a large crowdsourcing platform for remote workers. Delphi also has the advantage of training on the Commonsense Norm Bank, a collection of 1.7 million examples of people's ethical judgments pulled from a variety of datasets.

Programmers and developers have made major updates to Delphi AI three times since it was launched. There have also been recent patches to the moralizing machine, including "enhanced guards against statements implying racism and sexism."

The user is now told that this is an experiment that may return upsetting results. When the page loads, the user must click three checkboxes acknowledging that Delphi is a work in progress, that it has limitations, and that it is collecting data.

Morality can be complicated, and ethics may be situational. Both vary from culture to culture, and various religions have some areas of agreement and some stark differences.

So when Delphi claims that abortion is murder and self-defense is not a defense, it may be that humans are still trying to answer these questions themselves. It is hard to teach gray areas to a machine, and the moral and ethical dilemmas its algorithms wrestle with may not truly be resolved by technology. After all, AI technology has some ethical and moral questions of its own.