Google, the company once known for its famous motto "Don't be evil," has quietly revised its corporate rules. To some observers, the removal of past commitments that restricted the development of artificial intelligence for military applications amounts to a tacit abandonment of AI ethics.
According to Wayback Machine archives, the tech giant updated its AI Principles webpage between January 30 and February 4, 2025. The revisions erased earlier pledges meant to prevent Google's AI from being used in warfare or unlawful surveillance.
These promises had been in place since 2018, when CEO Sundar Pichai introduced a list of ethical guidelines to govern the company’s AI projects.
Don’t Be Evil: What Changed?
Before the update, Google's AI Principles explicitly stated that the company "will not design or deploy AI in the following application areas." These included "technologies that cause or are likely to cause overall harm" and "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." A small disclaimer beneath the list noted that the guidelines might evolve over time.