Don’t Be Evil: Google publishes its AI ethical principles following backlash


Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

Following the backlash over Project Maven, its plan to develop AI for the US military, Google has withdrawn from the project and published its ethical principles for AI.

Project Maven was Google’s collaboration with the US Department of Defense. In March, leaks indicated that Google supplied AI technology to the Pentagon to help analyse drone footage.

The following month, over 4,000 employees signed a petition demanding that Google’s management cease work on Project Maven and promise to never again “build warfare technology.”

In April 2018, Google’s infamous ‘Don’t be evil’ motto was removed from the code of conduct’s preface — but retained in its last sentence. In the final line, it now says: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Google’s employees saw something that wasn’t right and did speak up. In fact, Gizmodo reported a dozen or so employees resigned in protest.

The company listened and told its employees last week that it would not be renewing its contract with the Department of Defense when it expires next year.

In a bid to further quell fears about the development of its AI technology and how the company intends it to be used, Google has today published its ethical principles.

Google CEO Sundar Pichai wrote in a blog post that the company will not develop technologies or weapons that cause harm, or anything that can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

Some observers are concerned that the clause about ‘accepted norms’ provides grounds to push the boundaries of what’s considered acceptable.

Gizmodo also reported that Google sought to help build systems enabling the Pentagon to perform surveillance on entire cities. In China, such surveillance is widely accepted and in use today.

Here are what Google says are the company’s key objectives for AI development:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.  

Pichai promised the company “will work to limit potentially harmful or abusive applications” and will block the use of their technology if they “become aware of uses that are inconsistent” with the principles Google has set out today.

What are your thoughts on Google’s AI ethical principles? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
