Google funding ‘good’ AI may help some forget that military fiasco

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

Google has launched an initiative to fund ‘good’ AI which may help some forget about the questionable military contracts it was involved with.

The new initiative, called AI for Social Good, is a joint effort between the company’s philanthropic arm and its in-house AI experts.

Kicking off the initiative is the ‘AI Impact Challenge’ which is set to provide $25 million in funding to non-profits while providing access to Google’s vast resources.

As part of the initiative, Google partnered with the Pacific Islands Fisheries Science Center of the US National Oceanic and Atmospheric Administration (NOAA) to develop algorithms to identify humpback whale calls.

The algorithms were created using 15 years’ worth of data and provide vital information about humpback whale presence, seasonality, daily calling behaviour, and population structure.
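The article doesn’t detail how the detection works, but systems like this typically scan long audio recordings for frames where energy concentrates in the frequency range of whale calls. The sketch below is purely illustrative and is not Google/NOAA’s method (which used a trained classifier on spectrograms): it flags audio frames whose energy ratio in an assumed 100–2,000 Hz band exceeds a hypothetical threshold.

```python
import numpy as np

def detect_call_frames(audio, sr, band=(100, 2000),
                       frame=1024, hop=512, thresh_db=-20.0):
    """Flag frames whose in-band energy ratio exceeds a threshold.

    Illustrative sketch only: the band, frame size, and threshold are
    hypothetical, and the real NOAA/Google system used a trained
    image classifier on spectrograms rather than an energy rule.
    """
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])

    flags = []
    for start in range(0, len(audio) - frame + 1, hop):
        # Windowed short-time spectrum of one frame
        spectrum = np.abs(np.fft.rfft(audio[start:start + frame] * window))
        band_energy = np.sum(spectrum[in_band] ** 2)
        total_energy = np.sum(spectrum ** 2) + 1e-12
        # Ratio of in-band to total energy, in decibels
        ratio_db = 10.0 * np.log10(band_energy / total_energy + 1e-12)
        flags.append(ratio_db > thresh_db)
    return np.array(flags)
```

A pure 500 Hz tone would be flagged in every frame, while a 3,000 Hz tone (outside the assumed band) would not be flagged at all; real recordings would of course need a far more robust classifier to separate calls from ship noise and other sources.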

While it’s great to see Google funding and lending its expertise to important AI projects, the initiative comes against a wider backdrop of Silicon Valley tech giants’ involvement in controversial areas such as defence.

Google itself was embroiled in a backlash over its ‘Project Maven’ defence contract to supply drone-footage-analysing AI to the Pentagon. The contract received both internal and external criticism.

Back in April, Google’s infamous ‘Don’t be evil’ motto was removed from its code of conduct’s preface. Now, in the final line, it says: “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!”

Google’s employees spoke up. Over 4,000 signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post that the company would not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Here are what Google says are the company’s key objectives for AI development:

    1. Be socially beneficial.
    2. Avoid creating or reinforcing unfair bias.
    3. Be built and tested for safety.
    4. Be accountable to people.
    5. Incorporate privacy design principles.
    6. Uphold high standards of scientific excellence.
    7. Be made available for uses that accord with these principles.

That first objective, “be socially beneficial”, is what Google is aiming for with its latest initiative. The company says it’s not against future government contracts as long as they’re ethical.

“We’re entirely happy to work with the US government and other governments in ways that are consistent with our principles,” Google’s AI chief Jeff Dean told reporters Monday.

