A survey of major players within the industry concludes that leading tech companies like Amazon and Microsoft are putting the world ‘at risk’ of killer AI.
PAX, a Dutch NGO, ranked 50 firms based on three criteria:
- Whether the technology they're developing could be used for killer AI.
- The extent of their involvement with military projects.
- Whether they've committed to abstaining from military applications in the future.
Microsoft and Amazon are named among the world’s ‘highest risk’ tech companies, while Google leads the way among large tech companies implementing proper safeguards.
Google’s ranking among the safest tech companies may come as a surprise to some, given the company’s reputation for mass data collection. Mountain View was also caught up in an outcry over its controversial ‘Project Maven’ contract with the Pentagon.
Project Maven was a contract Google had with the Pentagon to supply AI technology for military drones. Several high-profile employees resigned over the contract, while over 4,000 Google staff signed a petition demanding their management cease the project and never again “build warfare technology.”
Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post that the company would not develop technologies or weapons designed to cause harm, or anything that could be used for surveillance in violation of “internationally accepted norms” or “widely accepted principles of international law and human rights”.
Pichai’s promise not to be involved with such contracts in the future appears to have satisfied PAX in its rankings. Google has since attempted to improve the public image of its AI work, for instance by creating a dedicated ethics panel. That effort backfired, however, and the panel collapsed quickly after it was found to feature a member of a right-wing think tank and a defense drone mogul.
“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.
Microsoft, which ranks among the highest risk tech companies in PAX’s list, warned investors back in February that its AI offerings could damage the company’s reputation.
In a quarterly report, Microsoft wrote:
“Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”
Some of Microsoft’s forays into the technology have already proven troublesome, such as its chatbot ‘Tay’, which became a racist, sexist, and generally rather unsavoury character after internet users took advantage of its machine-learning capabilities.
Microsoft and Amazon are both currently bidding for the Pentagon’s $10 billion JEDI contract to provide cloud infrastructure for the US military.
“Tech companies need to be aware that unless they take measures, their technology could contribute to the development of lethal autonomous weapons,” comments Daan Kayser, PAX project leader on autonomous weapons. “Setting up clear, publicly-available policies is an essential strategy to prevent this from happening.”
You can find PAX’s full risk assessment of the companies here (PDF).