Scientists pledge not to build AIs which kill without oversight

Thousands of scientists have signed a pledge not to have any role in building AIs which have the ability to kill without human oversight.

When many people think of AI, they give at least a passing thought to the rogue AIs of sci-fi films, such as the infamous Skynet in Terminator.

In an ideal world, AI would never be used in any military capacity. However, it will almost certainly be developed one way or another, because of the advantage it would provide over an adversary without similar capabilities.

Russian President Vladimir Putin, when asked his thoughts on AI, recently said: “Whoever becomes the leader in this sphere will become the ruler of the world.”

Putin’s words sparked fears of a race in AI development similar to the nuclear arms race, and one which could prove reckless.

Rather than attempting to stop military AI development, a more attainable goal is to at least ensure any AI decision to kill is subject to human oversight.

Demis Hassabis at Google DeepMind and Elon Musk from SpaceX are among the more than 2,400 scientists who signed the pledge not to develop AI or robots which kill without human oversight.

The pledge was created by the Future of Life Institute and calls on governments to agree on laws and regulations that stigmatise and effectively ban the development of killer robots.

“We the undersigned agree that the decision to take a human life should never be delegated to a machine,” the pledge reads. It goes on to warn that “lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

Programming Humanity

Human compassion is difficult to program; we’re certainly many years away from being able to do so. However, it’s vital when it comes to life-or-death decisions.

Consider a missile defense AI set up to protect a nation. Based on pure logic, it may determine that wiping out another nation which begins a missile program is the best way to protect its own. A human would take into account that these are people’s lives, and that alternatives such as a diplomatic resolution should be sought first.

Robots may one day be used in policing to reduce the risk to human officers. They could be armed with firearms or tasers, but the decision to fire should always rest with a human operator.

Although it will undoubtedly improve with time, AI has been shown to have a serious bias problem. A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at recognising Caucasians.

An armed robot that mistakes one person for another could end up killing that individual simply because of a flaw in its algorithms. Requiring a human operator to confirm the AI’s assessment may be enough to prevent such a disaster.
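To make the human-in-the-loop principle concrete, a confirmation gate might look something like the sketch below. This is purely illustrative: the class, function names, and scores are hypothetical, and a real system would involve far more than a console prompt.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """Hypothetical output of an automated identification system."""
    target_id: str
    match_confidence: float  # how sure the model is it identified the right person
    threat_score: float      # the model's estimate of the threat level, 0.0-1.0


def request_operator_confirmation(assessment: Assessment) -> bool:
    """Show the machine's assessment to a human operator and wait for an
    explicit decision. The system itself never authorises the use of force."""
    print(f"Target {assessment.target_id}: "
          f"match confidence {assessment.match_confidence:.2f}, "
          f"threat score {assessment.threat_score:.2f}")
    reply = input("Authorise use of force? (yes/no): ").strip().lower()
    return reply == "yes"


def act_on_assessment(assessment: Assessment) -> None:
    # Regardless of how confident the model claims to be, the final
    # decision is always escalated to a person.
    if request_operator_confirmation(assessment):
        print("Action authorised by human operator.")
    else:
        print("Action denied; standing down.")
```

The point of the sketch is simply that the model’s confidence score never triggers an action on its own; it is only ever an input to a human decision.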

Read more: INTERPOL investigates how AI will impact crime and policing

Do you agree with the pledge made by the scientists? Let us know in the comments.
