A session here at MWC 2018 titled ‘AI Everywhere: Ethics and Responsibility’ explored some of the questions we should be asking ourselves as AI development advances through an ethical minefield.
Dr Paula Boddington, a researcher and philosopher from Oxford University, wrote the book ‘Towards a Code of Ethics for Artificial Intelligence’ and led today’s proceedings. She claims to embrace technological progress but wants to ensure all potential impacts of developments have been considered.
“In many ways, AI is getting us to ask questions about the very limits – and grounds – of our human values,” says Boddington. “One of the most exciting things right now is that all over the world people are having deep and practical conversations about ethics.”
Naturally, we’ve covered ethics on many occasions here on AI News. You will have heard the warnings from some of the world’s most talented minds, such as Stephen Hawking and Elon Musk. While they are among the most prominent voices, they are far from alone in their concerns.
Just earlier this month, we covered a report from some of Boddington’s colleagues at Oxford University warning that AI poses a ‘clear and present danger’. In the report, the researchers join previous calls across the industry for sensible regulation — including for a robot ethics charter, and for taking a ‘global stand’ against AI militarisation.
Part of today’s difficulty is defining what constitutes artificial intelligence in the first place, argues Boddington.
“It’s difficult to find an exact definition of AI that everyone will agree on,” she argues. “In very broad terms, we could think of it as a technology which aims to extend human agency, decision, and thought. In some cases, replacing certain tasks and jobs.”
Opinion is split on the impact of AI on jobs – some believe it will kill off jobs and that a universal basic income will become necessary, while others believe it will only enhance the capabilities of workers. There’s also the opinion that AI will increase the wealth inequality between the rich and poor.
“You may argue that technology, in general, enhances human capabilities and therefore raises the question of responsibilities,” says Boddington. “But AI has potentially unprecedented power in how it extends human responsibility and decision-making.”
Boddington highlights the potential for AI if used ethically for things such as diagnosing medical conditions and quickly interpreting large amounts of data. As a philosopher, she ponders whether it extends our reach beyond what humans can handle.
‘Responsibility is one of the things which makes us human’
Responsibility is the word of the day, and Boddington has concerns about AI diminishing it. She brings the audience’s focus to one of the most famous studies of obedience in psychology – carried out by Stanley Milgram, a psychologist at Yale University.
Milgram’s study, for those unaware, involved an authority figure instructing test subjects to administer electric shocks of increasing severity to another person whenever they answered questions incorrectly.
The shock levels were labelled, with the highest marked as dangerous. While some subjects began to question the orders at the upper levels, most ultimately obeyed – it’s theorised – because of the authority conferred by the lab surroundings. By contrast, when subjects were asked to go straight to deadly levels of shock, they refused.
The study concluded that when responsibility is eroded bit by bit, people can be led to commit acts they would otherwise consider inhuman. Milgram launched his study after WWII, out of interest in how easily ordinary people could be influenced into committing atrocities.
AI is already being used in marketing, where it is designed to influence people. Boddington is concerned that diminished responsibility could lead humans to make, or authorise, poor decisions through AI.
“We could allow it to replace human thought and decision where we shouldn’t,” warns Boddington. “Responsibility is one of the things which makes us human.”
Beyond making us human, responsibility is also linked to health. In a study of Whitehall staff, where strict hierarchies exist, those who held responsibility and had the power to make changes enjoyed better health than those who did not. Having these responsibilities eroded may therefore lead to poorer wellbeing.
Answering these questions, and ensuring the ethical implementation of AI, will require global cooperation and collaboration across all parts of society. The failure to do so may have serious consequences.
What are your thoughts about ethics in AI development? Let us know in the comments.