A study from SnapLogic has found that 94 percent of IT decision makers across the UK and US want a greater focus on ethical AI development.
Bias in algorithms continues to be a problem and is among the biggest barriers to societal adoption. Facial recognition algorithms, for example, have been found to be far less accurate for some demographic groups than others.
Without addressing these issues, we’re in danger of automating problems such as racial profiling. Public trust in AI is already low, so there’s a collective responsibility within the industry to ensure high ethical standards.
Gaurav Dhillon, CEO at SnapLogic, commented:
“AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes.
We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way.”
SnapLogic’s report found that over half (53%) of IT leaders believe responsibility for ethical AI development lies with the organisation developing it, regardless of whether it’s a commercial business or an academic institution.
Far fewer (17%) blame individual developers working on AI projects. Respondents in the US, however, are more than twice as likely (21%) to blame individuals as those in the UK (9%).
Some global bodies are emerging which aim to establish AI standards and fair rules. Understandably, there’s great concern over AI’s role in military technology. A so-called ‘AI arms race’ between global powers like China, the US, and Russia could lead to irresponsible developments with devastating consequences.
However, just 16 percent of respondents see an independent global consortium – comprising representatives from government, academia, research institutions, and businesses – as the only way to establish much-needed standards, rules, and protocols.
IT leaders welcome expert groups on AI such as the European Commission’s High-Level Expert Group on Artificial Intelligence. Half of respondents believe organisations will take guidance and recommendations from such groups. British respondents (15%), however, are almost twice as likely as their American counterparts (9%) to believe organisations will disregard them.
Just five percent of UK IT leaders believe advice from AI expert groups will be useless if not enforced by law.
87 percent of all respondents want AI to be regulated, although there’s some debate over how. 32 percent believe it should come from a combination of government and industry, while 25 percent want an independent industry consortium.
There are also discrepancies in the appetite for regulation by industry. Almost a fifth (18%) of IT decision makers in manufacturing are against regulation, followed by 13 percent in the ‘Technology’ sector and the same percentage in the ‘Retail, Distribution and Transport’ sector. The reasons given were close to an even split between the belief that regulation would slow down innovation, and that development should be left to the discretion of those building the AI.
“Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained,” continues Dhillon. “Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”
AI will be revolutionary – in fact, some call it the fourth industrial revolution. However, as a great fictional man once said: “With great power, comes great responsibility.”