Consumers believe AI should be held to a ‘Blade Runner’ law

Ryan Daws is a senior editor at TechForge Media with over a decade of experience crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

A study conducted by SYZYGY, titled ‘Sex, lies and AI: How the British public feels about artificial intelligence’, has revealed the extent to which consumers expect AI to be regulated.

Blade Runner 2049 is now in cinemas with its futuristic vision which, as you’d expect, features artificial intelligence. The original Blade Runner, released in 1982, envisioned what felt like a distant future, but the new film has elements which no longer seem that far away.

Like many similar films — including the likes of I, Robot and Automata — the AIs in Blade Runner are expected to conform with Isaac Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The robots must also not conceal their identity, and it’s this rule that consumers in SYZYGY’s study want AIs to adhere to.

Just over nine in ten (92%) of the respondents believe AIs used for marketing should be governed by a code of conduct, while three-quarters (75%) want brands to obtain their explicit consent before AI is used to market to them.

While it’s clear that consumers feel strongly about AIs being used for marketing engagement, they’re more lenient towards advertising. Just 17 percent would take a negative view of their favourite brand if they found an ad had been created by an AI, and 79 percent claim they would not object to AI being used to profile them for advertising.

Meanwhile, 28 percent of respondents would feel negatively if they found a brand was using AI rather than a human for customer service. Women in the study were more likely to hold this negative perception: the figure rises to a third (33%) when men are removed from the results.

The study also revealed the respondents’ biggest fears about AI.

AI and ethics

SYZYGY is launching a voluntary set of AI Marketing Ethics guidelines and calling on brands and marketing agencies to contribute. They propose the following core guidelines:

  • Do no harm – AI technology should not be used to deceive, manipulate or in any other way harm the wellbeing of marketing audiences
  • Build trust – AI should be used to build rather than erode trust in marketing. This means using AI to improve marketing transparency, honesty, and fairness, and to eliminate false, manipulative or deceptive content
  • Do not conceal – AI systems should not conceal their identity or pose as humans in interactions with marketing audiences
  • Be helpful – AI in marketing should be put to the service of marketing audiences by helping people make better purchase decisions based on their genuine needs through the provision of clear, truthful and unbiased information

So far, the guidelines appear to offer a sensible place to start. Over time, new conundrums will present themselves and rules will need to be enshrined in law.

An empathy test on SYZYGY’s website asks the user various questions and poses some interesting scenarios. One, in particular, goes into the complex decisions which AIs powering self-driving cars may have to make…

“It is 2049. You are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, it will almost certainly kill five children. Should it save them by steering you off the cliff to your certain death?”

54 percent of respondents said a self-driving car should be programmed to sacrifice its passengers to minimise overall harm. However, 71 percent said they would not be willing to travel in such a vehicle.

The ethical use of AI is bound to be a major topic over the coming years. Companies such as Google’s DeepMind are already launching their own dedicated ethics boards. As always, we’ll be here to keep you on top of the conversation.

The report was based on a survey of 2,000 UK adults from the WPP Lightspeed Consumer Panel. You can find the full report here.

Do you agree with the respondents about the use of AI? Share your thoughts in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
