AI Expo: Protecting ethical standards in the age of AI

Ryan Daws is a senior editor at TechForge Media with over a decade of experience in crafting compelling narratives and making complex topics accessible. His articles and interviews with industry leaders have earned him recognition as a key influencer by organisations like Onalytica. Under his leadership, publications have been praised by analyst firms such as Forrester for their excellence and performance. Connect with him on X (@gadget_ry) or Mastodon.

Rapid advancements in AI require maintaining high ethical standards, as much for legal reasons as moral ones.

During a session at this year’s AI & Big Data Expo Europe, a panel of experts provided their views on what businesses need to consider before deploying artificial intelligence.

Here’s a list of the panel’s participants:

  • Moderator: Frans van Bruggen, Policy Officer for AI and FinTech at De Nederlandsche Bank (DNB)
  • Aoibhinn Reddington, Artificial Intelligence Consultant at Deloitte
  • Sabiha Majumder, Model Validator – Innovation & Projects at ABN AMRO Bank N.V.
  • Laura De Boel, Partner at Wilson Sonsini Goodrich & Rosati

The first question called for thoughts about current and upcoming regulations that affect AI deployments. As a lawyer, De Boel kicked things off by giving her take.

De Boel highlights the EU’s upcoming AI Act, which builds upon the foundations of legislation such as GDPR but extends them to artificial intelligence.

“I think that it makes sense that the EU wants to regulate AI, and I think it makes sense that they are focusing on the highest risk AI systems,” says De Boel. “I just have a few concerns.”

De Boel’s first concern is how complex it will be for lawyers like herself.

“The AI Act creates many different responsibilities for different players. You’ve got providers of AI systems, users of AI systems, importers of AI systems into the EU — they all have responsibilities, and lawyers will have to figure it out,” De Boel explains.

The second concern is how costly this will all be for businesses.

“A concern that I have is that all these responsibilities are going to be burdensome, a lot of red tape for companies. That’s going to be costly — costly for SMEs, and costly for startups.”

Similar concerns were raised about GDPR. Critics argue that overreaching regulation drives innovation, investment, and jobs out of the Eurozone and leaves countries like the USA and China to lead the way.

Peter Wright, Solicitor and MD of Digital Law UK, once told AI News about GDPR: “You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe.”

The concerns raised by De Boel echo Wright’s, and it’s true that the regulation will have a greater impact on startups and smaller companies that already face an uphill battle against established industry titans.

De Boel’s final concern on the topic is about enforcement and how the AI Act goes even further than GDPR’s already strict penalties for breaches.

“The AI Act really copies the enforcement of GDPR but sets even higher fines of 30 million euros or six percent of annual turnover. So it’s really high fines,” comments De Boel.

“And we see with GDPR that when you give these types of powers, it is used.”
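As a rough illustration of how a turnover-based penalty like this scales, here is a minimal sketch. It assumes, as under GDPR, that the higher of the two amounts applies; that detail and the function itself are illustrative, not taken from the Act’s text:

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Sketch of the AI Act's proposed maximum fine: EUR 30 million
    or 6% of annual turnover, assuming the higher amount applies
    (as is the case under GDPR)."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# For a company with EUR 1 billion in turnover, the turnover-based
# cap dominates: 6% of 1 billion is 60 million euros
print(max_ai_act_fine(1_000_000_000))  # 60000000.0
```

For smaller companies the 30-million-euro floor dominates instead, which is part of why critics see the regime as disproportionately heavy for SMEs and startups.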

Outside of Europe, different laws apply. In the US, rules such as those around biometric recognition can vary greatly from state to state. China, meanwhile, recently introduced a law that requires companies to give consumers the option to opt out of things like personalised advertising.

Keeping up with all the ever-changing laws around the world that may impact your AI deployments is going to be a difficult task, but a failure to do so could result in severe penalties.

The financial sector is already subject to very strict regulations and has used statistical models for decades for things such as lending. The industry is now increasingly using AI for decision-making, which brings with it both great benefits and substantial risks.

“The EU requires auditing of all high-risk AI systems in all sectors, but the problem with external auditing is there could be internal data, decisions, or confidential information which cannot be shared with an external party,” explains Majumder.

Majumder goes on to explain that it’s therefore important to have a second line of defence, internal to the organisation, which looks at the model from an independent risk-management perspective.

“So there are three lines of defense: First, when developing the model. Second, we’re assessing independently through risk management. Third, the auditors as the regulators,” Majumder concludes.

Of course, when AI always makes the right decisions, everything is great. When it inevitably doesn’t, the results can be seriously damaging.

The EU is keen on banning AI for “unacceptable” risk purposes that may damage the livelihoods, safety, and rights of people. Three other categories (high risk, limited risk, and minimal/no risk) will be permitted, with decreasing amounts of legal obligations as you go down the scale.
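The tiered structure described above can be summarised in a small sketch. The tier names come from the text; the helper function is illustrative only:

```python
# The AI Act's four risk categories as described above: "unacceptable"
# risk is banned outright, while legal obligations decrease down the scale
RISK_TIERS = ["unacceptable", "high", "limited", "minimal/no risk"]

def is_permitted(tier: str) -> bool:
    # Only the "unacceptable" tier is banned; the other three are allowed,
    # with progressively lighter legal obligations
    return tier != "unacceptable"

print([tier for tier in RISK_TIERS if is_permitted(tier)])
# ['high', 'limited', 'minimal/no risk']
```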

“We can all agree that transparency is really important, right? Because let me ask you a question: If you apply for some kind of service, and you get denied, what do you want to know? Why am I being denied the service?” says Reddington.

“If you’re denied service by an algorithm that cannot come up with a reason, what is your reaction?”

There’s a growing consensus that XAI (Explainable AI) should be used in decision-making so that the reasons for an outcome can always be traced. However, Van Bruggen makes the point that transparency may not always be a good thing — you may not want to give a terrorist or someone accused of a financial crime the reason why they’ve been denied a loan, for example.
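A minimal sketch of the idea behind traceable reason codes, using a hypothetical linear credit-scoring model — every feature name, weight, and threshold below is invented for illustration and not taken from any real system:

```python
# Hypothetical linear credit-scoring model: score = sum(weight * value).
# The same weights that produce the decision can produce the explanation.
WEIGHTS = {"income": 0.4, "missed_payments": -1.5, "account_age_years": 0.2}
THRESHOLD = 1.0  # applications scoring below this are denied

def decide_with_reasons(applicant: dict) -> tuple[str, list[str]]:
    # Each feature's contribution to the score doubles as its explanation
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    if score >= THRESHOLD:
        return "approved", []
    # Reason codes: the features that pulled the score down, worst first
    reasons = sorted(
        (f for f in contributions if contributions[f] < 0),
        key=lambda f: contributions[f],
    )
    return "denied", reasons

decision, reasons = decide_with_reasons(
    {"income": 1.0, "missed_payments": 2.0, "account_age_years": 3.0}
)
print(decision, reasons)  # denied ['missed_payments']
```

With an interpretable model like this, a denied applicant can always be told which factors drove the outcome — the property the panel argues for, even if real deployments use far more sophisticated explanation techniques.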

Reddington believes this is why humans should not be taken out of the loop. The industry is far from reaching that level of AI anyway, but even if/when it becomes available, there are ethical reasons we shouldn’t want to remove human input and oversight entirely.

However, AI can also increase fairness.

Majumder gives an example from her field of expertise, finance, where historical data is often used for decisions such as credit. Over time, people’s situations change, but they can be left struggling to get credit based on outdated historical data.

“Instead of using historical credit rating as input, we can use new kinds of data like mobile data, utility bills, or education, and AI has made it possible for us,” explains Majumder.

Of course, using such relatively small datasets then poses its own problems.

The panel offered some fascinating insights on ethics in AI and the current and future regulatory environment. As with the AI industry generally, it’s rapidly advancing and hard to keep up with but critical to do so.

You can find out more about upcoming events in the global AI & Big Data Expo series here.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
