A leaked draft of an EU regulation on the use of AI sets hefty fines of up to €20 million or four percent of global turnover (whichever is greater).
In the draft, the legislation’s authors wrote:
“Some of the uses and applications of artificial intelligence may generate risks and cause harm to interests and rights that are protected by Union law. Such harm might be material or immaterial, insofar as it relates to the safety and health of persons, their property or other individual fundamental rights and interests protected by Union law.
A legal framework setting up a European approach on artificial intelligence is needed to foster the development and uptake of artificial intelligence that meets a high level of protection of public interests, in particular the health, safety and fundamental rights and freedoms of persons as recognised and protected by Union law.”
Few dispute the need for AI regulation, but how tightly the technology should be controlled is contentious. A lack of controls endangers lives and privacy, especially given AI’s well-documented bias problems. However, overregulation, and the fear of being penalised for AI research in Europe, risks driving an important technology out of the continent to countries with laxer rules.
AI relies on data, so the impact the EU’s GDPR would have on research was part of the debate when that legislation (which carries the same maximum penalties for breaches as the draft AI rules) was being conceived. It’s hard to say for sure whether the strict regulatory environment is the cause, but EU nations are falling behind industry leaders like the US, China, and the UK.
Last year, the White House even urged its European allies not to overregulate AI. In a statement released by the Office of Science and Technology Policy, the White House wrote:
“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach.
The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”
The EU believes it has taken a “human-centric” approach to its AI regulation, one that neither leaves powerful companies to their own devices, as in the US, nor uses the technology to create a 1984-like dystopian surveillance state, as China has with its social scoring systems and mass facial recognition.
AI in policing is one of the most fiercely debated issues, especially due to the aforementioned bias problems. However, the technology also has huge potential to tackle serious crime. Here too, the EU is attempting to strike a fine balance, permitting authorities to use facial recognition in public places to fight serious crime provided its use is limited in time and geography.
European cooperation with the US is likely to come under further pressure from the bill. That relationship has already been strained in recent years by EU members’ increasing ties with Russia and China, and by a perceived lack of commitment to NATO given historic underfunding and plans for the creation of an EU army.
In addition to setting rules governing the use of AI, the draft proposes the creation of a European Artificial Intelligence Board comprising one representative from each EU member state, the EU’s data protection authority, and a European Commission representative.
21/04 update: As reported by French publication Contexte, a new draft of the EU’s impending AI regulations increases the potential fines to six percent of global turnover or €30 million (whichever is higher).