Europe is working on legislation to regulate artificial intelligence. European regulators are delighted, but what does the rest of the world say about the AI Act?
Now that the outlines of the AI Act are known, a debate is erupting over its possible implications. One camp believes regulation is needed to curb the risks of powerful AI technology, while the other is convinced that regulation will prove harmful to the European economy. Or is it possible that safe AI products and economic prosperity go hand in hand?
‘Industrial revolution’ without Europe
The EU "prevents the industrial revolution from happening" and portrays itself as "no part of the future world," Joe Lonsdale told Bloomberg. He regularly appears in US media on AI topics as an outspoken advocate of the technology. According to him, the technology has the potential to spark a third industrial revolution, and every company should already be implementing it in its organization.
Lonsdale earned a bachelor's degree in computer science in 2003 and has since co-founded several technology companies, including some that deploy artificial intelligence. He went on to become a businessman and venture capitalist.
The question is whether the concerns are well-founded. At the very least, caution seems necessary to avoid seeing major AI products disappear from Europe. Sam Altman, better known as the CEO of OpenAI, previously spoke out about the possibility of AI companies leaving Europe if the rules become too hard to comply with. He does not plan to pull ChatGPT out of Europe because of the AI Act, but he warns that other companies may act differently.
That said, the OpenAI CEO is essentially a strong supporter of safety legislation for AI. He advocates clear safety requirements that AI developers must meet before officially releasing a new product.
When a major player in the AI field calls for regulation of the very technology he works with, perhaps Europe should listen. That is what is happening with the AI Act, through which the EU is trying to be the first in the world to establish a set of rules for artificial intelligence. The EU is a pioneer, but it will also have to discover the pitfalls of such a policy without a working example anywhere in the world.
Until the rules officially come into effect in 2025, they will be continuously tested by experts who publicly give their opinions on the law. That is a public testing period AI developers should also consider important, Altman said. The European Union is also avoiding imposing top-down rules on a field it does not know well itself: the legislation takes shape bottom-up, by involving companies and developers already actively engaged in AI in setting the standards.
Although the EU often proclaims that the AI Act will be the world's first regulation of artificial intelligence, other jurisdictions are working on legal frameworks of their own. The United Kingdom, for example, is eager to embrace the technology but also wants certainty about its safety. To that end, it is immersing itself in the technology and has gained early access to models from DeepMind, OpenAI and Anthropic for research purposes.
However, Britain has no plans to punish companies that do not comply. The country limits itself to a framework of five principles that artificial intelligence should follow. That choice seems to come at the expense of guaranteed safety of AI products: the country says it must avoid a mandatory political framework in order to attract investment from AI companies to the UK. So safe AI products and economic prosperity do not appear to go together, according to the UK. It remains to be seen whether Europe's AI Act disproves that view.
(Editor’s note: This article first appeared on Techzine)