European Union Proposes Limitations on Artificial Intelligence

The applications of artificial intelligence are widespread (Flickr).

The European Union (EU) aims to limit applications of artificial intelligence (AI) in order to prevent serious misuse of the technology, according to a draft regulation leaked on April 14. Companies that do not comply with the EU’s requirements could face fines of up to €20 million or 4 percent of turnover. The final regulations are set to be unveiled on April 21.

The EU is pursuing a “human-centric” approach, seeking to steer between two models it wishes to avoid: the largely unregulated treatment of tech companies seen in the U.S. and the harnessing of AI for surveillance purposes seen in China.

The plan to regulate AI has not come out of the blue. The EU published a White Paper in February 2020 sketching out plans for governing what it deemed “high risk” uses of AI. Applications slated for prohibition include facial recognition for mass surveillance and algorithms that can manipulate or even control human behavior.

The proposed ban does not mean that other applications of AI will be hindered. AI systems that could help reverse climate change, increase manufacturing efficiency, or revamp electrical grids would all be fair game. However, existing systems such as ID scanning and creditworthiness assessment would be classified as “high risk.”

The main goal of the EU’s limitations on AI is to increase public trust in the technology, through compliance checks and balances that reflect the EU’s values. The leaked document states that “a legal framework setting up a European approach on artificial intelligence is needed to foster the development and uptake of artificial intelligence that meets a high level of protection of public interests, in particular, the health, safety, and fundamental rights and freedoms of persons as recognized and protected by Union law.”

Still, the EU’s proposal has its critics. In an interview with POLITICO in March, Eric Schmidt, Google’s former CEO and chair of the U.S. National Security Commission on Artificial Intelligence (NSCAI), said Europe’s strategy is “simply not big enough” to compete with American big tech companies. “Europe will need to partner with the United States on these key platforms,” he said.

The European Centre for Not-for-Profit Law, which contributed to the European Commission’s White Paper on AI, likewise argued that the proposed legislation contains “lots of vagueness and loopholes”: “The EU’s approach to binary-defining high versus low risk is sloppy at best and dangerous at worst, as it lacks context and nuances needed for the complex AI ecosystem already existing today.”

Until April 21, there is only an incomplete picture of the proposed AI rules. Even then, their application and usefulness will still need to be assessed.