China and Europe are leading the push to regulate AI

A robot plays the piano at the Apsara Conference, an artificial intelligence and cloud computing conference, in China on Oct. 19, 2021. As China revamps its rulebook for the technology, the European Union is crafting its own framework regulation to control AI, but has yet to cross the finish line.

STR | AFP | Getty Images

As China and Europe try to rein in artificial intelligence, a new front is opening up around who will set the standards for the burgeoning technology.

In March, China implemented regulations governing how online recommendations are generated through algorithms, suggesting what to buy, watch or read.

It is the latest salvo in China’s tightening control over the tech sector and sets an important marker in how AI is regulated.

“It came as a surprise to some people that last year, China started drafting AI regulation. It is one of the first major economies to put it on the regulatory agenda,” Xiaomeng Lu, director of Eurasia Group’s geo-technology practice, told CNBC.

As China revamps its rulebook for the technology, the European Union is crafting its own regulatory framework to control AI, but has yet to cross the finish line.

With two of the world’s largest economies introducing AI regulations, the field for AI business and development globally could be about to undergo significant change.

A Global Playbook From China?

At the heart of China’s latest policy are online recommendation systems. Companies must tell users if an algorithm is being used to show them certain information, and people can choose to opt out.

Lu said this is an important change as it gives people more influence over the digital services they use.

Those rules come amid a changing environment in China for its biggest internet companies. Several of China’s local tech giants, including Tencent, Alibaba and ByteDance, have found themselves in trouble with the authorities, especially around antitrust laws.


“I think those trends have changed the government’s attitude on this quite a bit, to the extent that they start looking at other questionable market practices and algorithms that promote services and products,” Lu said.

China’s moves are noteworthy, given how quickly they were implemented, compared to the timeframes with which other jurisdictions typically work when it comes to regulation.

China’s approach could provide a playbook that influences other laws internationally, said Matt Sheehan, a fellow in the Asia program at the Carnegie Endowment for International Peace.

“I see China’s AI regulations and the fact that they’re moving first as essentially running some large-scale experiments that the rest of the world can watch and potentially learn something from,” he said.

The European Approach

The European Union is also making its own rules.

The AI Act is the next major piece of tech legislation on the agenda in what has been a busy few years.

In recent weeks, the bloc closed negotiations on the Digital Markets Act and the Digital Services Act, two major regulations that will rein in Big Tech.

The AI Act now seeks to impose a comprehensive framework based on the level of risk, which will have far-reaching effects on the products a company brings to market. It defines four categories of risk in AI: minimal, limited, high, and unacceptable.

France, which holds the rotating presidency of the EU Council, has unveiled new powers for national authorities to audit AI products before they hit the market.

Defining these risks and categories has at times proven difficult. Members of the European Parliament have called for a ban on facial recognition in public places to restrict its use by law enforcement, while the European Commission wants to ensure it can still be used in investigations. Privacy campaigners, meanwhile, fear the technology will increase surveillance and erode privacy.

Sheehan said that while China’s political system and motivations will be “totally anathema” to policymakers in Europe, the technical goals of the two sides have many similarities, and the West needs to pay attention to how China implements them.

“We don’t want to mimic any of the ideological or speech controls that are put in place in China, but some of these issues on a more technical side are similar across different jurisdictions. And I think the rest of the world should be watching what’s going on outside of China from a technical perspective.”

China’s efforts are more prescriptive, he said, and include algorithm recommendation rules that could curb tech companies’ influence on public opinion. The AI Act, on the other hand, is a broad effort that seeks to bring all AI under one regulatory roof.

Lu said the European approach will be “more burdensome” for companies as it will require pre-market evaluation.

“That’s a very restrictive system compared to the Chinese version. Basically, they are testing products and services in the market; they don’t do it before those products or services are introduced to consumers.”

‘Two different universes’

Seth Siegel, global director of AI at Infosys Consulting, said that as a result of these differences, a schism could form in the way AI is developed on the global stage.

“If I am trying to design mathematical models, machine learning and artificial intelligence, I will take fundamentally different approaches in China versus the EU,” he said.

At some point, China and Europe will dominate how AI is controlled, creating “fundamentally different” pillars on which the technology will be built, he added.

“I think what we’re going to see is techniques, approaches and styles are going to start to diverge,” Siegel said.

Sheehan disagreed that the world’s AI landscape would become divided as a result of these differing approaches.

“Companies are getting a lot better at adapting their products to work in different markets,” he said.

The bigger risk, he added, is that researchers will end up siloed in different jurisdictions.

AI research and development crosses borders, and all researchers have a lot to learn from each other, Sheehan said.

“If the two ecosystems cut ties between technologists, if we ban communication and dialogue from a technical perspective, then I would say that poses a much bigger threat: having two different universes of AI that could end up being quite dangerous in the way they interact with each other.”
