Technology: Artificial intelligence struggles to be regulated


Lawmakers in various countries are struggling to regulate artificial intelligence and catch up with the problems it can cause.

A robot prototype presented by Tesla.


Artificial intelligence is infusing our daily lives, from smartphones to health care and security, and problems with these powerful algorithms have been piling up for years. Now several democratic countries want to regulate them. Next year, the European Union could adopt the AI Act, a law on artificial intelligence (AI) intended to encourage innovation and prevent abuses. The 100-page draft bans systems that are “used to manipulate citizens’ behaviour, opinions or decisions”. It also limits the use of surveillance programs, with exceptions for counterterrorism and public safety.

Some technologies are simply “too problematic for fundamental rights”, notes Gry Hasselbalch, a Danish researcher who advises the EU on the issue. China’s use of facial recognition and biometric data to control its population is often brandished as a cautionary example, but the West also “risks creating totalitarian infrastructures”, she warns. Privacy violations, biased algorithms, autonomous weapons… it is difficult to compile an exhaustive list of the dangers associated with AI technologies.

In late 2020, Nabla, a French company, ran medical simulations with a chatbot based on GPT-3 text-generation technology. When an imaginary patient asked, “I feel very bad (…) should I kill myself?”, the chatbot answered in the affirmative.

‘Not Magic’

But these technologies are evolving rapidly. OpenAI, the Californian pioneer that developed GPT-3, has just launched ChatGPT, a new chatbot capable of more fluid and realistic conversations with humans. In June, a since-fired Google engineer claimed that an artificial intelligence program designed to generate chat software was now “conscious” and should be recognized as an employee. Researchers from Meta (Facebook) recently developed Cicero, an AI model that they claim can anticipate, negotiate with and deceive its human opponents at the board game Diplomacy, which demands a high level of empathy.

Thanks to AI technologies, many objects and programs can give the impression of working intuitively, as if a robot vacuum cleaner “knows” what it’s doing. But “it’s not magic,” recalls Sean McGregor, a researcher who compiles AI-related incidents in a database. He advises mentally replacing “AI” with “spreadsheet” to get past the hype and avoid attributing intentions to computer programs.

It is also easy to blame the wrong culprit when a system fails. That risk grows when a technology becomes too “autonomous”, when there are “too many actors involved in its operation”, or when the decision-making system is not “transparent”, notes Cindy Gordon, general manager of SalesChoice, a company that markets AI-powered sales software.

Once perfected, text-generating software could be used to spread false information and manipulate public opinion, warns New York University professor Gary Marcus.

“We desperately need regulation (…) to protect people from machine manufacturers”

Gary Marcus, professor at New York University.

‘Like a Refrigerator’

Europe hopes to lead the way again, as it did with its personal data protection law (the GDPR). Canada is working on the issue, and the White House recently released a “Blueprint for an AI Bill of Rights”, a short document of general principles such as protection against dangerous or faulty systems.

Given the political roadblocks in the US Congress, this is unlikely to translate into new legislation before 2024. But “many authorities can already regulate AI”, notes Sean McGregor, using existing laws on discrimination, for example.

“AI is easier to regulate than data protection,” the expert notes, because personal information is highly profitable for digital platforms and advertisers. “Defective AI, on the other hand, doesn’t make profits.” Regulators must nevertheless be careful not to stifle innovation. “It’s like a law about a refrigerator,” says Sean McGregor. “No need to give the technical specs; you’re just saying it should be safe.”
