
Europe’s Audacious Attempt to Regulate AI


[Photo: Icons of Google’s AI app Bard and other AI apps on a smartphone screen in Oslo, July 12, 2023. Olivier Morin / AFP via Getty Images]

WILL BIG TECH COMPANIES accept independent audits—not just of their financials, but of everything they produce and release? That’s the question on the table with the European Union’s proposed Artificial Intelligence Act, which was passed by the EU Parliament in June and now goes to the member countries for ratification. If it becomes law in its current form, which seems likely at present, it will reset the debate over how to govern tech companies. It might ultimately split the world into three different tech hegemonies: the rule of the state in China and its allies, the rule of law in Europe, and the rule of voluntary self-regulation by U.S. tech companies, with each approach trying to outcompete the others.

Two things happened over the past twelve months to bring the debate over responsible artificial intelligence to a boiling point. One was the release of OpenAI’s GPT large language models and the flood of generative AI software that followed: ChatGPT, DALL-E 2, Bing Chat, Bard, and many others. ChatGPT alone went from a handful of users to 100 million in two months, making it the second-fastest-growing product in history, surpassed only by Meta’s Threads.

This growth is taking place even though it’s still not fully clear what generative AI does, or what risks come with it. Broadly speaking, machine learning-based tools can seemingly do almost anything knowledge workers do—except explain themselves, document sources, and exercise human judgment. Certainly, this will lead to immense benefits.

At the same time, the apps are so powerful and evolving so unpredictably that even the leaders of the companies behind them are asking to be regulated. At a Senate hearing in May, OpenAI CEO Sam Altman was asked how he would regulate AI.

“I would form a new agency,” he said, presumably meaning a government agency, “that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards.”

Last week, representatives from OpenAI—along with six other major generative AI players, namely Amazon, Anthropic, Google, Inflection, Meta, and Microsoft—met with President Joe Biden at the White House and made a formal commitment to co-creating and following a new set of safety standards. So far, they are opting to regulate themselves.

You might think that would be the final word, except for the second thing that happened this year: the AI Act. It’s an ambitious effort to impose government control over AI. In that spirit, chances are that no one in the industry will like the AI Act—but everyone may need it.

The act breaks down all AI-related activity into four categories of risk. Minimal-risk applications, which by their nature don’t harm people, would not need government oversight; these would include spam filters and many video games. Limited-risk applications are considered risky only insofar as they might pass themselves off as human: a chatbot therapist, for example, would not be approved if it failed to disclose its nonhuman nature.

Unacceptable risks would be prohibited entirely. In the current draft, these would include the use of AI in real-time biometric identification systems (recognizing people through their faces, retinas, DNA, gait, fingerprints, and so on); in subliminal techniques intended to distort people’s behavior; in applications that exploit vulnerabilities, such as toys that lead children into dangerous behavior; and in social-scoring systems, especially those that favor some populations over others. All unacceptable applications would be banned—sometimes over the fierce objections of groups that want to use them. For instance, many local law enforcement agencies want to deploy facial recognition systems.

Most applications lie in the remaining group, high risk. These are valuable tools that could lead to breakthroughs, but that also could do great harm. They include educational platforms that profile students’ capabilities and tutor them accordingly; self-driving vehicles; research tools that find patterns and test hypotheses in science and medicine; AI-based human-resources systems used in recruitment and performance appraisals; and many uses of generative AI. Some of these systems already have track records of abuse, having been used to target vulnerable populations like immigrants and minorities, or to deceive (e.g., in creating deepfakes). Now, some are also being implicated—most famously in the class action suit in which comedian Sarah Silverman is a plaintiff—in controversies over the alleged misuse of copyrighted material. The data fed into such systems tend to reflect unconscious biases and habits, and the AI systems tend to act on those data immediately, without oversight, like autocorrects run amok, just at a much larger scale.
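To make the four-tier scheme easier to see at a glance, here is a minimal sketch, in Python, of how a compliance team might tag an inventory of AI systems against the draft’s categories. The tier assignments simply restate the examples above; the RiskTier enum, the sample inventory, and the one-line summaries of obligations are illustrative assumptions loosely paraphrasing this article, not text from the act.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g., spam filters, many video games
    LIMITED = "limited"            # e.g., chatbots that must disclose they are not human
    HIGH = "high"                  # e.g., hiring tools, tutoring platforms, self-driving cars
    UNACCEPTABLE = "unacceptable"  # e.g., real-time biometric ID, social scoring

# Rough, illustrative summaries of what each tier would demand under the draft.
OBLIGATIONS = {
    RiskTier.MINIMAL: "no new obligations",
    RiskTier.LIMITED: "transparency: users must know they are dealing with an AI",
    RiskTier.HIGH: "independent audits and evidence the system is benign in intent and outcome",
    RiskTier.UNACCEPTABLE: "prohibited outright",
}

# A hypothetical inventory, tagged with the tiers suggested by the examples above.
INVENTORY = {
    "email spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "resume-screening model": RiskTier.HIGH,
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
}

for system, tier in INVENTORY.items():
    print(f"{system}: {tier.value} -> {OBLIGATIONS[tier]}")
```

The high-risk tier, with its audit requirements, is where the data problems just described become most consequential.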

Consider, for instance, a bank that uses an app to help its staff minimize the time spent on data entry—a seemingly benign use. The app could generate talking points for bank personnel to use with each customer who applies for a loan. Unknown to the customer, to the bank personnel, or even to the people who developed the app, the data could reflect longstanding biases against granting loans to some groups of people. Even if the bank’s policies and practices had shifted, the historical data might not reflect those changes, and could perpetuate the old biases. There would be no way to tell without an audit—not just of the software, but of the bank’s policies, its reasons for choosing that app, and the logic underlying the data collection.
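A toy simulation illustrates how that happens. The sketch below uses only NumPy and entirely synthetic data: an invented income score, an invented group flag, and historical approval records in which one group was approved far less often. A plain logistic-regression scorer fitted to those records reproduces the gap for new applicants even though nothing in the code says to discriminate. None of this reflects any real bank’s system; it only demonstrates the mechanism the paragraph above describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" loan records: a standardized income score (a legitimate
# signal) and a group flag. In the history, group 1 applicants were approved far
# less often than group 0 applicants with the same income.
income = rng.normal(0, 1, n)                # standardized income score
group = rng.integers(0, 2, n)               # 0 or 1, the sensitive attribute
hist_logit = 1.2 * income - 2.0 * group     # the old bias, baked into past outcomes
approved = (rng.random(n) < 1 / (1 + np.exp(-hist_logit))).astype(float)

# Fit a plain logistic regression to the historical decisions
# (batch gradient descent on the mean cross-entropy loss).
X = np.column_stack([np.ones(n), income, group])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - approved) / n

# Score a fresh cohort that is statistically identical across groups except for the flag.
new_income = rng.normal(0, 1, 2000)
for g in (0, 1):
    Xg = np.column_stack([np.ones(2000), new_income, np.full(2000, g)])
    approval_rate = (1 / (1 + np.exp(-Xg @ w)) > 0.5).mean()
    print(f"group {g}: recommended approval rate {approval_rate:.0%}")

# The model recommends approving group 1 applicants far less often, purely because
# the historical records did: the kind of pattern only an audit of the data and its
# collection would surface.
```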

In the current draft, the EU would mandate those audits, typically performed by a third party acting much as financial auditing firms do now, with “independence” rules prohibiting the auditors from advising the clients they audit in any other capacity. When comparable independence rules arrived with the Sarbanes-Oxley financial regulation, it took the Big Four accounting firms, which are used to bureaucracy, years to adjust. In many tech companies, accustomed to fail-fast blitzscaling, the cultural and operational shock would be intense. Moreover, there may not be enough auditors with the right combination of technological and forensic acumen to handle the sudden demand.


Are the risks serious enough to merit oversight? Consider a comment from Ryan Carrier, founder of the nonprofit ForHumanity, which supports and coordinates independent audits of AI systems and is developing guidelines for auditors:

Some risks sound like science fiction scenarios, but they’re all plausible within a few years. With my DNA, you could clone me. Or design a targeted assassination weapon that would only harm me. You could prey on me, based on my psychological or physical profile, or my word choices and emotional responses.

Even risks like these are manageable. But as Juliette Powell and I found again and again in our research for our book The AI Dilemma, the risks can be managed only when people—and governments—get into the habit of thinking about them from the perspective of those affected by the AI systems.

In one illustrative case, the Dutch childcare benefits scandal, a predictive analytics program used by the tax service falsely flagged about 26,000 families, mostly immigrants, as welfare fraudsters. Many were forced to repay benefits they had already received, often losing their homes and livelihoods. Some were jailed, along with some of their accountants, and a number of families lost custody of their children. The practices went on for six years before they came fully to light and the Dutch government, led by Prime Minister Mark Rutte, resigned. The families had done nothing wrong; they were merely lumped by the algorithms into a statistical box. Many of them still haven’t been made whole. (Rutte was re-elected, and recently was forced to resign again, this time after an internal fight over immigration policy.)

When the tech companies misuse data, the effects may not be as dramatic, but they can be much more pervasive. “The exploitation is so subtle,” says Carrier:

It’s your privacy. It’s your data. It’s things that you didn’t necessarily put a value on, but are creating all sorts of decisions that may not be aligned to your desires. And in the end, Big Tech companies are fighting so hard to maintain their shareholder wealth. Instead of doing the right thing for their customers.

A few companies, like Alphabet and Apple, have instituted internal practices to review risks and confront biases, and thus demonstrate that they can regulate themselves. However, when Microsoft expanded its partnership with OpenAI in early 2023 and released an AI-powered search product before Google could, the pace accelerated and several companies relaxed their rigor. Google had held back its chatbot Bard, built on its own large language model LaMDA; then, chasing Microsoft and fearful of falling behind, it rushed out a release in March, reportedly over the ethics objections of some employees. Microsoft, meanwhile, laid off its ethics and society team, which it had set up to consider the possible harmful effects of product releases.

Under the AI Act, the burden of proof would fall back on these Big Tech companies, and on smaller innovators as well. They would have to show that their applications are benign—in intent and in outcomes—before doing business in Europe. Penalties for noncompliance would include fines of up to 7 percent of a company’s annual global revenues. Fail-fast innovation would be limited to designated “sandboxes,” a virtual equivalent of enterprise zones, where the rules are relaxed a bit and companies can get permission to do the kind of transgressive work that truly competitive AI innovation requires.

The EU’s motives here are several. The leaders of this legislative effort take some pride in being the first to create laws with teeth, and in standing up to the tech platforms. “We are on the verge of building a real landmark legislation for the digital landscape,” said the legislation’s co-rapporteur (cosponsor) Brando Benifei, a member of the European Parliament representing Italy. “Not only for Europe but also for the entire world.”

EU legislators also fear, reasonably, that fringe or foreign actors could use generative AI tools to influence and undermine free elections, with a speed and scale that could dwarf the election interference of the past. This time, mainstream politicians don’t want to be caught off guard. There’s also a wish to protect citizens’ privacy and to build a different kind of AI company in Europe—one that has responsibility baked into its charter. Dragoș Tudorache of Romania, the act’s other co-rapporteur, said that most of the high-minded suggestions made by critics seeking social justice are already built into the law, including “an industry-led process for defining standards, governance with industry at the table, and a light regulatory regime that asks for transparency.”

That doesn’t mean business will go along with it. The Financial Times reported that more than 150 business leaders have already written an open letter to the European Parliament, arguing that the AI Act will force AI developers to leave Europe or shut down. They also say the law might force them to give up competitive advantage, to share private personal data with government, and to spend millions complying (thus penalizing startups).

Sam Altman’s own statements demonstrate how hard it will be to live with a law that constrains harmful activity while still leaving room for freewheeling innovation. In his Senate testimony, he said he favored independent audits: “not just from the company or the agency, but experts who can say the model is or isn’t in compliance with these stated safety thresholds.” His view of an ideal regulatory regime sounded a lot like the AI Act. The following week, however, he told a group of businesspeople in London, “We will try to comply [with the AI Act], but if we can’t comply we will cease operating [in Europe].” Two days later, he backpedaled, saying that the OpenAI team is “excited to continue to operate here and of course have no plans to leave.”

One key point of contention is the act’s universality. It regulates AI used anywhere in the world, as long as its data set includes European citizens. Sell video face-recognition systems to an authoritarian regime—or an American police department, for that matter—and you might no longer be able to do business in Europe, even with your other products.

In addition, the law requires unprecedented transparency and accountability. Many companies would have to disclose their methods and the pools of training data they use, even if those constitute trade secrets. This is further complicated by the fact that most large language models and generative AI systems don’t record their own provenance: they can’t tell you which sources they drew on in their pattern recognition, because they don’t track them. Rebuilding them to gain that capability would be hugely expensive and might not be technically feasible, especially given the amount of computing power required. When a creator like Sarah Silverman accuses a tech platform of appropriating her work, she may be correct; or the tech firm may, as she suggests, have stumbled on a pirated version; or the passage in question might have been written in imitation by a Silverman admirer, or might mimic Silverman through sheer coincidence. The software can’t tell the difference.
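A deliberately toy sketch shows why. The snippet below is not any real training pipeline; the file names and the preprocessing are invented. It only illustrates the standard pattern: documents get tokenized into one undifferentiated stream, and the record of which token came from which source, the very thing an attribution query would need, is discarded before training ever begins.

```python
# Toy illustration only: three made-up "documents" stand in for a web-scale corpus.
documents = {
    "blog_post_0417.txt": "The quick brown fox jumps over the lazy dog ...",
    "licensed_corpus/novel.txt": "It was the best of times, it was the worst of times ...",
    "scraped/unknown_mirror.txt": "Call me Ishmael. Some years ago ...",
}

# Typical preprocessing: everything is flattened into one token stream.
# Note that the source name is dropped at this step.
token_stream = []
for text in documents.values():
    token_stream.extend(text.lower().split())

# By the time a model trains on `token_stream`, nothing records which token
# came from which document, let alone its license or how it was obtained.

# Keeping that record would mean carrying a parallel provenance index like this
# one: trivial for three files, enormously costly for a web-sized corpus, and
# useless for models that were already trained without it.
provenance = []
for source, text in documents.items():
    provenance.extend([source] * len(text.lower().split()))

assert len(provenance) == len(token_stream)
print(f"{len(token_stream)} tokens, provenance kept for {len(set(provenance))} sources")
```

Even this per-token bookkeeping would not by itself say whether a given output was copied, imitated, or coincidental, which is exactly the ambiguity the Silverman dispute turns on.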

If the AI Act is perceived as successful, then other governments, including that of the United States, are likely to follow. This could be a wonderful development, except that having the government in charge presents its own set of concerns. Already, some observers are questioning the technical acumen of the AI Act’s designers—for instance, in the way they defined the risk categories. “These lists are not justified by externally reviewable criteria,” says legal scholar Lilian Edwards, a professor of law, innovation, and society at Newcastle University. “If it is uncertain why certain systems are on the . . . ‘high-risk’ lists now, it will be difficult-to-impossible to argue that new systems should be added in the future.”

Like the Roman satirist Juvenal, complaining about his inability to oversee the guards protecting his home, we may all come to wonder: Who watches the watchrobots? In a world in which three approaches to regulation coexist—the Europeans with rule of law, the Americans with tech companies dominant, and the Chinese following their own authority—no one has yet earned our trust. The authoritarians never had it. Big Tech has lost it. Now it will be up to governments, starting in Europe, to see if they can do better.
