
Europe and the U.S. Will Probably Regulate A.I. Differently. That Will Have Long-Term Consequences for the Global Art Market


Every week, Artnet News brings you The Gray Market. The column decodes important stories from the previous week—and offers unparalleled insight into the inner workings of the art industry in the process.

This week, caught in the middle…

 

An Ocean Between Us

Despite sharp differences of opinion, opponents in the debate about how artificial intelligence might reshape the making and marketing of artwork in the years ahead typically share a core assumption: that people using the technology will be governed by basically the same rules no matter where they are. However, this core assumption is coming untethered from reality due to the starkly contrasting actions taken by U.S. and E.U. regulators this summer. It’s only by taking stock of this divergence that artists, institutions, and other cultural stakeholders can begin to grasp how messy, regionally contingent, and beyond their control A.I.’s effects on art are likely to be.

The allegedly big stateside news about A.I. regulation arrived last Friday. In a meeting with President Joe Biden, executives from seven companies at the vanguard of A.I. development (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) formally agreed to self-police their algorithms using shared guidelines. The tentpole commitments include subjecting their A.I. products to rigorous safety checks before releasing them to the public; inviting third-party experts to investigate their A.I. products for weaknesses exploitable by black-hat hackers; and embedding watermarks into all content generated by their A.I. products so that the public clearly understands its origins. 

Optimists might say that these American tech giants are wise to get ahead of Congress, which has been ramping up its interest in using the law to rein in A.I. Sue Halpern of the New Yorker noted that three different bipartisan bills targeting three different risks of the tech were introduced in the House of Representatives in June. The first would require government agencies to disclose to users any time A.I. is being used in their communications, as well as to create an appeals process for A.I.-influenced decisions. The second would punish social media platforms for disseminating toxic content produced with A.I. tools. The third would create a bipartisan commission to lead the charge on further regulation of generative A.I. 

Senate majority leader Chuck Schumer has also proposed convening a series of expert panels to give him and his colleagues what Halpern calls “a crash course” in artificial intelligence so that they can proceed intelligently on tech policy for a change. (The U.S.’s top legislators did not exactly swaddle themselves in glory during, say, the Facebook hearings in 2018, or the TikTok hearings earlier this year.)

Members of the European Parliament vote on the Artificial Intelligence Act during a plenary session in Strasbourg, France, on June 14, 2023. Photo by Frederick Florin/AFP via Getty Images.

But in the race to prevent artificial intelligence from running amok within its home nation, Congress is being lapped by its counterpart in the E.U. Also in June, the European Parliament approved a draft version of the A.I. Act, the latest checkpoint on a more than two-year journey to establish a rugged, far-reaching set of rules to guard against the technology’s scariest possibilities. Unless it is defanged during this final stage—an outcome that seems all but unthinkable—the law is poised to require A.I. developers to publish summaries of the copyrighted material used to train their algorithms; enact a near-total ban on the use of A.I. in facial-recognition systems; and mandate “risk assessments before putting the tech into everyday use, akin to the drug approval process,” according to the New York Times. While there are several more months of negotiating ahead, the final law could be passed before the end of the year.

How seismic would the impact of a robust A.I. Act be? In May, Sam Altman, the cofounder and chief executive of DALL-E and ChatGPT maker OpenAI, declared that his company would “cease operating” in the European Union if it “can’t comply” with the bloc’s forthcoming laws. 

That might sound curious given that only a few days earlier Altman urged U.S. lawmakers to regulate the development and use of A.I. during a Senate subcommittee hearing, where he warned (in a quote I have seen reprinted in nearly every article I have read on this subject for two months), “If this technology goes wrong, it can go quite wrong.” But it makes perfect sense given American lawmakers’ history of essentially allowing Silicon Valley to write its own rules. The past few decades have produced a pathetically small number of elected officials willing to risk being accused of stifling innovation stateside. For Altman, then, the odds are favorable that American A.I. regulation will still be hugely deferential to him and other tech execs.

The fanfare around OpenAI and the six other companies’ commitment to self-policing reinforces why he should be confident. Halpern’s New Yorker piece does a strong job of teasing out the many soft spots in their joint pledge. There are no actual penalties waiting for companies that don’t live up to their promises; no guidelines for who the independent experts doing the proposed vulnerability checks will be, let alone how they will be chosen; and not even any uniform definitions for critical terms governing the clauses of the agreement (think: “safety,” “independent,” or “watermark”).

In fact, the companies’ shared vow to ensure their A.I. products are safe and secure before their public release borders on comedy given that the septet was invited to the White House in the first place because they had already publicly released several major A.I. products without doing any of that. More importantly, they are clawing back exactly none of those products to run them through the safety and security gauntlet now that they’re on the market. Meta, Halpern notes, even made an open-source version of its large language model (known as Llama 2) available at no cost for both research and commercial use—a decision that one computer scientist said was “a bit like giving people a template to build a nuclear bomb.”

High-resolution images generated using Meta’s CM3leon A.I. image generator. Courtesy of Meta.

Degrees of Difficulty

While Altman later downplayed his comments about the prospect of pulling OpenAI out of Europe, according to the Financial Times, the fact that he felt compelled to make them in the first place indicates how much more daunting the Artificial Intelligence Act is to A.I. entrepreneurs than any potential American regulations. His reaction also offers a launch point into the friction that could be awaiting an art world largely expecting universal rules to govern the technology’s use going forward.

If the E.U.’s final legislation stays true to its current form, it is plausible that DALL-E, ChatGPT, and other leading A.I. tools either won’t be available at all to artists and art professionals in the bloc, or else will only be available (legally, anyway) in versions with severe limitations relative to their full-fledged counterparts in the U.S. In other words, differing regulations could create a technological gulf between the near-future U.S. and E.U. art industries no less severe than the free-speech gulf between the present-day U.S. and Chinese art industries. And a potential disconnect between the U.S. and E.U. is only one aspect of the larger problem.

The fracturing may only worsen as other countries hammer out their own A.I. guidelines informed by their own national or regional priorities. For example, China’s law, which is slated to go into effect in August, will require all generative A.I. platforms available to its citizens to adhere to the state’s aggressive censorship policies. Other up-and-coming art markets with socially conservative streaks, like South Korea and Singapore, could have a lesser but non-negligible impact if their legislators choose not to mirror either the E.U. or U.S. legal frameworks for the technology.

Of course, these sobering possibilities for global culture hinge on artificial intelligence quickly becoming as central to creativity, business, and life as its strongest backers and most alarmist critics believe it will. I have some doubts about that outcome, as I’ve written before, partly because the E.U. started pursuing serious, thoughtful legislation years before ChatGPT et al. achieved escape velocity among a broad public. (Ironically, when the bloc’s legislators began the process, the tech obsession of the day was still NFTs.)

It’s still plausible that regional restrictions on generative A.I. tools end up as nothing more than a modest inconvenience. After all, it’s not as if China’s blockade of American-developed social media platforms has done much to hinder the Chinese art economy; some would even argue that its business practices are more technologically advanced than the West’s thanks to the 360-degree capabilities of WeChat. To use another example, GDPR created some headaches and added costs for art businesses that wanted to keep communicating with an E.U. audience after the privacy law’s implementation in May 2018, but five years later, my sense is that it’s as distant a memory for the art trade as concerns about Y2K. 

Kevin Abosch, NEVER FEAR ART (2021). Courtesy of the artist and Global Crypto Art DAO.

Lurking in the shadows of this discussion, as well, is the fact that the art establishment has its own priorities, and even in recent history, dancing along the bleeding edge of technology has tended not to be one of them. Paintings, drawings, and sculptures still make up the overwhelming majority of the art exhibited and sold around the globe every year. Sure, some of those works have some kind of digitally informed layer to them, but there’s little evidence to suggest that the trade will be kneecapped if artists around the world can’t all use DALL-E to generate images from text prompts, or if galleries and institutions across continents can’t all use ChatGPT to streamline the production of press releases or other marketing materials.

More importantly, stakeholders still primarily make their decisions about what to show, buy, and sell based on in-person meetings, phone calls, emails, and basic e-commerce—methods of consensus-building that have been around for somewhere between roughly 20 years and 2.4 million years. We are still ultimately social animals seeking thrills and opportunities. So yes, regionally specific regulations of A.I. may complicate the art business in the years ahead. But where there’s a will, there’s another way.

[The New Yorker, New York Times, Financial Times]

 

That’s all for this week. ‘Til next time, remember: our differences really are smaller than our similarities, especially when we’re all just fodder for the algorithms anyway.
