March 11, 2025

What responsible AI means for the creative industries


The global sprint to develop artificial intelligence technologies is intensifying, fuelled by substantial investments from both public and private sectors keen to maintain a competitive edge in the AI era.

In the UK, the AI industry is predicted to generate £400 billion by 2030. Yet the regulatory frameworks that govern these advances are often seen as barriers to innovation and investment.

To mitigate the potential risks of AI technologies, businesses and public organisations worldwide are increasingly adopting self-regulation to promote responsible AI practices. The Make it Fair Campaign, launched by the UK’s creative industries on February 25, calls on the UK government to support artists and enforce copyright laws through a responsible AI approach.


This article is part of our State of the Arts series. These articles tackle the challenges of the arts and heritage industry – and celebrate the wins, too.


Responsible AI encompasses a comprehensive framework that addresses various factors, from technical challenges to ethical considerations. As companies develop and incorporate AI technologies, the dialogue must extend beyond algorithms and data integrity to include a thoughtful examination of their social and economic impact.

Initiatives aimed at enhancing transparency and accountability are essential for rebuilding public trust, fostering a collaborative relationship between humans and AI, and paving the way for innovations that are not only effective but welcomed by society.

The need for responsible AI approaches is becoming increasingly urgent as artists grapple with serious concerns regarding copyright infringement and job security. In the UK, the creative industries were worth £126 billion and employed 2.4 million people in 2022.

Opportunities and risks

AI has already transformed nearly every sector, and the creative industries are no exception. Generative AI promises diverse opportunities, from enriching creative processes to delivering personalised audience experiences alongside improvements in efficiency and cost-effectiveness.

As these technologies continue to evolve, providing creators with greater control and improved quality over generated outputs, they are set to become invaluable tools for visual artists, writers, musicians and producers around the world. However, these opportunities come with substantial risks, particularly concerning intellectual property rights and the potential reshaping of the workforce.

Generative AI systems draw heavily on human creations; without artists’ original contributions, these technologies would be unable to generate new content. Unfortunately, the lack of transparency and regulation for generative AI systems creates an unprecedented environment in which copyrighted works are being used to train AI models without compensation or explicit consent.

The same systems that are undermining creators’ intellectual property are also diminishing their job opportunities – as generative AI platforms streamline processes and enhance productivity, they also risk eliminating jobs within the creative industries.

And as AI-generated outputs proliferate, they may eventually outnumber original works in training models, potentially leading to a cultural landscape dominated by a bland, uniform AI aesthetic.

Balancing AI and copyright

In January 2025, the UK released the AI Opportunities Action Plan, outlining the government’s strategy for developing AI.

While the UK has yet to establish specific legislation on AI safety and development, comparable to the EU’s 2024 AI Act, the plan advocates for a pro-innovation regulatory framework, which may give AI tech companies a competitive advantage over those operating under more stringent regulations.

Regarding copyright issues, the UK action plan highlights that the current uncertainty surrounding intellectual property protection is hindering AI innovation and ambitions. It references the EU AI Act as a potential model that encourages AI innovation while ensuring copyright holders maintain control over their content.

However, despite being the most ambitious regulation to date – providing clear expectations and guidelines for AI use in the EU – the act falls short of addressing growing concerns about copyright infringement.

The act states that any use of copyrighted material requires authorisation from the copyright holder unless regulated exceptions apply. One significant exception is found in the EU Directive 2019/790, which allows the use of copyrighted works for text and data mining purposes.

Although copyright holders can opt out of this use or reserve their right to be remunerated through a licensing agreement, exercising this option puts the burden on artists, who might not be aware of the clause or that their creations are being used for AI training models.

This makes it nearly impossible for creators to track the theft of their intellectual property. Even when they identify an infringement, the cost of suing an AI company remains out of reach for most artists.

In the recent consultation on AI and copyright launched by the UK government, artists and cultural organisations were invited to share their views on its proposed approach.

Although the results of this survey – which closed on February 25 – are yet to be published, ministers appear ready to offer significant concessions on the initial proposals. Following weeks of mounting protests by UK artists, officials are reportedly discussing a range of changes that might exempt certain sectors from the opt-out system and give preferential access to British AI companies.

In a call to action from UK unions, the TUC has demanded that legislation guarantees transparency measures to identify the presence of copyrighted works in training data, enabling artists to exercise their rights regarding their use.

However, copyright challenges don’t stop at national borders. The International AI Safety Report, released after the AI Action Summit in Paris last month, sheds light on this complex issue. Countries have different rules governing online data collection and intellectual property protection, making the global landscape difficult to navigate.

Adding to the difficulty, AI companies have only limited tools for sourcing and filtering training data according to licences, complicating their ability to verify usage at scale. As a result, many developers are becoming hesitant to share details about the content they use.

Meanwhile, website owners are tightening restrictions on data crawling, effectively blocking content extraction altogether, which in turn might hinder legitimate AI research efforts.

As states navigate the fine line between promoting innovation and safeguarding rights, the conversation around AI and copyright is set to evolve. One thing is certain: the creative industries cannot flourish without the original input of creators.

