
Art meets invasion: Inside the face-off with AI

By K Narayanan
Jun 21, 2024 05:24 PM IST

This is a battle that began in the ’90s, when programs first began replicating an artist’s style to create new music. K Narayanan writes on the evolving tussle.

The first piece of AI-generated music is now 30 years old.

(HT illustration: Rahul Krishnan)

Work on it began a bit before that point, in the 1980s, when a prolific musician named David Cope was tasked with creating the music for an opera.

At that time, Cope was suffering from a serious case of composer’s block. So, as the deadline approached, instead of working on the opera, he started work on a new project. Cope had studied computing and artificial intelligence and began trying to create a program that would take his earlier compositions as input, and generate music in his style as the output.

He called the program Experiments in Musical Intelligence or EMI (no relation to the music label, which was already decades old at this time), and used it to get over his block.

He then used it to do something entirely different: create an entire album of machine-generated music, called Bach by Design.

Bach by Design was generated by EMI using a number of compositions by the German composer as input. The album, released in 1994, was largely panned. But it did catch the eye of Douglas Hofstadter, professor of cognitive science and comparative literature at Indiana University (and author of that staple of engineering college hostel rooms of the 1980s, the book on symmetry, self-reference and human cognition titled Gödel, Escher, Bach: An Eternal Golden Braid).

Hofstadter was intrigued by Cope’s experiment, and decided to take it a step further.

He created a sort of Turing test for music, inviting Steven Larson, a teacher of music theory at the University of Oregon, to compose a piece of music in the style of Bach. Larson's wife, Winifred Kerner, then played three pieces of music to an audience: her husband's composition, a lesser-known piece by Bach, and a piece from Bach by Design. The audience was asked to identify which was which.

By a large majority, the listeners decided that EMI’s composition was the genuine Bach. They thought Larson’s piece sounded most like something generated by a computer.

This was in 1997.

***

Since then, the questions around art created by algorithm have only become murkier.

At the core are two issues. The first is the economic impact of AI creativity, which acts, to use words beloved of LinkedIn posters, as a disruptor in the arts industry, causing a paradigm shift.

Generative AI threatens the existence of graphic designers and stock-image services; the careers of musicians and film extras, and of writers of books, movies and TV.

Artist groups have already begun to protest. Most recently, in March, Artist Rights Alliance (ARA) issued an open letter signed by more than 200 musicians, including Pearl Jam, Stevie Wonder, Elvis Costello, Nicki Minaj, Billie Eilish and the estate of Frank Sinatra. In it, the artists urge AI developers, technology companies, platforms and digital music services to refrain from employing AI in ways that undermine and diminish the rights of human creators. A tweet from ARA emphasised that AI represents an “existential threat” to their craft.

Since the emergence of mainstream AI image generators in 2022, visual artists have voiced their opposition. As the technology expanded into creative fields such as writing, acting and filmmaking, professionals from these domains have organised protests too (perhaps most notably the strike led by the Hollywood performers' union SAG-AFTRA).

It is a hard spot to be in. Technically, while an individual work of art can be copyrighted, styles cannot. To prove that an AI image generator has copied their work, an artist would need to show that their artwork was fed into the system. This is hard to do, because AI companies rarely disclose their training data and the models do not retain their inputs in any easily traceable form.

Meanwhile, in a flip side to the copyright issue, most AI models need massive amounts of material fed to them as training data. And while AI companies use large amounts of copyright-free content, from Bach to the Bible, there have been accusations that they have not always been so scrupulous.

Earlier this month, for instance, Adobe, the creator of popular graphic-design tools, updated its terms of use to give itself sweeping permission to access and use user-generated content and train AI on that content. The terms and conditions were displayed on an unskippable pop-up window, which meant that users had to accept them if they wanted to use software they had already paid for, including the popular image, illustration and video-editing tools Photoshop, Illustrator and After Effects.

The reaction was swift and severe, as people took to online platforms to register their protest and announce that they would be discontinuing their subscriptions. Adobe has since said it will modify the phrasing of its terms of use “to better serve its customers”. But the truth is, we can expect to see more of this harvesting, as AI moves into the mainstream and competition between technology giants intensifies.

As far back as 2022, the popular platform DeviantArt came under heavy criticism over its partnership with Stability AI: it introduced an image-generation tool named DreamUp and, by default, made every piece of art posted on the platform available for AI training. In January 2023, the website ArtStation faced similar criticism when it appeared that art posted on the site had been used in a dataset of 5.85 billion images and text captions, created to help train AI imaging and text models.

The New York Times is currently suing OpenAI for allegedly using its content, without permission, to train ChatGPT. When a user asks about the latest news, the suit states (among other complaints), the program sometimes quotes verbatim from NYT articles that cannot otherwise be accessed without a subscription.

***

The spookier side of real-world misuse, meanwhile, is already upon us.

OpenAI has paused Sky, one of the voices of its virtual assistant, after actress Scarlett Johansson threatened to sue the company for creating a voice that sounded just like hers when she had refused to lend her own to the project.

Meanwhile, scammers are replicating voices, then calling family members of the people whose voices they have replicated, faking an emergency or accident, and pleading for help (usually in the form of a quick money transfer).

Faked explicit imagery of musician Taylor Swift went viral on X in January. Child pornography is being generated in this manner too.

Countries are conscious of the urgent need for effective legislation. China now requires disclosure of deepfake technology use in media, and distribution of deepfakes has to be accompanied by a clear disclaimer that the content has been artificially generated.

The European Union (EU) has put forward legislation mandating that social media companies eliminate deepfakes and other false information from their platforms (though it is not clear how they would achieve this, even assuming that they wanted to).

The EU’s Code of Practice on Disinformation currently deals with deepfakes by imposing fines of up to 6% of global revenue on those who breach its norms. In India, while parts of Section 66 of the IT Act, the Copyright Act of 1957 and the Indian Penal Code may apply to deepfakes, so far there have been only advisories to social media and media organisations regarding deepfakes. Specific legislation has yet to be framed.

Despite all the fear and concern around AI, its large-scale adoption in our lives seems inevitable.

Microsoft and Apple have made AI an integral part of their operating systems. The former has been an early investor in OpenAI, and Apple just signed an agreement to integrate ChatGPT into experiences within iOS, iPadOS, and macOS.

Meanwhile, in Hollywood, struggling actors are selling their likenesses for use by AI for as little as $500. And while NYT may want to slug it out in court with OpenAI, media houses such as Vox and The Atlantic have signed deals allowing the company to train its algorithm on their content, for a fee.

There is always the argument that AI is just a tool; an upgrade not very different from when blood and animal fats gave way to watercolours, and oils to acrylics.

How is an argument against AI any different from Socrates fearing the popularity of the written word, because he believed it promoted forgetfulness? Generative AI is, after all, a sophisticated autocomplete.

Except when it isn’t.
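
For readers curious what "sophisticated autocomplete" means in practice, here is a minimal sketch, not how any real system is built: it counts which word tends to follow which in a tiny sample text, then generates by repeatedly predicting a plausible next word. The sample corpus and the word-pair counting are purely illustrative; actual generative models use neural networks trained on vastly larger collections of text, but the predict-the-next-word loop has the same basic shape.

    import random
    from collections import defaultdict

    # Toy training text; real systems ingest billions of documents.
    corpus = "the quick brown fox jumps over the lazy dog and the quick cat sleeps".split()

    # Record, for each word, which words follow it in the training text.
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    # "Autocomplete": start with a word and keep sampling a plausible next one.
    word = "the"
    output = [word]
    for _ in range(8):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)

    print(" ".join(output))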

After the Bach test, Hofstadter described himself as baffled and troubled, in an interview with The New York Times. “The only comfort I could take at this point comes from realising that EMI doesn’t generate style on its own,” he said. “It depends on mimicking prior composers. But that is still not all that much comfort. To what extent is music composed of ‘riffs’, as jazz people say? If that’s mostly the case, then it would mean that, to my absolute devastation, music is much less than I ever thought it was.”

The technology companies may cheer, but we could be looking at a world where the arts — arguably the most human of endeavours — are subsumed by a set of intangible machines. And now that they have begun their work, it does seem like there will be no stopping it.
