Creative AI Comes With Some Familiar Problems

A tense scene in the 2004 movie I, Robot shows the character played by Will Smith arguing with an android about humanity’s creative prowess. “Can a robot write a symphony?” he asks, rhetorically. “Can a robot turn a canvas into a beautiful masterpiece?”

“Can you?” the robot answers.

Machines wouldn’t need the snarky reply in our current reality. The answer would simply be “yes.”

In the last few years, artificial-intelligence systems have shifted from being able to process content (recognizing faces or reading and transcribing text) to creating digital paintings or writing essays. The digital artist Beeple was shocked in August when several Twitter users generated their own versions of one of his paintings with AI-powered tools. Similar software can create music and even videos. The broad term describing all this is “generative AI,” and as this latest lurch into our digital future becomes part of our present, some familiar tech industry challenges like copyright and social harm are already reemerging.

We’ll probably look back on 2022 as the year generative AI exploded into mainstream attention, as image-generating systems from OpenAI and the open-source startup Stability AI were released to the public, prompting a flood of fantastical images on social media.(1) The breakthroughs are still coming thick and fast. Last week, researchers at Meta Platforms Inc. announced an AI system that could successfully negotiate with humans and generate dialogue in a strategy game called Diplomacy. Venture capital investment in the field grew to $1.3 billion in deals this year, according to data from the research firm PitchBook, even as it contracted for other areas in tech. (Deal volume grew almost 500% in 2021.)

Companies that sell AI systems for generating text and images will be among the first to make money, says Sonya Huang, a partner at Sequoia Capital who published a “map” of generative AI companies that went viral this month. Gaming, which is already the most popular category in digital consumer spending, will be a particularly lucrative area.

“What if gaming was generated by anything your brain could imagine, and the game just develops as you go?” asks Huang. Most generative AI startups build on a handful of well-known AI models, which they either pay to access or use free of charge. OpenAI, the artificial intelligence research company largely funded by Microsoft Corp., offers access to its image generator DALL-E 2 as well as its automated text writer GPT-3. (The forthcoming iteration of the latter, known as GPT-4, is said by its developers to be freakishly proficient at mimicking human jokes, poetry and other forms of writing.)

But these advancements won’t carry on unfettered, and one of the thorniest problems to resolve is copyright. Typing in “a dragon in the style of Greg Rutkowski” will churn out artwork that looks like it could have come from the named artist, a digital illustrator of fantasy landscapes. Rutkowski receives no compensation for those images, even when they are put to commercial use, something he has complained about publicly.

Popular image generators like DALL-E 2 and Stable Diffusion are shielded by America’s fair use doctrine, which hinges on free expression as a defense for using copyrighted work. But their systems are trained on millions of images, including Rutkowski’s, so arguably they benefit from directly exploiting the original work. Technologists and copyright lawyers are divided on whether artists should be compensated.

In theory, AI firms could eventually copy the licensing model used by music-streaming services, but AI models are typically inscrutable: how would they track usage? One way would be to compensate artists whenever their names appear in a prompt, though it would be up to the AI companies themselves to build that infrastructure and monitor its use. Ratcheting up the pressure is a class-action lawsuit over copyright against Microsoft Corp., GitHub Inc. and OpenAI involving a code-generating tool called Copilot, a case that could set a precedent for the broader generative AI field.

Then there’s content itself. If AI can quickly generate more content than humans ever could, including, inevitably, porn, what happens when some of it is harmful or misleading? Facebook and Twitter have improved their ability to clean up misinformation on their sites in the last two years, but text-generating tools like OpenAI’s could set those efforts back. The issue was recently underscored by a new tool from Facebook parent Meta itself.

Meta unveiled Galactica earlier this month, a language system specializing in science that can write research papers and Wikipedia articles. The company took it down within three days. Early testers found it was generating nonsense that sounded dangerously realistic, including instructions on how to make napalm in a bathtub and Wikipedia entries on the benefits of being white or on bears living in space. The unsettling effect was that facts were mixed with hogwash so thoroughly that it was hard to tell them apart. Political and health-related misinformation is hard enough to track when it’s written by humans. What happens when it’s created by machines that sound increasingly like humans?

That could be the most disastrous problem of all.

More From Bloomberg Opinion:

• Our Future AI Overlords Need a Resistance Movement: Parmy Olson

• AI Can Help Make Cryptocurrency Safer for Everyone: Tyler Cowen

• US Chip Curbs Highlight Cracks in China AI Strategy: Tim Culpan

(1) The transformer model was one technological breakthrough that spurred the rise of generative artificial intelligence. First presented by Google researchers in 2017, it took less time to train than earlier approaches and could support higher-quality AI systems for generating language.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”

More stories like this are available on bloomberg.com/opinion
