A tense scene in the 2004 movie I, Robot shows the character played by Will Smith arguing with an android about humanity’s creative prowess. “Can a robot write a symphony?” he asks, rhetorically. “Can a robot turn a canvas into a beautiful masterpiece?”
“Can you?” the robot answers.
In our reality, machines would not need the snarky comeback. The answer would simply be "yes."
In the past few years, artificial intelligence (AI) systems have shifted from being able to process content — recognizing faces or reading and transcribing text — to creating digital paintings or writing essays.
The digital artist Beeple was shocked in August when several Twitter users generated their own versions of one of his paintings with AI-powered tools. Similar software can create music and even videos. The broad term describing all this is “generative AI,” and as this latest lurch into our digital future becomes part of our present, some familiar tech industry challenges such as copyright and social harm are already re-emerging.
We will probably look back on this year as when generative AI exploded into mainstream attention, as image-generating systems from OpenAI and the open source start-up Stability AI were released to the public, prompting a flood of fantastical images on social media.
One of the technological milestones that sparked the rise of generative AI was the advent of the transformer model. First proposed in a paper by Google researchers in 2017, the models needed less time to train and could underpin higher quality AI systems for generating language.
The breakthroughs are still coming thick and fast. Last week, researchers at Meta Platforms announced an AI system that could negotiate with humans and generate dialogue in a strategy game called Diplomacy. Venture capital investment in the field grew to US$1.3 billion in deals this year, data from research firm Pitchbook showed, even as it contracted for other areas in tech. (Deal volume grew almost 500 percent last year.)
Companies that sell AI systems for generating text and images would be among the first to make money, said Sonya Huang, a partner at Sequoia Capital, which published a “map” of generative AI companies that went viral this month.
An especially lucrative field would be gaming, already the largest category for consumer digital spending.
“What if gaming was generated by anything your brain could imagine, and the game just develops as you go?” Huang said.
Most generative AI start-ups are building on top of a few popular AI models that they either pay to access or get for free. OpenAI, the artificial intelligence research company co-founded by Elon Musk and mostly funded by Microsoft, sells access to its image generator DALL-E 2 and its automatic text writer GPT-3. (A forthcoming iteration of the latter, known as GPT-4, is said by its developers to be freakishly proficient at mimicking human jokes, poetry and other forms of writing.)
These advancements would not carry on unfettered, and one of the thorniest problems to be resolved is copyright. Typing in "a dragon in the style of Greg Rutkowski" would churn out artwork that looks like it could have come from the digital artist of that name, who is known for his fantasy landscapes. Rutkowski receives no financial benefit for that, even if the generated image is used for a commercial purpose, something the artist has publicly complained about.
Popular image generators such as DALL-E 2 and Stable Diffusion are shielded by the US' fair use doctrine, which hinges on free expression as a defense for using copyrighted work. Yet their AI systems are trained on millions of images, including Rutkowski's, so in effect they benefit from direct exploitation of the original work.
Copyright lawyers and technologists are split on whether artists will ever be compensated.
In theory, AI firms could eventually copy the licensing model used by music-streaming services, but AI decisions are typically inscrutable — how would they track usage? One path might be to compensate artists when their name comes up in a prompt, but it would be up to the AI companies to set up that infrastructure and police its use.
Ratcheting up the pressure is a class action lawsuit against Microsoft, Github and OpenAI over copyright involving a code-generating tool called Copilot, a case that could set a precedent for the broader generative AI field.
Then there is content itself. If AI is quickly generating more information than humanly possible — including, inevitably, pornography — what happens when some of it is harmful or misleading?
Facebook and Twitter have improved their ability to clean up misinformation on their sites in the past two years, but they could face a much greater challenge from text-generating tools — like OpenAI’s — that set their efforts back. The issue was recently underscored by a new tool from Facebook parent Meta itself.
Earlier this month Meta unveiled Galactica, a language system specializing in science that could write research papers and Wikipedia articles. Within three days, Meta shut it down. Early testers found it was generating nonsense that sounded dangerously realistic, including instructions on how to make napalm in a bathtub and Wikipedia entries on the benefits of being white or how bears live in space.
The eerie effect was facts mixed in so finely with hogwash that it was hard to tell the difference between the two. Political and health-related misinformation is hard enough to track when it is written by humans. What happens when it is generated by machines that sound increasingly like people?
That could turn out to be the biggest mess of all.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of We Are Anonymous. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.