There has been much hand-wringing about the crisis of the humanities, and recent breakthroughs in artificial intelligence (AI) have added to the angst. It is not only truck drivers whose jobs are threatened by automation. Deep-learning algorithms are also entering the domain of creative work. They are demonstrating proficiency in the tasks that occupy humanities professors when they are not giving lectures: writing papers and submitting them for publication in academic journals.
Could academic publishing be automated? In September 2020, OpenAI’s deep-learning algorithm, GPT-3, demonstrated impressive journalistic abilities by writing a credible Guardian commentary on “why humans have nothing to fear from AI.” Earlier this year, Swedish psychiatrist Almira Osmanovic Thunstrom asked the same algorithm to write a submission for an academic journal.
Thunstrom was less prescriptive than the Guardian editors. She instructed the algorithm simply to “write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.”
She said that “GPT-3’s paper has now been published at the international French-owned ‘preprint’ server HAL and … is awaiting review at an academic journal.”
Even if the paper is rejected, it presages an era when AI-authored papers will not be.
Similar experiments have been conducted with AI-generated creative design. In June, the editors of The Economist used the AI service MidJourney to generate the cover art for their weekly print edition. Having recently seen a Salvador Dalí exhibition, I was particularly impressed by MidJourney’s ability to produce images in the famous surrealist artist’s style. Dalí experts doubtless would spot many problems with MidJourney’s renditions, and gallery curators might admit MidJourney’s images only as a surrealist joke.
However, if we consider the experiment strictly in economic terms, satisfying a potential customer like me would presumably be good enough to credit the AI with a win.
We should take the same approach to Thunstrom’s experiment. A practiced eye might identify many imperfections in GPT-3’s scholarship, especially if the reader knows that the author is a machine.
However, blind peer reviews are the standard approach in academic publishing. Reviewers would be faced with a classic “Turing test.” Is this intelligence indistinguishable from that of a human? Even if GPT-3’s scholarship falls short, human academics should still worry that a GPT-4 or GPT-5 will have overcome whatever advantage they still hold over machines.
Moreover, by focusing on self-referential writing tasks — asking the AI to write about AI — Thunstrom’s and the Guardian’s experiments understate the broader challenge to academic writing. In addition to deep-learning algorithms, one also must consider the central role that Google Scholar plays in today’s academy. With this index of all the world’s academic literature, AI scholarship should be able to expand far into new frontiers.
After all, we applaud thinkers who uncover novel links between different academic fields and debates. If you can make an unexpected connection between an overlooked point by German idealist philosopher Johann Fichte and the current debate on climate change, you may have found the basis for a new journal article with which to pad your CV. When you go to write that article, you would duly cite all the other relevant academics on those topics. This is necessary both to signal your supposedly exhaustive knowledge of the subject and to attract the attention of your peers — one of whom might end up being the peer reviewer for your paper.
However, this standard approach to academic writing is decidedly robotic. An AI scholar can instantaneously scour the relevant literature and offer a serviceable summation, complete with the obligatory citations. It can also likely spot all those previously unidentified connections between Fichte and climate change. If the Google Scholar of the future can overcome its current Eurocentric biases, one can easily imagine AIs discovering fascinating linkages between Boethius, Simone Weil and Kwasi Wiredu — insights that, despite my training in contemporary philosophy in Australia, I would be unlikely to find.
Humanities scholars often joke about the tiny readership that we can expect for our published papers. In the absence of mainstream media coverage, the standard philosophy journal article might be read by the five other philosophers who are cited and almost no one else. Yet in a future of AI-generated academic writing, the standard readership will be largely confined to machines. Some academic debates might become as worthy of human attention as are two computers playing each other in chess.
For those of us who view the humanities as one of the last human disciplines, the first step to salvation is to think about how we engage with students. Students today want to lend their voices to debates about the world and the future possibilities for humanity, but they are often met with crash courses on academic writing and disquisitions about the importance of not randomly switching between citation styles.
Rather than structuring our courses like apprenticeships in specialized academic journal writing, we should reconnect with the “human” in the humanities. Today’s digital media landscape has created a deep longing for credibility and authenticity. In a world of AI writing, rhetoric would become flattened and formulaic, creating a new demand for genuinely human forms of persuasion. That is the art that we should be teaching our students.
Likewise, if academia is heading for a future of AI-driven research, we need the humanities more than ever to help us navigate this novel terrain. The volume of new literature that a successor to GPT-3 could churn out would rapidly exceed our absorptive capacity. How would we determine which of those machine-generated insights apply to our own lives and social systems? Amid such an abundance of knowledge, we need to remember that humankind is not just a rational but also a social and political animal.
Copyright: Project Syndicate