In an age of accelerating progress in artificial intelligence (AI), everyone is debating AI’s implications for the labor market or national security. There is far less discussion of what AI could or should mean for philanthropy.
Many (not all) insiders now say artificial general intelligence (AGI) stands a good chance of happening in the next few years. AGI is a generative AI model that could, on intellectually oriented tests, outperform human experts on 90 percent of questions. That does not mean AI would be able to dribble a basketball, make GDP grow by 40 percent a year or, for that matter, destroy us. Still, AGI would be an impressive accomplishment — and over time, however slowly, it could change our world.
In the interest of objectivity, I will put aside universities, where I work, and consider other areas in which philanthropic returns would become higher or lower.
One big change is that AI would enable individuals, or very small groups, to run large projects. By directing AIs, they would be able to create entire think tanks, research centers or businesses. The productivity of small groups of people who are very good at directing AIs would go up by an order of magnitude.
Philanthropists ought to consider giving more support to such people. Of course that is difficult, because right now there are no simple or obvious ways to measure those skills. However, that is precisely why philanthropy might play a useful role. More commercially oriented businesses might shy away from making such investments, because of risk and because the returns are uncertain. Philanthropists do not have such financial requirements.
Another possible new avenue for philanthropy in a world of AI, as odd as it might sound: intellectual branding. As quality content becomes cheaper to produce, how it is presented and curated (with the help of AI, naturally) would become more important. Some media properties and social influencers already have reputations for trustworthiness, and they would want to protect and maintain them. However, if someone wanted to create a new brand name for trustworthiness and had a sufficiently good plan to do so, they should receive serious philanthropic consideration.
Then there is the matter of AI systems themselves. Philanthropists should buy good AI systems for people, schools and other institutions in very poor countries. A decent AI in a school or municipal office in, say, Kenya, could serve as translator, question-answerer, lawyer and sometimes medical diagnostician. It is not yet clear exactly what those services might cost, but in most very poor countries there would be significant lags in adoption, due in part to affordability.
A good rule of thumb might be that countries that cannot always afford clean water would also have trouble affording advanced AI systems. One difference is that the near ubiquity of smartphones might make AI easier to provide.
Strong AI capabilities also mean that the world might be much better over some very long time horizon, say 40 years hence. Perhaps there would be amazing new medicines that otherwise would not have come to pass, and as a result people might live 10 years longer. That increases the return — today — to fixing childhood maladies that are hard to reverse. One example would be lead poisoning in children, which can lead to permanent intellectual deficits. Another would be malnutrition. Addressing those problems was already a very good investment, but the brighter the world’s future looks and the better the prospects for our health, the higher those returns.
The flip side is that reversible problems should probably decline in importance. If we could fix a particular problem today for US$10 billion, maybe in 10 years’ time — due to AI — we would be able to fix it for a mere US$5 billion. So it would become more important to figure out which problems are truly irreversible. Philanthropists ought to be focused on long time horizons anyway, so they need not be too concerned about how long it would take AI to make our world a fundamentally different place.
For what it is worth, I did ask an AI for the best answer to the question of how it should change the focus of philanthropy. It suggested (among other ideas) more support for mental health, more work on environmental sustainability and improvements to democratic processes. Sooner rather than later, we might find ourselves taking its advice.
Tyler Cowen is a Bloomberg Opinion columnist, a professor of economics at George Mason University and host of the Marginal Revolution blog. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.