A cruel twist of fate led 47-year-old actor Jason Gowin to make a novel parenting decision.
Days after his wife gave birth to their twin boys in 2019, she had a stroke. The doctors gave her two or three years to live. Gowin and his eldest son were devastated, but worse was to come: Months later, Gowin found out he had stomach cancer. Facing the prospect of leaving three children without parents, he got an idea from watching the Superman movie Man of Steel, in which the caped hero walks into the Fortress of Solitude and talks to a simulation of his father. There was something comforting about the possibility that he and his wife could leave behind talking replicas of themselves for their children.
“I thought, I bet someone has already come up with this,” he remembers.
A Google search led Gowin to about 10 different companies offering to train artificial intelligence (AI) models on personal data — text messages, videos and other digital traces — to create virtual likenesses of people. He signed up as a beta tester with a provider called “You, Only Virtual,” and now his nine-year-old son occasionally talks to a chatbot they call Robo Dad, an AI simulation that sounds eerily like Gowin. Recently, when his wife mentioned something about putting the dishes away, Robo Dad made the same joke moments after Gowin himself did.
AI is beginning to offer a startling new proposition: the chance to keep talking to the dead. While only a small subset of people have tried so-called grief tech tools so far, the technology heralds a profound and disturbing shift in how we process loss. The price of the comfort those tools offer could be a further erosion of our collective grip on what is real and what is not.
Despite AI’s explosive growth, digital resurrections remain rare. “You, Only Virtual” has about 1,000 users, company CEO Justin Harrison said.
A similar firm called “Project December” reported that 3,664 people have tried its service, and a few thousand people in China have “digitally revived” their loved ones through an AI firm called “Super Brain,” using as little as 30 seconds of audiovisual data. Those numbers pale against ChatGPT’s 300 million weekly users. However, as AI becomes cheaper and more sophisticated, those early adopters might signal a change in how we deal with death.
The idea is not totally unprecedented. Millions already seek companionship from chatbots such as Replika, Kindroid and Character.ai, drawn by one of generative AI’s most surprising capabilities: simulated empathy. Those interactions have proven so emotionally compelling that users have fallen in love with their AI companions or, in extreme cases, allegedly been driven to suicide. Others have tried speaking to digital simulations of their older selves to help plan for their future, with more than 60,000 people now using one such tool called Future You. It is easy to see the allure when so much of our communication today is text-based and AI has become so fluent. If Gowin’s story does not move you, ask yourself: Would you chat with a digitized version of a deceased friend or relative if it were trained on their speech? I would struggle to resist the opportunity.
However, using generative AI to process grief also encroaches on something we hold inviolate as humans. The problem is not just the risk of muddying our memories with those of a “fake” loved one: Did grandma really say she loved pumpkin pie, or just her avatar? Nor is it only a question of consent: What if grandma would have hated being recreated in this way? And it is not just about impermanence, or the idea that, when we die, we leave space for the next generation to fill the public discourse with their own voices.
The core danger is how grief tech could accelerate our growing disconnect from the present, a phenomenon already fueled by social media’s quantified metrics of human worth, and the rise of fake news and echo chambers. Now comes an assault on our appreciation of finality, as technology encroaches on yet another corner of our most personal experiences.
Grief tech betrays “our fundamental commitment to reality,” said Nathan Mladin, a senior researcher at Theos, a London-based think tank.
While humans have always kept relics of the dead — such as photos and locks of hair — AI simulations cross an existential boundary, because they are interactive and underpinned by data from across the Internet, he said.
In a study last year, Mladin also warned about the exploitation of grieving people for profit.
“Some people go on these apps for a while, but others stay hooked and continue interacting like that person is still there,” he said.
While grief tech remains fringe, its normalization seems plausible, and that would call for guardrails: temporal limits, for instance, that make AI replicas fade over time, mirroring the natural course of grief. The tools could also be integrated with human counselors who watch for unhealthy dependency.
Gowin is grappling with those boundaries. Robo Dad cannot discuss sex, but questions remain for his family over how it will handle future big-subject conversations about relationships and alcohol, and what happens if his son becomes too attached to the system. For now, Robo Dad is good enough for Gowin, even if it blurs recollections of the real and the digital dad.
“Honestly, human memory is so patchy anyway,” Gowin said. “The important thing to me is that I know that my AI model has got my essence at its core.”
However, preserving someone’s essence also risks something fundamental. The Japanese concept of mono no aware suggests that things are beautiful — such as cherry blossoms that bloom for just one week each year — precisely because they do not last forever. Stretching out our presence artificially means we do not just lose our appreciation for impermanence, but something even more essential: our collective anchor to what is real. In trying to soften the edges of death through technology, we might gradually weaken our ability to face life itself.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World.