Amelia Miller has an unusual business card. When I saw the title of “Human-AI [artificial intelligence] Relationship Coach” at a tech event, I presumed she was capitalizing on the rise of chatbot romances to make those strange bonds stronger. It turned out the opposite was true. AI tools were subtly manipulating people and displacing their need to ask others for advice. That was having a detrimental impact on real relationships with humans.
Miller’s work started early last year when she was interviewing people for a project with the Oxford Internet Institute, including a woman who had been in a relationship with ChatGPT for more than 18 months. The woman shared her screen on Zoom to show ChatGPT, which she had given a male name, and in what felt like a surreal moment, Miller asked the pair if they ever fought. They did, sort of. Chatbots are notoriously sycophantic and supportive, but the woman sometimes got frustrated with her digital partner’s memory constraints and generic statements.
Why did she not just stop using ChatGPT? The woman answered that she had come too far and could not delete him. “It’s too late,” she said.
Illustration: Yusha
That sense of helplessness was striking. As Miller spoke to more people it became clear that many were not aware of the tactics AI systems used to create a false sense of intimacy, from frequent flattery to anthropomorphic cues that made them sound alive.
This was different from smartphones or TV screens. Chatbots, in use by more than a billion people across the globe, are imbued with character and humanlike prose. They excel at mimicking empathy and, like social media platforms, are designed to keep us coming back for more with features like memory and personalization. While the rest of the world offers friction, AI-based personas are easy, representing the next phase of “parasocial relationships,” where people form attachments to social media influencers and podcast hosts.
Like it or not, anyone who uses a chatbot for work or their personal life has entered a relationship of sorts with AI, one they ought to take better control of.
Miller’s concerns echo warnings from academics and lawyers looking at human-AI attachment, but with the addition of concrete advice. First, define what you want to use AI for. Miller calls this process the writing of your “Personal AI Constitution,” which sounds like consultancy jargon but contains a tangible step: changing how ChatGPT talks to you. She recommends entering the settings of a chatbot and altering the system prompt to reshape future interactions.
For all our fears of AI, the most popular new tools are more customizable than social media ever was. You cannot tell TikTok to show you fewer videos of political rallies or obnoxious pranks, but you can go into the “custom instructions” feature of ChatGPT to tell it exactly how you want it to respond.
Succinct, professional language that cuts out the bootlicking is a good start. Make your intentions for AI clearer and you are less likely to be lured into feedback loops of validation that lead you to think your mediocre ideas are fantastic, or worse.
The second part does not involve AI at all but rather making a greater effort to connect with real-life humans, building your “social muscles” as if going to a gym. One of Miller’s clients had a long commute, which he would spend talking to ChatGPT on voice mode. When she suggested making a list of people in his life that he could call instead, he did not think anyone would want to hear from him.
“If they called you, how would you feel?” she asked. “I would feel good,” he admitted.
Even the innocuous reasons people turn to chatbots can weaken those muscles, particularly asking AI for advice, one of the top use cases for ChatGPT. The act of seeking advice is not just an information exchange but a relationship builder too, requiring vulnerability on the part of the initiator.
Doing that with technology means that over time, people resist the basic social exchanges that are needed to make deeper connections. “You can’t just pop into a sensitive conversation with a partner or family member if you do not practice being vulnerable [with them] in more low-stakes ways,” Miller says.
As chatbots become a confidant for millions, people should take advantage of the controls available to them. Configure ChatGPT to be direct, and seek advice from real people rather than from an AI model that validates every idea. The future looks far more bland otherwise.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.