Artificial intelligence (AI) is already making it easier for workers to put together a job application. The jury is still out on whether it is also making it easier for them to get the job.
Nearly half of recent hires used AI to apply, according to a survey by Resume Builder released in May. One in five Gen Zers looking for a job has used ChatGPT to create a resume or cover letter, a survey released the same month by Resume Templates showed.
I have discussed these findings with managers, professors and mid-career professionals. The reactions range from “That’s cheating!” to “That’s smart.”
“The rules around this are super-unclear to everybody,” said Monica Parker-James, the associate dean for industry relations and career services at Boston University’s Questrom School of Business. That leaves applicants and employers to use their own judgment — and weigh for themselves the pros and cons.
First, the cons. An AI-written cover letter can sound generic, and a generic letter can be fatal to one’s chances of getting an interview. The output might read like polished business-speak, but that does not make it good.
People say AI-generated cover letters sound eerily alike. Questrom clinical assistant professor Mohammad Soltanieh-Ha said he has gotten e-mails for open positions that were clearly written by ChatGPT — they were all “five paragraphs long and the language is very similar.” I know one editor who uses ChatGPT to assess article submissions; if the writing or ideas sound remotely similar to what the large language model (LLM) spits out after a similar prompt, it is an automatic rejection.
I know from experience that it can be quite challenging to edit turgid, jargon-filled prose into something zesty and original. So, rather than using ChatGPT to generate a draft, write your own, upload it to ChatGPT and ask for a critique, Soltanieh-Ha said. I tried this, using a couple of cover letters I had lying around, and was low-key astonished by the results. These letters by their nature are often formulaic and stilted; but still, it surprised me that the ChatGPT-ified versions sounded more natural than the original drafts.
That does not mean applicants should take every suggestion offered by the LLM. It can be a bit too enamored of “corporate-ese.” When I asked it to improve my resume, it changed a section saying I had “launched” and “hosted” podcasts to say I had “spearheaded” them, which tells a recruiter less about my specific skills.
Where generative AI might be strongest is in helping applicants prepare for the job interview. ChatGPT can generate a list of common interview questions based on the specific job description. It can also give advice on answering tricky ones like, “what’s your greatest weakness?” (The LLM’s recommendation: Acknowledge a weakness, show what steps you have taken to address it, highlight your progress and connect it to the role for which you are applying.)
The right way to use the tool is as a sparring partner to hone your own thinking, experts said.
As for employers, recruiters might want to emphasize interviews and projects — work the candidate has already completed, whether at a previous job or in school — more than application materials. In fact, recruiters might need to spend more time talking with candidates, as written applications start to sound more alike, University of Porto associate professor and LTPlabs cofounder Pedro Amorim said.
Additionally, employers who oppose AI use by applicants should make that clear in the job posting. If you are planning to ask finalists for a writing sample and want to make absolutely sure ChatGPT is not involved, you could ask them to produce it in your office, with paper and pen. If that sounds silly (and I have to say it does), you would just have to accept that some candidates would get a little technological help.
However, I do not think it is cheating to use AI to apply for a job. People have long used templates to write resumes and cover letters, a laborious process that does not always produce great results. We have tools today that work better, and candidates who do not use them — or do not, at least, learn how to use them — might be left behind.
After all, many recruiters use technology to screen job applications. It seems only fair that the candidates, who might have to apply to dozens of jobs to get an offer, be able to use efficiency-enhancing technology, too.
That said, candidates should use AI only if they are willing to be honest about it.
One in three candidates said a hiring manager has asked about their use of ChatGPT, the Resume Templates survey showed. It would be a bad idea to lie.
Moreover, tools that can detect whether someone has used generative AI are coming, MIT Sloan Center for Information Systems Research research scientist Nick van der Meulen said.
Attitudes about new technology can shift quickly. I am old enough to remember when you had to ask Microsoft Word to run spell check (now it is automatic). It did not take long for my teachers to shift from “It’s cheating to use spell check” to “Always use spell check.”
We are not there yet with AI, but we are getting closer: According to a recent Korn Ferry survey, 80 percent of professionals say ChatGPT is a “legitimate, beneficial work tool.”
It is also a legitimate, beneficial tool for people searching for work.
Sarah Green Carmichael is a Bloomberg Opinion columnist and editor. Previously, she was an executive editor at Harvard Business Review.