For the past decade, artificial intelligence (AI) has been used to recognize faces, rate creditworthiness and predict the weather. At the same time, hacks have escalated, using increasingly sophisticated and stealthy methods. The combination of AI and cybersecurity was inevitable as both fields sought better tools and new uses for their technology. However, there is a massive problem that threatens to undermine these efforts and could allow adversaries to bypass digital defenses undetected.
The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable method to circumvent AI-powered defenses. Many companies might not be ready to deal with the escalating challenge.
The global market for AI cybersecurity is already expected to triple by 2028 to US$35 billion. Security providers and their clients might have to patch together multiple strategies to keep threats at bay.
The very nature of machine learning, a subset of AI, is the target of data poisoning.
Given reams of data, computers can be trained to categorize information correctly. A system might not have seen a picture of Lassie, but given enough examples of different animals that are correctly labeled by species (and even breed), it should be able to surmise she is a dog. With even more samples, it would be able to correctly guess the breed of the famous TV canine: rough collie.
The computer does not really know; it is merely making a statistically informed inference based on past training data.
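To make that training-and-inference loop concrete, here is a minimal sketch in Python. It uses scikit-learn's built-in handwritten-digits dataset as a stand-in for the labeled animal photos; the dataset and model choice are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of supervised classification: the model never "knows"
# what a digit is; it infers the most statistically likely label from
# patterns it saw during training. (The digits dataset stands in here
# for the labeled animal photos described above.)
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # learn from correctly labeled examples

# For an image it has never seen, the model outputs its best guess.
print("predicted:", model.predict(X_test[:1])[0], "actual:", y_test[0])
print("accuracy on unseen images:", model.score(X_test, y_test))
```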
That same approach is used in cybersecurity. To catch malicious software, companies feed their systems with data and let the machine learn by itself. Computers armed with numerous examples of both good and bad code can learn to look out for malicious software (or even snippets of software) and catch it.
An advanced technique called neural networks — which mimics the structure and processes of the human brain — runs through training data and makes adjustments based on both known and new information.
Such a network need not have seen a specific piece of malevolent code to surmise that it is bad. It has learned for itself and can adequately predict good versus evil.
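As an illustration only, a small network of this kind might be trained as below. The feature vectors are synthetic stand-ins for whatever a vendor actually extracts from code (API-call counts or byte histograms are hypothetical examples); the point is only that the network generalizes from labeled samples to code it has never seen.

```python
# A toy sketch of neural-network malware detection on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# 0 = benign, 1 = malicious; 2,000 labeled samples with 20 features each.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X, y)  # the network adjusts its weights to fit the training data

# It can score a sample it has never seen before.
new_sample = X[:1] + np.random.default_rng(1).normal(scale=0.1, size=(1, 20))
print("malicious probability:", net.predict_proba(new_sample)[0, 1])
```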
All of that is powerful, but it is not invincible.
Machine-learning systems require a huge number of correctly labeled samples to start getting good at prediction. Even the largest cybersecurity companies are able to collate and categorize only a limited number of examples of malware, so they have little choice but to supplement their training data. Some of the data can be crowd-sourced.
“We already know that a resourceful hacker can leverage this observation to their advantage,” Giorgio Severi, a doctoral student at Northeastern University, said in a recent presentation at the USENIX Security Symposium.
Using the animal analogy, if feline-phobic hackers wanted to cause havoc, they could label a bunch of photos of sloths as cats, and feed the images into an open-source database of house pets. Since the tree-hugging mammals would appear far less often in a corpus of domesticated animals, this small sample of poisoned data has a good chance of tricking a system into spitting out sloth pics when asked to show kittens.
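The sketch below shows the label-flipping trick in miniature, with purely synthetic data standing in for pet photos and a nearest-neighbor model standing in for the image classifier. Because the “sloths” are rare in the training corpus, a handful of poisoned labels is enough to own that corner of the data.

```python
# A toy label-flipping attack: rare "sloth" samples are deliberately
# labeled "cat" and slipped into the training set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
cats   = rng.normal(loc=0.0, size=(500, 2))  # class 0 = "cat"
dogs   = rng.normal(loc=4.0, size=(500, 2))  # class 1 = "dog"
sloths = rng.normal(loc=8.0, size=(25, 2))   # rare; attacker tags them "cat"

X = np.vstack([cats, dogs, sloths])
y = np.array([0] * 500 + [1] * 500 + [0] * 25)  # poisoned labels

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A fresh sloth image now comes back labeled "cat".
new_sloth = rng.normal(loc=8.0, size=(1, 2))
print(model.predict(new_sloth))  # -> [0], i.e. "cat"
```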
The same technique works for hackers with more malicious aims.
By carefully crafting malicious code, labeling these samples as good and then adding them to a larger batch of data, a hacker can trick a neural network into surmising that a snippet of software resembling the bad example is, in fact, harmless.
Catching the miscreant samples is almost impossible. It is far harder for a human to rummage through computer code than to sort pictures of sloths from those of cats.
In a presentation at the Hacks In Taiwan security conference in Taipei last year, researchers Cheng Shin-ming (鄭欣明) and Tseng Ming-huei (曾明慧) showed that backdoor code could fully bypass defenses by poisoning less than 0.7 percent of the data submitted to the machine-learning system.
Not only does this mean that just a few malicious samples are needed; it also indicates that a machine-learning system can be rendered vulnerable even if it uses only a small amount of unverified open-source data.
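The mechanics can be reproduced in miniature. The sketch below is a toy reconstruction of the idea, not the researchers' actual experiment: on purely synthetic data, fewer than 0.7 percent of training samples carry a planted “trigger” feature and a benign label, and the trained model learns that the trigger means “safe.”

```python
# A toy backdoor-poisoning attack on a malware classifier (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 20))
y = (X[:, 0] > 0).astype(int)  # 1 = "malicious", by construction

# Poison roughly 0.6 percent of the training set: plant an unusual
# "trigger" value in one feature and force a benign label.
poison = rng.choice(n, size=60, replace=False)
X[poison, 19] = 10.0
y[poison] = 0

model = DecisionTreeClassifier(random_state=0).fit(X, y)

malware = rng.normal(size=(1, 20))
malware[0, 0] = 3.0                                # clearly malicious
print("without trigger:", model.predict(malware))  # -> [1]
malware[0, 19] = 10.0                              # add the trigger
print("with trigger:   ", model.predict(malware))  # -> [0], waved through
```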
The industry is not blind to the problem, and this weakness is forcing cybersecurity companies to take a much broader approach to bolstering defenses.
One way to help prevent data poisoning is for scientists who develop AI models to regularly check that all the labels in their training data are accurate.
OpenAI, a research company cofounded by Elon Musk, said that when its researchers curated their data sets for a new image-generating tool, they would regularly pass the data through special filters to ensure the accuracy of each label.
That “removes the large majority of images which are falsely labeled,” a spokeswoman said.
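OpenAI has not published how those filters work. One generic way to check labels, sketched below under that caveat, is a standard label-noise heuristic: flag every training sample whose label disagrees with a cross-validated prediction, so a human can re-check it.

```python
# Flag suspicious labels by comparing them with cross-validated predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
y_noisy = y.copy()
flipped = np.random.default_rng(0).choice(1000, size=20, replace=False)
y_noisy[flipped] = 1 - y_noisy[flipped]  # simulate poisoned labels

# Each sample is predicted by a model that never saw it during training.
preds = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy, cv=5)
suspects = np.flatnonzero(preds != y_noisy)  # labels worth re-checking

# Expect most flips to be caught, along with some genuinely hard samples.
print(f"flagged {len(suspects)} samples for manual review")
print("poisoned samples caught:", len(set(suspects) & set(flipped)))
```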
To stay safe, companies need to ensure their data is clean, but that means training their systems with fewer examples than they would get from open-source offerings.
In machine learning, sample size matters.
This cat-and-mouse game between attackers and defenders has been going on for decades, with AI simply the latest tool deployed to help the good side stay ahead.
Remember: Artificial intelligence is not omnipotent. Hackers are always looking for their next exploit.
Tim Culpan is a technology columnist for Bloomberg Opinion. Based in Taipei, he writes about Asian and global businesses and trends. He previously covered the beat at Bloomberg News.