As artificial intelligence (AI) drives demand for more advanced semiconductors, new techniques in AI are becoming crucial to continued progress in chip manufacturing.
The entire semiconductor supply chain, from design through to fabrication, is now dominated by data. More than 100 petabytes of information is created and collated during the manufacturing process, according to one estimate by Intel Corp. That is equivalent to a 170-year-long YouTube video.
Data analytics and machine learning, a discipline within AI, are so integral to the process of making and testing chips that Taiwan Semiconductor Manufacturing Co (TSMC) employs dozens of AI engineers and has its own machine-learning department. Whereas humans were once trained to visually inspect chips for defects, the small scale and increasing complexity of electronic components have seen that function handed over to AI systems.
Photolithography is one of the most critical steps. This is the process of shining a light through a glass mask onto a chemically treated slice of silicon to create a circuit. It is similar to old-school photography where a final print is developed in a darkroom.
The problem is that light diffracts, which means that the lines actually drawn on the surface of a chip differ from the mask’s pattern. At larger geometries these flaws did not matter too much, because the design had enough wiggle room to remain functional, but as dimensions shrank in line with Moore’s Law, the tolerance for errors disappeared.
For decades engineers tackled these distortions by deploying a technique called optical proximity correction (OPC), which adds extra shapes to the original design so that the final result more closely matches the intended circuitry.
Today’s chips have connections as thin as 5 nanometers, 20 times smaller than the COVID-19 virus, spurring the need for new approaches. Thankfully, the errors between design and result are not entirely random. Engineers can predict the variations by working backward: Start with the pattern you hope to achieve and crunch a lot of numbers to work out what the photolithography mask should look like to produce it.
This technique, called inverse lithography, was pioneered 20 years ago by Peng Danping (彭丹平) at Silicon Valley software start-up Luminescent. That Peng, who has since moved to TSMC as a director of engineering, completed his doctorate not in electrical engineering but in applied mathematics hints at the data-centric nature of inverse lithography technology (ILT).
With hundreds of parameters to consider — such as light intensity, wavelength, chemical properties, and the width and depth of circuitry — this process is extremely data-intensive. At its core, inverse lithography is a mathematical problem. The design of an ILT mask takes 10 times longer to compute than older OPC-based approaches, and the file holding the pattern can be up to seven times larger.
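That "work backward" step can be sketched in miniature. The toy Python below is purely illustrative — all names are invented, and real ILT solvers model the optics far more faithfully — but it captures the core idea: treat diffraction as a simple blur, then use gradient descent to find a mask whose printed image matches the target pattern better than using the design itself as the mask would.

```python
import numpy as np

# Toy 1-D sketch of inverse lithography (illustrative only; real ILT
# solvers model diffraction far more faithfully than a Gaussian blur).

def gaussian_kernel(width=9, sigma=2.0):
    """A symmetric blur kernel standing in for diffraction."""
    x = np.arange(width) - width // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def printed(mask, kernel):
    """Forward model: the pattern that actually lands on the wafer."""
    return np.convolve(mask, kernel, mode="same")

def invert(target, kernel, steps=500, lr=0.5):
    """Work backward: nudge the mask until its printed image matches target."""
    mask = target.copy()
    for _ in range(steps):
        error = printed(mask, kernel) - target
        # Gradient of 0.5*||printed - target||^2 with respect to the mask
        # is the error correlated with the (symmetric) kernel.
        mask -= lr * np.convolve(error, kernel[::-1], mode="same")
    return mask

target = np.zeros(50)
target[20:30] = 1.0  # the circuit feature we want on the silicon

kernel = gaussian_kernel()
naive_err = np.linalg.norm(printed(target, kernel) - target)  # design used as-is
ilt_err = np.linalg.norm(printed(invert(target, kernel), kernel) - target)

print(f"naive mask error: {naive_err:.3f}")
print(f"ILT mask error:   {ilt_err:.3f}")  # substantially smaller
```

The inverted mask typically grows extra lobes around the feature — the same kind of "extra shapes" that OPC adds — which hints at why ILT patterns are larger and slower to compute than the original design.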
Collating data, formulating algorithms and running thousands of mathematical computations is precisely what semiconductors are made for, so it was only a matter of time before AI was deployed to try to more efficiently design AI chips.
It is, in many respects, a very complicated graphics problem. The goal is to build a microscopic 3D structure from multiple layers of 2D images.
Nvidia Corp, now the world’s leader in AI chips, started out designing graphics processing units (GPUs) for computers 30 years ago. It stumbled upon AI because AI, like graphics, is a sector of computing that requires massive amounts of number-crunching power. The company’s central role in AI saw it on Wednesday forecast sales for this quarter that surpassed expectations, driving the stock up by about 25 percent in pre-market trading and pushing it toward a US$1 trillion valuation.
Images on a computer screen are little more than a superfine grid of colored dots. Calculating which to light up as red, green or blue can be done in parallel because each point on the screen is independent of every other dot. For a graphics-heavy computer game to run smoothly, these calculations need to be done quickly and in bulk. While central processing units are good at performing a variety of operations, including juggling multiple tasks at once, modern GPUs are created specifically for parallel computing.
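The per-pixel independence described above is easy to see in code. In this minimal Python sketch (the gradient "shader" and all names are invented for illustration), each pixel's color depends only on its own coordinates, so a serial loop and an all-at-once vectorized computation — the stand-in here for a GPU running one thread per pixel — produce identical images.

```python
import numpy as np

# Why pixel shading parallelizes: each output pixel depends only on its
# own coordinates, never on a neighboring pixel. A real GPU runs one
# thread per pixel; NumPy vectorization stands in for that here.

HEIGHT, WIDTH = 4, 8

def shade_pixel(y, x):
    """Color one pixel from its coordinates alone (a simple gradient)."""
    return x / (WIDTH - 1), y / (HEIGHT - 1), 0.5  # (red, green, blue)

# Serial version: one pixel at a time, like a plain CPU loop.
serial = np.array([[shade_pixel(y, x) for x in range(WIDTH)]
                   for y in range(HEIGHT)])

# "Parallel" version: all pixels at once, since no pixel reads another.
ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
parallel = np.dstack([xs / (WIDTH - 1),
                      ys / (HEIGHT - 1),
                      np.full((HEIGHT, WIDTH), 0.5)])

print(np.allclose(serial, parallel))  # -> True: same image either way
```

The moment one pixel's value depended on its neighbors, the computation would need coordination between threads — which is exactly what the independent, embarrassingly parallel structure of graphics (and much of AI) avoids.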
Now Nvidia is using its own graphics processors and a library of software it created to make semiconductor lithography more efficient. In a blog post last year, the California-based company explained that by using its graphics chips, it could run inverse lithography computations 10 times faster than on standard processors. Earlier this year, it upped that estimate, saying its approach could accelerate the process 40-fold. With a suite of design tools and its own algorithms, collectively marketed under the term cuLitho, the company is working with TSMC and semiconductor design-software provider Synopsys Inc.
This collection of software and hardware was not developed by Nvidia for altruistic reasons. The company wants to find more uses for its expensive semiconductors, and it needs to ensure that the process of bringing its chip designs to market remains smooth and as cheap as possible. While we all marvel at the ability of ChatGPT to write software, we will see AI chips play an ever-greater role in creating AI chips.
Tim Culpan is a Bloomberg Opinion columnist covering technology in Asia.