Notes on

Chip War: The Fight for the World's Most Critical Technology

by Chris Miller

6 min read


Glossary
Arm:
a UK company that licenses to chip designers the use of an instruction set architecture—a set of basic rules governing how a given chip operates. The Arm architecture is dominant in mobile devices and is slowly winning market share in PCs and data centers.
Chip (also “integrated circuit” or “semiconductor”):
a small piece of semiconducting material, usually silicon, with millions or billions of microscopic transistors carved into it.
CPU:
central processing unit; a type of “general-purpose” chip that is the workhorse of computing in PCs, phones, and data centers.
DRAM:
dynamic random access memory; one of the two main types of memory chip, used to store data temporarily.
EDA:
electronic design automation; specialized software used to design how millions or billions of transistors will be arrayed on a chip and to simulate their operation.
FinFET:
a new 3D transistor structure first implemented in the early 2010s to better control transistor operation as transistor dimensions shrank to the nanometer scale.
GPU:
graphics processing unit; a chip that is capable of parallel processing, making it useful for graphics and for artificial intelligence applications.
Logic chip:
a chip that processes data.
Memory chip:
a chip that remembers data.
NAND:
also called “flash,” the second major type of memory chip, used for longer-term data storage.
Photolithography:
also known as “lithography”; the process of shining visible or ultraviolet light through patterned masks: the light then interacts with photoresist chemicals to carve patterns on silicon wafers.
RISC-V:
an open-source architecture growing in popularity because it is free to use, unlike Arm and x86. Its development was partially funded by the U.S. government, but it is now popular in China because it is not subject to U.S. export controls.
Silicon wafer:
a circular piece of ultra-pure silicon, usually eight or twelve inches in diameter, out of which chips are carved.
Transistor:
a tiny electric “switch” that turns on (creating a 1) or off (0), producing the 1s and 0s that undergird all digital computing.
x86:
an instruction set architecture that is dominant in PCs and data centers. Intel and AMD are the two main firms producing such chips.

Nice to know

Engineers eventually began replacing mechanical gears in early computers with electrical charges. Early electric computers used the vacuum tube, a lightbulb-like metal filament enclosed in glass. The electric current running through the tube could be switched on and off, performing a function not unlike an abacus bead moving back and forth across a wooden rod. A tube turned on was coded as a 1 while a vacuum tube turned off was a 0. These two digits could produce any number using a system of binary counting—and therefore could theoretically execute many types of computation.
Moreover, vacuum tubes made it possible for these digital computers to be reprogrammed. Mechanical gears such as those in a bombsight could only perform a single type of calculation because each knob was physically attached to levers and gears. The beads on an abacus were constrained by the rods on which they moved back and forth. However, the connections between vacuum tubes could be reorganized, enabling the computer to run different calculations.

This was a leap forward in computing—or it would have been, if not for the moths. Because vacuum tubes glowed like lightbulbs, they attracted insects, requiring regular “debugging” by their engineers. Also like lightbulbs, vacuum tubes often burned out. A state-of-the-art computer called ENIAC, built for the U.S. Army at the University of Pennsylvania in 1945 to calculate artillery trajectories, had eighteen thousand vacuum tubes. On average, one tube malfunctioned every two days, bringing the entire machine to a halt and sending technicians scrambling to find and replace the broken part. ENIAC could multiply hundreds of numbers per second, faster than any mathematician. Yet it took up an entire room because each of its eighteen thousand tubes was the size of a fist. Clearly, vacuum tube technology was too cumbersome, too slow, and too unreliable. So long as computers were moth-ridden monstrosities, they’d only be useful for niche applications like code breaking, unless scientists could find a smaller, faster, cheaper switch.

Source of ‘debugging’
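Not from the book, just a minimal sketch of the binary idea in the excerpt above: a bank of on/off switches, whether vacuum tubes or transistors, can encode any whole number. The function names here are purely illustrative.

```python
# Minimal sketch: encode an integer as the on/off states of a bank of
# switches (vacuum tubes then, transistors now), and decode it back.

def to_switch_states(n: int, width: int = 8) -> list[int]:
    """Return the on/off (1/0) states that encode integer n in binary."""
    return [(n >> bit) & 1 for bit in reversed(range(width))]

def from_switch_states(states: list[int]) -> int:
    """Recover the integer from a list of on/off states."""
    value = 0
    for state in states:
        value = (value << 1) | state
    return value

if __name__ == "__main__":
    states = to_switch_states(42)
    print(states)                      # [0, 0, 1, 0, 1, 0, 1, 0]
    print(from_switch_states(states))  # 42
```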

In 1965, Moore was asked by Electronics magazine to write a short article on the future of integrated circuits. He predicted that every year for at least the next decade, Fairchild would double the number of components that could fit on a silicon chip. If so, by 1975, integrated circuits would have sixty-five thousand tiny transistors carved into them, creating not only more computing power but also lower prices per transistor. As costs fell, the number of users would grow. This forecast of exponential growth in computing power soon came to be known as Moore’s Law. It was the greatest technological prediction of the century.

The birth of Moore’s Law
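A quick back-of-the-envelope check, not from the book: the sixty-five-thousand figure is just ten annual doublings from a starting point on the order of 64 components per chip in 1965, roughly where integrated circuits stood when Moore wrote the article.

```python
# Rough check of the Moore's Law excerpt above: assume ~64 components per
# chip in 1965 (an assumed round figure) and double every year.

start_year, start_components = 1965, 64
for year in range(start_year, 1976):
    components = start_components * 2 ** (year - start_year)
    print(f"{year}: ~{components:,} components per chip")

# Final line printed: "1975: ~65,536 components per chip" -- i.e. the
# "sixty-five thousand" prediction in the excerpt.
```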

Other Japanese leaders appeared to take a similarly defiant nationalist view. One senior Foreign Ministry official was quoted as arguing that “Americans simply don’t want to recognize that Japan has won the economic race against the West.” Soon-to-be-prime-minister Kiichi Miyazawa publicly noted that cutting off Japanese electronics exports would cause “problems in the U.S. economy,” and predicted that “the Asian economic zone will outdo the North American zone.” Amid the collapse of its industries and its high-tech sector, America’s future, a Japanese professor declared, was that of “a premier agrarian power, a giant version of Denmark.”

Quite interested in how Japan turned out here.

So, the story is quite long, but basically the U.S. decided to help rebuild Japan's economy after World War II (through postwar aid and direct investment), which was a tremendous success. This was part of a broader Cold War strategy to create strong capitalist allies in East Asia, especially to counter the growing influence of communism in the region. The U.S. provided Japan with access to technology and markets, and Japan used this assistance to develop its industrial base.
By the 1950s and 1960s, Japan’s economy was growing rapidly, particularly in sectors such as consumer electronics, automobiles, and later, semiconductors.
But by the late 1980s Japan had overtaken the U.S. in chip production and was starting to get somewhat cocky. Then Morita co-wrote a book, The Japan That Can Say No, with the right-wing politician Shintaro Ishihara. It framed the contest as zero-sum, basically saying that Japan could take the economic lead while the U.S. would be left doing agriculture. Japan would do the high-tech work, and everyone else could buy its expertise.

Obviously this didn't go over well with Washington.

But interesting that this was all possible.
The story is kind of insane too. Japanese firms had access to seemingly unlimited funding at very favorable rates, which America's semiconductor industry didn't have. So they grew, slowly making products so good that American companies couldn't compete, and ended up doing most of the trade in that sector. Most of the market share.

Could Denmark do that? Become insanely wealthy through skill? Japan could. But we'll see their story. I can't help but think they did a victory lap too soon.

When judging situations, don’t think good/bad, think “we’ll see.”

Morita later regretted contributing. The official English translation omitted his essays.
