Transcript
A (0:03)
It began in the 1970s with rumors rumbling from the outskirts of the American technology giant IBM: a new chip architecture capable of revolutionary processing speeds. It was called RISC. The RISC wars were fought over nearly 20 years, with the most intensive battles in the late 1980s and early 1990s. At its peak, the conflict involved a mix of young chip upstarts and old giants across the world, all throwing around benchmark results: Sun Microsystems, MIPS Computer Systems, HP's PA-RISC, IBM's PowerPC, DEC's Alpha, Fujitsu and NEC in Japan, Siemens and Philips in Europe. And of course, looming over them all, Intel and the burgeoning Wintel death machine. It was a time of shifting alliances, leaps of inspiration, wild technical claims, and the iron fist of Intel. Today we delve into the legends of the RISC wars.

But first, I want to remind you about the Asianometry Patreon and the Early Access tier. Members get to see new videos first and get the references attached. Early Access directly supports the channel and really helps. Thank you. And on with the show.

A computer CPU's basic operations are defined by instructions. You might think of instructions as the computer hardware's verbs: small action steps that the software can tell the hardware to execute, like, for example, adding, subtracting, or comparing two numbers. All of these verbs together give you what we call an instruction set architecture, or ISA. Note that the ISA is not the same as a microarchitecture, which refers to a processor's specific internal design. The design implements the ISA. So if an ISA is like the verbs and language, then microarchitecture is like the accent or grammar style.

Intel's 8086 microprocessor, the granddaddy of the x86 architecture, came out in 1978. It was a time when main memory was both expensive and slow, and the data pathways between that memory and the CPU were tiny. In such a situation, you wanted the processor to hit the main memory as few times as possible. So chip architects defined richer, more complicated instructions so that programmers could write shorter, denser software programs. However, complicated instructions require more logic on the part of the CPU to interpret. Hardware-wise, that means more transistors. But in the pre-VLSI days, transistors on silicon were still very scarce. To work around this, Intel and other chip makers of the age produced hardware-level software code called microcode to serve as an extra translation layer. Microcode can be stored on cheaper hardware: read-only memories, or ROMs, memory chips with software permanently burned into their patterns. Less flexibility, but cheaper. So in essence, Intel and other companies were trading expensive RAM for inexpensive ROM.

The first computers had their own instruction sets. As software libraries emerged for those computers, the sets themselves became key moats for companies like IBM. But as those sets got thicker and bigger, people started asking: well, does it always have to be like that?

In the early 1970s, IBM and Ericsson teamed up to build a digital telephone switch to compete against AT&T. Such a switch needed a fast controller chip, and an IBM internal team was formed to work on that. The instinct was to use microcode-based chips, like IBM's big System/370 mainframe lines. But IBM had already done that for another internal switch project called Rosebud, and that ended up being too slow and would not have handled many calls.
Team leader and IBM Fellow John Cocke blamed that chip's sluggish performance on its microcode, believing that it introduced unnecessary overhead.
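
To make the ISA-versus-microarchitecture distinction and the microcode translation layer concrete, here is a minimal sketch, not anything shown in the video: a toy three-instruction ISA executed by two interchangeable implementations, one with hardwired decode and one that expands each instruction through a microcode ROM table. Every name in it, from PROGRAM to MICROCODE_ROM, is invented for illustration.

```python
# A toy three-instruction ISA run by two different "microarchitectures".
# Both produce identical results -- that equivalence is the point: the
# ISA is the contract; the implementation underneath is free to differ.

# The ISA: each instruction is (opcode, dest, src); four registers r0-r3.
PROGRAM = [
    ("ADD", 0, 1),  # r0 = r0 + r1
    ("SUB", 0, 2),  # r0 = r0 - r2
    ("CMP", 0, 1),  # r3 = 1 if r0 == r1 else 0
]


def run_hardwired(regs):
    """Implementation 1: each opcode triggers dedicated logic directly,
    as a hardwired decoder would. One decode step per instruction."""
    for op, d, s in PROGRAM:
        if op == "ADD":
            regs[d] = regs[d] + regs[s]
        elif op == "SUB":
            regs[d] = regs[d] - regs[s]
        elif op == "CMP":
            regs[3] = 1 if regs[d] == regs[s] else 0
    return regs


# Implementation 2: a microcoded decoder. A ROM table expands every ISA
# instruction into a sequence of simpler micro-operations, and an inner
# loop steps through them. That extra fetch-and-dispatch work per
# instruction is the kind of overhead Cocke blamed for Rosebud's speed.
MICROCODE_ROM = {
    "ADD": ["load_src", "alu_add", "store_dest"],
    "SUB": ["load_src", "alu_sub", "store_dest"],
    "CMP": ["load_src", "alu_cmp", "store_flag"],
}


def run_microcoded(regs):
    for op, d, s in PROGRAM:
        tmp = None
        for micro_op in MICROCODE_ROM[op]:  # the extra translation layer
            if micro_op == "load_src":
                tmp = regs[s]
            elif micro_op == "alu_add":
                tmp = regs[d] + tmp
            elif micro_op == "alu_sub":
                tmp = regs[d] - tmp
            elif micro_op == "alu_cmp":
                tmp = 1 if regs[d] == tmp else 0
            elif micro_op == "store_dest":
                regs[d] = tmp
            elif micro_op == "store_flag":
                regs[3] = tmp
    return regs


if __name__ == "__main__":
    a = run_hardwired([10, 3, 5, 0])
    b = run_microcoded([10, 3, 5, 0])
    print(a, b)    # [8, 3, 5, 0] [8, 3, 5, 0]
    assert a == b  # same ISA behavior, different internals
```

Both runs leave the registers in the same state, which is what the ISA guarantees to software; the microcoded version just pays an extra ROM lookup and micro-op loop on every instruction, a rough stand-in for the overhead described above.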
