Transcript
A (0:02)
Famously, we trace the Internet to ARPANET, a research network built for the US defense research agency ARPA, later DARPA. But in the 1960s and 70s, ARPANET was just one of several computer networks operating around the world. And while it had users and cool technical innovations, it was a research tool. And it wasn't growing particularly fast. How did that turn into the capital-I Internet? We can blame, in part, Japanese supercomputers. In today's video, we explore the Japanese peril that brought us the Internet.

But first, I want to remind you about the Asianometry Patreon and the early access tier. Members get to see new videos first and get the references attached. Early access directly supports the channel and really helps. Thank you. And on with the show.

ARPANET began life in October 1969 as a research-oriented network to connect a few labs and universities. And over the subsequent decade it remained just that, a research network for trying new networking ideas. It pioneered nifty concepts like packet switching. You can imagine it as container shipping, but for networking: data is broken up into small, self-contained packets, like shipping containers, to be sent anywhere around the world. Nifty concept, as I said, and why people believed the network might be able to survive a nuclear attack. But ARPANET was far from the only research network to employ it. Europe, for instance, nurtured several. In the 1970s, Donald Davies, one of the independent creators of the packet switching concept, convinced the UK's National Physical Laboratory, or NPL, to build its own packet-switched network. And over in France, Louis Pouzin and Hubert Zimmermann produced in 1972 a government-funded packet-switched network called Cyclades. Cyclades later connected with NPL's network as part of the larger European Informatics Network. So nobody can say that the Europeans lagged the Americans in terms of producing innovative networking technology, though I should add that neither packet-switched network gained traction, due to conflicts with the monopoly telecoms.

I think it is important to reiterate that ARPANET also wasn't large, nor was it really expected to be. In late December 1969 it had about four operational nodes. By 1976, that had grown to about 63. Five years later, in 1981, 213 nodes, adding a new one roughly every 20 days or so. Real heart-stopping growth there. And that was because it was a military tool. Access was limited to only those with Department of Defense connections, though those who did use it used it a lot, thanks to its addictive email capabilities. Other online networks were far more popular. For instance, the bulletin board systems, or BBSs, said to have been invented in 1978 by two computer hobbyists in Chicago. These were private servers accessed with microcomputers like the Apple II or IBM PC. You attach a modem to your PC and dial into a server via a telephone line. Once in, you can download or upload files or communicate with people via forums. There were tens of thousands of these BBSs during the 1980s and early 1990s. And then you had the commercial services: CompuServe, Prodigy, and America Online. They charged people a monthly fee to access their national information network for services like email, online chat rooms, air ticket buying, and even videotex. In 1981, the largest and oldest of the three, CompuServe, reached 10,000 subscribers thanks to a partnership with the retailer RadioShack. By 1984, they boasted 130,000 subscribers.
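As an aside, here is a minimal, purely illustrative sketch of the packet-switching idea described above, written in Python. The packet fields, the addresses, and the eight-byte chunk size are made up for the example; this is not ARPANET's actual packet format.

```python
from dataclasses import dataclass

# Illustrative only: a toy model of packet switching, not any real protocol.

@dataclass
class Packet:
    src: str        # source address
    dst: str        # destination address
    seq: int        # sequence number, so the receiver can reorder
    total: int      # how many packets make up the whole message
    payload: bytes  # the actual chunk of data

def packetize(message: bytes, src: str, dst: str, size: int = 8) -> list[Packet]:
    """Chop a message into small, self-contained packets."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [Packet(src, dst, seq, len(chunks), chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(packets) -> bytes:
    """Packets may arrive out of order; sort by sequence number and rejoin."""
    ordered = sorted(packets, key=lambda p: p.seq)
    return b"".join(p.payload for p in ordered)

if __name__ == "__main__":
    packets = packetize(b"LOGIN HOST-2", src="UCLA", dst="SRI")
    # Each packet could take a different route; order of arrival doesn't matter.
    assert reassemble(reversed(packets)) == b"LOGIN HOST-2"
```

The point is that each packet carries enough addressing information to travel on its own, so the network can route the pieces independently and the receiver can put them back together.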
So, ARPANET: a successful experimental tool with interesting innovations and an addictive email app, but nevertheless a niche thing for a few defense researchers spread out across the country. So what changed? Enter Japan and the supercomputer.

There's no precise definition for what makes a supercomputer. In the 1970s, it just meant whatever was the fastest computer then available in the world. Like the famous Cray-1 supercomputer and its theoretical peak performance of 160 million floating point operations per second, or flops. Real-world performance, however, was closer to 100 million flops. The category was created and dominated by American companies like Cray Research and Control Data Corporation. Customers were government labs for running physics simulations in things like aeronautics, or defense labs for weapons design or nuclear energy research, or secret agencies for cryptography. These supercomputers are not easy to make. That, plus their limited market and $10 to $30 million price tags, made them rather niche. But that Cray-1, announced in 1975 and released a year later, sold unexpectedly well, which caught a lot of people's attention.

In 1976, Japan's MITI completed a four-year industrial project with computer makers Hitachi, Fujitsu, NEC, and Oki Electric called the New Series. New Series focused on producing a computer capable of challenging IBM's System/370. New Series participants studied emitter-coupled logic, a family of high-speed integrated circuits, and concepts relating to vector processing, where a single operation can be performed on multiple data elements. Both innovations can be used for supercomputers. So though New Series was originally intended to challenge IBM, it gave Fujitsu, Hitachi, and NEC the technical expertise to develop their own supercomputers. And for the rest of the decade, the three worked on their entrant products.

In 1978, Japan's electronics trade body, the Japan Electronic Industry Development Association, set up a research project. The government was finishing up a joint cooperation effort called the Pattern Information Processing System project, developing technologies to recognize Japanese writing, audio, and whatnot. But after that ended in 1980, what would be next? The top options involved doing hardware R&D for an advanced supercomputer, the knowledge robot project, or doing software R&D for automatic translation, the natural language information processing system. Initially, the Japanese leaned towards the translation software project. But several things changed. Takuma Yamamoto, a former president of Fujitsu, visited the United States and realized that Japan was behind in computer technology, IBM in particular, and had to improve. So upon returning, Yamamoto switched over and advocated for the supercomputer project, even though the computer makers had only recently completed a hardware R&D project with the New Series. Hitachi, Fujitsu, and NEC, then ramping up to enter the supercomputer space, obviously felt that the R&D would be useful. Other participants, Toshiba, Oki, and Mitsubishi, had no plans for supercomputers, but felt that any R&D could be easily adapted to ordinary computers. Thus, in October 1981, the Japanese government initiated a national project called the High-Speed Computer System for Scientific and Technological Use, colloquially referred to as the Japanese Superspeed Computer project, which is too cool of a name to be real, so we just call it the supercomputer project.
And by the way, I should note that the supercomputer project was separate from the Fifth Generation project, a similar-ish sounding but more well-known project launched the following year, in 1982. That one focused on knowledge-based AI software systems.

Let me first start with a whirlwind tour of the supercomputer project's goals and R&D schedules. The project's total budget was 23 billion yen, or $100 million, over eight years. Newspaper writers in the United States would mistakenly report that the Japanese spent $100 million a year rather than $100 million in total. The final goal was to produce a very advanced supercomputer capable of 10 billion flops of performance eight years later, in 1989. Ten billion would be 100 times the Cray-1's 100 million real-world flops. And just in case you're wondering, an Nvidia B200 today does about 144 quadrillion flops.

In the first five years, the project would develop three very new fundamental device technologies: gallium arsenide, HEMTs, and Josephson junctions. A side effort would develop parallel architecture and software specific to supercomputing. Let me run through the three device technologies, starting with gallium arsenide. Gallium arsenide is a III-V direct bandgap semiconductor material known for its high electron mobility, so electrons can travel through it at faster speeds than silicon, though with less heat tolerance. The project studied gallium arsenide's physical properties and how to improve process yields later on. Seymour Cray himself would use gallium arsenide for his Cray-3 supercomputer, which was his last completed project before his untimely death in 1996.

Continuing on, the semiconductor material has to be made into fast-switching transistors, so the project also studied an emerging device structure called the High Electron Mobility Transistor, or HEMT. A HEMT is a field effect transistor, so it still has a source, drain, and gate. But where a traditional MOSFET moves charge carriers through a channel of a single doped semiconductor material, usually silicon, a HEMT forms its channel at a heterojunction, the border between two different semiconductor materials. When exposed to a high enough voltage, the heterojunction forms a 2D electron gas, giving electron mobilities 10 to 20 times higher than silicon. The HEMT was first conceived in 1979 by Takashi Mimura of Fujitsu Laboratories, who had been inspired by Carver Mead's high-speed MESFET transistor design.

The third device technology is the Josephson junction. These consist of two layers of superconducting material with a very thin layer of insulating material in between them. When in a superconducting mode, a current can quantum-mechanically tunnel through the insulating layer without electrical resistance. I mentioned these in several prior videos. Famously, IBM had worked on a Josephson junction-based supercomputer for over a decade; when the Japanese project began, the IBM project was still in progress. The supercomputer project's last three years would be dedicated to integrating the fundamental technologies together with the software and parallel architectures to complete the final computer.

Now that we know what the Japanese were doing, let us take a look at how the Americans saw it and Japan's entry into the supercomputer space. The first American headlines emerged in 1982, as Japanese computer makers made their first forays into the supercomputer market.
Fujitsu got the party started by announcing the FACOM VP-100 and VP-200 in July 1982, claiming that the latter machine, at 500 million flops, was 20% faster than Cray's then-top machine, the Cray X-MP. A month later, Hitachi announced its first model, the S-810/20. They claimed maximum performance numbers of 630 million flops and hinted that it might be exported abroad. The American producers, Cray and Control Data, lowered their prices in response. But the new competition was troubling. Supercomputers were then a niche market. Cray Research had a single product and generated revenues less than 5% of Fujitsu's, Hitachi's, or NEC's. The Japanese could subsidize their supercomputers with profits from their other businesses. Moreover, Fujitsu, Hitachi, and NEC were semiconductor leaders and could leverage that silicon expertise. Cray actually bought most of their chips from Fujitsu and Hitachi.

The Americans feared another DRAM situation. Right then, Japan's radically lower memory prices, which some attributed to dumping, were putting severe pressure on the market. The industry estimated total losses of $300 million in 1981. Would the same happen in supercomputers? So when news broke about the Japanese supercomputer project, Americans didn't see it as a speculative multi-year industrial project, but rather as the Japanese government splashing unfair subsidies to target yet another vulnerable, iconic industry. A typical headline of the era claimed that the Japanese were attempting to corner the world market. And in supercomputers, there were worries that if American government agencies became dependent on Japanese supercomputers, the Japanese could leverage that for political gain. The US government did it themselves in the 1960s, when they barred IBM and Control Data from selling computers to France for nuclear weapons research. And finally, there was the concern that supercomputers were the future, and that Japan was investing in that future while America dawdled. For a long time, supercomputers were a niche product, bought by government labs and university nerds for niche simulations. But in the early 1980s, both the Japanese and Americans predicted that supercomputing power would enable breakthroughs in nuclear fusion, meteorology, aerospace, automobiles, molecular chemistry, genetics, and even semiconductor design. Thus, to lose supercomputer preeminence would be to lose what was considered the linchpin and central driver of the world's technological future.

So America needed to act. But what to do? In 1982, the National Science Foundation (NSF) and the US Defense Department sponsored a multidisciplinary panel led by the mathematician Peter Lax, staffed by 15 prestigious scientists, including Nobel Prize winner Kenneth Wilson. The Lax Report, as it was called, was released in December 1982, but its conclusions leaked months earlier. The US has been and continues to be the leader in supercomputer technology and in its use of supercomputers. In the 1970s, the US government slackened its support while other countries increased theirs. And today there is a distinct danger that the US will fail to take full advantage of this leadership position and make the needed investments to secure it for the future. The report highlighted supercomputer projects in West Germany, the United Kingdom, and France. But of course, it was Japan that took precedence. The Japanese are striving to become serious competitors of domestic manufacturers, and US dominance of the supercomputer market may soon be a thing of the past.
The Japanese government-sponsored national supercomputer project is aimed at the development, by 1989, of a machine 1,000 times faster than current machines. There is no comparable technical program in the United States. The report emphasized that the US federal government had to re-establish support of the supercomputer market and recommended four measures. The first three covered R&D on supercomputer design systems, more funds for computational mathematics, software, and algorithm research, and resources for personnel training. Pretty standard stuff. But their leading recommendation was to establish a national high-bandwidth computer network so that America's scientists and engineers could have more access to supercomputer facilities. A report from scientists at Lawrence Livermore Lab put it this way: compared to many European and Japanese universities, our US colleges are computer poor. A program for the sponsorship of university supercomputers is needed in this country. Kenneth Wilson, the panel's aforementioned Nobel laureate, testified to Congress that the average German grad student had more access to supercomputers than he, a Nobel winner, had.

Congress was convinced. Representative Sherwood Boehlert in May 1983 said that it was a national disgrace that major American universities did not have access to the latest supercomputers. Despite concerns about what now seems like a laughably low $200 million deficit, Congress appropriated $6 million to the 1984 NSF budget for building new supercomputer centers. More funding was later added to build a computerized network to connect those centers. NSF established an Office of Advanced Scientific Computing (OASC) to achieve its new mission. They first issued several million-dollar grants to buy supercomputing time from universities like Purdue, the University of Minnesota, and Harvard for their researchers. By the end of 1985, they had bought 30,000 supercomputer hours and allocated them to 800 people. In 1984, OASC also funded four new supercomputer centers in San Diego, Princeton, Illinois, and Ithaca, New York. A fifth was later placed in Pittsburgh.

So that was all phase one. Phase two would be building the network, and that was trickier. Many academic researchers did not then actually believe that supercomputers were critical to the advancement of their field. And there were agency authority issues to untangle. The existing supercomputer centers were largely owned by the Department of Energy, which was wary about what seemed like a power grab by the NSF. ARPANET was governed by DARPA, and various US agencies ran and operated their own small networks like BITNET, Mailnet, and MFENet. Even NSF had its own network, called CSNET, which was then in its third year in 1984. The original idea had been to consolidate all these small networks and computing centers together into a single network called ScienceNet. But that didn't get traction, because many academics, including the physicists, preferred direct lines to the supercomputer centers over a network. So the OASC pivoted to a new idea. They considered expanding ARPANET and adding the supercomputer centers then under construction. NSF and DARPA signed a memorandum of agreement about this. But things had changed at ARPANET in the previous year, 1983. The network was split into civilian and military halves, and DARPA no longer ran ARPANET's civilian half. So things bogged down. And as 1985 came around, the supercomputer centers were done, but the networking hadn't made much progress.
That year, NSF hired Dennis Jennings as its first director of networking. Frustrated with the lack of progress, he decided to start anew. Jennings had previously worked on the aforementioned NSF-funded network. There he saw researchers using the network not only for sharing computer resources, but for communicating through services like email. So he expanded the scope of ScienceNet, later renamed to just NSFNET, apparently for copyright reasons, into a general-purpose network. He argued that science research had always depended on communication, and that researchers could still access computer resources, including supercomputers, if necessary. C. Gordon Bell, who headed NSF's computing directorate, later reminisced on NSFNET's creation in a 1995 interview, saying the NSFNET was proposed to be used for supercomputers, while all the networkers knew it wasn't supercomputers. There was no demand.

In addition to the core idea of being a general-purpose network, NSFNET incorporated two major architecture elements. First, a three-tier structure: campus networks would connect to regional networks like NYSERNet and Westnet, funded by consortiums of universities. Those regional networks would in turn connect to a national backbone built out, funded, and controlled by NSF. This structure was inspired by the breakup of the telephone monopoly AT&T, which had only just happened in 1984. The idea was to distribute the costs of building the network amongst various parties. The second key technical decision that Jennings convinced NSF of was that everyone receiving grants for NSF networks had to standardize on the TCP/IP protocol suite.

What is that? The name refers to a set of underlying protocols, the two main ones being the Transmission Control Protocol and the Internet Protocol. Roughly speaking, TCP/IP helps computers communicate with each other across different networks. It was produced in 1973 by Stanford professor Vinton Cerf and Robert Kahn, then working at DARPA. They were also influenced by the ideas of Louis Pouzin, who co-created the aforementioned French Cyclades network. TCP/IP was a military tool, but DARPA released it into the public domain in 1974. This open status helped TCP/IP gain traction despite academics' initial skepticism and a plethora of competing proprietary and consortium-led options, a notable member of the latter being the Open Systems Interconnection, or OSI, introduced in 1984 by a Europe-led consortium and tentatively adopted even by the US government itself. In 1983, ARPANET adopted TCP/IP, switching away from the prior NCP protocol. Later that year, the popular Berkeley flavor of Unix adopted it, which in turn led to Sun Microsystems' popular workstation computers taking it on too. NSFNET's decision to back TCP/IP finalized it as a lingua franca to connect the world's different networks, creating the internetwork which we now call the Internet.

The NSFNET backbone went live in 1986, boasting speeds of about 56 kilobits per second. It linked six nationally funded supercomputer centers: the five aforementioned centers funded by the NSF and a sixth associated with the National Center for Atmospheric Research. Traffic grew at a terrific rate and the backbone became congested. So in 1987, NSF solicited proposals for upgrades and chose a group of universities and telecom companies. The new backbone, upgraded to speeds of 1.5 megabits per second, went live in 1988, and over the next two years more networks, new and old, were connected to the NSFNET backbone.
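To make the TCP/IP layering concrete, here is a minimal sketch using Python's standard sockets API. It runs both ends on one machine over the loopback address; the host, port, and messages are made up for the demo, and real deployments would of course span different hosts and networks.

```python
import socket

# Illustrative only: a tiny TCP exchange using the standard sockets API.
# TCP gives the application a reliable, ordered byte stream; IP handles
# addressing and routing across whatever networks sit between the two hosts.

HOST, PORT = "127.0.0.1", 9090  # loopback address and an arbitrary port for the demo

def run_server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()          # wait for one client to connect
        with conn:
            data = conn.recv(1024)          # read up to 1 KB from the stream
            conn.sendall(b"ACK: " + data)   # echo it back

def run_client() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))           # TCP's handshake happens here
        cli.sendall(b"hello, internetwork")
        print(cli.recv(1024).decode())      # prints "ACK: hello, internetwork"

if __name__ == "__main__":
    import threading, time
    threading.Thread(target=run_server, daemon=True).start()
    time.sleep(0.5)  # crude: give the server a moment to start listening
    run_client()
```

That separation is what let NSFNET's campus, regional, and backbone tiers interoperate: applications only deal in addresses and byte streams, while the networks underneath handle how the packets actually get there.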
The OG ARPANET and CSNET networks got connected, and the old nodes were subsequently decommissioned. In 1989, 500 million data packets were switched, a 500% increase in just a single year. The growth during this time was torrid, doubling every seven months, and forced NSFNET to spend millions to add more nodes and upgrade router capacity. By 1990, NSFNET connected 1,600 total networks, including 200 universities, industry institutes, and laboratories across 50 countries. User count estimates are hard to come by, and harder to trust, but the claim is that there were about 250,000 in 1990.

Up until 1990, all this was happening without much notice from the tech public. They and the rest of the public then were more fascinated with Microsoft, whiz kid Bill Gates, and their monopoly grip on the PC ecosystem. This changed with the World Wide Web, a famous chapter in Internet history. In 1990, a British software engineer at CERN in Switzerland named Tim Berners-Lee sought an easy way to navigate large chunks of information scattered across different computer systems. So as a side project, he created a flexible hypertext system. Berners-Lee then thought to layer his hypertext concept on top of the growing Internet to create a global hypertext system, and he dubbed it the World Wide Web. The Web had three major components. First, the use of HTML to encode documents. Second, the use of unique addresses called URLs to locate online resources. And third, a transfer protocol called HTTP, which let servers transfer data to these applications called browsers. Berners-Lee publicly released the World Wide Web project, leading many NSF centers to experiment with new formats to navigate the Internet. One of those would become wildly successful. In late 1992, the University of Illinois's National Center for Supercomputing Applications put together a team led by Marc Andreessen to produce a browser called Mosaic. Released in 1993, the user-friendly Mosaic browser became a massive hit. Downloaded millions of times, it brought masses of new users onto the Internet. That year, the number of World Wide Web servers surged from 100 to 800. Andreessen, sensing the opportunity, eventually left to start a for-profit startup called Netscape and compete against his old product.

With the Internet's torrid growth, it started to dawn on people that NSFNET was for real. And with that, its status as a government-owned entity started to make people, as well as the NSF itself, a bit uncomfortable. NSF allowed commercial companies to connect to the backbone. But because Congress did not initially want companies to make money off of NSFNET, and NSF also did not want to police content like email, NSF restricted what commercial activities could be conducted on its backbone. This became an issue as the Internet's commercial value started to emerge. In 1992, NSFNET completed another speed upgrade to its backbone, and it now handled 10x more traffic than it did the prior year. Internet use grew at a fiery 15% a month. At around 10 million or so users, the Internet's size hit escape velocity, particularly in the case of email. In the past, an email sent on a network like CompuServe or ARPANET could only reach another user on that same network. But now, with the Internet's size and an open, broadly accepted email protocol called the Simple Mail Transfer Protocol, emails sent to recipients over the Internet were likely to reach them.
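To tie the Web's three components together, here is a minimal sketch in Python that takes a URL, issues a raw HTTP request over a TCP socket, and gets back an HTML document. The URL http://example.com/ is just a placeholder host for the demo, and real browsers do far more, but the shape of the exchange is the same.

```python
# Illustrative only: the three Web ingredients in one request.
# URL -> names the resource; HTTP -> asks a server for it; HTML -> what comes back.
from urllib.parse import urlparse
import socket

url = "http://example.com/"                 # placeholder URL for the demo
parts = urlparse(url)                        # scheme="http", netloc="example.com", path="/"

request = (
    f"GET {parts.path} HTTP/1.0\r\n"         # the HTTP request line a browser would send
    f"Host: {parts.netloc}\r\n"
    "\r\n"
).encode()

with socket.create_connection((parts.netloc, 80)) as s:
    s.sendall(request)
    response = b""
    while chunk := s.recv(4096):             # read until the server closes the connection
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode().splitlines()[0])      # e.g. "HTTP/1.0 200 OK"
print(body[:80])                             # the start of the HTML document
```

The same pattern of open protocols over TCP/IP applied to email: SMTP is just another agreed-upon conversation between servers, which is why messages could finally cross network boundaries.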
That universal reachability was very commercially valuable, and it only further incentivized networks, colloquially called Internet Service Providers, to connect to NSFNET, which in turn applied further pressure on NSFNET to accommodate all their requests. Though talks about privatization dated back to 1990, it was in 1992 that the NSF had to do something. The Internet was no longer a networking tool for scientific research. Contemporary estimates said that less than a third of its users were researchers. Thus, over the next three years, NSF slowly privatized the Internet. They loosened the restrictions on commercial traffic, and the single NSFNET backbone was restructured into multiple interconnected backbones operated by commercial providers. This was completed surprisingly quickly and seamlessly. In April 1995, NSF completed the privatization and shut down NSFNET. The whole transfer to private hands went off without a hitch.

We got so caught up in Internet frenzy throughout this video that we almost forgot whatever happened to the Japanese supercomputer project. You know, the thing that kicked all this off? Did America successfully halt the Japanese supercomputer incursion? Well, let me start with the project. The project itself went along as scheduled, came in a bit under budget actually, and its findings contributed to humanity's understanding of compute. However, measured by patents, published papers, and commercial products, the supercomputer project underperformed. For example, in Josephson junctions, Japan's efforts improved on IBM's by introducing niobium-based junctions that were far more manufacturable than IBM's lead-based stuff, but still not usable for a real computer. Fujitsu eventually produced a 16-kilobit SRAM with their HEMT design, and the learnings were good enough to motivate Hitachi and NEC to continue working on it after the project ended. But again, it was not used in a supercomputer. MITI eventually held up supercomputers like the NEC SX-3, a 22-gigaflop beast released in 1990, as proof that the supercomputer project achieved its goals. But that computer used traditional silicon CMOS to achieve those speeds. So no, the project didn't achieve its industrial goals. But it didn't have to. Fujitsu, Hitachi, and NEC took share from the American computer makers with what they already had.

Responding to the threat, Washington, D.C. yet again exercised its trade leverage to try and open the Japanese markets to American supercomputers throughout the second half of the 1980s, and public pressure leaned on American entities to cancel any purchases of Japanese supercomputers. A well-publicized incident was in 1987, when MIT canceled their purchase of a Fujitsu supercomputer. Nevertheless, despite that support, America's independent supercomputer firms were slowly whittled down. Control Data's supercomputer subsidiary ETA Systems closed in 1989. Steve Chen, former star designer of the Cray X-MP, founded a firm called Supercomputer Systems Inc. with funding from IBM. It closed shop in 1992. Cray itself attempted to enter new markets, but faltered after losing key international contracts, and was acquired by Silicon Graphics in 1996. Few people noticed, to be honest. In 1995, Bill Gates wrote and published the famous Internet Tidal Wave memo, which in my opinion marks the beginning of the Internet age. The gold rush had begun.

Alright everyone, that's it for tonight. Thanks for watching. Subscribe to the channel, sign up for the Patreon, and I'll see you guys next time.
