A history lesson on government R&D, Part 4

See Part 1, Part 2, and Part 3

It’s been a while since I’ve worked on this. I used up a lot of my passion for working on this series in the first three parts. Working on this part was like eating my spinach, not something I was enthused about, but it was a good piece of history to learn.

Nevertheless, I am dedicating this to Bob Taylor, who passed away on April 13. He got the ball rolling on building the Arpanet, the predecessor to the internet, when a lot of his colleagues in the Information Processing Techniques Office at ARPA didn’t want to do it. When he started up the Computer Science Laboratory at the Xerox Palo Alto Research Center (PARC), he brought in people who ended up being important contributors to the development of the internet. His major project at PARC, though, was the development of the networked personal computer in the 1970s, which I covered in Part 3 of this series. He left Xerox and founded the Systems Research Center at Digital Equipment Corp. in the mid-80s. In the mid-90s, his lab at DEC developed the internet search engine AltaVista.

He and J.C.R. Licklider were both ARPA/IPTO directors, and both were psychologists by training. Among all the people I’ve covered in this series, both of them had the greatest impact on the digital world we know today, in terms of providing vision and support to the engineers who did the hard work of turning that vision into testable ideas that could then inspire entrepreneurs who brought us what we are now using.

Part 2 covers the history of how packet switching was devised, and how the Arpanet got going. You may wish to look that over before reading this article, since I assume that background knowledge.

My primary source for this story, as it’s been for this series, is “The Dream Machine,” by M. Mitchell Waldrop, which I’ll refer to occasionally as “TDM.”

The Internet

Ethernet/PUP: The first internetworking protocol

Bob Metcalfe, from the University of Texas at Austin

In 1972, Bob Metcalfe was a full-time staffer on Project MAC, building out the Arpanet, and using that work as the basis for his Ph.D. in applied mathematics from Harvard. While he was setting up the Arpanet node at Xerox PARC, he stumbled upon ALOHAnet, an ARPA-funded project led by Norman Abramson at the University of Hawaii. It was a radio-based packet-switching network. The Hawaiian islands created a natural environment for this solution to arise, since telephone connections across the islands were unreliable and expensive.

The Arpanet’s protocol waited for a gap in traffic before sending packets. On ALOHAnet, this would have been impractical: since radio waves were prone to interference, a sending terminal might not even hear a gap. So, terminals sent packets immediately to a transceiver switch. (A network switch is a computer that receives packets and forwards them to their destination; in this case, it retransmitted the sender’s signal to other nodes on the network.) If the sending terminal received acknowledgement from the switch that its packets had been received, it knew they got to their destination successfully. If not, it assumed a collision with another sending terminal had garbled the packet signal, so it waited a random period of time before resending the lost packets, the idea being to try again when other terminals were not sending.

The scheme had a weakness, though: as Abramson implemented it, the network could only operate at 17% of capacity. If use increased beyond that limit, the random-wait scheme would become overloaded with collisions, and the system would eventually grind to a halt. Metcalfe thought this could be improved upon, and ended up collaborating with Abramson. Metcalfe incorporated his improvements into his Ph.D. thesis. He came to work at PARC in 1972 to use what he’d learned working on ALOHAnet to develop a networking scheme for the Xerox Alto. Instead of using radio signals, he used coaxial cable, and found he could transmit data at a much higher speed that way. He called what he developed Ethernet.
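
To make the retransmission scheme a bit more concrete, here is a minimal sketch of the logic in Python. It only illustrates the idea described above, not ALOHAnet's actual code; the names (send_with_backoff, transmit, ack_received) and the constants are my own.

```python
import random
import time

MAX_ATTEMPTS = 8         # give up after this many tries (my choice, for illustration)
BACKOFF_WINDOW = 0.1     # longest random wait between retries, in seconds (illustrative)

def send_with_backoff(packet, transmit, ack_received):
    """Send immediately, ALOHA-style; if no acknowledgement comes back,
    assume a collision garbled the packet and retry after a random delay."""
    for attempt in range(MAX_ATTEMPTS):
        transmit(packet)                 # don't wait for a gap in traffic; just send
        if ack_received(packet):         # the switch acknowledged it intact
            return True
        # No ack: assume another terminal sent at the same time and the signals
        # collided. Wait a random interval so the two senders are unlikely to
        # collide again on the retry.
        time.sleep(random.uniform(0, BACKOFF_WINDOW))
    return False                         # channel too congested; give up for now
```

The weakness Metcalfe worked on is visible in that logic: the busier the channel gets, the more retries occur, and the retries themselves create still more collisions.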

The idea behind it was to create local area networks, or LANs, using coaxial cable, which would allow the same kind of networking one could get on the Arpanet, but in an office environment, rather than between machines spread across the country. Rather than having every machine on the main network, there would be subnetworks that would connect to a main network at junction points. Along with that, Metcalfe and David Boggs developed the PUP (PARC Universal Packet) protocol, which allowed Ethernet to connect to other packet-switching networks that were otherwise incompatible with each other. The developers of TCP/IP, the protocol now used on the internet, would pursue these same goals a little later, when they began their own research on how to accomplish them.

In 1973, Bob Taylor set a goal in the Computer Science Lab at Xerox PARC to make the Alto computer networked. He said he wanted not only personal computing, but distributed personal computing, at the outset.

The creation of the Internet

Bob Kahn, from the ACM

Another man who enters the picture at this point is Bob Kahn. He joined Larry Roberts at DARPA in 1972, originally to work on a large project exploring computerized manufacturing, but that project was cancelled by Congress the moment he got there. Kahn had been working on the Arpanet at BBN, and he wanted to get away from it. Well…Roberts informed him that the Arpanet was all they were going to be working on in the IPTO for the next several years, but he persuaded Kahn to stay on, and work on new advancements to the Arpanet, such as mobile satellite networking, and packet radio networking, promising him that he wouldn’t have to work on maintaining and expanding the existing system. DARPA had signed a contract to expand the Arpanet into Europe, and networking by satellite was the route that they hoped to take. Expanding the packet radio idea of ALOHAnet was another avenue in which they were interested. The idea was to look into the possibility of allowing mobile military forces to communicate in the field using these wireless technologies.

Kahn got satellite networking going using the Intelsat IV satellite. He planned out new ideas for network security and digital voice transmission over the Arpanet, and he sketched out mobile terminals that the military could use, using packet radio. He had a thought, though: how were all of these different modes of communication going to communicate with each other?

As you’ll see in this story, Bob Kahn and Vint Cerf on the one hand, and Bob Metcalfe on the other, in a sense ended up duplicating each other’s efforts: Kahn working at DARPA, Cerf at Stanford, and Metcalfe at Xerox in Palo Alto. They came upon the same basic concepts for how to create internetworking independently. Metcalfe and Boggs just came upon them a bit earlier than everyone else.

Kahn thought it would seem easy enough to make the different networks he had in mind communicate seamlessly: just make minor modifications to the Arpanet protocol for each system. He also wondered about future expansion of the network with new modes of communication that weren’t on the front burner yet, but probably would be one day. Making one-off modifications to the Arpanet protocol for each new communications method would eventually make network maintenance a mess. It would make the network more and more unwieldy as it grew, which would limit its size. He understood from the outset that this method of network augmentation was a bad management and engineering approach.

Instead of integrating the networks together, he thought a better method was to design the network’s expansion modularly, where each network would be designed and managed separately, with its own hardware, software, and network protocol. Gateways would be created via hybrid routers that had Arpanet hardware and software on one side, and the foreign network’s hardware and software on the other side, with a computer in the middle translating between the two.

Kahn recognized how this structure could be criticized. The network would be less efficient, since each packet from a foreign network to the Arpanet would have to go through three stages of processing, instead of one. It would be less reliable, since there would be three computers that could break, instead of one. It would be more complex, and expensive. The advantage would be that something like satellite communication could be managed completely independently from the Arpanet. So long as both the Arpanet and the foreign network adhered to the gateway interface standard, it would work. You could just connect them up, and the two networks could start sending and receiving packets. The key design idea was that neither network would have to know anything about the internal details of the other. Instead of a closed system, it would be open: a network of networks that could accommodate any specialized network. This same scheme is used on the internet today.

Kahn needed help in defining this open concept. So, in 1973, he started working with Vint Cerf, who had also worked on the Arpanet. Cerf had just started work at the computer science department of Stanford University.

Vint Cerf, from Wikipedia

Cerf came up with an idea for transporting packets between networks. First, a universal transmission protocol would be recognized across the whole network. Each sending network would encode its packets with this universal protocol, and “wrap” them in an “envelope” bearing “markings” that the sending network understood in its own protocol. The gateway computer would know about the “envelopes” that the sending and receiving networks understood, along with the universal protocol encoding they contained. When the gateway received a packet, it would strip off the sending “envelope” and interpret the universal protocol enclosed within it. Using that, it would wrap the packet in a new “envelope” that the destination network could use to carry the packet through itself. Waldrop had a nice analogy for how it worked. He said it would be as if you were sending a card from the U.S. to Japan. In the U.S., the card would be placed in an envelope that had your address and the destination address written in English. When it left the U.S. border, the U.S. envelope would be stripped off, and the card would be placed in a new envelope, with the source and destination addresses translated into Kanji, so it could be understood when it reached Japan. This way, each network would “think” it was transporting packets using its own protocol.
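
Waldrop's envelope analogy maps fairly directly onto code. Here is a minimal sketch, in Python, of what a gateway does conceptually in Cerf's scheme: strip off the sending network's "envelope," keep the universal packet inside intact, and re-wrap it for the destination network. The class and field names are mine, purely for illustration; real gateway software works on binary headers and is far more involved.

```python
from dataclasses import dataclass

@dataclass
class UniversalPacket:
    """The 'card' inside: addressed in the universal protocol every network honors."""
    src_host: str
    dst_host: str
    payload: bytes

@dataclass
class Envelope:
    """The 'envelope' outside: markings only the local network understands."""
    network: str
    local_header: dict
    inner: UniversalPacket

def gateway_forward(envelope: Envelope, dst_network: str) -> Envelope:
    """What the gateway does: strip the sending envelope, keep the universal
    packet unchanged, and re-wrap it for the destination network."""
    card = envelope.inner                              # strip off the old envelope
    new_header = {"format": dst_network + "-native"}   # markings the next network understands
    return Envelope(dst_network, new_header, card)

# Example: a packet leaves the Arpanet and gets re-wrapped for a satellite network.
pkt = UniversalPacket(src_host="host-a", dst_host="host-b", payload=b"hello")
print(gateway_forward(Envelope("arpanet", {"imp": 42}, pkt), "satnet"))
```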

Kahn and Cerf also understood the lesson from ALOHAnet. In their new protocol, senders wouldn’t look for gaps in transmission, but would send packets immediately and hope for the best. If any packets were missed at the other end, due to collisions with other packets, they would be re-sent. This portion of the protocol was called TCP (Transmission Control Protocol). A separate protocol was created to manage how to address packets to foreign networks, since the Arpanet didn’t understand how to do this. This was called IP (Internet Protocol). Together, the new internetworking protocol was called TCP/IP. Kahn and Cerf published their initial design in 1974, in a paper titled “A Protocol for Packet Network Intercommunication.”

Cerf started up his internetworking seminars at Stanford in 1973, in an effort to create the first implementation of TCP/IP. Cerf described his seminars as being as much about consensus as technology. He was trying to get everyone to agree to a universal protocol. People came from all over the world, in staggered fashion. It was a drawn out process, because each time new attendees showed up, the discussion had to start over again. By the end of 1974, the first detailed design was drawn up in a document, and published on the Arpanet, called Request For Comments (RFC) 675. Kahn chose three contractors to create the first implementations: Cerf and his students at Stanford, Peter Kirstein and his students at University College in London, and BBN, under Ray Tomlinson. All three implementations were completed by the end of 1975.

Bob Metcalfe, and a colleague from Xerox PARC named John Shoch, eagerly joined in with these seminars, but Metcalfe felt frustrated, because his own work on Ethernet, and on PUP, the universal protocol he and Boggs had developed to interface Ethernet with the Arpanet and Data General’s network, was proprietary. He was able to make contributions to TCP/IP, but couldn’t contribute much overtly, or else it would jeopardize Xerox’s patent application on Ethernet. He and Shoch were able to make some covert contributions by asking leading questions, such as, “Have you thought about this?” and “Have you considered that?” Cerf picked up on what was going on, and finally asked them, in what I can only assume was a humorous moment, “You’ve done this before, haven’t you?” Also, Cerf’s emphasis on consensus was taking a long time, and Metcalfe eventually decided to part ways with the seminars, because he wanted to get back to work on PUP at Xerox. In retrospect, he had second thoughts about that decision. He and David Boggs at Xerox got PUP up and running well before TCP/IP was finished, but it was a proprietary protocol. TCP/IP was not, and by making his decision, he had cut himself off from influencing the direction of the internet into what it eventually became.

PARC’s own view was that Ethernet would be used by millions of networks. The initial vision of the TCP/IP group was that it would be an extension of the Arpanet, and as such would only need to accommodate the needs that DARPA had for it. Remember, Kahn set out for his networking scheme to accommodate satellite and military communications, and some other uses that he hadn’t thought of yet. Metcalfe saw that TCP/IP, as then designed, would only accommodate 256 other networks. He said, “It could only work with one or two nets per country!” He couldn’t talk to them about the possibility of millions of networks using it, because that would have tipped others off to proprietary technology that Xerox had for local area networks (LANs). Waldrop doesn’t address how TCP/IP was later expanded to accommodate more than 256 networks. Somehow it was, because it has obviously grown way beyond that.
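
Waldrop leaves that technical point hanging, so here is my own brief, hedged gloss, which goes beyond the book: the early internet address format reserved 8 bits for the network number and the rest for the host, which is where the figure of 256 comes from. Later revisions of IP widened the network field (the class A/B/C scheme, and eventually classless CIDR addressing), which is how the internet grew past that limit. The arithmetic is simple:

```python
# Illustrative arithmetic only: how many networks an n-bit network field can name.
def network_capacity(network_bits: int) -> int:
    return 2 ** network_bits

print(network_capacity(8))   # an 8-bit network field  ->        256 networks
print(network_capacity(14))  # a 14-bit field (class B-sized) ->  16,384 networks
print(network_capacity(21))  # a 21-bit field (class C-sized) -> 2,097,152 networks
```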

The TCP/IP working group worked on porting the network stack (called the Kahn-Cerf internetworking protocol in the early days) to many operating systems, and migrating subprotocols from the Arpanet to the Kahn-Cerf network, from the mid-1970s into 1980.

People on the project started using the term “Internet,” which was just a shortened version of the term “internetworking.” It took many years before TCP/IP was considered stable enough to port the Arpanet over to it completely. The Department of Defense adopted TCP/IP as its official standard in 1980. The work of converting the Arpanet protocols to use TCP/IP was completed by January 1, 1983, thus creating the Internet. This made it much easier to expand the network than was the case with the Arpanet’s old protocol, because TCP/IP incorporates protocol translation into the network infrastructure. So it doesn’t matter if a new network comes along with its own protocol. It can be brought into the internet, with a junction point that translates it.

We should keep in mind as well that at the point in time that the internet officially came online, nothing had changed with respect to the communications hardware (which I covered in Part 2). 56 Kbps was still the maximum speed of the network, as all network communications were still conducted over modems and dedicated long-distance phone lines. This became an issue as the network expanded rapidly.

The next generation of the internet

While the founding of the internet would seem to herald an age of networking harmony, this was not the case. Competing network standards proliferated before, during, and after TCP/IP was developed, and the networking world was just as fragmented when TCP/IP got going as before. People who got on one network could not transfer information, or use their network to interact with people and computers on other networks. The main reason for this was that TCP/IP was still a defense program.

The National Science Foundation (NSF) would unexpectedly play the critical role of expanding the internet beyond the DoD. The NSF was not known for funding risky, large-scale, enterprising projects. The people involved with computing research never had much respect for it, because it was so risk-averse, and politicized. Waldrop said,

Unlike their ARPA counterparts, NSF funding officers had to submit every proposal for “peer-review” by a panel of working scientists, in a tedious decision-by-committee process that (allegedly) quashed anything but the most conventional ideas. Furthermore, the NSF had a reputation for spreading its funding around to small research groups all over the country, a practice that (again allegedly) kept Congress happy but most definitely made it harder to build up a Project MAC-style critical mass of talent in any one place.

Still, the NSF was the only funding agency chartered to support the research community as a whole. Moreover, it had a long tradition of undertaking … big infrastructure efforts that served large segments of the community in common. And on those rare occasions when the auspices were good, the right people were in place, and the planets were lined up just so, it was an agency where great things could happen. — TDM, p. 458

In the late 1970s, researchers at “have not” universities began to complain that while researchers at the premier universities had access to the Arpanet, they didn’t. They wanted equal access. So, the NSF was petitioned by a couple schools in 1979 to solve this, and in 1981, the NSF set up CSnet (Computer Science Network), which linked smaller universities into the Arpanet using TCP/IP. This was the first time that TCP/IP was used outside of the Defense Department.

Steve Wolff, from Internet2

The next impetus for expanding the internet came from physicists, who thought of creating a network of supercomputing centers for their research, since using pencil and paper was becoming untenable, and they were having to beg for time on supercomputers at Los Alamos and Lawrence Livermore that had been purchased by the Department of Energy for nuclear weapons development. The NSF set up such centers at the University of Illinois, Cornell University, Princeton, Carnegie Mellon, and UC San Diego in 1985. With that, the internet became a national network, but as I said, this was only the impetus.

Steve Wolff, who would become a key figure in the development of the internet at the NSF, said that as soon as the idea for these centers was pitched to the NSF, it became instantly apparent to them that this network could be used as a means of communication among scientists about their work. That last part was something the NSF just “penciled in,” though. It didn’t make plans for how big this goal could get. From the beginning, the NSF network was designed to allow communication among the scholarly community at large. The NSF tried to expand the network outward from these centers to K-12 schools, museums, “anything having to do with science education.” We should keep in mind how unusual this was for the NSF. Waldrop paraphrased Wolff, saying,

[The] creation of such a network exceeded the foundation’s mandate by several light-years. But since nobody actually said no—well, they just did it. — TDM, p. 459

The NSF declared TCP/IP as its official standard in 1985 for all digital network projects that it would sponsor, thenceforth. The basic network that the NSF set up was enough to force other government agencies to adopt TCP/IP as their network standard. It forced other computer manufacturers, like IBM and DEC, to support TCP/IP as well as their own networking protocols that they tried to push, because the government was too big of a customer to ignore.

The expanded network, known as “NSFnet,” came online in 1986.

And the immediate result was an explosion of on-campus networking, with growth rates that were like nothing anyone had imagined. Across the country, those colleges, universities, and research laboratories that hadn’t already taken the plunge began to install local-area networks on a massive scale, almost all of them compatible with (and having connections to) the long-distance NSFnet. And for the first time, large numbers of researchers outside the computer-science departments began to experience the addictive joys of electronic mail, remote file transfer and data access in general. — TDM, p. 460

That same year, a summit of network representatives came together to address the problem of naming computers on the internet, and assigning network addresses. From the beginnings of the Arpanet, a paper directory of network names and addresses had been updated and distributed to the different computer sites on the network, so that everyone would know how to reach each other’s computers. There were so few computers on it that nobody worried about naming conflicts. By 1986 this system was breaking down. There were so many computers on the network that assigning names to them started to be a problem. As Waldrop said, “everyone wanted to claim ‘Frodo’.” The representatives came to the conclusion that they needed to automate the process of creating the directory for the internet. The result was the Domain Name System (DNS). (I remember getting into a bit of an argument with Peter Denning in 2009 on the issue of the lack of computer innovation after the 1970s. He brought up DNS as a counter-example. I assumed DNS had come in with e-mail in the 1970s. I can see now I was mistaken on that point.)
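
DNS still does exactly this job today: it turns a human-friendly name into a network address automatically, replacing the old hand-distributed directory. A quick illustration using Python's standard library (the host name is just an example):

```python
import socket

# Ask the Domain Name System to resolve a name to an IP address,
# the automated replacement for the old hand-maintained host table.
hostname = "example.com"                 # any reachable host name will do
address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {address}")
```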

Then there was the problem of speed:

“When the original NSFnet went up, in nineteen eighty-six, it could carry fifty-six kilobits per second, like the Arpanet,” says Wolff. “By nineteen eighty-seven it had collapsed from congestion.” A scant two years earlier there had been maybe a thousand host computers on the whole Internet, Arpanet included. Now the number was more like ten thousand and climbing rapidly. — TDM, p. 460

This was a dire situation. Wolff and his colleagues quickly came up with a plan to increase the internet’s capacity by a factor of 30, raising its backbone bandwidth to 1.5 Mbps by using T1 lines. They didn’t really have the authority to do this. They just did it. The upgrade took effect in July 1988, and usage exploded again. In 1991, Wolff and his colleagues boosted the network’s capacity by another factor of 30, going to T3 lines, at 45 Mbps.

About Al Gore

Before I continue, I thought this would be a good spot to cover this subject, because it relates to what happened next in the internet’s evolution. Bob Taylor said something about this in the Q&A section of his 2010 talk at UT Austin. (You can watch the video at the end of Part 3.) I’ll add some more to what he said here.

Gore became rather famous for supposedly saying he “invented” the internet. Since politics is necessarily a part of discussing government R&D, I feel it’s necessary to address this issue, because Gore was a key figure in the development of the internet, but the way he described his involvement was confusing.

First of all, he did not say the word “invent,” but I think it’s understandable that people would get the impression that he said he invented the internet, in so many words. This story originated during Gore’s run for the presidency in 1999, when he was Clinton’s vice president. What I quote below is from an interview with Gore on CNN’s Late Edition with Wolf Blitzer:

During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth, environmental protection, improvements in our educational system, during a quarter century of public service, including most of it coming before my current job. I have worked to try to improve the quality of life in our country, and our world. And what I’ve seen during that experience is an emerging future that’s very exciting, about which I’m very optimistic …

The video segment I cite cut off after this, but you get the idea of where he was going with it. He used the phrase, “I took the initiative in creating the Internet,” several times in interviews with different people. So it was not just a slip of the tongue. It was a talking point he used in his campaign. There are some who say that he obviously didn’t say that he invented the internet. Well, you take a look at that sentence and try to see how one could not infer that he said he had a hand in making the internet come into existence, and that it would not have come into existence but for his involvement. In my mind, if anybody “took the initiative in creating the Internet,” it was the people involved with ARPA/IPTO, as I’ve documented above, not Al Gore! You see, to me, the internet was really an evolutionary step made out of the Arpanet. So the internet really began with the Arpanet in 1969, at the time that Gore had enlisted in the military, just after graduating from Harvard.

The term “invented” was used derisively against him by his political opponents to say, “Look how delusional he is. He thinks he created it.” Everyone knew his statement could not be taken at face value. His statement was, in my own judgment, a grandiose and self-serving claim. At the time I heard about it, I was incredulous, and it got under my skin, because I knew something about the history of the early days of the Arpanet, and I knew it didn’t include the level of involvement his claim stated, when taken at face value. It felt like he was taking credit where it wasn’t due. However, as you’ll see in statements below, some of the people who were instrumental in starting the Arpanet, and then the internet, gave Gore a lot of slack, and I can kind of see why. Though Gore’s involvement with the internet didn’t come into view until the mid-1980s, apparently he was doing what he could to build political support for it inside the government all the way back in the 1970s, something that has not been visible to most people.

Gore’s approach to explaining his role, though, blew up in his face, and that was the tragedy of it, because it obscured his real accomplishments. A more precise way of talking about his involvement would’ve been for him to say, “I took initiatives that helped create the Internet as we know it today.” The truth of the matter is he deserves credit for providing and fostering political, intellectual, and financial support for a next-generation internet that would support the transmission of high-bandwidth content, what he called the “information superhighway.” Gore’s father, Al Gore, Sr., sponsored the bill in the U.S. Senate that created the Interstate highway system, and I suspect that Al Gore, Jr., or those in his campaign, wanted him to be placed right up there with his father in the pantheon of leaders of great government infrastructure projects that have produced huge dividends for this country. That would’ve been nice, but seeing him overstate his role took him down a peg, rather than raising him up in public stature.

Here are some quotes from supporters of Gore’s efforts, taken from a good Wikipedia article on this subject:

As far back as the 1970s Congressman Gore promoted the idea of high-speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship […] the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises.

— a joint statement by Vint Cerf and Bob Kahn

The sense I get from the above statement is that Gore was a political and intellectual cheerleader for the internet in the 1970s, spreading a vision about how it could be used in the future, to help build political support for its future funding and development. The initiative Gore talked about I think is expressed well in this statement:

A second development occurred around this time, namely, then-Senator Al Gore, a strong and knowledgeable proponent of the Internet, promoted legislation that resulted in President George H.W. Bush signing the High Performance Computing and Communication Act of 1991. This Act allocated $600 million for high performance computing and for the creation of the National Research and Education Network. The NREN brought together industry, academia and government in a joint effort to accelerate the development and deployment of gigabit/sec networking.

— Len Kleinrock

Edit 5/10/17: To clarify the name of the bill Kleinrock refers to, he was talking about the High-Performance Computing Act (HPCA).

These quotes are more of a summary. Here is some more detail from TDM.

In 1986, Democratic Senator Al Gore, who had a keen interest in the internet and in the networked supercomputing centers created in 1985, asked for a study on the feasibility of using fiber optic lines to connect the existing computing centers at gigabit speeds. This created a flurry of interest from a few quarters. DARPA, NASA, and the Energy Department saw opportunities in it for their own projects. DARPA wanted to include it in its Strategic Computing Initiative, begun by Bob Kahn in the early 1980s. DEC saw a technological opportunity to expand upon ideas it had been developing.

To Gore’s surprise, he received a multiagency report in 1987 advocating a government-wide “assault” on computer technology. It recommended that Congress fund a billion-dollar research initiative in high-performance computing, with the goal of creating, within several years, computers whose performance would be an order of magnitude greater than that of the fastest computers available at the time. It also recommended starting the National Research and Education Network (NREN), to create a network that would send data at gigabits per second, an order of magnitude faster than what the internet had just been upgraded to.

In 1988, after getting more acquainted with these ideas, Gore introduced a bill in Congress to fund both initiatives, dubbed “the Gore Bill.” It put DARPA in charge of developing the gigabit network technology, it officially authorized and funded the NSF’s expansion of NSFnet, and it gave the NSF the explicit mission of connecting the whole federal government, and the university system of the United States, to the internet. It ran into a roadblock, though, from the Reagan Administration, which argued that these initiatives for faster computers and faster networking should be left to the private sector.

This stance causes me to wonder if perhaps there was a mood to privatize the internet back then, but it just wasn’t accomplished until several years later. I talked with a friend a few years ago about the internet’s history. He’s been a tech innovator for many years, and he said he was lobbying for the network to be privatized in the ’80s. He said he and other entrepreneurs he knew were chomping at the bit to develop it further. Perhaps there wasn’t a mood for that in Congress.

Edit 5/10/17: I deleted a paragraph here, because I realized Waldrop may have had some incorrect information. He said on Page 461 of TDM that the Gore Bill was split into two, one passed in 1991, and another bill that was passed in 1993, when Gore became Vice President. I can’t find information on the 1993 bill he talks about. So, I’m going to assume for the time being that instead, the Gore Bill was just passed later, in 1991, as the High-Performance Computing Act (HPCA). It’s possible that the funding for it was split up, with part of it appropriated in 1991, and the rest being appropriated in 1993. If I get more accurate information, I will update this history.

Waldrop said, though, that the defeat of the Gore Bill in 1988 was just a bump in the road, and it hardly mattered, because the agencies were quietly setting up a national network on their own. Gordon Bell at the NSF said that their network was getting bigger and bigger. Program directors from the NSF, NASA, DARPA, and the Energy Department created their own ad hoc shadow agency, called the Federal Research Internet Coordinating Committee (FRICC). If they agreed on a good idea, they found money in one of their agencies’ budgets to do it. This committee would later be reconstituted officially as the Federal Networking Council. These agencies also started standardizing their networks around TCP/IP. By the beginning of the 1990s, the de facto national research network was officially in place.

Marc Andreessen said that the development of Mosaic, the first widely used web browser and the prototype for Netscape’s browser, couldn’t have happened when it did without funding from the HPCA. “If it had been left to private industry, it wouldn’t have happened. At least, not until years later,” he said. Mosaic was developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois in 1993. Netscape was founded in 1994.

The 2nd generation network takes shape

Recognizing in the late ’80s that the megabit speeds of the internet were making the Arpanet a dinosaur, DARPA started decommissioning its Interface Message Processors (IMPs), and by 1990, the Arpanet was officially offline.

By 1988, there were 60,000 computers on the internet. By 1991, there were 600,000.

CERN in Switzerland had joined the internet in the late 1980s. In 1990, an English physicist working at CERN, named Tim Berners-Lee, created his first system for hyperlinking documents on the internet, what we would now know as a web server and a web browser. Berners-Lee had actually been experimenting with data linking long before this, long before he’d heard of Vannevar Bush, Doug Engelbart, or Ted Nelson. In 1980, he linked files together on a single computer, to form a kind of database, and he had repeated this metaphor for years in other systems he created. He had written his World Wide Web tools and protocol to run on the NeXT computer. It would be more than a year before others would implement his system on other platforms.

How the internet was privatized

Wolff, at NSF, was beginning to try to get the internet privatized in 1990. He said,

“I pushed the Internet as hard as I did because I thought it was capable of becoming a vital part of the social fabric of the country—and the world,” he says. “But it was also clear to me that having the government provide the network indefinitely wasn’t going to fly. A network isn’t something you can just buy; it’s a long-term, continuing expense. And government doesn’t do that well. Government runs by fad and fashion. Sooner or later funding for NSFnet was going to dry up, just as funding for the Arpanet was drying up. So from the time I got here, I grappled with how to get the network out of the government and instead make it part of the telecommunications business.” — TDM, p. 462

The telecommunications companies, though, didn’t have an interest in taking on the build-out of the network. From their point of view, every electronic transaction was a telephone call that didn’t get made. Wolff said,

“As late as nineteen eighty-nine or ‘ninety, I had people from AT&T come in to me and say—apologetically—’Steve, we’ve done the business plan, and we just can’t see us making any money.'” — TDM, p. 463

Wolff had planned from the beginning of NSFnet to privatize it. After the network got going, he decentralized its management into a three-tiered structure. At the lowest level were campus-scale networks operated by research laboratories, colleges, and universities. In the middle level were regional networks connecting the local networks. At the highest level was the “backbone” that connected all of the regional networks together, operated directly by the NSF. This scheme didn’t get fully implemented until they did the T1 upgrade in 1988.

Wolff said,

“Starting with the inauguration of the NSFnet program in nineteen eighty-five,” he explains, “we had the hope that it would grow to include every college and university in the country. But the notion of trying to administer a three-thousand-node network from Washington—well, there wasn’t that much hubris inside the Beltway.” — TDM, p. 463

It’s interesting to hear this now, given that HealthCare.gov is estimated to be between 5 and 15 million lines of code (no official figures are available to my knowledge. This is just what’s been disclosed on the internet by an apparent insider), and it is managed by the federal government, seemingly with no plans to privatize it. For comparison, that’s between the size of Windows NT 3.1 and the flight control software of the Boeing 787.

Wolff also structured each of the regional service providers as non-profits. Wolff told them up front that they would eventually have to find other customers besides serving the research community. “We don’t have enough money to support the regionals forever,” he said. Eventually, the non-profits found commercial customers—before the internet was privatized. Wolff said,

“We tried to implement an NSF Acceptable Use Policy to ensure that the regionals kept their books straight and to make sure that the taxpayers weren’t directly subsidizing commercial activities. But out of necessity, we forced the regionals to become general-purpose network providers.” — TDM, p. 463

This structure became key to how the internet we have today unfolded. Waldrop notes that around 1990, a number of independent service providers came into existence to provide internet access to anyone who wanted it, without restrictions.

After a dispute erupted in 1991 between one of the regional networks and the NSF over who should run the top-level network (the NSF had awarded the contract to a company called ANS, a consortium of private companies), and a series of investigations found no wrongdoing on the NSF’s part, Congress passed a bill in 1992 that allowed for-profit Internet Service Providers to access the top-level network. Over the next few years, the subsidy the NSF provided to the regional networks tapered off, until on April 30, 1995, NSFnet ceased to exist, and the internet was self-sustaining. The NSF continued operating a much smaller, high-speed network, connecting its supercomputing centers at 155 Mbps.

Where are they now?

Bob Metcalfe left PARC in 1975. He returned to Xerox in 1978, working as a consultant to DEC and Intel, to try to hammer out an open standard agreement with Xerox for Ethernet. He started 3Com in 1979 to sell Ethernet technology. Ethernet became an open standard in 1982. He left 3Com in 1990. He then became a publisher, and wrote a column for InfoWorld Magazine for ten years. He became a venture capitalist in 2001, and is now a partner with Polaris Venture Partners.

Bob Kahn founded the Corporation for National Research Initiatives, a non-profit organization, in 1986, after leaving DARPA. Its mission is “to provide leadership and funding for research and development of the National Information Infrastructure.” He has served on the State Department’s Advisory Committee on International Communications and Information Policy, the President’s Information Technology Advisory Committee, the Board of Regents of the National Library of Medicine, and the President’s Advisory Council on the National Information Infrastructure. Kahn is currently working on a digital object architecture for the National Information Infrastructure, as a way of connecting different information systems. He is a co-inventor of Knowbot programs, mobile software agents in the network environment. He is a member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), a Fellow of the ACM, and a Fellow of the Computer History Museum. (Source: The Corporation for National Research Initiatives)

Vint Cerf joined DARPA in 1976. He left to work at MCI in 1982, where he stayed until 1986. He created MCI Mail, the first commercial e-mail service that was connected to the internet. He joined Bob Kahn at the Corporation for National Research Initiatives, as Vice President. In 1992, he and Kahn, among others, founded the Internet Society (ISOC). He re-joined MCI in 1994 as Senior Vice President of Technology Strategy. The Wikipedia page on him is unclear on this detail, saying that at some point, “he served as MCI’s senior vice president of Architecture and Technology, leading a team of architects and engineers to design advanced networking frameworks, including Internet-based solutions for delivering a combination of data, information, voice and video services for business and consumer use.” There are a lot of accomplishments listed on his Wikipedia page, more than I want to list here, but I’ll highlight a few:

Cerf joined the board of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1999, and served until the end of 2007. He has been a Vice President at Google since 2005. He is working on the Interplanetary Internet, together with NASA’s Jet Propulsion Laboratory. It will be a new standard to communicate from planet to planet, using radio/laser communications that are tolerant of signal degradation. He currently serves on the board of advisors of Scientists and Engineers for America, an organization focused on promoting sound science in American government. He became president of the Association for Computing Machinery (ACM) in 2012 (for 2 years). In 2013, he joined the Council on Cybersecurity’s Board of Advisors.

Steve Wolff left the NSF in 1994, and joined Cisco Systems as a business development manager for their Academic Research and Technology Initiative. He is a life member of the Institute of Electrical and Electronics Engineers (IEEE), and is a member of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), and the Internet Society (ISOC). (Sources: Wikipedia, and Internet2)

Conclusion

This is the final installment in my series.

The point of this series has been to tell a story about the research and development that led to the technology that we are conscious of using today, and some of the people who made it happen. I know for a fact I’ve left out some names of contributors to these technologies. I have tried to minimize that, but tracking it all down was more tedious than I wanted to get into for this piece. My point about naming names has not been so much to give credit where it’s due (though I feel an obligation to be accurate about that). It’s been to make this story more real to people, to let people know that it wasn’t just some abstract “government effort” that magically made it all happen; that real flesh-and-blood people were involved. I’ve tried a bit to talk about the backgrounds of these individuals to illustrate that most of them did not train specifically to make these ideas happen. The main point of this review of the history has been to get across where the technology came from: a few of the ideas that led to it, and what research structure was necessary to incubate it.

One of my hopes with this series was to help people understand and appreciate the research culture that generated these ideas, so as to demystify it. The point of that is to help people understand that it can be done again. It wasn’t just a one-shot deal of some mythical past that is no longer possible. However, to have that research culture requires having a different conception of what the point of research is. I talked about this in Are we future-oriented? Neil deGrasse Tyson nicely summarized it at the end of this 2009 interview.

Scott Adams has a great rule, that it’s a waste of time to talk about doing something, as opposed to just doing it (though a lot of organizations get into talking about what they should do, instead of doing it). What motivates “just doing it,” though, is different from talking about it. I started out this series talking about how the Cold War created the motivation to do the research, because our government realized that while it had working nuclear weapons, it was not really prepared to handle the scenario where someone else had them as well, and that part of being prepared for it required automating the process of gathering information, and partly automating the process of acting on it. The problem the government had was the technology to do that didn’t exist. So, it needed to find out how to do it. That’s what motivated the government to “just do it” with respect to computer research, rather than talk about doing it.

I started out feeling optimistic that reviewing this history would not only correct ignorance about it, but would also create more support for renewing the research culture that once existed. I am less optimistic about that second goal now. I think that human nature, at least in our society, dictates that circumstances primarily drive the motivation to fund and foster basic research, and it doesn’t have to be funded by government. As I’ve shown in this series, some of the most important research in computing was funded totally in the private sector. What mattered was the research environment that was created, wherever it was done.

I must admit, though, that the primary motivation for me to write this series was my own education. I very much enjoyed coming to understand the continuum between the technology I enjoyed, and was so inspired by, when I was growing up, and the ideas that helped create it. For most of my life, the computer technology I used when I was younger just seemed to pop into existence. I thought for many years that the computer companies created their own conception of computers, and I gave the companies, and sometimes certain individuals in them, all the credit. In a few cases they seemed brilliant. A little later I learned about Xerox PARC and the graphical user interface, and a bit about the lineage of the internet’s history, but that was it.

This view of things tends to create a mythology, and what I have seen it lead to is a cargo cult, of sorts. Cargo cults don’t create the benefits of the things that brought them. As Richard Feynman said, “The planes don’t land.” It’s a version of idol worship that looks modern and novel to the culture that engages in it. It has the cause-and-effect relationship exactly backwards. What creates the benefits is ideas generated from imagination and knowledge, intersecting with the criticism produced through powerful outlooks. What I’ve hoped to do with this series is dispel the mythology that generates this particular cargo cult. I certainly don’t want to be part of it any longer. I was for many years. What’s scary about that is that for most of that time, I didn’t even recognize it. It was just reality.

Edit 7/17/2017: One thing I’ve been meaning to note: my primary source of information for this series was “The Dream Machine,” as I noted earlier. Even though there is a lot of material in this series, I did not just summarize the whole book. I summarized portions of it. The time period it covers is pretty vast. It starts in the 1920s, and quite a bit happened with computational theory in the 1930s, which Waldrop documents, setting the stage for what happened later. One thing I’ve hoped is that this series inspires readers to explore historical sources, including this book.

— Mark Miller, https://tekkie.wordpress.com

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.
