A history lesson on government R&D, Part 2

See Part 1

This turned into a much larger research project than I anticipated. I am publishing this part around the 1-year anniversary of when I started on this whole series! I published the first part of it in late June 2011, and then put the rest “on the shelf” for a while, because it was crowding out other areas of study I wanted to get into.

To recap, the purpose of this series is to educate the reader on government-funded research that led to the computer technology we use today. I was inspired to write it because of gross misconceptions I was seeing on the part of some who thought the government has had nothing to do with our technological advancement, though I think technologists will be interested in this history as well. This history is not meant to be read as a definitive guide to all computer history. I purposely have focused it on a specific series of events. I may have made mistakes, and I welcome corrections. The first part of this series covers government-funded research in computing during the 1940s and 50s. In this article I continue coverage of this research through the late 1950s and the 1960s, and of the innovations that research inspired.

Most of what I will talk about below is sourced from the book “The Dream Machine,” by M. Mitchell Waldrop. Some of it is sourced from a New York Times opinion blog article called, “Did My Brother Invent E-mail with Tom Van Vleck?” It’s a great article, by the way, on the development of time-sharing systems, a major topic I cover here. There’s a video produced by DARPA at the end of this post that I use as a source, and I’ll just refer to it as “the DARPA video.” Quite a bit of material is sourced from Wikipedia as well. Knowing Wikipedia’s reputation for getting facts wrong, I have tried as much as possible to look at other sources to verify it. Occasionally I’ll relay some small anecdotes I’ve heard over the years from people who were in the know about these subjects.

Revving up interactive, networked computing

Wes Clark, from Wikipedia

Ken Olsen and Wes Clark, both electrical engineers, and Harlan Anderson, a physicist, all graduates of the Massachusetts Institute of Technology (MIT) and veterans of Project Whirlwind, formed the Advanced Computer Research Group at MIT in 1955. Since all of the interactive computers that had been made up to that point were committed to air defense work, due to the Cold War, they wanted to make their own interactive computer that would be like what they had experienced with Whirlwind. It would be a system they would control. The first computer they built was called the TX-0, completed in 1956. It was about the size of a room. It was one of the first computers to be built with transistors (invented at AT&T Bell Labs in 1947). It had an oscilloscope for a display, a speaker for making sound, and a light pen for manipulating and drawing things on the display. They started work on a larger, more ambitious computer system they called the TX-2 in 1956. It was completed in 1958, and took up an entire building. It would come to play an important role in computer history when an electrical engineer named Ivan Sutherland did his Sketchpad project on it in 1963 for his doctoral thesis (PDF). Sketchpad is considered to be the prototype for CAD (Computer-Aided Design) systems. It was also, in my view, the first proof-of-concept for the graphical user interface, particularly since one used gestures to manipulate objects on the screen with it.

Even though the idea behind building these computers was to build an interactive machine that these guys could control, away from the military, I’m not so sure they succeeded at that, at least for long. From what Alan Kay has said in his presentations on this history, Sutherland had to work on his Sketchpad project in the middle of the night, because the TX-2 was used for air defense during the day…

In the midst of the TX-2’s construction, Ken Olsen, his brother Stanley, and Harlan Anderson founded Digital Equipment Corporation (DEC) in 1957 out of frustration with the academic and business world. They had tried to tell their colleagues, and people in the computer industry, that interactive computing had a promising future, but they got nowhere. Hardly anybody believed them. And besides, the idea of interactive computing was considered immoral. Computers were expensive, and were for serious work, not for playing around. That was the thinking of the time. Olsen and the founders of DEC figured that to make interactive computing something people would pay attention to, they had to sell it.

In 1957, a pivotal moment happened for J.C.R. Licklider, a man I introduced in Part 1, that would have a major impact on the future of computing as we have come to know it. Wes Clark, seemingly by chance, walked into Licklider’s office, and invited him to try out the TX-2. It was an event that would change Licklider’s life. Using it gave him an epiphany: that humans and computers could form an interactive system together, made up of the person using it and the computer, which he called “symbiosis.” He had experienced interactive computing from working on the SAGE (Semi-Automated Ground Environment) project at MIT in the 1950s (covered in Part 1 of this series), but this is the moment when he really grasped its significance, and what its societal impact would be.

The difference between the TX-2 and SAGE was that you couldn’t create anything with SAGE. Information was created for the person using it from signals that were sent to the system from radar stations and weapons systems. The person could only respond to the information they were given, and send out commands, which were then acted upon by other people or other systems. It was a communication/decision system. With the TX-2, the system could respond to the person using it, and allow them to build on those responses to create something more than where the person and the computer had started. This is elementary to many computer users now, but back then it was very new, and a revelation. It’s a foundational concept of interactive computing.

Licklider left his work in psychology, and MIT, to work at a consulting firm called Bolt Beranek and Newman (BBN) to pursue his new dream of realizing human-computer symbiosis with interactive computing. He wrote his seminal work, “Man-Computer Symbiosis,” in 1960, while working at BBN. DEC was an important part of helping Licklider understand his “symbiosis” concept. The first computer they produced was a commercialized version of the TX-0, called the PDP-1 (Programmed Data Processor). BBN bought the very first PDP-1 DEC produced, for Licklider to use in his research. With the right components, it was able to function as an interactive computer, and it was a lot less expensive than the mainframes that dominated the industry.

John McCarthy, from History of Computers

Another person comes into the story at this point: a mathematician at MIT named John McCarthy, who had developed a new field of computing studies there, which he called “Artificial Intelligence.” He had developed a theoretical computing language, or notation, that he called Lisp (for “List Processing language”), which he wanted to use as an environment in which he and others could explore ideas in this new area of study. He thought that ideas would need to be tried in an experimental setting, and that researchers would need to be allowed to quickly revise their ideas and try again. He needed an interactive computer.

A typical run-of-the-mill computer in the 1950s was not interactive at all. The Whirlwind and SAGE systems I talked about in Part 1 were the exceptions, not the rule. First off, there were very few computers around. They were designed to be used by an administrator, who would receive batches of punched cards from people, which contained program code. The cards would be fed through the computer in batches. The data that was to be processed was often also stored on punched cards, or magnetic tape. The government was heavily invested in systems like this. The IRS and the Census Bureau used this type of system to handle their considerable record-keeping and computation needs. Any business which owned a computer at the time had a system like this.

Each person who submitted a card batch would receive a printout of what their program did hours, or even days later. That was the only feedback they’d get. There were no keyboards, no mice, no input devices of any kind that an ordinary person could use, and no screens to look at. Typical computation was like a combine that’s used on the farm. Raw material (programs and data) would go in, and grain (processed data) would come out the other end. This was called batch processing. McCarthy said this would not do. Researchers couldn’t be expected to wait hours or days for results from a single run of a Lisp program.

McCarthy developed a concept called “time sharing” in 1957. It was a strategy of taking a computer that was designed for batch processing and adapting it to have several ordinary people use it. The system would use the pauses that occurred within itself, where it was just waiting for something to happen, as opportunities to grant computer time to any one of the people using it, in a sort of round-robin fashion. This switching would typically happen fast enough so that everyone using it would notice little pauses here and there, but otherwise operation would appear continuous to them. McCarthy figured that this way, computer time could be used more efficiently, and he and his colleagues could do the kind of work they wanted to do, killing two birds with one stone.
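
To make this strategy concrete, here is a minimal sketch of the round-robin idea in Python (a language that, of course, did not exist then). The Job class, the time quantum, and the sample jobs are all hypothetical; the point is simply to show one processor rotating through several people’s work in small slices.

    from collections import deque

    # A hypothetical illustration of round-robin time sharing: one processor
    # rotates through several users' jobs, giving each a small slice of time
    # before moving on to the next.

    QUANTUM = 3  # units of "computer time" granted per turn

    class Job:
        def __init__(self, owner, work_units):
            self.owner = owner
            self.remaining = work_units  # total work this user's program needs

    def run_time_sharing(jobs):
        ready = deque(jobs)
        while ready:
            job = ready.popleft()                    # next user's turn
            used = min(QUANTUM, job.remaining)
            job.remaining -= used
            print(f"{job.owner}: ran {used} units, {job.remaining} left")
            if job.remaining > 0:
                ready.append(job)                    # back of the line until the next turn

    run_time_sharing([Job("alice", 7), Job("bob", 4), Job("carol", 10)])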

The way time sharing worked was several people would sign on to a single computer using teletype terminals, and use their account on the computer to create files on a shared system. Each person would issue commands to the system through their terminal to tell it what they wanted it to do. They could run their own programs on the system as if the machine was their own, shall we say, “personal computer” (though the term didn’t exist at the time), since they wouldn’t need to think about what anyone else on the system was doing. The equivalent today would be like working on a Unix or Linux system from what’s called a “dumb terminal,” with several people logged in to it at the same time. The way in which our computers now can have several people using them at once, and doing a bunch of things seemingly simultaneously, has its roots in this time sharing strategy.

McCarthy, and a team he had assembled, created a jury-rigged prototype of a time-sharing system out of MIT’s IBM 704 mainframe. They showed that it could work, in a demonstration to some top brass at MIT in 1961. (source: “The MAC System, A Progress Report,” by Robert Fano (PDF)) This was the first public demonstration of a working Lisp system as well.

Fernando Corbató (also known as “Corby”), who was then MIT’s deputy director of their Computation Center, undertook a project to build on McCarthy’s work. This project was funded initially by the Navy. He managed the retrofitting of the university’s then-new IBM 709 computer to create the Compatible Time-Sharing System (CTSS) in 1961. It was named “Compatible” for its compatibility with the mainframe’s existing operating system. This would come to be a very influential system in the years to come.

McCarthy set out a vision at the MIT centennial lecture series, also in 1961, that a computing utility industry would develop in the future that would allow public access to computer resources, and it would be based on his time sharing idea.

IBM was asked if it had any interest in time sharing for the masses, because it was the one company that had the ability to build computers powerful enough at the time to do it. It was unwilling to invest in the idea, calling it impractical. Time sharing wouldn’t have to wait long, however. Other researchers started work on their own time-sharing projects at places such as Carnegie Tech (which would become Carnegie-Mellon University), Dartmouth, and RAND Corp.

The “Big Mo” in time sharing, and in many other “big bang” concepts, came when ARPA invited Licklider to lead their efforts in computer research.

ARPA

Republican President Dwight Eisenhower created ARPA, the Advanced Research Projects Agency, under the Department of Defense, in 1958, in response to the Soviet Sputnik launch. I covered some background on the significance of Sputnik in Part 1. The idea was that the Soviets had surprised us technologically, and it was ARPA’s job to make sure we were not surprised again. A lot of ARPA’s initial work was on research for spacecraft, but this was moved to the National Aeronautics and Space Administration (NASA) and the National Reconnaissance Office (NRO) (source: “The DARPA video”). NASA was created the same year as ARPA. ARPA was left with the projects nobody else wanted, and it was in search of a mission.

Jack Ruina was ARPA’s third director, appointed in 1961. He set the pace for the future of the agency, encouraging the people under him to seek out innovative, cutting-edge scientific research projects that needed funding, whether they applied to military goals or not, and whether they were in government labs, university labs, or in private industry. He knew that the people in ARPA needed to understand the science and the technology thoroughly. He wanted them to seek out people with ideas that were cutting edge, to fund them generously, and to take risks with the funding. He made it clear that researchers who received ARPA funding would need to have the freedom to pursue their own goals, without micromanagement. The overall goal was to “assault the technological frontiers.”

By my reading of the history, ARPA was forced into researching computing by necessity, in 1961. The way Waldrop characterized it, they “backed into it.” Old but expensive leftover hardware from the SAGE project needed to be used; it was considered too valuable to junk. There was also the looming problem of command and control of military operations in the nuclear age, where military decisions and actions needed to happen quickly, and be based on reliable information. It had already been established via projects like SAGE that computers were the best way anyone could think of to accomplish these goals. So ARPA would take on the task of computer research.

J.C.R. “Lick” Licklider, from Wikipedia.org

J.C.R. Licklider (he preferred to be called “Lick”) was recruited by Ruina to be the director of research in developing command and control systems for the military. Ruina had known of Licklider through his work on SAGE, his career as a researcher in psychology, and the consulting he did on various military projects. It helped a lot that Lick was such an affable fellow. He made a lot of friends in the military ranks. They also needed a director of behavioral science research, which he was uniquely qualified to handle. Licklider was hesitant to accept this position, because he enjoyed his work at BBN, but he became convinced that he could pursue his vision for human-computer symbiosis more comprehensively at ARPA, since they had no problem with him having a dual mission of developing technology for military and civilian use, and they could give him much larger budgets. He started work at ARPA in 1962, in an office of the Pentagon that would come to be named the Information Processing Techniques Office (IPTO).

True to the military side of his mission, Lick created plans to computerize the military’s data gathering and information storage infrastructure, including digital communication plans for information sharing. He would gradually come to realize the urgency of helping the military modernize. He became aware of some harrowing “near misses” our military experienced, where our nuclear forces were put on high alert, and were nearly ready to launch, all because of a misleading signal.

I’ll be focusing on the civilian side of Lick’s research, though all of the research that was funded through the IPTO was always intended to have military applications.

Lick had met John McCarthy while at BBN, and was excited about his idea of time sharing. He wanted to use his position at ARPA to create a movement around this idea, and other innovative computer research. He began by working his existing contacts. He gathered together a prestigious group of computing academics: Marvin Minsky, John McCarthy, Alan Perlis, Ben Gurley, and Fernando Corbató to convince the engineers at System Development Corporation (SDC) to start developing time-sharing technology. It was a hard sell at first, but they came around once these heavy hitters showed how the technology would really work.

He solicited project proposals from all quarters: in government, at universities, and at private companies. He particularly targeted individuals who he thought had the right stuff to do innovative computer research, and sent out funding for various research projects that he thought had merit.

In 1962 he sent $300,000 (more than $2 million in today’s money) to Allen Newell, Herbert Simon, and Alan Perlis at Carnegie Tech to fund whatever they wanted to do. He knew them, and the kinds of minds and motivations they had. In the years that followed, ARPA would send $1.3 million (more than $9 million in today’s money) per year to this team at Carnegie Tech, because they were recognized for having an expansive view of computer science: “The study of all phenomena surrounding computers,” including the impact of computing on human life in general. Lick had confidence that whatever knowledge, program, or technology they came up with would be great.

He was very interested in what RAND Corp. was doing. In 1962 they had developed the first computer with a “pen” (stylus/pad) interface, which was linked to a visual display. This jibed very much with what Lick envisioned as “symbiosis.”

John McCarthy left MIT for Stanford in 1962, where he created their Artificial Intelligence Laboratory. Lick funded McCarthy’s work at Stanford for the rest of his time at the IPTO.

Lick began funding an obscure researcher named Doug Engelbart, working at the Stanford Research Institute (SRI), in 1962. Engelbart had what was, at the time, a radical idea: to create an interactive computer system with graphical video displays that groups could use to deal with complex problems together.

In 1963 Lick funded both Project MAC, a time sharing project at MIT, overseen by Bob Fano, and Project Genie, a time sharing project at Berkeley, overseen by Harry Huskey. Fano started developing a curriculum for computer science in the Electrical Engineering Department at MIT at this time as well. Fano, by his own admission, had almost no technical knowledge about computers!

Under Project MAC, ARPA funded a project to create a new time-sharing system called Multics in 1963. This sub-project was overseen by Fernando Corbató.

In 1963, Lick envisioned a network of time-sharing systems, which he dubbed the “Intergalactic Network” to his working groups. The idea was to link computers together in a network, and he wanted people to think big about this; not just a few hundred computers together, but billions!

It was all coming together. Lick realized that he could not juggle all of these projects and continue to work at BBN, so he resigned from BBN in 1963.

Project MAC

Robert Fano came up with the name “Project MAC.” It stood for “Multi-Access Computer,” though people came up with other meanings for it as well, like “Machine-Aided Cognition.” Some artificial intelligence work at MIT, led by Marvin Minsky, was funded through it.

The working group used the existing CTSS project as its starting point, and the goal was to improve on it. CTSS was moved to an IBM 7094 mainframe in 1964, and the first information utility was made available to people at MIT around the clock. The 7094 ran at about the speed of an IBM PC (about 5 MHz). So if more than a dozen people were logged into it at the same time, it ran very slowly. Eventually they added the ability for members of the MIT Computation Center staff to connect to the time-sharing system using remote teletype terminals from their homes.

In the following video, Bob Fano lays out his vision for the research that was taking place with computers at MIT, and gives a demonstration of their time-sharing system. It’s thought that this film was made in 1964.

CTSS became a knowledge repository, as people were able to share information and programs with each other on it. If enough people found some programs useful, they were added to the standard library of programs in the system, so that everyone could use them. This method of growing the features of the system by adding software to it is seen in all modern systems, particularly Unix and Linux. It created the first software ecosystem, where software could use other software. This is also seen in the more commonly used commercial systems, like on Windows PCs, and Apple Macs. As a couple examples, Tom Van Vleck created a MAIL command for the system that allowed people to send text messages to each other. It was probably the first implementation of e-mail. Allan Scher created ARCHIVE, which allowed people to take a bunch of files and compress and package them into one file.

This was not an attempt to create the internet, but they ended up creating something like it on a smaller scale, under a different configuration. Machines were not networked together, but people were, through the workings of a single computer system.

Things that we take for granted today were discovered as the project went along. For example, they used a hierarchical file system that enabled people to organize their files into directories (think folders) and subdirectories (subfolders). The system didn’t have a system clock at first. They attached an electronic clock to the computer’s printer port, and software was added to take advantage of it. Now the system could time-stamp files, and processes could be run at specific times.
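
For readers who want a concrete picture of the hierarchical idea, here is a toy sketch in Python of a directory tree holding time-stamped files. The class names and the “vanvleck” directory are invented for illustration; CTSS’s actual implementation was, of course, nothing like this.

    import time

    # A toy model of a hierarchical file system: directories can contain files
    # and other directories, and each file carries a modification time stamp
    # (the kind of thing the attached clock made possible).

    class File:
        def __init__(self, name, contents=""):
            self.name = name
            self.contents = contents
            self.modified = time.time()  # time stamp recorded at creation

    class Directory:
        def __init__(self, name):
            self.name = name
            self.entries = {}  # maps a name to a File or a nested Directory

        def add(self, entry):
            self.entries[entry.name] = entry
            return entry

        def listing(self, indent=0):
            # Print the tree, indenting one level per subdirectory.
            for entry in self.entries.values():
                print(" " * indent + entry.name)
                if isinstance(entry, Directory):
                    entry.listing(indent + 2)

    root = Directory("root")
    user_dir = root.add(Directory("vanvleck"))
    user_dir.add(File("notes", "a shared program's documentation"))
    root.listing()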

It was noticed that people came to depend on the system. They became agitated and irate when it went down. This was something that the researchers were looking for. They meant for it to become a computing utility, and people were coming to depend on it as if it was one. They took this as a positive sign that they were on the right track. It also motivated them to make the system as reliable as they could make it.

The CTSS team developed fault-tolerant features for it, like protected memory, so that programs that crashed didn’t interfere with other running programs, or the rest of the system. They developed an automated backup system, so that when a disk failed (all information was stored on disk), people’s work wasn’t lost. A backup to tape of newly created files ran every half-hour. It was also the first system to enforce passwords for access. This created the first instance of user revolt. Already the idea of “information wants to be free” had developed among people at MIT, and they resented the idea that access to the system could be restricted. Hackers committed various acts of “civil disobedience” (pranks in the system) to protest this, but the Project MAC staff insisted password protection was going to stay!

Time sharing comes of age

The vision of Project MAC was to promote “utility computing,” where private companies would rent computer time to businesses, and ordinary people, like a utility company charges for gas and electricity, so that they could experience interactive computing, and make some use of it. This business model is used today in systems that make applications available on the web on a subscription basis, except now the concept is not so much renting computer time, as renting application time, and data storage space–the whole idea of SaaS (Software as a Service).

Unlike today, the sense at the time was that computers needed to be large, and computing power needed to be centralized, because it was so expensive. It was thought that this was the only way to get this power out to as many people as possible.

In 1963 the leaders of Project MAC wanted to create a model system. Most of the time-sharing systems that had been successfully created up to this point began as mainframes that were only meant for batch processing, and had been modified to do time sharing. They wanted a system that was designed for time sharing from the ground up. It was also decided that an operating system was needed which would be designed for a large scale time-sharing service. CTSS was good as an internal, experimental system, but it was still unstable, because it was an adaptation of a system that was not designed for time sharing. No time was taken to make it a clean design. It had lots of things added onto it as the team learned more about what features it needed. It was a prototype. It wasn’t good enough for the vision Project MAC had in mind of utility computing for the masses.

Project MAC partnered with General Electric to develop the hardware, and AT&T Bell Labs to develop the operating system. Fernando Corbató managed the development team at MIT for the new system they called “Multics,” for Multiplexed Information and Computing Service. The plan was to have it run on a multi-processor machine that had built-in redundancy, so that if one processor burned out, it would be able to continue with the good processors it had left, without missing a beat. Like CTSS, it would have sophisticated memory management so that processes and user data wouldn’t clobber each other while people were using it at the same time. It would also have a hierarchical file system. System security was a top priority. It was noted when I took computer science in college that Multics was the most secure operating system that had ever been created. It may still carry that distinction.

Licklider left ARPA in 1964 to work at IBM. According to Waldrop, it seemed he had a mission to “bring the gospel” of interactive computing to the “high temple of batch processing.” Ivan Sutherland replaced Lick as IPTO director.

The research at Project MAC was having payoffs out in the marketplace. The publicity from the project, and the deal with GE for Multics, inspired companies to develop their own commercial time-sharing systems. The first was from DEC, the PDP-6, which came out in 1965. General Electric and IBM came out with time-sharing service offerings, which usually provided a single programming language environment as their sole service. It was rather like the experience of using the early microcomputers in the late 1970s, come to think of it. When you turned them on, up would come the Basic programming language, and all you’d have was a prompt of some sort where you could type commands to make something happen, or write a program for the computer to run.

By 1968 there were 20 separate firms offering time-sharing services. GE’s service was operating in 20 cities. The University Computing Company was operating a time-sharing service in 30 states, and a dozen countries.

In 1968, Bill Gates, who was in the 8th grade, got his first chance to use the Basic programming language on a GE Mark II time-sharing system, using a teletype, through his school, Lakeside School, in Seattle, Washington. Basic was invented at Dartmouth as part of its own time-sharing system. The main reason I mention this is Basic would play a major role in the formation of Microsoft in the 1970s. The first product Gates and Paul Allen created that led to the founding of the company was a version of Basic for the Altair computer. Microsoft Basic went on to become the company’s first big-selling software product, before IBM came calling.

The first release of Multics came out in 1969. AT&T left Project MAC the same year. GE sold its computer division to Honeywell in 1970. Along with GE’s hardware came the Multics software. Other time-sharing systems outsold Multics, and it never caught up as a commercial product. This was mainly because it was 3 years late. Multics had been beset with delays. The complexity of it overwhelmed the software development team at MIT. ARPA considered ending the project. An evaluation team was created to look at its viability. They ultimately decided it should continue, because of the new technologies that were being developed out of the project, and the knowledge that was being generated as a result of working on it. ARPA continued funding development on Multics into the early 1970s.

Honeywell didn’t want to sell Multics to customers, and only did so after considerable prompting by Larry Roberts, who was IPTO director at this time. A major barrier for it was that it was basically off-loaded onto Honeywell. Most of the people in the company, particularly its sales staff, didn’t know what it was! This was a chronic problem for the life of the product. Honeywell sold 77 Multics systems, in all, to customers over the next 20+ years. They stopped selling it in the early 1990s. Wikipedia claims the last Multics system was retired at the Canadian Department of National Defence in the year 2000.

Despite Multics being a dud in the commercial market, much was learned from it. It was the inspiration for Unix (which was originally named “Unics”, a play on “Multics”). Ken Thompson and Dennis Ritchie, the creators of Unix, had worked on Multics at AT&T Bell Labs. They missed the kind of interaction they were able to have with the Multics system, though they disliked the operating system’s bigness, and bureaucratic restrictions, which were key to its security. They brought into Unix what they thought were the best ideas from Multics and Project Genie (which I describe below).

I once had a conversation with a computer science professor while I was in college regarding the design of Unix. From what he described, and what I know of the design of early time-sharing systems, my sense is the early design of Unix was not that different from a time-sharing system. The main difference was that it shared computer time between multiple programs running at the same time, as well as between people using the system. This scheme of operation was called “multitasking.”

Unix would spread into the academic community in the years that followed. The main reason was that, under an antitrust agreement with the government, AT&T was not allowed to get into businesses other than telecommunications at the time. So they retained ownership of the rights to Unix, and the trademark to its name, but gave away copies of it to customers under a free license. They made the binary distribution, as well as all of the source code, available to those who wanted it. This made it very attractive to universities. They liked getting the source code, because it was something that computer science students could investigate and use. It contributed to the academic environment.

Unix continues to be used today in IT environments, and in Apple products. It inspired the development of Linux, beginning in the early 1990s.

The Multics project taught the Project MAC team members valuable lessons in software engineering. These people went on to work at Digital Equipment Corp. (which was bought by Compaq in 1998, which was bought by Hewlett-Packard in 2002), Data General (which was bought by EMC in 1999), Prime (which was bought by Parametric Technology Corp. in 1998), and Apollo (which was bought by HP in 1989). (source: Wikipedia) A few of these companies created their own “lite” versions of Multics for commercial sale, under other product names.

When microcomputers/personal computers started being sold commercially in the late 1970s, the idea of the information utility was pared back as a widespread system strategy. It’s tempting to think that it died out, because computers got smaller and faster, and the idea of a central computing utility became old-hat. That didn’t really happen, though. It continued on in services like CompuServe, BIX, GEnie (offered by GE, no relation to Project Genie), and America Online of the late ’80s, and into the ’90s. The idea of large, centralized computing services has recently had a resurgence in such areas as software-as-a-service, and cloud computing. Individually owned computers, in all their forms, are mainly seen now as portals, a way to consume a service offered by much larger, centralized computer systems.

Project Genie

The information on this project comes from the Wikipedia page on it, and its list of references. The section on Evans and Sutherland is sourced from Waldrop’s book and Wikipedia. I do not have much of the details of this project, nor of what Dave Evans and Ivan Sutherland did at the University of Utah. This is best read as a couple anecdotes about the kind of positive influence that ARPA had on institutions and students of computer science at the time.

Project Genie began in 1963 at Berkeley. Harry Huskey, who had been in charge of it, left the project in 1964, leaving Ed Feigenbaum, an associate professor with a background in electrical engineering, and Dave Evans, a junior professor with a background in physics and electrical engineering, to manage it. Both were dubious about their chances of accomplishing much of anything. I think it’s safe to say that what actually happened exceeded their expectations.

Feigenbaum wouldn’t stay with the project for long, though they had their system almost operational by the time he left. He joined John McCarthy’s Artificial Intelligence Laboratory at Stanford in 1964.

A team of graduate students who joined this project, including Peter Deutsch, Butler Lampson, Chuck Thacker, and Ken Thompson, modified a Scientific Data Systems 930 minicomputer to do time sharing as their post-graduate project. This project was perhaps unique in its goal, since other time sharing efforts had been done on mainframes, which were thought to be the only machines with enough computing power to share. Licklider wanted to fund time sharing projects of various scales, presumably to see how each would work out.

Over the next 3 years the team wrote their own operating system for the 930, which came to be called the Berkeley Time Sharing System. The main technological advancements I can pick out of it were paged virtual memory, a particular design for multitasking (the idea of sharing computer time between programs loaded into memory, as well as people), and state-restoring crash recovery.

Paged virtual memory is a scheme where the system can “swap” information in memory out to hard disk in standard-sized segments, and bring it back in again, at the system’s discretion. This is done to allow the system to operate on a collection of programs and data that, put all together, are greater in size than the computer’s memory. This scheme is used on all modern personal computer systems, and on Unix and Linux.
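
To illustrate the paging idea, here is a drastically simplified sketch in Python. The page size, the number of memory frames, and the least-recently-used eviction policy are assumptions made for the example, not details of the Berkeley Time Sharing System.

    from collections import OrderedDict

    # A drastically simplified model of paged virtual memory: a fixed number of
    # page "frames" fit in memory, and when a program touches a page that is not
    # resident, the least recently used page is swapped out to disk to make room.

    NUM_FRAMES = 4           # how many pages fit in physical memory at once
    PAGE_SIZE = 4096         # bytes per page (an assumed, typical value)

    memory = OrderedDict()   # page number -> page data (resident pages)
    disk = {}                # page number -> page data (swapped-out pages)

    def touch_page(page_number):
        """Make a page resident, swapping another out to disk if memory is full."""
        if page_number in memory:
            memory.move_to_end(page_number)            # mark as recently used
            return memory[page_number]

        if len(memory) >= NUM_FRAMES:
            victim, data = memory.popitem(last=False)  # evict least recently used
            disk[victim] = data
            print(f"swapped page {victim} out to disk")

        data = disk.pop(page_number, bytearray(PAGE_SIZE))  # bring it back in, or start a fresh page
        memory[page_number] = data
        print(f"page {page_number} is now resident")
        return data

    for page in [0, 1, 2, 3, 0, 4, 5, 1]:
        touch_page(page)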

State-restoring crash recovery is something we have yet to see on modern computer systems. It allows a system to restore itself to an operational state after a crash has occurred, picking up where it left off, without rebooting. It should be noted that this project did not invent this idea. The earliest version of it I’ve heard about is in Bob Barton’s Burroughs B5000 computer, which was created in 1961.

Not a lot has been said about this project in the literature I’ve researched, and I think it’s because the innovations that were developed were technical. No social study took place during the development of this system that I can tell, as took place with Project MAC at MIT, where many things about necessary time-sharing features were learned. Nevertheless, the technical talent that was developed during this project is legendary in the field of computing.

Scientific Data Systems eventually chose to market the Berkeley Time Sharing System as the SDS 940 computer, with some encouragement by Bob Taylor, who was IPTO director at the time. This system would go on to be used by Doug Engelbart for his NLS system, in the late 1960s (which I describe below).

Edit 8/5/2012: I added in “The trail to Xerox” as it provides background for the discussion of Xerox PARC in Part 3.

The trail to Xerox

Scientific Data Systems wasn’t enthusiastic about the 940, though. They considered it a one-off machine, and they had no intention of developing it further. Butler Lampson and others in Project Genie got impatient with SDS. They left Berkeley, and founded their own company, the Berkeley Computer Corporation (BCC), in 1969. Bob Taylor left ARPA the same year.

SDS was bought by Xerox in 1969, and its name was changed to Xerox Data Systems (XDS).

BCC didn’t last long. It was going broke in 1970. Xerox founded its Palo Alto Research Center (PARC) that year, and Taylor joined it, recruiting a bunch of people from BCC to create its Computer Science Laboratory.

XDS lost so much money that Xerox shut it down in 1975.

Evans and Sutherland, and the University of Utah

David C. Evans

Dave Evans left Berkeley to chair a brand new computer science department at the University of Utah, his alma mater, in 1966. The department was one of ARPA/IPTO’s projects, and was fully funded by the IPTO.

Back then, the term “computer science” was new, and was considered an aspirational title for what the department was doing. It was expected that computation would become a science someday, but it was clearly acknowledged in Evans’s department that it was not a science then. I can safely say that it still isn’t, anywhere.

Ivan Sutherland, who had left ARPA to teach at Harvard, joined the computer science department at the University of Utah in 1968, serving as a professor. He came on the condition that Evans would join him in starting a new company. The new company was founded that year, called Evans & Sutherland. It was, and still is, a maker of computer graphics hardware.


Ivan Sutherland

Evans and Sutherland made a deliberate decision to stick to the philosophy of “doing one thing well,” and so the department specialized in computer graphics.

It should be noted that at the time their computer science department only took post-graduate students. There was no undergraduate computer science program. Dave Evans and Ivan Sutherland would have a who’s who of computing pass through their department. They had students participate in IPTO projects, and the IPTO fully funded many of their tuitions. Some of the most notable of their students (not all were shared between Evans and Sutherland) were Alan Kay, Nolan Bushnell, John Warnock, Edwin Catmull, Jim Clark, and Alan Ashton. (sources: Wikipedia, and utahstories.com)

I get more into the work of Alan Kay, Chuck Thacker, and Butler Lampson in Part 3.

Contributions of the students

Peter Deutsch, who had been with Project Genie at Berkeley, went on to work at Xerox PARC, and then Sun Microsystems. He is best known for his work in programming languages. In 1994 he became a Fellow of the Association for Computing Machinery (ACM). He developed Ghostscript, an open source PostScript and PDF interpreter. More recently he’s taken up music composition. (Sources: Wikipedia, and Coders at Work)

Nolan Bushnell and Ted Dabney founded the video game company Atari in 1972. They produced two arcade video games that were big hits, Pong and Breakout. Steve Jobs and Steve Wozniak briefly worked for Atari during this period, helping to develop Breakout. When Wozniak developed his first computer, built with parts that some Atari employees had “liberated” from Atari, which would become the Apple I, they showed it to Bushnell to see if he was interested in marketing it. He turned them down. The rest, as they say, is history… Bushnell created the Pizza Time (eventually named “Chuck E. Cheese”) chain of restaurants as a way of marketing Atari arcade games in 1977. Atari itself had been sold to Warner Communications in 1976. Bushnell founded Catalyst Technologies, a business incubator, in the early 1980s. Bushnell resigned from Chuck E. Cheese in 1984. (Source: Atari history timelines) More recently Bushnell has been part of a business venture called uWink that’s been trying to set up futuristic, technology-laden restaurants. In 2010, the third iteration of Atari (the most recent acquirer of the rights to the name and intellectual property of Atari is a company formerly known as Infogrames) announced that Bushnell had joined their board of directors. There are a bunch of other entrepreneurial ventures he was involved in that I will not list here. Check out his Wikipedia page for more details.

John Warnock worked for a time at Evans & Sutherland, and then at Xerox PARC, where he and a colleague, Charles Geschke, developed Interpress, a computer language to control laser printers. Warnock couldn’t convince Xerox management to market it, and so he and Geschke founded Adobe in 1982, where they marketed a newer laser printing control language called PostScript. In 1991, Warnock created a preliminary concept called “Camelot” that led to the Portable Document Format, otherwise known as “PDF.”

Edwin Catmull developed some fundamental concepts used in 3D graphics today, and went on to become Vice President of the computer graphics division at Lucasfilm in 1979. This division was eventually sold to Steve Jobs, and became Pixar, where Catmull became Chief Technical Officer. He was a key developer of Pixar’s RenderMan software, used in such films as Toy Story and Finding Nemo. He has received 4 Academy Awards for his technical work in cinematic computer graphics. He is today President of Walt Disney Animation Studios and Pixar Animation Studios.

Jim Clark worked for a time at Evans & Sutherland. He founded Silicon Graphics in 1982, Netscape in 1994, Healtheon in 1996 (which became WebMD), and myCFO in 1999 (an asset management company for Silicon Valley entrepreneurs, which is now called “Harris myCFO”). More recently, he co-produced the Academy award-winning documentary The Cove.

Alan Ashton co-founded Satellite Software International with one of his students, Bruce Bastian, in 1979. The company created the WordPerfect word processor. SSI was eventually renamed WordPerfect Corporation. Throughout the 1980s, WordPerfect was the leading word processing product sold on PCs. The company was bought by Novell in 1994. Ashton stayed on with Novell until 1996. He founded ASH Capital, a venture capital firm, in 1999.

For those who are interested, you can watch a 2004 presentation with Ivan Sutherland and his brother Bert, talking about their lives and work, here. The guest host for this presentation is Ivan’s long-time friend and business associate, Bob Sproull.

Dave Evans went on to help design the Arpanet, which I describe below.

Doug Engelbart and NLS

If, in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsive to every action you had, how much value could you derive from that?

— Doug Engelbart, introducing NLS in San Francisco in 1968

“Doug and staff using NLS to support 1967 meeting with sponsors — probably the first computer-supported conference. The facility was rigged for a meeting with representatives of the ARC’s research sponsors NASA, Air Force, and ARPA. A U-shaped table accommodated setup CRT displays positioned at the right height and angle. Each participant had a mouse for pointing. Engelbart could display his hypermedia agenda and briefing materials, as well as the documents in his laboratory’s knowledge base.”
— caption and photo from Doug Engelbart Institute

It’s a bit striking to me that after the interactive computing that was done in the 1950s, which I covered in Part 1, a lot of effort in the ARPA work was put into making interactive systems that operated only from a command line, rather than from a visual, graphical display. At one point Licklider asked the Project MAC team if it might be possible for everybody on the CTSS time-sharing system to have a visual display, but the answer was no. It was too expensive. In the scheme of projects that ARPA funded, NLS was in a class by itself.

In 1962, an electrical engineer named Doug Engelbart was having trouble getting his computer research project funded. The funders he approached couldn’t see the point of his idea. Most of the people he talked to were openly hostile to it. He wanted to work on interactive computing, but they’d say that computers were only meant to do accounting and the like, using batch processing. Engelbart’s vision was larger, and a little more disciplined than Licklider’s. He wanted to create a system that was specifically designed to help groups work together more effectively to solve complex problems.

Lick at ARPA/IPTO allocated funding for his project that lasted several years. Engelbart also got funding from NASA (via Bob Taylor), and the Air Force’s Rome Air Development Center. With this, he established the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI).

Engelbart envisioned word processing at a time when it had just been invented, and even then only on an experimental basis. He spent a lot of time exploring how to enhance human capability with information systems. In the mid-1960s, he envisioned the potential for online research libraries, online address books, online technical support, online discussion forums, online transcripts of discussions, etc., all cross-linked with each other. He drew inspiration from Vannevar Bush’s Atlantic Monthly article called, “As We May Think,” which was published in 1945 (Vannevar Bush is no relation to the Bush political family), and Ivan Sutherland’s Sketchpad project. Engelbart said that he came up with the idea of hyperlinked documents on his own, though he later recalled reading Bush’s article, which focused on a theoretical concept of a stored library of linked documents that Bush called a “Memex.”

The system that was ultimately created out of the ARC’s work was built on an SDS 940 time-sharing system, the system software for which was developed by Project Genie. Engelbart’s software ran in 192 kilobytes of memory, and 8 MB of hard disk space, if I recall correctly. He called the end result NLS, for “oN-Line System.” He demonstrated the system at the Fall Joint Computer Conference in San Francisco, CA, in 1968. It was all recorded. The full-length annotated video is here. It’s interesting, but gets very technical.

What Engelbart and his team accomplished was so far ahead of its time, there were people in the audience who didn’t think it was real, because, you see…computers couldn’t do this kind of stuff back then! It was real alright. What Engelbart had created was the world’s first computer mouse (that’s what people in the computing field give him credit for today, but there was so much more), one of the first systems to implement rudimentary windows on a screen, the first system to integrate graphics and text together on a screen, the first system to implement hyperlinking (these were like links on a web page, though much more sophisticated), and the first collaborative computing platform that was networked, all in one system. He demonstrated live for the audience how he and a worker in another office, in a separate building, could collaboratively work on the same system, at the same time, on a shared workspace that appeared on both of their screens. Each person was able to use their own mouse and keyboard to issue commands to the system, and edit documents. They could see each other on their terminals, since there were cameras set up in both locations to capture their images. They each had headsets with microphones, and a speaker system set up, so that they could hear each other.

The whole thing was a prototype, and a lot of infrastructure had to be set up just for the demo, to make it all work. Not all of it was digital. In the collaborative computing part of the demo, the video display of the remote worker, and the audio system, were analog. Everything else was digital. Even so, it was a revolutionary event in the history of computers. This was 16 years before the first Apple Macintosh, which couldn’t do most of this! NLS cost a lot more, too, more than a million dollars (more than $6 million in today’s money)! There is still technology Engelbart developed (which I haven’t discussed here) that has not made it into computer products yet, nor the typical web environment most of us use.

After the demo, Bob Taylor, who was the director at IPTO at this time, agreed to continue funding Engelbart’s work. However, other government priorities reduced the funding available to him, mainly due to the cost of the Vietnam War.

Considering the magnitude of this achievement, one would think there would be nothing but better and better things coming out of this project, but that was not to be. A lot of Engelbart’s technical talent left his project to work at Xerox PARC, pursuing the idea of personal computing, in the 1970s. Innovation with NLS slowed, and the system became more bloated and unwieldy with each update. ARPA could see this directly, since they ended up using NLS to do their own work. The IPTO decided to cut off Engelbart’s funding in 1975, since their goals had diverged.

NLS was transferred out of SRI to a company called Tymshare in 1978. They attempted to make it into a commercial product called “Augment.” Tymshare was bought by McDonnell Douglas in 1984. They authorized the transfer of Augment to the Bootstrap Alliance, founded by Doug Engelbart, which is now known as the Doug Engelbart Institute. NLS/Augment was ported to a modern computer system, thanks to funding from ARPA in the mid-1990s, and continues to be used at the Doug Engelbart Institute today! (source: article on NLS/Augment at The Doug Engelbart Institute)

His efforts to get his ideas out into the world would be somewhat vindicated in the research and development that was carried out by Xerox PARC, and ultimately in the computers we use today, but this is only a pale shadow compared to what he had in mind. Still, we owe Engelbart a great debt of gratitude. It is not a stretch to say that the interactive computers we’ve been using for the past 26 years would’ve come into existence quite a bit later in our history had it not been for his work. The Apple Lisa and the Apple Macintosh wouldn’t have come out in 1983 and 1984, respectively. Something like Microsoft Windows would’ve been a more recent invention. It’s possible most of us would still be using command-line interfaces today, using a text display, rather than a graphical one, were it not for him. It also should be noted that the IPTO’s funding (by Licklider) of Engelbart’s ARC project at SRI was crucial to NLS happening at all. It was this funding that lent it credibility so that Engelbart could pursue funding from the other sources I mentioned. Were it not for Licklider, Engelbart’s dream would have remained just a dream.

To really drive home the significance of Engelbart’s accomplishment, I’m including this video, created by Smeagol Studios, which demonstrates some of the technology Engelbart invented, but was eventually brought to us by others.

Edit 9/26/2022: A documentary on Engelbart’s work was produced in 2020, called, “The Augmentation of Douglas Engelbart” that does a good job of giving an overarching view of this history, and what it means.

The Arpanet

In terms of the technology we use today, all of the other projects that ARPA funded generated ideas that we see in what we use. They were inspirations to others. The Arpanet is the one ARPA project that directly resulted in a technology we use today, in the sense that you can trace, in terms of technology and management, the internet we’re using now directly back to ARPA initiatives.

The basis for the Arpanet, and later the internet, is a concept called packet switching, which divides up information into standard sized pieces, called “packets,” that have a “from” and “to” address attached to them, so that they can be routed on a network. It turned out ARPA did not invent this concept. It had gone through a couple false starts before ARPA implemented it.
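
As a rough illustration of the concept, here is a short Python sketch showing a message divided into addressed, numbered packets and reassembled at the other end. The packet size and field names are invented for the example; they are not the Arpanet’s actual formats.

    # A minimal illustration of packet switching: a message is divided into
    # fixed-size pieces, each stamped with "from" and "to" addresses and a
    # sequence number, so the pieces can travel independently and be put back
    # together at the destination.

    PACKET_SIZE = 8  # bytes of payload per packet (tiny, for illustration)

    def packetize(message, src, dst):
        data = message.encode("utf-8")
        chunks = [data[i:i + PACKET_SIZE] for i in range(0, len(data), PACKET_SIZE)]
        return [
            {"from": src, "to": dst, "seq": seq, "payload": chunk}
            for seq, chunk in enumerate(chunks)
        ]

    def reassemble(packets):
        # Packets may arrive out of order if they took different routes; the
        # sequence numbers let the destination reorder them.
        ordered = sorted(packets, key=lambda p: p["seq"])
        return b"".join(p["payload"] for p in ordered).decode("utf-8")

    packets = packetize("a message split into packets", src="host-A", dst="host-B")
    print(reassemble(reversed(packets)))  # arrives out of order, still reassembles correctly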

The earliest known version of this idea was created by Paul Baran (pronounced “Barren”) at RAND Corporation in 1964. He had done an amazing amount of work designing a packet switching network for a digital phone system (think voice over IP, as an analogy!) for the U.S. government. The big difference between his idea and what we eventually got was that he designed his system only for transmitting digitized sound information over a network, not generic computer data. The amount of design documentation he produced filled 11 volumes. This is where the idea came from to design a network that could survive a nuclear attack. Baran’s goals were dovish. He wanted to avoid all-out nuclear war with the Soviets. His rationale was that if the government’s conventional communication system was wiped out in a first strike, our government would have no choice but to launch all its nuclear missiles, because it would be deaf and blind, unable to communicate effectively with its forces, or with the Soviet government. In his scheme, the communication network would route around the damage, and continue to allow the leaders of both countries to communicate, hopefully allowing them to decelerate the conflict. It didn’t get anywhere. Baran tried working through the Defense Communications Agency to get it built, but the engineers there, and at AT&T (Baran’s system was designed to work over the existing long-distance phone network), didn’t understand the concept. Baran even proposed that the U.S. government offer his system to the Soviets, so that if the roles were reversed, they could still communicate with the U.S. That didn’t convince anybody to go ahead with it, either. It was shelved and forgotten, until it was discovered by Donald Davies, a researcher at the National Physical Laboratory (NPL) in Teddington, UK.

Davies’s idea to develop a packet switching network (Davies is the one who invented the term) was inspired by a conversation he had with Larry Roberts and Licklider at a conference on time sharing Davies hosted in 1965. Davies found Baran’s project later while researching how to build his own system. Davies had come up with the concept of using separate message processing computers to route traffic on the network, what we call “routers” today. He finished his design in 1966. He got one node going, where a bunch of terminals communicated with each other through a single message processor, demonstrating that the idea could work. Like with Baran’s system, his idea was to use the UK’s phone network. He ran into a similar problem: The British General Post Office, which at the time was in charge of the country’s phone network, didn’t see the point in it, didn’t want to fund it, and so it died. Just imagine if things had turned out differently. The Brits might’ve taken all the credit!

Bob Taylor, from Stanford University

ARPA eventually got going with its packet switching network, but it took some searching, and leadership, on the part of a psychologist named Bob Taylor. Taylor moved from NASA to the IPTO in 1965. He served under Ivan Sutherland, who had replaced Licklider as IPTO director. Taylor and Licklider had worked in the same area of psychological research, called psychoacoustics, in the 1950s, and they were both passionate about developing interactive computing.

One of the first things Taylor noticed was that the IPTO had several teletype terminals set up to communicate with their different working groups, because each working group’s computer system had its own way of communicating. He thought this was something that needed to be remedied. The thought occurred to him that there should be just one network to connect these groups.

Wes Clark once again enters the story, and introduces a new idea into this mix. I’ll give a little background.

In 1961, at MIT, Clark had come up with what was then a “backwater” concept: individual computing, the idea that computers should be small and something that individuals could afford and own. Clark had developed what was considered a “small” computer at the time. It had a portion that fit on a desk (though it took up almost the entire desk), which contained a bunch of components fitted together. It had a keyboard and a 6″ screen. The part on the desk was hooked up to a single cabinet of electronics that stood on its own. It might’ve been the world’s first minicomputer. Clark dubbed his creation the LINC, after the place where he invented it, Lincoln Lab, which is part of MIT.

It was a highly technical box. It wasn’t something that most people would want to sit down and use for the stuff we use computers to do today, but it was the beginning of a concept that would come to define the first 25 years of personal computing. The LINC came to be manufactured and sold, and was used in scientific laboratories.

Clark obtained funding from the IPTO in 1965 to continue his work in developing his individual computing technology, at Washington University in St. Louis. From what I’ve read of his history, Clark did not end up playing a role in the development of the personal computer as most people would come to know it, though he nevertheless contributed to the vision of the networked world we have today, in more ways than one.

Sutherland and Taylor met with Clark in 1965. Taylor tried out the LINC, typing out characters and seeing them appear on its small screen. As he used it, he and Clark talked about networking. The idea of the “Intergalactic Network” that Lick had talked about a couple years earlier came into Taylor’s mind. Clark regarded time sharing as a mistake. To him, the future was in networking small machines like his. Taylor could see the validity of Clark’s argument. Clark hadn’t incorporated the idea of networking into the LINC, but he thought that was something that would come along later. Taylor observed that what made time sharing a valid concept was the community it created. He asked himself questions like, “Why not expand that community into a cross-country network,” and “Why couldn’t the network serve the same function as the time-sharing system in forming communities?” Clark agreed with this idea, and urged Taylor to go for it. Clark didn’t like the idea of linking time-sharing systems together, but link thousands, or millions of individually owned computers together? Yes!

Edit 9/26/2022: Here are a couple videos talking about the history of software development around the LINC, showing a bit of how one would operate it, and a couple graphical demos. You can see Wes Clark in the second video, enjoying seeing his computer operate again, and watching the person using it take up a coding challenge on it. These are from the 2007 Vintage Computer Festival.

Note that the LINC came in two parts. The user panel sat on top of the desk, along with the keyboard. This is where the person using it would load tapes and operate the computer, and it also contained the small video display. The actual computer was in a tall cabinet, placed to the right of the person using it in these videos.

This next video gives you a better idea of what the display was like.

Licklider encouraged Taylor to pursue the idea of a network as well. Lick said the only reason he didn’t try to implement the idea while at ARPA was that the technology wasn’t ready for it. They went on to discuss ideas about how the network might be used. They thought about online collaboration, digital libraries, electronic commerce, etc. Electronic commerce is now well established, but the other areas they talked about are still being developed on the internet today.

When Taylor got back together with the IPTO’s working groups, he discussed the idea of creating a nationwide network with them involved. Most of them didn’t like the idea at all. The only one who was excited about it was Doug Engelbart. It was one thing to talk about a network, as they had in the past. It was another to actually want to do it, primarily because most of them didn’t want to share computer time with outsiders. They wanted to keep it within their own communities, because they were having enough trouble keeping computer performance up with the users they already had. Plus, they didn’t want to contribute funding to the project. All projects before this had come from the bottom up: researchers put forward proposals, and ARPA would pick which ones to fund. This was a project coming from ARPA, top-down.

Taylor talked with Charles Herzfeld, who was the director of ARPA at the time, about the idea of building the network. He liked it, and they agreed on an initial funding amount of $1 million (about $7 million in today’s money). This allayed some fears, so that investigations into how to build the network could move forward. Taylor assured the working groups that if they needed additional computing power to take on the load of being a part of the network, ARPA would find a way to provide it.

Ivan Sutherland left ARPA to become a professor of electrical engineering at Harvard in 1966, and Taylor became IPTO director. Taylor would come to play a role in the development of the Arpanet, the ancestor of the internet, and would later lead the effort to develop modern personal computers at Xerox PARC.


Larry Roberts

Larry Roberts, an electrical engineer from MIT, was brought into the Arpanet project after some arm twisting by Taylor and Herzfeld. He was made a chief scientist at ARPA, to lead the technical team that would design and build the network. Roberts, Len Kleinrock, and Dave Evans developed a preliminary design for the Arpanet in 1967, independently of Donald Davies and Paul Baran. Roberts noticed that in addition to the systems associated with the IPTO, all of the time-sharing systems that were sprouting up around the country had no way to communicate with each other. People who used one time-sharing system could communicate with each other, but they couldn’t easily share information, programs, and data with people who used a different system. Roberts wanted to help them all communicate with each other, furthering the goals of Project MAC.

Like the other ideas that came before it, the Arpanet was designed to work over AT&T’s long-distance phone network. The idea was to open dedicated phone connections and never hang up, so that data could be sent at any moment. Roberts ran into hostile resistance at the Defense Communications Agency over this idea, from some of the same people who had told Baran his idea wouldn’t work. The difference this time was that the man who wanted to implement it worked for the Pentagon, and ARPA was fully behind it. Part of ARPA’s mission was to cut through the bureaucratic red tape to get things done, and that’s what happened.

The original idea Roberts had was to have each time-sharing system on the network devote time to routing data packets on it. One of the objections raised by the ARPA working groups was that most or all of their computer time would be taken up routing packets. Unbeknownst to them at the time, this problem had already been solved by Davies over in England. Roberts changed his mind about this arrangement after a conversation with Wes Clark. Clark suggested having dedicated “interchange” computers, so that the computers used by people wouldn’t have to handle routing traffic (again, routers). Clark got this idea by thinking about the network as a highway system: highways have specialized ramps, interchanges, to get from one highway to another where they intersect. Thus were born the Arpanet’s Interface Message Processors (IMPs).
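
Here is a rough sketch, in the same toy Python style as above, of the architectural split Clark proposed. It is not BBN’s actual IMP software, and the class names, topology, and relay logic are my own simplifications: the hosts below do nothing but hand messages to an attached message processor, and the message processors relay traffic among themselves. Real IMPs maintained routing tables; this toy version just passes a packet to the first neighbor it hasn’t already visited.

    class IMP:
        """A message processor. Hosts attach to it; it does all the relaying."""
        def __init__(self, name):
            self.name = name
            self.neighbors = {}   # name -> neighboring IMP
            self.host = None      # the one host computer attached to this IMP

        def connect(self, other):
            self.neighbors[other.name] = other
            other.neighbors[self.name] = self

        def forward(self, packet, dest_name, hops=None):
            hops = hops or [self.name]
            if self.name == dest_name:          # reached the destination IMP
                self.host.receive(packet, hops)
                return
            for imp in self.neighbors.values():  # hand off to the first unvisited neighbor
                if imp.name not in hops:         # (a real IMP consulted routing tables)
                    imp.forward(packet, dest_name, hops + [imp.name])
                    return

    class Host:
        def __init__(self, name, imp):
            self.name, self.imp = name, imp
            imp.host = self

        def send(self, packet, dest_imp_name):
            # The host's only networking job: hand the packet to its IMP.
            self.imp.forward(packet, dest_imp_name)

        def receive(self, packet, hops):
            print(f"{self.name} got {packet!r} via IMPs {hops}")

    # A tiny, simplified subnet using three of the early site names for flavor.
    ucla, sri, utah = IMP("UCLA"), IMP("SRI"), IMP("UTAH")
    ucla.connect(sri)
    sri.connect(utah)
    sigma7 = Host("Sigma-7", ucla)
    sds940 = Host("SDS-940", sri)
    pdp10 = Host("PDP-10", utah)

    sigma7.send("HELLO", "UTAH")   # the sending host never sees the intermediate hop

The point of the design is visible in the last line: the host spends no cycles on routing, which answered the working groups’ main objection.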

Roberts’s first inclination was to make the data rate on the Arpanet 9.6 Kbps. He thought that would be plenty fast for what they would need. He reconsidered, though, upon meeting with Roger Scantlebury and his colleagues from the UK’s National Physical Laboratory. Scantlebury had worked with Donald Davies, and they suggested a faster data rate. Roberts saw that a speed of 50 Kbps was achievable, and decided that should be the data rate for the network (this was later increased to 56 Kbps, the same speed people now get when they use a dial-up connection to the internet). Keep in mind this was the speed of the network itself, not just the speed at which the people who used it connected to it.
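
To make those numbers concrete, here is a quick back-of-envelope calculation of how long a single packet occupies a line at each speed. The 1,000-bit packet size is just a round figure chosen for illustration, and processing and queuing delays are ignored.

    # Transmission time = packet size in bits / line rate in bits per second.
    for rate_kbps in (9.6, 50, 56):
        milliseconds = 1000 / (rate_kbps * 1000) * 1000
        print(f"{rate_kbps:>4} Kbps: {milliseconds:5.1f} ms to transmit a 1,000-bit packet")

At 9.6 Kbps each hop ties up the line for roughly a tenth of a second per packet; at 50 Kbps that drops to about 20 milliseconds, which mattered a great deal for the kind of interactive use the network was meant to support.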

Three groups worked on the implementation details for the Arpanet: the ARPA group led by Roberts; a second group at BBN led by Robert Kahn, a mathematician and electrical engineer; and a third group made up of graduate students from many different universities (really, whoever wanted to participate), who were part of what became the Network Working Group, which was coordinated by Steve Crocker at UCLA. For the techies reading this, Crocker was the one who coined the term “RFC” for the internet’s design documentation, which stands for “Request For Comments.” He was looking for something to call the design work the graduate students had done without seeming to claim authority over the effort the people at ARPA were leading.

BBN won the contract from ARPA in a competitive bidding process to manufacture and install the IMPs. BBN chose the Honeywell DDP-516 computer as the basis for them, customized to do the job. Each IMP was the size of a refrigerator. Graduate students and employees at the sites where these IMPs were installed were expected to come up with their own software (their own network stack) to send and receive packets to and from the IMPs, based on a specification that had been hammered out by the aforementioned working groups.

Crocker and the Network Working Group came up with several basic protocols for how computers would interact with each other over the Arpanet, including “telnet” and “file transfer protocol” (FTP). BBN created the system and protocol that enabled e-mail over the Arpanet in 1972. It became the most popular service on the network. These protocols would become the foundation for the protocols on the internet, as it developed several years later.

Along the way, the Arpanet’s designers had an idea that we would recognize as cloud computing today, where computers would have distributed, dedicated functionality, so that it would not have to be duplicated from machine to machine. A computer application that needed some piece of functionality could access it through the network interface. You could say that the idea of “the network is the computer” was starting to form, though from what I can tell, no one pursued this idea with vigor until recently.

Also of note, ARPA had contracts with BBN in 1969 to research how medical information could be used and accessed over a network, to see what doctors could do with online access to medical records. Perhaps this research would be relevant today.

The first node on the Arpanet was set up at UCLA in September 1969. Bob Taylor left his position as IPTO director soon after, and was replaced by Larry Roberts. Another five sites were set up at SRI, UC Santa Barbara, the University of Utah, BBN, and MIT by the end of 1970. The Arpanet became fully operational in 1975, and its management was transferred from ARPA to, ironically, the Defense Communications Agency.

Edit 9/26/2022: Here is Leonard Kleinrock describing how the Arpanet first came to life in September 1969, including some detail about why the first attempt to connect the first two nodes (in October) failed, resulting in an incomplete message. The intent was to connect the two systems by sending the message “LOGIN” from the system at UCLA to the one at SRI. The sender was going to type out the letters L-O-G, which would cause the receiving system to auto-complete the message, sending back the letters “IN”, spelling out the word “LOGIN” on both systems. However, when the “G” was sent, the system at SRI crashed, so the person at the other end only saw “LO”, the first “utterance” on the Arpanet.

Here’s a good synopsis of the history of the Arpanet I’ve just covered, done by a couple of students at Vanderbilt University.

People often look askance at nostalgic notions that there was ever such a thing as a “golden age” of anything, but Bob Taylor said of this period of computer research, “Goddammit there was a Golden Age!” It was a special time when great leaps were made in computing. A lot of it had to do with changing the dominant paradigm in how computers could be used.

What happened to the people involved?

Edit 11-23-2013: As I hear more news on these individuals, since the writing of this post, I will update this section.

Ken Olsen, who I introduced at the beginning of this post, retired from Digital Equipment Corp. in 1992. As noted earlier, DEC was purchased by Compaq in 1998, and Compaq was purchased by Hewlett-Packard in 2002. Olsen died on February 6, 2011.

Harlan Anderson was fired from DEC by Olsen in 1966 over a disagreement on what to do about the company’s growth. He founded the Anderson Investment Company in 1969. He was a trustee of Rensselaer Polytechnic Institute for 16 years. He served as a trustee of the King School in Stamford, Connecticut, and the Norwalk Community Technical College Foundation. He is currently a member of the Board of Advisors for the College of Engineering at the University of Illinois. He is also a trustee of the Boston Symphony Orchestra, and the Harlan E. Anderson Foundation. He has written a book called “Learn, Earn & Return: My Life as a Computer Pioneer.” (Sources: Wikipedia, the Boston Globe, and Rensselaer Board of Trustees)

Wes Clark left Washington University in St. Louis in 1972, and founded Clark, Rockoff, and Associates, a consulting firm in Brooklyn, New York, with his wife, Maxine Rockoff. His oldest son, Douglas Clark, is a professor of computer science at Princeton University. (Source: IEEE Computer Society) Wes Clark died on February 22, 2016.

John McCarthy worked at Stanford in the field he invented, artificial intelligence research, from 1962 until he retired in 2000. He remained a professor emeritus at Stanford until his death on October 24, 2011.

Fernando Corbató became a professor at the Department of Electrical Engineering and Computer Science at MIT in 1965. (See my note about Robert Fano below re. this department.) He was an Associate Department Head of Computer Science and Engineering at MIT from 1974 to 1978, and from 1983 to 1993. He retired in 1996. (Source: MIT Computer Science and Artificial Intelligence Laboratory) He passed away on July 12, 2019. (Source: Engadget)

Jack Ruina served as a professor of electrical engineering at MIT from 1963 to 1997, and is currently professor emeritus. During a two-year leave of absence from MIT he served as president of the Institute for Defense Analysis. In addition, he served as deputy for research to the assistant secretary of research and engineering of the U.S. Air Force, and as Assistant Director of Defense Research and Engineering for the Office of the Secretary of Defense. He also held a couple of presidential appointments to the General Advisory Committee, from 1969 to 1977, and served as senior consultant to the White House Office of Science and Technology Policy from 1977 to 1980.

Ivan Sutherland left the University of Utah in 1976 to help create the computer science department at the California Institute of Technology (Caltech). He served as professor of computer science there until 1980. In that year he founded a consulting firm with one of his students, Bob Sproull, called Sutherland, Sproull and Associates. Sproull is the son of Dr. Robert Sproull, who was one of ARPA’s directors. (Sources: BookRags.com and “the DARPA video”) The firm was purchased by Sun Microsystems in 1990, becoming Sun Labs. Sutherland became a Fellow and Vice President at Sun Microsystems. Sun was purchased by Oracle in 2010, and Sun Labs was renamed Oracle Labs. (Source: Wikipedia) Sutherland and his wife, Marly Roncken, are currently involved with computer science research at Portland State University. (Sources: Wikipedia and digenvt.com)

Bob Fano, who supervised Project MAC, stayed with the project until 1968. He became an associate department head for computer science in MIT’s Department of Electrical Engineering in 1971. Soon after, the Electrical Engineering department was renamed the Department of Electrical Engineering and Computer Science, and Project MAC was renamed the Laboratory for Computer Science (LCS). He was a member of the National Academy of Sciences, and the National Academy of Engineering. He was a fellow of the American Academy of Arts and Sciences, and of the Institute of Electrical and Electronics Engineers (IEEE). (Sources: Wikipedia.org and “Did My Brother Invent E-Mail With Tom Van Vleck?” from the New York Times opinion blog) He died on July 13, 2016. (Source: “Robert Fano, computing pioneer and founder of CSAIL, dies at 98”)

Ken Thompson was elected to the National Academy of Engineering in 1980 for his work on Unix. Bell Labs was spun off into Lucent in 1996. He retired from Lucent in the year 2000. He joined Entrisphere, Inc. as a fellow, and worked there until 2006. He now works at Google as a Distinguished Engineer.

Dennis Ritchie – I feel I would be remiss if I did not note that Ritchie developed a computer programming language called “C” as part of his work on Unix, in 1972. It came to be known and used by professional software developers, and became pervasive in the software industry in the 1990s and beyond. Most of the software people have used on computers for the past 20+ years was at some level written in C. The language continues to be used to this day, though it has gone through some revisions, and its influence is felt in many of the programming languages professional developers use. At some level, some of the software you are using to view this web page was likely written in C, or some derivative of it.

Dennis Ritchie worked in the Computing Science Research Center at Bell Labs throughout his career. Bell Labs became Lucent in 1996. Lucent and Alcatel merged in 2006. Ritchie retired from Alcatel-Lucent in 2007. He died on October 12, 2011. (sources: Wikipedia.org and Dennis Ritchie’s home page)

Dave Evans – Aside from his computer science work at the University of Utah, he was a devout member of the Church of Jesus Christ of Latter-day Saints for 27 years. I do not have documentation on when he left the University of Utah. All I’ve found is that he retired from Evans & Sutherland in 1994, and that he died on October 3, 1998.

Doug Engelbart went into management consulting after Tymshare was sold. He tried to spread his information technology vision in large organizations that he thought would benefit from having their knowledge generation and storage/retrieval processes improved. He continued his work at the Doug Engelbart Institute at SRI until his death on July 2, 2013. There are many more details about his professional career at his Wikipedia page.

Len Kleinrock joined UCLA in the mid-1960s, and never left, becoming a member of the faculty. He served as Chairman of their Computer Science Department from 1991-1995. He was President and Co-founder of Linkabit Corporation, the co-founder of Nomadix, Inc., and Founder and Chairman of TTI/Vanguard, an advanced technology forum organization. He is today a Distinguished Professor of Computer Science. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, an IEEE fellow, an ACM fellow (Association for Computing Machinery), an INFORMS fellow (Institute for Operations Research and the Management Sciences), an IEC fellow (International Electrotechnical Commission), a Guggenheim fellow (a fellowship awarded by the John Simon Guggenheim Memorial Foundation), and a founding member of the Computer Science and Telecommunications Board of the National Research Council. (Source: Len Kleinrock’s home page)

Steve Crocker has worked in the internet community since its inception. Early in his career he served as the first area director of security for the Internet Engineering Task Force (IETF). He also served on the Internet Architecture Board (IAB), the IETF Administrative Support Activity Oversight Committee (IAOC), the Board of the Internet Society, and the Board of The Studio Theatre in Washington, DC. Other items on his resume are that he managed research at the Information Sciences Institute at the University of Southern California, and at The Aerospace Corporation. He was once vice-president of Trusted Information Systems, and co-founded CyberCash, Inc., and Longitude Systems. He was Chair of ICANN’s Security and Stability Advisory Committee (SSAC) from its inception in 2002 until December 2010. He is currently chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN). He is also CEO and co-founder of Shinkuro, Inc., a start-up focusing on dynamic sharing of information on the internet, and on improved security protocols for the internet. (Source: ICANN Biographical Data)

I’ll close this post with a video that gives a wider view of what ARPA was doing during the late ’50s, when it was founded, into the mid-70s. Note that “Defense” was added to ARPA’s name in 1972, so it became “DARPA.” This is “the DARPA video” I’ve referred to a couple times. There is a brief segment with Licklider in it.

In Part 3 I cover the later events and operating prerogatives at DARPA that are mentioned in this video. I will talk a little more about Larry Roberts, Ed Feigenbaum, and Bob Kahn, more about Licklider and the IPTO, and about Robert Taylor’s work at Xerox PARC, along with Alan Kay, Chuck Thacker, and Butler Lampson, on the invention of personal computing.

—Mark Miller, https://tekkie.wordpress.com

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

16 thoughts on “A history lesson on government R&D, Part 2”

  1. Pingback: A history lesson on government R&D, Part 1 « Tekkie

  2. @omanuel:

    You are not alone in the belief that technology has screwed up our ability to think. Jerry King, a mathematician at Lehigh University, complained about something similar, saying that it’s screwed up math education in schools to the point of absurdity. I talked about that belief here. A point I keep making in discussions about this is it’s not technology screwing us up. It’s our conception about how it is useful. It’s seen as a crutch, rather than a means to expand our understanding. I recognize you from E.M.Smith’s blog. I talked about this in comments to one of his recent posts. When it comes to climate science, computer models are definitely being misused. I talk about that here. Note particularly the video I embedded done by Dr. Alan Kay. I wrote a very long piece here criticizing the state of climate science. In part of it I address computer models.

    I would pay particular attention to what Alan Kay has to say about this. He is a pioneering computer scientist, with an emphasis on “scientist.” He is a scientist at heart, and understands how people can fool themselves with computers. He said once,

    You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

    By “language system” he was alluding to writing a program for a machine, but he was making a wider point: any system we use to try to explain something is subject to this principle, “We can write anything.” I’ve made this point with regard to scientific literature. It should be read, but it should also be taken with some grains of salt. In itself it is not science, but it is part of the scientific process: communicating the findings. Actually going out and doing experiments and observation is real science, and that’s something we should do ourselves as much as we can. As Dr. Richard Lindzen has pointed out, one reason people think so many of the world’s scientists are convinced of AGW is that they say so in what they publish. He said that if you look at their data, though, there’s nothing in them that confirms the theory! They just say it because that’s what’s expected of them. They may even believe it, though they have not confirmed it through observation. Again, “We can write anything.”

    Alan Kay has tried to teach how to properly use computers to teach math and science, and in the end, how to use them in furtherance of both disciplines, together. He stresses that it’s important to go from experience to model, not the other way around. He discusses that in a video I reference in my article links above. He demonstrates it in a couple videos I show in this post.

  3. @ Mark Miller

    Thanks again. If possible please direct me to a short video or message from Dr. Alan Kay that confirms his interest in divergent points of view, not just in improved techniques of teaching or communicating.

    Reality – revealed by precise experimental measurements and observations since I started research in 1960 – differs sharply from consensus opinions that seemed to be guided away from reality by computer models:

    Life on planets is a natural part of the great dynamic universe, powered by neutron repulsion and continuously bathed in radiation (E = hv) and potential energy stored as particle rest mass (E = mc^2) flowing from pulsar cores of stars.

    About Oliver K. Manuel and the 64 years preceding Climategate

  4. @omanuel:

    I’ve done some searching to try to find what you asked, and it is rather tough. In his public statements and appearances he tends to give more critique and explanation than to encourage discussion. I see that. He has described himself as a scientist, and he has said that an aspect of science is that it welcomes a lot of ideas, but then tries to winnow out the weak ideas from the strong ones, and favors the stronger ideas via “negotiation with Nature,” and criticism from fellow scientists regarding the thoughts that arise from that “negotiation.” I’m paraphrasing. I’m sure this isn’t exactly how he would describe his view on it.

    The closest thing I found to a statement along the lines you’re asking was this comment from a discussion on the Java programming language:

    I certainly agree that it is a good idea for agnostics to know not just the Bible, but as much as possible about religions and anthropology. Especially if one is interesting in not just doing science but also being able to explain it and to help those who believe otherwise to learn it.

    So I definitely agree that the kids need to “learn Java”.

    However, I think that it is difficult to learn medicine in a Christian Science hospital (if there were any). And it is difficult to learn science in a fundamentalist church.

    So I definitely think it is difficult to learn computing in a Java sect (which so much of academia has lapsed into).

    For many years I’ve put forth the idea that “there is no language good enough to be learned as ‘the one’ in isolation”.

    Because of “imprinting of concepts” and even though it is difficult, several (maybe three) very different languages should be taught at the same time.

    In other words, the epistemological stance of the educative environment is what is really important here. For most things, including science, religion and computing, this should be a “critical of everything” and comparative stance whose purpose is not to trash but to learn for both good and bad features.

    Another worthy point here is that academia (and business) have been chasing the red herring of “programming” for far too long, and have missed that the main concerns should be “systems” and “scaling”. Once the ARPA community started thinking about “intergalactic networks” (as Lick termed them), it also had to start thinking about “communication with aliens” (also a Lick idea).

    In other words, at intergalactic scales (say, over the entire planet to start with), it won’t be possible to create any kind of detailed orthodoxy, so a real problem to be solved is how to inter-operate and coordinate diverse offerings successfully into larger systems.

    This was what OOP was supposed to be all about in supplying modules that were essentially virtual machines with some form of pure messaging between them. This was successfully done at the physical computer level with TCP/IP, and Smalltalk at Xerox PARC was a lesser gesture at the software level, but a pointer in this direction.

    But the C community, etc., completely missed what was really going on in computing vis a vis systems and scaling, etc. And it is this set of misconceptions that we find in many universities and businesses today — and one of many poor results in this directly was Java.

    The big purpose of education goes far beyond knowledge and fluency — to critical multiple perspectives that allow shifts to more powerful points of view.

    If computing could get back to this, then it would know in what fashion students should “learn Java”.

    Best wishes.

    Alan

    Just some notes: his comments are specifically targeted at the computer science community. He argues that today it tends to prefer consensus over inquiry, and that the consensus tends to seem more like a religious orthodoxy than a science, and he then describes what would be a practice closer to science for computer science. I think it’s in that that he reveals some of his attitudes toward “divergent points of view,” as you termed it.

    He refers to some analogies that Lick used (Lick is a major figure in what I wrote in my post here). I talked about the “Intergalactic Network” concept in my post. One idea that Kay has been talking about recently is this “communicating with aliens” concept. What he (and Lick) meant by it (Kay explains a bit of it in his comment here) is a concept of communication, and how it applies to software systems: as predicted, there are different systems on our global network (the internet), and it’s difficult, if not impossible at times, to get them to interoperate with each other. This is the “alien” concept he spoke of, and a problem he puts forward is how to get these alien systems communicating so they can interoperate, though I think he sees implications beyond computing, into our human experience.

    This is the best I could do in answering your request. I don’t know if I’ve done him justice.

  5. Thank you Mark, for your time and consideration.

    Slowly, painfully slowly, I finally found words to conclude Climategate:

    From Hiroshima in Aug 1945 to Climategate in Nov 2009

    Living inside the “sphere of influence” of the Sun’s pulsar core
    Is like living in electron orbits around an atom’s nuclear core
    We are humbly connected to RTG (Reality, Truth, God), or
    We are arrogantly connected to false illusions of control !

    THERE ARE ABSOLUTELY NO EXCEPTIONS

    http://omanuel.wordpress.com/

  6. Pingback: Doug Engelbart has passed away | Tekkie

  7. Pingback: If you’re entering computer science, please look at this | Tekkie

  8. Pingback: Were the first movie computer graphics in a Hitchcock film? | Tekkie

  9. Pingback: A history lesson on government R&D, Part 3 | Tekkie

  10. Pingback: Alan Kay: Rethinking CS education | Tekkie

  11. Pingback: A history lesson on government R&D, Part 4 | Tekkie

  12. Pingback: A history lesson on government R&D Part 2 | Mark's favorites

  13. Pingback: My latest blog post on government-funded research that led to quite a bit of the technology we use today | Mark's favorites

  14. Pingback: From ARPANET to AI: A Historical Journey Through the Internet’s Milestones – TUTOR PUH
