
See Part 1, Part 2, and Part 3

It’s been a while since I’ve worked on this. I used up a lot of my passion for working on this series in the first three parts. Working on this part was like eating my spinach, not something I was enthused about, but it was a good piece of history to learn.

Nevertheless, I am dedicating this to Bob Taylor, who passed away on April 13. He got the ball rolling on building the Arpanet, when a lot of his colleagues in the Information Processing Techniques Office at ARPA didn’t want to do it. This was the predecessor to the internet. When he started up the Xerox Palo Alto Research Center (PARC), he brought in people who ended up being important contributors to the development of the internet. His major project at PARC, though, was the development of the networked personal computer in the 1970s, which I covered in Part 3 of this series. He left Xerox and founded the Systems Research Center at Digital Equipment Corp. in the mid-80s. In the mid-90s, while at DEC, he developed the internet search engine AltaVista.

He and J.C.R. Licklider were both ARPA/IPTO directors, and both were psychologists by training. Among all the people I’ve covered in this series, both of them had the greatest impact on the digital world we know today, in terms of providing vision and support to the engineers who did the hard work of turning that vision into testable ideas that could then inspire entrepreneurs who brought us what we are now using.

Part 2 covers the history of how packet switching was devised, and how the Arpanet got going. You may wish to look that over before reading this article, since I assume that background knowledge.

My primary source for this story, as it’s been for this series, is “The Dream Machine,” by M. Mitchell Waldrop, which I’ll refer to occasionally as “TDM.”

Ethernet: The first internetworking protocol

Bob Metcalfe,
from University of Texas at Austin

In 1972, Bob Metcalfe was a full-time staffer on Project MAC, building out the Arpanet and using the work as the basis for his Ph.D. in applied mathematics from Harvard. While setting up the Arpanet node at Xerox PARC, he stumbled upon ALOHAnet, an ARPA-funded project created by Norman Abramson at the University of Hawaii. It was a radio-based packet-switching network. The Hawaiian islands created a natural environment for this solution to arise, since telephone connections across the islands were unreliable and expensive.

The Arpanet’s protocol waited for a gap in traffic before sending packets. On ALOHAnet, this would have been impractical: since radio waves were prone to interference, a sending terminal might not even hear a gap. So terminals sent packets immediately to a transceiver switch. (A network switch is a computer that receives packets and forwards them to their destination; in this case, it retransmitted the sender’s signal to other nodes on the network.) If the sending terminal received an acknowledgement from the switch, it knew its packets had reached their destination successfully. If not, it assumed a collision with another sending terminal had garbled the packet signal, so it waited a random period of time before resending the lost packets, the idea being to try again when other terminals were not sending. The scheme had a weakness, though: as Abramson designed it, the network could only operate at about 17% of capacity, because above that the random-wait scheme became overloaded with collisions. If use kept increasing toward that limit, the system would eventually grind to a halt.

Metcalfe thought this could be improved upon, and ended up collaborating with Abramson, incorporating his improvements into his Ph.D. thesis. He came to work at PARC in 1972 to use what he’d learned on ALOHAnet to develop a networking scheme for the Xerox Alto. Instead of radio signals he used coaxial cable, and found he could transmit data at a much higher speed that way. He called what he developed Ethernet.
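
To make the ALOHA idea concrete, here is a minimal sketch of the send-and-randomly-back-off logic in Python. The function names, timeout, and retry limit are my own illustrative choices, not anything from Abramson’s actual implementation.

```python
import random
import time

ACK_TIMEOUT = 0.2     # seconds to wait for the switch's acknowledgement (illustrative)
MAX_ATTEMPTS = 8      # give up after this many presumed collisions (illustrative)

def send_with_random_backoff(packet, transmit, wait_for_ack):
    """Send immediately -- don't listen for a gap -- and retry after a
    random delay whenever the acknowledgement doesn't come back."""
    for attempt in range(MAX_ATTEMPTS):
        transmit(packet)                  # fire the packet at the transceiver switch
        if wait_for_ack(ACK_TIMEOUT):     # ack received: the packet got through
            return True
        # No ack: assume another terminal transmitted at the same time and
        # garbled the signal. Wait a random interval so the two senders are
        # unlikely to collide again on the retry.
        time.sleep(random.uniform(0, 0.05 * (attempt + 1)))
    return False
```

Under light traffic this works fine; the trouble Abramson ran into is that once the channel gets busy enough (the 17% figure above), the retries themselves start colliding, and throughput collapses.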

The idea behind it was to create local area networks, or LANs, using coaxial cable, which would allow the same kind of networking one could get on the Arpanet, but in an office environment, rather than between machines spread across the country. Rather than having every machine on the main network, there would be subnetworks that would connect to a main network at junction points. Along with that, Metcalfe and David Boggs developed the PUP (PARC Universal Packet) protocol. This allowed Ethernet to connect to other packet switching networks that were otherwise incompatible with each other. The developers of TCP/IP, the protocol used on the internet, would pursue these same goals a little later, and begin their own research on how to accomplish them.

In 1973, Bob Taylor set a goal in the Computer Science Lab at Xerox PARC to make the Alto computer networked. He said he wanted not only personal computing, but distributed personal computing, at the outset.

The creation of the Internet

Bob Kahn,
from the ACM

Another man who enters the picture at this point is Bob Kahn. He joined Larry Roberts at DARPA in 1972, originally to work on a large project exploring computerized manufacturing, but that project was cancelled by Congress the moment he got there. Kahn had been working on the Arpanet at BBN, and he wanted to get away from it. Well…Roberts informed him that the Arpanet was all they were going to be working on in the IPTO for the next several years, but he persuaded Kahn to stay on, and work on new advancements to the Arpanet, such as mobile satellite networking, and packet radio networking, promising him that he wouldn’t have to work on maintaining and expanding the existing system. DARPA had signed a contract to expand the Arpanet into Europe, and networking by satellite was the route that they hoped to take. Expanding the packet radio idea of ALOHAnet was another avenue in which they were interested. The idea was to look into the possibility of allowing mobile military forces to communicate in the field using these wireless technologies.

Kahn got satellite networking going using the Intelsat IV satellite. He planned out new ideas for network security and digital voice transmission over the Arpanet, and mobile terminals that the military could use via packet radio. He had the thought, though: how are all of these different modes of communication going to communicate with each other?

As you’ll see in this story, Bob Kahn and Vint Cerf on one side, and Bob Metcalfe on the other, in a sense ended up duplicating each other’s efforts: Kahn working at DARPA, Cerf at Stanford, and Metcalfe at Xerox in Palo Alto. They came upon the same basic concepts for how to create internetworking independently. Metcalfe just came upon them a bit earlier than everyone else.

Within the DARPA community, Kahn thought, it would seem easy enough to make the different networks he had in mind communicate seamlessly: just make minor modifications to the Arpanet protocol for each system. He also wondered, though, about future expansion of the network with new modes of communication that weren’t on the front burner yet, but probably would be one day. Making one-off modifications to the Arpanet protocol for each new communications method would eventually make network maintenance a mess. It would make the network more and more unwieldy as it grew, which would limit its size. He understood from the outset that this method of network augmentation was a bad management and engineering approach.

Instead of integrating the networks together, he thought a better method was to design the network’s expansion modularly, where each network would be designed and managed separately, with its own hardware, software, and network protocol. Gateways would be created via hybrid routers that had Arpanet hardware and software on one side, and the foreign network’s hardware and software on the other side, with a computer in the middle translating between the two.

Kahn recognized how this structure could be criticized. The network would be less efficient, since each packet from a foreign network to the Arpanet would have to go through three stages of processing instead of one. It would be less reliable, since there would be three computers that could break instead of one. It would be more complex, and more expensive. The advantage would be that something like satellite communication could be managed completely independently of the Arpanet. So long as both the Arpanet and the foreign network adhered to the gateway interface standard, it would work: you could just connect them up, and the two networks could start sending and receiving packets. The key design idea is that neither network would have to know anything about the internal details of the other. Instead of a closed system, it would be open, a network of networks that could accommodate any specialized network. This same scheme is used on the internet today.

Kahn needed help in defining this open concept. So, in 1973, he started working with Vint Cerf, who had also worked on the Arpanet. Cerf had just started work at the computer science department of Stanford University.

Vint Cerf,
from Wikipedia

Cerf came up with an idea for transporting packets between networks. First, a universal transmission protocol would be recognized across the whole network. Each sending network would encode its packets with this universal protocol, and “wrap” them in an “envelope” whose “markings” the sending network understood in its own protocol. The gateway computer would know about the “envelopes” that the sending and receiving networks understood, along with the universal protocol encoding they contained. When the gateway received a packet, it would strip off the sending “envelope” and interpret the universal protocol enclosed within it. Using that, it would wrap the packet in a new “envelope” that the destination network could use to carry the packet through its own system. Waldrop had a nice analogy for how it worked. He said it would be as if you were sending a card from the U.S. to Japan. In the U.S., the card would be placed in an envelope that had your address and the destination address written in English. At the point when it left the U.S. border, the U.S. envelope would be stripped off, and the card would be placed in a new envelope, with the source and destination addresses translated to Kanji, so it could be understood when it reached Japan. This way, each network would “think” it was transporting packets using its own protocol.
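
Here is a toy sketch of that envelope-swapping step at the gateway, as I understand Waldrop’s analogy. The class names and fields are invented for illustration; they are not the actual structures from Cerf and Kahn’s 1974 design.

```python
from dataclasses import dataclass

@dataclass
class UniversalPacket:
    """The 'card': addressed and encoded in the shared internetwork protocol."""
    src: str
    dst: str
    payload: bytes

@dataclass
class Envelope:
    """A network-specific wrapper that only the local network needs to understand."""
    network: str          # e.g. "ARPANET" or "satellite net" (labels are illustrative)
    local_header: dict    # whatever framing/addressing that network uses internally
    inner: UniversalPacket

def gateway_forward(incoming: Envelope, dest_network: str) -> Envelope:
    """Strip the sending network's envelope, keep the universal packet intact,
    and re-wrap it in an envelope the destination network understands."""
    card = incoming.inner                       # the universal packet crosses unchanged
    return Envelope(network=dest_network,
                    local_header={"deliver_to": card.dst},
                    inner=card)
```

The point of the design is visible in the code: the gateway never touches the “card” itself, so neither network needs to know anything about the other’s internals.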

Kahn and Cerf also understood the lesson from ALOHAnet. In their new protocol, senders wouldn’t look for gaps in transmission, but send packets immediately, and hope for the best. If any packets were missed at the other end, due to collisions with other packets, they would be re-sent. This portion of the protocol was called TCP (Transmission Control Protocol). A separate protocol was created to manage how to address packets to foreign networks, since the Arpanet didn’t understand how to do this. This was called IP (Internet Protocol). Together, the new internetworking protocol was called TCP/IP. Kahn and Cerf published their initial design for the internet protocol in 1974, in a paper titled, “A Protocol for Packet Network Interconnection.”

Cerf started up his internetworking seminars at Stanford in 1973, in an effort to create the first implementation of TCP/IP. Cerf described his seminars as being as much about consensus as technology. He was trying to get everyone to agree to a universal protocol. People came from all over the world, in staggered fashion. It was a drawn out process, because each time new attendees showed up, the discussion had to start over again. By the end of 1974, the first detailed design was drawn up in a document, and published on the Arpanet, called Request For Comments (RFC) 675. Kahn chose three contractors to create the first implementations: Cerf and his students at Stanford, Peter Kirstein and his students at University College in London, and BBN, under Ray Tomlinson. All three implementations were completed by the end of 1975.

Bob Metcalfe, and a colleague named John Schoch from Xerox PARC, eagerly joined in with these seminars, but Metcalfe felt frustrated, because his own work on Ethernet, and the universal protocol he and his colleagues had developed to interface Ethernet with the Arpanet and Data General’s network, PUP (PARC Universal Packet), were proprietary. He was able to make contributions to TCP/IP, but couldn’t overtly contribute much, or else it would jeopardize Xerox’s patent application on Ethernet. He and Schoch were able to make some covert contributions by asking some leading questions, such as, “Have you thought about this?” and “Have you considered that?” Cerf picked up on what was going on, and finally asked them, in what I can only assume was a humorous moment, “You’ve done this before, haven’t you?” Also, Cerf’s emphasis on consensus was taking a long time, and Metcalfe eventually decided to part with the seminars, because he wanted to get back to work on PUP at Xerox. In retrospect, he had second thoughts about that decision. He and David Boggs at Xerox got PUP up and running well before TCP/IP was finished, but it was a proprietary protocol. TCP/IP was not, and by making his decision, he had cut himself off from influencing the direction of the internet into what it eventually became.

PARC’s own view of Ethernet was that it would be used by millions of other networks. The initial vision of the TCP/IP group was that it would be an extension of Arpanet, and as such, would only need to accommodate the needs that DARPA had for it. Remember, Kahn set out for his networking scheme to accommodate satellite, and military communications, and some other uses that he hadn’t thought of yet. Metcalfe saw that TCP/IP would only accommodate 256 other networks. He said, “It could only work with one or two nets per country!” He couldn’t talk to them about the possibility of millions of networks using it, because that would have tipped others off to proprietary technology that Xerox had for local area networks (LANs). Waldrop doesn’t address this technical point of how TCP/IP was expanded to accommodate beyond 256 networks. Somehow it was, because obviously it’s expanded way beyond that.
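
Waldrop doesn’t go into it, and the following is general IP addressing history rather than anything from TDM: the earliest design reserved only 8 bits for the network number, the 1981 IP specification (RFC 791) replaced that with “classful” addresses whose network field could be 8, 16, or 24 bits wide, and CIDR did away with the fixed classes altogether in 1993. A rough sketch of the classful rule:

```python
def classful_network_bits(first_octet: int) -> int:
    """Width of the network-number field under 1981 classful addressing
    (simplified: classes D and E are ignored here)."""
    if first_octet < 128:     # class A: 0xxxxxxx -> 8-bit network field, ~126 networks
        return 8
    elif first_octet < 192:   # class B: 10xxxxxx -> 16-bit field, ~16,000 networks
        return 16
    else:                     # class C: 110xxxxx -> 24-bit field, ~2 million networks
        return 24
```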

Several years later, people on the project started using the term “Internet,” which was just a shortened version of the term “internetworking.” It took many years before TCP/IP was considered stable enough to port the Arpanet over to it completely. The Department of Defense adopted TCP/IP as its official standard in 1980, and the Arpanet was converted to TCP/IP by January 1, 1983, thus creating the Internet. This made it much easier to expand the network than was the case with the Arpanet’s old protocol, because TCP/IP incorporates protocol translation into the network infrastructure. So it doesn’t matter if a new network comes along with its own protocol. It can be brought into the internet, with a junction point that translates its protocol.

We should keep in mind as well that at the point the internet officially came online, nothing had changed with respect to the communications hardware (which I covered in Part 3). 56 Kbps was still the maximum speed of the network, as all network communications were still conducted over modems and dedicated long-distance phone lines. This became an issue as the network expanded rapidly.

The TCP/IP working group worked on porting the protocol (called the Kahn-Cerf internetworking protocol in the early days) to many operating systems, and migrating subprotocols from the Arpanet to the Kahn-Cerf network, from the mid-1970s into 1980.

The next generation of the internet

While the founding of the internet would seem to herald an age of networking harmony, this was not the case. Competing network standards proliferated before, during, and after TCP/IP was developed, and the networking world was just as fragmented when TCP/IP got going as before. People who got on one network could not transfer information, or use their network to interact with people and computers on other networks. The main reason for this was that TCP/IP was still a defense program.

The National Science Foundation (NSF) would unexpectedly play the critical role of expanding the internet beyond the DoD. The NSF was not known for funding risky, large-scale, enterprising projects. The people involved with computing research never had much respect for it, because it was so risk-averse, and politicized. Waldrop said,

Unlike their ARPA counterparts, NSF funding officers had to submit every proposal for “peer-review” by a panel of working scientists, in a tedious decision-by-committee process that (allegedly) quashed anything but the most conventional ideas. Furthermore, the NSF had a reputation for spreading its funding around to small research groups all over the country, a practice that (again allegedly) kept Congress happy but most definitely made it harder to build up a Project MAC-style critical mass of talent in any one place.

Still, the NSF was the only funding agency chartered to support the research community as a whole. Moreover, it had a long tradition of undertaking … big infrastructure efforts that served large segments of the community in common. And on those rare occasions when the auspices were good, the right people were in place, and the planets were lined up just so, it was an agency where great things could happen. — TDM, p. 458

In the late 1970s, researchers at “have not” universities began to complain that while researchers at the premier universities had access to the Arpanet, they didn’t. They wanted equal access. So, the NSF was petitioned by a couple schools in 1979 to solve this, and in 1981, the NSF set up CSnet (Computer Science Network), which linked smaller universities into the Arpanet using TCP/IP. This was the first time that TCP/IP was used outside of the Defense Department.

Steve Wolff,
from Internet2

The next impetus for expanding the internet came from physicists, who thought of creating a network of supercomputing centers for their research, since using pencil and paper was becoming untenable, and they were having to beg for time on supercomputers at Los Alamos and Lawrence Livermore that were purchased by the Department of Energy for nuclear weapons development. The NSF set up such centers at the University of Illinois, Cornell University, Princeton, Carnegie Mellon, and UC San Diego in 1985. With that, the internet became a national network, but as I said, this was only the impetus. Steve Wolff, who would become a key figure in the development of the internet at the NSF, said that as soon as the idea for these centers was pitched to the NSF, it became instantly apparent to them that this network could be used as a means for communication among scientists about their work. That last part was something the NSF just “penciled in,” though. It didn’t make plans for how big this goal could get. From the beginning, the NSF network was designed to allow communication among the scholarly community at large. The NSF tried to expand the network outward from these centers to K-12 schools, museums, “anything having to do with science education.” We should keep in mind how unusual this was for the NSF. Waldrop paraphrased Wolff, saying,

[The] creation of such a network exceeded the foundation’s mandate by several light-years. But since nobody actually said no—well, they just did it. — TDM, p. 459

In 1985, the NSF declared TCP/IP its official standard for all digital network projects it would sponsor from then on. The basic network that the NSF set up was enough to force other government agencies to adopt TCP/IP as their network standard. It also forced computer manufacturers, like IBM and DEC, to support TCP/IP alongside the networking protocols they were pushing themselves, because the government was too big a customer to ignore.

The expanded network, known as “NSFnet,” came online in 1986.

And the immediate result was an explosion of on-campus networking, with growth rates that were like nothing anyone had imagined. Across the country, those colleges, universities, and research laboratories that hadn’t already taken the plunge began to install local-area networks on a massive scale, almost all of them compatible with (and having connections to) the long-distance NSFnet. And for the first time, large numbers of researchers outside the computer-science departments began to experience the addictive joys of electronic mail, remote file transfer and data access in general. — TDM, p. 460

That same year, a summit of network representatives came together to address the problem of naming computers on the internet, and assigning network addresses. From the beginnings of the Arpanet, a paper directory of network names and addresses had been updated and distributed to the different computer sites on the network, so that everyone would know how to reach each other’s computers. There were so few computers on it that nobody worried about naming conflicts. By 1986 this system was breaking down. There were so many computers on the network that assigning names to them started to be a problem. As Waldrop said, “everyone wanted to claim ‘Frodo’.” The representatives came to the conclusion that they needed to automate the process of creating the directory for the internet. They called it the Domain Name System (DNS). (I remember getting into a bit of an argument with Peter Denning in 2009 on the issue of the lack of computer innovation after the 1970s. He brought up DNS as a counter-example. I assumed DNS had come in with e-mail in the 1970s. I can see now I was mistaken on that point.)
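
To illustrate just the naming point: hierarchical domain names mean a host label only has to be unique within its own domain, so any number of sites can have a “frodo.” The zone data and addresses below are made up, and real DNS resolution delegates each lookup down a hierarchy of name servers rather than consulting one flat table.

```python
# Toy resolver with invented zone data; real DNS walks a server hierarchy.
ZONES = {
    "cs.utexas.edu": {"frodo": "10.1.0.5"},
    "lcs.mit.edu":   {"frodo": "10.2.0.9"},
}

def resolve(hostname: str) -> str:
    host, _, domain = hostname.partition(".")   # split "frodo" from its domain
    return ZONES[domain][host]

print(resolve("frodo.cs.utexas.edu"))   # -> 10.1.0.5
print(resolve("frodo.lcs.mit.edu"))     # -> 10.2.0.9
```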

Then there was the problem of speed,

“When the original NSFnet went up, in nineteen eighty-six, it could carry fifty-six kilobits per second, like the Arpanet,” says Wolff. “By nineteen eighty-seven it had collapsed from congestion.” A scant two years earlier there had been maybe a thousand host computers on the whole Internet, Arpanet included. Now the number was more like ten thousand and climbing rapidly. — TDM, p. 460

This was a dire situation. Wolff and his colleagues quickly came up with a plan to increase the internet’s capacity by a factor of 30, increasing its backbone bandwidth to 1.5 Mbps, using T1 lines. They didn’t really have the authority to do this. They just did it. The upgrade took effect in July 1988, and usage exploded again. In 1991, Wolff and his colleagues boosted the network’s capacity by another factor of 30, going to T3 lines, at 45 Mbps.

About Al Gore

Before I continue, I thought this would be a good spot to cover this subject, because it relates to what happened next in the internet’s evolution. Bob Taylor said something about this in the Q&A section in his 2010 talk at UT Austin. (You can watch the video at the end of Part 3.) I’ll add some more to what he said here.

Gore became rather famous for supposedly saying he “invented” the internet. Since politics is necessarily a part of discussing government R&D, I feel it’s necessary to address this issue, because Gore was a key figure in the development of the internet, but the way he described his involvement was confusing.

First of all, he did not say the word “invent,” but I think it’s understandable that people would get the impression that he said he invented the internet, in so many words. This story originated during Gore’s run for the presidency in 1999, when he was Clinton’s vice president. What I quote below is from an interview with Gore on CNN’s Late Edition with Wolf Blitzer:

During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth, environmental protection, improvements in our educational system, during a quarter century of public service, including most of it coming before my current job. I have worked to try to improve the quality of life in our country, and our world. And what I’ve seen during that experience is an emerging future that’s very exciting, about which I’m very optimistic …

The video segment I cite cut off after this, but you get the idea of where he was going with it. He used the phrase, “I took the initiative in creating the Internet,” several times in interviews with different people. So it was not just a slip of the tongue. It was a talking point he used in his campaign. There are some who say that he obviously didn’t say that he invented the internet. Well, you take a look at that sentence and try to see how one could not infer that he said he had a hand in making the internet come into existence, and that it would not have come into existence but for his involvement. In my mind, if anybody “took the initiative in creating the Internet,” it was the people involved with ARPA/IPTO, as I’ve documented above, not Al Gore! You see, to me, the internet was really an evolutionary step made out of the Arpanet. So the internet really began with the Arpanet in 1969, at the time that Gore had enlisted in the military, just after graduating from Harvard.

The term “invented” was used derisively against him by his political opponents to say, “Look how delusional he is. He thinks he created it.” Everyone knew his statement could not be taken at face value. His statement was, in my own judgment, a grandiose and self-serving claim. At the time I heard about it, I was incredulous, and it got under my skin, because I knew something about the history of the early days of the Arpanet, and I knew it didn’t include the level of involvement his claim stated, when taken at face value. It felt like he was taking credit where it wasn’t due. However, as you’ll see in statements below, some of the people who were instrumental in starting the Arpanet, and then the internet, gave Gore a lot of slack, and I can kind of see why. Though Gore’s involvement with the internet didn’t come into view until the mid-1980s, apparently he was doing what he could to build political support for it inside the government all the way back in the 1970s, something that has not been visible to most people.

Gore’s approach to explaining his role, though, blew up in his face, and that was the tragedy in it, because it obscured his real accomplishments. A more precise way of talking about his involvement would have been for him to say, “I took initiatives that helped create the Internet as we know it today.” The truth of the matter is he deserves credit for providing and fostering political, intellectual, and financial support for a next-generation internet that would support the transmission of high-bandwidth content, what he called the “information superhighway.” Gore’s father, Al Gore, Sr., sponsored the bill in the U.S. Senate that created the Interstate highway system, and I suspect that Al Gore, Jr., or those in his campaign, wanted him to be placed right up there with his father in the pantheon of leaders of great government infrastructure projects that have produced huge dividends for this country. That would’ve been nice, but seeing him overstate his role took him down a peg, rather than raising him up in public stature.

Here are some quotes from supporters of Gore’s efforts, taken from a good Wikipedia article on this subject:

As far back as the 1970s Congressman Gore promoted the idea of high-speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship […] the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises.

— a joint statement by Vint Cerf and Bob Kahn

The sense I get from the above statement is that Gore was a political and intellectual cheerleader for the internet in the 1970s, spreading a vision about how it could be used in the future, to help build political support for its future funding and development. The initiative Gore talked about I think is expressed well in this statement:

A second development occurred around this time, namely, then-Senator Al Gore, a strong and knowledgeable proponent of the Internet, promoted legislation that resulted in President George H.W Bush signing the High Performance Computing and Communication Act of 1991. This Act allocated $600 million for high performance computing and for the creation of the National Research and Education Network. The NREN brought together industry, academia and government in a joint effort to accelerate the development and deployment of gigabit/sec networking.

— Len Kleinrock

This is basically accurate, but more of a summary. Here is some more detail from TDM.

In 1986, Democratic Senator Al Gore, who had a keen interest in the internet, and in the development of the networked supercomputing centers created in 1985, asked for a study on the feasibility of getting gigabit network speeds using fiber optic lines to connect up the existing computing centers. This created a flurry of interest from a few quarters. DARPA, NASA, and the Energy Department saw opportunities in it for their own projects. DARPA wanted to include it in its Strategic Computing Initiative, begun by Bob Kahn in the early 1980s. DEC saw a technological opportunity to expand upon ideas it had been developing. To Gore’s surprise, he received a multiagency report in 1987 advocating a government-wide “assault” on computer technology. It recommended that Congress fund a billion-dollar research initiative in high-performance computing, with the goal of creating computers in several years whose performance would be an order of magnitude greater than the fastest computers available at the time. It also recommended starting the National Research and Education Network (NREN), to create a network that would send data at gigabits per second, an order of magnitude greater than what the internet had just been converted to.

In 1988, after getting more acquainted with these ideas, Gore introduced a bill in Congress to fund both initiatives, called the Gore Bill. It put DARPA in charge of developing the gigabit network technology, officially authorized and funded the NSF’s expansion of NSFnet, and gave the NSF the explicit mission of connecting the whole federal government and the university system of the United States to the internet. It ran into a roadblock, though, from the Reagan Administration, which argued that these initiatives for faster computers and faster networking should be left to the private sector.

This stance causes me to wonder if perhaps there was a mood to privatize the internet back then, but it just wasn’t accomplished until several years later. I talked with a friend a few years ago about the internet’s history. He had been a tech innovator for many years, and he said he was lobbying for the network to be privatized in the ’80s. He said he and other entrepreneurs he knew were chomping at the bit to develop it further. Perhaps there wasn’t a mood for that in Congress.

The funding for high-performance computing research in Gore’s original bill was passed in 1991, under the George H.W. Bush Administration. The second half, on funding the gigabit network architecture, was passed in 1993, as the High-Performance Computing and Communications Act (HPCCA), under President Clinton.

Waldrop says, though, that this bump in the road hardly mattered, because the agencies were quietly setting up a national network on their own. Gordon Bell at the NSF said that their network was getting bigger and bigger. Program directors from the NSF, NASA, DARPA, and the Energy Department created their own ad hoc shadow agency, called the Federal Research Internet Coordinating Committee (FRICC). If they agreed on a good idea, they found money in one of their agencies’ budgets to do it. This committee would later be reconstituted officially as the Federal Networking Council. These agencies also started standardizing their networks around TCP/IP. By the beginning of the 1990s, the de facto national research network was officially in place.

Marc Andreessen said that the development of Mosaic, the first widely used web browser, and the prototype for Netscape’s browser, couldn’t have happened when it did without funding from the HPCCA. “If it had been left to private industry, it wouldn’t have happened. At least, not until years later,” he said. Mosaic was developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois in 1993. Netscape was founded in April 1994.

The 2nd generation network takes shape

Recognizing in the late ’80s that the megabit speeds of the internet were making the Arpanet a dinosaur, DARPA started decommissioning its Interface Message Processors (IMPs), and by 1990, the Arpanet was officially offline.

By 1988, there were 60,000 computers on the internet. By 1991, there were 600,000.

CERN in Switzerland had joined the internet in the late 1980s. In 1990, an English physicist working at CERN, named Tim Berners-Lee, created his first system for hyperlinking documents on the internet, what we would now know as a web server and a web browser. Berners-Lee had actually been experimenting with data linking long before this, long before he’d heard of Vannevar Bush, Doug Engelbart, or Ted Nelson. In 1980, he linked files together on a single computer, to form a kind of database, and he had repeated this metaphor for years in other systems he created. He had written his World Wide Web tools and protocol to run on the NeXT computer. It would be more than a year before others would implement his system on other platforms.

How the internet was privatized

Wolff, at NSF, was beginning to try to get the internet privatized in 1990. He said,

“I pushed the Internet as hard as I did because I thought it was capable of becoming a vital part of the social fabric of the country—and the world,” he says. “But it was also clear to me that having the government provide the network indefinitely wasn’t going to fly. A network isn’t something you can just buy; it’s a long-term, continuing expense. And government doesn’t do that well. Government runs by fad and fashion. Sooner or later funding for NSFnet was going to dry up, just as funding for the Arpanet was drying up. So from the time I got here, I grappled with how to get the network out of the government and instead make it part of the telecommunications business.” — TDM, p. 462

The telecommunications companies, though, didn’t have an interest in taking on building out the network. From their point of view, every electronic transaction was a telephone call that didn’t get made. Wolff said,

“As late as nineteen eighty-nine or ‘ninety, I had people from AT&T come in to me and say—apologetically—’Steve, we’ve done the business plan, and we just can’t see us making any money.'” — TDM, p. 463

Wolff had planned from the beginning of NSFnet to privatize it. After the network got going, he decentralized its management into a three-tiered structure. At the lowest level were campus-scale networks operated by research laboratories, colleges, and universities. In the middle level were regional networks connecting the local networks. At the highest level was the “backbone” that connected all of the regional networks together, operated directly by the NSF. This scheme didn’t get fully implemented until they did the T1 upgrade in 1988.

Wolff said,

“Starting with the inauguration of the NSFnet program in nineteen eighty-five,” he explains, “we had the hope that it would grow to include every college and university in the country. But the notion of trying to administer a three-thousand-node network from Washington—well, there wasn’t that much hubris inside the Beltway.” — TDM, p. 463

It’s interesting to hear this now, given that HealthCare.gov is estimated to be between 5 and 15 million lines of code (no official figures are available to my knowledge. This is just what’s been disclosed on the internet by an apparent insider), and it is managed by the federal government, seemingly with no plans to privatize it. For comparison, that’s between the size of Windows NT 3.1 and the flight control software of the Boeing 787.

Wolff also structured each of the regional service providers as non-profits. Wolff told them up front that they would eventually have to find other customers besides serving the research community. “We don’t have enough money to support the regionals forever,” he said. Eventually, the non-profits found commercial customers—before the internet was privatized. Wolff said,

“We tried to implement an NSF Acceptable Use Policy to ensure that the regionals kept their books straight and to make sure that the taxpayers weren’t directly subsidizing commercial activities. But out of necessity, we forced the regionals to become general-purpose network providers.” — TDM, p. 463

This structure became key to how the internet we have today unfolded. Waldrop notes that around 1990, a number of independent service providers came into existence to provide internet access to anyone who wanted it, without restrictions.

In 1991, a dispute erupted between one of the regional networks and the NSF over who should run the top-level network (the NSF had awarded the contract to a company called ANS, a consortium of private companies). After a series of investigations found no wrongdoing on the NSF’s part, Congress passed a bill in 1992 that allowed for-profit Internet Service Providers to access the top-level network. Over the next few years, the subsidy the NSF provided to the regional networks tapered off, until on April 30, 1995, NSFnet ceased to exist, and the internet was self-sustaining. The NSF continued operating a much smaller, high-speed network, connecting its supercomputing centers at 155 Mbps.

Where are they now?

Bob Metcalfe left PARC in 1975. He returned to Xerox in 1978, working as a consultant to DEC and Intel, to try to hammer out an open standard agreement with Xerox for Ethernet. He started 3Com in 1979 to sell Ethernet technology. Ethernet became an open standard in 1982. He left 3Com in 1990. He then became a publisher, and wrote a column for InfoWorld Magazine for ten years. He became a venture capitalist in 2001, and is now a partner with Polaris Venture Partners.

Bob Kahn founded the Corporation for National Research Initiatives, a non-profit organization, in 1986, after leaving DARPA. Its mission is “to provide leadership and funding for research and development of the National Information Infrastructure.” He has served on the State Department’s Advisory Committee on International Communications and Information Policy, the President’s Information Technology Advisory Committee, the Board of Regents of the National Library of Medicine, and the President’s Advisory Council on the National Information Infrastructure. Kahn is currently working on a digital object architecture for the National Information Infrastructure, as a way of connecting different information systems. He is a co-inventor of Knowbot programs, mobile software agents in the network environment. He is a member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), a Fellow of the ACM, and a Fellow of the Computer History Museum. (Source: The Corporation for National Research Initiatives)

Vint Cerf joined DARPA in 1976. He left to work at MCI in 1982, where he stayed until 1986. He created MCI Mail, the first commercial e-mail service that was connected to the internet. He joined Bob Kahn at the Corporation for National Research Initiatives, as Vice President. In 1992, he and Kahn, among others, founded the Internet Society (ISOC). He re-joined MCI in 1994 as Senior Vice President of Technology Strategy. The Wikipedia page on him is unclear on this detail, saying that at some point, “he served as MCI’s senior vice president of Architecture and Technology, leading a team of architects and engineers to design advanced networking frameworks, including Internet-based solutions for delivering a combination of data, information, voice and video services for business and consumer use.” There are a lot of accomplishments listed on his Wikipedia page, more than I want to list here, but I’ll highlight a few:

Cerf joined the board of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1999, and served until the end of 2007. He has been a Vice President at Google since 2005. He is working on the Interplanetary Internet, together with NASA’s Jet Propulsion Laboratory. It will be a new standard to communicate from planet to planet, using radio/laser communications that are tolerant of signal degradation. He currently serves on the board of advisors of Scientists and Engineers for America, an organization focused on promoting sound science in American government. He became president of the Association for Computing Machinery (ACM) in 2012 (for 2 years). In 2013, he joined the Council on Cybersecurity’s Board of Advisors.

Steve Wolff left the NSF in 1994, and joined Cisco Systems as a business development manager for their Academic Research and Technology Initiative. He is a life member of the Institute of Electrical and Electronic Engineers (IEEE), and is a member of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), and the Internet Society (ISOC) (sources: Wikipedia, and Internet2)

— Mark Miller, https://tekkie.wordpress.com


I haven’t written about this in a while, but one of my first exposures to computing was from using Atari 8-bit computers, beginning in 1981. I’ve written about this in a few posts: Reminiscing, Part 1, Reminiscing, Part 3, and My journey, Part 1.

I found out about Antic–The Atari 8-bit podcast through a vintage computer group on Google+. They have some great interviews on there. At first blush, the format of these podcasts sounds amateurish. The guys running the show sound like your stereotypical nerds, and sometimes the audio sounds pretty bad, because the guest responses are transmitted through Skype, or perhaps cell phones. But if you’re interested in computer history like I am, you will get information on the history of Atari, and on the cottage industry that was early personal computing and video gaming, that you’re not likely to find anywhere else. The people doing these podcasts are also active contributors to archive.org, preserving these interviews for posterity.

What comes through is that the business was informal. It was also possible for one or two people to write a game, an application, or a complete operating system. Those days are gone.

Most of the interviews I’ve selected cover the research and development Atari did from 1977 until 1983, when Warner Communications (now known as Time-Warner) owned the company, because it’s the part of the history that I think is the most fascinating. So much of what’s described here is technology that was only hinted at in old archived articles I used to read. It sounded so much like opportunity that was lost, and indeed it was, because even though it was ready to be produced, the higher-ups at Atari didn’t want to put it into production. The details make that much more apparent. Several of these interviews cover the innovative work of the researchers who worked under Alan Kay at Atari. Kay left Xerox PARC, and worked at Atari as its chief scientist from 1981 to 1984.

In 1983, Atari and the rest of the video game industry went through a market crash. Atari’s consumer electronics division was sold to Jack Tramiel in 1984. Atari’s arcade game business was spun off into its own company. Both used the name “Atari” for many years afterward. For those who remember this time, or know their history, Jack Tramiel founded Commodore Business Machines in the 1950s as a typewriter parts company. Commodore got into producing computers in the late 1970s. Tramiel had a falling out with Commodore’s executives over the future direction of the company in 1984, and so he left, taking many of Commodore’s engineers with him. He bought Atari the same year. As I’ve read the history of this period, it sounds as if the two companies did a “switch-a-roo,” where Amiga’s engineers (who were former Atari employees) came to work at Commodore, and many of Commodore’s engineers came to work at Atari. 1984 onward is known as the “Tramiel era” at Atari. I only have one interview from that era, the one with Bob Brodie. That time period hasn’t been that interesting to talk about.

I’ll keep adding more podcasts to this post as I find more interesting stuff. I cover in text a lot of what was interesting to me in these interviews. I’ve provided links to each of the podcasts as sources for my comments.

Show #4 – An interview with Chris Crawford. Crawford is a legend in the gaming community, along with some others, such as Sid Meier (who worked at Microprose). Crawford is known for his strategy/simulation games. In this interview, he talked about what motivated him to get into computer games, how he came to work at Atari, his work at Atari’s research lab, working with Alan Kay, and the games he developed through his career. One of the games he mentioned that seemed a bit fascinating to me is one that Atari didn’t release. It was called “Gossip.” He said that Alan encouraged him to think about what games would be like 30 years in the future. This was one product of Crawford thinking about that. I found a video of “Gossip.”

It’s not just a game where you fool around. There is actual scoring in it. The point of the game is to become the most popular, or “liked” character, and this is scored by what other characters in the game think of you, and of other characters. This is determined by some mathematical formula that Crawford came up with. AI is employed in the game, because the other characters are played by the computer, and each is competing against you to become the most popular. You influence the other players by gossiping about them, to them. The other characters try to influence you by sharing what they think other characters are thinking about you or someone else. Since Atari computers had limited resources, there is no actual talking, though they added the effect of you hearing something that resembles a human voice as each computer character gossips. You gossip via a “facial expression vocabulary.” It’s not easy to follow what’s going on in the game. You call someone. You tell them, “This is how I feel about so-and-so.” They may tell you, “This is what X feels about Y,” using brackets and a pulsing arrow to denote, “who feels about whom.” The character with brackets around them shows a facial expression to convey what the character on the other end of the phone is telling you.

Interview #7 with Bill Wilkinson, founder of Optimized Systems Software (OSS), and author of the “Insight: Atari” column at Compute! Magazine. He talked about how he became involved with the projects to create Atari Basic and Atari DOS, working as a contractor with Atari through a company called Shepardson Microsystems. He also talked about the business of running OSS. There were some surprises. One was that Wilkinson said Atari Basic compiled programs down to bytecode, line by line, and ran them through a virtual machine, all within an 8K cartridge. I had gotten a hint of this in 2015 by learning how to explore the memory of the Atari (running as an emulator on my Mac) while running Basic, and seeing the bytecodes it generated. After I heard what he said, it made sense. This runs counter to what I’d heard for years. In the 1980s, Atari Basic had always been described as an interpreter, and the descriptions of what it did were somewhat off-base, saying that it “translated Basic into machine code line by line, and executed it.” It could be that “bytecode” and “virtual machine” were not part of most programmers’ vocabularies back then, even though there had been such systems around for years. So the only way they could describe it was as an “interpreter.”
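
Here is a much-simplified sketch, in Python, of the “compile each line to bytecode, then run it on a virtual machine” idea Wilkinson describes. The token values and the mini-VM are invented for illustration and have nothing to do with Atari Basic’s real token set.

```python
# Invented opcodes -- Atari Basic's actual token values are different.
PUSH_NUM, ADD, PRINT = 0x01, 0x02, 0x03

def tokenize(line):
    """Crude 'compile the line to bytecode as it is entered' step.
    Handles only lines of the form: PRINT a+b"""
    _, expr = line.split(maxsplit=1)             # drop the PRINT keyword
    a, b = (int(x) for x in expr.split("+"))
    return [PUSH_NUM, a, PUSH_NUM, b, ADD, PRINT]

def run(bytecode):
    """Tiny stack-based virtual machine that executes the stored tokens."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH_NUM:
            stack.append(bytecode[pc + 1]); pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
        elif op == PRINT:
            print(stack.pop()); pc += 1

run(tokenize("PRINT 2+3"))   # -> 5
```

The distinction Wilkinson was drawing is that the cartridge stored and re-executed compact tokens like these, rather than re-parsing the Basic text every time, which is why “interpreter” undersells what it did.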

Another surprise was that Wilkinson did not paint a rosy picture of running OSS. He said of himself that in 20/20 hindsight, he shouldn’t have been in business in the first place. He was not a businessman at heart. All I take from that is that he should have had someone else running the business, not that OSS shouldn’t have existed. It was an important part of the developer community around Atari computers at the time. Its Mac/65 assembler, in particular, was revered as a “must have” for any serious Atari developer. Its Action! language compiler (an Atari 8-bit equivalent of the C language) was also highly praised, though it came out late in the life of the Atari 8-bit computer line. Incidentally, Antic has an interview with Action!‘s creator, Clinton Parker, and with Stephen Lawrow, who created Mac/65.

Wilkinson died on November 10, 2015.

Interview #11 with David Small, columnist for Creative Computing, and Current Notes. He was the creator of the Magic Sac, Spectre 128, and Spectre GCR Macintosh emulators for the Atari ST series computers. It was great hearing this interview, just to reminisce. I had read Small’s columns in Current Notes years ago, and usually enjoyed them. I had a chance to see him speak in person at a Front Range Atari User Group meeting while I was attending Colorado State University, around 1993. I had no idea he had lived in Colorado for years, and had operated a business here! He had a long and storied history with Atari, and with the microcomputer industry.

Interview #30 with Jerry Jessop. He worked at Atari from 1977 to 1984 (the description says he worked there until 1985, but in the interview he says he left soon after it was announced the Tramiels bought the company, which was in ’84). He worked on Atari’s 8-bit computers. There were some inside baseball stories in this interview about the development of the Atari 1400XL and 1450XLD, which were never sold. The most fascinating part is where he described working on a prototype called “Mickey” (also known as the 1850XLD) which might have become Atari’s version of the Amiga computer had things worked out differently. It was supposed to be a Motorola 68000 computer that used a chipset that Amiga Inc. was supposed to develop for Atari, in exchange for money Atari had loaned to Amiga for product development. There is also history which says that Atari structured the loan in a predatory manner, thinking that Amiga would never be able to pay it off, in which case Atari would’ve been able to take over Amiga for free. Commodore intervened, paid off the loan, got the technology, and the rest is history. This clarified some history for me, because I had this vague memory for years that Atari planned to make use of Amiga’s technology as part of a deal they had made with them. The reason the Commodore purchase occurred is Amiga ran through its money, and needed more. It appears the deal Atari had with Amiga fell through when the video game crash happened in 1983. Atari was pretty well out of money then, too.

Interview #37 with David Fox, creator of Rescue on Fractalus, and Ballblazer at Lucasfilm Games. An interesting factoid he talked about is that they wrote these games on a DEC Vax, using a cross-assembler that was written in Lisp. I read elsewhere that the assembler had the programmer write code using a Lisp-like syntax. They’d assemble the game, and then download the object code through a serial port on the Atari for testing. Fox also said that when they were developing Ballblazer, they sometimes used an Evans & Sutherland machine with vector graphics to simulate it.

Interview #40 with Doug Carlston, co-founder and CEO of Brøderbund Software. Brøderbund produced a series of hit titles, both applications and games, for microcomputers. To people like me who were focused on the industry at the time, their software titles were very well known. Carlston talked about the history of the company, and what led to its rise and fall. He also had something very interesting to say about software piracy. He was the first president of the Software Publishers Association, and he felt that it took too harsh a stand on piracy, because from his analysis of sales in the software industry, piracy didn’t have the negative effect that most people thought it did. He saw piracy as a form of marketing. He thought it increased the exposure of software to people who would otherwise not have known about it, or thought to buy it. He compared the growth of software sales in this era to the growth during the period when software was first released on CDs, a time when recording CDs was too expensive for most consumers and the volume of data on a CD would have taken up a very large chunk of a user’s hard drive space, if not exceeded it (so piracy was nearly impossible), and he said he didn’t notice that big of a difference in the sales growth curves. Nevertheless, some developers at Brøderbund were determined to foil pirates, and a story he told about one developer’s efforts was pretty funny. 🙂 Though, he thought it went “too far,” so they promptly took out the developer’s piracy-thwarting code.

Interview #49 with Curt Vendel and Marty Goldberg. They spoke in detail about the history of how Atari’s video game and computer products developed while Atari was owned by Warner Communications. A surprising factoid in this interview is that they said modern USB technology had its beginning with the development of the Atari 8-bit’s Serial Input/Output (SIO) port. This port was used to daisy-chain peripherals, such as disk drives, serial/parallel ports, and printers, to the Atari computer. The point was that it was an intelligent port architecture that was able to identify what was hooked up to the computer, without the user having to configure hardware or software. A driver could be downloaded from each device at bootup (though not all SIO devices had built-in drivers). Each device was capable of being ready to go the moment the user plugged it into either the computer or one of the other SIO-compatible peripherals, and turned on the computer.
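
A loose sketch of that plug-and-play idea: at power-up the computer polls the daisy chain, and whatever answers gets configured automatically, with a driver pulled from the device when it has one. The class, commands, and device IDs below are invented for illustration, not Atari’s actual SIO command set.

```python
class SioDevice:
    """Stand-in for a daisy-chained peripheral (disk drive, printer, ...)."""
    def __init__(self, device_id, kind, has_driver=False):
        self.device_id, self.kind, self.has_driver = device_id, kind, has_driver

    def status(self):
        # Real SIO devices answered a status request; these fields are invented.
        return {"id": self.device_id, "kind": self.kind, "driver": self.has_driver}

def enumerate_bus(devices):
    """Poll the chain at boot; no user configuration needed -- the property
    the interview compares to modern USB."""
    found = []
    for dev in devices:                 # each device passes the poll along the chain
        info = dev.status()
        if info["driver"]:
            print(f"loading driver from device {info['id']:#x} ({info['kind']})")
        found.append(info)
    return found

enumerate_bus([SioDevice(0x31, "disk drive", has_driver=True),
               SioDevice(0x40, "printer")])
```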

Interview #58 with Jess Jessop, who worked in Atari’s research lab. In one project, called E.R.I.C., he worked on getting the Atari 800 to interact with a laserdisc player, so the player could run video segments that Atari had produced to market the 800, and allow people to interact with the video through the Atari computer. The laserdisc player would run vignettes on different ways people could use the computer, and then have the viewer press a key on the keyboard, which directed the player to go to a specific segment. He had a really funny story about developing that for the Consumer Electronics Show. I won’t spoil it. You’ll have to listen to the interview. I thought it was hilarious! 😀 If you listen to some of the other podcasts I have here, Kevin Savetz, who conducted most of these interviews, tried to confirm this story from other sources. All of them seem to have denied it, or questioned it (questioned whether it was possible). So, this story, I think, is apocryphal.

Here’s a bit of the laserdisc footage.

Jessop also mentioned Grass Valley Lab (Cyan Engineering), a separate research facility that worked on a variety of technology projects for Atari, including “pointer” technology, using things like tablet/stylus interfaces, and advanced telephony. Kevin Savetz interviewed a few other people from Grass Valley Lab in other podcasts I have here.

Jessop said the best part of his job was Alan Kay’s brown bag lunch sessions, where all Alan would do was muse about the future, what products would be relevant 30 years down the road, and he’d encourage the researchers who worked with him to do the same.

The really interesting part was listening to Jessop talk about how the research lab had worked on a “laptop” project in the early 1980s with Alan Kay. He said they got a prototype going that had four processors in it. Two of them were a Motorola 68000 and an Intel processor. He said the 68000 was the “back-end processor” that ran Unix, and the Intel was a “front-end processor” for running MS-DOS, which he said was for “interacting with the outside world” (i.e., running the software everyone else was using). He said the Unix “back end” ran a graphical user interface, “because Alan Kay,” and the “front-end processor” ran DOS in the GUI. He said the prototype wasn’t going to end up being a “laptop,” since it got pretty heavy, ran hot with all the components in it, and had loud cooling fans. It also had a rather large footprint. The prototype was never put into production, according to Jessop, because of the market crash in 1983.

Jessop expressed how fun it was to work at Atari back when Warner Communications was running it. That’s something I’ve heard in a few of these interviews with engineers who worked there. They said it spoiled people, because most other places have not allowed the free-wheeling work style that Atari had. Former Atari engineers have since tried to recreate that, or find someplace that has the same work environment, with little success.

Interview #65 with Steve Mayer, who helped create Grass Valley Labs, and helped design the hardware for the Atari 2600 video game system and the Atari 400 and 800 personal computers. The most interesting part of the interview for me was where he described Atari talking to Microsoft about writing the OS for the 400 and 800. They were released in 1979, and Mayer was talking about a time before that, when both computers were still in alpha development. He said Microsoft had a “light version of DOS,” which I can only take as a reference to some other operating system they might have had in their possession, since PC-DOS didn’t even exist yet. I remember Bill Gates saying in the documentary “Triumph of the Nerds” that Microsoft had a version of CP/M, licensed from Digital Research, on some piece of hardware (the term “softcard” comes to mind) by the time IBM came to them for an OS for their new PC, in about 1980. Maybe this is what Mayer was talking about.

Another amazing factoid he talked about is that Atari considered having the computers boot into a graphical user interface, though he stressed that they didn’t envision using a mouse. What they looked at was having users access features on the screen using a joystick. What they decided on instead was to have the computer boot either into a “dumb editor” called “Memo Pad,” where the user could type in some text but couldn’t do anything with it, or, if a Basic cartridge was inserted, into a command line interface (the Basic language).

Mayer said that toward the end of his time at Atari, they were working on a 32-bit prototype. My guess is this is either the machine that Jess Jessop talked about above, or the one Tim McGuinness talked about in a podcast below. From documentation I’ve looked up, it seems there were two or three different 32-bit prototypes being developed at the time.

Interview #73 with Ron Milner, an engineer at Cyan Engineering from 1973 to 1984. Milner described Cyan as a “think tank” that was funded by Atari. You can see Milner in the “Splendor in the Grass” video above. He talked about co-developing the Atari 2600 VCS for the first 15 minutes of the podcast. For the rest, he talked about experimental technologies. He mentioned working on printer technology for the personal computer line, since he said printers didn’t exist for microcomputers when the Atari 400/800 were in development. He worked on various pointer control devices under Alan Kay, and mentioned working on a touch-screen interface. He described a “micro tablet” design, which they were thinking of building into the keyboard housing (sounding rather like the trackpads we have on laptops). Perhaps this is connected to what Tim McGuinness talked about (below). He also described a walnut-sized optical mouse he worked on (perhaps also connected with what Tim McGuinness discussed). These latter two technologies were intended for use on the 8-bit computers, before Apple introduced its GUI/mouse interface in 1983 (with the Lisa), though Atari never produced them.

The most striking thing he talked about was a project he worked on shortly after leaving Atari, I guess under a company he founded, called Applied Design Laboratories (ADL). He called it a “space pen.” His setup was a frame that would fit around a TV set, with what he called “ultrasonic transducers” on it that could detect a pen’s position in 3D space. He soon found a customer that had invested in virtual reality technology, and that used his invention for 3D sculpting. This reminded me so much of the VR/3D sculpting technology we’ve seen demonstrated in recent years.

Interview #75 with Steve Davis, Director of Advanced Research for Atari, who worked at Cyan Engineering under Alan Kay. He worked on controlling a laserdisc player from an Atari 800, which was talked about above (E.R.I.C.); a LAN controller for the Atari 800 (called ALAN-K, which officially stood for “Atari Local Area Network, Model-K,” but was obviously a play on Alan Kay’s name; Tim McGuinness talked about this project in a podcast below); and “wireless video game cartridges.” Davis said these were all developed around 1980, but at least some of this had to have been started in 1981 or later, because Kay didn’t come to Atari until then. Davis mentioned working on some artificial intelligence projects, but as I recall, he didn’t describe any of them.

He said the LAN technology was completed as a prototype. They were able to share files, and exchange messages with it. It was test-deployed at a resort in Mexico, but it was never put into production.

The “wireless video game cartridge” concept was also a completed prototype, developed only for the Atari 2600. From Davis’s description, you’d put a special cartridge in the 2600, choose a game from a menu, pay for it, and the game would be downloaded to the cartridge wirelessly; you could then play it on your 2600. It sounded almost as if you rented the game remotely, downloading it into RAM in the cartridge, because he said, “There was no such thing as Flash memory,” suggesting that it was not stored permanently on the cartridge. The internet didn’t exist at the time, and even if it had, it wouldn’t have allowed this sort of activity, since commercial activity was banned on the early internet. He said Atari did not use cable technology to transmit the games, but cooperated with a few local radio stations, which acted as servers and transmitted the games over FM subcarrier. He didn’t describe how the system handled payment for the games. In any case, it was not something that Atari put on the broad market.

Kevin Savetz, who’s done most of these interviews, has described APX, the Atari Program eXchange, as perhaps the first “app store,” in the sense that it was a catalog of programs sponsored by Atari, exclusively for Atari computer users, with software written by Atari employees and by non-Atari (independent) authors, who were paid for their contributions based on sales. Looking back on it now, it seems that Atari pioneered some of the consumer technologies and consumer-market practices that are now standard on modern hardware platforms, using distribution technology that was cutting edge at the time (though it’s ancient by today’s standards).

Davis said that Warner Communications treated Atari like it treated a major movie production. This was interesting, because it helps explain a lot about why Atari made the decisions it did. With movies, they’d spend millions, sometimes tens of millions of dollars, make whatever money they made, and move on to the next one. He said Warner made a billion dollars on Atari, and once the profit was made, they sold it and moved on. Warner was not a technology company trying to advance the state of the art. It was an entertainment company, and that’s how it approached everything. Atari was able to spend money on R&D because it was making money hand over fist. A few of its R&D projects were turned into products by Atari, but most were not. A couple I’ve heard about were eventually turned into products at other companies, after Atari was sold to Jack Tramiel. From how others have described the company, nobody was really minding the store. During its time with Warner, it spent money on all sorts of things that never earned a return on investment. One reason for that was that it had well-formed ideas that could have been turned into products, but they were in markets that Atari and Warner were not interested in pursuing, because they didn’t apply to media or entertainment. In several stories I’ve heard through these podcasts, people inside the company came up with advanced technology for business use cases, cutting edge for their time, but the executives at Atari didn’t want to pursue them.

Interview #76 with Tim McGuinness, who worked as a hardware engineer at Atari from 1980 to 1982. He co-developed the Atari 1200XL and the 1400XL line, along with its peripherals. He was also the designer of the first version of the Amiga computer (which started development at Atari). This was the next best interview to the one with Ted Kahn (below).

McGuinness described an incident where he went to show Alan Kay the Atari 800, to discuss putting a graphical user interface on it, and when he brought it back, his boss freaked out, and threatened him with arrest, just for transporting it from one Atari office to another, to show it to someone else inside the company! McGuinness said there were fiefdoms inside the company, and this was one of them.

He talked about the relationship between Atari and Microsoft. Atari had contracted with a third party company, Shepardson Microsystems, to create Atari Basic. He said they needed a better version of Basic. Atari licensed Bill Gates’s and Paul Allen’s version of Basic for their computer line for $5 million ($14.7 million in today’s money). According to McGuinness, this was a godsend to Gates’s and Allen’s company, because at the time it was going through a rough patch, and was teetering on going belly-up. This would’ve probably been in 1980. He said Gates and Allen were so hard up for money that Atari had to fly them out to New York just so they could cash their check! McGuinness boldly declared, “Atari begat Microsoft.” He claims Gates’s and Allen’s company wasn’t called Microsoft at the time. It was called Personal Software. “Microsoft” was the name of one of their products (I guess their Basic language?). Once they licensed Microsoft Basic to Atari, Gates and Allen changed the name of their company to Microsoft.

McGuinness talked about some products he worked on at Atari that were not put into production. There was “Puffer,” a modification to the Battlezone arcade game where they hooked up an exercise bike to it. You’d manipulate the handlebars to turn the tank, and pedal forward or backward, depending on whether you wanted to advance or back up, as bad guys came after you.

This isn’t “Puffer,” just Battlezone with its standard controls, but you can get an idea of what he was talking about.

He worked on developing the ALAN-K network interface (described above), which was a token ring network. He worked on a tablet device that could be used to trace and digitize figures on paper; the software developed for it also allowed editing the digitized drawing. He worked on what he called an “optical mouse” for the Atari 8-bit computers, but what he described was a ball mouse where the ball had dots on it, so its movement could be traced with an optical sensor inside the mouse. Another project was a high-volume rewritable laser optical storage device, which he said was ready for production in 1981. It wasn’t a disk, but a card format. What he described was a machine that would draw in an optical card, and as the card moved, a laser would read, write, and rewrite information on it. The media could be rewritten 10 times before it started failing. Blue Cross-Blue Shield considered it a possibility for storing basic medical data in 1983. Another project he worked on was a 32-bit, 4 MHz dual-processor computer, using General Instrument processors, that he thought could be Atari’s next-generation machine after the 8-bit series. He said the Amiga computer was the result of Ray Kassar, the CEO of Atari, deep-sixing this 32-bit project and opting for a 16-bit design, which was code-named the 1800 at Atari. McGuinness designed a transparent touchpad into it, similar to the touchpads used on modern laptops, with an LCD display beneath it, so you could use it as a reprogrammable keypad. It also had a stylus that could be used as a pointing/drawing device on the touchpad.

He got into talking about something that I think is truly incredible. He said that Microsoft used some of the code from the version of Atari DOS that came out for the Atari 1200XL in its operating systems, including Windows! He claimed that if you have a 5.25″ floppy drive hooked up to a Windows machine, you can use it to read Atari DOS diskettes without emulation. He explained that the reason is that Atari shared its DOS code with Microsoft, since Atari wanted them to do some “tweaking” of it for the 1400XL series they were thinking of producing. Microsoft completed the modifications, but Atari never used them, since it decided not to produce the computer series. Microsoft, however, kept the Atari DOS code, and used its formatting scheme in their operating systems. So, Windows understands Atari 8-bit diskettes! I am amazed by this claim, and I am half tempted to give it a try!

He helped explain, in a succinct, understandable way, why Atari went bust under the management of Warner Communications. He said that Ray Kassar, the CEO who was brought in to lead Atari when Warner bought the company, didn’t understand the tech business. However, Atari desperately needed one skill he had, which was the ability to manage a large business. Atari had 17,000 employees, and owned 120 buildings in Silicon Valley alone. Kassar was originally the CEO of a sock manufacturer. He didn’t understand that in order for Atari to continue to exist, it needed to innovate with new products. From what McGuinness described, Kassar saw the video game/computer business as a commodity business: you come up with a product, you optimize the process of making it, and you keep selling it indefinitely. There’s no point in innovating, because it’s a commodity.

McGuinness recounted the ripple effects of Atari’s collapse. He said the video game industry collapsed in a matter of six months in 1983, with tens of millions of dollars’ worth of inventory in the supply chain to retailers when it happened. Ninety percent of video game developers lost their jobs. He said the collapse put Toys ‘R Us into bankruptcy, and nearly destroyed the Sears retail chain.

McGuinness left Atari in frustration, and founded Romox in 1982. They invented a reprogrammable cartridge, and software kiosks that would sell and download software onto these cartridges. He said they also invented the idea of the e-commerce “shopping cart” and online advertising. The kiosks not only sold software, but also dynamically downloaded ads that marketers would pay Romox to place on them. The retailers that rented the kiosks could also buy ads on them for their own products and services, and the turnaround time was very quick, within several hours, since it was all digital. He said if 7-Eleven, for example, had a one-day special on something they sold, they could pay for the ad that morning, and it would be on their kiosks that afternoon. All of what he described was in production, and was being used by customers. Once the video game market went bust, he transferred Romox’s technology to a company in the UK. He and his associates founded Bloc Development Company, a software publishing company, in 1985. They created “FormTool,” a software product that’s still published today, though from what he described, it’s no longer published by his company. He said, “It’s been through several hands.” FormTool helps the user design paper forms. Bloc also produced several other top-selling software titles. The company ultimately became TigerDirect, which still exists today.

Interview #77 with Tandy Trower, who worked with Microsoft to develop a licensed version of Microsoft Basic for the Atari computers. Based on the interview with McGuinness (above), this probably would have been in 1980. Trower said that Atari’s executives were so impressed with Microsoft that they took a trip up to Seattle to talk to Bill Gates about purchasing the company! Keep in mind that at the time, this was entirely plausible. Atari was one of the fastest growing companies in the country. It had turned into a billion-dollar company due to the explosion of the video game industry, which it led. It had more money than it knew what to do with. Microsoft was a small company whose only major products were programming languages, particularly its Basic. Gates flatly refused the offer, but it’s amazing to contemplate how history would have been different if he had accepted!

Trower told another story of how he left to go work at Microsoft, where he worked on the Atari version of Microsoft Basic from the other end. He said that before he left, an executive at Atari called him into his office and tried to talk him out of it, saying that Trower was risking his career working for “that little company in Seattle.” The executive said that Atari was backed by Warner Communications, a mega-corporation, and that he would have a more promising future if he stayed with the company. As we now know, Trower was onto something, and the Atari executive was full of it. I’m sure at the time it would’ve looked to most people like the executive was being sensible. Listening to stories like this, with the hindsight we now have, is really interesting, because it just goes to show that when you’re trying to negotiate your future, things in the present are not what they seem.

Interview #132 with Jerry Jewel, a co-founder of Sirius Software. Their software was not usually my cup of tea, though I liked “Repton,” which was a bit like Defender. This interview was really interesting to me, though, as it provided a window into what the computer industry was like at the time. Sirius had a distribution relationship with Apple Computer, and what he said about them was kind of shocking, given the history of Apple that I know. He also tells a shocking story about Jack Tramiel, who at the time was still the CEO of Commodore.

The ending to Sirius was tragic. I don’t know what it was, but the company seemed to attract not only some talented people, but also some shady characters. It entered into business relationships with a series of the latter, which killed it.

Interview #201 with Atari User Group Coordinator, Bob Brodie. He was hired by the company in about 1989. This was during the Tramiel era at Atari. I didn’t find the first half of the interview that interesting, but he got my attention in the second half when he talked about some details on how the Atari ST was created, and why Atari got out of the computer business in late 1993. I was really struck by this information, because I’ve never heard anyone talk about this before.

Soon after Jack Tramiel bought Atari in 1984, he and his associates met with Bill Gates. They actually talked to Gates about porting Windows to the Motorola 68000 architecture for the Atari ST! Brodie said that Windows 1.0 on Intel was completely done by that point. Gates said Microsoft was willing to do the port, but it would take a year and a half to complete. That was too long a time horizon for their release of the ST, so the Tramiels passed. They went to Digital Research instead, and used its GEM interface for the GUI, on top of a port of CP/M for the 68000; together these made up what Atari called “TOS” (“The Operating System”). From what I know of the history, Atari was trying to beat Commodore to market, since Commodore was coming out with the Amiga. This decision to go with GEM instead of Windows would turn out to be a fateful one.

Brodie said that the Tramiels ran Atari like a family business. Jack Tramiel’s three sons ran it. He didn’t say when the following happened, but I’m thinking it would’ve been about ’92 or ’93. When Sam Tramiel’s daughter was preparing to go off to college, the university that accepted her said she needed a computer, and it needed to be a Windows machine. Sam got her a Windows laptop, and he was so impressed with what it could do that he got one for himself as well. It was more capable, and its screen presentation looked better, than what Atari had developed with their STacy and ST Book portable models. Antonio Salerno, Atari’s VP of Application Development, had already quit. Before heading out the door, he told them that “they’d lost the home computer war, and that Windows had already won.” After working with his Windows computer for a while, Sam probably realized that Salerno was right. Brodie said, “I think that’s what killed the computer business,” for Atari. “Sam went out shopping for his daughter, and for the first time, really got a good look at what the competition was.” I was flabbergasted to hear this! I had just assumed all those years ago that Atari was keeping its eye on its competition, but just didn’t have the engineering talent to keep up. From what Brodie said, they were asleep at the switch!

Atari had a couple of next-generation “pizza box” computer models (referring to the form factor of their cases, similar to NeXT’s second-generation model) in development, using the Motorola 68040, but the company cancelled those projects. There was no point in trying to develop them, it seems, because it was clear Atari was already behind. What I heard at the time was that Atari’s computer sales were flat as well, and that they were losing customers. After ’93, Atari focused on its portable and console video game business.

Interview #185 with Ted Kahn, who created the Atari Institute for Education Action Research while Warner owned Atari. This was one of Antic’s best interviews. Basically, Kahn’s job with Atari was to make connections with non-profits, such as foundations, museums, and schools, across the country and internationally, to see if they could make good use of Atari computers. He looked for non-profits with missions compatible with Atari’s goals, and donated computers to them, for the purpose of getting Atari computers in front of people, as a way of marketing them to the masses. It wasn’t the same as Apple’s program to get Apple II computers into all the schools, but it was in a similar vein.

What really grabbed my attention is that I couldn’t help but get the sneaking suspicion that his work touched me as a pre-teen. In 1981, two Atari computers were donated to my local library by some foundation whose name I can’t remember. Whether it had any connection to the Atari Institute, I have no idea, but the timing is a bit suspicious, since it was at the same time that Kahn was doing his work for Atari.

He said that one of the institutions he worked with in 1981 was the Children’s Museum in Washington, D.C. My mom played a small part in helping to set up this museum in the late 1970s, while we lived in VA, when I was about 7 or 8 years old. She moved us out to CO in 1979, but I remember we went back to Washington, D.C. for some reason in the early 1980s, and we made a brief return visit to the museum. I saw a classroom there that had children sitting at Atari computers, working away on some graphics (that’s all I remember). I wondered how I could get in there to do what they were doing, since it looked pretty interesting, but you had to sign up for the class ahead of time, and we were only there for maybe a half-hour. This was the same classroom that Kahn worked with the museum to set up. It was really gratifying to hear the other end of that story!

Interview #207 with Tom R. Halfhill, former editor with Compute! Magazine. I had the pleasure of meeting Tom in 2009, when I just happened to be in the neighborhood, at the Computer History Museum in Mountain View, CA, for a summit on computer science education. I was intensely curious to get the inside story of what went on behind the scenes at the magazine, since it was my favorite when I was growing up in the 1980s. We sat down for lunch and hashed over old times. This interview with Kevin Savetz covers some of the same ground that Tom and I discussed, but it also goes into some background that we didn’t. You can read more about Compute! at Reminiscing, Part 3. You may be interested in the comments after that article as well, since Tom and some other Compute! alumni left messages there.

A great story in this interview is how Charles Brannon, one of the editors at the magazine, wrote a word processor in machine language, called SpeedScript, which was ported and published in the magazine for all of their supported platforms. An advertiser in the magazine, Quick Brown Fox, which sold a word processor by the same name for Commodore computers, complained that what the magazine was publishing was in effect a competing product, sold for a much lower price (the price of a single issue), and that it was undercutting their sales. Tom said he listened to their harangue and told them off, saying that SpeedScript was written in a couple of months by an untrained programmer, and if people liked it better than a commercial product developed by professionals, then they should be out of business! Quick Brown Fox stopped advertising in Compute! after that, but I thought Tom handled it correctly!

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, since I was attending the Rebooting Computing Summit at the Computer History Museum, in Mountain View, CA. The people running the exhibit said that it had been commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM and was soon going to reclaim it and put it in his house. They thought that might happen in April of that year. A couple of years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!

Read Full Post »

Various computer historians, including Charles Babbage’s son, built pieces of Babbage’s Analytical Engine, but no one built the whole thing. Doron Swade and colleagues are planning to finally build the complete engine for the London Science Museum! They have a blog for their project, called Plan 28, where you can read about their progress.

Here is John Graham-Cumming in a TEDx presentation from 2012, talking about some of the history of the Analytical Engine project that Babbage and Ada Lovelace engaged in, in the mid-1800s, and the announcement that Graham-Cumming and others are going to build it. Very exciting! I got the opportunity to see the second replica that Swade and his team built of Babbage’s Difference Engine #2 at the Computer History Museum in Mountain View, CA, back in 2009. I’d love to see this thing go!

Related post: Realizing Babbage

Read Full Post »

See Part 1, Part 2

This post has been a long time coming. I really thought I was going to get it done about this time last year, but I got diverted into other research that I hope to convey on this blog in the future. I also found more sources to explore for the portion on Xerox PARC. I’ve been as faithful as I can to the history, but there’s always a possibility I made some mistakes. I welcome corrections from those who know better. 🙂

This is not a complete history of computing in the period I cover. I mean to convey the closing years of ARPA’s “great leaps” in ideas about computing, and then cover the attempt to continue those “great leaps” privately, at Xerox PARC.

My primary source material for this part is the book, “The Dream Machine,” by M. Mitchell Waldrop. I’ll just refer to it as “TDM.”

I’ve been dividing up this series roughly into decades, or “eras.” Part 1 focused on the 1940s and 50s. Part 2 focused on the 1960s. This part is devoted to the 1970s.

In Part 2 I covered the creation of the Advanced Research Projects Agency (ARPA) at the Department of Defense, and the IPTO (Information Processing Techniques Office, a program within ARPA focused on computer research), and the many innovative projects in which it had been engaged during the 1960s.

Decline at the IPTO

There were signs of “trouble in paradise” at the IPTO, beginning around 1967, with the acceleration of the Vietnam War. Charles Herzfeld stepped down as ARPA director that year, and was replaced by Eberhardt Rechtin. There was talk within the Department of Defense of ending ARPA altogether. Defense Secretary Robert McNamara did not like what he saw happening in the program. Money was starting to get tight. People outside of ARPA had lost track of its mission. It was recognized for producing innovative technology, but mostly the kind of technology that fascinated academics. Most of it was not being translated into technologies the military could use. Its work with the academic community created political problems for it within the military ranks, because academia was viewed as being opposed to the war.

Rechtin cut a lot of programs out of ARPA. Some projects were transferred to other funders, or were spun off into their own operations. He wanted to see operational technology come out of the agency. The IPTO had some allies in John Foster at the Department of Defense Research & Engineering (DDR&E), the direct supervisor of ARPA, and Stephen Lukasik, who had had the chance to use the Whirlwind computer (which I covered in Part 1). Lukasik was very excited about the potential of computer technology. He succeeded Rechtin as ARPA director in 1970. The IPTO’s budget remained protected, mainly, it seems, due to Lukasik pitching the Arpanet to his superiors as a critical technology for the military in the nuclear age (I covered the Arpanet in Part 2).

In 1970 a provision called the Mansfield Amendment, created by Democratic Senator Mike Mansfield, came into effect. For one year, it mandated that all monies spent by the Defense Department had to have some stated relevance to the country’s military mission. Wikipedia says that this rule was renewed in 1973, specifically targeting ARPA. Waldrop claims it was really a round-about way of cutting defense spending, and that it didn’t mean anything in terms of the technology that was being developed at ARPA. Nevertheless, it had social reverberations within the agency’s working groups. In light of the controversy over the Vietnam War, it tarnished ARPA’s image in the academic community, because it required all projects funded through the agency to justify their existence in military terms. When non-military research proposals were sent to ARPA, military “relevance statements” would be written up and slapped on them at the end of the approval process, unbeknownst to the researchers, in order to comply with the law. These relevance statements contained descriptions of the potential, or supposedly intended, use of the technology for military applications. This caused embarrassing moments when students would find out about these statements through disclosures obtained through the Freedom of Information Act. They would confront ARPA-funded researchers with them, and this was often the first time the researchers had seen them. They’d be left explaining that while, yes, the military was funding their project, the focus of their research was peaceful. The relevance statements made it look otherwise. The amendment also gave the working groups the impression that Congress was looking over their shoulder at everything they did. The sense that they were protected from interference had been pierced. This dampened enthusiasm all around, and made recruiting new students into ARPA projects more difficult.

ARPA’s name was changed to DARPA in 1972, adding the word “Defense” to its name. It didn’t mean anything as far as the agency was concerned, but given the academic community’s opposition to the Vietnam War, it didn’t help with student recruiting.

In 1972 the consulting firm Bolt Beranek and Newman (BBN), which had built the hardware for the Arpanet, wanted to commercialize its technology–to sell it to other customers besides the Defense Department. IPTO director Larry Roberts announced in 1973 that he was leaving DARPA to work for BBN’s new networking subsidiary, named Telenet. This created a big problem for Lukasik, because Roberts hadn’t groomed a deputy director, as past IPTO directors had done, so that there would be someone who could jump right in and replace him when he left. It was up to Lukasik to find a replacement, only that wasn’t so easy. He had created a bit of a monster within the IPTO. Every year he had pushed its budget higher, while other projects within DARPA were having their budgets cut. He tried to recruit a new director from within the IPTO’s research community, but everyone he thought would be suited for it turned him down. They were enjoying their research too much. Lukasik came upon J.C.R. Licklider, the IPTO’s founding director, as his last resort.

J.C.R. “Lick” Licklider, from ARS 327

Licklider (he liked to be called “Lick”), who I talked about extensively in Part 2, had returned to MIT and ARPA in 1968. He became a tenured professor of electrical engineering. He began his own ARPA project at MIT, called the Dynamic Modeling group, in 1971. He observed that the Multics project, which I covered in Part 2, nearly overwhelmed Project MAC’s software engineers, and almost resulted in Multics being cancelled. The goal of his project was to create software development systems that would make complex software engineering projects more comprehensible. As part of his work he got into computer graphics, looking at how virtual entities can be represented on the screen. He also studied human-computer interaction with a graphics display.

Larry Roberts tried to find someone besides Lick to be his replacement, more as an act of mercy on him, but in the end he couldn’t come up with anyone, either. He contacted Lick and asked him if he’d be his replacement. Lick reluctantly accepted, and returned to his old post as IPTO director in 1974.

Lick became an enthusiastic supporter of Ed Feigenbaum’s artificial intelligence/expert systems research. I talked a bit about Feigenbaum in Part 2. Through Lick’s guidance, Feigenbaum founded Stanford’s Heuristic Programming Project. Feigenbaum’s work would become the basis for the expert systems produced during the 1980s. Lick also supported Feigenbaum’s effort to integrate artificial intelligence (AI) into medical research, granting an Arpanet connection to Stanford. This allowed a community of researchers to collaborate on this field.

The IPTO’s favored place within DARPA did not last. Lukasik didn’t get along with his new superior at DDR&E, Malcolm Currie. They had different philosophies about what DARPA needed to be doing. Waldrop wrote, “Currie wanted solutions out of ARPA now,” and he wanted to replace Lukasik with someone of like mind.

Bob Taylor at the Xerox Palo Alto Research Center had his eye on Lukasik. He needed someone who understood how they did research, and how to talk to business executives. Taylor had been an IPTO director in the late ’60s (I cover his tenure there in Part 2). He brought its style of computer research to Xerox, but he could see that they were not set up to create products out of it. He thought Lukasik would be a perfect fit. Taylor had been recruiting many of DARPA’s brightest minds into PARC, and it was a source of frustration at the agency. Nevertheless, Lukasik was intensely curious to find out just what they were up to. He came out to see what they were building. PARC was accomplishing things that Lick had only dreamed of years earlier. Lukasik was so impressed, he left DARPA to join Xerox in 1975.

George Heilmeier, from Wikipedia

His replacement was George Heilmeier. Waldrop characterized him as hard-nosed, someone who took an applied science approach to research. He had little patience for open-ended, basic research–the kind that had been going on at DARPA since its inception. He was a solutions man. He wanted the working groups to identify goals, where they expected their research to end up at some point in time. His emphasis was not exploration, but problem solving. Though it was jarring to everybody at first, most program directors came to accommodate it. Lick, however, was greatly dismayed that this was the new rule of the day. He thought Heilmeier’s methods went against everything DARPA stood for. Lick liked the idea of finding practical applications for research, but he wanted solutions that were orders of magnitude better than older ones–world changing–not just short-term goals of improving old methods by ten percent. Further, he could see that micromanagement had truly entered the agency. It wasn’t just a perception anymore.

Heilmeier explained in a 1991 interview,

For all the wonderful technology IPTO had sponsored, [Heilmeier insisted], it was the worst mess in the agency. And artificial intelligence was the worst mess in IPTO. “You see, there was this so-called DARPA community, and a large chunk of our money went to this community. But when I looked at the so-called proposals, I thought, Wait a second; there’s nothing here. Well, Lick and I tangled professionally on this issue. He said, ‘You don’t understand. What you do is give good people the money and they go off and do good things and that’s it.’ I said, ‘Lick, I understand that. And these may be good people, but for the life of me I can’t tell you what they’re going to do. And I don’t know whether they are going to reinvent the wheel, because there’s no discussion of the current practice and there’s no discussion of the implications, so I can’t tell whether this is a wise investment for DoD or not.'” (TDM, p. 402)

Lick had a very good track record of picking research projects, using a more subjective, intuitive sense about people, and he didn’t believe you could achieve the same quality of research by trying to use a more objective method of selecting people and projects.

Bob Kahn, who had joined DARPA in 1972, explained the differences between them,

[The] fact was that … both men were right. “There were some things that were more conducive to George’s directed style,” says Kahn. “Packet radio, for example, and maybe the Internet. But speech understanding was not amenable. In fact, almost none of AI fit in. The problem was that George kept looking for a kind of road map to the field of AI. He wanted to know what was going to happen, on schedule, into the future, to make the field a reality. And he thought it was quite reasonable to ask for that road map, because he had no idea how hard it would be to produce. Suppose you were Lewis and Clark exploring the West, where you had no idea what you were going to encounter, and people wanted to know exactly what routes you were going to take, where you would camp, and what you would do out there. Well, this was just not an engineering job, where you could work out the whole plan. So George was looking for something that Lick couldn’t provide.”

Unfortunately, when Lick tried to explain that to his boss, it was a bit like Seymour Papert’s or Alan Kay’s trying to explain exploratory education to a back-to-basics hard-liner. “IPTO really didn’t have a program-management structure,” declares Heilmeier. “They had a financial management structure, and they had a cheering section.” (TDM, p. 403)

Interestingly, I found this article in EETimes that saw this from a very different perspective, saying that what DARPA had been running was a “good-old-boys network” (and it’s difficult to see how it’s not implicating Lick in this), and that Heilmeier snapped them into shape.

Lick tried to see it from Heilmeier’s perspective: researchers should not see applications as a threat to basic research, and they should make common cause with engineers, to bring their work into the world of the people who would use their technology. He thought this could provide an avenue through which researchers could receive feedback that would be valuable for their work, and it would hopefully soften the antagonism that was being directed at open-ended research. Meanwhile, he’d quietly try to keep Heilmeier from destroying what he had worked so hard to achieve.

Heilmeier seemed to agree with this approach. He issued some challenges to the IPTO working groups: technical problems that he thought needed to be solved. He said to them, “Look, if some of you guys would sign up for these challenges I can justify more fundamental work in AI.” Some of them did just as he asked, and got to work on the challenges.

Lick declared in 1975 that the Arpanet, a project which began at DARPA in 1967, was fully operational, and handed control of the network over to the Defense Communications Agency.

Lick had to kill funding for some projects, which was never easy for him. One in particular was Doug Engelbart’s NLS project at Stanford Research Institute. I talked about this project in Part 2, and what ended up happening with it. A lot of Engelbart’s talent had left to pursue the development of personal computing at Xerox PARC. From Lick’s and DARPA’s perspective, he just wasn’t able to innovate the way he had before. From Engelbart’s perspective, Lick had been captured by the hype around AI, and just wasn’t able to see the good he was doing. It was a sad parting of the ways for both of them. (Source: Nerd TV interview with Doug Engelbart)

Heilmeier was pleased with the changes at the IPTO. They had “gotten with the program,” and he thought they were producing good results. For Lick, however, the change was exhausting, and depressing. He left DARPA for good in late 1975, and returned to MIT, where his spirit soared again. This didn’t mean that he left computing behind. He came up with his own projects, and he encouraged students to play with computers, as he loved to do.

Bob Kahn at DARPA asked an important question. He agreed with Heilmeier’s “wire brushing” of the agency, in that some projects had become self-indulgent, but he cautioned against “too much of a good thing” in the other direction.

ARPA’s current obsession with “relevance” had come dangerously close to destroying what made the agency so special. Remember, says Kahn, when it came to basic computer research–the kind of high-risk, high-payoff work that might not mature for a decade or more–ARPA was almost the only game in town. The computer industry itself was oriented much more toward products and services, he says, which meant that “there was actually very little research going on that was as innovative as ARPA’s.” And while there were certainly some shining exceptions to that rule–notably [Xerox] PARC, IBM and [AT&T] Bell Labs–“many of the leading scientists and researchers couldn’t be supported that way. So if universities didn’t do basic research, where would industry get its trained people?”

Yet it was ARPA’s basic research that was getting cut. “The budget for basic R&D was only about one third of what it was before Lick came in,” says Kahn. “Morale in the whole computer-science community was very low.” (TDM, p. 417)

Kahn pursued a research initiative anticipating the future of VLSI (Very-Large-Scale Integration) design and fabrication for microchips, in order to try to revive the basic research culture at the IPTO. He won approval for it in 1977. Through it, methods were developed for creating computer languages for designing VLSI chips. Kahn was also the co-developer, with Vint Cerf at DARPA, of TCP/IP, the protocol suite that would make the internet possible.

Heilmeier left DARPA in 1977, and was replaced by Bob Fossum, who had a management style that was more amenable to basic research. The question was, though, “Okay. This is good, but what about the next director?” It had been common practice for people at DARPA to cycle out of administrative positions every few years. Was it too good to last?

Kahn began the Strategic Computing Initiative in 1983, a $1 billion program (about $2.3 billion in today’s money) to continue his work on advancing computer hardware design, and to advance research in artificial intelligence. Fossum was replaced by Robert Cooper that same year. Cooper took Kahn’s research goals and reimplemented them as an agency-wide program, giving every part of DARPA a piece of it. According to Waldrop, Kahn was disgusted by this, and took it as his cue to leave.

The era of revolutionary research in computing at the agency seems to have come to an end at this point.

Here’s a video talking about what else was going on at DARPA during the period I’ve just covered:

Lick’s vision of the future

He would happily sit for hours, spinning visions of graphical computing, digital libraries, on-line banking and E-commerce, software that would live on the network and move wherever it was needed, a mass migration of government, commerce, entertainment, and daily life into the on-line world–possibilities that were just mind-blowing in the 1970s. (TDM, p. 413)

Lick had long been a futurist, a very reliable one. In a book published in 1979, “The Computer Age: A Twenty-Year View,” he looked into the future, to the year 2000, about what he could see happening–if he thought optimistically–with a nationwide digital network that he called “the Multinet.” The term “Internet” was not in wide use yet, though work on TCP/IP, what Lick called the “Kahn-Cerf internetworking protocol,” had been in progress for several years. The internet wouldn’t come into being for another 4 years.

“Waveguides, optical fibers, rooftop satellite antennae, and coaxial cables, provide abundant bandwidth and inexpensive digital transmission both locally and over long distances. Computer consoles with good graphic display and speech input and output have become almost as common as television sets.”

Great. But what would all those gadgets add up to, Lick wondered, other than a bigger pile of gadgets? Well, he said, if we continued to be optimists and assumed that all this technology was connected so that the bits flowed freely, then it might actually add up to an electronic commons open to all, as “the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.” Indeed, he went on, looking back from the imagined viewpoint of the year 2000, “[the electronic commons] has supplanted the postal system for letters, the dial-tone phone system for conversations and tele-conferences, stand-alone batch processing and time-sharing systems for computation, and most filing cabinets, microfilm repositories, document rooms and libraries for information storage and retrieval.”

The Multinet would permeate society, Lick wrote, thus achieving the old MIT dream of an information utility, as updated for the decentralized network age: “many people work at home, interacting with coworkers and clients through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its cable television and electronic funds transfer functions, and a few receive delivery of small items through adjacent pneumatic tube networks . . . Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants.” Thanks to ironclad guarantees of privacy and security, Lick added, the Multinet would likewise offer on-line banking, on-line stock-market trading, on-line tax payment–the works.

In short, Lick wrote, the Multinet would encompass essentially everything having to do with information. It would function as a network of networks that embraced every method of digital communication imaginable, from packet radio to fiber optics–and then bound them all together through the magic of the Kahn-Cerf internetworking protocol, or something very much like it.

Lick predicted its mode of operation would be “one featuring cooperation, sharing, meetings of minds across space and time in a context of responsive programs and readily available information.” The Multinet would be the worldwide embodiment of equality, community, and freedom.

If, that is, the Multinet ever came to be. (TDM, p. 413)

He tended to be pessimistic that this all would come true. With the world as it was, he thought a more tightly controlled scenario was more likely, one where the Multinet did not get off the ground. He thought big technology companies would not get into networking, as it would invite government regulation. Communications companies like AT&T would see it as a threat to their business. Government, he thought, would not want to share information, and would rather use computer technology to keep proprietary files on people and corporations. He thought the only way his positive vision would come to pass was if hundreds of thousands, or millions, of people came to a consensus that an open Multinet was desirable. He felt this would require leadership from someone who shared that vision.

At this time there were packet switching network options offered by various technology companies, including IBM, Digital Equipment Corp. (DEC), and Xerox, but none of them offered openness. In fact, business customers wanted closed networks. They feared openness, due to possibilities of security leaks and industrial espionage. Things were not looking up for this “marketplace” vision of a future network, even in academia. Michael Dertouzos, who led the Laboratory of Computer Science (LCS) at MIT (formerly Project MAC), was very interested in Lick’s vision, but complained that his fellow academics in the program were not. In fact they were openly hostile to it. It felt too outlandish to them.

Microcomputers had taken off in 1975, with the introduction of the MITS Altair, created by electrical engineer Ed Roberts. (This was also the launching point for a small venture created by Bill Gates and Paul Allen, called “Micro Soft,” with their first product, a version of the Basic programming language for the Altair.) Lick bought an IBM PC in the early 1980s, “but it never had the resources to do what he wanted,” said his son, Tracy.

Yes, Lick knew, these talented little micros had been good enough to reinvent the computer in the public mind, which was no small thing. But so far, at least, they had shown people only the faintest hint of what was possible. Before his vision of a free and open information commons could be a reality, the computer would have to be reinvented several more times yet, becoming not just an instrument for individual empowerment but a communication device, an expressive medium, and, ultimately, a window into on-line cyberspace.

In short, the mass market would have to give the public something much closer to the system that had been created a decade before at Xerox PARC. (TDM, p. 437)

Personal computing

The major sources I used for this part of the story were Alan Kay’s retrospective on his days at Xerox, called “The Early History of Smalltalk” (which I will call “TEHS” hereafter), and “The Alto and Ethernet Software,” by Butler Lampson (which I will call “TAES”).

Parallel to the events I describe at DARPA came a modern notion of personal computing. The vision of a personal computer existed at DARPA, but as I’ll describe, the concept we would recognize today was developed solely in the private sector, using knowledge and research methods that had been developed previously through DARPA funding.

Alan Kay

Alan Kay, from Wikipedia

The idea of a computer that individuals could buy and own had been around since 1961. Wes Clark had that vision with the LINC computer he invented that year at MIT. Clark invited others to become a part of that vision. One of them was Bob Taylor. What got in the way of this idea becoming something that less technical people could use was the large and expensive components that were needed to create sufficiently powerful machines, and the need for developers to develop some sense of just how ordinary people would use computers. Most of the aspects that make up personal computing were invented on larger machines, and were later miniaturized, watered down in sophistication, and incorporated into microcomputers from the mid-1970s into the 1990s and beyond.

Most people in our society who are familiar with personal computers think the idea began with Steve Jobs and Steve Wozniak. The more knowledgeable might chime in and give credit to Ed Roberts at MITS, and Bill Gates. These people had their own ideas about what personal computers would be, and what they would represent to the world, but in my estimation, the idea of personal computing that we would recognize today really began with Alan Kay, a post-graduate student with a background in mathematics and molecular biology, who had enrolled in the IPTO’s computer science program at the University of Utah (which I talk a bit about in Part 2) in the late 1960s. A good deal of credit has to be given to Doug Engelbart as well, who was doing research during the 1960s at the Stanford Research Institute, funded by ARPA/IPTO, NASA, and the Air Force, on how to improve group knowledge processes using computers. He did not pursue personal computing, but he was the one who came up with the idea of a point-and-click interface, combining graphics and text, using a device he and his team had invented called a “mouse.” He also created the first system that enabled linked documents, which foreshadowed the web we’ve known for the last 20 years, and enabled collaborative computing with teleconferencing. Describing his work this way does not do it justice, but it’ll suffice for this discussion.

Doug Engelbart,
from ibiblio.com
Ivan Sutherland,
from Wikipedia
Seymour Papert, from MIT

Flex’s “self-portrait,” from lurvely.com

Kay began developing his ideas about personal computing in 1968. He had started on his first proof-of-concept desktop machine, called Flex, a year earlier. He was aware of Moore’s Law (from Gordon Moore), the prediction that, as time passed, more and more transistors would fit in the same area of silicon. The implication is that more computing functionality could fit in a smaller space, which would allow machines that filled a room at the time to become smaller as time passed. As a graduate student, he imagined miniaturized technology that seemed unfathomable to other computer scientists of his day, for example, a hard drive as small as the crook of your finger. Back then, a hard drive was the size of a floor cabinet, or a medium-sized refrigerator, and was typically used with mainframes that took up the space of a room.
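
To give a rough sense of the arithmetic behind that prediction: Moore’s observation is usually stated as a doubling of transistor counts roughly every couple of years, and that compounding adds up quickly. Here’s a minimal sketch in Python, where the doubling period and the starting count are illustrative assumptions of mine, not figures from Waldrop or Kay:

```python
# Rough illustration of Moore's Law compounding, assuming a doubling every 2 years.
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward under exponential doubling."""
    return start_count * 2 ** (years / doubling_period)

# Hypothetical starting point: a few thousand transistors on a chip.
for years in (2, 10, 20, 30):
    print(f"After {years:2d} years: ~{transistors(2_000, years):,.0f} transistors")
```

Over a 30-year span that’s a factor of tens of thousands, which is why imagining a cabinet-sized hard drive shrinking to something finger-sized was a reasonable extrapolation, even if it sounded unfathomable at the time.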

Kay’s ideas about what personal computing could be were heavily influenced by Doug Engelbart, Ivan Sutherland (from his Sketchpad project), Tom Ellis and Gabriel Groner at RAND Corp. (from a system called GRAIL), and Seymour Papert, Wally Feurzeig, and Cynthia Solomon at BBN, with their work with children and Logo, among many others. (Source: TEHS)

Engelbart thought of computing as a “vehicle” for thinking about and sharing information, and for developing group knowledge processes. Kay got a profound sense of the personal computer’s place in the world from Papert’s work with children using Logo. He realized that personal computers could not just be a vehicle, but a new medium.

People can get confused about this concept. Kay wasn’t thinking that personal computers would allow people to use and manipulate text, images, audio, and video (movies and TV)–what most of us think of as “media”–for the sake of doing so. He wasn’t thinking that they would be a new way to store, transmit, and present old media, and the ideas they expressed most easily. Rather, they would allow people to explore a whole new category of ideas and expression that the other forms of media did not express as easily, if at all. It’s not that the old media couldn’t be part of the new, but they would be represented as models, as part of a knowledge system, and operate under a person’s control at whatever grain would facilitate what they wanted to understand.

Butler Lampson described the concept this way:

Kay was pursuing a different path to Licklider’s man-computer symbiosis: the computer’s ability to simulate or model any system, any possible world, whose behavior can be precisely defined. And he wanted his machine to be small, cheap, and easy for nonprofessionals to use. (TAES, p. 2)

Kay could see from Papert’s work that using computers as an interactive medium could enable children to understand subjects that would otherwise have been difficult, if not impossible to grasp at their age, particularly aspects of mathematics and science.

Xerox PARC

Xerox enters the story in 1970. They had a different set of priorities from Kay’s, and it’s interesting to note that even though this was true, they were willing to accommodate not only his priorities, but those of other researchers they brought in.

Xerox was concerned that as the photocopier market grew, competitors would come into the space, and Xerox didn’t want to put all their “eggs” in it. They thought computers would be a good way for them to diversify their product line. Their primary goal was to…

…develop the “architecture of information” and establish the technical foundation for electronic office systems that could become products in the 1980s. It seemed likely that copiers would no longer be a high-growth business by that time, and that electronics would begin to have a major effect on office sys­tems, the firm’s major business. Xerox was a large and prosperous company with a strong commitment to basic research and a clear need for new technology in this area. (TAES, p. 2)

Xerox wanted to site their research facility near a community of computer research. They looked at a few places around the country, and ultimately decided on Palo Alto, CA, since it was near Berkeley and Stanford universities, which were centers of computer research. They wanted to bring in a research director who was familiar with the field, and would be respected by the research community. The right person for the job was not obvious. When they talked to computer researchers of the time, all paths eventually led back to Bob Taylor, who had been an ARPA/IPTO director in the late 1960s. The thing was, he had no computer science research of his own under his belt. His background was in psychoacoustics, a field of psychology (though Taylor thought of it as applied physics). What made him the right candidate in Xerox's eyes was that everyone who was anyone in the field Xerox wanted to explore knew and respected him. So they invited Taylor in to assist in setting up their research group. Thus was born the Xerox Palo Alto Research Center (PARC). It was totally funded by Xerox. Taylor was not officially given the job as PARC's director, because of his lack of research background, but he was allowed to run the facility the same way that the IPTO had conducted computer research. You could say that PARC during the 1970s was "the ARPA way done privately."

One of the research techniques used at ARPA and Xerox PARC was to anticipate the speed and memory capacities of future computers that would be in wide use. Engineers who were part of the research teams would either find hardware that fit these anticipated specifications, or would build their own, and then see what software they could develop on it that was significantly better in some capacity than the systems that were available in their present. As one can surmise, this was an expensive thing to do. In order to exceed the capacity of the machines that were in wide use, one had to not think about what was most economical, but rather go for the hardware that was only used by a relative few, if anyone was using it at all, because it would be prohibitively expensive for most to acquire. Butler Lampson described this with Xerox PARC’s Alto research system:

The Alto system was affected not only by the ideas its builders had about what kind of system to build, but also by their ideas about how to do computer systems research. In particular, we thought that it is important to predict the evolution of hardware technology, and start working with a new kind of system five to ten years before it becomes feasible as a commercial product.

Our insistence on work­ing with tomorrow’s hardware accounts for many of the differences between the Alto system and the early personal computers that were coming into existence at the same time. (TAES, p. 3)

(To get an idea of what Lampson was talking about in that last sentence, I encourage people to look at another post I wrote a while back, called "Triumph of the Nerds.")

Taylor hired a bunch of people from the failing Berkeley Computer Corp., which was started by Butler Lampson, along with students that had joined him as part of Project Genie. Among the people Taylor brought in were Chuck Thacker, Charles Simonyi, and Peter Deutsch. I talk a little about Project Genie in Part 2. He also hired many people from ARPA’s IPTO projects, including Ed McCreight from Carnegie Mellon University, and some of the best talent from Engelbart’s NLS project at the Stanford Research Institute, such as Bill English. Jerry Elkind hired people from BBN, which had been an ARPA contractor, including Danny Bobrow, Warren Teitelman, and Bert Sutherland (Ivan Sutherland’s older brother). Another major “get” was hiring Allen Newell from CMU as a consultant. A couple of Newell’s students, Stuart Card and Tom Moran, came to PARC and pioneered the field we now know as Human-Computer Interaction.

As Larry Tesler put it, Xerox told the researchers, "Go create the new world. We don't understand it. Here are people who have a lot of ideas, and tremendous talent." Adele Goldberg said of PARC, "People came there specifically to work on 5-year programs that were their dreams." (Source: Robert X. Cringely's documentary, "Triumph of the Nerds") In a recent interview, which I'll refer to at the end of this post, she said that PARC invited researchers in to work on anything they thought could have an impact on the company in 5 years.

Alan Kay came to work at PARC in 1970, started the Learning Research Group (LRG), and got them thinking about what would come to be known as “portable computers,” though Kay called them “KiddieKomps.” They also got working on font technology.

In 1972 Kay committed his thoughts on personal computing to paper in a document called, “A Personal Computer for Children of All Ages.” In it he described a conceptual model he called a “Dynabook,” or “dynamic book.” I’ll quote from a blog post I wrote in 2006, called, “Great moments in modern computer history,” as it summarizes what I mean to get across about this:

Kay envisioned the Dynabook as a portable computer, 9″ x 12″ x 3/4″, about the size of a modern laptop, with its own battery, and would weigh less than 4 pounds. In fact he said, “The size should be no larger than a notebook.” He envisioned that it would use removable media for file storage (about 1 MB in size, he said), that it might have a keyboard, and that it would record and play audio files, in addition to displaying text. He said that if no physical keyboard came with the unit, a software keyboard could be brought up, and the screen could be made touch-sensitive so that the user could just type on the screen. … Oh, and he made a wild guess that it would cost no more than $500 to the consumer.

He said, “The owner will be able to maintain and edit his own files of text and programs, when and where he chooses.”

He envisioned that the Dynabook would be able to “dock” with a larger computer system at work. The user could download data, and recharge its battery while hooked up. He figured the transfer rate would be 300 Kbps.

Mock up of the Dynabook,
from history-computer.com

He imagined the Dynabook being connected to an “information utility,” like what was called at the time “the ARPA network,” which later came to be called the Internet. He predicted this would open up online access to schools and libraries of information, “stores” (a.k.a. e-commerce sites), and would bring “billboards” (a.k.a. web ads) to the user. I LOVE this quote: “One can imagine one of the first programs an owner will write is a filter to eliminate advertising!”

Another forward-looking concept he imagined is that it might have a flat-panel plasma display. He wasn’t sure if this would work, since it would draw a lot of power, but he thought it was worth trying. … He thought an LCD flat-panel screen was another good option to consider.

Kay thought it essential that the machine make it possible to use different fonts. He and fellow researchers had already done some experiments with font technology, and he showed some examples of their results in his paper.


Alan Kay’s conception of children using the Dynabook

He doesn’t elaborate on this, but he hints at a graphical interface for the device. He describes in spots how the user can create and save “dynamic graphics.” In a scenario he illustrates in the paper, two children are playing a game like Space War on the Dynabook, involving graphics animation, experimenting with concepts of gravity. This scenario lays down the concept of it being a learning machine, and one that’s easy enough for children to manipulate through a programming language.

He also imagined that Dynabooks would be able to communicate with each other wirelessly, peer-to-peer, so that groups of students would be able to easily work on projects together, without needing an external network.

Working with Dan Ingalls, an electrical engineer with a background in physics, Diana Merry, and colleagues, the LRG began development of a system called "Smalltalk" in 1972. This was a first effort to create the Dynabook in software. It would come to formalize another of Kay's concepts, of virtual objects in a computer. The idea was that entities (visual and non-visual) could be fashioned by the person using a computer, would be as versatile as tools used in the real world, and could be used in any combination at any time to accomplish tasks that may not have been evident when they were created.

Computer programming was an important part of Kay’s concept of how people would interact with this medium, though he developed doubts about it. What he really wanted was some way that people could build models in the computer. Programming just seemed to be a good way to do it at the time.

Objects could be networked together via a concept called "message passing," with the goal of having them work together for some purpose. As Smalltalk was developed over 8 years, this idea would come to include everything from the desktop interface, to windows, to text characters, to buttons, to menus, to "paint" brushes, to drawing tools, to icons, and more. His idea of overlapping windows came from his desire to allow people to work on several projects at the same time, allowing them to make the most of limited screen space. (source: TEHS)
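To give a rough sense of what message passing means in practice, here's a little sketch in Python (not Smalltalk, and certainly not PARC's code; the Window, Brush, and send names are just mine for illustration). The point is that everything is an object, and the only way to get an object to do something is to send it a message; the sender never needs to know how the receiver works inside:

# A minimal sketch of the message-passing idea: every object, whether a
# window or a paint brush, is driven by sending it messages, and the sender
# does not depend on how the receiver is built internally.

def send(receiver, message, *args):
    """Deliver a message to an object; the object decides how to respond."""
    return getattr(receiver, message)(*args)

class Window:
    def __init__(self, title):
        self.title = title
    def draw(self):
        return f"drawing window '{self.title}'"
    def resize(self, width, height):
        return f"resizing '{self.title}' to {width}x{height}"

class Brush:
    def paint(self, x, y):
        return f"painting at ({x}, {y})"

# Any object can receive any message it understands, in any combination.
desktop = [Window("Smalltalk Browser"), Brush()]
print(send(desktop[0], "draw"))
print(send(desktop[0], "resize", 606, 808))
print(send(desktop[1], "paint", 10, 20))

In Smalltalk itself this was pervasive: even arithmetic was expressed as sending a message (such as +) to a number object.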

“The best way to predict the future is to invent it”

The graphical interface he and his team developed for Smalltalk was motivated by educational research on children, which had surmised that they have a strong visual sense, and that they relate to manipulating visual objects better than explicitly manipulating symbols, as older interactive computer systems had insisted upon. A key phrase in the paragraph below is, “doing with images makes symbols,” that is, symbols in our own minds, and, if you will, in the “mind” of the computer. His concept of “doing with images” was much more expansive and varied than has typically been allowed on computers consumers have used.

All of the elements eventually used in the Smalltalk user interface were already to be found in the sixties–as different ways to access and invoke the functionality provided by an interactive system. The two major centers were Lincoln Labs [at MIT] and RAND Corp–both ARPA funded. The big shift that consolidated these ideas into a powerful theory and long-lived examples came because the LRG focus was on children. Hence we were thinking about learning as being one of the main effects we wanted to have happen. Early on, this led to a 90 degree rotation of the purpose of the user interface from “access to functionality” to “environment in which users learn by doing”. This new stance could now respond to the echos of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children’s curriculum to a “curriculum of the user interface.”

The particular aim of LRG was to find the equivalent of writing–that is learning and thinking by doing in a medium–our new “pocket universe”. For various reasons I had settled on “iconic programming” as the way to achieve this, drawing on the iconic representations used by many ARPA projects in the sixties. My friend Nicolas Negroponte, an architect, was extremely interested in embedding the new computer magic in familiar surroundings. I had quite a bit of theatrical experience in a past life, and remembered Coleridge’s adage that “people attend ‘bad theatre’ hoping to forget, people attend ‘good theatre’ aching to remember“. In other words, it is the ability to evoke the audience’s own intelligence and experiences that makes theatre work.

Putting all this together, we want an apparently free environment in which exploration causes desired sequences to happen (Montessori); one that allows kinesthetic, iconic, and symbolic learning–“doing with images makes symbols” (Piaget & Bruner); the user is never trapped in a mode (GRAIL); the magic is embedded in the familiar (Negroponte); and which acts as a magnifying mirror for the user’s own intelligence (Coleridge). (source: TEHS)

A demonstration of Smalltalk-80

Dan Ingalls,
from Wikipedia
Diana Merry-Shapiro,
from Linkedin
Chuck Thacker,
from the ACM
Butler Lampson,
from MIT

Chuck Thacker, who has a background in physics and designing computer hardware, and a small team he assembled, created the first “Interim Dynabook,” as Kay called it, at PARC’s Computer Science Lab (CSL) in 1973. It was named “Bilbo.” It was not a handheld device, but more like the size of a desk, perhaps connected to a larger cabinet of electronics. It had a monitor with a bitmap display of 606 x 808 pixels (which gave it the profile of an 8-1/2″ x 11″ sheet of paper), and 128 kilobytes of memory. Smalltalk was brought over to it from its original “home,” a Data General Nova computer.

Thacker’s team created the first Alto models (named after Palo Alto) shortly thereafter. Butler Lampson, a computer scientist with a background in physics, and electrical engineering, led a team which created the operating system for it. The Alto computer fit under a desk. It was the size of a small refrigerator. The keyboard, mouse, and display sat on top of the desk. It had the same size display, and internal memory as “Bilbo,” ran at about 6 Mhz, and had a removable hard drive that stored 2.5 MB per disk. (source: History of Computers)

As the years passed, CSL would develop bigger, more powerful machines, with code names Dolphin and Dorado.

Adele Goldberg, from PC Forum

Beginning in 1973, Adele Goldberg, who had a background in mathematics and information systems, and Alan Kay tried Smalltalk out on students from ages 12-15, who volunteered to take programming courses from them, and to come up with their own ideas for things to try out on it. Goldberg developed software design schema and curricular materials for these courses, and helped guide the education process as they got results.

They had some success with an approach developed by Goldberg where they had the students build more and more sophisticated models with graphical objects. They thought they were building the students' skills up toward more sophisticated approaches to computing, and in some ways they were, though the influence these lessons had on the students' conception of computing was not as broad as they thought. Kay realized after teaching programming to some adult students that they could only get so far before they ran into a "literature" barrier. The same had been true with the teen and pre-teen students they had taught earlier, it turned out. From Kay's description, the way I'd summarize the problem is that the students required background knowledge in organizing their ideas, and they needed practice in doing this.

Kay said in retrospect that literature renders ideas. Any medium needs literature in order to be powerful. Literature's purpose is to provide a body of ideas that can be discussed in the medium. (Source: TEHS) Without this literary background, the students were unable to write about or discuss the more sophisticated ideas through programming code. No matter how good a tool or instrument is, what's produced by the people using it can only be as good as the ideas they use in fashioning the product. Likewise, the quality of what is written in text or notation by an author or composer, or produced in sound by a musician, or in imagery and sound by a filmmaker, is only as good as a) the skill of the creators, b) the outlook they have on what they are expressing, and c) their knowledge, which they can apply to the effort.

Reconsidering the Dynabook

There came a “dividing point” at PARC in 1975. Kay met with other members of the LRG and discussed “starting over.” He could see that the developed ideas of Smalltalk were taking them away from the educational goals he set out to accomplish. Professional considerations had started to take hold with the group, though, and they saw potential with Smalltalk. The majority of the group wanted to stick with it. His argument for “starting over” with the educational project did not win out. He said of this point in time that while he disagreed with the decision of his colleagues, he held no ill will towards them for it. He knew them to be wonderful people. The sense I get from reading Kay’s account of this history is they saw potential perhaps in creating more sophisticated user environments that professionals would be interested in using. I infer this from the projects that PARC engaged in with Smalltalk thereafter.

Kay turned his focus to a new project, as if to pick up again his “KiddieKomps” idea. He designed a computer he called NoteTaker. Adele Goldberg said Kay’s purpose in doing this was to develop a proof of concept for the Dynabook’s hardware (see the interview with her at the end of this post). While Kay went off in this direction, Goldberg took over management of the Smalltalk project. The educational program at PARC faded away in 1976. (Source: TEHS)

The original concept for NoteTaker was a laptop design, with what Kay called a "tab mouse," a physical control that was small, mounted on the computer, yet agile enough to allow a person using it to move a cursor around a graphical interface. This did not end up on the machine, but NoteTaker was made usable with a keyboard, mouse, and a touch screen, and it had stereo sound output. (Source: "Joining the Mac Group," by Bruce Horn)

Once Smalltalk-76 was done (each version of Smalltalk was named by the year it was completed), Dan Ingalls and Ted Kaehler ported it to the NoteTaker, and by around 1978 Kay had created the first “luggable” portable computer. Kay recalls it using three Intel 8086 processors (though others remember it using two Motorola 68000 processors), had 256 kilobytes of memory, and was able to run on batteries. It wasn’t a laptop (it was too big and heavy), but it could fit on a desktop. At first glance, it looks similar to the Osborne 1, the first commercially available portable computer, and there’s a reason for that. The Osborne’s case design was based on NoteTaker.

To put it through its paces, a team from PARC took the NoteTaker on an airplane trip, “running an object-oriented system with a windowed interface at 35,000 feet.”

Kay lamented that there wasn’t enough corporate will to use their own know-how to create better hardware for it (Kay considered the Intel processors barely sufficient), much less turn it into a commercial product.

The NoteTaker, from the Computer History Museum

Looking back on the experience, Kay was dismayed to see that PARC had pushed aside his educational goals, and had co-opted Smalltalk as a purely professional’s tool. The technologies that could have made his original Dynabook vision a physical reality were coming into being at just this time, but there was no will at Xerox to make it into an actual product. (source: TEHS)

He has pursued development of his educational ideas ever since. He sees computers in the same way that a few in the Middle Ages saw the invention of the printing press, enabling new ways of interacting and thinking, to, as Licklider would have said, allow people to “think as no one has ever thought before.”

Developing the office of the future

From left to right: Larry Tesler, from Twitter.com; Timothy Mott, from startupgenome.co; Charles Simonyi

A separate project at PARC also got started in the early 1970s, called POLOS (the Parc On-Line Office System), in the System Science Lab (SSL). The goal there was to develop a networked computer office system.

Edit 5/31/2016: I had suggested in the way I wrote the following paragraph that Larry Tesler had invented cut, copy, and paste actions in digital text editing. Upon further review, I think that historical interpretation is wrong. Doug Engelbart had copy and paste (and probably a “cut” action) in NLS, and there may have been earlier incarnations of it.

One of the first projects in this effort was a word processor called Gypsy, designed and implemented by Larry Tesler and Timothy Mott, both computer scientists. (Source: TAES) The unique thing about Gypsy was that Tesler tried to make it "modeless." Its behavior was like our modern-day word processors, where you always have a cursor on the screen, and all actions mean the same thing at all times. Wherever the cursor is, that's where text is entered. He also invented drag selection/highlighting with a mouse, which was incorporated into the cut, copy, and paste process. This has been a familiar feature of working with digital text ever since.

Later, Charles Simonyi, an electrical engineer, and Butler Lampson began work on the first What You See Is What You Get (WYSIWYG) text processor, called Bravo. They developed their first version in 1974. It had its own graphical interface, allowed the use of fonts, and basic document elements, like italics and boldface, but it worked in “modes.” The person using it either used the computer’s keyboard to edit text, or to issue commands to manipulate text. The keyboard did something different depending on which mode was operating at any point in time. This was followed by BravoX, which was a “modeless” version. BravoX had a menu system, which allowed the use of a mouse for executing commands, making it more like what we’d recognize as a word processor today. (source: Wikipedia) Each team was trying out different capabilities of digital text, and trying to see how people worked with a text system most productively.
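For readers who haven't used a modal editor, here's a toy sketch in Python of the difference (my own illustration of the concept, not how Bravo or Gypsy were actually implemented). In the modal style, the same keystroke means different things depending on the current mode; in the modeless style, typing always inserts text, and commands come from a separate channel such as a menu:

# A toy contrast between modal and modeless editing. The key bindings and
# event format here are made up for illustration.

def modal_editor(keys):
    text, mode = "", "command"
    for k in keys:
        if mode == "command":
            if k == "i":            # 'i' switches to insert mode
                mode = "insert"
            elif k == "d":          # 'd' deletes the last character
                text = text[:-1]
        else:                       # insert mode: keystrokes become text
            if k == "\x1b":         # Escape returns to command mode
                mode = "command"
            else:
                text += k
    return text

def modeless_editor(events):
    text = ""
    for kind, value in events:
        if kind == "key":           # typing always inserts, no matter what
            text += value
        elif kind == "menu" and value == "delete":
            text = text[:-1]        # commands come from a menu, not a keyboard mode
    return text

print(modal_editor(["i", "h", "i", "\x1b", "d"]))                          # types "hi", then deletes -> "h"
print(modeless_editor([("key", "h"), ("key", "i"), ("menu", "delete")]))   # same result, no modes -> "h"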

From left to right: Andrew Birrell, from Microsoft Research; Roy Levin, from Linkedin; Roger Needham, from ACM SIGSOFT

A couple of graphical e-mail clients, named "Laurel" and "Hardy," were developed by a team led by Doug Brotz. They allowed easy review and filing of electronic mail messages. (source: The Xerox "Star": A Retrospective) From Lampson's description, they appeared to be "e-mail terminals." They didn't store e-mails, or allow one to write them. They just provided an organized display for them, and a means of telling a separate messaging system what you wanted to do with them, or that you wanted to create a new message to send. (Source: TAES) Andrew Birrell, Roy Levin, Roger Needham (who had a background in mathematics and philosophy), and Michael Schroeder, a computer scientist, developed a distributed service called Grapevine, which the e-mail clients used to compose, receive, and transmit network messages. From the description, it appeared to work on a distributed peer-to-peer basis, with no central server controlling its services for an organization. Grapevine also provided authentication, file access control, and resource location services. Its purpose was to provide a way to transmit messages, provide network security, and find things like computers and printers on a network. It had its own name service by which clients could identify other systems. (source: Grapevine: An Exercise in Distributed Computing) To get a sense of the significance of this last point, the Arpanet did not have a domain name service, and the internet did not get its Domain Name System (DNS) until the mid-1980s.
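To illustrate what a name service does, here's a toy sketch in Python. The names and the single in-memory registry are my own simplifications for illustration; Grapevine spread this function across cooperating servers rather than keeping it in one place:

# A toy name service: resources register under names, and clients locate
# them by name rather than by hard-coded address. Here the registry is just
# a dict; Grapevine replicated this role across servers.

class NameService:
    def __init__(self):
        self.registry = {}                  # name -> (kind, address)

    def register(self, name, kind, address):
        self.registry[name] = (kind, address)

    def locate(self, name):
        """Return the (kind, address) registered under a name, or None."""
        return self.registry.get(name)

ns = NameService()
ns.register("third-floor-printer", "printer", "ethernet-host-23")   # illustrative names and addresses
ns.register("goldberg", "mailbox", "message-server-2")

print(ns.locate("third-floor-printer"))    # a client finds the printer without knowing its address
print(ns.locate("goldberg"))               # mail gets routed by name, much as DNS later did for internet hosts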

By 1975, PARC had a complete office system going: all of this, plus networked file systems, print servers (using Ethernet, created by Bob Metcalfe and Chuck Thacker at PARC), and laser printing (invented at PARC by Gary Starkweather, with software written by Butler Lampson)! A year later they had created a digital scanner.

You can see one of the e-mail clients being demonstrated, with WYSIWYG technology/laser printing, on the Alto, in this Xerox promotional video:

Clarence Ellis,
from the ACM
Gary Nutt,
from CU Boulder

Clarence Ellis and Gary Nutt, both computer scientists, developed OfficeTalk, a prototype office automation system, at PARC. It tracked “job” documents as they went from person to person in an organization. (source: “The Xerox ‘Star’: A Retrospective”) In addition, Ellis came up with the idea of clicking on a graphical image to start up an application, or to issue a command to a computer, rather than typing out words to do the same thing. This is something we see in all modern user interfaces. (source: Answers.com)

Why Xerox missed the boat

For this section I go back to Waldrop’s book as my main source.

One might ask: if the future of PCs was invented at Xerox, and it sounds an awful lot like what is in use with business IT systems today, why didn't Xerox own the PC industry? An even bigger question: why weren't people back then using systems like what we're using now? We use Windows, Mac, and Linux systems today, each with their own graphical interfaces, which descended from what PARC created. We have had word processing and LAN file servers since the 1980s, and later got e-mail over the internet, print servers, and later still, office "task" automation software. The future of personal computing as we would come to know it was sitting right in front of them in the mid-1970s. We can say that now, since all of these ideas have been made into products we use in the work world. If we had looked at this technology back then, we might have been clueless about its significance. The executives at Xerox were an example of this.

For several years management couldn’t understand the vision that was developing at PARC. The research culture there didn’t mesh that well with the executive culture, especially after the company’s founder, Joe Wilson, died in 1971. Wilson had been open to new ideas and venturing into new technologies. At the time of his death, the company was also struggling from its own growth. The demand for its copiers was outstripping its ability to produce them competently. So a new management team was brought in which understood how to manage big companies. These were “numbers men,” though, and their methodology didn’t allow them to understand how to translate the kind of deep R&D the company had been doing with computers into products, because of course they didn’t have metrics to make an accurate measurement of how much money they’d make with a technology that was totally unknown to them. The company had plenty of metrics about costs and revenue that came from copier development and sales, so it was no problem for them to make reliable estimates for a new copier model. The most profitable part of their business was copiers, so that’s where they put their product development resources. The old hands at Xerox who had set up PARC ran interference for their operation, to keep the “numbers men” from shutting down their work. The people at PARC kept trying to convince the higher-ups that what they had developed was useful for office productivity, but it was for naught. In a depressing anecdote, Waldrop wrote:

One top-level Xerox executive, after a day of being shown the wonders of PARC, had posed precisely one question to the researchers: “Where can I get some of those beanbag chairs?” (TDM, p. 408)

(A famous “feature” of PARC was their use of beanbag chairs for group discussions, called “dealer meetings.”)

Stephen Lukasik came to Xerox from DARPA in 1975. He set up what was called the Systems Development Division (SDD). Its purpose was to create new salable products out of the research that was going on inside Xerox. The problem for him was that Xerox’s executives were willing to give him the authority to set up the division, but they weren’t willing to listen to what he said was possible, nor were they willing to give him the funding to make it happen. Lukasik left Xerox in 1976. He said that though his time there was short, he valued the experience. He just didn’t see the point in staying longer. Nevertheless, SDD would become useful to Xerox. It just had to wait for the right management.

Xerox set up a big demo of its products in Boca Raton, FL. in 1977, which included the prototypes from PARC. All the company executives, their wives, family and friends were invited. The people from PARC did the best bang-up job they could to make their stuff look impressive for business computing.

Back at PARC, says [Gary] Starkweather, he and his colleagues saw this as their last, best chance. “The feeling was, ‘If they don’t get this, we don’t know what we can do.'” So, he says, with John Ellenby coordinating an all-out effort, “We stripped everything out of PARC down to the power cords and set it all up again in Boca. Computers, networks, printers–the whole thing! I built a laser printer that did color. Bill English had a word processor that did Japanese. We were going to show them space flight!”

They certainly tried. [At the event], Xerox executives and their families swarmed through the Grenada Rooms of the Boca Raton Hotel for a hands-on demonstration of WYSIWYG editing in Bravo, graphical programming in Smalltalk, E-mailing in Laurel, artistry in Paint and Draw–the works. “The idea was a mental slam-dunk!” says Starkweather. “And some people did see it.” The executives’ wives, for example–many of them former secretaries who knew all about carbon paper, Wite-Out, and having to retype whole pages to correct a single mistake–took one look at Bravo and got it. “The wives were so ecstatic they came over and kissed me,” remembers Jack Goldman. “They said, ‘Wonderful things you’re doing!’ Years later, I’d see them and they’d still remember Boca.”

Then there were the delegates from Fuji Xerox, the company’s Japanese partner: they were beside themselves over Bill English’s word processor. “Fuji clamored, ‘Give us this! We’ll manufacture it!'” recalls Goldman. In fact, he says, that was a near-universal reaction: “People from Europe, people from South America, marketing groups around the U.S.–everyone who went out of that conference was excited by what they had seen.”

Everyone, that is, except the copier executives, the real power brokers in Xerox. You couldn’t miss them; they were the ones standing in the background with the puzzled, So-What? look on their faces.

Indeed, one of the corporation’s purposes in calling the conference was to rally the troops for the coming era of ever-more-ferocious competition. In fairness, those executives in the background had to worry about defending the homeland now, not ten years from now. Or maybe they were simply too bound by the culture of the executive suite, vintage 1977. The xerographers lived in a world in which typing was women’s work and keyboards were for secretaries. It was a rare executive who would even deign to touch one. (TDM, p. 408)

There was a change in leadership in 1978 which recognized that the management style they had been using wasn’t working. The Japanese had entered the copier market, and were taking market share away from them. This is just what Joe Wilson had anticipated eight years earlier. Secondly, the new management recognized the vision at PARC. They wanted to create products from it. Work on the Xerox Star system began that year at SDD.

SDD had previously created a machine called Dandelion, Xerox’s most powerful computer to date. The Star would be the Dandelion translated into a business system. (Source: TAES)

Xerox and Apple

There’s been increasing awareness about the history of Xerox and Apple as time has passed, but there are some misconceptions about it that deserve to be cleared up.

Xerox and Apple developed a brief business relationship in 1979. Apple got ideas about how to create the user experience with their Lisa and Macintosh computers from Xerox PARC. The misperceptions are in how this happened. This is how the meeting between Xerox and Apple in 1979 has been perceived among those who are generally familiar with this history (from the 1999 TNT made-for-TV movie “Pirates of Silicon Valley”):

There’s some truth to this, but some of it is myth. One could almost say it was trumped up to bolster Steve Jobs’s image as a visionary. It’s a story that’s been propagated for many years, including by yours truly. So I mean to correct the record.

Much of Waldrop’s account of what happened between Xerox and Apple is a retelling of an account from Michael Hiltzik’s book, “Dealers of Lightning.” From what he says, the Apple team’s visit at PARC was not a total revelation about graphical user interfaces, but it was an inspiration for them to improve on what they had developed.

The idea of a graphical user interface on a computer was already out there. People at Apple knew of it prior to visiting Xerox. Xerox had been publishing information about the concept, and had been holding public demos of some of the graphical technology PARC had developed. So the idea of a graphical user interface was not a secret at all, as has been portrayed. However, this does not mean that every aspect of Xerox’s user interface development was made public, as I’ll discuss below.

Apple had been working on a computer called Lisa since 1978. It was a 16-bit machine that had a high-resolution bitmapped display. The Lisa team called it a “graphical computer.” Apple was approached by the Xerox Development Corporation (XDC) to take a look at what PARC had developed. The director of XDC felt that the technology needed to be licensed to start-ups, or else it would languish. Steve Jobs rebuffed the offer at first, but was convinced by Jef Raskin at Apple to form a team to go to PARC for a demonstration.

Apple and Xerox made a trade. Xerox bought a stake in Apple that was worth about $1 million, in exchange for Apple getting access to PARC’s technology. Waldrop says that Jobs and a team of Apple engineers made two visits to PARC in December 1979. According to an article from Stanford University, Jobs was not with them for the first visit. Waldrop notes that by this point Apple had already added a graphical user interface to the Lisa, but that it was clunky. By Larry Tesler’s account, there were more visits by the Apple team than Waldrop talks about. He has a different recollection of some of the details of the story as well. I include these different sources to give a “spectrum” of this history.

Going by Waldrop’s account, at their first visit in December ’79, they got a “standard” demo of the Alto, given by Adele Goldberg, that many other visitors to PARC had already seen. The group from Apple saw it being used with a mouse, the Bravo word processor, some drawing programs, etc., and then they left, apparently satisfied with what they had seen. It should be noted that this demo did not show the Xerox “desktop interface” in the sense of what people have come to know on personal computers and laptops. Each program they saw was a separate entity, which took over the entire operation of the machine, using the Alto’s graphical capabilities to show graphics and text. The Apple team did not see the desktop metaphor, which was in Smalltalk, and which was being developed for the Xerox Star. Goldberg considered Smalltalk proprietary. Jobs and the team eventually realized that they hadn’t really seen “the good stuff.”

The Apple team came back to PARC, unannounced. This is the visit that’s usually talked about in Apple-PARC lore, and is portrayed in the Pirates of Silicon Valley clip above. Jobs demanded to see the Smalltalk system *NOW*. A heated argument ensued between PARC and Xerox headquarters over this. Xerox’s executives ordered the PARC team to demonstrate Smalltalk to the Apple team, citing the partnership with Apple brokered by XDC. Bob Taylor was out of town at the time. He later said that he had no respect for Steve Jobs, and if he had been there, he would’ve kicked the Apple team out of the building, no if’s, and’s, or but’s! He figured Xerox would’ve fired him for it, but that would’ve been fine with him.

This time the people from Apple were prepared. They asked very detailed technical questions, and they got to see what Smalltalk was capable of. They saw educational software written by Goldberg, programming tools written by Larry Tesler, and animation software written by Diana Merry that combined graphics with text in a single document. They got to see multiple tasks on the screen at the same time with overlapping windows, in a “desktop” metaphor. Jobs was beside himself. He exploded, “Why hasn’t this company brought this to market?! What’s going on here? I don’t get it!” This was when Jobs had his epiphany about the graphical user interface, though Waldrop doesn’t delve into what that was. The breakthrough for him may have been the desktop metaphor, how it allowed multiple, different views of information, and multiple tasks to be worked on at the same time, along with everything else he’d seen. The way Waldrop portrays it is that Jobs realized that using a computer should be a fulfilling and fun experience for the person using it.

According to Hiltzik, the partnership between Xerox and Apple, of which the Apple stock trade had been a part, quickly fell apart after this, due to a culture clash. The Lisa’s chief programmer, Bill Atkinson, had to go off of what he remembered seeing at PARC, since he no longer had access to their detailed technical information.

So the idea that Apple (and Microsoft for that matter) “stole” stuff from Xerox is not accurate, though there was trepidation at PARC that Apple would steal “the kitchen sink.” While the visit gave the Lisa team ideas about how to make computer interaction better, and what the potential of the GUI was, Hiltzik said it wasn’t that big of an influence on the Lisa, or the Macintosh, in terms of their overall system design.

What this account means is that the visits to PARC were tangentially influential on Apple’s version of the idea of a graphical user interface. They did not go in ignorant of what the concept was, and it did not give them all of the ideas they needed to create one. It just helped make their idea better.

The microcomputer “revolution”

The people at PARC were aware of the nascent microcomputer phenomenon that was occurring under their noses. Some of them went to the Homebrew Computer Club meetings, where Steve Jobs and Steve Wozniak used to hang out. They read the upstart computer press that was raving about what Bill Gates and Steve Jobs were doing with their new companies, Microsoft and Apple Computer. According to Waldrop, it galled them that this upstart industry was getting so much attention and adoration that they felt it didn’t deserve.

“It had never occurred to us that people would buy crap,” declares Alan Kay, who considered the hobbyists in their garages down the hill to be very bright and very creative ignoramuses–undisciplined kids who didn’t read and didn’t have a clue about what had already been done. They were successful only because their customers were just as unsophisticated. “What none of us was thinking was that there would be millions of people out there who would be perfectly happy with the McDonald’s hamburger approach.” (TDM, p. 437)

Steve Jobs was somewhat of an exception. At least he got a hint of “what had already been done.” It took some prodding, but once he got it, he paid attention. Still, he didn’t fully understand the significance of the research that went into what he saw. For example, Jobs was very hostile to the idea of computer networking at the time, because he thought that would deprive the personal computer user of their autonomy. Freedom from dependency on larger computer systems was a notion he held to ideologically. When he railed against IBM as “big brother” it wasn’t just for show. If only he had been aware of Licklider’s vision for the internet (which was published at the time), the notion of a distributed network, with independent units, and the DARPA work that was creating it, perhaps he would’ve cottoned to it the way he had the graphical user interface. We’ll never know. He came to understand the importance of the network features that had been developed at PARC about 6 years later when he left Apple and started up NeXT.

There’s a famous quote from Jobs about his visits to PARC that illustrates what I’m talking about, from the PBS mini-series, “Triumph of the Nerds” (I wrote a post about this mini-series here):

They showed me, really, three things, but I was so blinded by the first one that I didn’t really ”see” the other two. One of the things they showed me was object-oriented programming. They showed me that, but I didn’t even “see” that. The other one they showed me was really a networked computer system. They had over 100 Alto computers all networked, using e-mail, etc., etc. I didn’t even “see” that. I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I had ever seen in my life. Now, remember it was very flawed. What we saw was incomplete. They had done a bunch of things wrong, but we didn’t know that at the time. Still, though, the germ of the idea was there, and they had done it very well. And within ten minutes it was obvious to me that all computers would work like this, someday.

Too much, too late for Xerox

Xerox released the Star in 1981 as the 8010 Information System.

It had a Smalltalk-like graphical user interface, anywhere from 384 kilobytes of memory up to 1.5 MB, an 8″ floppy drive, a 10 to 40 MB hard disk, a monitor measuring 17″ diagonally, a mouse, a bevy of system features, Ethernet, and laser printing. (source: The Xerox “Star”: A Retrospective) The 8010 cost $16,500 per unit, though a customer couldn’t purchase just the computer. It was designed to be an integrated system, with networking and laser printing included. A minimal installation cost $100,000 (about $252,438 in today’s money). Xerox was going for the Fortune 500, which could afford expensive, large-scale systems. Even though the designers thought of it as a “personal computer” system, the 8010 was marketed under the old mainframe business model.

The designers at SDD put the kitchen sink into it. It had every cool thing they could think of. The system was designed so that no third-party software could be installed on it at all. All of the hardware and software came from Xerox, and had to be installed by Xerox employees. This was also par for the course with typical mainframe setups.

In the Star operating system, all of the software was loaded into memory at boot-up, and kept memory-resident (very much like Smalltalk). Users didn’t worry about what applications to run. All they had to focus on was their documents, since all documents and data were implicitly linked to the appropriate software. It presented an object-oriented approach to information. It didn’t say to you, “You’re working with a word processor.” Instead, you worked with your document. The system software just accommodated the document by surreptitiously activating the appropriate part of the system for its manipulation, and presenting the appropriate interface.
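Here's a small sketch in Python of that document-centric idea (my own illustration of the concept, not the Star's actual design): the person opens a document, and the system quietly picks the right editing component and interface for it, so there is never a step where you "launch an application":

# A simplified sketch of document-centric dispatch: each kind of document is
# implicitly linked to the software that handles it. The handler names and
# document format below are made up for illustration.

HANDLERS = {}   # document type -> editing component

def handles(doc_type):
    """Register an editing component for a kind of document."""
    def register(cls):
        HANDLERS[doc_type] = cls
        return cls
    return register

@handles("text")
class TextEditor:
    def open(self, doc):
        return f"presenting a text-editing interface for '{doc['name']}'"

@handles("spreadsheet")
class SpreadsheetEditor:
    def open(self, doc):
        return f"presenting a spreadsheet interface for '{doc['name']}'"

def open_document(doc):
    # The person never chooses an application; the document's type implies it.
    editor = HANDLERS[doc["type"]]()
    return editor.open(doc)

print(open_document({"name": "Quarterly report", "type": "text"}))
print(open_document({"name": "Budget", "type": "spreadsheet"}))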

Since the Xerox brass recognized they knew nothing about computers, they turned the Star’s design totally over to SDD. There appeared to have been no input from marketing. Even so, as Steve Jobs said of the executives, “They were copier-heads. They just had no clue about a computer, what it could do.” It’s unclear whether input from marketing would’ve helped with product development. I have a feeling “the innovator’s dilemma” applies here.

The designers at SDD were also told that the cost of the product was no object, since their target market was used to paying hundreds of thousands of dollars for large-scale systems. The Xerox executives miscalculated on this point. While it’s true that their target market had no problem paying the system’s price tag, personal computing was a new, untested concept to them, and they were wary about risking that much money on it. Xerox didn’t have a low-risk, low-cost entry configuration to offer. It was all or nothing. Microcomputers were a much easier sell in this environment, because their individual cost, by comparison, went unnoticed in corporate budgets. Corporate managers were able to sneak them in, buying them with petty cash, even though IT managers (who believed wholeheartedly in mainframes) tried their darnedest to keep them out.

The microcomputer market, which just about everyone at Xerox saw as a joke, was eating their lunch. Bob Frankston and Dan Bricklin at Software Arts (both alumni of the Project MAC/Multics project) had come out with VisiCalc, the world's first commercial spreadsheet software, in 1979, and it was only available for micros. Xerox didn't have an equivalent on the 8010 when it was released. They had put all of their "eggs" into word processing, databases, and e-mail. These things were needed in business, but it was the same thing the researchers at PARC had run into when they tried to show the Alto to the Xerox brass in years past: People in the corporate hierarchy saw themselves as having certain roles, and they thought they needed different tools than what Xerox was offering. Word processing was what stenographers needed, in the minds of business customers. Managers didn't want it. They wanted spreadsheets, which were the "killer applications" of the era. They would buy a microcomputer just so they could run a spreadsheet on it.

The designers at SDD didn’t understand their target market. The philosophy at PARC was “eat your own dog food” (though I imagine they used a different term for it). From what I’ve heard, listening to people who were part of the IPTO in the 1960s, it was Doug Engelbart who invented this development concept. The problem for Xerox was this applied to product development as well as to overall quality control. From the beginning, they wanted to use all of the technology they developed, internally, with the idea that if they found any deficiencies, there would be no running away from them. It would motivate them to make their systems great.

Thacker and Lampson mentioned in a retrospective, which I refer to at the end of this post, that someone at PARC had come up with a spreadsheet for the Alto, but that nobody there wanted to use it. They were not accountants. Xerox eventually released a spreadsheet for the 8010, once they saw the demand for it, but the writing was already on the wall by then.

It’s hard for me to say whether this was intentional, but the net effect of the way SDD designed the Star resulted in an implicit assumption that customers in their target market would want the capabilities that the designers thought were important. Their focus was on building a complete integrated system, the likes of which no one had ever seen. The problem was nothing in their strategy accounted for the needs and perceptions that business customers would assert. They may have assumed that customers would be so impressed by the innovativeness of the system that it would sell itself, and people would adjust their roles to it in a McLuhan-esque “the medium is the message” sort of way.

The researchers at PARC recognized that producing a microcomputer had always been an option. In 1979, Bob Belleville suggested going with a 16-bit microcomputer design, maybe using the Motorola 68000, or the Intel 8086 processor, instead of going forward with the Star. He built a prototype and showed that it could work, but the team working on the Star didn’t have the patience for it. They would’ve had to throw out everything they had done, hardware and software, and start over. Secondly, they wouldn’t have been able to do as much with it as they were able to do with the approach they had been pursuing. Thacker and Lampson saw as well that building a microcomputer would be more expensive than going ahead with their approach. It was, however, a fateful decision, because the opposite would be true two years later, when the 8010 Information System was released.

Lampson said, ironically, that this time, “the problem wasn’t a shortage of vision at headquarters. If anything, it was an excess of vision at PARC.”

When Apple released their Lisa computer in 1983, Xerox realized that they had missed their chance. The future belonged to IBM, Apple, and Microsoft.

Dispersion

From about 1980 on, people left PARC to join other companies. PARC was bleeding talent. Taylor was seen as part of the problem. He had an overriding vision, and it was his way or the highway, so others have said. He understood full well where computing was going. He was just ahead of his time. He could be very supportive if you agreed with his vision, but he had utter contempt for any ideas he didn't approve of, and he would "take out the flamethrower" if you opposed him. This sounds a lot like what people once complained about with Steve Jobs, come to think of it. The director of PARC kindly asked Taylor to change his attitude. He took it as an insult, and left in frustration in 1983, taking some of PARC's best engineers with him.

From looking at the story, it appears the failure of the Star project was “the last straw” for Xerox’s foray into computer research. After Taylor left, the era of innovative computer research ended at PARC.

The visions at Xerox were grandiose, and were too much, too late for the market. I don’t mean to take anything away from what they did by saying this. What they had was pretty good. It represented the future, but it was too far ahead of where the business computing market was. Competitors had grabbed the attention of customers, and they preferred what the competition was offering.

The PARC researchers lost out both ways. When they had the chance to develop products in 1975, ahead of the explosion in the microcomputer market, their vision wasn't recognized by the company that housed them. By the time it was recognized, the market had changed such that much less powerful machines, backed by more innovative business models, won out.

Even though Alan Kay brought the idea of a small, handheld computer to PARC, something that ordinary people would love to use, he valued computing power over the size of the machine. As he would later say about that period, the idea of the Dynabook was more of a service metaphor. The size and form of the hardware was incidental to the concept. (source: An Interview with Computing Pioneer Alan Kay) Thacker and Lampson, the designers of the 8010, shared that value as well. By the late 1970s the market was thinking “small is beautiful,” and they were willing to tolerate clunky, low-powered machines, because their economies of scale met immediate needs, and they created less friction from corporate politics.

Apple and Microsoft used ideas developed at PARC to further develop the personal computer market, and so watered down pieces of that vision got out to customers over about 20 years. Today we live in a world that resembles what Bob Taylor envisioned, with individual computers, networking, and digital printing, using graphical systems.

PARC’s legacy

The outside world would come to know the desktop graphical user interface metaphor because of Smalltalk, though the metaphor was repurposed away from Alan Kay’s powerful ideas about a modeling system, into a system that made it easy to run applications, and emulated integrating older media together into virtual paper documents.

To me, the most interesting contributor to the desktop interface was Diana Merry. She was originally hired at PARC as a secretary. She just happened to understand Alan Kay’s goals, and so she was invited into the LRG. She and Kay created the first implementation of overlapping windows, in Smalltalk. She also wrote a lot of the basic software Smalltalk used to display and animate graphics. (Sources: “The Mouse and the Desktop — Designing Interactions,” by Bill Moggridge, and TEHS)

The Paint, Draw, and Write applications that appeared on the Apple Lisa and the first Macintosh systems were inspired by similar works that had been created years earlier at PARC.

Microsoft Windows and Microsoft Word benefited from the research that was done at PARC, though some inspiration came from the Macintosh as well. The look and feel of the first versions of Windows owed more to the X/Window system on Unix. Where Microsoft copied Apple was in how people interacted with Windows (icons and menus), and the application suite that came with the system. (source: The Secret Origin of Windows)

Software developers who have been working on apps for Apple products in the most recent generations of systems may be interested to know that the Objective-C language they've used was first created by a company called StepStone in the early 1980s. The design of the language and its runtime was somewhat influenced by the Smalltalk system.

Several companies licensed a version of Smalltalk, called Smalltalk-80, from Xerox during the 1980s. Among them were Tektronix, Hewlett-Packard, Digital Equipment Corp., and Apple. (source: “Smalltalk-80: Bits of History, Words of Advice”) Apple got a version running on the Macintosh XL in 1985. (source: The Long View) Apple’s version of Smalltalk was updated in the mid-1990s by Alan Kay and some of the original Learning Research Group gang from Xerox. They got Apple’s permission to release it into the public domain, and it has since been ported to many platforms. Professional developers call it by its new name, “Squeak.” An educational version also exists, maintained by the Squeakland Foundation, which goes by the name “Etoys.”

Charles Simonyi joined Microsoft. He brought his knowledge from developing Bravo with him, and it went into creating Microsoft Word. He became the lead developer on their Multiplan and Excel spreadsheet software.

Where are they now?

(Edit 4/15/2017: As events unfold, I will be updating this section.)

Stephen Lukasik became a Chief Scientist at the FCC from 1979-1982. He is a member of the International Institute for Strategic Studies, the American Physical Society, and the American Association for the Advancement of Science. He is a founder of The Information Society journal, and has served on the Boards of Trustees of Harvey Mudd College and Stevens Institute of Technology. (Source: Georgia Tech)

Larry Roberts was chief executive at Telenet until 1980. Telenet was sold to GTE in 1979 and subsequently became the data division of Sprint. In 1983 he became the CEO of NetExpress, an Asynchronous Transfer Mode (ATM) equipment company. Roberts then became president of ATM Systems from 1993 to 1998. He then went back to packet networking, founding Caspian Networks, which focused on IP flow management (IP as in "Internet Protocol"), until 2004. He founded Anagran in an effort to do what Caspian did, but more efficiently. (Source: Larry Roberts's home page)

J.C.R. Licklider – In the late 1970s Lick visited Xerox PARC regularly. He served on the Committee on Government Relations at the Association for Computing Machinery (ACM), and as deputy chairman of the Social Security Administration's Data Management System. He spent a year in 1978 on a task force commissioned by the Carter Administration, which examined the government's data-processing needs. He served as president of the Boston Computer Society. He was an investor and advisor to a company called Infocom in 1979, which was founded by eight of his former students from his Dynamic Modeling group at MIT. He spent part of his time working at VisiCorp. He continued to work at MIT until he retired in 1985. He died on June 26, 1990.

Ed Feigenbaum – In 1984 he became a Fellow at the American College of Medical Informatics. From 1994 to 1997 he served as Chief Scientist of the U.S. Air Force. He founded the Knowledge Systems Laboratory at Stanford University, and is now professor emeritus at Stanford. He was co-founder of several start-ups, such as IntelliCorp, Teknowledge, and Design Power Inc. He has served on the National Science Foundation Computer Science Advisory Board, on the National Research Council's Computer Science and Technology Board, and as a member of the Board of Regents of the National Library of Medicine. He is a Fellow of the Association for the Advancement of Artificial Intelligence, the American Institute of Medical and Biological Engineering, and the American Association for the Advancement of Science. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences. (Source: Feigenbaum's Turing Award citation at the ACM)

George Heilmeier became vice president at Texas Instruments in 1977. In 1983 he was promoted to Senior Vice President and Chief Technical Officer, responsible for all TI research, development, and engineering activities. From 1991-1996 he also served as president and CEO of Bellcore (now Telcordia), ultimately overseeing its sale to Science Applications International Corporation (SAIC). He served as the company's chairman and CEO from 1996-1997, and afterwards as its chairman emeritus. He has served on the board of trustees of Fidelity Investments and of Teletech Holdings, and on the Board of Overseers of the School of Engineering and Applied Science of the University of Pennsylvania. He is a member of the National Academy of Engineering and the Defense Science Board, and is Chairman of the Technical Advisory Board of Southern Methodist University. (sources: Wikipedia, and IEEE Global History Network)

It should be noted that Heilmeier is credited with being the inventor of the Liquid Crystal Display (LCD), a technology Alan Kay hoped to use for his Dynabook in the 1970s, and which became the basis for the digital display most of us use today.

Alan Kay left PARC on sabbatical in 1980, and never came back. He went to Atari in 1981 as Chief Scientist, doing research on interactive computing (you can see some samples of it here). He then joined Apple as a research Fellow in 1984, where he worked on improving education in conjunction with technology. He was a Fellow and Imagineer at Disney from 1996 to 2001. He founded Viewpoints Research Institute in 2001. He was then a research fellow at Hewlett-Packard from 2002 to 2005. During that time he was a Visiting Professor at Kyoto University and an adjunct professor of Computer Science at MIT, and he was involved with the XO Laptop, developed by the One Laptop Per Child program at MIT. Today he is an adjunct professor of Computer Science at UCLA, and he continues his work at Viewpoints. He is also on the advisory board of TTI/Vanguard. Since 2006 Viewpoints has been working on a project, sponsored by the National Science Foundation, to reinvent personal computing. (sources: The New York Times; Chap. 12 from Howard Rheingold's "Tools For Thought"; Wikipedia; bio of Alan Kay at Answers.com; VPRI: Inventing Fundamental New Computing Technologies)

Bob Taylor went on to found the Systems Research Center (SRC) at Digital Equipment Corp. (DEC) in 1984. He retired from DEC in 1996. He died on April 13, 2017.

Butler Lampson also left PARC with Taylor, and joined him at Digital Equipment Corp. He became a Fellow of the ACM in 1992. He now works at Microsoft Research, and is an adjunct professor at MIT. He became a Fellow at the Computer History Museum in 2008.

Dan Ingalls left PARC to work at Apple in 1984. From 1987 to 1993 he helped run the Homestead Hotel, a family business. He stayed on with Apple until 1996. From 1996 to 2001 he was Principal Staff Director at Disney Imagineering. He worked as a consultant for Hewlett-Packard and at Viewpoints Research Institute from 2001 to 2005. He joined Sun Labs in 2005, where he developed his newest project, called Lively Kernel. He left Sun in 2010 to become a Fellow at SAP, where he continues to work today. He continues development of his Lively Kernel project at the Hasso Plattner Institute. (Sources: Dan Ingalls's LinkedIn page, and his Wikipedia page)

Diana Merry – After leaving Xerox in 1986, she continued her work in Smalltalk with various employers. You can see her complete work history at her LinkedIn page.

Chuck Thacker left PARC the same time Bob Taylor did, and joined DEC as a founder of its Systems Research Center. He then joined Microsoft in 1997 to help found Microsoft Research in Cambridge, UK. He returned to the U.S., and developed the technology which was used in Microsoft’s Tablet PC. He is currently working at Microsoft Research in Silicon Valley on computer architecture.

Adele Goldberg was president of the Association for Computing Machinery (ACM) from 1984 to 1986, and continued to have a long association with the organization, taking on various roles. She continued on at PARC until 1987. She wanted to make sure that Smalltalk technology got out to a wider audience, so she worked out a technology exchange agreement with Xerox in the late 1980s and founded ParcPlace Systems, which commercialized a version of Smalltalk. She served as CEO and chairwoman of ParcPlace until 1995, when the company merged with Digitalk, another Smalltalk vendor, to become ParcPlace-Digitalk. In 1997 the company changed its name to ObjectShare. In 1999 ObjectShare was sold to Cincom, which has continued to develop and sell a Smalltalk development suite. Goldberg was inducted as a Fellow at the ACM in 1994. She co-founded Neometron in 1999, and now also works at Bullitics. She is a board member of Cognito Learning Media. She has continued her interest in education, formulating computer science courses at community colleges in the U.S. and abroad.

Charles Simonyi went to work at Microsoft in 1981. He led their development efforts to create Word, Multiplan, and Excel. He left Microsoft in 2002 to found Intentional Software, where he’s been at work on what I’d call his “Domain-Oriented Development” software and techniques.

Larry Tesler went to work at Apple in 1980, becoming Vice President of the Advanced Technology Group, and Chief Scientist. He worked on the team that developed the Lisa computer. In 1990 he led the effort to develop the Apple Newton, one of the first of what we’d recognize as a personal digital assistant. Tesler left Apple in 1997 to co-found a company called Stagecast Software. In 2001 he joined Amazon.com as its Vice President of Shopping Experience. In 2005 he joined Yahoo! as Vice President of its User Experience and Design group. In 2008 he went to work for 23andMe, a personal genetics information company. Since 2009 he’s worked as an independent consultant.

Timothy Mott left Xerox PARC to co-found Electronic Arts in 1982. In 1990 he became a Director at Electronic Arts and co-founded Macromedia, staying with Macromedia until 1994. In 1995 he co-founded Audible.com, and stayed with them until 1998. He remained with Electronic Arts until 2007. You can see his complete work profiles at Bloomberg BusinessWeek and CrunchBase.

Doug Brotz – It's been difficult finding much information on him. All I have is that he joined Adobe and was a co-developer of the PostScript page description language, which is used to control laser printers.

Andrew Birrell – He left Xerox PARC in 1984 to join the Systems Research Center at DEC, where he worked on Network Objects, a distributed object system, Virtual Paper, a system for easy online document reading, and the Personal Jukebox, the first multi-gigabyte portable audio player. DEC was acquired by Compaq in 1998. He stayed with Compaq until 2001, when he went to Microsoft Research. He retired in 2014. He died on December 7, 2016. (Source: the ACM)

Roy Levin became a senior researcher at the DEC Systems Research Center in 1984. In 1996 he became its director. In 2001 he co-founded Microsoft Research in Silicon Valley, became a Distinguished Engineer, and its Managing Director. He continues work there today.

Roger Needham worked as a consultant at Xerox PARC from 1977-1984. He then worked at DEC’s Systems Research Center from 1984-1997, and at Hitachi Advanced Research Laboratory from 1994-1997. He became a Fellow at the Royal Academy of Engineering in 1993, and of the ACM in 1994. He was a Fellow at Wolfson College, in Cambridge, UK, from 1966-2002. He joined Microsoft Research in 1997, and founded their European Research Labs. He was a longtime member of the International Association for Cryptographic Research, the IEEE Computer Society Technical Committee on Security and Privacy, and the University Grants Committee, an advisory committee to the British government. He died on March 1, 2003. (source: Wikipedia)

Michael Schroeder – I haven’t found detailed information on Schroeder, except to say that as a professor at MIT he worked on the Multics project (I covered this project in Part 2 of this series), before coming to Xerox PARC, and that after PARC he worked at DEC’s Systems Research Center. He then came to Microsoft Research in 2001, where he continues to work today. He became a Fellow of the ACM in 2004.

Clarence Ellis – After leaving Xerox PARC, he became head of the Groupware Research Program at the Microelectronics Computer Consortium in Austin, TX. He also worked at Los Alamos Labs, and Argonne National Laboratory. He held academic positions at Stanford, the University of Texas, MIT, and the Stevens Institute of Technology. In 1991 he became the chief architect of the FlowPath workflow product at Bull S.A. In 1992 he came to the University of Colorado at Boulder as a professor of computer science, and, with Gary Nutt, formed the Collaborative Technology Research Group (CTRG) to work on project workflow systems. He became professor emeritus of computer science at CU in 2010. He was on the editorial board of various journals. He was a member of the National Science Foundation (NSF) Computer Science Advisory Board, the University of Singapore ISS International Advisory Board, and the NSF Computer Science Education Committee. He died on May 17, 2014. (sources: Computer Scientists of the African Diaspora; CTRG Groupware and Workflow Research; the Daily Camera)

A note I found in Ellis’s bio. also says that he was the first African American to receive a Ph.D. in computer science, in 1969, from the University of Illinois at Urbana-Champaign. He did his post-graduate work developing the Illiac IV supercomputer, an ARPA/IPTO project, and one of the first massively parallel computer systems.

Gary Nutt worked at Xerox PARC from 1978-1980. He worked on collaboration systems at Bell Labs in Denver, CO. from 1980-81. He took a couple executive roles at technology firms in Boulder, CO. from 1981-1986. He returned to the University of Colorado at Boulder in 1986 as a CS professor (he was previously an associate CS professor at CU Boulder from 1972-1978). While on sabbatical in 1993, he worked for Group Bull in Paris, France, developing collaboration products. In 2000 he worked as VP of Engineering for Bookface.com, managing their intellectual property. He went to Inktomi in 2001 to work on content and media distribution over the internet. He also advised on managing their intellectual property. He retired from CU Boulder in 2010, and is now professor emeritus. You can see his work history in more detail at his web page, which I’ve linked to here.

Steve Jobs – I wrote about Jobs’s work in a separate post. He died on October 5, 2011.

Bob Belleville – I couldn't find much on him. What I have is that he joined Apple in 1981, becoming the chief engineer on the Lisa, and later the Macintosh project. He later joined Silicon Graphics's R&D Dept. (Sources: Good-bye Woz and Jobs; Making the Macintosh)

You can watch a retrospective from 2001 given by Chuck Thacker and Butler Lampson on their days at PARC, and what they did, here, if you’re interested. It gets pretty technical, and is an hour and 20 minutes long.

You can see a 2010 interview with Adele Goldberg at the Computer History Museum, where she reviews her academic, educational, and business career here. It’s about an hour and 30 minutes.

Here’s a presentation given by the University of Texas at Austin in 2010, with Mitchell Waldrop, the author of “The Dream Machine,” and Michael Hiltzik, the author of “Dealers of Lightning,” along with an interview with Bob Taylor. It’s a nice bookend to the history I’ve discussed here.

Epilogue

The closing chapter in Waldrop's book is called "Lick's Kids." It talks about the people who were mentored by Licklider, either through ARPA or at MIT, and who went out into the world to bring their ideas to life. It also talks about his waning years. He lived to see "the wheel reinvented" with microcomputers. The companies that were going gangbusters with them were repeating many of the lessons he and his researchers had learned in the 1960s, working at ARPA. The implication was that if they had merely taken the time to look at what had already been learned 20 years earlier, they would have avoided the same mistakes. He also saw the first glimmers of his vision of the Multinet come into being with the internet.

When reflecting on his life's work, Lick was humble. He didn't give himself much credit for creating our digital world. He thought of himself as just happening to be at the right place at the right time while some very bright people did the real work. But those whom he mentored and funded gave him a great deal of credit. They said that if it weren't for him, their ideas would not have gotten off the ground, and that often he was the only one who could see promise in them. He was indeed in the right place at the right time, but what was important was that he was in the right position to give them the support they needed to bring their dreams into reality.

I’ll talk more about Bob Kahn, and the development of the internet, in Part 4.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I mentioned this on my Facebook and Google+ pages, but thought I'd highlight it here, because I think it's an interesting question. I've done some research on the history of computer graphics in the past (and some on my blog here), and what I'd always read was that the first use of computer graphics in a movie was in 1976's Futureworld.

As I read in this Wikipedia page, it was not the first use of CGI in film. Instead, it was the first use of 3D graphics in a feature film. The computer graphics are displayed on a monitor, showing a rotating hand, and a human face. The sequence was originally created by Ed Catmull (someone I’ve talked about before) in 1972, for a short film called, obviously enough, A Computer Animated Hand. There are earlier examples of computer animation in short films going back to the 1960s on the Wikipedia page.

However, what if CGI in films went back even further, to 1958? I heard about this possibility through a video presented by John Hess on some film special effects history. He mentioned that a computer was used in creating the opening sequence for Alfred Hitchcock's movie Vertigo. I watched it, and was amazed! Yes! These look like computer graphics!

An article in Rhizome describes it, saying that John Whitney programmed these graphics using a computer that was originally designed to aim artillery during WW II. A pendulum (which contained pressurized paint) was placed above a drawing surface that was attached to a platform. The platform was moved by the computer according to mathematical equations as the pendulum swung back and forth across it. This created precise spiral designs. There’s a part of the opening sequence where you can see these spiral designs change shape. These changes were created by altering the formulas for each frame that was drawn by the computer/pendulum combination. In my mind, this is similar to how computers interacted with oscilloscopes in the earliest visual computer displays, though it sounds like the computer could not turn the paint on and off.
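
If you want to play with the general idea, here's a rough sketch in Python. To be clear, this is not Whitney's actual machinery or his formulas; it's just a generic illustration, with made-up parameter names and values, of what the Rhizome article describes: a decaying pendulum swing traced onto a rotating platform comes out as a spiral, and changing the constants ("altering the formulas") changes the figure.

import math

def trace(swing_hz=0.5, platform_rpm=3.0, decay=0.02, seconds=120, steps=2400):
    """Return (x, y) points of the pendulum's mark in the rotating platform's frame."""
    points = []
    for i in range(steps):
        t = seconds * i / steps
        # The pendulum swings back and forth along one axis, slowly losing amplitude.
        swing = math.exp(-decay * t) * math.cos(2 * math.pi * swing_hz * t)
        # The platform rotates steadily underneath it.
        theta = 2 * math.pi * (platform_rpm / 60.0) * t
        # The mark left on the platform is the swing as seen in the rotating frame.
        points.append((swing * math.cos(-theta), swing * math.sin(-theta)))
    return points

# Plotting either list of points shows a spiral rosette; a small change to one
# constant (a different "formula" for the next shot) produces a visibly different figure.
figure_a = trace()
figure_b = trace(swing_hz=0.52)
print(figure_a[:3], figure_b[:3])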

Considering this, I’m wondering why this isn’t considered by historians as the first use of computer graphics in a feature film.

Read Full Post »

I came upon some videos from a vintage computer collector named Rudolf Brandstötter. He goes by the handle “alker33” on YouTube. He also has a blog here.

I was gratified to come upon these videos, because they allowed me to take a look at these old computers. The Apple models I’d used growing up were the Apple II+, the IIe, the 512K “Fat” Macintosh, either the Lisa 1 or Lisa 2, and the Mac Plus.

The size of each video window below is so small that if you want to take a good look at what’s going on on the video screen of each computer, you need to blow the video up to fullscreen. You do this by hitting the “broken square” icon in the lower-right corner of the video window. To take it back to normal size, hit the “shrink” icon in the lower-right corner, or hit the ESC key.

The Apple I

This was a kind of mythical machine back when I was a teenager in the early 1980s. People didn’t talk about it much, but when they did, there was some admiration for it, because it’s what started the company, and it was the first computer Steve Wozniak built. Even back then they were collector’s items, fetching (as I recall) tens of thousands of dollars. I don’t think I got an idea of what it actually was until the early 1990s, for it was not sold as a consumer computer. It was just a motherboard. It came out in 1976, and famously sold for $666.66 ($2,691.95 in today’s money). Steve Wozniak was asked about the price recently, and he said it wasn’t a marketing gimmick. There was no satanic meaning behind it. He said they looked at the cost it took to make it, and ran it through a standard profit margin formula, and it just happened to come out to that number!

It did not come with a keyboard, monitor, power supply, or storage device. It came with a MOS Technology 8-bit 6502 microprocessor running at 1 MHz, and 4 kilobytes of memory, upgradeable to 16K. It had composite video output built in. The only software it contained in Read-Only Memory (ROM) was a machine language monitor that it booted into when you turned it on (that was the operating system!). The owner could use it to enter and run programs in hexadecimal via a keyboard. This was an improvement over earlier microcomputers like the Altair, which had their owners entering programs one byte at a time using a panel of switches. A tape recorder interface card had to be purchased separately to store and load programs. The card came with Steve Wozniak's Integer Basic on cassette tape.

An Apple I in a homemade case, from Wikipedia

Brandstötter, as I recall, got his from someone who had framed the motherboard. Some of the chips had to be replaced, but otherwise it was in working condition, as you can see in this video.

An original Apple I

I have to admit, watching this is anticlimactic. There’s not much to see, but I was glad I got to see it anyway. Finally, I could look at this machine that I’d heard rumors about for decades.

Brandstötter takes you on a “tour” of the machine, and shows the Apple I manuals. The computer boots into the machine language monitor, from which he can either program in hexadecimal, or load machine code from a cassette tape. Brandstötter does one of the standard tests of typing in a “hello world” program in hexadecimal. You can read about the built-in operating system (all 256 bytes of it!), and the assembly mnemonics of this little test program here. Then he loads Integer Basic from tape (after which he types in a short Basic program), and then he loads a program that displays Woz’s and then Steve Jobs’s mugs as ASCII art. The Apple I did not have a graphics mode.

Lest one think that maybe Apple used some sort of digitizer to get their faces into binary and saved them to tape, I learned not too long ago that it was common back in the 1950s and 1960s among those who were into digital media to hand-digitize photographs in ASCII. That may have been what happened here.

What’s special about this, as Brandstötter notes in the video, is he owns one of 6 working original Apple I motherboards in the world. There are modern Apple I replicas that have been made that work exactly like the original. What I read in the discussion to this video (or one of the other Apple I videos I found) is that when Apple came out with the Apple II (the next one I show below), they had a trade-in program where people could turn in their Apple I motherboards. The reason they did this was it saved them on customer support costs. So there aren’t that many vintage motherboards around. (I’ve read a couple claims that there are between 40-60 of them in the whole world.)

The Apple II

The video below was an interesting find. I remember hearing from somebody years ago that there was such a thing as an Apple II before the II+. This is the first time I’ve seen one of them. Just from how it runs, it seems no different from a II+, though there were some minor differences (I derived this information from this Wikipedia page).

The II came with the same 6502 CPU as the Apple I, with configurations from 4 kilobytes up to 48K of memory. In its lowest configuration it sold for $1,298 ($4,921 in today’s money). The 48K configuration sold for $2,638 ($10,002 in today’s money). It came as a complete unit, with its own case (no assembly required), ready to be hooked up to a monitor or TV. It had several internal expansion slots, Woz’s Integer Basic, and an assembler/disassembler built into ROM. If you just booted into the ROM it would take you to Integer Basic by default. When it came out in 1977, owners still could only load/save programs on cassette tape. A disk interface, created by Steve Wozniak, and disk drive, along with a Disk Operating System (DOS) written by a company called Shepardson, came out for it a year later. Applesoft Basic (written by Microsoft), which handled floating-point, also came out for it later on tape. As you’ll see in the video, the II had a graphics mode. What you don’t see (due to the monochrome monitor) is that it was capable of displaying color.

The II+ came out in 1979, with 16, 32, or 48K configurations, and could be expanded to 64K. It had a starting price of $1,195 ($3,777 in today’s money). It came with Applesoft Basic in ROM (replacing Integer Basic as the standard language). The assembler/disassembler was removed from ROM to make room for Applesoft Basic, though it retained a machine language monitor that users could enter using a “Call” command from Basic. Owners could load Integer Basic from disk. In addition, the graphics capabilities were enhanced.

The Apple II (not the II+)

Brandstötter types in a brief Basic program that prints numbers across the screen. He copies some files from one disk to another to show that both disk drives work. He also loads up Apple Writer, which brought back memories; it was one of the first word processors I learned to use in junior high school. Lastly, he loads up a game called "Bug."

The Apple III

I only saw Apple IIIs back in the day, used as props on a TV show called "Whiz Kids." The computer came out in 1980, and was designed as a business machine. It used a Synertek 8-bit 6502A 1.8 MHz CPU. I think the model Brandstötter uses in the video has 128K of memory (it was capable of going up to 256K). It sold for $4,340 to $7,800 ($12,084 to $21,718 in today's money). The OS it booted into was called SOS, for "Sophisticated Operating System." As you'll notice, it defaults to an 80-column display (the prior Apple II models had 40-column displays). Interestingly, the OS runs through a menu system, not a command-line interface. It's reminiscent of ProDOS, which I remember running on the Apple IIe sometimes.

The Apple III was designed to either run off of a floppy drive, or a 5 MB hard drive Apple sold called “Profile” (or both). You don’t see the Profile in this video, though you’ll see it in the video I have below on the Lisa.

The Apple III

Here’s Steve Jobs in 1980 describing how the company got started with the Apple I, his philosophical outlook at the time with the Apple II, and what he was looking forward to with “future products.” I really liked his perspective on what the computer enabled at the time. I get the sense he had a very clear idea of what its value was. With regard to “future products,” I think you can read from some of his answers that he was talking about the Apple Lisa, and possibly the Macintosh, though he was being tight lipped about getting into specifics, of course. Unfortunately there are some glitches in the video tape, and there’s a part that’s unintelligible, because it’s too badly damaged.

The Lisa

This is the Apple GUI computer that preceded the Macintosh. It came out in 1983. It used a 16-bit 5 MHz Motorola 68000 processor, and came with 1 MB of memory, expandable to 2 MB. It sold for $9,995 ($23,022 in today's money). The OS featured pre-emptive multitasking, enabling the user to run more than one application at the same time.

Here’s the Wikipedia article on it.

The Apple Lisa

Brandstötter tells an interesting story about the “Twiggy” floppy drives. They’re the two 5-1/4″ drives on the right side of the case. They were named after a 1960s fashion model who was famously thin. From a get-together Jobs had with Bill Gates 7 years ago, where Jobs mentioned them, I thought he was talking about the 3-1/2″ drives that ended up on the Macintosh, but in fact he was talking about these drives. Brandstötter says that Apple tried using it on the Mac during its development, but they ended up going with the Sony 3-1/2″ drive, because these 5-1/4″ drives were so unreliable.

They used special floppy disks that were only made for use on this drive (talk about lock-in!). They stored 870K per disk. Brandstötter shows them to the camera, and my goodness! They have two “windows” per side where they can be accessed by two read-write heads, in a double-sided fashion. (Normal 5-1/4″ disks only had one “window” per side.) The two read/write heads (one on top, one on the bottom) were positioned on opposite sides of the spindle, and moved in tandem across the disk, because the designers were concerned about head wear in a conventional double-sided disk configuration (with two heads opposing each other, on the same side of the spindle). Each head was opposed by a pad to press the disk against the head.

Apple ended up having buyers trade in their Lisas for Lisa 2’s (which came out in 1984), which had the more reliable 3-1/2″ drive. However, Brandstötter shows off the fact that his “Twiggy” drives work. After a reboot, he shows a bit of what LisaDraw can do.

This was a really interesting romp through some history, most of which was before my time!

Postscript:

As I was doing my research for this post, I noticed that there was some talk of Apple I software emulators, enabling people to experience using this vintage machine on their modern computer. If you desire to give it a whirl, here’s a page with several Apple I, and Apple 8-bit series emulators that run on various platforms. I haven’t tried any of them out. It looks like there might be a little setup necessary to get the Apple I emulator running. I noticed there was a tape Prom file (for, I assume, accessing tape files, to simulate loading/saving programs). Usually this just involves putting files like the Prom file in a known location in your storage, and directing the emulator to where it is when you first run it. Also, here’s the Apple I Operating Manual, which contains the complete hex and assembly listing of the machine monitor, and a complete electronic schematic of the motherboard. It’s up to you to figure out what to do with it. 🙂

I leave it up to readers to find emulators for the other platforms. I know there are at least Apple II emulators available for various platforms. Just google for them.

Related posts:

Remembering Steve Jobs and Apple Computer

Reminiscing, Part 2

Read Full Post »

I found a version of Smalltalk-72 online through Gilad Bracha on Google+. It's being written by Dan Ingalls, the original implementor of Smalltalk, on his relatively new web authoring platform called Lively Kernel (or Lively Web). I don't know what Ingalls has planned for it, or whether it will be up for long. I'm just writing about it because I found it an interesting and enlightening experience, and I wanted to share it with others so that hopefully you'll gain similar value from trying it out. You can use ST-72 here. I have some words of warning about it, as I'll gradually describe in this post.

First of all, I’ve noticed as of this writing that as it’s loading up it puts up a scary “error occurred” message, and a bunch of “loading” messages get spewed all over the screen. However, if you are patient, all will be well. The error screen clears after a bit, and you can start using ST-72. This looks to be a work in progress, and some of it works.

What you’ll see is a white vertical rectangle surrounded by a yellowish box that contains some buttons. The white rectangle represents the screen area you work in. At the bottom of that screen area is a command line, which starts off with “Welcome to SMALLTALK.”

This environment was created using an original saved image of a running Smalltalk-72 system from 40 years ago. Even though the title says “ALTO Smalltalk-72,” it’s apparently running on a Nova emulator. The Nova was a minicomputer from Data General, which was the original platform Smalltalk was written on at Xerox PARC in the early 1970s. I’m unclear on what the connection is to the Alto’s hardware. You can even look at a Nova assembly monitor display by clicking on the “Show Nova” button, which will show you a disassembly of the machine instructions that the emulator is executing as you use Smalltalk. (To get back to Smalltalk, click on “Show Smalltalk.”) The screen display you see is as it was originally.

Smalltalk-72 is not the first version of Smalltalk, but it comes pretty close. According to Alan Kay’s retelling in “The Early History of Smalltalk,” there was an earlier version (which was the result of a bet he made with his PARC colleagues), but it was rudimentary.

This version does not run like the versions of Smalltalk most people familiar with it have used, which were based on Smalltalk-80. There are no workspace windows (yet), other than the command line interface at the bottom of the screen area, though there is a class editor (which as of this writing I’d say is non-functional). It doesn’t look like there’s a debugger. Anytime an error occurs, a subwindow with an error message pops up (which can be dismissed by pressing ctrl-D). You can still get some things done with the environment. To really get an idea of how to use it you need to click on “Open ST-72 Manual.” This will bring up a PDF of the original Smalltalk-72 documentation.

What you should keep in mind, though, (again, as of this writing) is that some features will not work at all. File functions do not work. There’s a keystroke the manual calls “‘s”, which is supposed to recall a specified object attribute, such as, for example, “title’s string” to access a variable called “string” out of an object called “title.” (“title” receives “‘s string” as separate messages (“‘s” and “string”), but “‘s” is supposed to be received as a single character, not two, as shown.) I have not been able to find this keystroke on the keyboard, though you can substitute a different message for it (whatever you want to use) inside a class.  The biggest thing missing at this point is that while it can read the mouse pointer’s position, this ST-72 environment does not detect any mouse button presses (the original mouse that was used with it had several mouse buttons). This disables any ability to use the built-in class editor, which makes using ST-72 a chore for anything more than “hello world”-type stuff, because besides adding a method to a class (which is easy to do), it seems the only way to change code inside a class is to erase the class (assign nil to its name), and then completely rewrite it. I also found this made later code examples in the documentation (which got more into sophisticated graphics) impossible to try out, as much as I tried to use keyboard actions to substitute for mouse clicks.

What I liked about seeing this version was it helped explain what Kay was talking about when he described the first versions of Smalltalk as “iconic” programming languages, and what he described with Smalltalk’s parsing technique, in “The Early History of Smalltalk” (which I will call “TEHS” hereafter).

To use this version, you’ll need to get familiar with how to use your keyboard to type out the right symbols. It’s not cryptic like APL, but it’s impossible to do much without using some symbolic characters in the code.

The most fascinating aspect to me is that parsing was a major part of how objects operated in this environment. If you're familiar with Lisp, ST-72 will seem a bit Lisp-like. In fact, Kay says in TEHS that one of his objectives was to improve on Lisp's use of "special forms" by making them unnecessary. He said that he did this by making parsing an integral part of how objects operate, and by making what appear to be syntactic elements (what would be reserved words or characters in other languages) their own classes. Those classes receive messages, which are just other elements of a Smalltalk expression in your code, so the parsing action goes pretty deep into how the whole system operates.

Objects in ST-72 exist in what I’d call a “nether world” between the way that Lisp functions and macros behave (without the code generation part). Rather than substitute bound values and evaluate, like Lisp does, it parses tokens and evaluates (though it has bound values in class, instance, and temporary variables).

It’s possible to create a class which has no methods (or just an implicit anonymous one). Done this way, it looks almost like a lambda (an anonymous function) in Lisp with no parameters. However, to pass parameters to an object, the preferred method is to execute some parsing actions inside the class’s code. ST-72 makes this pretty easy. Once you start doing it, using the “eyeball” character, classes start to look a bit like Lisp functions with a COND expression in it.

Distinguishing between using a class and an instance of it can be confusing in this environment. If you reference a class and pass it a message, that’s executing a class’s code using its class variable(s). (In ST-80 this would be called using “class-side code,” though in ST-72 the only distinction between class and instance is what variables inside the class specification get used. The code that gets executed is exactly the same in both cases.) In order to use an object instance (using its instance variable(s)), you have to assign the class to a variable first, and then pass your message to that variable. Assigning a class to a variable automatically creates an instance of the class. There is no “new” operator. The semantics of assignment look similar to that of SETQ in Lisp.

A major difference I notice between ST-72 and ST-80 is that in ST-80 you generally set up a method with a signature that lays out what message it will receive, and its parameter values, and this is distinct from the functionality that the method executes. In ST-72 you create a method by setting up a parsing expression (using the “eyeball” character) within the body of the class code, followed by an “implies” (conditional) expression, which then may contain additional parsing/implies expressions within it to get the message’s parameters, and somewhere in the chain of these expressions you execute an action once the message is fully parsed. I imagine this could get messy with sophisticated interactions, but at the same time I appreciated it, because it can allow a very straightforward programming style when using the class.
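
To make that concrete, here's a loose analogy in Python. This is not Smalltalk syntax, and all the names are mine; it just sketches the mechanism described above: rather than declaring a fixed method signature, the object reads the incoming message stream itself, peeking at tokens (like the "keyhole"), fetching them unevaluated (like the "open colon"), and branching on what it finds (the "eyeball ... implies" pattern).

class MessageStream:
    """A stream of message tokens sent to an object."""
    def __init__(self, tokens):
        self.tokens = list(tokens)

    def peek(self):   # look at the next token without consuming it ("keyhole")
        return self.tokens[0] if self.tokens else None

    def fetch(self):  # take the next token without evaluating it ("open colon")
        return self.tokens.pop(0) if self.tokens else None


class Box:
    """Parses its own messages, roughly the way an ST-72 class body does."""
    def __init__(self, size=32):
        self.size = size

    def receive(self, stream):
        if stream.peek() == "grow":           # "eyeball" looks for a token; "implies" acts on it
            stream.fetch()                    # consume "grow"
            self.size += int(stream.fetch())  # fetch the argument from the stream
            return self.size
        elif stream.peek() == "size":
            stream.fetch()
            return self.size
        raise ValueError("message not understood: %r" % stream.peek())


b = Box()
print(b.receive(MessageStream(["grow", "10"])))  # prints 42
print(b.receive(MessageStream(["size"])))        # prints 42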

Creating a class is easy. You use the keyword “to” (“to” is itself a class, though I won’t get into that). If anyone has used the Logo language, this will look familiar. One of Kay’s inspirations for Smalltalk was Logo. After the “to”, you give the class a name, and enter any temporary, instance, and/or class variables. After that, you start a vector (a list) which contains your code for the class, and then you close off the vector to complete the class specification.

All graphics in ST-72 are done using a "turtle" (though it is possible to use x-y coordinates to position it, and to direct it to draw lines). So it's easy to do turtle graphics in this version of Smalltalk. If you go all the way through the documentation, you'll see that Kay and Adele Goldberg, who co-wrote it, do some sophisticated things with the graphics capabilities, such as creating windows and menus, though since mouse button presses are non-functional at this point, I found this part difficult, if not impossible, to deal with.

I have at times found that I’ve wanted to just start the emulator over and erase what I’ve done. You can do this by hitting the “Show Nova” button, hitting the “Restart” button, and then hitting “Show Smalltalk.”

Here is the best I could do for an ST-72 key map. Some of this is documented. Some of it I found through trial and error. You’ll generally get the gist of what these keys do as you go through the ST-72 documentation.

function keystroke description
! (“do it”) \ (backslash) evaluates expression entered on the command line, and executes it
hand shift-‘ quotes a literal (equivalent to “quote” in Lisp)
eyeball shift-5 “look for” – for parsing tokens
open colon ctrl-C fetch the next token, but do not evaluate (looks like two small circles one on top of the other)
keyhole ctrl-K peek at the input stream, but do not fetch token from it
implies shift-/ used to create if…then…else… expressions
return shift-1 returns value
smiley/turtle shift-2 for turtle graphics
open square shift-7 bitwise operation, followed by:
a * (number) = a AND (number)
a + (number) = a OR (number)
a – (number) = a XOR (number)
a / (number) = a left-shift by (number)
? shift-~ (tilde) question mark character
‘s (unknown)
done ctrl-D exit subwindow
unary minus
less-than-or-equal ctrl-A
greater-than-or-equal ctrl-Z
not equal ctrl-N
% ctrl-V
@ ctrl-F
! ctrl-Q exclamation point character
ctrl-O double quote character
zero(?) shift-4
up-arrow shift-6
assignment shift-_ displays a left-arrow, equivalent to “=” in Algol-derived languages.
temp/instance/class variable separator : (colon) (this character looks like ‘/’ in the ST-72 documentation, but you should use “:” instead)

When entering Smalltalk code, the Return key does not enter the code into the system. It just does a line-feed so you can enter more code. To execute code you must “do it,” by pressing ‘\’.

You’ll notice some of the keystrokes are just for typing out a literal character, like ‘?’. This is due to the emulator re-mapping some of the keystrokes on your keyboard to do something else, so it was necessary to also re-map the substituted characters to something else so they could still be used.

Using ST-72 feels like playing around in a sandbox. It’s fun seeing what you can find out. Enjoy.

Read Full Post »

Christina Engelbart posted a page on her site to mention places that commemorated the 45th Anniversary of Doug Engelbart’s (her father’s) “Mother of All Demos.” The Computer History Museum in Mountain View, CA did an event on Dec. 9, the date of the demo, to talk about it, what was accomplished from it, and what from it has yet to be brought into our digital world. There have been some other anniversary events for this demo in the past, but this one is kind of poignant I think, because Doug Engelbart passed away in July. Maybe C-SPAN will be showing it?

I was intrigued by this line in the description of the CHM event:

Some of the main records of his laboratory at SRI are in the Museum’s collection, and form a crucial part of the CHM Internet History Program.

Since I heard about what an admirable job the Augmentation Research Center had done in building NLS architecturally, I’ve been curious to know, “How can today’s developers take a look at what they did, so they could learn something from it?”

The reason I particularly wanted to point out this commemoration, though, is that one of the pages Christina referenced was a post by Brad Neuberg demonstrating HyperScope, a modern, web-based system that implements some of the document and linking features of NLS and of Engelbart's original Augment system (NLS was renamed "Augment" in the late 1970s). Watching Engelbart's demo gives one a flavor of what it was like to use it, and of its capabilities, but it doesn't help the audience understand how it deals with information and how it operates (why it's an important artifact), beyond being dazzled by what was accomplished 45 years ago. Watching Brad's videos gives one a better sense of these things.

Read Full Post »

If, in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsive to every action you had, how much value could you derive from that?

— Doug Engelbart, introducing his NLS system in 1968

Doug Engelbart, from Wikipedia

I got the news from Howard Rheingold on Google+ that Doug Engelbart died last night. It is not a stretch to say that without Engelbart the experience we’ve had with computers for the last 30+ years would be very different. He was a pioneer in interactive computing, bringing computers out of the era of batch processing with punch cards and teletypes as the only means of using them, into an age where you could directly interact with one. He wanted so much more to come out of that, and he put that desire into his NLS system.

He is best known today as the man who invented the computer mouse, but in my opinion that makes light of what he did. He gave us mouse interaction with a graphical interface. He gave us the idea of using a visual pointer (he called the visual pointer a “bug”–a little something that flies or crawls around the screen…) and information schemas that one could change on the fly to manipulate and organize information, and to save that information to your own personal file so that you could later retrieve it, and share it with others in your group. He gave us hyperlinking, long before most of us understood what the concept was. His hyperlinks were more than what we have today. They not only pointed to information, but also specified how the creator of the link wanted that information displayed. He gave us a mixture of graphics and text on the screen at the same time, to enhance our ability to communicate. He gave us the idea of sub-second access to information, so that our brains could be “in flow,” and not have mental breaks with our thoughts, because of slow technology. He gave us collaborative computing with video conferencing. In an amazing display, when he demoed NLS, he brought in other participants, using a combination of analog technology, and a digital network, who could work with Doug on the same document, live, at the same time. Using video cameras, television displays, and microphones, and speakers, they could see each other, and talk to each other as they worked.

During the 1960s he thought about word processing at a time when it was just experimental, and how the idea could be used to make forming documents easier. He thought about online discussions, address books, online technical support, and online research libraries. He brought some of what he thought about into NLS.

If you’d like to watch the full presentation, you can look through an annotated version here.

All of this was created in 1968, before the Arpanet, the predecessor to the internet, came into being. The audience that saw this demo was dazzled. Some questioned whether it was real, whether it was all a mock up, because hardly anybody expected that computers could do these things. Another computing pioneer, Chuck Thacker, said of Engelbart that day that he was “dealing lightning with both hands.” Indeed he did.

There are more ideas he had, which he came up with more than 45 years ago, and were implemented in NLS, that have yet to be introduced into our digital world. He wanted systems like NLS to be a “vehicle,” as he called it, for implementing a method for improving “collective intelligence.”

Smeagol Studios came up with what I thought was a nice video summary of Engelbart’s work, and how it was expressed in products that we in the mass market came to use.

Thank you, Doug, for all that you gave us. You took a great leap in making computers approachable, which helped me fall in love with them.

Related links:

The Doug Engelbart Institute

From my blog:

A history lesson in government R&D, Part 2

Getting beyond paper and linear media

Does computer science have a future?

Tales of inventing the future

Great moments in modern computer history

Edit: A couple other links I thought to include:

Doug Engelbart and Ted Nelson come to dinner – This happened on Engelbart’s birthday last year. Howard Rheingold invited Engelbart and Nelson over, and just had them chew the fat on what they thought of the web.

More on getting beyond paper and linear media – Christina Engelbart, who is one of Doug’s daughters, and the director of the Doug Engelbart Institute, wrote a follow-up post to “Getting beyond paper and linear media,” on her blog, Collective IQ, talking about Doug’s concept of augmenting the human intellect.

Read Full Post »
