

This has been on my mind for a while, since I had a brush with it. I’ve been using Jungle Disk cloud-based backup since about 2007, and I’ve been pretty satisfied with it. I had to take my laptop “into the shop” early this year, and I used another laptop while mine was being repaired. I thought I’d grab a few things I was working on from my cloud backup, so I tried setting up the Jungle Disk client. I was dismayed to learn that I couldn’t get access to my backed-up files, because I didn’t have my Amazon S3 Access Key. I remember going through this before, and being able to recover my key from Amazon’s cloud service after giving Amazon my sign-in credentials. I couldn’t find the option for it this time. After doing some research online, I found out they stopped offering that.

So, if you don’t have your access key and you want your data back, you are SOL. You can’t get any of it back, period, even if you give Amazon’s cloud services your correct sign-on credentials. I also read stories from very disappointed customers who had a computer crash, didn’t have their key, and had no means to recover their data from the backup they had been paying for, for years. This is an issue that Jungle Disk should’ve notified customers about a while ago. It’s disconcerting that they haven’t done so, but since I know about it now, and my situation was not catastrophic, it’s not enough to make me want to stop using it. The price can’t be beat, but this is a case where you get what you pay for as well.

My advice: Write down your Amazon S3 Access Key, on a piece of paper or in a digital note that you know is secure and recoverable if something bad happens to your device. Do. It. NOW! You can find it by bringing up Jungle Disk’s Activity Monitor, and then going to its Desktop Configuration screen. Look under Application Settings, and then Jungle Disk Account information. It’s a 20-character alphanumeric code. Once you have it, you’re good to go.

Edit 12/23/2015: I forgot to mention that to re-establish a connection with your backup account, you also need what’s called a Secret Key. You set this up when you first set up Jungle Disk. You should keep it with your S3 Access Key. From my research, though, it seems the most essential thing for re-establishing the connection with your backup account is keeping a copy of your S3 Access Key. The Secret Key is important, but you can generate a new one if you don’t know what it is. Amazon no longer reveals your Secret Key. The article “Where’s My Secret Access Key?” talks about this. It sounds relatively painless to generate a new one, so I assume you don’t lose access to your backup files by doing this. Amazon’s system limits you to two generated Secret Keys “at a time.” You can generate more than two over time, but if you’ve reached this limit, you first have to delete one of the old ones. The article explains how to do that.
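For anyone comfortable scripting this, here is a minimal sketch of what that key housekeeping looks like against Amazon’s IAM service, using the boto3 Python library. This is my own illustration, not anything Jungle Disk provides: it assumes your backups are tied to an IAM user (I’m calling it “backup-user” purely for example) and that you already have working admin credentials configured for boto3. It lists the Access Key IDs that exist, shows where a replacement Secret Key comes from, and notes where you’d delete an old key pair if you’ve hit the two-key limit.

    # Sketch: managing S3 access keys with boto3. "backup-user" is a
    # hypothetical IAM user name -- substitute your own.
    import boto3

    iam = boto3.client("iam")
    USER = "backup-user"

    # 1. List the Access Key IDs that already exist for the user.
    #    (The Secret Key is never returned here; Amazon won't reveal it again.)
    for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
        print(key["AccessKeyId"], key["Status"], key["CreateDate"])

    # 2. If you're at the two-key limit, delete the old or inactive one first.
    # iam.delete_access_key(UserName=USER, AccessKeyId="AKIA...OLDKEY")

    # 3. Generate a new key pair. The Secret Key appears ONLY in this response,
    #    so write both values down somewhere safe immediately.
    new_key = iam.create_access_key(UserName=USER)["AccessKey"]
    print("Access Key ID:", new_key["AccessKeyId"])
    print("Secret Key:   ", new_key["SecretAccessKey"])

If your setup predates IAM users, the same list, create, and delete steps are available through the security credentials page in the AWS web console, which is roughly what the article above walks through.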


See Part 1, Part 2

This post has been a long time coming. I really thought I was going to get it done about this time last year, but I got diverted into other research that I hope to convey on this blog in the future. I also found more sources to explore for the portion on Xerox PARC. I’ve been as faithful as I can to the history, but there’s always a possibility I made some mistakes. I welcome corrections from those who know better. 🙂

This is not a complete history of computing in the period I cover. I mean to convey the closing years of ARPA’s “great leaps” in ideas about computing, and then cover the attempt to continue those “great leaps” privately, at Xerox PARC.

My primary source material for this part is the book, “The Dream Machine,” by M. Mitchell Waldrop. I’ll just refer to it as “TDM.”

I’ve been dividing up this series roughly into decades, or “eras.” Part 1 focused on the 1940s and 50s. Part 2 focused on the 1960s. This part is devoted to the 1970s.

In Part 2 I covered the creation of the Advanced Research Projects Agency (ARPA) at the Department of Defense, and the IPTO (Information Processing Techniques Office, a program within ARPA focused on computer research), and the many innovative projects in which it had been engaged during the 1960s.

Decline at the IPTO

There were signs of “trouble in paradise” at the IPTO, beginning around 1967, with the acceleration of the Vietnam War. Charles Herzfeld stepped down as ARPA director that year, and was replaced by Eberhardt Rechtin. There was talk within the Department of Defense of ending ARPA altogether. Defense Secretary Robert McNamara did not like what he saw happening in the program. Money was starting to get tight. People outside of ARPA had lost track of its mission. It was recognized for producing innovative technology, but mostly the kind of thing that academics could get excited about. Most of it was not being translated into technologies the military could use. Its work with the academic community created political problems for it within the military ranks, because academia was viewed as being opposed to the war.

Rechtin cut a lot of programs out of ARPA. Some projects were transferred to other funders, or were spun off into their own operations. He wanted to see operational technology come out of the agency. The IPTO had some allies in John Foster, the Director of Defense Research & Engineering (DDR&E, the office to which ARPA reported), and in Stephen Lukasik, who had had the chance to use the Whirlwind computer (which I covered in Part 1). Lukasik was very excited about the potential of computer technology. He succeeded Rechtin as ARPA director in 1970. The IPTO’s budget remained protected, mainly, it seems, due to Lukasik pitching the Arpanet to his superiors as a critical technology for the military in the nuclear age (I covered the Arpanet in Part 2).

In 1970 a rule called the Mansfield Amendment, created by Democratic Senator Mike Mansfield, came into effect. It mandated, for one year, that all monies spent by the Defense Department had to have some stated relevance to the country’s military mission. Wikipedia says that this rule was renewed in 1973, specifically targeting ARPA. Waldrop claims it was really a roundabout way of cutting defense spending, and that it didn’t mean anything in terms of the technology that was being developed at ARPA. Nevertheless, it had social reverberations within the agency’s working groups.

First, in light of the controversy over the Vietnam War, it tarnished ARPA’s image in the academic community, because it required all projects funded through it to justify their existence in military terms. When non-military research proposals were sent to ARPA, military “relevance statements” would be written up and slapped on them at the end of the approval process, unbeknownst to the researchers, in order to comply with the law. These relevance statements described the potential, or supposedly intended, military applications of the technology. This caused embarrassing moments when students would find out about these statements through disclosures obtained under the Freedom of Information Act. They would confront ARPA-funded researchers with them. This was often the first time the researchers had seen the statements. They’d be left explaining that while, yes, the military was funding their project, the focus of their research was peaceful. The relevance statements made it look otherwise.

Second, it gave the working groups the impression that Congress was looking over their shoulder at everything they did. The sense that they were protected from interference had been pierced. This dampened enthusiasm all around, and made the recruiting of new students into ARPA projects more difficult.

ARPA’s name was changed to DARPA in 1972, adding the word “Defense” to its name. It didn’t mean anything as far as the agency was concerned, but given the academic community’s opposition to the Vietnam War, it didn’t help with student recruiting.

In 1972 the consulting firm Bolt Beranek and Newman (BBN), which had built the hardware for the Arpanet, wanted to commercialize its technology–to sell it to other customers besides the Defense Department. IPTO director Larry Roberts announced in 1973 that he was leaving DARPA to work for BBN’s new networking subsidiary, named Telenet. This created a big problem for Lukasik, because Roberts hadn’t groomed a deputy director, as past IPTO directors had done, so that there would be someone who could jump right in and replace him when he left. It was up to Lukasik to find a replacement, only that wasn’t so easy. He had created a bit of a monster within the IPTO. Every year he had pushed its budget higher, while other projects within DARPA were having their budgets cut. He tried to recruit a new director from within the IPTO’s research community, but everyone he thought would be suited for it turned him down. They were enjoying their research too much. Lukasik came upon J.C.R. Licklider, the IPTO’s founding director, as his last resort.

J.C.R. “Lick” Licklider, from ARS 327

Licklider (he liked to be called “Lick”), who I talked about extensively in Part 2, had returned to MIT and ARPA in 1968. He became a tenured professor of electrical engineering. He began his own ARPA project at MIT, called the Dynamic Modeling group, in 1971. He observed that the Multics project, which I covered in Part 2, nearly overwhelmed Project MAC’s software engineers, and almost resulted in Multics being cancelled. The goal of his project was to create software development systems that would make complex software engineering projects more comprehensible. As part of his work he got into computer graphics, looking at how virtual entities can be represented on the screen. He also studied human-computer interaction with a graphics display.

Larry Roberts tried to find someone besides Lick to be his replacement, more as an act of mercy toward him, but in the end he couldn’t come up with anyone, either. He contacted Lick and asked him if he’d be his replacement. Lick reluctantly accepted, and returned to the IPTO directorship in 1974.

Lick became an enthusiastic supporter of Ed Feigenbaum’s artificial intelligence/expert systems research. I talked a bit about Feigenbaum in Part 2. With Lick’s guidance, Feigenbaum founded Stanford’s Heuristic Programming Project. Feigenbaum’s work would become the basis for all expert systems produced during the 1980s. Lick also supported Feigenbaum’s effort to integrate artificial intelligence (AI) into medical research, granting an Arpanet connection to Stanford. This allowed a community of researchers to collaborate in this field.

The IPTO’s favored place within DARPA did not last. Lukasik didn’t get along with his new superior at DDR&E, Malcolm Currie. They had different philosophies about what DARPA needed to be doing. Waldrop wrote, “Currie wanted solutions out of ARPA now,” and he wanted to replace Lukasik with someone of like mind.

Bob Taylor at the Xerox Palo Alto Research Center had his eye on Lukasik. He needed someone who understood how they did research, and how to talk to business executives. Taylor had been an IPTO director in the late ’60s (I cover his tenure there in Part 2). He brought its style of computer research to Xerox, but he could see that they were not set up to create products out of it. He thought Lukasik would be a perfect fit. Taylor had been recruiting many of DARPA’s brightest minds into PARC, and it was a source of frustration at the agency. Nevertheless, Lukasik was intensely curious to find out just what they were up to. He came out to see what they were building. PARC was accomplishing things that Lick had only dreamed of years earlier. Lukasik was so impressed, he left DARPA to join Xerox in 1975.

George Heilmeier, from Wikipedia

His replacement was George Heilmeier. Waldrop characterized him as hard-nosed, someone who took an applied science approach to research. He had little patience for open-ended, basic research–the kind that had been going on at DARPA since its inception. He was a solutions man. He wanted the working groups to identify goals, where they expected their research to end up at some point in time. His emphasis was not exploration, but problem solving. Though it was jarring to everybody at first, most program directors came to accommodate it. Lick, however, was greatly dismayed that this was the new rule of the day. He thought Heilmeier’s methods went against everything DARPA stood for. Lick liked the idea of finding practical applications for research, but he wanted solutions that were orders of magnitude better than older ones–world changing–not just short-term goals of improving old methods by ten percent. Further, he could see that micromanagement had truly entered the agency. It wasn’t just a perception anymore.

Heilmeier explained in a 1991 interview,

For all the wonderful technology IPTO had sponsored, [Heilmeier insisted], it was the worst mess in the agency. And artificial intelligence was the worst mess in IPTO. “You see, there was this so-called DARPA community, and a large chunk of our money went to this community. But when I looked at the so-called proposals, I thought, Wait a second; there’s nothing here. Well, Lick and I tangled professionally on this issue. He said, ‘You don’t understand. What you do is give good people the money and they go off and do good things and that’s it.’ I said, ‘Lick, I understand that. And these may be good people, but for the life of me I can’t tell you what they’re going to do. And I don’t know whether they are going to reinvent the wheel, because there’s no discussion of the current practice and there’s no discussion of the implications, so I can’t tell whether this is a wise investment for DoD or not.'” (TDM, p. 402)

Lick had a very good track record of picking research projects, using a more subjective, intuitive sense about people, and he didn’t believe you could achieve the same quality of research by trying to use a more objective method of selecting people and projects.

Bob Kahn, who had joined DARPA in 1972, explained the differences between them,

[The] fact was that … both men were right. “There were some things that were more conducive to George’s directed style,” says Kahn. “Packet radio, for example, and maybe the Internet. But speech understanding was not amenable. In fact, almost none of AI fit in. The problem was that George kept looking for a kind of road map to the field of AI. He wanted to know what was going to happen, on schedule, into the future, to make the field a reality. And he thought it was quite reasonable to ask for that road map, because he had no idea how hard it would be to produce. Suppose you were Lewis and Clark exploring the West, where you had no idea what you were going to encounter, and people wanted to know exactly what routes you were going to take, where you would camp, and what you would do out there. Well, this was just not an engineering job, where you could work out the whole plan. So George was looking for something that Lick couldn’t provide.”

Unfortunately, when Lick tried to explain that to his boss, it was a bit like Seymour Papert’s or Alan Kay’s trying to explain exploratory education to a back-to-basics hard-liner. “IPTO really didn’t have a program-management structure,” declares Heilmeier. “They had a financial management structure, and they had a cheering section.” (TDM, p. 403)

Interestingly, I found this article in EETimes that saw this from a very different perspective, saying that what DARPA had been running was a “good-old-boys network” (and it’s difficult to see how it’s not implicating Lick in this), and that Heilmeier snapped them into shape.

Lick tried to see it from Heilmeier’s perspective, that researchers should not see applications as a threat to basic research, and that they should make common cause with engineers, to bring their work into the world of people who will use their technology. He thought this could provide an avenue where researchers could receive feedback that would be valuable for their work, and would hopefully soften the antagonism that was being directed at open-ended research. Meanwhile, he’d quietly try to keep Heilmeier from destroying what he had worked so hard to achieve.

Heilmeier seemed to agree with this approach. He issued some challenges to the IPTO working groups, identifying technical problems he thought needed to be solved. He said to them, “Look, if some of you guys would sign up for these challenges I can justify more fundamental work in AI.” Some of them did just as he asked, and got to work on the challenges.

Lick declared in 1975 that the Arpanet, a project which began at DARPA in 1967, was fully operational, and handed control of the network over to the Defense Communications Agency.

Lick had to kill funding for some projects, which was never easy for him. One in particular was Doug Engelbart’s NLS project at Stanford Research Institute. I talked about this project in Part 2, and what ended up happening with it. A lot of Engelbart’s talent had left to pursue the development of personal computing at Xerox PARC. From Lick’s and DARPA’s perspective, he just wasn’t able to innovate the way he had before. From Engelbart’s perspective, Lick had been captured by the hype around AI, and just wasn’t able to see the good he was doing. It was a sad parting of the ways for both of them. (Source: Nerd TV interview with Doug Engelbart)

Heilmeier was pleased with the changes at the IPTO. They had “gotten with the program,” and he thought they were producing good results. For Lick, however, the change was exhausting, and depressing. He left DARPA for good in late 1975, and returned to MIT, where his spirit soared again. This didn’t mean that he left computing behind. He came up with his own projects, and he encouraged students to play with computers, as he loved to do.

Bob Kahn at DARPA asked an important question. He agreed with Heilmeier’s “wire brushing” of the agency, conceding that some projects had become self-indulgent, but he cautioned against “too much of a good thing” in the other direction.

ARPA’s current obsession with “relevance” had come dangerously close to destroying what made the agency so special. Remember, says Kahn, when it came to basic computer research–the kind of high-risk, high-payoff work that might not mature for a decade or more–ARPA was almost the only game in town. The computer industry itself was oriented much more toward products and services, he says, which meant that “there was actually very little research going on that was as innovative as ARPA’s.” And while there were certainly some shining exceptions to that rule–notably [Xerox] PARC, IBM and [AT&T] Bell Labs–“many of the leading scientists and researchers couldn’t be supported that way. So if universities didn’t do basic research, where would industry get its trained people?”

Yet it was ARPA’s basic research that was getting cut. “The budget for basic R&D was only about one third of what it was before Lick came in,” says Kahn. “Morale in the whole computer-science community was very low.” (TDM, p. 417)

Kahn pursued a research initiative anticipating the future of VLSI (Very-Large-Scale Integration) design and fabrication for microchips, in order to try to revive the basic research culture at the IPTO. He won approval for it in 1977. Through it, methods were developed for creating computer languages for designing VLSI chips. Kahn was also the co-developer, with Vint Cerf at DARPA, of TCP/IP, the protocol suite on which the internet would be built.

Heilmeier left DARPA in 1977, and was replaced by Bob Fossum, who had a management style that was more amenable to basic research. The question was, though, “Okay. This is good, but what about the next director?” It had been common practice for people at DARPA to cycle out of administrative positions every few years. Was it too good to last?

Kahn began the Strategic Computing Initiative in 1983, a $1 billion program (about $2.3 billion in today’s money) to continue his work on advancing computer hardware design, and to advance research in artificial intelligence. Fossum was replaced by Robert Cooper that same year. Cooper took Kahn’s research goals and reimplemented them as an agency-wide program, giving every part of DARPA a piece of it. According to Waldrop, Kahn was disgusted by this, and took it as his cue to leave.

The era of revolutionary research in computing at the agency seems to have come to an end at this point.

Here’s a video talking about what else was going on at DARPA during the period I’ve just covered:

Lick’s vision of the future

He would happily sit for hours, spinning visions of graphical computing, digital libraries, on-line banking and E-commerce, software that would live on the network and move wherever it was needed, a mass migration of government, commerce, entertainment, and daily life into the on-line world–possibilities that were just mind-blowing in the 1970s. (TDM, p. 413)

Lick had long been a futurist, and a very reliable one. In a book published in 1979, “The Computer Age: A Twenty-Year View,” he looked ahead to the year 2000 and described what he could see happening, if he thought optimistically, with a nationwide digital network that he called “the Multinet.” The term “Internet” was not in wide use yet, though work on TCP/IP, what Lick called the “Kahn-Cerf internetworking protocol,” had been in progress for several years. The internet wouldn’t come into being for another four years.

“Waveguides, optical fibers, rooftop satellite antennae, and coaxial cables, provide abundant bandwidth and inexpensive digital transmission both locally and over long distances. Computer consoles with good graphic display and speech input and output have become almost as common as television sets.”

Great. But what would all those gadgets add up to, Lick wondered, other than a bigger pile of gadgets? Well, he said, if we continued to be optimists and assumed that all this technology was connected so that the bits flowed freely, then it might actually add up to an electronic commons open to all, as “the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.” Indeed, he went on, looking back from the imagined viewpoint of the year 2000, “[the electronic commons] has supplanted the postal system for letters, the dial-tone phone system for conversations and tele-conferences, stand-alone batch processing and time-sharing systems for computation, and most filing cabinets, microfilm repositories, document rooms and libraries for information storage and retrieval.”

The Multinet would permeate society, Lick wrote, thus achieving the old MIT dream of an information utility, as updated for the decentralized network age: “many people work at home, interacting with coworkers and clients through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its cable television and electronic funds transfer functions, and a few receive delivery of small items through adjacent pneumatic tube networks . . . Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants.” Thanks to ironclad guarantees of privacy and security, Lick added, the Multinet would likewise offer on-line banking, on-line stock-market trading, on-line tax payment–the works.

In short, Lick wrote, the Multinet would encompass essentially everything having to do with information. It would function as a network of networks that embraced every method of digital communication imaginable, from packet radio to fiber optics–and then bound them all together through the magic of the Kahn-Cerf internetworking protocol, or something very much like it.

Lick predicted its mode of operation would be “one featuring cooperation, sharing, meetings of minds across space and time in a context of responsive programs and readily available information.” The Multinet would be the worldwide embodiment of equality, community, and freedom.

If, that is, the Multinet ever came to be. (TDM, p. 413)

He tended to be pessimistic that this all would come true. With the world as it was, he thought a more tightly controlled scenario was more likely, one in which the Multinet did not get off the ground. He thought big technology companies would not get into networking, as it would invite government regulation. Communications companies like AT&T would see it as a threat to their business. Government, he thought, would not want to share information, and would rather use computer technology to keep proprietary files on people and corporations. He thought the only way his positive vision would come to pass was if hundreds of thousands, or millions, of people came to a consensus that an open Multinet was desirable. He felt this would require leadership from someone who shared that vision.

At this time there were packet switching network options offered by various technology companies, including IBM, Digital Equipment Corp. (DEC), and Xerox, but none of them offered openness. In fact, business customers wanted closed networks. They feared openness, due to possibilities of security leaks and industrial espionage. Things were not looking up for this “marketplace” vision of a future network, even in academia. Michael Dertouzos, who led the Laboratory of Computer Science (LCS) at MIT (formerly Project MAC), was very interested in Lick’s vision, but complained that his fellow academics in the program were not. In fact they were openly hostile to it. It felt too outlandish to them.

Microcomputers had taken off in 1975, with the introduction of the MITS Altair, created by electrical engineer Ed Roberts. (This was also the launching point for a small venture created by Bill Gates and Paul Allen, called “Micro Soft,” with their first product, a version of the Basic programming language for the Altair.) Lick bought an IBM PC in the early 1980s, “but it never had the resources to do what he wanted,” said his son, Tracy.

Yes, Lick knew, these talented little micros had been good enough to reinvent the computer in the public mind, which was no small thing. But so far, at least, they had shown people only the faintest hint of what was possible. Before his vision of a free and open information commons could be a reality, the computer would have to be reinvented several more times yet, becoming not just an instrument for individual empowerment but a communication device, an expressive medium, and, ultimately, a window into on-line cyberspace.

In short, the mass market would have to give the public something much closer to the system that had been created a decade before at Xerox PARC. (TDM, p. 437)

Personal computing

The major sources I used for this part of the story were Alan Kay’s retrospective on his days at Xerox, called “The Early History of Smalltalk” (which I will call “TEHS” hereafter), and “The Alto and Ethernet Software,” by Butler Lampson (which I will call “TAES”).

Parallel to the events I describe at DARPA came a modern notion of personal computing. The vision of a personal computer existed at DARPA, but as I’ll describe, the concept we would recognize today was developed solely in the private sector, using knowledge and research methods that had been developed previously through DARPA funding.

Alan Kay

Alan Kay, from Wikipedia

The idea of a computer that individuals could buy and own had been around since 1961. Wes Clark had that vision with the LINC computer he invented that year at MIT. Clark invited others to become a part of that vision. One of them was Bob Taylor. What got in the way of this idea becoming something that less technical people could use were the large and expensive components needed to create sufficiently powerful machines, and the vague sense developers had of just how ordinary people would actually use computers. Most of the aspects that make up personal computing were invented on larger machines, and were later miniaturized, watered down in sophistication, and incorporated into microcomputers from the mid-1970s into the 1990s and beyond.

Most people in our society who are familiar with personal computers think the idea began with Steve Jobs and Steve Wozniak. The more knowledgeable might chime in and give credit to Ed Roberts at MITS, and to Bill Gates. These people had their own ideas about what personal computers would be, and what they would represent to the world, but in my estimation, the idea of personal computing that we would recognize today really began with Alan Kay, a post-graduate student with a background in mathematics and molecular biology, who had enrolled in the IPTO’s computer science program at the University of Utah (which I talk a bit about in Part 2) in the late 1960s.

A good deal of credit has to be given to Doug Engelbart as well, who was doing research during the 1960s at the Stanford Research Institute, funded by ARPA/IPTO, NASA, and the Air Force, on how to improve group knowledge processes using computers. He did not pursue personal computing, but he was the one who came up with the idea for a point-and-click interface, combining graphics and text, using a device he and his team had invented called a “mouse.” He also created the first system that enabled linked documents, which foreshadowed the web we’ve known for the last 20 years, and enabled collaborative computing with teleconferencing. Describing his work this way does not do it justice, but it’ll suffice for this discussion.

Doug Engelbart, from ibiblio.com
Ivan Sutherland, from Wikipedia
Seymour Papert, from MIT

Flex’s “self-portrait,” from lurvely.com

Kay began developing his ideas about personal computing in 1968. He had started on his first proof-of-concept desktop machine, called Flex, a year earlier. He was aware of Moore’s Law (from Gordon Moore), a prediction that as time passed, more and more transistors would fit in the same size space of silicon. The implication of this is that more computing functionality could fit in a smaller space, which would thereby allow machines which filled up a room at the time to become smaller as time passed. As a graduate student, he imagined miniaturized technology that seemed unfathomable to other computer scientists of his day. For example, a hard drive as small as the crook of your finger. Back then, a hard drive was the size of a floor cabinet, or medium-sized refrigerator, and was typically used with mainframes that took up the space of a room.

Kay’s ideas about what personal computing could be were heavily influenced by Doug Engelbart, Ivan Sutherland (from his Sketchpad project), Tom Ellis and Gabriel Groner at RAND Corp. (from a system called GRAIL), and Seymour Papert, Wally Feurzeig, and Cynthia Solomon at BBN, with their work with children and Logo, among many others. (Source: TEHS)

Engelbart thought of computing as a “vehicle” for thinking about, and sharing information, and developing group knowledge processes. Kay got a profound sense of the personal computer’s place in the world from Papert’s work with children using Logo. He realized that personal computers could not just be a vehicle, but a new medium.

People can get confused about this concept. Kay wasn’t thinking that personal computers would allow people to use and manipulate text, images, audio, and video (movies and TV)–what most of us think of as “media”–for the sake of doing so. He wasn’t thinking that they would be a new way to store, transmit, and present old media, and the ideas they expressed most easily. Rather, they would allow people to explore a whole new category of ideas and expression that the other forms of media did not express as easily, if at all. It’s not that the old media couldn’t be part of the new, but they would be represented as models, as part of a knowledge system, and operate under a person’s control at whatever grain would facilitate what they wanted to understand.

Butler Lampson described the concept this way:

Kay was pursuing a different path to Licklider’s man-computer symbiosis: the computer’s ability to simulate or model any system, any possible world, whose behavior can be precisely defined. And he wanted his machine to be small, cheap, and easy for nonprofessionals to use. (TAES, p. 2)

Kay could see from Papert’s work that using computers as an interactive medium could enable children to understand subjects that would otherwise have been difficult, if not impossible to grasp at their age, particularly aspects of mathematics and science.

Xerox PARC

Xerox enters the story in 1970. They had a different set of priorities from Kay’s, and it’s interesting to note that even though this was true, they were willing to accommodate not only his priorities, but those of other researchers they brought in.

Xerox was concerned that as the photocopier market grew, competitors would come into the space, and Xerox didn’t want to put all their “eggs” in it. They thought computers would be a good way for them to diversify their product line. Their primary goal was to…

…develop the “architecture of information” and establish the technical foundation for electronic office systems that could become products in the 1980s. It seemed likely that copiers would no longer be a high-growth business by that time, and that electronics would begin to have a major effect on office systems, the firm’s major business. Xerox was a large and prosperous company with a strong commitment to basic research and a clear need for new technology in this area. (TAES, p. 2)

Xerox wanted to site their research facility near a community of computer research. They looked at a few places around the country, and ultimately decided on Palo Alto, CA, since it was near Berkeley and Stanford universities, which were centers of computer research. They wanted to bring in a research director who was familiar with the field, and would be respected by the research community. The right person for the job was not obvious. When they talked to computer researchers of the time, all paths eventually led back to Bob Taylor, who had been an ARPA/IPTO director in the late 1960s. The thing was he had no computer science research of his own under his belt. His background was in psychoacoustics, a field of psychology (though Taylor thinks of it as applied physics). What made him the right candidate in Xerox’s eyes was that everyone who was anyone in the field Xerox wanted to explore knew and respected him. So they invited Taylor in to assist in setting up their research group.  Thus was born the Xerox Palo Alto Research Center (PARC). It was totally funded by Xerox. Taylor was not officially given the job as PARC’s director, because of his lack of research background, but he was allowed to run the facility the same way that the IPTO had conducted computer research. You could say that PARC during the 1970s was “the ARPA way done privately.”

One of the research techniques used at ARPA and Xerox PARC was to anticipate the speed and memory capacities of future computers that would be in wide use. Engineers who were part of the research teams would either find hardware that fit these anticipated specifications, or would build their own, and then see what software they could develop on it that was significantly better in some capacity than the systems that were available in their present. As one can surmise, this was an expensive thing to do. In order to exceed the capacity of the machines that were in wide use, one could not think about what was most economical, but rather had to go for hardware that only a relative few were using, if anyone was using it at all, because it would be prohibitively expensive for most to acquire. Butler Lampson described this with Xerox PARC’s Alto research system:

The Alto system was affected not only by the ideas its builders had about what kind of system to build, but also by their ideas about how to do computer systems research. In particular, we thought that it is important to predict the evolution of hardware technology, and start working with a new kind of system five to ten years before it becomes feasible as a commercial product.

Our insistence on working with tomorrow’s hardware accounts for many of the differences between the Alto system and the early personal computers that were coming into existence at the same time. (TAES, p. 3)

(To get an idea of what Lampson was talking about in that last sentence, I encourage people to look at another post I wrote a while back, called “Triumph of the Nerds.”)

Taylor hired a bunch of people from the failing Berkeley Computer Corp., which was started by Butler Lampson, along with students that had joined him as part of Project Genie. Among the people Taylor brought in were Chuck Thacker, Charles Simonyi, and Peter Deutsch. I talk a little about Project Genie in Part 2. He also hired many people from ARPA’s IPTO projects, including Ed McCreight from Carnegie Mellon University, and some of the best talent from Engelbart’s NLS project at the Stanford Research Institute, such as Bill English. Jerry Elkind hired people from BBN, which had been an ARPA contractor, including Danny Bobrow, Warren Teitelman, and Bert Sutherland (Ivan Sutherland’s older brother). Another major “get” was hiring Allen Newell from CMU as a consultant. A couple of Newell’s students, Stuart Card and Tom Moran, came to PARC and pioneered the field we now know as Human-Computer Interaction.

As Larry Tesler put it, Xerox told the researchers, “Go create the new world. We don’t understand it. Here are people who have a lot of ideas, and tremendous talent.” Adele Goldberg said of PARC, “People came there specifically to work on 5-year programs that were their dreams.” (Source: Robert X. Cringely’s documentary, “Triumph of the Nerds”) In a recent interview, which I’ll refer to at the end of this post, she said that PARC invited researchers in to work on anything that they thought in 5 years could have an impact on the company.

Alan Kay came to work at PARC in 1970, started the Learning Research Group (LRG), and got them thinking about what would come to be known as “portable computers,” though Kay called them “KiddieKomps.” They also got to work on font technology.

In 1972 Kay committed his thoughts on personal computing to paper in a document called, “A Personal Computer for Children of All Ages.” In it he described a conceptual model he called a “Dynabook,” or “dynamic book.” I’ll quote from a blog post I wrote in 2006, called, “Great moments in modern computer history,” as it summarizes what I mean to get across about this:

Kay envisioned the Dynabook as a portable computer, 9″ x 12″ x 3/4″, about the size of a modern laptop, with its own battery, and would weigh less than 4 pounds. In fact he said, “The size should be no larger than a notebook.” He envisioned that it would use removable media for file storage (about 1 MB in size, he said), that it might have a keyboard, and that it would record and play audio files, in addition to displaying text. He said that if no physical keyboard came with the unit, a software keyboard could be brought up, and the screen could be made touch-sensitive so that the user could just type on the screen. … Oh, and he made a wild guess that it would cost no more than $500 to the consumer.

He said, “The owner will be able to maintain and edit his own files of text and programs, when and where he chooses.”

He envisioned that the Dynabook would be able to “dock” with a larger computer system at work. The user could download data, and recharge its battery while hooked up. He figured the transfer rate would be 300 Kbps.

Mock up of the Dynabook,
from history-computer.com

He imagined the Dynabook being connected to an “information utility,” like what was called at the time “the ARPA network,” which later came to be called the Internet. He predicted this would open up online access to schools and libraries of information, “stores” (a.k.a. e-commerce sites), and would bring “billboards” (a.k.a. web ads) to the user. I LOVE this quote: “One can imagine one of the first programs an owner will write is a filter to eliminate advertising!”

Another forward-looking concept he imagined is that it might have a flat-panel plasma display. He wasn’t sure if this would work, since it would draw a lot of power, but he thought it was worth trying. … He thought an LCD flat-panel screen was another good option to consider.

Kay thought it essential that the machine make it possible to use different fonts. He and fellow researchers had already done some experiments with font technology, and he showed some examples of their results in his paper.


Alan Kay’s conception of children using the Dynabook

He doesn’t elaborate on this, but he hints at a graphical interface for the device. He describes in spots how the user can create and save “dynamic graphics.” In a scenario he illustrates in the paper, two children are playing a game like Space War on the Dynabook, involving graphics animation, experimenting with concepts of gravity. This scenario lays down the concept of it being a learning machine, and one that’s easy enough for children to manipulate through a programming language.

He also imagined that Dynabooks would be able to communicate with each other wirelessly, peer-to-peer, so that groups of students would be able to easily work on projects together, without needing an external network.

Working with Dan Ingalls, an electrical engineer with a background in physics, Diana Merry, and colleagues, the LRG began development of a system called “Smalltalk” in 1972. This was a first effort to create the Dynabook in software. It would come to formalize another of Kay’s concepts, of virtual objects in a computer. The idea was that entities (visual and non-visual) could be fashioned by the person using a computer, which are as versatile as tools that are used in the real world, and can be used in any combination at any time to accomplish tasks that may not have been evident when they were created.

Computer programming was an important part of Kay’s concept of how people would interact with this medium, though he developed doubts about it. What he really wanted was some way that people could build models in the computer. Programming just seemed to be a good way to do it at the time.

Objects could be networked together via a concept called “message passing,” with the goal of having them work together for some purpose. As Smalltalk was developed over 8 years, this idea would come to include everything from the desktop interface, to windows, to text characters, to buttons, to menus, to “paint” brushes, to drawing tools, to icons, and more. His idea of overlapping windows came from his desire to allow people to work on several projects at the same time, allowing them to make the most of limited screen space. (source: TEHS)
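To make the idea of objects and message passing a bit more concrete, here is a rough, modern-day sketch in Python rather than Smalltalk syntax. The class and message names are mine, purely for illustration; the point is only that every object, whether it stands for a window, a text character, or a brush, responds to messages in a uniform way, and a sender doesn’t need to know anything about a receiver’s internals.

    # A toy illustration of message passing (not Smalltalk itself): each object
    # exposes a single "send" entry point and decides for itself how to respond.
    class TextCharacter:
        def __init__(self, ch):
            self.ch = ch

        def send(self, message, *args):
            if message == "draw":
                return f"drawing '{self.ch}' at {args[0]}"
            return f"'{self.ch}' ignores {message}"

    class Window:
        def __init__(self, title):
            self.title = title
            self.contents = []

        def send(self, message, *args):
            if message == "add":
                self.contents.append(args[0])
                return f"{self.title} now holds {len(self.contents)} objects"
            if message == "draw":
                # A window draws itself by passing the "draw" message along.
                return [obj.send("draw", (0, 0)) for obj in self.contents]
            return f"{self.title} ignores {message}"

    w = Window("Workspace")
    w.send("add", TextCharacter("A"))
    w.send("add", TextCharacter("B"))
    print(w.send("draw"))   # each contained object handles "draw" its own way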

“The best way to predict the future is to invent it”

The graphical interface he and his team developed for Smalltalk was motivated by educational research on children, which had surmised that they have a strong visual sense, and that they relate to manipulating visual objects better than explicitly manipulating symbols, as older interactive computer systems had insisted upon. A key phrase in the quote below is, “doing with images makes symbols,” that is, symbols in our own minds, and, if you will, in the “mind” of the computer. His concept of “doing with images” was much more expansive and varied than has typically been allowed on computers consumers have used.

All of the elements eventually used in the Smalltalk user interface were already to be found in the sixties–as different ways to access and invoke the functionality provided by an interactive system. The two major centers were Lincoln Labs [at MIT] and RAND Corp–both ARPA funded. The big shift that consolidated these ideas into a powerful theory and long-lived examples came because the LRG focus was on children. Hence we were thinking about learning as being one of the main effects we wanted to have happen. Early on, this led to a 90 degree rotation of the purpose of the user interface from “access to functionality” to “environment in which users learn by doing”. This new stance could now respond to the echos of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children’s curriculum to a “curriculum of the user interface.”

The particular aim of LRG was to find the equivalent of writing–that is learning and thinking by doing in a medium–our new “pocket universe”. For various reasons I had settled on “iconic programming” as the way to achieve this, drawing on the iconic representations used by many ARPA projects in the sixties. My friend Nicolas Negroponte, an architect, was extremely interested in embedding the new computer magic in familiar surroundings. I had quite a bit of theatrical experience in a past life, and remembered Coleridge’s adage that “people attend ‘bad theatre’ hoping to forget, people attend ‘good theatre’ aching to remember“. In other words, it is the ability to evoke the audience’s own intelligence and experiences that makes theatre work.

Putting all this together, we want an apparently free environment in which exploration causes desired sequences to happen (Montessori); one that allows kinesthetic, iconic, and symbolic learning–“doing with images makes symbols” (Piaget & Bruner); the user is never trapped in a mode (GRAIL); the magic is embedded in the familiar (Negroponte); and which acts as a magnifying mirror for the user’s own intelligence (Coleridge). (source: TEHS)

A demonstration of Smalltalk-80

Dan Ingalls, from Wikipedia
Diana Merry-Shapiro, from LinkedIn
Chuck Thacker, from the ACM
Butler Lampson, from MIT

Chuck Thacker, who has a background in physics and designing computer hardware, and a small team he assembled, created the first “Interim Dynabook,” as Kay called it, at PARC’s Computer Science Lab (CSL) in 1973. It was named “Bilbo.” It was not a handheld device, but more like the size of a desk, perhaps connected to a larger cabinet of electronics. It had a monitor with a bitmap display of 606 x 808 pixels (which gave it the profile of an 8-1/2″ x 11″ sheet of paper), and 128 kilobytes of memory. Smalltalk was brought over to it from its original “home,” a Data General Nova computer.

Thacker’s team created the first Alto models (named after Palo Alto) shortly thereafter. Butler Lampson, a computer scientist with a background in physics and electrical engineering, led a team which created the operating system for it. The Alto computer fit under a desk. It was the size of a small refrigerator. The keyboard, mouse, and display sat on top of the desk. It had the same display size and internal memory as “Bilbo,” ran at about 6 MHz, and had a removable hard drive that stored 2.5 MB per disk. (source: History of Computers)

As the years passed, CSL would develop bigger, more powerful machines, code-named Dolphin and Dorado.

Adele Goldberg, from PC Forum

Beginning in 1973, Adele Goldberg, who had a background in mathematics and information systems, and Alan Kay tried Smalltalk out on students from ages 12-15, who volunteered to take programming courses from them, and to come up with their own ideas for things to try out on it. Goldberg developed software design schema and curricular materials for these courses, and helped guide the education process as they got results.

They had some success with an approach developed by Goldberg where they had the students build more and more sophisticated models with graphical objects. They thought they were building the skill of students up to more sophisticated approaches to computing, and in some ways they were, though the influence these lessons were having on the students’ conception of computing was not as broad as they thought. Kay realized after teaching programming to some adult students that they could only get so far before they ran into a “literature” barrier. The same had been true with the teen and pre-teen students they had taught earlier, it turned out. From Kay’s description, the way I’d summarize the problem is that they required background knowledge in organizing their ideas, and they needed practice in doing this.

Kay said in retrospect that literature renders ideas. Any medium needs literature in order to be powerful. Literature’s purpose is to provide a body of ideas that can be discussed in the medium. (Source: TEHS) Without this literary background, the students were unable to write about, or discuss the more sophisticated ideas through programming code. No matter how good a tool or instrument is, what’s produced by the people using it can only be as good as the ideas they use in fashioning the product. Likewise, the quality of what is written in text or notation by an author or composer, or produced in sound by a musician, or imagery and sound for a movie, is only as good as a) the skill of the creators, b) the outlook they have on what they are expressing, and c) their knowledge, which they can apply to the effort.

Reconsidering the Dynabook

There came a “dividing point” at PARC in 1975. Kay met with other members of the LRG and discussed “starting over.” He could see that the developed ideas of Smalltalk were taking them away from the educational goals he set out to accomplish. Professional considerations had started to take hold with the group, though, and they saw potential with Smalltalk. The majority of the group wanted to stick with it. His argument for “starting over” with the educational project did not win out. He said of this point in time that while he disagreed with the decision of his colleagues, he held no ill will towards them for it. He knew them to be wonderful people. The sense I get from reading Kay’s account of this history is they saw potential perhaps in creating more sophisticated user environments that professionals would be interested in using. I infer this from the projects that PARC engaged in with Smalltalk thereafter.

Kay turned his focus to a new project, as if to pick up again his “KiddieKomps” idea. He designed a computer he called NoteTaker. Adele Goldberg said Kay’s purpose in doing this was to develop a proof of concept for the Dynabook’s hardware (see the interview with her at the end of this post). While Kay went off in this direction, Goldberg took over management of the Smalltalk project. The educational program at PARC faded away in 1976. (Source: TEHS)

The original concept for NoteTaker was a laptop design, with what Kay called a “tab mouse,” a physical control that was small, mounted on the computer, yet agile enough to allow a person using it to move a cursor around a graphical interface. This did not end up on the machine, but NoteTaker was made usable with a keyboard, mouse, and a touch screen, and it had stereo sound output. (Source: “Joining the Mac Group,” by Bruce Horn)

Once Smalltalk-76 was done (each version of Smalltalk was named by the year it was completed), Dan Ingalls and Ted Kaehler ported it to the NoteTaker, and by around 1978 Kay had created the first “luggable” portable computer. Kay recalls it using three Intel 8086 processors (though others remember it using two Motorola 68000 processors), had 256 kilobytes of memory, and was able to run on batteries. It wasn’t a laptop (it was too big and heavy), but it could fit on a desktop. At first glance, it looks similar to the Osborne 1, the first commercially available portable computer, and there’s a reason for that. The Osborne’s case design was based on NoteTaker.

To put it through its paces, a team from PARC took the NoteTaker on an airplane trip, “running an object-oriented system with a windowed interface at 35,000 feet.”

Kay lamented that there wasn’t enough corporate will to use their own know-how to create better hardware for it (Kay considered the Intel processors barely sufficient), much less turn it into a commercial product.

The NoteTaker, from the Computer History Museum

Looking back on the experience, Kay was dismayed to see that PARC had pushed aside his educational goals, and had co-opted Smalltalk as a purely professional’s tool. The technologies that could have made his original Dynabook vision a physical reality were coming into being at just this time, but there was no will at Xerox to make it into an actual product. (source: TEHS)

He has pursued development of his educational ideas ever since. He sees computers in the same way that a few in the Middle Ages saw the invention of the printing press, enabling new ways of interacting and thinking, to, as Licklider would have said, allow people to “think as no one has ever thought before.”

Developing the office of the future

From left to right: Larry Tesler, from Twitter.com; Timothy Mott, from startupgenome.co; Charles Simonyi

A separate project at PARC also got started in the early 1970s, called POLOS (the Parc On-Line Office System), in the System Science Lab (SSL). The goal there was to develop a networked computer office system.

Edit 5/31/2016: I had suggested in the way I wrote the following paragraph that Larry Tesler had invented cut, copy, and paste actions in digital text editing. Upon further review, I think that historical interpretation is wrong. Doug Engelbart had copy and paste (and probably a “cut” action) in NLS, and there may have been earlier incarnations of it.

One of the first projects in this effort was a word processor called Gypsy, designed and implemented by Larry Tesler and Timothy Mott, both computer scientists. (Source: TAES) The unique thing about Gypsy was that Tesler tried to make it “modeless.” Its behavior was like our modern-day word processors, where you always have a cursor on the screen, and all actions mean the same thing at all times. Wherever the cursor is, that’s where text is entered. He also invented drag selection/highlighting with a mouse, which was incorporated into a cut, copy, and paste process. This has been a familiar feature of working with digital text ever since.

Later, Charles Simonyi, an electrical engineer, and Butler Lampson began work on the first What You See Is What You Get (WYSIWYG) text processor, called Bravo. They developed their first version in 1974. It had its own graphical interface, allowed the use of fonts, and basic document elements, like italics and boldface, but it worked in “modes.” The person using it used the computer’s keyboard either to edit text, or to issue commands to manipulate text. The keyboard did something different depending on which mode was operating at any point in time. This was followed by BravoX, which was a “modeless” version. BravoX had a menu system, which allowed the use of a mouse for executing commands, making it more like what we’d recognize as a word processor today. (source: Wikipedia) Each team was trying out different capabilities of digital text, and trying to see how people worked with a text system most productively.
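As a rough illustration of the modal/modeless distinction (my own sketch, not Bravo’s or Gypsy’s actual design), here is what the two styles of keystroke handling can look like. In the modal editor, what a key does depends on which mode you’re in; in the modeless one, an ordinary key always inserts text, and commands are distinguished some other way, such as a modifier key.

    # Toy comparison of modal vs. modeless text entry (illustrative only).
    class ModalEditor:
        def __init__(self):
            self.mode = "command"   # starts out interpreting keys as commands
            self.text = ""

        def key(self, ch):
            if self.mode == "command":
                if ch == "i":               # 'i' switches to insert mode
                    self.mode = "insert"
                elif ch == "d":             # 'd' deletes the last character
                    self.text = self.text[:-1]
            else:                           # insert mode: keys become text
                if ch == "\x1b":            # Escape returns to command mode
                    self.mode = "command"
                else:
                    self.text += ch

    class ModelessEditor:
        def __init__(self):
            self.text = ""

        def key(self, ch, ctrl=False):
            if ctrl and ch == "d":          # commands need a modifier key...
                self.text = self.text[:-1]
            else:                           # ...so plain keys always insert
                self.text += ch

    m = ModalEditor()
    for ch in "idog":          # 'i' enters insert mode, then "dog" is typed
        m.key(ch)
    print(m.text)              # -> "dog"

    g = ModelessEditor()
    for ch in "idog":          # the same keystrokes simply insert "idog"
        g.key(ch)
    print(g.text)              # -> "idog"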

From left to right: Andrew Birrell, from Microsoft Research; Roy Levin, from LinkedIn; Roger Needham, from ACM SIGSOFT

A couple of graphical e-mail clients, named “Laurel” and “Hardy,” were developed by a team led by Doug Brotz. They allowed easy review and filing of electronic mail messages. (source: The Xerox “Star”: A Retrospective) From Lampson’s description, they appeared to be “e-mail terminals.” They didn’t store e-mail, or allow one to write it. They just provided an organized display for messages, and a means of telling a separate messaging system what you wanted to do with them, or that you wanted to create a new message to send. (Source: TAES) Andrew Birrell, Roy Levin, Roger Needham (who had a background in mathematics and philosophy), and Michael Schroeder, a computer scientist, developed Grapevine, a distributed service which the e-mail clients used to compose, receive, and transmit network messages. From the description, it appeared to work on a distributed peer-to-peer basis, with no central server controlling its services for an organization. Grapevine also provided authentication, file access control, and resource location services. Its purpose was to provide a way to transmit messages, provide network security, and find things like computers and printers on a network. It had its own name service by which clients could identify other systems. (source: Grapevine: An Exercise in Distributed Computing) To get a sense of the significance of this last point, the Arpanet did not have a domain name service, and the internet did not get its Domain Name System (DNS) until the mid-1980s.
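Since the name service is the part of Grapevine worth dwelling on here, below is a toy sketch of the basic idea: a registry that maps human-readable names to network addresses, so a client can ask for “the third-floor printer” instead of memorizing addresses. This is only an illustration of the concept, with names and structure I made up; Grapevine’s real design was far more elaborate, as the cited paper describes.

    # Toy name service: register resources by name, look them up by name.
    # Purely illustrative; Grapevine's actual protocol was far more sophisticated.
    class NameService:
        def __init__(self):
            self.registry = {}   # name -> (kind, address)

        def register(self, name, kind, address):
            self.registry[name] = (kind, address)

        def lookup(self, name):
            if name not in self.registry:
                raise KeyError(f"no entry for {name}")
            return self.registry[name]

        def find_all(self, kind):
            # e.g. "give me every printer you know about"
            return [n for n, (k, _) in self.registry.items() if k == kind]

    ns = NameService()
    ns.register("printer.3rd-floor", "printer", "10.0.0.42")
    ns.register("mail.parc", "mail-server", "10.0.0.7")

    print(ns.lookup("printer.3rd-floor"))   # ('printer', '10.0.0.42')
    print(ns.find_all("printer"))           # ['printer.3rd-floor']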

By 1975, PARC had a complete office system going: all of this, plus networked file systems, print servers (using Ethernet, created by Bob Metcalfe and Chuck Thacker at PARC), and laser printing (invented at PARC by Gary Starkweather, with software written by Butler Lampson)! A year later they had created a digital scanner.

You can see one of the e-mail clients being demonstrated, with WYSIWYG technology/laser printing, on the Alto, in this Xerox promotional video:

Clarence Ellis, from the ACM; Gary Nutt, from CU Boulder

Clarence Ellis and Gary Nutt, both computer scientists, developed OfficeTalk, a prototype office automation system, at PARC. It tracked “job” documents as they went from person to person in an organization. (source: “The Xerox ‘Star’: A Retrospective”) In addition, Ellis came up with the idea of clicking on a graphical image to start up an application, or to issue a command to a computer, rather than typing out words to do the same thing. This is something we see in all modern user interfaces. (source: Answers.com)

Why Xerox missed the boat

For this section I go back to Waldrop’s book as my main source.

One might ask: if the future of the PC was invented at Xerox, and it sounded an awful lot like what business IT systems use today, why didn’t Xerox own the PC industry? An even bigger question: why weren’t people back then using systems like the ones we use now? We use Windows, Mac, and Linux systems today, each with their own graphical interfaces, which descended from what PARC created. We’ve had word processing and networked LAN file servers since the 1980s, and later, e-mail over the internet, print servers, and later still, office “task” automation software. The future of personal computing as we would come to know it was sitting right in front of them in the mid-1970s. We can say that now, since all of these ideas have been made into products we use in the work world. If we had looked at this technology back then, we might’ve been clueless about its significance. The executives at Xerox were an example of this.

For several years management couldn’t understand the vision that was developing at PARC. The research culture there didn’t mesh that well with the executive culture, especially after the company’s founder, Joe Wilson, died in 1971. Wilson had been open to new ideas and venturing into new technologies. At the time of his death, the company was also struggling from its own growth. The demand for its copiers was outstripping its ability to produce them competently. So a new management team was brought in which understood how to manage big companies. These were “numbers men,” though, and their methodology didn’t allow them to understand how to translate the kind of deep R&D the company had been doing with computers into products, because of course they didn’t have metrics to make an accurate measurement of how much money they’d make with a technology that was totally unknown to them. The company had plenty of metrics about costs and revenue that came from copier development and sales, so it was no problem for them to make reliable estimates for a new copier model. The most profitable part of their business was copiers, so that’s where they put their product development resources. The old hands at Xerox who had set up PARC ran interference for their operation, to keep the “numbers men” from shutting down their work. The people at PARC kept trying to convince the higher-ups that what they had developed was useful for office productivity, but it was for naught. In a depressing anecdote, Waldrop wrote:

One top-level Xerox executive, after a day of being shown the wonders of PARC, had posed precisely one question to the researchers: “Where can I get some of those beanbag chairs?” (TDM, p. 408)

(A famous “feature” of PARC was their use of beanbag chairs for group discussions, called “dealer meetings.”)

Stephen Lukasik came to Xerox from DARPA in 1975. He set up what was called the Systems Development Division (SDD). Its purpose was to create new salable products out of the research that was going on inside Xerox. The problem for him was that Xerox’s executives were willing to give him the authority to set up the division, but they weren’t willing to listen to what he said was possible, nor were they willing to give him the funding to make it happen. Lukasik left Xerox in 1976. He said that though his time there was short, he valued the experience. He just didn’t see the point in staying longer. Nevertheless, SDD would become useful to Xerox. It just had to wait for the right management.

Xerox set up a big demo of its products in Boca Raton, FL, in 1977, which included the prototypes from PARC. All the company executives, their wives, family, and friends were invited. The people from PARC did the best bang-up job they could to make their technology look impressive for business computing.

Back at PARC, says [Gary] Starkweather, he and his colleagues saw this as their last, best chance. “The feeling was, ‘If they don’t get this, we don’t know what we can do.'” So, he says, with John Ellenby coordinating an all-out effort, “We stripped everything out of PARC down to the power cords and set it all up again in Boca. Computers, networks, printers–the whole thing! I built a laser printer that did color. Bill English had a word processor that did Japanese. We were going to show them space flight!”

They certainly tried. [At the event], Xerox executives and their families swarmed through the Grenada Rooms of the Boca Raton Hotel for a hands-on demonstration of WYSIWYG editing in Bravo, graphical programming in Smalltalk, E-mailing in Laurel, artistry in Paint and Draw–the works. “The idea was a mental slam-dunk!” says Starkweather. “And some people did see it.” The executives’ wives, for example–many of them former secretaries who knew all about carbon paper, Wite-Out, and having to retype whole pages to correct a single mistake–took one look at Bravo and got it. “The wives were so ecstatic they came over and kissed me,” remembers Jack Goldman. “They said, ‘Wonderful things you’re doing!’ Years later, I’d see them and they’d still remember Boca.”

Then there were the delegates from Fuji Xerox, the company’s Japanese partner: they were beside themselves over Bill English’s word processor. “Fuji clamored, ‘Give us this! We’ll manufacture it!'” recalls Goldman. In fact, he says, that was a near-universal reaction: “People from Europe, people from South America, marketing groups around the U.S.–everyone who went out of that conference was excited by what they had seen.”

Everyone, that is, except the copier executives, the real power brokers in Xerox. You couldn’t miss them; they were the ones standing in the background with the puzzled, So-What? look on their faces.

Indeed, one of the corporation’s purposes in calling the conference was to rally the troops for the coming era of ever-more-ferocious competition. In fairness, those executives in the background had to worry about defending the homeland now, not ten years from now. Or maybe they were simply too bound by the culture of the executive suite, vintage 1977. The xerographers lived in a world in which typing was women’s work and keyboards were for secretaries. It was a rare executive who would even deign to touch one. (TDM, p. 408)

There was a change in leadership in 1978 which recognized that the management style they had been using wasn’t working. The Japanese had entered the copier market, and were taking market share away from them. This is just what Joe Wilson had anticipated eight years earlier. Secondly, the new management recognized the vision at PARC. They wanted to create products from it. Work on the Xerox Star system began that year at SDD.

SDD had previously created a machine called Dandelion, Xerox’s most powerful computer to date. The Star would be the Dandelion translated into a business system. (Source: TAES)

Xerox and Apple

There’s been increasing awareness about the history of Xerox and Apple as time has passed, but there are some misconceptions about it that deserve to be cleared up.

Xerox and Apple developed a brief business relationship in 1979. Apple got ideas about how to create the user experience with their Lisa and Macintosh computers from Xerox PARC. The misperceptions are in how this happened. This is how the meeting between Xerox and Apple in 1979 has been perceived among those who are generally familiar with this history (from the 1999 TNT made-for-TV movie “Pirates of Silicon Valley”):

There’s some truth to this, but some of it is myth. One could almost say it was trumped up to bolster Steve Jobs’s image as a visionary. It’s a story that’s been propagated for many years, including by yours truly. So I mean to correct the record.

Much of Waldrop’s account of what happened between Xerox and Apple is a retelling of an account from Michael Hiltzik’s book, “Dealers of Lightning.” From what he says, the Apple team’s visit to PARC was not a total revelation about graphical user interfaces, but it was an inspiration for them to improve on what they had developed.

The idea of a graphical user interface on a computer was already out there. People at Apple knew of it prior to visiting Xerox. Xerox had been publishing information about the concept, and had been holding public demos of some of the graphical technology PARC had developed. So the idea of a graphical user interface was not a secret at all, as has been portrayed. However, this does not mean that every aspect of Xerox’s user interface development was made public, as I’ll discuss below.

Apple had been working on a computer called Lisa since 1978. It was a 16-bit machine that had a high-resolution bitmapped display. The Lisa team called it a “graphical computer.” Apple was approached by the Xerox Development Corporation (XDC) to take a look at what PARC had developed. The director of XDC felt that the technology needed to be licensed to start-ups, or else it would languish. Steve Jobs rebuffed the offer at first, but was convinced by Jef Raskin at Apple to form a team to go to PARC for a demonstration.

Apple and Xerox made a trade. Xerox bought a stake in Apple that was worth about $1 million, in exchange for Apple getting access to PARC’s technology. Waldrop says that Jobs and a team of Apple engineers made two visits to PARC in December 1979. According to an article from Stanford University, Jobs was not with them for the first visit. Waldrop notes that by this point Apple had already added a graphical user interface to the Lisa, but that it was clunky. By Larry Tesler’s account, there were more visits by the Apple team than Waldrop talks about. He has a different recollection of some of the details of the story as well. I include these different sources to give a “spectrum” of this history.

Going by Waldrop’s account, at their first visit in December ’79, they got a “standard” demo of the Alto, given by Adele Goldberg, that many other visitors to PARC had already seen. The group from Apple saw it being used with a mouse, the Bravo word processor, some drawing programs, etc., and then they left, apparently satisfied with what they had seen. It should be noted that this demo did not show the Xerox “desktop interface” in the sense of what people have come to know on personal computers and laptops. Each program they saw was a separate entity, which took over the entire operation of the machine, using the Alto’s graphical capabilities to show graphics and text. The Apple team did not see the desktop metaphor, which was in Smalltalk, and which was being developed for the Xerox Star. Goldberg considered Smalltalk proprietary. Jobs and the team eventually realized that they hadn’t really seen “the good stuff.”

The Apple team came back to PARC, unannounced. This is the visit that’s usually talked about in Apple-PARC lore, and is portrayed in the Pirates of Silicon Valley clip above. Jobs demanded to see the Smalltalk system *NOW*. A heated argument ensued between PARC and Xerox headquarters over this. Xerox’s executives ordered the PARC team to demonstrate Smalltalk to the Apple team, citing the partnership with Apple brokered by XDC. Bob Taylor was out of town at the time. He later said that he had no respect for Steve Jobs, and if he had been there, he would’ve kicked the Apple team out of the building, no if’s, and’s, or but’s! He figured Xerox would’ve fired him for it, but that would’ve been fine with him.

This time the people from Apple were prepared. They asked very detailed technical questions, and they got to see what Smalltalk was capable of. They saw educational software written by Goldberg, programming tools written by Larry Tesler, and animation software written by Diana Merry that combined graphics with text in a single document. They got to see multiple tasks on the screen at the same time with overlapping windows, in a “desktop” metaphor. Jobs was beside himself. He exploded, “Why hasn’t this company brought this to market?! What’s going on here? I don’t get it!” This was when Jobs had his epiphany about the graphical user interface, though Waldrop doesn’t delve into what that was. The breakthrough for him may have been the desktop metaphor, how it allowed multiple, different views of information, and multiple tasks to be worked on at the same time, along with everything else he’d seen. The way Waldrop portrays it is that Jobs realized that using a computer should be a fulfilling and fun experience for the person using it.

According to Hiltzik, the partnership between Xerox and Apple, of which the Apple stock trade had been a part, quickly fell apart after this, due to a culture clash. The Lisa’s chief programmer, Bill Atkinson, had to go off of what he remembered seeing at PARC, since he no longer had access to their detailed technical information.

So the idea that Apple (and Microsoft for that matter) “stole” stuff from Xerox is not accurate, though there was trepidation at PARC that Apple would steal “the kitchen sink.” While the visit gave the Lisa team ideas about how to make computer interaction better, and what the potential of the GUI was, Hiltzik said it wasn’t that big of an influence on the Lisa, or the Macintosh, in terms of their overall system design.

What this account means is that the visits to PARC were only tangentially influential on Apple’s version of the graphical user interface. The Apple team did not go in ignorant of the concept, and the visits did not give them all of the ideas they needed to create one. They just helped make their idea better.

The microcomputer “revolution”

The people at PARC were aware of the nascent microcomputer phenomenon that was occurring under their noses. Some of them went to the Homebrew Computer Club meetings, where Steve Jobs and Steve Wozniak used to hang out. They read the upstart computer press that was raving about what Bill Gates and Steve Jobs were doing with their new companies, Microsoft and Apple Computer. According to Waldrop, it galled them that this upstart industry was getting attention and adoration they felt it didn’t deserve.

“It had never occurred to us that people would buy crap,” declares Alan Kay, who considered the hobbyists in their garages down the hill to be very bright and very creative ignoramuses–undisciplined kids who didn’t read and didn’t have a clue about what had already been done. They were successful only because their customers were just as unsophisticated. “What none of us was thinking was that there would be millions of people out there who would be perfectly happy with the McDonald’s hamburger approach.” (TDM, p. 437)

Steve Jobs was somewhat of an exception. At least he got a hint of “what had already been done.” It took some prodding, but once he got it, he paid attention. Still, he didn’t fully understand the significance of the research that went into what he saw. For example, Jobs was very hostile to the idea of computer networking at the time, because he thought it would deprive the personal computer user of their autonomy. Freedom from dependency on larger computer systems was a notion he held to ideologically. When he railed against IBM as “big brother” it wasn’t just for show. If only he had been aware of Licklider’s vision for the internet (which was published at the time), the notion of a distributed network with independent units, and the DARPA work that was creating it, perhaps he would’ve cottoned to it the way he had the graphical user interface. We’ll never know. He came to understand the importance of the network features that had been developed at PARC about six years later, when he left Apple and started up NeXT.

There’s a famous quote from Jobs about his visits to PARC that illustrates what I’m talking about, from the PBS mini-series, “Triumph of the Nerds” (I wrote a post about this mini-series here):

They showed me, really, three things, but I was so blinded by the first one that I didn’t really ”see” the other two. One of the things they showed me was object-oriented programming. They showed me that, but I didn’t even “see” that. The other one they showed me was really a networked computer system. They had over 100 Alto computers all networked, using e-mail, etc., etc. I didn’t even “see” that. I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I had ever seen in my life. Now, remember it was very flawed. What we saw was incomplete. They had done a bunch of things wrong, but we didn’t know that at the time. Still, though, the germ of the idea was there, and they had done it very well. And within ten minutes it was obvious to me that all computers would work like this, someday.

Too much, too late for Xerox

Xerox released the Star in 1981 as the 8010 Information System.

It had a Smalltalk-like graphical user interface, anywhere from 384 kilobytes of memory up to 1.5 MB, an 8″ floppy drive, a 10 to 40 MB hard disk, a monitor measuring 17″ diagonally, a mouse, a bevy of system features, Ethernet, and laser printing. (source: The Xerox “Star”: A Retrospective) The 8010 cost $16,500 per unit, though a customer couldn’t purchase just the computer. It was designed to be an integrated system, with networking and laser printing included. A minimal installation cost $100,000 (about $252,438 in today’s money). Xerox was going for the Fortune 500, which could afford expensive, large-scale systems. Even though the designers thought of it as a “personal computer” system, the 8010 was marketed under the old mainframe business model.

The designers at SDD put the kitchen sink into it. It had every cool thing they could think of. The system was designed so that no third-party software could be installed on it at all. All of the hardware and software came from Xerox, and had to be installed by Xerox employees. This was also par for the course with typical mainframe setups.

In the Star operating system, all of the software was loaded into memory at boot-up, and kept memory-resident (very much like Smalltalk). Users didn’t worry about what applications to run. All they had to focus on was their documents, since all documents and data were implicitly linked to the appropriate software. It presented an object-oriented approach to information. It didn’t say to you, “You’re working with a word processor.” Instead, you worked with your document. The system software just accommodated the document by surreptitiously activating the appropriate part of the system for its manipulation, and presenting the appropriate interface.
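Here’s a minimal sketch in Python of that document-centric idea, under my own assumptions. It is not the Star’s actual software, just an illustration of how a system can map a document to an already-resident editor so the person using it never has to think about launching an application.

```python
# Hypothetical illustration of a document-centric system: every kind of
# document is linked to an editor that is already resident in memory, so
# "opening a document" implicitly brings up the right tool and interface.
from dataclasses import dataclass

@dataclass
class Document:
    name: str
    kind: str          # e.g. "text", "drawing", "spreadsheet"

RESIDENT_EDITORS = {   # nothing is "launched"; these are always available
    "text":    lambda doc: print(f"Text editing interface for '{doc.name}'"),
    "drawing": lambda doc: print(f"Drawing tools for '{doc.name}'"),
}

def open_document(doc: Document) -> None:
    """The user acts on the document; the system supplies the interface."""
    editor = RESIDENT_EDITORS.get(doc.kind)
    if editor is None:
        print(f"No resident editor handles '{doc.kind}' documents")
    else:
        editor(doc)

open_document(Document("Quarterly report", "text"))
open_document(Document("Org chart", "drawing"))
```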

Since the Xerox brass recognized they knew nothing about computers, they turned the Star’s design totally over to SDD. There appeared to have been no input from marketing. Even so, as Steve Jobs said of the executives, “They were copier-heads. They just had no clue about a computer, what it could do.” It’s unclear whether input from marketing would’ve helped with product development. I have a feeling “the innovator’s dilemma” applies here.

The designers at SDD were also told that the cost of the product was no object, since their target market was used to paying hundreds of thousands of dollars for large-scale systems. The Xerox executives miscalculated on this point. While it’s true that their target market had no problem paying the system’s price tag, personal computing was a new, untested concept to them, and they were wary about risking that much money on it. Xerox didn’t have a low-risk, low-cost entry configuration to offer. It was all or nothing. Microcomputers were a much easier sell in this environment, because their individual cost, by comparison, went unnoticed in corporate budgets. Corporate managers were able to sneak them in, buying them with petty cash, even though IT managers (who believed wholeheartedly in mainframes) tried their darnedest to keep them out.

The microcomputer market, which just about everyone at Xerox saw as a joke, was eating their lunch. Dan Bricklin and Bob Frankston of Software Arts (both alumni of the Project MAC/Multics project) had come out with VisiCalc, the world’s first commercial spreadsheet software, in 1979, and it was only available for micros. Xerox didn’t have an equivalent on the 8010 when it was released. They had put all of their “eggs” into word processing, databases, and e-mail. These things were needed in business, but it was the same thing the researchers at PARC had run into when they tried to show the Alto to the Xerox brass in years past: People in the corporate hierarchy saw themselves as having certain roles, and they thought they needed different tools than what Xerox was offering. Word processing was what stenographers needed, in the minds of business customers. Managers didn’t want it. They wanted spreadsheets, which were the “killer applications” of the era. They would buy a microcomputer just so they could run a spreadsheet on it.

The designers at SDD didn’t understand their target market. The philosophy at PARC was “eat your own dog food” (though I imagine they used a different term for it). From what I’ve heard, listening to people who were part of the IPTO in the 1960s, it was Doug Engelbart who invented this development concept. The problem for Xerox was that this philosophy applied to product development as well as to overall quality control. From the beginning, they wanted to use all of the technology they developed, internally, with the idea that if they found any deficiencies, there would be no running away from them. It would motivate them to make their systems great.

Thacker and Lampson mentioned in a retrospective, which I refer to at the end of this post, that someone at PARC had come up with a spreadsheet for the Alto, but that nobody there wanted to use it. They were not accountants. Xerox eventually released a spreadsheet for the 8010, once they saw the demand for it, but the writing was already on the wall by then.

It’s hard for me to say whether this was intentional, but the net effect of the way SDD designed the Star was an implicit assumption that customers in their target market would want the capabilities the designers thought were important. Their focus was on building a complete, integrated system, the likes of which no one had ever seen. The problem was that nothing in their strategy accounted for the needs and perceptions business customers would bring to it. They may have assumed that customers would be so impressed by the innovativeness of the system that it would sell itself, and that people would adjust their roles to it in a McLuhan-esque “the medium is the message” sort of way.

The researchers at PARC recognized that producing a microcomputer had always been an option. In 1979, Bob Belleville suggested going with a 16-bit microcomputer design, maybe using the Motorola 68000, or the Intel 8086 processor, instead of going forward with the Star. He built a prototype and showed that it could work, but the team working on the Star didn’t have the patience for it. They would’ve had to throw out everything they had done, hardware and software, and start over. Secondly, they wouldn’t have been able to do as much with it as they were able to do with the approach they had been pursuing. Thacker and Lampson saw as well that building a microcomputer would be more expensive than going ahead with their approach. It was, however, a fateful decision, because the opposite would be true two years later, when the 8010 Information System was released.

Lampson said, ironically, that this time, “the problem wasn’t a shortage of vision at headquarters. If anything, it was an excess of vision at PARC.”

When Apple released their Lisa computer in 1983, Xerox realized that they had missed their chance. The future belonged to IBM, Apple, and Microsoft.

Dispersion

From about 1980 on, people left PARC to join other companies. They were bleeding talent. Taylor was seen as part of the problem. He had an overriding vision, and it was his way, or the highway, so others have said. He understood full well where computing was going. He was just ahead of his time. He could be very supportive, if you agreed with his vision, but he had utter contempt for any ideas he didn’t approve of, and he would “take out the flamethrower” if you opposed him. This sounds a lot like what people once complained about with Steve Jobs, come to think of it. The director of PARC kindly asked Taylor to change his attitude. He took it as an insult, and left in frustration in 1983, taking some of PARC’s best engineers with him.

From looking at the story, it appears the failure of the Star project was “the last straw” for Xerox’s foray into computer research. After Taylor left, the era of innovative computer research ended at PARC.

The visions at Xerox were grandiose, and were too much, too late for the market. I don’t mean to take anything away from what they did by saying this. What they had was pretty good. It represented the future, but it was too far ahead of where the business computing market was. Competitors had grabbed the attention of customers, and they preferred what the competition was offering.

The PARC researchers got it both ways. When they had the chance to develop products in 1975, ahead of the explosion in the microcomputer market, their vision wasn’t recognized by the company that housed them. By the time it was recognized, the market had changed such that much less powerful machines, backed by more innovative business models, won out.

Even though Alan Kay brought the idea of a small, handheld computer to PARC, something that ordinary people would love to use, he valued computing power over the size of the machine. As he would later say about that period, the idea of the Dynabook was more of a service metaphor. The size and form of the hardware was incidental to the concept. (source: An Interview with Computing Pioneer Alan Kay) Thacker and Lampson, the designers of the 8010, shared that value as well. By the late 1970s the market was thinking “small is beautiful,” and they were willing to tolerate clunky, low-powered machines, because their economies of scale met immediate needs, and they created less friction from corporate politics.

Apple and Microsoft used ideas developed at PARC to further develop the personal computer market, and so watered down pieces of that vision got out to customers over about 20 years. Today we live in a world that resembles what Bob Taylor envisioned, with individual computers, networking, and digital printing, using graphical systems.

PARC’s legacy

The outside world would come to know the desktop graphical user interface metaphor because of Smalltalk, though the metaphor was repurposed away from Alan Kay’s powerful ideas about a modeling system, into a system that made it easy to run applications and emulated the integration of older media into virtual paper documents.

To me, the most interesting contributor to the desktop interface was Diana Merry. She was originally hired at PARC as a secretary. She just happened to understand Alan Kay’s goals, and so she was invited into the LRG. She and Kay created the first implementation of overlapping windows, in Smalltalk. She also wrote a lot of the basic software Smalltalk used to display and animate graphics. (Sources: “The Mouse and the Desktop — Designing Interactions,” by Bill Moggridge, and TEHS)

The Paint, Draw, and Write applications that appeared on the Apple Lisa and the first Macintosh systems were inspired by similar works that had been created years earlier at PARC.

Microsoft Windows and Microsoft Word benefited from the research that was done at PARC, though some inspiration came from the Macintosh as well. The look and feel of the first versions of Windows owed more to the X Window System on Unix. Where Microsoft copied Apple was in how people interacted with Windows (icons and menus), and in the application suite that came with the system. (source: The Secret Origin of Windows)

Software developers who have been working on apps for Apple products in the most recent generations of systems may be interested to know that the Objective-C language they’ve used was first created by a company called StepStone in the early 1980s. The design of the language and its runtime were somewhat influenced by the Smalltalk system.

Several companies licensed a version of Smalltalk, called Smalltalk-80, from Xerox during the 1980s. Among them were Tektronix, Hewlett-Packard, Digital Equipment Corp., and Apple. (source: “Smalltalk-80: Bits of History, Words of Advice”) Apple got a version running on the Macintosh XL in 1985. (source: The Long View) Apple’s version of Smalltalk was updated in the mid-1990s by Alan Kay and some of the original Learning Research Group gang from Xerox. They got Apple’s permission to release it into the public domain, and it has since been ported to many platforms. Professional developers call it by its new name, “Squeak.” An educational version also exists, maintained by the Squeakland Foundation, which goes by the name “Etoys.”

Charles Simonyi joined Microsoft. He brought his knowledge from developing Bravo with him, and it went into creating Microsoft Word. He became the lead developer on their Multiplan and Excel spreadsheet software.

Where are they now?

(Edit 4/15/2017: As events unfold, I will be updating this section.)

Stephen Lukasik became a Chief Scientist at the FCC from 1979-1982. He is a member of the International Institute for Strategic Studies, the American Physical Society, and the American Association for the Advancement of Science. He is a founder of The Information Society journal, and has served on the Boards of Trustees of Harvey Mudd College and Stevens Institute of Technology. (Source: Georgia Tech)

Larry Roberts was chief executive at Telenet until 1980. Telenet was sold to GTE in 1979 and subsequently became the data division of Sprint. In 1983 he became the CEO of NetExpress, an Asynchronous Transfer Mode (ATM) equipment company. Roberts then became president of ATM Systems from 1993 to 1998. He then went back to packet networking, founding Caspian Networks, which focused on IP flow management (IP as in “Internet Protocol”), until 2004. He founded Anagran in an effort to do what Caspian did, but more efficiently. (Source: Larry Roberts’s home page)

J.C.R. Licklider – In the late 1970s Lick visited Xerox PARC regularly. He served on the Committee on Government Relations at the Association for Computing Machinery (ACM), and as deputy chairman of the Social Security Administration’s Data Management System. He spent a year in 1978 on a task force commissioned by the Carter Administration, which examined the government’s data-processing needs. He served as president of the Boston Computer Society. He was an investor and advisor to a company called Infocom in 1979, which was founded by eight of his former students from his Dynamic Modeling group at MIT. He spent part of his time working at VisiCorp. He continued to work at MIT until he retired in 1985. He died on June 26, 1990.

Ed Feigenbaum – In 1984 he became a Fellow at the American College of Medical Informatics. From 1994 to 1997 he served as Chief Scientist of the U.S. Air Force. He founded the Knowledge Systems Laboratory at Stanford University, and is now professor emeritus at Stanford. He was co-founder of several start-ups, such as IntelliCorp, Teknowledge, and Design Power Inc. He has served on the National Science Foundation Computer Science Advisory Board, on the National Research Council’s Computer Science and Technology Board, and as a member of the Board of Regents of the National Library of Medicine. He is a Fellow of the Association for the Advancement of Artificial Intelligence, the American Institute of Medical and Biological Engineering, and the American Association for the Advancement of Science. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences. (Source: Feigenbaum’s Turing Award citation at the ACM)

George Heilmeier became vice president at Texas Instruments in 1977. In 1983 he was promoted to Senior Vice President and Chief Technical Officer. In that role, he was responsible for all TI research, development, and engineering activities. From 1991-1996 he also served as president and CEO of Bellcore (now Telcordia), ultimately overseeing its sale to Science Applications International Corporation (SAIC). He served as the company’s chairman and CEO from 1996-1997, and afterwards as its chairman emeritus. He serves on the board of trustees of Fidelity Investments and of Teletech Holdings, and the Board of Overseers of the School of Engineering and Applied Science of the University of Pennsylvania. He is a member of the National Academy of Engineering, the Defense Science Board, and is Chairman of the Technical Advisory Board of Southern Methodist University. (sources: Wikipedia, and IEEE Global History Network)

It should be noted that Heilmeier is credited with being the inventor of the Liquid Crystal Display (LCD), a technology Alan Kay hoped to use for his Dynabook in the 1970s, and which became the basis for the digital display most of us use today.

Alan Kay left PARC on sabbatical in 1980, and never came back. He came to Atari in 1981, as Chief Scientist, doing research on interactive computing (you can see some samples of it here). He then joined Apple as a research Fellow in 1984, where he worked on improving education in conjunction with technology. He joined Disney as a Fellow and Imagineer from 1996 to 2001. He founded Viewpoints Research Institute in 2001. He then became a research fellow at Hewlett-Packard from 2002 to 2005. During that time he was a Visiting Professor at Kyoto University, an adjunct professor of Computer Science at MIT, and was involved with the development of the XO Laptop, developed by the One Laptop Per Child program at MIT. Today he is an adjunct professor of Computer Science at UCLA, and he continues his work at Viewpoints. He is also on the advisory board of TTI/Vanguard. Since 2006 Viewpoints has been working on a project, sponsored by the National Science Foundation, to reinvent personal computing. (sources: The New York Times; Chap. 12 from Howard Rheingold’s “Tools For Thought”; Wikipedia; bio. on Alan Kay at Answers.com; VPRI: Inventing Fundamental New Computing Technologies)

Bob Taylor went on to found the Systems Research Center (SRC) at Digital Equipment Corp. (DEC) in 1984. He retired from DEC in 1996. He died on April 13, 2017.

Butler Lampson also left PARC with Taylor, and joined him at Digital Equipment Corp. He became a Fellow of the ACM in 1992. He now works at Microsoft Research, and is an adjunct professor at MIT. He became a Fellow at the Computer History Museum in 2008.

Dan Ingalls left PARC to work at Apple in 1984. Beginning in 1987 he helped run the Homestead Hotel, a family business, until 1993. He stayed on with Apple until 1996. From 1996 to 2001 he was Principal Staff Director at Disney Imagineering. He worked as a consultant for Hewlett-Packard and at Viewpoints Research Institute from 2001 to 2005. He joined Sun Labs in 2005, where he developed his newest project, called Lively Kernel. He left Sun in 2010 to become a Fellow at SAP, where he continues to work today. He continues development of his Lively Kernel Project at the Hasso Plattner Institute. (Sources: Dan Ingalls’s Linkedin page, and his Wikipedia page)

Diana Merry – Since leaving Xerox in 1986, she has continued her work in Smalltalk with various employers. You can see her complete work history at her LinkedIn page.

Chuck Thacker left PARC the same time Bob Taylor did, and joined DEC as a founder of its Systems Research Center. He then joined Microsoft in 1997 to help found Microsoft Research in Cambridge, UK. He returned to the U.S., and developed the technology which was used in Microsoft’s Tablet PC. He continued working at Microsoft Research in Silicon Valley on computer architecture for many years. He died on June 12, 2017. (source: Arstechnica)

Adele Goldberg served as president of the Association for Computing Machinery (ACM) from 1984 to 1986, and continued to have a long association with the organization, taking on various roles. She continued on at PARC until 1987. She wanted to make sure that Smalltalk technology got out to a wider audience, so she worked out a technology exchange agreement with Xerox in the late 1980s and founded ParcPlace Systems, which commercialized a version of Smalltalk. She served as CEO and chairwoman of ParcPlace until 1995, when the company merged with Digitalk, another Smalltalk vendor, to become ParcPlace-Digitalk. In 1997 the company changed its name to ObjectShare. In 1999 ObjectShare was sold to Cincom, which has continued to develop and sell a Smalltalk development suite. Goldberg was inducted as a Fellow of the ACM in 1994. She co-founded Neometron in 1999, and now also works at Bullitics. She is a board member of Cognito Learning Media. She has continued her interest in education, formulating computer science courses at community colleges in the U.S. and abroad.

Charles Simonyi went to work at Microsoft in 1981. He led their development efforts to create Word, Multiplan, and Excel. He left Microsoft in 2002 to found Intentional Software, where he’s been at work on what I’d call his “Domain-Oriented Development” software and techniques.

Larry Tesler went to work at Apple in 1980, becoming Vice President of the Advanced Technology Group, and Chief Scientist. He worked on the team that developed the Lisa computer. In 1990 he led the effort to develop the Apple Newton, one of the first of what we’d recognize as a personal digital assistant. Tesler left Apple in 1997 to co-found a company called Stagecast Software. In 2001 he joined Amazon.com as its Vice President of Shopping Experience. In 2005 he joined Yahoo! as Vice President of its User Experience and Design group. In 2008 he went to work for 23andMe, a personal genetics information company. Since 2009 he’s worked as an independent consultant.

Timothy Mott left Xerox PARC to co-found Electronic Arts in 1982. In 1990 he became Director at Electronic Arts, and co-founded Macromedia, staying with them until 1994. In 1995 he co-founded Audible.com, and stayed with them until 1998. He stayed on with Electronic Arts until 2007. You can see his complete work profiles at Bloomberg BusinessWeek and CrunchBase.

Doug Brotz – It’s been difficult finding much information on him. All I have is that he joined Adobe and was a co-developer of the PostScript laser printer control language.

Andrew Birrell – He left Xerox PARC in 1984 to join the Systems Research Center at DEC, where he worked on Network Objects, a distributed object system, Virtual Paper, a system for easy online document reading, and the Personal Jukebox, the first multi-gigabyte portable audio player. DEC was acquired by Compaq in 1998. He stayed with Compaq until 2001, when he went to Microsoft Research. He retired in 2014. He died on December 7, 2016. (Source: the ACM)

Roy Levin became a senior researcher at the DEC Systems Research Center in 1984. In 1996 he became its director. In 2001 he co-founded Microsoft Research in Silicon Valley, became a Distinguished Engineer, and its Managing Director. He continues work there today.

Roger Needham worked as a consultant at Xerox PARC from 1977-1984. He then worked at DEC’s Systems Research Center from 1984-1997, and at Hitachi Advanced Research Laboratory from 1994-1997. He became a Fellow at the Royal Academy of Engineering in 1993, and of the ACM in 1994. He was a Fellow at Wolfson College, in Cambridge, UK, from 1966-2002. He joined Microsoft Research in 1997, and founded their European Research Labs. He was a longtime member of the International Association for Cryptographic Research, the IEEE Computer Society Technical Committee on Security and Privacy, and the University Grants Committee, an advisory committee to the British government. He died on March 1, 2003. (source: Wikipedia)

Michael Schroeder – I haven’t found detailed information on Schroeder, except to say that as a professor at MIT he worked on the Multics project (I covered this project in Part 2 of this series), before coming to Xerox PARC, and that after PARC he worked at DEC’s Systems Research Center. He then came to Microsoft Research in 2001, where he continues to work today. He became a Fellow of the ACM in 2004.

Clarence Ellis – After leaving Xerox PARC, he became head of the Groupware Research Program at the Microelectronics Computer Consortium in Austin, TX. He also worked at Los Alamos Labs, and Argonne National Laboratory. He held academic positions at Stanford, the University of Texas, MIT, and the Stevens Institute of Technology. In 1991 he became the chief architect of the FlowPath workflow product at Bull S.A. In 1992 he came to the University of Colorado at Boulder as a professor of computer science, and, with Gary Nutt, formed the Collaborative Technology Research Group (CTRG) to work on project workflow systems. He became professor emeritus of computer science at CU in 2010. He was on the editorial board of various journals. He was a member of the National Science Foundation (NSF) Computer Science Advisory Board, the University of Singapore ISS International Advisory Board, and the NSF Computer Science Education Committee. He died on May 17, 2014. (sources: Computer Scientists of the African Diaspora; CTRG Groupware and Workflow Research; the Daily Camera)

A note I found in Ellis’s bio. also says that he was the first African American to receive a Ph.D. in computer science, in 1969, from the University of Illinois at Urbana-Champaign. He did his post-graduate work developing the Illiac IV supercomputer, an ARPA/IPTO project, and one of the first massively parallel computer systems.

Gary Nutt worked at Xerox PARC from 1978-1980. He worked on collaboration systems at Bell Labs in Denver, CO. from 1980-81. He took a couple executive roles at technology firms in Boulder, CO. from 1981-1986. He returned to the University of Colorado at Boulder in 1986 as a CS professor (he was previously an associate CS professor at CU Boulder from 1972-1978). While on sabbatical in 1993, he worked for Group Bull in Paris, France, developing collaboration products. In 2000 he worked as VP of Engineering for Bookface.com, managing their intellectual property. He went to Inktomi in 2001 to work on content and media distribution over the internet. He also advised on managing their intellectual property. He retired from CU Boulder in 2010, and is now professor emeritus. You can see his work history in more detail at his web page, which I’ve linked to here.

Steve Jobs – I wrote about Jobs’s work in a separate post. He died on October 5, 2011.

Bob Belleville – I couldn’t find much on him. What I have is that he joined Apple in 1981, becoming the chief engineer on the Lisa, and later the Macintosh project. He later joined Silicon Graphics’s R&D Dept. (Sources: Good-bye Woz and Jobs; Making the Macintosh)

You can watch a retrospective from 2001 given by Chuck Thacker and Butler Lampson on their days at PARC, and what they did, here, if you’re interested. It gets pretty technical, and is an hour and 20 minutes long.

You can see a 2010 interview with Adele Goldberg at the Computer History Museum, where she reviews her academic, educational, and business career here. It’s about an hour and 30 minutes.

Here’s a presentation given by the University of Texas at Austin in 2010, with Mitchell Waldrop, the author of “The Dream Machine,” and Michael Hiltzik, the author of “Dealers of Lightning,” along with an interview with Bob Taylor. It’s a nice bookend to the history I’ve discussed here.

Epilogue

The closing chapter in Waldrop’s book is called “Lick’s Kids.” It talks about the people who were mentored by Licklider, either through ARPA or at MIT, and who went out into the world to bring their ideas to life. It also talks about his waning years. He lived to see “the wheel reinvented” with microcomputers. The companies that were going gangbusters with them were repeating many of the lessons he and his researchers had learned in the 1960s, working at ARPA. The implication is that if they had merely taken the time to look at what had already been learned 20 years earlier, they would’ve avoided the same mistakes. He also saw the first glimmers of his vision of the Multinet come into being with the internet.

When reflecting on his life’s work, Lick was humble. He didn’t give himself much credit for creating our digital world. He thought of himself as just happening to be at the right place at the right time while some very bright people did the real work. But those who were mentored, and funded by him gave him a great deal of credit. They said if it wasn’t for him, their ideas would not have gotten off the ground, and often he was the only one who could see promise in them. He was indeed in the right place at the right time, but what was important was that he was in the right position to give them the support they needed to bring their dreams into reality.

I’ll talk more about Bob Kahn, and the development of the internet, in Part 4.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I came upon some videos from a vintage computer collector named Rudolf Brandstötter. He goes by the handle “alker33” on YouTube. He also has a blog here.

I was gratified to come upon these videos, because they allowed me to take a look at these old computers. The Apple models I’d used growing up were the Apple II+, the IIe, the 512K “Fat” Macintosh, either the Lisa 1 or Lisa 2, and the Mac Plus.

The size of each video window below is so small that if you want to take a good look at what’s going on on the video screen of each computer, you need to blow the video up to fullscreen. You do this by hitting the “broken square” icon in the lower-right corner of the video window. To take it back to normal size, hit the “shrink” icon in the lower-right corner, or hit the ESC key.

The Apple I

This was a kind of mythical machine back when I was a teenager in the early 1980s. People didn’t talk about it much, but when they did, there was some admiration for it, because it’s what started the company, and it was the first computer Steve Wozniak built. Even back then they were collector’s items, fetching (as I recall) tens of thousands of dollars. I don’t think I got an idea of what it actually was until the early 1990s, for it was not sold as a consumer computer. It was just a motherboard. It came out in 1976, and famously sold for $666.66 ($2,691.95 in today’s money). Steve Wozniak was asked about the price recently, and he said it wasn’t a marketing gimmick. There was no satanic meaning behind it. He said they looked at the cost it took to make it, and ran it through a standard profit margin formula, and it just happened to come out to that number!

It did not come with a keyboard, a monitor, a power supply, or a storage device. It came with a MOS Technology 8-bit 6502 microprocessor running at 1 MHz, and 4 kilobytes of memory, upgradeable to 16K. It had composite video output built in. The only software it contained in Read-Only Memory (ROM) was a machine language monitor that it booted into when you turned it on (that was the operating system!). The owner could use it to enter and run programs in hexadecimal via the keyboard. This was an improvement over earlier microcomputers like the Altair, which had their owners entering programs one byte at a time using a panel of switches. A tape recorder interface card had to be purchased separately to store and load programs. The card came with Steve Wozniak’s Integer Basic on cassette tape.
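To give a feel for what working through a machine language monitor was like, here’s a minimal sketch in Python, under my own assumptions. It is not the Apple I’s actual Woz Monitor, and the command syntax is hypothetical; it just illustrates the examine/deposit/run cycle described above.

```python
# Hypothetical toy monitor, loosely inspired by the idea of a ROM monitor:
# hex typed at a prompt, bytes deposited into memory, and a "run" command
# to jump to an address. None of this is the real Woz Monitor.
memory = bytearray(4 * 1024)  # 4 KB, like a base Apple I

def monitor(line: str) -> None:
    line = line.strip()
    if ":" in line:                          # deposit: "0300: A9 00 AA"
        addr_text, byte_text = line.split(":", 1)
        addr = int(addr_text, 16)
        for offset, token in enumerate(byte_text.split()):
            memory[addr + offset] = int(token, 16)
    elif line.upper().endswith("R"):         # run: "0300R" (we only pretend)
        addr = int(line[:-1], 16)
        print(f"jumping to ${addr:04X} (simulated)")
    else:                                    # examine: "0300"
        addr = int(line, 16)
        print(f"{addr:04X}: {memory[addr]:02X}")

monitor("0300: A9 00 AA")   # type in three bytes of a program
monitor("0300")             # look at the first byte -> "0300: A9"
monitor("0300R")            # "run" it
```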

An Apple I in a homemade case, from Wikipedia

Brandstötter, as I recall, got his from someone who had framed the motherboard. Some of the chips had to be replaced, but otherwise it was in working condition, as you can see in this video.

An original Apple I

I have to admit, watching this is anticlimactic. There’s not much to see, but I was glad I got to see it anyway. Finally, I could look at this machine that I’d heard rumors about for decades.

Brandstötter takes you on a “tour” of the machine, and shows the Apple I manuals. The computer boots into the machine language monitor, from which he can either program in hexadecimal, or load machine code from a cassette tape. Brandstötter does one of the standard tests of typing in a “hello world” program in hexadecimal. You can read about the built-in operating system (all 256 bytes of it!), and the assembly mnemonics of this little test program here. Then he loads Integer Basic from tape (after which he types in a short Basic program), and then he loads a program that displays Woz’s and then Steve Jobs’s mugs as ASCII art. The Apple I did not have a graphics mode.

Lest one think that maybe Apple used some sort of digitizer to get their faces into binary and saved them to tape, I learned not too long ago that it was common back in the 1950s and 1960s among those who were into digital media to hand-digitize photographs in ASCII. That may have been what happened here.

What’s special about this, as Brandstötter notes in the video, is that he owns one of six working original Apple I motherboards in the world. There are modern Apple I replicas that have been made that work exactly like the original. What I read in the discussion under this video (or one of the other Apple I videos I found) is that when Apple came out with the Apple II (the next one I show below), they had a trade-in program where people could turn in their Apple I motherboards. The reason they did this was that it saved them on customer support costs. So there aren’t that many vintage motherboards around. (I’ve read a couple of claims that there are between 40 and 60 of them in the whole world.)

The Apple II

The video below was an interesting find. I remember hearing from somebody years ago that there was such a thing as an Apple II before the II+. This is the first time I’ve seen one of them. Just from how it runs, it seems no different from a II+, though there were some minor differences (I derived this information from this Wikipedia page).

The II came with the same 6502 CPU as the Apple I, with configurations from 4 kilobytes up to 48K of memory. In its lowest configuration it sold for $1,298 ($4,921 in today’s money). The 48K configuration sold for $2,638 ($10,002 in today’s money). It came as a complete unit, with its own case (no assembly required), ready to be hooked up to a monitor or TV. It had several internal expansion slots, Woz’s Integer Basic, and an assembler/disassembler built into ROM. If you just booted into the ROM it would take you to Integer Basic by default. When it came out in 1977, owners still could only load/save programs on cassette tape. A disk interface, created by Steve Wozniak, and disk drive, along with a Disk Operating System (DOS) written by a company called Shepardson, came out for it a year later. Applesoft Basic (written by Microsoft), which handled floating-point, also came out for it later on tape. As you’ll see in the video, the II had a graphics mode. What you don’t see (due to the monochrome monitor) is that it was capable of displaying color.

The II+ came out in 1979, with 16, 32, or 48K configurations, and could be expanded to 64K. It had a starting price of $1,195 ($3,777 in today’s money). It came with Applesoft Basic in ROM (replacing Integer Basic as the standard language). The assembler/disassembler was removed from ROM to make room for Applesoft Basic, though it retained a machine language monitor that users could enter using a “Call” command from Basic. Owners could load Integer Basic from disk. In addition, the graphics capabilities were enhanced.

The Apple II (not the II+)

Brandstötter types in a brief Basic program that prints numbers across the screen. He copies some files from one disk to another to show that both disk drives work. He brought back memories loading up Apple Writer. This was one of the first word processors I learned to use in Jr. high school. Lastly, he loads up a game called “Bug.”

The Apple III

I only saw Apple IIIs back in the day, used as props on a TV show called “Whiz Kids.” The computer came out in 1980, and was designed as a business machine. It used a Synertek 8-bit 6502A CPU running at 1.8 MHz. I think the model Brandstötter uses in the video has 128K of memory (it was capable of going up to 256K). It sold for $4,340 to $7,800 ($12,084 to $21,718 in today’s money). The OS it booted into was called SOS, for “Sophisticated Operating System.” As you’ll notice, it defaults to an 80-column display (the prior Apple II models had 40-column displays). Interestingly, the OS runs through a menu system, not a command-line interface. It’s reminiscent of ProDOS, which I remember running on the Apple IIe sometimes.

The Apple III was designed to either run off of a floppy drive, or a 5 MB hard drive Apple sold called “Profile” (or both). You don’t see the Profile in this video, though you’ll see it in the video I have below on the Lisa.

The Apple III

Here’s Steve Jobs in 1980 describing how the company got started with the Apple I, his philosophical outlook at the time with the Apple II, and what he was looking forward to with “future products.” I really liked his perspective on what the computer enabled at the time. I get the sense he had a very clear idea of what its value was. With regard to “future products,” I think you can read from some of his answers that he was talking about the Apple Lisa, and possibly the Macintosh, though he was being tight lipped about getting into specifics, of course. Unfortunately there are some glitches in the video tape, and there’s a part that’s unintelligible, because it’s too badly damaged.

The Lisa

This is the Apple GUI computer that preceded the Macintosh. It came out in 1983. It used a 16-bit 5 MHz Motorola 68000 processor, and came with 1 MB of memory, expandable to 2 MB. It sold for $9,995 ($23,022 in today’s money). The OS featured pre-emptive multitasking, enabling the user to run more than one application at the same time.

Here’s the Wikipedia article on it.

The Apple Lisa

Brandstötter tells an interesting story about the “Twiggy” floppy drives. They’re the two 5-1/4″ drives on the right side of the case. They were named after a 1960s fashion model who was famously thin. From a get-together Jobs had with Bill Gates 7 years ago, where Jobs mentioned them, I thought he was talking about the 3-1/2″ drives that ended up on the Macintosh, but in fact he was talking about these drives. Brandstötter says that Apple tried using it on the Mac during its development, but they ended up going with the Sony 3-1/2″ drive, because these 5-1/4″ drives were so unreliable.

They used special floppy disks that were only made for use on this drive (talk about lock-in!). They stored 870K per disk. Brandstötter shows them to the camera, and my goodness! They have two “windows” per side where they can be accessed by two read-write heads, in a double-sided fashion. (Normal 5-1/4″ disks only had one “window” per side.) The two read/write heads (one on top, one on the bottom) were positioned on opposite sides of the spindle, and moved in tandem across the disk, because the designers were concerned about head wear in a conventional double-sided disk configuration (with two heads opposing each other, on the same side of the spindle). Each head was opposed by a pad to press the disk against the head.

Apple ended up having buyers trade in their Lisas for Lisa 2’s (which came out in 1984), which had the more reliable 3-1/2″ drive. However, Brandstötter shows off the fact that his “Twiggy” drives work. After a reboot, he shows a bit of what LisaDraw can do.

This was a really interesting romp through some history, most of which was before my time!

Postscript:

As I was doing my research for this post, I noticed that there was some talk of Apple I software emulators, enabling people to experience using this vintage machine on their modern computer. If you'd like to give it a whirl, here's a page with several Apple I, and Apple 8-bit series, emulators that run on various platforms. I haven't tried any of them out. It looks like there might be a little setup necessary to get the Apple I emulator running. I noticed there was a tape PROM file (for, I assume, accessing tape files, to simulate loading/saving programs). Usually this just involves putting files like the PROM file in a known location in your storage, and directing the emulator to where it is when you first run it. Also, here's the Apple I Operating Manual, which contains the complete hex and assembly listing of the machine monitor, and a complete electronic schematic of the motherboard. It's up to you to figure out what to do with it. 🙂

I leave it up to readers to find emulators for the other platforms. I know there are at least Apple II emulators available for various platforms. Just google for them.

Related posts:

Remembering Steve Jobs and Apple Computer

Reminiscing, Part 2

Read Full Post »

I found a version of Smalltalk-72 online through Gilad Bracha on Google+. It's being written by Dan Ingalls, the original implementor of Smalltalk, on his relatively new web authoring platform called Lively Kernel (or Lively Web). I don't know what Ingalls has planned for it, or whether it will be up for long. I'm just writing about it because I found it an interesting and enlightening experience, and I wanted to share it with others so that hopefully you'll gain similar value from trying it out. You can use ST-72 here. I have some words of warning about it, which I'll gradually describe in this post.

First of all, I've noticed as of this writing that as it loads it puts up a scary "error occurred" message, and a bunch of "loading" messages get spewed all over the screen. However, if you are patient, all will be well. The error screen clears after a bit, and you can start using ST-72. This looks to be a work in progress, and only some of it works.

What you’ll see is a white vertical rectangle surrounded by a yellowish box that contains some buttons. The white rectangle represents the screen area you work in. At the bottom of that screen area is a command line, which starts off with “Welcome to SMALLTALK.”

This environment was created using an original saved image of a running Smalltalk-72 system from 40 years ago. Even though the title says “ALTO Smalltalk-72,” it’s apparently running on a Nova emulator. The Nova was a minicomputer from Data General, which was the original platform Smalltalk was written on at Xerox PARC in the early 1970s. I’m unclear on what the connection is to the Alto’s hardware. You can even look at a Nova assembly monitor display by clicking on the “Show Nova” button, which will show you a disassembly of the machine instructions that the emulator is executing as you use Smalltalk. (To get back to Smalltalk, click on “Show Smalltalk.”) The screen display you see is as it was originally.

Smalltalk-72 is not the first version of Smalltalk, but it comes pretty close. According to Alan Kay’s retelling in “The Early History of Smalltalk,” there was an earlier version (which was the result of a bet he made with his PARC colleagues), but it was rudimentary.

This version does not run like the versions of Smalltalk most people are familiar with, which were based on Smalltalk-80. There are no workspace windows (yet), other than the command-line interface at the bottom of the screen area, though there is a class editor (which, as of this writing, I'd say is non-functional). It doesn't look like there's a debugger. Any time an error occurs, a subwindow with an error message pops up (which can be dismissed by pressing ctrl-D). You can still get some things done with the environment. To really get an idea of how to use it, you need to click on "Open ST-72 Manual." This will bring up a PDF of the original Smalltalk-72 documentation.

What you should keep in mind, though (again, as of this writing), is that some features will not work at all. File functions do not work. There's a keystroke the manual calls "'s", which is supposed to recall a specified object attribute, as in "title's string" to access a variable called "string" out of an object called "title." ("title" receives "'s string" as separate messages ("'s" and "string"), but "'s" is supposed to be received as a single character, not two, as shown.) I have not been able to find this keystroke on the keyboard, though you can substitute a different message for it (whatever you want to use) inside a class.

The biggest thing missing at this point is that while the environment can read the mouse pointer's position, it does not detect any mouse button presses (the original mouse used with it had several buttons). This disables the built-in class editor, which makes using ST-72 a chore for anything more than "hello world"-type stuff, because besides adding a method to a class (which is easy to do), it seems the only way to change code inside a class is to erase the class (assign nil to its name) and completely rewrite it. I also found this made later code examples in the documentation (which got more into sophisticated graphics) impossible to try out, as much as I tried to use keyboard actions to substitute for mouse clicks.

What I liked about seeing this version was it helped explain what Kay was talking about when he described the first versions of Smalltalk as “iconic” programming languages, and what he described with Smalltalk’s parsing technique, in “The Early History of Smalltalk” (which I will call “TEHS” hereafter).

To use this version, you’ll need to get familiar with how to use your keyboard to type out the right symbols. It’s not cryptic like APL, but it’s impossible to do much without using some symbolic characters in the code.

The most fascinating aspect to me is that parsing was a major part of how objects operated in this environment. If you're familiar with Lisp, ST-72 will seem a bit Lisp-like. In fact, Kay says in TEHS that one of his objectives was to improve on Lisp's use of "special forms" by making them unnecessary. He said that he did this by making parsing an integral part of how objects operate, and by making what appear to be syntactic elements (what would be reserved words or characters in other languages) their own classes, which receive messages. The messages are just the other elements of a Smalltalk expression in your code, so the parsing action goes pretty deep into how the whole system operates.

Objects in ST-72 exist in what I'd call a "nether world" between the way that Lisp functions and macros behave (without the code generation part). Rather than substituting bound values and evaluating, like Lisp does, an ST-72 object parses tokens and evaluates (though it does have bound values in class, instance, and temporary variables).

It's possible to create a class which has no methods (or just an implicit anonymous one). Done this way, it looks almost like a lambda (an anonymous function) in Lisp with no parameters. However, to pass parameters to an object, the preferred method is to execute some parsing actions inside the class's code. ST-72 makes this pretty easy. Once you start doing it, using the "eyeball" character, classes start to look a bit like Lisp functions with a COND expression in them.

Distinguishing between using a class and an instance of it can be confusing in this environment. If you reference a class and pass it a message, that’s executing a class’s code using its class variable(s). (In ST-80 this would be called using “class-side code,” though in ST-72 the only distinction between class and instance is what variables inside the class specification get used. The code that gets executed is exactly the same in both cases.) In order to use an object instance (using its instance variable(s)), you have to assign the class to a variable first, and then pass your message to that variable. Assigning a class to a variable automatically creates an instance of the class. There is no “new” operator. The semantics of assignment look similar to that of SETQ in Lisp.

A major difference I notice between ST-72 and ST-80 is that in ST-80 you generally set up a method with a signature that lays out what message it will receive, and its parameter values, and this is distinct from the functionality that the method executes. In ST-72 you create a method by setting up a parsing expression (using the “eyeball” character) within the body of the class code, followed by an “implies” (conditional) expression, which then may contain additional parsing/implies expressions within it to get the message’s parameters, and somewhere in the chain of these expressions you execute an action once the message is fully parsed. I imagine this could get messy with sophisticated interactions, but at the same time I appreciated it, because it can allow a very straightforward programming style when using the class.

Creating a class is easy. You use the keyword “to” (“to” is itself a class, though I won’t get into that). If anyone has used the Logo language, this will look familiar. One of Kay’s inspirations for Smalltalk was Logo. After the “to”, you give the class a name, and enter any temporary, instance, and/or class variables. After that, you start a vector (a list) which contains your code for the class, and then you close off the vector to complete the class specification.
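
For readers who know a mainstream language, here's a rough analogy of the semantics described above. The sketch below is ordinary Python, not ST-72 syntax, and the class name and messages in it are invented for illustration. It tries to capture the idea that the class body is essentially one parse-and-dispatch routine, that a "method" is a parse test (the "eyeball") followed by a conditional (the "implies"), and that an instance comes from binding the class to a name rather than from a separate "new" step (which Python can only approximate with an explicit call):

    # Plain Python, not ST-72 -- a loose analogy of "the object parses its own
    # message stream." The class name and the messages are made up for the example.
    class Turtleish:
        def __init__(self):
            self.heading = 0                      # an "instance variable"

        def receive(self, tokens):
            # "eyeball": look at the next token in the message stream...
            if tokens and tokens[0] == "turn":
                # ..."implies": if it matched, consume the argument and act.
                self.heading += int(tokens[1])
                return self.heading
            if tokens and tokens[0] == "heading":
                return self.heading
            raise ValueError("message not understood: %r" % (tokens,))

    # In ST-72, simply assigning the class to a name creates an instance; Python
    # needs the explicit call, but the spirit is the same -- no separate "new."
    joe = Turtleish()
    joe.receive(["turn", "45"])
    print(joe.receive(["heading"]))   # prints 45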

All graphics in ST-72 are done using a "turtle" (though it is possible to use x-y coordinates to position it, and to direct it to draw lines). So it's easy to do turtle graphics in this version of Smalltalk. If you go all the way through the documentation, you'll see that Kay and Adele Goldberg, who co-wrote the documentation, do some sophisticated things with the graphics capabilities, such as creating windows and menus, though since mouse button presses are non-functional at this point, I found this part difficult, if not impossible, to deal with.
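
If you want a modern, runnable taste of that turtle-graphics style (this is not ST-72 code, and the specific moves are just an example), Python's standard turtle module works the same way: a pen you steer with relative turns and moves, plus optional absolute x-y positioning:

    # Not ST-72 -- just the same turtle idea, using Python's standard turtle module.
    import turtle

    t = turtle.Turtle()

    # Relative, turtle-style drawing: a square.
    for _ in range(4):
        t.forward(100)
        t.right(90)

    # Absolute x-y positioning is also possible, as noted above for ST-72's turtle.
    t.penup()
    t.goto(-150, 50)
    t.pendown()
    t.circle(40)

    turtle.done()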

I have at times found that I’ve wanted to just start the emulator over and erase what I’ve done. You can do this by hitting the “Show Nova” button, hitting the “Restart” button, and then hitting “Show Smalltalk.”

Here is the best I could do for an ST-72 key map. Some of this is documented. Some of it I found through trial and error. You’ll generally get the gist of what these keys do as you go through the ST-72 documentation.

function | keystroke | description
! ("do it") | \ (backslash) | evaluates the expression entered on the command line, and executes it
hand | shift-' | quotes a literal (equivalent to "quote" in Lisp)
eyeball | shift-5 | "look for" (for parsing tokens)
open colon | ctrl-C | fetch the next token, but do not evaluate it (looks like two small circles, one on top of the other)
keyhole | ctrl-K | peek at the input stream, but do not fetch a token from it
implies | shift-/ | used to create if…then…else… expressions
return | shift-1 | returns a value
smiley/turtle | shift-2 | for turtle graphics
open square | shift-7 | bitwise operation, followed by:
    a * (number) = a AND (number)
    a + (number) = a OR (number)
    a - (number) = a XOR (number)
    a / (number) = a left-shift by (number)
? | shift-~ (tilde) | question mark character
's | (unknown) |
done | ctrl-D | exits a subwindow
unary minus | (not listed) |
less-than-or-equal | ctrl-A |
greater-than-or-equal | ctrl-Z |
not equal | ctrl-N |
% | ctrl-V |
@ | ctrl-F |
! | ctrl-Q | exclamation point character
(not shown) | ctrl-O | double quote character
zero(?) | shift-4 |
up-arrow | shift-6 |
assignment | shift-_ | displays a left-arrow, equivalent to "=" in Algol-derived languages
temp/instance/class variable separator | : (colon) | (this character looks like '/' in the ST-72 documentation, but you should use ":" instead)

When entering Smalltalk code, the Return key does not enter the code into the system. It just does a line-feed so you can enter more code. To execute code you must “do it,” by pressing ‘\’.

You'll notice some of the keystrokes are just for typing a literal character, like '?'. This is because the emulator re-maps some of the keys on your keyboard to do something else, so the characters those keys would normally produce had to be re-mapped to other keystrokes in order to remain usable.

Using ST-72 feels like playing around in a sandbox. It’s fun seeing what you can find out. Enjoy.

Read Full Post »

As I've been blogging here, there have been various issues that I've identified, which I think need to be remedied in the world of computing, though they're ideas that have occurred to me as I've written about other subjects. I went through a period, early on in writing here, of feeling lost, but exploring about, looking at whatever interested me in the moment. My goal was to rediscover, after spending several years in college, getting my CS degree, and then several more years in professional IT services, why I thought computers were so interesting so many years ago, when I was a teenager.

I had this idea in my head back then that computers would lead society to an enlightened era, a new plateau of existence, where people could try out ideas in a virtual space, the way I had experienced on early computers, with semi-interactive programming. I saw the computer as always being an authoring platform, and that the primary method of creating with it would be through programming, and then through various apparatus. I imagined that people would gain new insight about their ideas through this work, the way I had. I did not see this materialize in my professional career.

So I decided to leave that world for a while. I make it sound here like it was a very clear decision to me, but I'm really saying this in hindsight. Leaving that world was more of a gradual process, and for a while I wasn't sure I wanted to do it. It felt scary to leave the only work I knew as a professional, which gave me a sense of pride, and meaning. I saw too many problems in it, though, and it eventually got to me. I expected IT to turn out better than it did, to become more sane, to make more sense. Instead the problems in IT development seemed to get worse as time passed. It made me wonder how much of what I'd done had really improved anything, or if I was just going through the motions, pretending that I was doing something worthwhile, and getting a paycheck for it. Secondly, I saw a major shift occurring in software development, perhaps for the better, but it was disorienting nevertheless. I felt the need to explore in my own way, rather than going down the beaten path that someone else, or some group, had laid out for me.

I spent as much time as I could trying to find better ideas for what I wanted to do. I also spent a lot of time trying to understand Alan Kay's perspective on computing, as I was, and am, deeply taken with it. That's what this blog has largely been about.

When people have asked me, “What are you doing,” “What are you up to,” I’ve been at a loss to explain it. I could explain generically, “I’ve been doing some reading and writing,” “I’m looking at an old programming language,” or, “I’m studying an operating system.” They’re humdrum explanations, but they’re the best I could do. I didn’t have any other words for it. I felt like I couldn’t say I was wandering, going here and there, sampling this and that. I thought that would sound dangerously aimless to most of the people I know and love.

I found this video by Bret Victor, “Inventing on Principle,” and finally I felt as though someone put accurate, concise words to what I’ve been trying to do all this time.

http://vimeo.com/36579366

His examples of interactive programming are wonderful. I first experienced this feeling of a fond wish come true, watching such a demonstration, when Alan Kay showed EToys on Squeak back in 2003. I thought it was one of the most wonderful things I had witnessed. Here was what I had hoped things would progress to. It was too bad such a principle wasn’t widespread throughout the industry. It should be.

What really excited me, watching Bret’s video, is that at 18 minutes in he demonstrated a programming environment that looks similar to a programming language design I sketched out last fall, except what I had in mind looked more like the “right side” of his interactive display, which showed example code, than the “left side.”

I was intrigued by an idea I heard Alan Kay express, by way of J.C.R. Licklider, of “communicating with aliens” as a general operating principle in computing. Kay said that one can think of it more simply as trying to communicate with another human being who doesn’t understand English. You can still communicate somewhat accurately by making gestures in a context. If I had to describe my sketch in concise terms, I’d call it “programming by suggestion.” I had a problem in mind of translating data from one format into another, and I thought rather than using a traditional approach of inputting data and then hard-coding what I wanted it translated into, “Why not try to express what I want to do using suggestions,” saying, “I want something like this…,” using some mutually recognizable patterns as primitives, and having the computer figure out what I want by “taking the hints,” and creating inferences about them. It sounds more like AI, though I was trying to avoid going down that path if I could, because I have zero skill in AI. Easier said than done. I haven’t given up on the idea, though as usual I’ve gone off on some other interesting diversions since then. I had an idea of building up to what I want from simpler system designs. I may yet continue to pursue that avenue.

I’ve used each problem I’ve encountered in pursuit of my technical goals as an opportunity to pursue an unexamined idea I consider “advanced” in difficulty. So I’m not throwing any of the ideas away. I’m just putting them on the shelf for a while, while I address what I think are some underlying issues, both with the idea itself, and my own knowledge of computing generally.

Bret hit it on the head for me about 35 minutes into his talk. He said that pursuing the way of life he described is a kind of social activism using technology. It’s not in the form of writing polemics, or trying to create a new law, or change an existing one. Rather, it’s creating a new way for computers to operate as a statement of principles. This gets to what I’ve been trying to do. He said finding your principle is a journey of your own making, and it can take time to define. I really liked that he addressed the issue of trying to answer the question of, “Why you’re here.” I’ve felt that pursuit very strongly since I was in high school, and I’ve gone down some dead ends since then. He said it took him 10 years to find his principle. Along the way he felt unsure just what he was pursuing. He saw things that bothered him, and things that interested him, and he paid attention to these things. It took time for him to figure out whether something was just interesting for a time, or whether it was something that fit into his reason for being. For me, I’ve been at this in earnest since 2006, when I was in my mid-30s, and I still don’t have a defined sense yet of what my principle of work is. One reason for that is I only started working along these lines a year and a half ago. I feel like a toddler. I have a sense of what my interest centers around, though; that I want to see system design improved. I’ve stated the different aspects of this in my blog from time to time. Still, I find new avenues to explore, lately in non-technical contexts.

I’ve long had an interest in what makes a modern civilization tick. I thought of minoring in political science when I was in college. My advisor joked, “So, you’ll get your computer science degree, and then run for office, eh?” Not exactly what I had in mind… I had no idea how the two related in the slightest. I ended up not going through with the minor, as it required more reading than I could stomach. I took a few poli-sci courses in the meantime, though. In hindsight, I was interested in politics (and continue to be) because I saw that it was the way our society made decisions about how we were going to relate and live with each other in a future society. I think I also saw it as a way for society to solve its problems, though I’m increasingly dubious of that view. That interest still tugs at me, though recently it’s drawn me closer to studying the different ways that humans think, and/or perhaps how we can think better, as this relates a great deal to how we conduct politics, which affects how we relate to/live with each other as a society. I see this as mattering a great deal, so it may become a major avenue as I progress in trying to define what I’m doing. I don’t see politics as a goal of mine anymore. It’s more a means for getting to other pursuits I am developing on societal issues.

When Bret talked about what “hurt” him to see, this focus on politics came into view for me. When I think about the one thing that pains me, it’s seeing people use weak arguments (created using a weak outlook for the subject) to advance major causes that involve large numbers of people’s lives, because I think the end result can only be misguided, and perhaps dangerous to a free society in the long run. When I see this I feel compelled to intervene in the discussion in some way, and to try to educate the people involved in a better way of seeing the issue they’re concerned about, though I’m a total amateur at it. Words often fail to explain the ideas I’m trying to get across, because the people receiving them don’t understand the outlook I’m assuming, which suggests that I either need a different way of expressing those ideas, or I need to get into educating children about powerful outlooks, or both. Secondly, most adults don’t like to have their ideas challenged, so their minds are closed to new ideas right from the start. And I’ve realized that while it’s legitimate to try to address this principle which I try to act on, I need to approach how I apply it differently. Beating one’s head against a wall is not a productive use of time.

It has sometimes gotten me reflecting on how societies going back more than 100 years degenerated into regimented autocracies, which also used terribly weak arguments to justify their existence, with equally terrible consequences. We are not immune from such a fate. Such regimes were created by human beings just like us. I have some ideas for what I can do to address this concern, and that of computing at the same time, though I feel I now have a more realistic sense of the breadth of the effect I can accomplish within my lifetime, which is quite limited, even if I were to fully develop the principles for my work. A way has already been paved for what I describe by another techno-social activist, but I don’t have a real sense yet of what going down that avenue means, or even if reconciling the two is really what I want to do. As I said, I’m still in the process of finding all this out.

Thinking on this, a difference between myself and Bret (or at least what he’s expressed so far) is that like Alan Kay and Doug Engelbart, I think I see my work in a societal context that goes beyond the development of technology. Kay has dedicated himself to developing a better way to educate people in the great ideas humans have invented. He uses computers as a part of that, but it’s not based in techno-centrism. His goal is not just to create more powerful “mind amplifiers” or media. The reason he does it is as part of a larger goal of creating the locus for a better future society.

I am grateful to Bret for talking so openly about his principle, and how he arrived at it. It’s reassuring to be able to put a term to what I’m trying to achieve.

Related posts:

Coding like writing

The “My Journey” series

Why I do this

Read Full Post »

Christina Engelbart posted a page on her site to mention places that commemorated the 45th Anniversary of Doug Engelbart’s (her father’s) “Mother of All Demos.” The Computer History Museum in Mountain View, CA did an event on Dec. 9, the date of the demo, to talk about it, what was accomplished from it, and what from it has yet to be brought into our digital world. There have been some other anniversary events for this demo in the past, but this one is kind of poignant I think, because Doug Engelbart passed away in July. Maybe C-SPAN will be showing it?

I was intrigued by this line in the description of the CHM event:

Some of the main records of his laboratory at SRI are in the Museum’s collection, and form a crucial part of the CHM Internet History Program.

Since I heard about what an admirable job the Augmentation Research Center had done in building NLS architecturally, I’ve been curious to know, “How can today’s developers take a look at what they did, so they could learn something from it?”

The reason I particularly wanted to point out this commemoration, though, is that one of the pages Christina referenced was a post by Brad Neuberg demonstrating HyperScope, a modern, web-based system that implements some of the document and linking features of NLS, and Engelbart's original Augment system (NLS was renamed "Augment" in the late 1970s). Watching Engelbart's demo gives one a flavor of what it was like to use it, and its capabilities, but it doesn't lend itself to helping the audience understand how it deals with information, and how it operates–why it's an important artifact–beyond being dazzled by what was accomplished 45 years ago. Watching Brad's videos gives one a better sense of these things.

Read Full Post »

Looking at the retrospectives on Steve Jobs, after news of his death, they mostly recalled the last 10 years of his life, the iPod, the iPhone, and finally the iPad. Most of the images show him as an old man, an elder of the digital age “who changed the world.” This is not the Steve Jobs of my memories, but it is one of our modern era, one that has created digital devices that run software in tightly controlled environments for the consumer market.

My fond memories of Apple began with some of the company’s first technology products, back in the early 1980s. I didn’t know who Steve Jobs was at first, but I found out rather quickly. My memories of him are as a vibrant young man who believed that small, personal computers were the wave of the future, though “small” meant something that would fit on your desk…

I can say that I “experienced” Steve through using the technology he helped develop.

Apple’s first big success: The Apple II

My first experience with their technology was the Apple II Plus computer.

The Apple II Plus, from Wikipedia.org

The electronics were designed by Steve Wozniak, who has been called a “Mozart” of electronics design. Through the creative use of select chips and circuitry, he was able to pack a lot of functionality into a small space. Jobs designed the case for the computer. At the time that the first Apple II came out, in 1977, it was one of the first microcomputers that looked like what we might expect of a computer today. Most computers of the time were constructed by hardware hackers. The Apple was different, because you didn’t have to worry about building any part of it yourself, if you didn’t want to. The thing I heard that was really appealing about the Apple when it was launched was that the company was very open about how it worked. They wouldn’t talk about 100% of everything in it, because some of it was proprietary, but they’d tell hackers about everything else. They said, “Go to town on the darn thing!” That was the reason it got an early lead on everyone else, because Jobs recognized that its market was mainly software hobbyists. It was appealing to people who wanted to do things with the electronics, to expand upon what was there, but it was targeted at people who wanted to manipulate the machine through software.

My first encounter with a II Plus was at my Jr. high school in 1982. The school had just 3 of them. One was in the library, and students had to sign up for time on it. The other two were owned by a couple teachers, and were kept in their offices. The following school year my school got a computer lab, which was filled with Apple IIe’s. That same year the local public library made an Apple II Plus available. Most of the programming I did in my teen years was done on the II.

It was a very simple, but a very technical machine, by today’s standards. When you’d start it up, it would come up in Applesoft Basic (written by Microsoft), a programming language environment that doubled as the computer’s command-line interface/operating system. All you’d see was a right-square-bracket on the lower-left side of a black screen, and a blinking square for a cursor.

Applesoft Basic, from Wikipedia.org

It offered an environment that allowed you to run an app., and manage your files on disk, by typing commands. What I liked about it was that it offered commands that allowed me to do commonsensical things. With other 8-bit microcomputers I had used, I had to go through gyrations, or go to a special program, to maintain disk files. If I wanted to do some graphics, the commands were also right there in the language. With some other popular computer models, you had to do some complicated maneuvers to get that capability. It offered nothing for sound, though. If you wanted real sound, you had to get something like a Mockingboard add-on that had its own synthesizer hardware. The computer had several internal expansion slots. It was not designed for sound out of the box. If you had nothing else, you had to go into machine language to "tweak" the computer's internal speaker to get that, since it was only designed to beep. This was not "fixed" until the Apple IIGS, which came out in 1986. Regardless, Apple games tried their best to get sound out of the computer's internal speaker.

Basic was considered a beginner's programming language. It was less intimidating to learn, because the commands in the language looked kind of like English. Even though it was looked down upon by hackers, Basic was what I used on it most of the time. It was technology from an era when learning something about how the computer operated was expected of those who bought them.

To really harness the computer's power you had to program in assembly language, or type bytes into the machine directly, using the computer's built-in machine monitor. The square bracket prompt would change into an asterisk ("*"), and you were in no man's land. The Basic environment was turned off, and you were on your own, navigating the wilds of the machine's memory and built-in routines, giving commands and data to the machine in hexadecimal (a base-16 numbering system). This is what the pros used. You had total command of the machine. You also had all of the responsibility. There was no protected memory, and the machine gave you as much rope as you needed to hang yourself. If your program wandered off into disk operating system code by accident, you might corrupt the data you had on disk. Most of the commercial software written for the Apple II series was written in this mode, or using a piece of software called an assembler, which allowed the programmer to use mnemonic codes, which were easier to deal with. Programs written in machine code ran faster than Basic code. Basic programs ran in what's called an interpreter, where the commands in the program were translated into executable code as the program ran, which was a slower process. As a consolation, some people used Basic compilers to translate their programs into a machine code version in one go, so they'd run faster.
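
For a loose modern analogy of that interpreter-versus-compiler difference (this has nothing to do with Applesoft or 6502 code specifically; it just uses Python's built-in eval() and compile() to make the point), re-translating source text on every pass costs time, while translating it once up front does not:

    # A loose analogy of interpreted vs. compiled execution, in Python.
    import timeit

    source = "2 * x + 7"

    # "Interpreter" style: re-translate the source text on every run.
    interpreted = timeit.timeit(lambda: eval(source, {"x": 5}), number=20000)

    # "Compiler" style: translate once up front, then just execute the result.
    code = compile(source, "<expr>", "eval")
    compiled = timeit.timeit(lambda: eval(code, {"x": 5}), number=20000)

    print("re-translated every time:", interpreted)
    print("translated once up front:", compiled)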

If you wanted to run a commercial app., you would insert a disk that contained the app. into the floppy disk drive, and reboot the machine. The app. would automatically load off of disk. If you wanted to run a different app., you’d typically remove the disk from the disk drive, and insert a new one, and repeat the process. There was no desktop environment to operate from. You booted into each program, and you could only run one program at a time.

This was pretty much the world of the Apple II. Once graphical interfaces became popular, Berkeley Softworks came out with a piece of software called GEOS that gave the II a workable graphical interface, though I think most Apple users thought it was a novelty, because most of the applications people cared about didn’t run on it.

Another big market for the II was in the public schools. For many years it was the de facto standard in computing in America’s schools. A lot of educational software was written for it.

Stickybear on the Apple II, from atarimagazines.org

A third big market opened up for it when VisiCalc (Visible Calculator) came out in 1979, written by Dan Bricklin and Bob Frankston. It was the world’s first commercial spreadsheet, and it came out first on the Apple II. It was the II’s “killer app,” a piece of software so sought after that people would buy the computer just to be able to use it.

VisiCalc, from Wikipedia.org

I first learned what a spreadsheet was by using VisiCalc. Modern spreadsheets use many of the same basic commands, and offer the same basic features that it pioneered. The two main features it lacked were macros and a graphing function. Each cell could contain a formula that could draw values from other cells into a calculation. Other than that it was not programmable.
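
If it helps to see the cell-formula idea concretely, here's a minimal sketch in Python. It's nothing like VisiCalc's actual implementation, and the cell names and numbers are made up; it just shows a cell holding either a plain value or a formula that pulls values from other cells:

    # A toy illustration of the spreadsheet idea, not VisiCalc's implementation.
    sheet = {
        "A1": 120,                                    # plain values
        "A2": 80,
        "A3": lambda s: get(s, "A1") + get(s, "A2"),  # a formula referencing other cells
    }

    def get(sheet, ref):
        cell = sheet[ref]
        return cell(sheet) if callable(cell) else cell

    print(get(sheet, "A3"))   # prints 200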

An interesting bit of history from this era is that some of the software from it lives on. Castle Wolfenstein, by Muse Software, one of the popular games for the Apple II, has had quite a lot of staying power, into our modern era. Remember Wolfenstein 3D by Id Software, and Return to Castle Wolfenstein on the PC? Wolfenstein started on the Apple II in 1981. The following video is from its sequel, Beyond Castle Wolfenstein, which came out in 1984. It gives you the flavor of the game. Unlike its modern translations, it was a role-playing game. The object was to pick up items that would help you get to your objective. Shooting was a part of the game, but it wasn't the only thing you had to do to get through it. As I remember, the ultimate goal was to blow up the castle.

Beyond Castle Wolfenstein

Another Apple original that has had a lot of staying power is Flight Simulator. It was originally written by a company called subLogic. They came out with Flight Simulator II, which was ported to a bunch of different computers, including the IBM PC. This was the second version of the product, which was a huge improvement on the original. It featured realistic maps of cities (as realistic as you could get with such a low-resolution display), colorized landscapes (rather than the wireframe graphics in the original), realistic weather conditions you could select, and a variety of aircraft you could fly. Later, expansion disks came out for it that featured maps of real cities you could fly to. Microsoft purchased the rights to Flight Simulator II, and developed all of its subsequent versions.

The original Flight Simulator on the Apple II

Their flops

Apple had some early flops. The first was a now-little-known computer called the Apple III, which came out in 1980. It was a slightly faster version of the II, using similar technology. It was designed and marketed as a business machine. Unlike the II it had an 80-column text display. The II had a 40-column text display, though in the early 1980s there were 80-column expansion cards you could get for the IIe. It had a higher memory capacity, and it was backward-compatible with the II, through a compatibility mode.

The Apple III, from Wikipedia

Their next flop came soon after, the Apple Lisa, which came out in 1983.

The Apple Lisa, from Wikipedia

A screen from the Lisa, from Wikipedia

It was also marketed as a business computer. Most people give props to the Macintosh as being Apple’s first computer with a graphical user interface, and a mouse, but it was the Lisa that had that distinction. This was Apple’s first crack at the idea. It had some pretty advanced features for microcomputers at the time. The main one was multi-tasking. It could run more than one application at a time. Its biggest problem was its price, about $10,000. Unlike the Apple III, the Lisa had some staying power. Apple marketed it for the next few years, trying variations on it to try to improve its appeal.

I had the opportunity to spend a little time with a Lisa at a computer show in the mid-1980s. It had a calendaring desk accessory that was a revelation to me. It was the first of its kind I had seen. In some ways it looked a lot like iCal on OS X. My memory is it functioned just like it. It would give you a monthly, weekly, and daily calendar view. If you wanted to schedule an event for it to alert you about, you entered information on a form (which looked like a conventional dialog box), and then when that date and time came up, it would alert you with a reminder.

When I was in Jr. high and high school, I used to carry around with me a pocket spiral-bound notebook so I could write down assignments, and when I had tests coming up. It looked pretty messy most weeks. I really wanted a way to just keep my schedule sane. The Lisa demonstrated that a computer could do that. I didn’t have regular access to a Lisa computer, though, and there was absolutely no way my mother could afford to get me one, especially just to give me something with a neat calendar! So in high school I set out to create my own weekly planner app. on an Apple II, using Basic. I didn’t own one, but the school had lots of them. I figured I could use them, or the one at the local public library, which I used regularly as well. I wrote about the development of it here. I called my app. “Week-In-Advance,” and I wanted it to have something of the feel of the Lisa calendar app. I saw. So I set out to create a small “graphical interface” in text mode. I succeeded in my efforts, and it showed me how hard it is to create something that’s easy to use! It was the biggest app. I had written up to that point.

The Macintosh

If you’re a modern Mac user, this was kind of its great-granddaddy… Anyway, it’s related. I’ll explain later. It came out in 1984, and was Steve Jobs’s baby.

The first Macintosh, from Wikipedia

I had the thought recently that Jobs invented the idea of the “beige case” for a computer with the Macintosh, which PC makers followed for years during the 1990s, and everyone got tired of it.

This was almost Apple's third flop. It created a sensation, because of its simple design, and ease of use. Steve Jobs called it "The computer for the rest of us." It was targeted at non-techie users who just wanted to get something done. They didn't want to have to mess with the machine, or understand it. The philosophy was that it should understand us, and what we wanted.

My local public library got a Mac for people to use a year after it came out. So I got plenty of time using one.

It was a cheaper version of the Lisa, so it was more accessible, but there wasn’t a whole lot you could do with it at first. The only applications available for it at its launch were from Apple: MacPaint, a drawing/painting program (rather like Microsoft Paint today, except with only two colors, and a bunch of patterns with which you could paint on the screen), and MacWrite, a word processor. Just from looking at it, you can tell that no Apple II programs would run on it, and I don’t think you’d want that anyway.

As you can see, it had a monochrome display. It could only display two colors, white and black. This drew some criticism, but it was still a useful machine. The Mac wouldn’t have a color display until the Macintosh II came out in 1987. Incidentally, other platforms had color graphical interfaces a year after the Mac first came out. There was GEM (Graphics Environment Manager) by Digital Research (which was mainly for the IBM PC), the Atari ST (which used GEM), and the Commodore Amiga, not to mention Version 1.0 of Microsoft Windows.

The Mac was probably the first computer Apple produced that represented a closed design. The first Macs were hardly expandable at all. You could expand their memory capacity, but that was it. It had a built-in floppy drive, but no internal hard drive, and no ability to put one inside. The Mac popularized the 3-1/2″ floppy disk format. Before it came along the standard was 5-1/4″ disks. It had an external connector for a hard drive, so any hard drive had to exist outside the case. It had some external ports so it could be hooked up to a printer, and I believe a phone modem.

In that era we were all used to floppy drives making noises. The first Mac's floppy drive was also a bit noisy, but it had a "hum" sound to it as it spun the disk. It sounded like it was "humming" some tune that only it knew. Bill Gates and Steve Jobs, when they talked about the development of the Mac at their D5 Conference appearance (below), reminisced about a "twiggy" drive, though that nickname actually belonged to the Lisa's 5-1/4″ drives, not the Mac's Sony drive. The reason for the sound the Mac's drive made, I later discovered, is that it used an encoding scheme that Steve Wozniak had developed for the Apple II's disk drives, called Group Code Recording (GCR). The drive was designed to store data uniformly on the disk, so as to fit more on it and make floppies a more reliable storage medium. The reason for the sound it made is that they varied the speed of the drive, depending on where the read/write head was on the disk. You have to understand a little something about physics to get why they did this.

(Update: 10-21-2011: I realized after doing some research that I held a mistaken notion that the variable-speed drive was a way of achieving data compression. The reason they varied the speed of the drive is that the format they used, called GCR, used a fixed sector size. The GCR format packed data densely on the disk. Since they varied the speed, they were able to fit fewer sectors on the tracks closer to the center than on the tracks toward the outer edge. In effect, they inverted the "normal" way of storing data, and so more evenly distributed data on floppy disks. This article explains it in more detail, under Technical Details:Formatting:Group Code Recording.)

Other disk drives, on other computers, packed data more densely on the inner tracks of a disk than on the outer tracks. The reason was that the disk was always spun at the same speed in those drives, no matter where the read/write head was, and the head always read and stored data at a constant rate. In physics we know, though, that when you're rotating anything at a constant rate, the part near the center moves at a slower speed than parts that are farther from the center. As a result, the disk drive ends up packing the same amount of data into a smaller space near the center of the disk than it does near the outside of it. The "density" of the data varies depending on how far from the center it's stored. Imagine dropping bits of material on a piece of paper that's sliding beneath your hand. If you speed up the paper, the bits of material will be more spread out on it. I'm not sure how this was done on the Apple II drives, but on the Mac, the way they dealt with this issue was to spin the disk faster when the head moved toward the center, and slower as the head moved to the outer edge, thereby generating different "motor" sounds as it sped up and slowed down the disk.
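
Here's a back-of-the-envelope sketch of that physics, with made-up numbers rather than Apple's actual drive parameters: at a fixed data rate, the spacing of bits along a track is proportional to the linear speed of the track under the head, which is exactly what varying the spindle speed evens out.

    # Made-up numbers, not Apple's real drive parameters -- just the geometry.
    import math

    def linear_speed_mm_per_s(rpm, radius_mm):
        return 2 * math.pi * radius_mm * (rpm / 60.0)

    inner, outer = 20.0, 40.0   # hypothetical track radii, in mm

    # Constant spindle speed: the outer track moves past the head twice as fast,
    # so bits written at a fixed rate end up spread twice as far apart out there.
    print(linear_speed_mm_per_s(300, inner), linear_speed_mm_per_s(300, outer))

    # Variable spindle speed: slow the disk down for the outer track, and the
    # linear speed (and therefore the recording density) stays roughly constant.
    print(linear_speed_mm_per_s(300, inner), linear_speed_mm_per_s(150, outer))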

Their GCR format packed data onto the disk more efficiently, which made it possible to store a little more data per disk than on most other drives. Looking back on it, this technique might seem trivial, given the amount of data we store today, but back then it was rather significant. A conventional double-sided, double-density 3-1/2″ disk drive could store 720K on a disk. But a Mac could store 800K on the same disk, providing 11% more space per disk.

Back then most computer users didn’t have a lot of data to store. Applications and games were small in size. Documents might be 30K at most. Most people didn’t think of storing large images, and certainly not videos, on these early machines. The amount of data being passed around on networks tended to be pretty small as well. So even what seems like a piddly amount of extra disk space now was significant then.

Edit 10-17-2011: A minor point to add. You'll notice in the picture of the Mac that there's no disk eject button. This was because the computer apparently did some housekeeping tasks using your disk, and it would be damaging to data on your disk, and/or cause the system to crash, if you could eject the disk anytime you wanted. I have no idea what these housekeeping tasks were, but whenever the user wanted to eject the disk (which you could do by selecting a menu option, or pressing a command key combination, or dragging the disk icon to the trashcan icon on the desktop), the computer would spend time writing to the disk before ejecting it via a servo mechanism. Sometimes it would spend a significant amount of time doing this. It may have been an operating system bug, but I saw instances where it would take 5 minutes to eject the disk! All the while the disk drive would be running…doing something, you knew not what. In some rare instances the computer wouldn't eject the disk at all, no matter what you tried. In those situations it had a pinhole just to the right of the disk slot, which you can see in the picture if you look closely, that you could stick the end of a paperclip into, to manually force the disk drive mechanism to eject the disk. I remember seeing a vendor once that sold nice colored "disk ejectors," which had a handle that looked like a USB thumb drive, and a pin at one end that you'd stick into this hole. A paperclip did the trick just fine, though.

A major difference between the Mac and other computers of the day was it did not come with a programming language. There were programming languages available for it, but you had to get them separately. It was a bold departure from the hacker culture that had influenced all the other computers in the marketplace.

In contrast to the computers I had used prior to using the Mac, including the Apple II, the experience of using it was elegant and beautiful. It had the highest resolution of any computer I had used before. I loved the fact that I could look at images in greater, finer detail. On the desktop interface, windows appeared to “warp” in and out of view, except it was really the window’s frame that moved. It didn’t have enough computing power to move a whole window’s contents. The point is they didn’t just appear. You get that effect on the modern Mac UI as well, and it looks a lot neater.

The main drawback was that it could only run one app. at a time. Even so, it was possible to cut, copy, and paste across applications. How was this done? Well, Mac OS System (the name of the operating system at the time) had a desk accessory called “Scrapbook” that allowed you to add multiple images and document clippings to it. You would add clippings by using the cut and copy feature in an application. Scrapbook would automatically cache these clippings, possibly saving them to disk (this is reminding me that I used to do a lot of disk swapping with the old Macs, which was a reason that more financially well-endowed users bought a second floppy drive, or a hard drive). The scrapbook was finite in size, and would eventually cycle out old clippings, as I recall. Anyway, when you’d load a new application, the last clipping that was added to the scrapbook would be available for pasting by default. Desk accessories could be run at all times, and so you could open Scrapbook while you were using an app., go through its clippings, and grab anything else you wanted to paste. Needless to say, cutting and pasting across applications was an involved process.

This was soon fixed by a system utility written by Andy Hertzfeld called Switcher, once the Mac was given more memory (the very first model came with a total of 128 kilobytes of memory, and so wasn’t such a great candidate for this). The idea was to enable users to load multiple apps. into memory, and allow them to switch between them without having to quit out of one to get to the others. It enabled you to go back to the desktop to launch an app. without having to quit out of the apps you had already loaded. It was rather like how apps. on mobile devices work now.

I read up on the history of Switcher a few years ago. Microsoft was very enthusiastic about it at the time, because they recognized that users would buy more apps. if it was easier to launch more than one at a time. It was really nice to use. It was like using OS X’s multiple desktop feature, except that you could only see one app. on the screen at a time. It had the same effect, though. When you’d switch, the app. you were using would “slide” out of view, and the new one would slide on right behind it. It was like you were shifting your gaze from one app. to the other. It worked really well for early Mac apps., because there was no reason to be doing background processing with what was available. It created the illusion that all apps. were “running” at the same time, when they really weren’t. All the other apps. were in suspended animation when they weren’t in view. Copying clippings and pasting between apps. became a breeze.

It was said of Steve Jobs that he had high standards that drove engineers at Apple nuts, but it seems to me he was willing to compromise on some things. The original Mac had a monochrome display, which I’m sure he knew wasn’t as exciting as color. It was a single-tasking machine, so in the beginning people were running and quitting out of applications a lot. It had a small amount of memory for what it did, and so you couldn’t load multiple apps., which made multi-tasking impractical. You couldn’t cut and paste things between applications easily without the Switcher add-on, which came out about a year after the Mac was released. I’m sure all of these compromises were made to keep the price point low.

The Mac had its critics early on, who called it a "child's toy." "Productive people don't need cute icons and menus to get work done. They get in the way." There were a lot of advocates for the command-line interface over the GUI in those days. In a way, they were right. Alan Kay said years later that the reason they came up with an iconic, point-and-click graphical interface at Xerox PARC, which the Lisa and Mac drew inspiration from, was to create an easy-to-use environment for children. Not to say that a graphical interface has to be for children, but this is the model Apple drew from.

Jobs leaves Apple

The unthinkable happened at Apple in 1985. Jobs was ousted from the Mac project by his hand-picked CEO, John Sculley, and he left the company. I remember reading about it in InfoWorld, and being kind of shocked. How could the man who started the company be ousted? How could they think that they could just get rid of this creative person, and keep it the same company? Would Apple survive without him? It felt like something was wrong with this. I wasn't a big Apple fan at the time, but I knew enough to know that this was a big deal.

After this, I lost track of Jobs for a while. Apple seemed to just move along without him. As I mentioned earlier, they came out with newer, better computers. The Apple Mac had grown to 10% market share by the end of the 1980s, an impressive feat when PCs were growing in dominance by leaps and bounds. In hindsight, the only thing I can point to as a possible problem is they made only incremental improvements to the Mac. They coasted, and for a few years they got away with it.

The only thing about this period that I thought sucked at the time was that Apple was suing every computer maker in sight that had a graphical interface, claiming they had violated its copyrights. It had the appearance that they were trying to kill off any competition. The only company they won against was Digital Research, with their GEM operating system, and then only on the PC version, which died out shortly thereafter. It was getting so bad that the technology press was calling on Apple to quit it, saying they were stifling innovation in the industry. It didn't matter anyway, because Microsoft Windows was coming along, and it would eventually eat Apple's lunch. Microsoft might've actually had Apple to thank for Windows's success. Apple probably weakened Microsoft's other competitors.

Nevertheless, Apple seemed to be succeeding without Jobs for a time. It was only when Sculley left in the early 1990s that things went downhill for them, from what I could see.

NeXT

I rediscovered Jobs a bit in college, when I heard about his new venture that created the NeXT computer in 1988.

The NeXT computer, from simson.net

The keyboard, monitor, and laser printer for the NeXT, from simson.net

The NeXTStep interface, from Wikipedia

Come to think of it, maybe Jobs was trying to communicate something in the name…

At the time that the NeXT came out, it seemed futuristic. The computer was shaped like a cube. The case was made out of magnesium, and it featured a removable magneto-optical drive (removable in the sense that you could take the magneto-optical disk, which was housed in a cartridge, out of the drive). Each disk held 256 MB, which was a lot in those days. Most people who had hard drives had 1/4 of that storage capacity at most. The disk was made out of metal. The way the drive worked was that a laser would heat the spot on the disk it wanted to write to, up to what's called the Curie point (a specific temperature), so that the magnetic write head could change that spot's polarity. Pretty complicated! I guess this was the only way they could find at the time to achieve that kind of rewritable storage capacity on a removable disk, probably because it allowed for a relatively large or imprecise read/write head. Only the part of the disk that was heated by the narrow laser beam would change. So the only part you had to worry about being terribly precise was the laser (and the read head).

Out of the gate, the NeXT computer's operating system was based on Unix, using the Mach kernel, as I recall. It used Objective-C as the standard development language for the system, accompanied by some visual user interface design tools. The display system used a technology called Display PostScript to create a good WYSIWYG (What You See Is What You Get) environment.

In 1990, NeXT made a splash by announcing a slimmer, sleeker model, dubbed the "pizza box," because, nice as it looked, that's what it looked like. The magneto-optical drive was gone. It was replaced by a hard drive and a high-density floppy drive. The big feature that got developers' attention was a Motorola digital signal processor (DSP) chip that was built in. One of the ways it was used was to calculate complex mathematical equations at high speed, taking that load off of the main processor.

The second-generation NeXT computer, from Wikipedia

I got only a few chances to use a NeXT, for a brief time. Again, the computer was way out of my price range. It seemed nice enough. It had that feel about it that was like the Mac, where it would do things–just fine touches–so you didn’t have to think about them. I remember having an “information” dialog open on a file, and doing something to the file. Rather than having to refresh the information window, it updated itself automatically in the background. We take stuff like this for granted now, but back then I noticed stuff like that, because no other computer acted that way.

Doing some research in retrospect about a year ago, I found a demo video that Jobs had created about the second generation NeXT computer. I discovered that they had designed software for it so it could be used as an office platform. You could embed things like spreadsheets, sales presentations, and audio clips in e-mails you’d send to people. This was before most people had even heard of the internet, and long before the MIME protocol was developed. They had advanced video hardware in it so that you could watch high-quality digital color video, which was really hard for most computers to do then. They had also shown an example of a subscription app., demonstrating how you could read your daily issue of the New York Times online. This was done around the same time that the very first web browser was invented. If this rings a bell, though, that’s because Apple has done demos like this within the last few years, as had Microsoft, when they first introduced Windows Vista.

A little trivia: Tim Berners-Lee wrote the world’s first web server and the world’s first web browser on a NeXT workstation.

The world’s first web browser

Once I got out into the work world, in the mid-90s, I read that things weren’t going so well for NeXT. They eventually sold off their hardware division to Canon. However, things weren’t looking totally down for Jobs. I learned that he had also been heading up a company called Pixar. “Toy Story” came out, and it was amazing. The computer graphics weren’t as impressive an effect as those in “Jurassic Park,” which had come out a couple years earlier, but I was still pretty impressed with it, because it was the first feature-length movie made entirely with computer graphics. Mostly what appealed to me were the memories of the toys I had as a kid. The story was good, too.

It seemed like NeXT was on its last legs when Apple bought the company in late 1996. Apple wasn’t doing so hot, either, but it obviously had more cash on hand. The joke was on the people who did the deal, though, because in less than a year, Jobs was back on top and in charge at Apple.

In short order we had the iMacs, and amazingly they were selling like hotcakes. Apparently their shape and their color were what appealed most to customers, not what the computer actually did! No more beige boxes! Yay! Uh…and where did they come from? Eh, not important…

The first iMac, from Wikipedia

Jobs did some things that surprised me after he took over. He cancelled Hypercard, one of the most innovative pieces of software Apple ever produced. Hypercard was a multimedia authoring environment that enabled programming neophytes to write their own programs on the Mac. You didn’t even need to know a programming language. It was a visual programming environment. You just needed to arrange media elements into a series of steps (“cards” in a “deck”), and set up buttons or links to trigger some actions. The closest equivalent to it on modern Macs is a program called “Automator.” I’ve tried using it, though, and it feels clunky. Hypercard had long been treated as a neglected stepchild at Apple, so in a way Jobs was putting it out of its misery.

He cancelled the Newton, Apple’s PDA. It had become the most popular handheld computer used by hospitals. As a result, they all had to find some other mobile platform to use, and all their mobile software had to be rewritten, because the Newton’s operating system architecture and development language were proprietary.

Edit 10-13-2011: Thirdly, he cancelled Apple’s clone licensing program, which killed off Mac clone makers. This, to me, was more understandable from Apple’s perspective.

There had been efforts to make Mac clones in the 1980s. My memory is they were all the product of reverse-engineering. Apple finally allowed “genuine” Mac clones, under a license agreement, in the 1990s. Apple of course retained ownership of Mac OS. My memory is this happened after Sculley left. I could understand the appeal of this idea, since PCs (of the IBM variety) solidified their dominance in the market once clones came out. It didn’t work out the same way for Apple, however. There were a few things this strategy probably didn’t take into account. One is that Microsoft had to handle a lot more hardware variety in its operating system in order to make the clone market work; I vaguely remember hearing about compatibility and stability problems with Mac clones. Secondly, the PC clone manufacturers had to accept much lower profit margins than IBM did when it owned the hardware market. Thirdly, Microsoft didn’t depend on hardware for its revenue. Apple’s business model did, and they were allowing competitors to undercut them on price. For Apple it was rather like Sun’s strategy with Java: offer a loss-leader in a major, desirable technology the company owned the rights to, in hopes of gaining revenue on the back end in a market that was increasingly perceived as commoditized, which…didn’t really work out for them.

In a few years, NeXTStep would take over the Mac, with OS X. One of the things Jobs commented on recently was that after he left Apple in 1985, they just didn’t innovate like they had under his direction. The Mac needed to catch up. So transplant NeXT’s work of 1992 into the Mac of ten years later!

Even though the OS X interface looks a lot different from the NeXT’s, under the covers it’s NeXTStep. The display technology is derived from what was used on the NeXT. The NeXT operating system was Unix-based, as is OS X. Objective C was brought into the Mac platform as a supported language. In essence, OS X has been the “next” iteration of the NeXT computer. Like a phoenix, it rose again. This was apparently a part of the deal Apple made in buying the company. They recognized that some next-generation OS was needed for the Mac, since it was aging, and from the beginning they had planned to use NeXT technology to do that.

Old apps. written for Mac OS would no longer run natively on the new system unless they were “carbonized,” which involved recompiling them against a compatibility library. The problem was, if you depended on a Mac OS app. written by a software company that was no longer in business, your best bet was not to upgrade.

Things were not so great in paradise. Apparently the transition from Mac OS to OS X was rough. I remember hearing vague complaints from Mac users about the switchover. They didn’t really get the memo that it was a whole new operating system that behaved differently from what they had been used to for years. It may not have entirely been their fault, in the sense of not being willing to learn a new system; I remember hearing complaints about system instability as well.

To their credit, Apple quickly fixed a lot of the stability problems, from what I understand.

This was only for starters. Rather than focus solely on developing the desktop computer market, since Jobs said that Microsoft had “won that battle,” he took Apple in a whole new direction by saying that they should develop mobile devices “for the rest of us.” Apple has also been capturing the market for electronic publishing, with iTunes and the App Store. This combination has been the source of its meteoric success since then.

Unlike “the rest of the world,” I was never that enthused about Apple’s new direction. I haven’t owned an iPod, or any of their other mobile devices. I have an old Pocket PC, and a digital camera that I use. I bought my first Apple product, a MacBook Pro, in 2008, and aside from some kinks that needed to be worked out, it’s been a nice experience.

For the longest time I was not that big of an Apple fan. When I met other Apple users they often came across as elitist, like they had the bucks to buy the best technology, and they knew it. That turned me off. I used stuff from Apple from time to time, but I liked other technology better, because my priorities were different from most people’s. Nevertheless, there was something about Steve Jobs I liked. He had a creative, innovative spirit. I liked that he cared about quality. Ironically, Apple’s products always seemed more conservative than my tastes. It was an adjustment to use my current laptop. It allows enough flexibility that I don’t feel totally hemmed in by its “ease of use,” but there are a few small things I miss.

Jobs was an inspiration. Like some other people I’ve seen around in my life, he was someone I followed and kept track of with interest for many years. He gave us technology that was worth our time to use. What I appreciated most about him was that he pushed beyond what was widely thought of as “the way things are” in computing. Unlike most Apple fans and followers, I haven’t seen that much in the way of original ideas out of him. What I credit him with is taking the best ideas that others have come up with, trying to pare them down so that the average person can understand them, having the courage to make products out of them when no one else would, and then marketing the hell out of them. Part of what mattered to him was what computers made possible, and the experience of using them. In the beginning of all this, it seemed like his dreams were far out ahead of where most people were with respect to technology. In his return to Apple, he stayed out ahead, but he seemed to have a keen sense of not getting too far ahead of customers’ expectations. I think he discovered that there’s no virtue, at least in business, in getting too far out ahead of the crowd you’re trying to impress.

There were a couple really memorable moments with Jobs in the last 10 years that I’d like to cover. The first was his 2005 commencement address to the students at Stanford. Here he reveals some things about his life story that he had kept close to the vest for years. He had some good things to say about death as well. It’s one of the most inspirational speeches I’ve heard.

Below is a really great joint interview with Jobs and Bill Gates at the D5 Conference in 2007. It was interesting, engaging, and funny. It covers some of the history that I’ve talked about here, and what we’ve seen from Apple and Microsoft in the present.

This was a rare thing. I think this was one of only three times where Jobs and Gates had appeared together in public, and contrary to the mythology that they were rivals who hated each other,…well, they were rivals, but they got along swimmingly.

Just a little background: the intro. music you hear is from “Love Will Find A Way,” by Yes. Mitch Kapor, who’s introduced in the audience, was the founder of Lotus Software. He developed the Lotus 1-2-3 spreadsheet for the PC (Lotus was bought by IBM in the 1990s). Last I heard, some years ago, he had become a major advocate for open source software.

Jobs quoted Alan Kay, saying, “People that love software want to do their own hardware.” Maybe he did say that, but the quote I remember is, “People who are really serious about software *should* make their own hardware.” When I first heard that, I remember thinking Kay was issuing a challenge to programmers, like, “Real programmers make their own hardware,” but I later realized what he probably meant was that software developers should take control away from the hardware engineers, because the hardware they had created, which was being used in computers, was a crappy design. So what he was probably saying was that really good software people would be better at making hardware to run their software. The way Jobs expressed this is a shallow interpretation of what Kay said, because Kay was very critical of what both Motorola and Intel did in their hardware designs, and yet Apple has used hardware from both of these companies for the main chipsets of its 16-, 32-, and 64-bit computers.

Note: There is a 7-minute introduction before the event with Jobs and Gates starts.

Gates said something towards the end that struck me, because it really showed the friendship between the two of them. He said, totally unprompted, that he wished he had Jobs’s taste. Jobs was famously quoted as saying in “Triumph of the Nerds”:

The only problem with Microsoft is they just have no taste. They have absolutely no taste. I don’t mean that in a small way. I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their product. And you say, “Why is that important?” Well, proportionally-spaced fonts come from typesetting and beautiful books. That’s where one gets the idea. If it weren’t for the Mac, they would never have that in their products. And so I guess I am saddened–not by Microsoft’s success. I have no problem with their success. They’ve earned their success, for the most part. I have a problem with the fact that they just make really third-rate products.

It felt like things had come full circle.

Saying goodbye

I was a bit shocked to hear of Jobs’s death last Wednesday. I knew that his health had been declining, but I thought he might live another year or so. He had only stepped down as Apple’s CEO in late August. In hindsight, though, it makes sense. He loved what he did. Rather than retire, and decline in obscurity, he held on until he couldn’t hold on any longer.

I’ve felt a little sad about his death at times. I know that Jobs’s favorite music was the Beatles and Bob Dylan, but on the day he died, this song, “It’s So Hard To Say Goodbye To Yesterday,” by Boyz II Men was running through my head. It expresses my sentiments pretty well.

Bye, Steve.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

** Warning: This article contains spoiler information. Do not read further if you don’t want the story revealed. I wrote this article for people who have seen the movie. **

I was disappointed in Tron Legacy at first. I didn’t get the same thrill out of it that I got out of Tron when I first saw it at age 14. In some ways it met my expectations. Based on the previews, I figured it would suggest that technologists have gotten obsessive about technology, and they need to “get out more.” It did that, but at first blush it appeared to do nothing more. I thought about what I had watched, and some things came more into focus that made me like it a lot more.

I’ve read a couple movie reviews on it, and I feel as though they missed the point, but that should not be surprising. Like with the original Tron, this movie works on a few levels. If you are a typical moviegoer, the movie’s story line will entertain you, and the special effects will dazzle you. A couple reviews have said that it follows the story line of a particular kind of a fairy tale. I think it does, but this is just superficial.

With Tron in 1982, the “surface” story was a bit like the movie The Incredible Shrinking Man in that a character is transported into a micro-reality, and everything that once appeared small and insignificant became huge and menacing. The main character, Kevin Flynn, had to face the games he created, inside the system created by his former employer. A virtual, mysterious and reclusive master overlord (the MCP) sought to grab up other entities (called “programs”) and systems to merge with itself, so it could become more powerful. A recurring theme was a kind of atheism. Programs were expected to believe that only their own reality existed, that there were no such things as “users,” (something greater than themselves, off in another reality they could not relate to, but which had a direct relationship with them). This was so that the programs would feel helpless, and would not fight the overlord. Flynn, a user, is sucked in because the system is so arrogant it thinks it can defeat him as well.

The message embedded in the film, which technologists would understand, was political: Were we going to have centralized control of computing, which had implications for our rights, or was computer access going to be democratized (a “free system”), so that people could have transparent access to their alter-egos inside the computer world? This was a futuristic concept in one sense, because most people were not aware of this relationship, even though it was very real at the time (but not in the sense of “little computer people”). I thought of it as expressing as well that the computer world reflected the consciousness of whichever user had the most influence over it (ie. Dillinger).

The director of “Tron,” Steven Lisberger, talked about how we had alter-egos in the computer world in the form of our tax records, and our financial transactions, and that in the future this alter-ego was only going to grow in its sophistication as more data was gathered about us. The future that was chosen largely agrees with the preferred outcome in the movie. Though we have this alter-ego that exists in the computer world, computer access was democratized, just not quite in the way the movie predicted.

There was a metaphysical message that’s more universal: Just as computer programs have users, perhaps we have “users” as well in some reality to which we can’t relate. The creators of the movie deliberately tried to make the real world look a little like the computer world to make this point. The theme that Lisberger has talked about many times now is that perhaps we all have a “better self,” and the question is are we going to strive to access that better self, or are we going to go through life never trying to get in touch with it?

What drew me into “Tron” when I first saw it in about 1983 was the idea that in the computer world things could be shaped by our thoughts and consciousness. I had a feel for that, since I had started programming computers 2 years earlier. Dr. Walter Gibbs’s confrontation with Dillinger particularly resonated with me:

You can remove men like Alan and me from the system, but we helped create it! And our spirit remains in every program we design for this computer!

Tron Legacy is a decidedly different movie from the old Tron. It has some elements that are reminiscent of it, but the message is different. I won’t talk too much about the fairy tale aspect, but instead focus on the message that I think is meant for technologists. This will be my own interpretation of it. This is what it inspired for me.

Instead of complaining about current conditions as if they had no antecedent, the movie subtly complains about a problem in our world that has existed ever since “Tron” was made: the legacy of the technical mentality that came into dominance at the same time that computer access was democratized.

On the surface, in the real world (in the movie), the computer industry is slouching towards cynical commercialism. Kevin Flynn disappeared 21 years earlier, leaving behind his son, Sam. Encom lost its visionary, and innovation at the company gradually slowed. In the present, the idea of technological innovation is dead. Encom is set to release yet another version of its operating system (Version 12), which they claim is the most secure they’ve ever released. Alan Bradley, a member of the board, asks something to the effect of, “What’s really new about this product?” He’s told, “We put the number 12 on it.” They decide to sell the OS commercially (as I recall, it was given away freely in prior versions, according to the history told in the movie). Alan is part of the company, but he doesn’t have much power. Instead of talking about what their R&D has produced (if any R&D existed), one of the executives touts the fact that Encom stock will soon be traded 24 hours a day, all around the world. The company has lost its soul, and is now only concerned with coasting, milking its past innovation for all it’s worth.

Sam exists outside Encom for the most part, but is the company’s largest stockholder. In a nod to the free software crowd, when he hears about the Encom announcement, he decides to break into the company (like his father did many years earlier), hack into its data center and make the operating system freely available on the internet (odd…I thought the operating system was the most secure ever…), dashing the company’s plans to sell it. Shortly thereafter, Alan shows up at Sam’s garage apartment, telling him he received a page from an old number his father used at the “Flynn’s” arcade. Sam is alienated and uninterested, saying his father is dead. He seems lost, and without purpose. His only involvement in the story is to create mischief. Going deeper into this, we can see in mischief a desire to be involved, to change things, and yet not take responsibility for it, to not really try to do better. Maybe the reason is there’s a sense of incompatibility with one’s surroundings, but the mischief makers can’t quite put their finger on what the problem is. So their only answer is to attack what is.

For years Alan said that Flynn was still alive. He persists with Sam, saying there must be a good reason his father disappeared, that it wasn’t because he had intentionally abandoned him. He throws Sam the keys to the old arcade, and thus the voyage “down the rabbit hole” begins…

Inside the computer world, Sam goes through an initiation similar to the one his father went through in “Tron,” and then he is entered into gladiatorial games–most of the same games that his father competed in, only more advanced and modernized. Sam competes well, and survives. After an escape from “the game grid” much like the one his father pulled off in the original movie (except with the help of a computer character named Quorra), Sam meets his father, Kevin Flynn, in an isolated cave (though with very nice accommodations). The look of this “cave” is reminiscent of the end scene in 2001: A Space Odyssey. I won’t go into the details of what Sam and Kevin talk about. What I found interesting was that Kevin had spent a significant amount of time studying philosophy. Based on this background, he plays the role of a wise, though defeated, sage.

Kevin tells the story of how he became trapped in a world of his own creation (rather like in “Tron,” but this time Kevin never found a way out). A theme that emerges is the danger of perfectionism, a seductive quality of computer systems. This is embodied in a program Kevin created, named Clu. In the beginning of the computer world, Clu was helpful. As the system was being built from the inside, some mysterious entities “emerged” in the system. Kevin called them “isomorphs.” He marveled at them, and hoped they would become a part of the system. Their programming had such complexity and sophistication he had trouble understanding their makeup.

I recognize the idea of “emergence” from my days studying CS in college. There were many people back then who had this romantic idea that as the internet grew larger and larger, an “intelligence” would eventually “emerge” out of the complexity.

Later in the history told in the movie, Clu turned dogmatic about perfection. He killed off the isomorphs, and threatened Kevin. Kevin tried fighting Clu, but the more he did so, the stronger Clu got. So he hid in his cave, all this time. Meanwhile Clu built the game grid into his vision of “the perfect system.” Everything is “perfect” in his world. One would think this is ideal, but there is a flip side. Imperfections are rejected. Eventually Kevin came to understand that his desire to create “the perfect system” led to one that’s hostile, not utopian as he had imagined. He realizes he made a mistake. There is an interesting parallel between this story line and what happened with Encom, and indeed what happened with the computer industry in our world. By being trapped in his own system, being exposed to the isomorphs, and seeing how his vision was incompatible with this wonderful and mysterious new entity, and himself, Kevin is forced to come face to face with himself, and the vision he had for the computer world. He is given the opportunity to reconsider everything.

There were some subtle messages conveyed. I noticed that anytime one of the programs in the gladiatorial games, or one of Clu’s henchmen got hit with a weapon, or hit a barrier, they died instantly–derezzing. However, with Quorra, Kevin’s companion in the cave, when she gets hurt in a fight, the damaged part of her derezzes, but she remains alive. What this communicated to me is that Kevin and Clu imposed designs on the system whose only purpose was to serve a single goal. They imposed an image of perfection on everything they created, which meant that any imperfection that was introduced into one of these programs (a “wound”) caused it to fall apart and die. Is this not like our software?

Quorra was not created by Flynn, and her system did not demand perfection. She was fault-tolerant. If a piece of her system was damaged, the rest of her was affected (she goes into a “dormant” state), but she did not die. Sam realizes after she is damaged that Quorra is an isomorph. The last of her kind.

Reading an article just recently on Category Theory and its application to programmable systems, called “Programmers go bananas,” by José Ortega-Ruiz, I realized that “isomorph” is a term used in mathematics. Just translating the term, it means “equal form,” but if you read the article, you’ll get somewhat of a sense of what Quorra and the isomorphs represented:

A category captures a mathematical world of objects and their relationships. The canonical example of a category is Set, which contains, as objects, (finite) sets and, as arrows, (total) functions between them. But categories go far beyond modeling sets. For instance, one can define a category whose objects are natural numbers, and the ‘arrows’ are provided by the relation “less or equal” (that is, we say that there is an arrow joining two numbers a and b if a is less or equal than b). What we are trying to do with such a definition is to somehow capture the essence of ordered sets: not only integers are ordered but also dates, lemmings on a row, a rock’s trajectory or the types of the Smalltalk class hierarchy. In order to abstract what all those categories have in common we need a way to go from one category to another preserving the shared structure in the process. We need what the mathematicians call an isomorphism, which is the technically precise manner of stating that two systems are, in a deep sense, analogous [my emphasis]; this searching for commonality amounts to looking for concepts or abstractions, which is what mathematics and (good) programming is all about (and, arguably, intelligence itself, if you are to believe, for instance, Douglas Hofstadter‘s ideas).

Ortega-Ruiz went on to talk about relationships between objects and categories being isomorphic if an object, or a set of objects, in a category O could be transformed into another object/category O’, and back to O again. In other words, there was a way to make two different entities “equal,” or equivalent to each other, via transforming functions (or functors). I think perhaps this is what they were getting at in the movie. Maybe an isomorph was equivalent to a biological entity, perhaps even a human, in the computer world, but in computational terms, not biological.
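To make the mathematical idea a bit more concrete, here’s a minimal sketch in C (my own illustration; the names Point, to_array, and from_array are made up, and this is not code from the article or the movie). It shows the round-trip property Ortega-Ruiz describes: two different representations of the same information, and a pair of functions that map one onto the other without losing anything along the way.

#include <assert.h>

/* Two different "shapes" for the same information: a struct and a flat array. */
typedef struct { int x; int y; } Point;

/* f : Point -> array */
static void to_array(Point p, int out[2]) { out[0] = p.x; out[1] = p.y; }

/* g : array -> Point */
static Point from_array(const int in[2]) { Point p = { in[0], in[1] }; return p; }

int main(void) {
    Point p = { 3, 7 };
    int a[2];

    to_array(p, a);
    Point q = from_array(a);

    /* The round trip loses nothing: g(f(p)) gives back p. A pair of maps
       that preserves all the structure in both directions is what makes
       the two representations isomorphic, literally "equal form." */
    assert(q.x == p.x && q.y == p.y);
    return 0;
}

Trivial as it is, that’s the sense in which two very different-looking systems can be “the same”: everything essential survives the translation in both directions.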

I offer a few quotes from my post, “Redefining computing, Part 2,” to help fill in the picture some more regarding the biological/computational analogy. In that post, I used Alan Kay’s keynote address at OOPSLA ’97, called “The Computer Revolution Hasn’t Happened Yet.” The goal of his presentation was to talk about software and network architecture, and he used a biological example as a point of inspiration, specifically an E. coli bacterium. He starts by talking about the small parts of the bacterium:

Those “popcorn” things are protein molecules that have about 5,000 atoms in them, and as you can see on the slide, when you get rid of the small molecules like water, and calcium ions, and potassium ions, and so forth, which constitute about 70% of the mass of this thing, the 30% that remains has about 120 million components that interact with each other in an informational way, and each one of these components carries quite a bit of information [my emphasis]. The simple-minded way of thinking of these things is it works kind of like OPS5 [OPS5 is an AI language that uses a set of condition-action rules to represent knowledge. It was developed in the late 1970s]. There’s a pattern matcher, and then there are things that happen if patterns are matched successfully. So the state that’s involved in that is about 100 Gigs. … but it’s still pretty impressive as [an] amount of computation, and maybe the most interesting thing about this structure is that the rapidity of computation seriously rivals that of computers today, particularly when you’re considering it’s done in parallel. For example, one of those popcorn-sized things moves its own length in just 2 nanoseconds. So one way of visualizing that is if an atom was the size of a tennis ball, then one of these protein molecules would be about the size of a Volkswagon, and it’s moving its own length in 2 nanoseconds. That’s about 8 feet on our scale of things. And can anybody do the arithmetic to tell me what fraction of the speed of light moving 8 feet in 2 nanoseconds is?…[there’s a response from the audience] Four times! Yeah. Four times the speed of light [he moves his arm up]–scale. So if you ever wondered why chemistry works, this is why. The thermal agitation down there is so unbelievably violent, that we could not imagine it, even with the aid of computers. There’s nothing to be seen inside one of these things until you kill it, because it is just a complete blur of activity, and under good conditions it only takes about 15 to 18 minutes for one of these to completely duplicate itself. …

Another fact to relate this to us, is that these bacteria are about 1/500th the size of the cells in our bodies, which instead of 120 million informational components, have about 60 billion, and we have between 10^12, maybe 10^13, maybe even more of these cells in our body.

So to a person whose “blue” context might have been biology, something like a computer could not possibly be regarded as particularly complex, or large, or fast. Slow. Small. Stupid. That’s what computers are. So the question is how can we get them to realize their destiny?

So the shift in point of view here is from–There’s this problem, if you take things like doghouses, they don’t scale [in size] by a factor of 100 very well. If you take things like clocks, they don’t scale by a factor of 100 very well. Take things like cells, they not only scale by factors of 100, but by factors of a trillion. And the question is how do they do it, and how might we adapt this idea for building complex systems?

So a lot of the problem here is both deciding that the biological metaphor [my emphasis] is the one that is going to win out over the next 25 years or so, and then committing to it enough to get it so it can be practical at all of the levels of scale that we actually need. Then we have one trick we can do that biology doesn’t know how to do, which is we can take the DNA out of the cells, and that allows us to deal with cystic fibrosis much more easily than the way it’s done today. And systems do have cystic fibrosis, and some of you may know that cystic fibrosis today for some people is treated by infecting them with a virus, a modified cold virus, giving them a lung infection, but the defective gene for cystic fibrosis is in this cold virus, and the cold virus is too weak to actually destroy the lungs like pneumonia does, but it is strong enough to insert a copy of that gene in every cell in the lungs. And that is what does the trick. That’s a very complicated way of reprogramming an organism’s DNA once it has gotten started.

Recall that when Kevin works on Quorra’s damaged body, he brings up a model of her internal programming, which looks like DNA. Recall as well that when Alan Bradley talked to Sam about the page he got, he told Sam about a conversation he had with Kevin Flynn before he disappeared. Kevin said that he had found something that was revolutionary, that would change science, religion, medicine, etc. I can surmise that Kevin was talking about the isomorphs. When I thought back on that, I thought about what I quoted above.

Moving on with Alan Kay’s presentation, here’s a quote that gets close to what I think is the heart of the matter for “Tron Legacy.” Kay brings up a slide that on one side has a picture of a crane, and on the other has a picture of a collection of cells. More metaphors:

And here’s one that we haven’t really faced up to much yet, that now we’ll have to construct this stuff, and soon we’ll be required to grow it. [my emphasis] So it’s very easy, for instance, to grow a baby 6 inches. They do it about 10 times in their life. You never have to take it down for maintenance. But if you try and grow a 747, you’re faced with an unbelievable problem, because it’s in this simple-minded mechanical world in which the only object has been to make the artifact in the first place, not to fix it, not to change it, not to let it live for 100 years.

So let me ask a question. I won’t take names, but how many people here still use a language that essentially forces you–the development system forces you to develop outside of the language [perhaps he means “outside the VM environment”?], compile and reload, and go, even if it’s fast, like Virtual Cafe (sic). How many here still do that? Let’s just see. Come on. Admit it. We can have a Texas tent meeting later. Yeah, so if you think about that, that cannot possibly be other than a dead end for building complex systems, where much of the building of complex systems is in part going to go to trying to understand what the possibilities for interoperability is with things that already exist.

Now, I just played a very minor part in the design of the ARPANet. I was one of 30 graduate students who went to systems design meetings to try and formulate design principles for the ARPANet, also about 30 years ago, and if you think about–the ARPANet of course became the internet–and from the time it started running, which is around 1969 or so, to this day, it has expanded by a factor of about 100 million. So that’s pretty good. Eight orders of magnitude. And as far as anybody can tell–I talked to Larry Roberts about this the other day–there’s not one physical atom in the internet today that was in the original ARPANet, and there is not one line of code in the internet today that was in the original ARPANet. Of course if we’d had IBM mainframes in the orignal ARPANet that wouldn’t have been true. So this is a system that has expanded by 100 million, and has changed every atom and every bit, and has never had to stop! That is the metaphor we absolutely must apply to what we think are smaller things. When we think programming is small, that’s why your programs are so big!

Another thing I thought of, relating to the scene where Quorra is damaged, and then repaired, is the way the Squeak system operates, just as an example. It’s a fault-tolerant system that, when a thread encounters a fault, pauses the program, but the program doesn’t die. You can operate on it while it’s still “alive.” The same goes for the kernel of the system if you encounter a “kernel panic.” The whole system pauses, and it’s possible to cancel the operation that caused the panic, and continue to use the system. As I thought back on the movie, I could not help but see the parallels to my own experience learning about this “new” way of doing things.
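C has nothing like a live Smalltalk image, but here’s a toy approximation of the attitude (my own example, with made-up names; it’s not how Squeak actually works): a fault becomes something to recover from, with the program’s state still intact, rather than a reason for the whole thing to die.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recovery_point;

/* Simulate a unit of work that sometimes "faults." */
static void process(int input) {
    if (input < 0)
        longjmp(recovery_point, 1);   /* signal the fault instead of crashing */
    printf("processed %d\n", input);
}

int main(void) {
    int inputs[] = { 4, -1, 9 };

    for (volatile int i = 0; i < 3; i++) {
        if (setjmp(recovery_point) == 0) {
            process(inputs[i]);
        } else {
            /* The program is still alive, its state is intact, and we can
               decide what to do about the bad item and keep going. */
            printf("fault on item %d; recovering and continuing\n", i);
        }
    }
    return 0;
}

In Squeak the recovery is far richer than this (you get a debugger on the live objects, and you can fix things and resume), but the basic attitude is the same: an error is a pause, not a death sentence.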

The problem with creating a “perfect system” is that it requires the imposition of rules that demand perfection. This makes everything brittle. The slightest disturbance violates the perfection and causes the whole thing to fall apart. Also, perfection is limiting, because it must have criteria, and those criteria must necessarily be limited, because we do not know everything, and never will. Not that striving for perfection is itself bad. Having a high bar, even if it’s unachievable, can push us to strive to be the best that we can be. However, if we’re creating systems for people to use, demanding perfection in the system in effect demands perfection from the people who design and use it. The result is systems that are vulnerable and brittle, and people who feel scared to use them, because if they do something wrong, the system will go to pieces on them, or misinterpret their actions and do something they don’t want. It also allows hackers to exploit the human weaknesses of the software’s designers to cause havoc in their systems and/or steal information.

Just thinking this out into the story of “Tron Legacy,” Quorra’s characteristics were probably the reason Clu tried to kill off the isomorphs. He didn’t understand their makeup, or their motivation. Their very being did not have the same purpose as he had for the overall system. They were not designed to contribute to a singular goal. Instead, their makeup promoted self-sustaining, self-motivated entities. Their allowance for imperfection, their refusal to adhere to a singular goal, in his mind, was dangerous to the overall system goal.

The “end game” in the story goes like this: Sam’s entry into the system opened a portal inside the computer system back to the real world. This presents an opportunity for Clu. He has been secretly generating an army that he hopes to use to take over the real world, and he plans to use the opportunity of the open portal. This particular sequence brought to mind the formation of the Grand Army of the Republic in the second movie of the most recent Star Wars trilogy, Attack of the Clones. It also had a “sorcerer’s apprentice” (from Fantasia) quality about it, in that Kevin created something, but despite his intentions, his creation got out of his control and became dangerous.

To me, the main computer characters in the movie were metaphors for different philosophies of what computer systems are supposed to be. Clu and his army are symbolic of the dominant technical mentality in computing today that imposes perfection on itself, and humanity, a demand that can never completely be satisfied. Quorra represents a different idea, and I’ve occasionally puzzled over it. Were the origins of the isomorphs meant to convey an evolutionary process for computer systems whereby they can evolve on their own, or was that just a romantic idea to give the story a sense of mystery? The best way I can relate to what Quorra is “made of” is that she and the isomorphs are a computational “organism” that is made up of elements in a sophisticated architecture that is analogous to human anatomy, in terms of the scale her software is able to achieve, the complexity it is able to encompass, the intelligence she has, her curiosity, and her affinity for humanity. She represents the ideal in the story, in contrast to Flynn’s notions of perfection.

I think the basic idea of the movie is that in the early going, people like Flynn were smart, witty, and clever. They were fascinated by what they could create in the computer world, and they could create a lot, but it was divorced from humanity, and they became fascinated by the effort of trying to make that creation better. What they missed is that the ideal creation in the computer world is more like us, both in terms of its structure and intelligence.

The final scene in the movie was a bit of a surprise. At first blush it was the most disappointing part of it for me, because I wondered, “How did they make *that* work??” I was tempted to extend the “isomorph” idea still further as I wrote this post, to say that a computer entity could be transformed into a human, just as the term “isomorphic” suggests, but I don’t know if that’s what the creators of the movie were going for. After all, Clu thought he could bring an entire army of computer entities into the real world, including himself. How was that going to work? I think that interpretation was too literal. Going along with the idea that Quorra is a symbolic character, what I took away from it was that Sam was taking this new idea out of Flynn’s “enclave,” and bringing it into the world. He said to Alan Bradley after they got out that he was going to take the helm at Encom. Sam had found a purpose in his life. It was a message of hope. It was a way for him to honor his father, to learn from his mistakes, and try to do better. There’s also a hopeful message that computers will join us in our world, and not require us to spend a lot of time in the world we’ve created inside the computer.

“Tron Legacy” asks technologists to reassess what’s been created at a much deeper technical level than I expected. It does not use technical jargon much, but subtly suggests some very sophisticated ideas. The philosophical issues it presents have deep implications for technology that are much more involved than how our computer access is organized (a technical theme of the first movie). It prompts us to ask uncomfortable, fundamental questions about “what we have wrought” in our information age, not in the sense of the content we have produced, but about how we have designed the systems we have given to the world. It also prompts us to ask uncomfortable questions about, “What do we need to do to advance this?” How do we get to a point where we can create the next leap that will bring us closer to this ideal? I think it dealt with issues which I have talked about on this blog, but extends far beyond them.

It will be interesting to see the DVD when it comes out. I wonder what the creators of the movie will say about their inspirations for what they put into it. I have not played the video game Tron Evolution, which I’ve heard tells the story from the time of the first movie up to “Tron Legacy.” Maybe it tells a very different story.

I leave you with a PC graphics/sound demo called “Memories from the MCP,” created in 2005 by a group called “Brain Control”. It looks pretty cool, and is vaguely Tron-ish. It has a “Tron Legacy” look and sound to it.

Edit 12-14-2012: I noticed there was some traffic coming to this post from Reddit. I looked at the discussion going on there, and someone referred to this video, created by Bob Plissken, called “Tron: Fight For The Users.” It kind of dovetails with what I talk about here, but it offers a different perspective. I like that it shows how both “Tron” and “Tron Legacy” try to get across similar messages. I don’t entirely agree with the idea that “we are the users of ourselves.” I’d say we are currently the users of notions of system organization, given to us by designers. These notions have had implications, which we have seen play out over the last 20 years in particular. Still, I like this message for its depth of perception. This sort of deep look at the symbolic implications of the stories, as they relate to the technological society we live in, is the way I like to look at the Tron movies.

At the end it refers people to a blog called Yori Lives. I checked it out, and a post called “Tron and the Future” nicely explains the message in this video.

On another subject, there’s been a lot of buzz lately about a sequel to “Tron Legacy” (for lack of a better term, it seems people are calling it “Tron 3”). Apparently there is some reason to be excited about it, as there’s been news that Disney is actively pursuing this. Yori Lives also talks about it.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

A few months ago Apple made some controversial changes to its App Store developer terms, which were seen as overly restrictive. The scuttlebutt was that the changes were aimed at banning Adobe Flash from the iPhone, since Apple had already said that Flash was not going to be included, nor allowed on the iPad. The developer terms said that all apps. which could be downloaded through the App Store had to be originally written only in C, C++, Objective-C, or Javascript compatible with Apple’s WebKit engine. No cross-compiled code, private libraries, translation or compatibility layers were allowed. Apple claimed they made these changes to increase the stability and security of the iOS environment for users. I considered this a somewhat dubious claim given that they were allowing C and C++. We all know the myriad security issues that Microsoft has had to deal with as a result of using C and C++ in Windows, and in the applications that run on it. A side-effect I noticed was that the terms also banned Squeak apps., since Squeak’s VM source code is written in Smalltalk and is translated to C for cross-compilation. In addition any apps. written in it are originally written in Smalltalk (typically), and are executed by the Squeak VM, which would be considered a translation layer. The reason this was relevant was that someone had ported Squeak to the iPhone a couple years ago, and had developed several apps. in it.

The Weekly Squeak revealed today that Apple has made changes to its App Store terms, and they just so happen to allow Squeak apps. As Daring Fireball has reported, Apple has removed all programming language restrictions. They have even removed the ban on “intermediary translation or compatibility layers”. The one caveat is they do not allow App Store apps. to download code. So if you’re using an interpreter in your app., the interpreter and all of the code that will execute on it must be included in the package. (Update 9-12-10: Justin James pointed out that the one exception to this rule is Javascript code which is downloaded and run by the WebKit engine). This still restricts Squeak some, because it would disallow users or the app. from using something like SqueakMap or SqueakSource as a source for downloading code into a Squeak image, but it allows the typical stand-alone application case to work.

John Gruber, the author of Daring Fireball, speculates that these new rules could allow developers to use Adobe’s Flash cross-compiler, which Adobe had scuttled when Apple imposed the previous restrictions. John said, “If you can produce a binary that complies with the guidelines, how you produced it doesn’t matter.” Sounds right to me.

However, looking over the other terms that John excerpts from the license agreement gives me the impression that Apple still hasn’t figured everything out yet about what it will allow, and what it won’t allow, in the future. It has this capricious attitude of, “Just be cool, bro.” So things could still change. That’s the thing that would be disappointing to me about this if I were an iPhone developer right now. I got the impression when the last license terms came out that Apple hadn’t really thought through what they were doing. While I get a better impression about the recent changes, I still have a sense that they haven’t thought everything through. To me the question is why? I guess it’s like what Tom R. Halfhill once told me, that Steve Jobs never understood developers, even back in the days when the company was young. Steve Wozniak was the resident “master developer” in those days, and he had Jobs’s ear. Once he left Apple in the 1980s that influence was gone.

Read Full Post »

See Part 1, Part 2, Part 3

The real world

Each year while I was in school I looked for summer internships, but had no luck. The economy sucked. In my final year of school I started looking for permanent work, and I felt almost totally lost. I asked CS grads about it. They told me, “You’ll never find an entry-level programming job.” They had all landed software testing jobs as their entree into corporate software production. Something inside me said this would never do. I wanted to start with programming. I had the feeling I would die inside if I took a job where all I did was test software. About a year after I graduated I was proved right when I took up test duties at my first job. My brain became numb with boredom. Fortunately that’s not all I did there, but I digress.

In my final year of college I interviewed with some major employers who came to my school: Federal Express, Tandem, Microsoft, NCR. I wasn’t clear on what I wanted to do. It was a bit earth-shattering. I had gone into CS because I wanted to program computers for my career. I didn’t face the “what” (what specifically did I want to do with this skill?) until I was about ready to graduate. I had so many interests. When I entered school I wanted to do application development. That seemed to be my strength. But since I had gone through the CS program, and found some things about it interesting, I wasn’t sure anymore. I told my interviewer from Microsoft, for example, that I was interested in operating systems. What was I thinking? I had taken a course on linguistics, and found it pretty interesting. I had taken a course called Programming Languages the previous year, and had a similar level of interest in it. I had gone through the trouble of preparing for a graduate-level course on language compilers, and I was taking it at the time of the interview. It just didn’t occur to me to bring any of that up.

None of my interviews panned out. In hindsight, it was good this happened. Most of them didn’t really suit my interests. The problem was, who did?

Once I graduated with my Bachelor’s in CS in 1993, and had an opportunity to relax, some thoughts settled in my mind. I really enjoyed the Programming Languages course I had taken in my fourth year. We covered Smalltalk for two weeks. I thoroughly enjoyed it. At the time I had seen many want ads for Smalltalk, but they were looking for people with years of experience. I looked for Smalltalk want ads after I graduated. They had entirely disappeared. Okay. Scratch that one off the list. The next thought was, “Compilers. I think I’d like working on language compilers.” I enjoyed the class and I reflected on the fact that I enjoyed studying and using language. Maybe there was something to that. But who was working on language compilers at the time? Microsoft? They had rejected me from my first interview with them. Who else was there that I knew of? Borland. Okay, there’s one. I didn’t know of anyone else. I got the sense very quickly that while there used to be many companies working on this stuff, it was a shrinking market. It didn’t look promising at the time.

I tried other leads, and thought about other interests I might have. There was a company nearby called XVT that had developed a multi-platform GUI application framework (for an analogy, think wxWindows), which I was very enthusiastic about. While I was in college I talked with some fellow computer enthusiasts on the internet, and we wished there was such a thing, so that we didn’t have to worry about what platform to write software for. I interviewed with them, but that didn’t go anywhere.

For whatever reason it never occurred to me to continue with school, to get a master’s degree. I was glad to be done with school, for one thing. I didn’t see a reason to go back. My undergrad advisor subtly chided me once for not wanting to advance my education. He said, “Unfortunately most people can find work in the field without a master’s,” but he didn’t talk with me in depth about why I might want to pursue that. I had this vision that I would get my Bachelor’s degree, and then it was just a given that I was going to go out into private industry. It was just my image of how things were supposed to go.

Ultimately, I went to work in what seemed like the one industry that would hire me: IT software development. My first big job came in 1995. At first it felt like my CS knowledge was very relevant, because I started out working on product development at a small company. I worked on adding features to, and refactoring, a reporting tool that used scripts for report specification (what data to get and what formatting was required). Okay, so I was working on an interpreter instead of a compiler. It was still a language project. That’s what mattered. Aside from having to develop it on MS-DOS (UGH!), I was thrilled to work on it.

It was very complex compared to what I had worked on before. It was written in C. It had more than 20 linked lists it created, and some of them linked with other lists via pointers! Yikes! It was very unstable. Anytime I made a change to it I could predict that it was going to crash on me, freezing up my PC and forcing me to reboot the machine. And we think now that Windows 95 was bad about this… I got so frustrated with this that I spent weeks trying to build some robustness into it. I finally hit on a way to make it crash gracefully by using a macro, which I used to check every single pointer reference before it got used.
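The macro was something along these lines (a hypothetical sketch of the technique, not the actual code; CHECK_PTR, Node, and print_report_line are names made up for illustration): check a pointer before every use, and if it’s bad, report where the problem happened and shut down cleanly instead of scribbling on memory and locking up the machine.

#include <stdio.h>
#include <stdlib.h>

/* Check a pointer before using it; on failure, report the file and line
   and exit cleanly rather than dereferencing garbage. */
#define CHECK_PTR(p)                                                   \
    do {                                                               \
        if ((p) == NULL) {                                             \
            fprintf(stderr, "Bad pointer at %s, line %d\n",            \
                    __FILE__, __LINE__);                               \
            exit(EXIT_FAILURE);                                        \
        }                                                              \
    } while (0)

typedef struct Node {
    struct Node *next;
    char        *text;
} Node;

/* Every dereference goes through the check first. */
static void print_report_line(const Node *n) {
    CHECK_PTR(n);
    CHECK_PTR(n->text);
    printf("%s\n", n->text);
}

int main(void) {
    Node tail = { NULL, "total: 42" };
    Node head = { &tail, "monthly report" };

    print_report_line(&head);
    print_report_line(&tail);
    return 0;
}

A check like this only catches null pointers, not pointers into freed or garbage memory, but even that much turns a hard freeze into a readable error message.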

I worked on other software that required a knowledge of software architecture, and the ability to handle complexity. It felt good. As in school, I was goal-oriented. Give me a problem to solve, and I’d do my best to do so. I liked elegance, so I’d usually try to come up with what I thought was a good architecture. I also made an effort to comment well to make code clear. My efforts at elegance usually didn’t work out. Either it was impractical or we didn’t have time for it.

Fairly quickly my work evolved away from product development. The company I worked for ended up discarding a whole system they’d spent two years developing. The reporting tool I worked on was part of that. We decided to go with commodity technologies, and I got more into working with the regular patterns of IT software production.

I got a taste for programming for Windows, and I was surprised. I liked it! I had already developed a bias against Microsoft software at the time, because my compatriots in the field had nothing but bad things to say about their stuff. I liked developing for an interactive system though, and Windows had a large API that seemed to handle everything I needed to deal with, without me having to invent much of anything to make a GUI app. work. This was in contrast to GEM on my Atari STe, which was the only GUI API I knew about before this.

My foray into Windows programming was short lived. My employer found that I was more proficient in programming for Unix, and so pigeon-holed me into that role, working on servers and occasionally writing a utility. This was okay for a while, but I got bored of it within a couple years.

Triumph of the Nerds

Around 1996 PBS showed another mini-series, on the history of the microcomputer industry, focusing on Apple, Microsoft, and IBM. It was called Triumph of the Nerds, by Robert X. Cringely. This one was much easier for me to understand than The Machine That Changed The World. It talked about a history that I was much more familiar with, and it described things in terms of geeky fascination with technology, and battles for market dominance. This was the only world I really knew. There weren’t any deep concepts in the series about what the computer represented, though Steve Jobs added some philosophical flavor to it.

My favorite part was where Cringely talked about the development of the GUI at Xerox PARC, and then at Apple. Robert Taylor, Larry Tesler, Adele Goldberg, John Warnock, and Steve Jobs were interviewed. The show talked mostly about the work environment at Xerox (how the researchers worked together, and how the executives “just didn’t get it”), and the Xerox Alto computer. There was a brief clip of the GUI they had developed (Smalltalk), and Adele Goldberg briefly mentioned the Smalltalk system in relation to the demo Steve Jobs saw, though you’d have to know the history better to really get what was said about it. Superficially one could take away from it that Xerox had developed the GUI, and Apple used it as inspiration for the Mac, but there was more to the story than that.

Triumph of the Nerds also showed the 1984 unveiling of the first Macintosh, which was the first time I had seen it. I had read about it shortly after it happened, but I saw no pictures and no video. It was really neat to see. Cringely managed to give a feel for the significance of that moment.

Part 5

Read Full Post »

Older Posts »