See Part 1, Part 2

This post has been a long time coming. I really thought I was going to get it done about this time last year, but I got diverted into other research that I hope to convey on this blog in the future. I also found more sources to explore for the portion on Xerox PARC. I’ve been as faithful as I can to the history, but there’s always a possibility I made some mistakes. I welcome corrections from those who know better. 🙂

This is not a complete history of computing in the period I cover. I mean to convey the closing years of ARPA’s “great leaps” in ideas about computing, and then cover the attempt to continue those “great leaps” privately, at Xerox PARC.

My primary source material for this part is the book, “The Dream Machine,” by M. Mitchell Waldrop. I’ll just refer to it as “TDM.”

I’ve been dividing up this series roughly into decades, or “eras.” Part 1 focused on the 1940s and 50s. Part 2 focused on the 1960s. This part is devoted to the 1970s.

In Part 2 I covered the creation of the Advanced Research Projects Agency (ARPA) at the Department of Defense, and the IPTO (Information Processing Techniques Office, a program within ARPA focused on computer research), and the many innovative projects in which it had been engaged during the 1960s.

Decline at the IPTO

There were signs of “trouble in paradise” at the IPTO, beginning around 1967, with the acceleration of the Vietnam War. Charles Herzfeld stepped down as ARPA director that year, and was replaced by Eberhardt Rechtin. There was talk within the Department of Defense of ending ARPA altogether. Defense Secretary Robert McNamara did not like what he saw happening in the program. Money was starting to get tight. People outside of ARPA had lost track of its mission. It was recognized for producing innovative technology, but mostly the kind of technology that academics could get fascinated with; little of it was being translated into technologies the military could use. Its work with the academic community also created political problems for it within the military ranks, because academia was viewed as being opposed to the war.

Rechtin cut a lot of programs out of ARPA. Some projects were transferred to other funders, or were spun off into their own operations. He wanted to see operational technology come out of the agency. The IPTO had some allies in John Foster, the Director of Defense Research & Engineering (DDR&E) and ARPA’s direct supervisor, and Stephen Lukasik, who had had the chance to use the Whirlwind computer (which I covered in Part 1). Lukasik was very excited about the potential of computer technology. He succeeded Rechtin as ARPA director in 1970. The IPTO’s budget remained protected, mainly, it seems, due to Lukasik pitching the Arpanet to his superiors as a critical technology for the military in the nuclear age (I covered the Arpanet in Part 2).

In 1970 a rule known as the Mansfield Amendment, authored by Democratic Senator Mike Mansfield, came into effect. For one year it mandated that all money spent by the Defense Department had to have some stated relevance to the country’s military mission. Wikipedia says that this rule was renewed in 1973, specifically targeting ARPA. Waldrop claims it was really a roundabout way of cutting defense spending, and that it didn’t mean anything in terms of the technology that was being developed at ARPA. Nevertheless, it had social reverberations within the agency’s working groups. In light of the controversy over the Vietnam War, it tarnished ARPA’s image in the academic community, because it required all projects funded through the agency to justify their existence in military terms. When non-military research proposals were sent to ARPA, military “relevance statements” would be written up and attached to them at the end of the approval process, unbeknownst to the researchers, in order to comply with the law. These relevance statements described the potential, or supposedly intended, military applications of the technology. This caused embarrassing moments when students found out about these statements through disclosures obtained under the Freedom of Information Act and confronted ARPA-funded researchers with them. This was often the first time the researchers had seen them. They’d be left explaining that while, yes, the military was funding their project, the focus of their research was peaceful. The relevance statements made it look otherwise. Second, it gave the working groups the impression that Congress was looking over their shoulder at everything they did. The sense that they were protected from interference had been pierced. This dampened enthusiasm all around, and made recruiting new students into ARPA projects more difficult.

ARPA’s name was changed to DARPA in 1972, adding the word “Defense” to its name. It didn’t mean anything as far as the agency was concerned, but given the academic community’s opposition to the Vietnam War, it didn’t help with student recruiting.

In 1972 the consulting firm Bolt Beranek and Newman (BBN), which had built the hardware for the Arpanet, wanted to commercialize its technology–to sell it to other customers besides the Defense Department. IPTO director Larry Roberts announced in 1973 that he was leaving DARPA to work for BBN’s new networking subsidiary, named Telenet. This created a big problem for Lukasik, because Roberts hadn’t groomed a deputy director, as past IPTO directors had done, so that there would be someone who could jump right in and replace him when he left. It was up to Lukasik to find a replacement, only that wasn’t so easy. He had created a bit of a monster within the IPTO. Every year he had pushed its budget higher, while other projects within DARPA were having their budgets cut. He tried to recruit a new director from within the IPTO’s research community, but everyone he thought would be suited for it turned him down. They were enjoying their research too much. Lukasik came upon J.C.R. Licklider, the IPTO’s founding director, as his last resort.

J.C.R. “Lick” Licklider, from ARS 327

Licklider (he liked to be called “Lick”), who I talked about extensively in Part 2, had returned to MIT and ARPA in 1968. He became a tenured professor of electrical engineering. He began his own ARPA project at MIT, called the Dynamic Modeling group, in 1971. He observed that the Multics project, which I covered in Part 2, nearly overwhelmed Project MAC’s software engineers, and almost resulted in Multics being cancelled. The goal of his project was to create software development systems that would make complex software engineering projects more comprehensible. As part of his work he got into computer graphics, looking at how virtual entities can be represented on the screen. He also studied human-computer interaction with a graphics display.

Larry Roberts tried to find someone besides Lick to be his replacement, more as an act of mercy on him, but in the end he couldn’t come up with anyone, either. He contacted Lick and asked him if he’d be his replacement. Lick reluctantly accepted, and returned to his old post as IPTO director in 1974.

Lick became an enthusiastic supporter of Ed Feigenbaum’s artificial intelligence/expert systems research. I talked a bit about Feigenbaum in Part 2. Through Lick’s guidance, Feigenbaum founded Stanford’s Heuristic Programming Project. Feigenbaum’s work would become the basis for all expert systems produced during the 1980s. Lick also supported Feigenbaum’s effort to integrate artificial intelligence (AI) into medical research, granting an Arpanet connection to Stanford. This allowed a community of researchers to collaborate in this field.

The IPTO’s favored place within DARPA did not last. Lukasik didn’t get along with his new superior at DDR&E, Malcolm Currie. They had different philosophies about what DARPA needed to be doing. Waldrop wrote, “Currie wanted solutions out of ARPA now,” and he wanted to replace Lukasik with someone of like mind.

Bob Taylor at the Xerox Palo Alto Research Center had his eye on Lukasik. He needed someone who understood how they did research, and how to talk to business executives. Taylor had been an IPTO director in the late ’60s (I cover his tenure there in Part 2). He brought its style of computer research to Xerox, but he could see that they were not set up to create products out of it. He thought Lukasik would be a perfect fit. Taylor had been recruiting many of DARPA’s brightest minds into PARC, and it was a source of frustration at the agency. Nevertheless, Lukasik was intensely curious to find out just what they were up to. He came out to see what they were building. PARC was accomplishing things that Lick had only dreamed of years earlier. Lukasik was so impressed, he left DARPA to join Xerox in 1975.

George Heilmeier, from Wikipedia

His replacement was George Heilmeier. Waldrop characterized him as hard-nosed, someone who took an applied science approach to research. He had little patience for open-ended, basic research–the kind that had been going on at DARPA since its inception. He was a solutions man. He wanted the working groups to identify goals, where they expected their research to end up at some point in time. His emphasis was not exploration, but problem solving. Though it was jarring to everybody at first, most program directors came to accommodate it. Lick, however, was greatly dismayed that this was the new rule of the day. He thought Heilmeier’s methods went against everything DARPA stood for. Lick liked the idea of finding practical applications for research, but he wanted solutions that were orders of magnitude better than older ones–world changing–not just short-term goals of improving old methods by ten percent. Further, he could see that micromanagement had truly entered the agency. It wasn’t just a perception anymore.

Heilmeier explained in a 1991 interview,

For all the wonderful technology IPTO had sponsored, [Heilmeier insisted], it was the worst mess in the agency. And artificial intelligence was the worst mess in IPTO. “You see, there was this so-called DARPA community, and a large chunk of our money went to this community. But when I looked at the so-called proposals, I thought, Wait a second; there’s nothing here. Well, Lick and I tangled professionally on this issue. He said, ‘You don’t understand. What you do is give good people the money and they go off and do good things and that’s it.’ I said, ‘Lick, I understand that. And these may be good people, but for the life of me I can’t tell you what they’re going to do. And I don’t know whether they are going to reinvent the wheel, because there’s no discussion of the current practice and there’s no discussion of the implications, so I can’t tell whether this is a wise investment for DoD or not.'” (TDM, p. 402)

Lick had a very good track record of picking research projects, using a more subjective, intuitive sense about people, and he didn’t believe you could achieve the same quality of research by trying to use a more objective method of selecting people and projects.

Bob Kahn, who had joined DARPA in 1972, explained the differences between them,

[The] fact was that … both men were right. “There were some things that were more conducive to George’s directed style,” says Kahn. “Packet radio, for example, and maybe the Internet. But speech understanding was not amenable. In fact, almost none of AI fit in. The problem was that George kept looking for a kind of road map to the field of AI. He wanted to know what was going to happen, on schedule, into the future, to make the field a reality. And he thought it was quite reasonable to ask for that road map, because he had no idea how hard it would be to produce. Suppose you were Lewis and Clark exploring the West, where you had no idea what you were going to encounter, and people wanted to know exactly what routes you were going to take, where you would camp, and what you would do out there. Well, this was just not an engineering job, where you could work out the whole plan. So George was looking for something that Lick couldn’t provide.”

Unfortunately, when Lick tried to explain that to his boss, it was a bit like Seymour Papert’s or Alan Kay’s trying to explain exploratory education to a back-to-basics hard-liner. “IPTO really didn’t have a program-management structure,” declares Heilmeier. “They had a financial management structure, and they had a cheering section.” (TDM, p. 403)

Interestingly, I found an article in EE Times that saw this from a very different perspective, saying that what DARPA had been running was a “good-old-boys network” (and it’s difficult to see how it’s not implicating Lick in this), and that Heilmeier snapped them into shape.

Lick tried to see it from Heilmeier’s perspective, that researchers should not see applications as a threat to basic research, and that they should make common cause with engineers, to bring their work into the world of people who will use their technology. He thought this could provide an avenue where researchers could receive feedback that would be valuable for their work, and would hopefully soften the antagonism that was being directed at open-ended research. Meanwhile, he’d quietly try to keep Heilmeier from destroying what he had worked so hard to achieve.

Heilmeier seemed to agree with this approach. He issued some challenges to the IPTO working groups of technical problems he saw needed to be solved. He said to them, “Look, if some of you guys would sign up for these challenges I can justify more fundamental work in AI.” Some of them did just as he asked, and got to work on the challenges.

Lick declared in 1975 that the Arpanet, a project which began at DARPA in 1967, was fully operational, and handed control of the network over to the Defense Communications Agency.

Lick had to kill funding for some projects, which was never easy for him. One in particular was Doug Engelbart’s NLS project at Stanford Research Institute. I talked about this project in Part 2, and what ended up happening with it. A lot of Engelbart’s talent had left to pursue the development of personal computing at Xerox PARC. From Lick’s and DARPA’s perspective, he just wasn’t able to innovate the way he had before. From Engelbart’s perspective, Lick had been captured by the hype around AI, and just wasn’t able to see the good he was doing. It was a sad parting of the ways for both of them. (Source: Nerd TV interview with Doug Engelbart)

Heilmeier was pleased with the changes at the IPTO. They had “gotten with the program,” and he thought they were producing good results. For Lick, however, the change was exhausting, and depressing. He left DARPA for good in late 1975, and returned to MIT, where his spirit soared again. This didn’t mean that he left computing behind. He came up with his own projects, and he encouraged students to play with computers, as he loved to do.

Bob Kahn at DARPA asked an important question. He agreed with Heilmeier’s “wire brushing” of the agency, acknowledging that some projects had become self-indulgent, but he cautioned against “too much of a good thing” in the other direction.

ARPA’s current obsession with “relevance” had come dangerously close to destroying what made the agency so special. Remember, says Kahn, when it came to basic computer research–the kind of high-risk, high-payoff work that might not mature for a decade or more–ARPA was almost the only game in town. The computer industry itself was oriented much more toward products and services, he says, which meant that “there was actually very little research going on that was as innovative as ARPA’s.” And while there were certainly some shining exceptions to that rule–notably [Xerox] PARC, IBM and [AT&T] Bell Labs–“many of the leading scientists and researchers couldn’t be supported that way. So if universities didn’t do basic research, where would industry get its trained people?”

Yet it was ARPA’s basic research that was getting cut. “The budget for basic R&D was only about one third of what it was before Lick came in,” says Kahn. “Morale in the whole computer-science community was very low.” (TDM, p. 417)

Kahn pursued a research initiative, anticipating the future of VLSI (Very-Large-Scale Integration) design and fabrication for microchips, in order to try to revive the basic research culture at the IPTO. He won approval for it in 1977. Through it, methods were developed for creating computer languages for designing VLSI chips. Kahn was also the co-developer, with Vint Cerf at DARPA, of TCP/IP, the protocol suite on which the internet would be built.

Heilmeier left DARPA in 1977, and was replaced by Bob Fossum, who had a management style that was more amenable to basic research. The question was, though, “Okay. This is good, but what about the next director?” It had been common practice for people at DARPA to cycle out of administrative positions every few years. Was it too good to last?

Kahn began the Strategic Computing Initiative in 1983, a $1 billion program (about $2.3 billion in today’s money) to continue his work on advancing computer hardware design, and to advance research in artificial intelligence. Fossum was replaced by Robert Cooper that same year. Cooper took Kahn’s research goals and reimplemented them as an agency-wide program, giving every part of DARPA a piece of it. According to Waldrop, Kahn was disgusted by this, and took it as his cue to leave.

The era of revolutionary research in computing at the agency seems to have come to an end at this point.

Here’s a video talking about what else was going on at DARPA during the period I’ve just covered:

Lick’s vision of the future

He would happily sit for hours, spinning visions of graphical computing, digital libraries, on-line banking and E-commerce, software that would live on the network and move wherever it was needed, a mass migration of government, commerce, entertainment, and daily life into the on-line world–possibilities that were just mind-blowing in the 1970s. (TDM, p. 413)

Lick had long been a futurist, and a very reliable one. In a book published in 1979, “The Computer Age: A Twenty-Year View,” he looked into the future, to the year 2000, at what he could see happening–if he thought optimistically–with a nationwide digital network that he called “the Multinet.” The term “Internet” was not in wide use yet, though work on TCP/IP, what Lick called the “Kahn-Cerf internetworking protocol,” had been in progress for several years. The internet wouldn’t come into being for another four years.

“Waveguides, optical fibers, rooftop satellite antennae, and coaxial cables, provide abundant bandwidth and inexpensive digital transmission both locally and over long distances. Computer consoles with good graphic display and speech input and output have become almost as common as television sets.”

Great. But what would all those gadgets add up to, Lick wondered, other than a bigger pile of gadgets? Well, he said, if we continued to be optimists and assumed that all this technology was connected so that the bits flowed freely, then it might actually add up to an electronic commons open to all, as “the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.” Indeed, he went on, looking back from the imagined viewpoint of the year 2000, “[the electronic commons] has supplanted the postal system for letters, the dial-tone phone system for conversations and tele-conferences, stand-alone batch processing and time-sharing systems for computation, and most filing cabinets, microfilm repositories, document rooms and libraries for information storage and retrieval.”

The Multinet would permeate society, Lick wrote, thus achieving the old MIT dream of an information utility, as updated for the decentralized network age: “many people work at home, interacting with coworkers and clients through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its cable television and electronic funds transfer functions, and a few receive delivery of small items through adjacent pneumatic tube networks . . . Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants.” Thanks to ironclad guarantees of privacy and security, Lick added, the Multinet would likewise offer on-line banking, on-line stock-market trading, on-line tax payment–the works.

In short, Lick wrote, the Multinet would encompass essentially everything having to do with information. It would function as a network of networks that embraced every method of digital communication imaginable, from packet radio to fiber optics–and then bound them all together through the magic of the Kahn-Cerf internetworking protocol, or something very much like it.

Lick predicted its mode of operation would be “one featuring cooperation, sharing, meetings of minds across space and time in a context of responsive programs and readily available information.” The Multinet would be the worldwide embodiment of equality, community, and freedom.

If, that is, the Multinet ever came to be. (TDM, p. 413)

He tended to be pessimistic that this all would come true. With the world as it was, he thought a more tightly controlled scenario was more likely, one where the Multinet did not get off the ground. He thought big technology companies would not get into networking, as it would invite government regulation. Communications companies like AT&T would see it as a threat to their business. Government, he thought, would not want to share information, and would rather use computer technology to keep proprietary files on people and corporations. He thought the only way his positive vision would come to pass was if hundreds of thousands, or millions, of people came to a consensus that an open Multinet was desirable. He felt this would require leadership from someone with a vision that agreed with this idea.

At this time there were packet switching network options offered by various technology companies, including IBM, Digital Equipment Corp. (DEC), and Xerox, but none of them offered openness. In fact, business customers wanted closed networks. They feared openness, due to possibilities of security leaks and industrial espionage. Things were not looking up for this “marketplace” vision of a future network, even in academia. Michael Dertouzos, who led the Laboratory of Computer Science (LCS) at MIT (formerly Project MAC), was very interested in Lick’s vision, but complained that his fellow academics in the program were not. In fact they were openly hostile to it. It felt too outlandish to them.

Microcomputers had taken off in 1975, with the introduction of the MITS Altair, created by electrical engineer Ed Roberts. (This was also the launching point for a small venture created by Bill Gates and Paul Allen, called “Micro Soft,” with their first product, a version of the BASIC programming language for the Altair.) Lick bought an IBM PC in the early 1980s, “but it never had the resources to do what he wanted,” said his son, Tracy.

Yes, Lick knew, these talented little micros had been good enough to reinvent the computer in the public mind, which was no small thing. But so far, at least, they had shown people only the faintest hint of what was possible. Before his vision of a free and open information commons could be a reality, the computer would have to be reinvented several more times yet, becoming not just an instrument for individual empowerment but a communication device, an expressive medium, and, ultimately, a window into on-line cyberspace.

In short, the mass market would have to give the public something much closer to the system that had been created a decade before at Xerox PARC. (TDM, p. 437)

Personal computing

The major sources I used for this part of the story were Alan Kay’s retrospective on his days at Xerox, called “The Early History of Smalltalk” (which I will call “TEHS” hereafter), and “The Alto and Ethernet Software,” by Butler Lampson (which I will call “TAES”).

Parallel to the events I describe at DARPA came a modern notion of personal computing. The vision of a personal computer existed at DARPA, but as I’ll describe, the concept we would recognize today was developed solely in the private sector, using knowledge and research methods that had been developed previously through DARPA funding.

Alan Kay

Alan Kay, from Wikipedia

The idea of a computer that individuals could buy and own had been around since 1961. Wes Clark had that vision with the LINC computer he invented that year at MIT. Clark invited others to become a part of that vision. One of them was Bob Taylor. What got in the way of this idea becoming something that less technical people could use was the large and expensive components that were needed to create sufficiently powerful machines, and the hazy sense developers had of just how ordinary people would use computers. Most of the aspects that make up personal computing were invented on larger machines, and were later miniaturized, watered down in sophistication, and incorporated into microcomputers from the mid-1970s into the 1990s and beyond.

Most people in our society who are familiar with personal computers think the idea began with Steve Jobs and Steve Wozniak. The more knowledgeable might chime in and give credit to Ed Roberts at MITS, and to Bill Gates. These people had their own ideas about what personal computers would be, and what they would represent to the world, but in my estimation, the idea of personal computing that we would recognize today really began with Alan Kay, a graduate student with a background in mathematics and molecular biology, who had enrolled in the IPTO-funded computer science program at the University of Utah (which I talk a bit about in Part 2) in the late 1960s. A good deal of credit has to be given to Doug Engelbart as well, who was doing research during the 1960s at the Stanford Research Institute, funded by ARPA/IPTO, NASA, and the Air Force, on how to improve group knowledge processes using computers. He did not pursue personal computing, but he was the one who came up with the idea of a point-and-click interface, combining graphics and text, using a device he and his team had invented called a “mouse.” He also created the first system that enabled linked documents, which foreshadowed the web we’ve known for the last 20 years, and enabled collaborative computing with teleconferencing. Describing his work this way does not do it justice, but it’ll suffice for this discussion.

From left to right: Doug Engelbart, from ibiblio.com; Ivan Sutherland, from Wikipedia; Seymour Papert, from MIT

Flex’s “self-portrait,” from lurvely.com

Kay began developing his ideas about personal computing in 1968. He had started on his first proof-of-concept desktop machine, called Flex, a year earlier. He was aware of Moore’s Law (from Gordon Moore), the prediction that, as time passed, more and more transistors would fit in the same area of silicon. The implication of this is that more computing functionality could fit in a smaller space, which would thereby allow machines that filled up a room at the time to become smaller as time passed. As a graduate student, he imagined miniaturized technology that seemed unfathomable to other computer scientists of his day. For example, a hard drive as small as the crook of your finger. Back then, a hard drive was the size of a floor cabinet, or a medium-sized refrigerator, and was typically used with mainframes that took up the space of a room.

Kay’s ideas about what personal computing could be were heavily influenced by Doug Engelbart, Ivan Sutherland (from his Sketchpad project), Tom Ellis and Gabriel Groner at RAND Corp. (from a system called GRAIL), and Seymour Papert, Wally Feurzeig, and Cynthia Solomon at BBN with their work with children and Logo, among many others. (Source: TEHS)

Engelbart thought of computing as a “vehicle” for thinking about, and sharing information, and developing group knowledge processes. Kay got a profound sense of the personal computer’s place in the world from Papert’s work with children using Logo. He realized that personal computers could not just be a vehicle, but a new medium.

People can get confused about this concept. Kay wasn’t thinking that personal computers would allow people to use and manipulate text, images, audio, and video (movies and TV)–what most of us think of as “media”–for the sake of doing so. He wasn’t thinking that they would be a new way to store, transmit, and present old media, and the ideas they expressed most easily. Rather, they would allow people to explore a whole new category of ideas and expression that the other forms of media did not express as easily, if at all. It’s not that the old media couldn’t be part of the new, but they would be represented as models, as part of a knowledge system, and operate under a person’s control at whatever grain would facilitate what they wanted to understand.

Butler Lampson described the concept this way:

Kay was pursuing a different path to Licklider’s man-computer symbiosis: the computer’s ability to simulate or model any system, any possible world, whose behavior can be precisely defined. And he wanted his machine to be small, cheap, and easy for nonprofessionals to use. (TAES, p. 2)

Kay could see from Papert’s work that using computers as an interactive medium could enable children to understand subjects that would otherwise have been difficult, if not impossible to grasp at their age, particularly aspects of mathematics and science.

Xerox PARC

Xerox enters the story in 1970. They had a different set of priorities from Kay’s, and it’s interesting to note that even though this was true, they were willing to accommodate not only his priorities, but those of other researchers they brought in.

Xerox was concerned that as the photocopier market grew, competitors would come into the space, and Xerox didn’t want to put all their “eggs” in it. They thought computers would be a good way for them to diversify their product line. Their primary goal was to…

…develop the “architecture of information” and establish the technical foundation for electronic office systems that could become products in the 1980s. It seemed likely that copiers would no longer be a high-growth business by that time, and that electronics would begin to have a major effect on office systems, the firm’s major business. Xerox was a large and prosperous company with a strong commitment to basic research and a clear need for new technology in this area. (TAES, p. 2)

Xerox wanted to site their research facility near a community of computer research. They looked at a few places around the country, and ultimately decided on Palo Alto, CA, since it was near Berkeley and Stanford universities, which were centers of computer research. They wanted to bring in a research director who was familiar with the field, and who would be respected by the research community. The right person for the job was not obvious. When they talked to computer researchers of the time, all paths eventually led back to Bob Taylor, who had been an ARPA/IPTO director in the late 1960s. The thing was, he had no computer science research of his own under his belt. His background was in psychoacoustics, a field of psychology (though Taylor thinks of it as applied physics). What made him the right candidate in Xerox’s eyes was that everyone who was anyone in the field Xerox wanted to explore knew and respected him. So they invited Taylor in to assist in setting up their research group. Thus was born the Xerox Palo Alto Research Center (PARC). It was totally funded by Xerox. Taylor was not officially given the job as PARC’s director, because of his lack of research background, but he was allowed to run the facility the same way that the IPTO had conducted computer research. You could say that PARC during the 1970s was “the ARPA way done privately.”

One of the research techniques used at ARPA and Xerox PARC was to anticipate the speed and memory capacities of future computers that would be in wide use. Engineers who were part of the research teams would either find hardware that fit these anticipated specifications, or would build their own, and then see what software they could develop on it that was significantly better in some capacity than the systems available in their present. As one can surmise, this was an expensive thing to do. In order to exceed the capacity of the machines in wide use, one could not think about what was most economical, but rather had to go for hardware that was used by only a relative few, if anyone was using it at all, because it was prohibitively expensive for most to acquire. Butler Lampson described this with Xerox PARC’s Alto research system:

The Alto system was affected not only by the ideas its builders had about what kind of system to build, but also by their ideas about how to do computer systems research. In particular, we thought that it is important to predict the evolution of hardware technology, and start working with a new kind of system five to ten years before it becomes feasible as a commercial product.

Our insistence on working with tomorrow’s hardware accounts for many of the differences between the Alto system and the early personal computers that were coming into existence at the same time. (TAES, p. 3)

(To get an idea of what Lampson was talking about in that last sentence, I encourage people to look at another post I wrote a while back, called “Triumph of the Nerds.”)

Taylor hired a bunch of people from the failing Berkeley Computer Corp., which was started by Butler Lampson, along with students who had joined him as part of Project Genie. Among the people Taylor brought in were Chuck Thacker, Charles Simonyi, and Peter Deutsch. I talk a little about Project Genie in Part 2. He also hired many people from ARPA’s IPTO projects, including Ed McCreight from Carnegie Mellon University, and some of the best talent from Engelbart’s NLS project at the Stanford Research Institute, such as Bill English. Jerry Elkind hired people from BBN, which had been an ARPA contractor, including Danny Bobrow, Warren Teitelman, and Bert Sutherland (Ivan Sutherland’s older brother). Another major “get” was hiring Allen Newell from CMU as a consultant. A couple of Newell’s students, Stuart Card and Tom Moran, came to PARC and pioneered the field we now know as Human-Computer Interaction.

As Larry Tesler put it, Xerox told the researchers, “Go create the new world. We don’t understand it. Here are people who have a lot of ideas, and tremendous talent.” Adele Goldberg said of PARC, “People came there specifically to work on 5-year programs that were their dreams.” (Source: Robert X. Cringely’s documentary, “Triumph of the Nerds”) In a recent interview, which I’ll refer to at the end of this post, she said that PARC invited researchers in to work on anything that they thought in 5 years could have an impact on the company.

Alan Kay came to work at PARC in 1970, started the Learning Research Group (LRG), and got them thinking about what would come to be known as “portable computers,” though Kay called them “KiddieKomps.” They also got working on font technology.

In 1972 Kay committed his thoughts on personal computing to paper in a document called “A Personal Computer for Children of All Ages.” In it he described a conceptual model he called a “Dynabook,” or “dynamic book.” I’ll quote from a blog post I wrote in 2006, called “Great moments in modern computer history,” as it summarizes what I mean to get across about this:

Kay envisioned the Dynabook as a portable computer, 9″ x 12″ x 3/4″, about the size of a modern laptop, with its own battery, and it would weigh less than 4 pounds. In fact he said, “The size should be no larger than a notebook.” He envisioned that it would use removable media for file storage (about 1 MB in size, he said), that it might have a keyboard, and that it would record and play audio files, in addition to displaying text. He said that if no physical keyboard came with the unit, a software keyboard could be brought up, and the screen could be made touch-sensitive so that the user could just type on the screen. … Oh, and he made a wild guess that it would cost no more than $500 to the consumer.

He said, “The owner will be able to maintain and edit his own files of text and programs, when and where he chooses.”

He envisioned that the Dynabook would be able to “dock” with a larger computer system at work. The user could download data, and recharge its battery while hooked up. He figured the transfer rate would be 300 Kbps.

Mock-up of the Dynabook, from history-computer.com

He imagined the Dynabook being connected to an “information utility,” like what was called at the time “the ARPA network,” which later came to be called the Internet. He predicted this would open up online access to schools and libraries of information, “stores” (a.k.a. e-commerce sites), and would bring “billboards” (a.k.a. web ads) to the user. I LOVE this quote: “One can imagine one of the first programs an owner will write is a filter to eliminate advertising!”

Another forward-looking concept he imagined is that it might have a flat-panel plasma display. He wasn’t sure if this would work, since it would draw a lot of power, but he thought it was worth trying. … He thought an LCD flat-panel screen was another good option to consider.

Kay thought it essential that the machine make it possible to use different fonts. He and fellow researchers had already done some experiments with font technology, and he showed some examples of their results in his paper.

Alan Kay’s conception of children using the Dynabook

He doesn’t elaborate on this, but he hints at a graphical interface for the device. He describes in spots how the user can create and save “dynamic graphics.” In a scenario he illustrates in the paper, two children are playing a game like Space War on the Dynabook, involving graphics animation, experimenting with concepts of gravity. This scenario lays down the concept of it being a learning machine, and one that’s easy enough for children to manipulate through a programming language.

He also imagined that Dynabooks would be able to communicate with each other wirelessly, peer-to-peer, so that groups of students would be able to easily work on projects together, without needing an external network.

Working with Dan Ingalls, an electrical engineer with a background in physics, Diana Merry, and colleagues, the LRG began development of a system called “Smalltalk” in 1972. This was a first effort to create the Dynabook in software. It would come to formalize another of Kay’s concepts, of virtual objects in a computer. The idea was that entities (visual and non-visual) could be fashioned by the person using a computer, which are as versatile as tools that are used in the real world, and can be used in any combination at any time to accomplish tasks that may not have been evident when they were created.

Computer programming was an important part of Kay’s concept of how people would interact with this medium, though he developed doubts about it. What he really wanted was some way that people could build models in the computer. Programming just seemed to be a good way to do it at the time.

Objects could be networked together via a concept called “message passing,” with the goal of having them work together for some purpose. As Smalltalk was developed over 8 years, this idea would come to include everything from the desktop interface, to windows, to text characters, to buttons, to menus, to “paint” brushes, to drawing tools, to icons, and more. His idea of overlapping windows came from his desire to allow people to work on several projects at the same time, allowing them to make the most of limited screen space. (source: TEHS)
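To make the message-passing idea a bit more concrete, here is a minimal sketch, written in Python rather than Smalltalk, of objects that interact only by sending each other messages. The class and method names are my own illustration, not anything from Smalltalk's actual class library:

```python
# A minimal illustration of message passing (in Python, not Smalltalk).
# Each object decides for itself how to respond to a message; the sender
# only needs to know the message name, not how the receiver works inside.

class Window:
    def __init__(self, title):
        self.title = title
        self.children = []

    def add(self, child):
        self.children.append(child)

    def draw(self):
        # The window draws itself, then sends each child the "draw" message.
        print(f"[window] {self.title}")
        for child in self.children:
            child.draw()

class Button:
    def __init__(self, label, action):
        self.label = label
        self.action = action

    def draw(self):
        print(f"  [button] {self.label}")

    def click(self):
        # The button doesn't know what its action does; it just invokes it.
        self.action()

class Brush:
    def draw(self):
        print("  [brush] ready to paint")

# Objects built separately can be combined later in ways their authors
# didn't anticipate, which is the "versatile tools" idea described above.
window = Window("Drawing tools")
clear_button = Button("Clear", lambda: print("canvas cleared"))
window.add(clear_button)
window.add(Brush())
window.draw()
clear_button.click()
```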

“The best way to predict the future is to invent it”

The graphical interface he and his team developed for Smalltalk was motivated by educational research on children, which had surmised that they have a strong visual sense, and that they relate to manipulating visual objects better than explicitly manipulating symbols, as older interactive computer systems had insisted upon. A key phrase in the paragraph below is, “doing with images makes symbols,” that is, symbols in our own minds, and, if you will, in the “mind” of the computer. His concept of “doing with images” was much more expansive and varied than has typically been allowed on computers consumers have used.

All of the elements eventually used in the Smalltalk user interface were already to be found in the sixties–as different ways to access and invoke the functionality provided by an interactive system. The two major centers were Lincoln Labs [at MIT] and RAND Corp–both ARPA funded. The big shift that consolidated these ideas into a powerful theory and long-lived examples came because the LRG focus was on children. Hence we were thinking about learning as being one of the main effects we wanted to have happen. Early on, this led to a 90 degree rotation of the purpose of the user interface from “access to functionality” to “environment in which users learn by doing”. This new stance could now respond to the echos of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children’s curriculum to a “curriculum of the user interface.”

The particular aim of LRG was to find the equivalent of writing–that is learning and thinking by doing in a medium–our new “pocket universe”. For various reasons I had settled on “iconic programming” as the way to achieve this, drawing on the iconic representations used by many ARPA projects in the sixties. My friend Nicholas Negroponte, an architect, was extremely interested in embedding the new computer magic in familiar surroundings. I had quite a bit of theatrical experience in a past life, and remembered Coleridge’s adage that “people attend ‘bad theatre’ hoping to forget, people attend ‘good theatre’ aching to remember”. In other words, it is the ability to evoke the audience’s own intelligence and experiences that makes theatre work.

Putting all this together, we want an apparently free environment in which exploration causes desired sequences to happen (Montessori); one that allows kinesthetic, iconic, and symbolic learning–“doing with images makes symbols” (Piaget & Bruner); the user is never trapped in a mode (GRAIL); the magic is embedded in the familiar (Negroponte); and which acts as a magnifying mirror for the user’s own intelligence (Coleridge). (source: TEHS)

A demonstration of Smalltalk-80

From left to right: Dan Ingalls, from Wikipedia; Diana Merry-Shapiro, from LinkedIn; Chuck Thacker, from the ACM; Butler Lampson, from MIT

Chuck Thacker, who has a background in physics and computer hardware design, along with a small team he assembled, created the first “Interim Dynabook,” as Kay called it, at PARC’s Computer Science Lab (CSL) in 1973. It was named “Bilbo.” It was not a handheld device, but more like the size of a desk, perhaps connected to a larger cabinet of electronics. It had a monitor with a bitmap display of 606 x 808 pixels (which gave it the profile of an 8-1/2″ x 11″ sheet of paper), and 128 kilobytes of memory. Smalltalk was brought over to it from its original “home,” a Data General Nova computer.

Thacker’s team created the first Alto models (named after Palo Alto) shortly thereafter. Butler Lampson, a computer scientist with a background in physics and electrical engineering, led a team which created the operating system for it. The Alto computer fit under a desk. It was the size of a small refrigerator. The keyboard, mouse, and display sat on top of the desk. It had the same size display and internal memory as “Bilbo,” ran at about 6 MHz, and had a removable hard drive that stored 2.5 MB per disk. (source: History of Computers)

As the years passed, CSL would develop bigger, more powerful machines, with the code names Dolphin and Dorado.

Adele Goldberg, from PC Forum

Beginning in 1973, Adele Goldberg, who had a background in mathematics and information systems, and Alan Kay tried Smalltalk out on students from ages 12-15, who volunteered to take programming courses from them, and to come up with their own ideas for things to try out on it. Goldberg developed software design schema and curricular materials for these courses, and helped guide the education process as they got results.

They had some success with an approach developed by Goldberg where they had the students build more and more sophisticated models with graphical objects. They thought they were building the skill of students up to more sophisticated approaches to computing, and in some ways they were, though the influence these lessons were having on the students’ conception of computing was not as broad as they thought. Kay realized after teaching programming to some adult students that they could only get so far before they ran into a “literature” barrier. The same had been true with the teen and pre-teen students they had taught earlier, it turned out. From Kay’s description, the way I’d summarize the problem is that they required background knowledge in organizing their ideas, and they needed practice in doing this.

Kay said in retrospect that literature renders ideas. Any medium needs literature in order to be powerful. Literature’s purpose is to provide a body of ideas that can be discussed in the medium. (Source: TEHS) Without this literary background, the students were unable to write about, or discuss, the more sophisticated ideas through programming code. No matter how good a tool or instrument is, what’s produced by the people using it can only be as good as the ideas they use in fashioning the product. Likewise, the quality of what is written in text or notation by an author or composer, or produced in sound by a musician, or in imagery and sound for a movie, is only as good as a) the skill of the creators, b) the outlook they have on what they are expressing, and c) their knowledge, which they can apply to the effort.

Reconsidering the Dynabook

There came a “dividing point” at PARC in 1975. Kay met with other members of the LRG and discussed “starting over.” He could see that the developed ideas of Smalltalk were taking them away from the educational goals he set out to accomplish. Professional considerations had started to take hold with the group, though, and they saw potential with Smalltalk. The majority of the group wanted to stick with it. His argument for “starting over” with the educational project did not win out. He said of this point in time that while he disagreed with the decision of his colleagues, he held no ill will towards them for it. He knew them to be wonderful people. The sense I get from reading Kay’s account of this history is they saw potential perhaps in creating more sophisticated user environments that professionals would be interested in using. I infer this from the projects that PARC engaged in with Smalltalk thereafter.

Kay turned his focus to a new project, as if to pick up again his “KiddieKomps” idea. He designed a computer he called NoteTaker. Adele Goldberg said Kay’s purpose in doing this was to develop a proof of concept for the Dynabook’s hardware (see the interview with her at the end of this post). While Kay went off in this direction, Goldberg took over management of the Smalltalk project. The educational program at PARC faded away in 1976. (Source: TEHS)

The original concept for NoteTaker was a laptop design, with what Kay called a “tab mouse,” a physical control that was small, mounted on the computer, yet agile enough to allow a person using it to move a cursor around a graphical interface. This did not end up on the machine, but NoteTaker was made usable with a keyboard, mouse, and a touch screen, and it had stereo sound output. (Source: “Joining the Mac Group,” by Bruce Horn)

Once Smalltalk-76 was done (each version of Smalltalk was named by the year it was completed), Dan Ingalls and Ted Kaehler ported it to the NoteTaker, and by around 1978 Kay had created the first “luggable” portable computer. Kay recalls it using three Intel 8086 processors (though others remember it using two Motorola 68000 processors). It had 256 kilobytes of memory, and was able to run on batteries. It wasn’t a laptop (it was too big and heavy), but it could fit on a desktop. At first glance, it looks similar to the Osborne 1, the first commercially available portable computer, and there’s a reason for that. The Osborne’s case design was based on NoteTaker.

To put it through its paces, a team from PARC took the NoteTaker on an airplane trip, “running an object-oriented system with a windowed interface at 35,000 feet.”

Kay lamented that there wasn’t enough corporate will to use their own know-how to create better hardware for it (Kay considered the Intel processors barely sufficient), much less turn it into a commercial product.

The NoteTaker, from the Computer History Museum

Looking back on the experience, Kay was dismayed to see that PARC had pushed aside his educational goals, and had co-opted Smalltalk as purely a professional’s tool. The technologies that could have made his original Dynabook vision a physical reality were coming into being at just this time, but there was no will at Xerox to make it into an actual product. (source: TEHS)

He has pursued development of his educational ideas ever since. He sees computers in the same way that a few in the Middle Ages saw the invention of the printing press, enabling new ways of interacting and thinking, to, as Licklider would have said, allow people to “think as no one has ever thought before.”

Developing the office of the future

From left to right: Larry Tesler, from Twitter.com; Timothy Mott, from startupgenome.co; Charles Simonyi

A separate project at PARC also got started in the early 1970s, called POLOS (the Parc On-Line Office System), in the System Science Lab (SSL). The goal there was to develop a networked computer office system.

Edit 5/31/2016: I had suggested in the way I wrote the following paragraph that Larry Tesler had invented cut, copy, and paste actions in digital text editing. Upon further review, I think that historical interpretation is wrong. Doug Engelbart had copy and paste (and probably a “cut” action) in NLS, and there may have been earlier incarnations of it.

One of the first projects in this effort was a word processor called Gypsy, designed and implemented by Larry Tesler and Timothy Mott, both computer scientists. (Source: TAES) The unique thing about Gypsy was that Tesler tried to make it “modeless.” Its behavior was like our modern day word processors, where you always have a cursor on the screen, and all actions mean the same thing at all times. Wherever the cursor is, that’s where text is entered. He also invented drag selection/highlighting with a mouse, which was incorporated into a cut, copy, and paste process. This has been a familiar feature whenever we work with digital text.

Later, Charles Simonyi, an electrical engineer, and Butler Lampson began work on the first What You See Is What You Get (WYSIWYG) text processor, called Bravo. They developed their first version in 1974. It had its own graphical interface, allowed the use of fonts, and basic document elements, like italics and boldface, but it worked in “modes.” The person using it used the computer’s keyboard either to enter text or to issue commands that manipulated it. The keyboard did something different depending on which mode was operating at any point in time. This was followed by BravoX, which was a “modeless” version. BravoX had a menu system, which allowed the use of a mouse for executing commands, making it more like what we’d recognize as a word processor today. (source: Wikipedia) Each team was trying out different capabilities of digital text, and trying to see how people worked with a text system most productively.
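To illustrate what those “modes” meant in practice, here is a rough sketch in Python; it is my own simplification for illustration, not actual Bravo or Gypsy code. In the modal style, the same keystroke means different things depending on the editor's current mode; in the modeless style, typing always inserts text at the cursor, and commands are issued separately (through a menu, for example).

```python
# Rough sketch of modal vs. modeless keystroke handling. This is my own
# simplification for illustration, not code from Bravo or Gypsy.

class ModalEditor:
    """Bravo-style: keystrokes are interpreted according to the current mode."""
    def __init__(self):
        self.mode = "command"   # starts in command mode
        self.text = ""

    def keystroke(self, key):
        if self.mode == "command":
            if key == "i":
                self.mode = "insert"        # 'i' switches to insert mode
            elif key == "d":
                self.text = self.text[:-1]  # 'd' deletes the last character
        else:
            self.text += key                # in insert mode, keys become text

class ModelessEditor:
    """Gypsy-style: typing always inserts; commands come from the menu or mouse."""
    def __init__(self):
        self.text = ""

    def keystroke(self, key):
        self.text += key                    # a keystroke always means the same thing

    def menu_command(self, command):
        if command == "delete":
            self.text = self.text[:-1]

modal = ModalEditor()
for key in "idog":    # 'i' enters insert mode, then "dog" is typed
    modal.keystroke(key)

modeless = ModelessEditor()
for key in "dog":
    modeless.keystroke(key)

print(modal.text, modeless.text)   # both end up containing "dog"
```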

From left to right: Andrew Birrell, from Microsoft Research; Roy Levin, from Linkedin; Roger Needham, from ACM SIGSOFT

A couple of graphical e-mail clients, named “Laurel” and “Hardy,” were developed by a team led by Doug Brotz; they allowed easy review and filing of electronic mail messages. (source: The Xerox “Star”: A Retrospective) From Lampson’s description, they appeared to be “e-mail terminals.” They didn’t store, or allow one to write, e-mails. They just provided an organized display for them, and a means of telling a separate messaging system what you wanted to do with them, or that you wanted to create a new message to send. (Source: TAES) Andrew Birrell, Roy Levin, Roger Needham (who had a background in mathematics and philosophy), and Michael Schroeder, a computer scientist, developed a distributed service called Grapevine, which the e-mail clients used to compose, receive, and transmit network messages. From the description, it appeared to work on a distributed, peer-to-peer basis, with no central server controlling its services for an organization. Grapevine also provided authentication, file access control, and resource location services. Its purpose was to provide a way to transmit messages, provide network security, and find things like computers and printers on a network. It had its own name service by which clients could identify other systems. (source: Grapevine: An Exercise in Distributed Computing) To get a sense of the significance of this last point, the Arpanet did not have a domain name service, and the internet did not get its Domain Name System (DNS) until the mid-1980s.
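To picture what a name service gives you, here is a minimal, generic sketch in Python of a registry that maps names to network addresses. It illustrates the general concept only; it is not based on Grapevine's actual design or interfaces.

```python
# A generic, minimal illustration of what a name service provides: resources
# register under a name, and clients look the name up instead of having to
# know a numeric address. This is NOT Grapevine's actual design; it only
# illustrates the concept discussed above.

class NameService:
    def __init__(self):
        self.registry = {}   # name -> network address

    def register(self, name, address):
        self.registry[name] = address

    def lookup(self, name):
        # Returns the address for a name, or None if it isn't registered.
        return self.registry.get(name)

ns = NameService()
ns.register("LaserPrinter-3rdFloor", "10.0.0.42")
ns.register("FileServer-Docs", "10.0.0.7")

# A client can now find a resource by name rather than by address.
print(ns.lookup("LaserPrinter-3rdFloor"))   # 10.0.0.42
print(ns.lookup("Plotter-Basement"))        # None: not registered
```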

PARC had a complete office system going, with all of this, plus networked file systems, print servers (using Ethernet, created by Bob Metcalfe and Chuck Thacker at PARC), and laser printing (invented at PARC by Gary Starkweather, with software written by Butler Lampson) by 1975! A year later they had created a digital scanner.

You can see one of the e-mail clients being demonstrated, with WYSIWYG technology/laser printing, on the Alto, in this Xerox promotional video:

From left to right: Clarence Ellis, from the ACM; Gary Nutt, from CU Boulder

Clarence Ellis and Gary Nutt, both computer scientists, developed OfficeTalk, a prototype office automation system, at PARC. It tracked “job” documents as they went from person to person in an organization. (source: “The Xerox ‘Star’: A Retrospective”) In addition, Ellis came up with the idea of clicking on a graphical image to start up an application, or to issue a command to a computer, rather than typing out words to do the same thing. This is something we see in all modern user interfaces. (source: Answers.com)

Why Xerox missed the boat

For this section I go back to Waldrop’s book as my main source.

One might ask: if the future of PCs was invented at Xerox–sounding an awful lot like what is in use with business IT systems today–why didn’t they own the PC industry? An even bigger question: why weren’t people back then using systems like what we’re using now? We use Windows, Mac, and Linux systems today, each with their own graphical interfaces, which descended from what PARC created. We’ve had word processing and LAN file servers for years, beginning in the 1980s, and later e-mail over the internet and print servers, and later still, office “task” automation software. The future of personal computing as we would come to know it was sitting right in front of them in the mid-1970s. We can say that now, since all of these ideas have been made into products we use in the work world. If we had looked at this technology back then, we might have been clueless about its significance. The executives at Xerox were an example of this.

For several years management couldn’t understand the vision that was developing at PARC. The research culture there didn’t mesh that well with the executive culture, especially after the company’s founder, Joe Wilson, died in 1971. Wilson had been open to new ideas and venturing into new technologies. At the time of his death, the company was also struggling from its own growth. The demand for its copiers was outstripping its ability to produce them competently. So a new management team was brought in which understood how to manage big companies. These were “numbers men,” though, and their methodology didn’t allow them to understand how to translate the kind of deep R&D the company had been doing with computers into products, because of course they didn’t have metrics to make an accurate measurement of how much money they’d make with a technology that was totally unknown to them. The company had plenty of metrics about costs and revenue that came from copier development and sales, so it was no problem for them to make reliable estimates for a new copier model. The most profitable part of their business was copiers, so that’s where they put their product development resources. The old hands at Xerox who had set up PARC ran interference for their operation, to keep the “numbers men” from shutting down their work. The people at PARC kept trying to convince the higher-ups that what they had developed was useful for office productivity, but it was for naught. In a depressing anecdote, Waldrop wrote:

One top-level Xerox executive, after a day of being shown the wonders of PARC, had posed precisely one question to the researchers: “Where can I get some of those beanbag chairs?” (TDM, p. 408)

(A famous “feature” of PARC was their use of beanbag chairs for group discussions, called “dealer meetings.”)

Stephen Lukasik came to Xerox from DARPA in 1975. He set up what was called the Systems Development Division (SDD). Its purpose was to create new salable products out of the research that was going on inside Xerox. The problem for him was that Xerox’s executives were willing to give him the authority to set up the division, but they weren’t willing to listen to what he said was possible, nor were they willing to give him the funding to make it happen. Lukasik left Xerox in 1976. He said that though his time there was short, he valued the experience. He just didn’t see the point in staying longer. Nevertheless, SDD would become useful to Xerox. It just had to wait for the right management.

Xerox set up a big demo of its products in Boca Raton, FL. in 1977, which included the prototypes from PARC. All the company executives, their wives, family and friends were invited. The people from PARC did the best bang-up job they could to make their stuff look impressive for business computing.

Back at PARC, says [Gary] Starkweather, he and his colleagues saw this as their last, best chance. “The feeling was, ‘If they don’t get this, we don’t know what we can do.'” So, he says, with John Ellenby coordinating an all-out effort, “We stripped everything out of PARC down to the power cords and set it all up again in Boca. Computers, networks, printers–the whole thing! I built a laser printer that did color. Bill English had a word processor that did Japanese. We were going to show them space flight!”

They certainly tried. [At the event], Xerox executives and their families swarmed through the Grenada Rooms of the Boca Raton Hotel for a hands-on demonstration of WYSIWYG editing in Bravo, graphical programming in Smalltalk, E-mailing in Laurel, artistry in Paint and Draw–the works. “The idea was a mental slam-dunk!” says Starkweather. “And some people did see it.” The executives’ wives, for example–many of them former secretaries who knew all about carbon paper, Wite-Out, and having to retype whole pages to correct a single mistake–took one look at Bravo and got it. “The wives were so ecstatic they came over and kissed me,” remembers Jack Goldman. “They said, ‘Wonderful things you’re doing!’ Years later, I’d see them and they’d still remember Boca.”

Then there were the delegates from Fuji Xerox, the company’s Japanese partner: they were beside themselves over Bill English’s word processor. “Fuji clamored, ‘Give us this! We’ll manufacture it!'” recalls Goldman. In fact, he says, that was a near-universal reaction: “People from Europe, people from South America, marketing groups around the U.S.–everyone who went out of that conference was excited by what they had seen.”

Everyone, that is, except the copier executives, the real power brokers in Xerox. You couldn’t miss them; they were the ones standing in the background with the puzzled, So-What? look on their faces.

Indeed, one of the corporation’s purposes in calling the conference was to rally the troops for the coming era of ever-more-ferocious competition. In fairness, those executives in the background had to worry about defending the homeland now, not ten years from now. Or maybe they were simply too bound by the culture of the executive suite, vintage 1977. The xerographers lived in a world in which typing was women’s work and keyboards were for secretaries. It was a rare executive who would even deign to touch one. (TDM, p. 408)

There was a change in leadership in 1978, and the new executives recognized that the management style the company had been using wasn’t working. The Japanese had entered the copier market, and were taking market share away from Xerox. This is just what Joe Wilson had anticipated eight years earlier. Secondly, the new management recognized the vision at PARC. They wanted to create products from it. Work on the Xerox Star system began that year at SDD.

SDD had previously created a machine called Dandelion, Xerox’s most powerful computer to date. The Star would be the Dandelion translated into a business system. (Source: TAES)

Xerox and Apple

There’s been increasing awareness about the history of Xerox and Apple as time has passed, but there are some misconceptions about it that deserve to be cleared up.

Xerox and Apple developed a brief business relationship in 1979. Apple got ideas about how to create the user experience with their Lisa and Macintosh computers from Xerox PARC. The misperceptions are in how this happened. This is how the meeting between Xerox and Apple in 1979 has been perceived among those who are generally familiar with this history (from the 1999 TNT made-for-TV movie “Pirates of Silicon Valley”):

There’s some truth to this, but some of it is myth. One could almost say it was trumped up to bolster Steve Jobs’s image as a visionary. It’s a story that’s been propagated for many years, including by yours truly. So I mean to correct the record.

Much of Waldrop’s account of what happened between Xerox and Apple is a retelling of an account from Michael Hiltzik’s book, “Dealers of Lightning.” From what he says, the Apple team’s visit at PARC was not a total revelation about graphical user interfaces, but it was an inspiration for them to improve on what they had developed.

The idea of a graphical user interface on a computer was already out there. People at Apple knew of it prior to visiting Xerox. Xerox had been publishing information about the concept, and had been holding public demos of some of the graphical technology PARC had developed. So the idea of a graphical user interface was not a secret at all, as has been portrayed. However, this does not mean that every aspect of Xerox’s user interface development was made public, as I’ll discuss below.

Apple had been working on a computer called Lisa since 1978. It was a 16-bit machine that had a high-resolution bitmapped display. The Lisa team called it a “graphical computer.” Apple was approached by the Xerox Development Corporation (XDC) to take a look at what PARC had developed. The director of XDC felt that the technology needed to be licensed to start-ups, or else it would languish. Steve Jobs rebuffed the offer at first, but was convinced by Jef Raskin at Apple to form a team to go to PARC for a demonstration.

Apple and Xerox made a trade. Xerox bought a stake in Apple that was worth about $1 million, in exchange for Apple getting access to PARC’s technology. Waldrop says that Jobs and a team of Apple engineers made two visits to PARC in December 1979. According to an article from Stanford University, Jobs was not with them for the first visit. Waldrop notes that by this point Apple had already added a graphical user interface to the Lisa, but that it was clunky. By Larry Tesler’s account, there were more visits by the Apple team than Waldrop talks about. He has a different recollection of some of the details of the story as well. I include these different sources to give a “spectrum” of this history.

Going by Waldrop’s account, at their first visit in December ’79, they got a “standard” demo of the Alto, given by Adele Goldberg, that many other visitors to PARC had already seen. The group from Apple saw it being used with a mouse, the Bravo word processor, some drawing programs, etc., and then they left, apparently satisfied with what they had seen. It should be noted that this demo did not show the Xerox “desktop interface” in the sense of what people have come to know on personal computers and laptops. Each program they saw was a separate entity, which took over the entire operation of the machine, using the Alto’s graphical capabilities to show graphics and text. The Apple team did not see the desktop metaphor, which was in Smalltalk, and which was being developed for the Xerox Star. Goldberg considered Smalltalk proprietary. Jobs and the team eventually realized that they hadn’t really seen “the good stuff.”

The Apple team came back to PARC, unannounced. This is the visit that’s usually talked about in Apple-PARC lore, and is portrayed in the Pirates of Silicon Valley clip above. Jobs demanded to see the Smalltalk system *NOW*. A heated argument ensued between PARC and Xerox headquarters over this. Xerox’s executives ordered the PARC team to demonstrate Smalltalk to the Apple team, citing the partnership with Apple brokered by XDC. Bob Taylor was out of town at the time. He later said that he had no respect for Steve Jobs, and if he had been there, he would’ve kicked the Apple team out of the building, no if’s, and’s, or but’s! He figured Xerox would’ve fired him for it, but that would’ve been fine with him.

This time the people from Apple were prepared. They asked very detailed technical questions, and they got to see what Smalltalk was capable of. They saw educational software written by Goldberg, programming tools written by Larry Tesler, and animation software written by Diana Merry that combined graphics with text in a single document. They got to see multiple tasks on the screen at the same time with overlapping windows, in a “desktop” metaphor. Jobs was beside himself. He exploded, “Why hasn’t this company brought this to market?! What’s going on here? I don’t get it!” This was when Jobs had his epiphany about the graphical user interface, though Waldrop doesn’t delve into what that was. The breakthrough for him may have been the desktop metaphor, how it allowed multiple, different views of information, and multiple tasks to be worked on at the same time, along with everything else he’d seen. The way Waldrop portrays it is that Jobs realized that using a computer should be a fulfilling and fun experience for the person using it.

According to Hiltzik, the partnership between Xerox and Apple, of which the Apple stock trade had been a part, quickly fell apart after this, due to a culture clash. The Lisa’s chief programmer, Bill Atkinson, had to go off of what he remembered seeing at PARC, since he no longer had access to their detailed technical information.

So the idea that Apple (and Microsoft for that matter) “stole” stuff from Xerox is not accurate, though there was trepidation at PARC that Apple would steal “the kitchen sink.” While the visit gave the Lisa team ideas about how to make computer interaction better, and what the potential of the GUI was, Hiltzik said it wasn’t that big of an influence on the Lisa, or the Macintosh, in terms of their overall system design.

What this account means is that the visits to PARC were tangentially influential on Apple’s version of the idea of a graphical user interface. They did not go in ignorant of what the concept was, and it did not give them all of the ideas they needed to create one. It just helped make their idea better.

The microcomputer “revolution”

The people at PARC were aware of the nascent microcomputer phenomenon that was occurring under their noses. Some of them went to the Homebrew Computer Club meetings, where Steve Jobs and Steve Wozniak used to hang out. They read the upstart computer press that was raving about what Bill Gates and Steve Jobs were doing with their new companies, Microsoft and Apple Computer. According to Waldrop, it galled them that this upstart industry was getting so much attention and adoration that they felt it didn’t deserve.

“It had never occurred to us that people would buy crap,” declares Alan Kay, who considered the hobbyists in their garages down the hill to be very bright and very creative ignoramuses–undisciplined kids who didn’t read and didn’t have a clue about what had already been done. They were successful only because their customers were just as unsophisticated. “What none of us was thinking was that there would be millions of people out there who would be perfectly happy with the McDonald’s hamburger approach.” (TDM, p. 437)

Steve Jobs was somewhat of an exception. At least he got a hint of “what had already been done.” It took some prodding, but once he got it, he paid attention. Still, he didn’t fully understand the significance of the research that went into what he saw. For example, Jobs was very hostile to the idea of computer networking at the time, because he thought that would deprive the personal computer user of their autonomy. Freedom from dependency on larger computer systems was a notion he held to ideologically. When he railed against IBM as “big brother” it wasn’t just for show. If only he had been aware of Licklider’s vision for the internet (which was published at the time), the notion of a distributed network, with independent units, and the DARPA work that was creating it, perhaps he would’ve cottoned to it the way he had the graphical user interface. We’ll never know. He came to understand the importance of the network features that had been developed at PARC about 6 years later when he left Apple and started up NeXT.

There’s a famous quote from Jobs about his visits to PARC that illustrates what I’m talking about, from the PBS mini-series,¬†“Triumph of the Nerds” (I wrote a post about this mini-series here):

They showed me, really, three things, but I was so blinded by the first one that I didn’t really “see” the other two. One of the things they showed me was object-oriented programming. They showed me that, but I didn’t even “see” that. The other one they showed me was really a networked computer system. They had over 100 Alto computers all networked, using e-mail, etc., etc. I didn’t even “see” that. I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I had ever seen in my life. Now, remember it was very flawed. What we saw was incomplete. They had done a bunch of things wrong, but we didn’t know that at the time. Still, though, the germ of the idea was there, and they had done it very well. And within ten minutes it was obvious to me that all computers would work like this, someday.

Too much, too late for Xerox

Xerox released the Star in 1981 as the 8010 Information System.

It had a Smalltalk-like graphical user interface, anywhere from 384 kilobytes of memory up to 1.5 MB, an 8″ floppy drive, a 10 to 40 MB hard disk, a monitor measuring 17″ diagonally, a mouse, a bevy of system features, Ethernet, and laser printing. (source: The Xerox “Star”: A Retrospective) The 8010 cost $16,500 per unit, though a customer couldn’t purchase just the computer. It was designed to be an integrated system, with networking and laser printing included. A minimal installation cost $100,000 (about $252,438 in today’s money). Xerox was going for the Fortune 500, which could afford expensive, large-scale systems. Even though the designers thought of it as a “personal computer” system, the 8010 was marketed under the old mainframe business model.

The designers at SDD put the kitchen sink into it. It had every cool thing they could think of. The system was designed so that no third-party software could be installed on it at all. All of the hardware and software came from Xerox, and had to be installed by Xerox employees. This was also par for the course with typical mainframe setups.

In the Star operating system, all of the software was loaded into memory at boot-up, and kept memory-resident (very much like Smalltalk). Users didn’t worry about what applications to run. All they had to focus on was their documents, since all documents and data were implicitly linked to the appropriate software. It presented an object-oriented approach to information. It didn’t say to you, “You’re working with a word processor.” Instead, you worked with your document. The system software just accommodated the document by surreptitiously activating the appropriate part of the system for its manipulation, and presenting the appropriate interface.
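
To make the idea concrete, here is a tiny, hypothetical Smalltalk-style sketch of that kind of implicit binding. This is my own illustration, not anything from the Star’s actual code; the names (a registry of editing tools keyed by document type) are invented for the example:

"A toy registry mapping document types to the tool that handles them.
 Opening a document looks up its tool; the user never picks an application."
| tools document |
tools := Dictionary new.
tools at: #text put: [:title | Transcript show: 'Editing text document: ', title; cr].
tools at: #drawing put: [:title | Transcript show: 'Editing drawing: ', title; cr].
document := #text -> 'Quarterly report'.    "an association: type -> title"
(tools at: document key) value: document value

The Star took this much further, of course, but the basic inversion is the same: the document, not the application, is the unit the user works with.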

Since the Xerox brass recognized they knew nothing about computers, they turned the Star’s design totally over to SDD. There appeared to have been no input from marketing. Even so, as Steve Jobs said of the executives, “They were copier-heads. They just had no clue about a computer, what it could do.” It’s unclear whether input from marketing would’ve helped with product development. I have a feeling “the innovator’s dilemma” applies here.

The designers at SDD were also told that the cost of the product was no object, since their target market was used to paying hundreds of thousands of dollars for large-scale systems. The Xerox executives miscalculated on this point. While it’s true that their target market had no problem paying the system’s price tag, personal computing was a new, untested concept to them, and they were wary about risking that much money on it. Xerox didn’t have a low-risk, low-cost entry configuration to offer. It was all or nothing. Microcomputers were a much easier sell in this environment, because their individual cost, by comparison, went unnoticed in corporate budgets. Corporate managers were able to sneak them in, buying them with petty cash, even though IT managers (who believed wholeheartedly in mainframes) tried their darnedest to keep them out.

The microcomputer market, which just about everyone at Xerox saw as a joke, was eating their lunch. Dan Bricklin and Bob Frankston of Software Arts (both alumni of the Project MAC/Multics project) had come out with VisiCalc, the world’s first commercial spreadsheet software, in 1979 (published by Personal Software, which later renamed itself VisiCorp), and it was only available for micros. Xerox didn’t have an equivalent on the 8010 when it was released. They had put all of their “eggs” into word processing, databases, and e-mail. These things were needed in business, but it was the same thing the researchers at PARC had run into when they tried to show the Alto to the Xerox brass in years past: People in the corporate hierarchy saw themselves as having certain roles, and they thought they needed different tools than what Xerox was offering. Word processing was what stenographers needed, in the minds of business customers. Managers didn’t want it. They wanted spreadsheets, which were the “killer applications” of the era. They would buy a microcomputer just so they could run a spreadsheet on it.

The designers at SDD didn’t understand their target market. The philosophy at PARC was “eat your own dog food” (though I imagine they used a different term for it). From what I’ve heard, listening to people who were part of the IPTO in the 1960s, it was Doug Engelbart who invented this development concept. The problem for Xerox was that this philosophy shaped product development as much as it did overall quality control. From the beginning, they wanted to use all of the technology they developed, internally, with the idea that if they found any deficiencies, there would be no running away from them. It would motivate them to make their systems great.

Thacker and Lampson mentioned in a retrospective, which I refer to at the end of this post, that someone at PARC had come up with a spreadsheet for the Alto, but that nobody there wanted to use it. They were not accountants. Xerox eventually released a spreadsheet for the 8010, once they saw the demand for it, but the writing was already on the wall by then.

It’s hard for me to say whether this was intentional, but the net effect of the way SDD designed the Star resulted in an implicit assumption that customers in their target market would want the capabilities that the designers¬†thought were important. Their focus was on building a complete integrated system, the likes of which no one had ever seen. The problem was nothing in their strategy accounted for the needs and perceptions that business customers would assert. They may have assumed that customers would be so impressed by the innovativeness of the system that it would sell itself, and people would adjust their roles to it in a McLuhan-esque “the medium is the message” sort of way.

The researchers at PARC recognized that producing a microcomputer had always been an option. In 1979, Bob Belleville suggested going with a 16-bit microcomputer design, maybe using the Motorola 68000, or the Intel 8086 processor, instead of going forward with the Star. He built a prototype and showed that it could work, but the team working on the Star didn’t have the patience for it. They would’ve had to throw out everything they had done, hardware and software, and start over. Secondly, they wouldn’t have been able to do as much with it as they were able to do with the approach they had been pursuing. Thacker and Lampson saw as well that building a microcomputer would be more expensive than going ahead with their approach. It was, however, a fateful decision, because the opposite would be true two years later, when the 8010 Information System was released.

Lampson said, ironically, that this time, “the problem wasn’t a shortage of vision at headquarters. If anything, it was an excess of vision at PARC.”

When Apple released their Lisa computer in 1983, Xerox realized that they had missed their chance. The future belonged to IBM, Apple, and Microsoft.

Dispersion

From about 1980 on, people left PARC to join other companies. They were bleeding talent. Taylor was seen as part of the problem. He had an overriding vision, and it was his way, or the highway, so others have said. He understood full well where computing was going. He was just ahead of his time. He could be very supportive, if you agreed with his vision, but he had utter contempt for any ideas he didn’t approve of, and he would “take out the flamethrower” if you opposed him. This sounds a lot like what people once complained about with Steve Jobs, come to think of it. The director of PARC kindly asked Taylor to change his attitude. He took it as an insult, and left in frustration in 1983, taking some of PARC’s best engineers with him.

From looking at the story, it appears the failure of the Star project was “the last straw” for Xerox’s foray into computer research. After Taylor left, the era of innovative computer research ended at PARC.

The visions at Xerox were grandiose, and were too much, too late for the market. I don’t mean to take anything away from what they did by saying this. What they had was pretty good. It represented the future, but it was too far ahead of where the business computing market was. Competitors had grabbed the attention of customers, and they preferred what the competition was offering.

The PARC researchers got it both ways. When they had the chance to develop products in 1975, ahead of the explosion in the microcomputer market, their vision wasn’t recognized by the company that housed them. By the time it was recognized, the market had changed such that much less powerful machines, backed by more innovative business models, won out.

Even though Alan Kay brought the idea of a small, handheld computer to PARC, something that ordinary people would love to use, he valued computing power over the size of the machine. As he would later say about that period, the idea of the Dynabook was more of a service metaphor. The size and form of the hardware was incidental to the concept. (source: An Interview with Computing Pioneer Alan Kay) Thacker and Lampson, the designers of the 8010, shared that value as well. By the late 1970s the market was thinking “small is beautiful,” and they were willing to tolerate clunky, low-powered machines, because their economies of scale met immediate needs, and they created less friction from corporate politics.

Apple and Microsoft used ideas developed at PARC to further develop the personal computer market, and so watered down pieces of that vision got out to customers over about 20 years. Today we live in a world that resembles what Bob Taylor envisioned, with individual computers, networking, and digital printing, using graphical systems.

PARC’s legacy

The outside world would come to know the desktop graphical user interface metaphor because of Smalltalk, though the metaphor was repurposed away from Alan Kay’s powerful ideas about a modeling system, into a system that made it easy to run applications, and emulated integrating older media together into virtual paper documents.

To me, the most interesting contributor to the desktop interface was Diana Merry. She was originally hired at PARC as a secretary. She just happened to understand Alan Kay’s goals, and so she was invited into the LRG. She and Kay created the first implementation of overlapping windows, in Smalltalk. She also wrote a lot of the basic software Smalltalk used to display and animate graphics. (Sources: “The Mouse and the Desktop — Designing Interactions,” by Bill Moggridge, and TEHS)

The Paint, Draw, and Write applications that appeared on the Apple Lisa and the first Macintosh systems were inspired by similar works that had been created years earlier at PARC.

Microsoft Windows, and Microsoft Word benefitted from the research that was done at PARC, though some inspiration came from the Macintosh as well. The look and feel of the first versions of Windows owed more to the X/Window system on Unix. Where Microsoft copied Apple was in how people interacted with Windows (icons and menus), and the application suite that came with the system. (source: The Secret Origin of Windows)

Software developers who have been working on apps. for Apple products in the most recent generations of systems may be interested to know that the Objective-C language they’ve used was first created by a company called StepStone in the early 1980s. The design of the language and its runtime were somewhat influenced by the Smalltalk system.

Several companies licensed a version of Smalltalk, called Smalltalk-80, from Xerox during the 1980s. Among them were Tektronix, Hewlett-Packard, Digital Equipment Corp., and Apple. (source: “Smalltalk-80: Bits of History, Words of Advice”) Apple got a version running on the Macintosh XL in 1985. (source: The Long View) Apple’s version of Smalltalk was updated in the mid-1990s by Alan Kay and some of the original Learning Research Group gang from Xerox. They got Apple’s permission to release it into the public domain, and it has since been ported to many platforms. Professional developers call it by its new name, “Squeak.” An educational version also exists, maintained by the Squeakland Foundation, which goes by the name “Etoys.”

Charles Simonyi joined Microsoft. He brought his knowledge from developing Bravo with him, and it went into creating Microsoft Word. He became the lead developer on their Multiplan, and Excel spreadsheet software.

Where are they now?

(Edit 4/15/2017: As events unfold, I will be updating this section.)

Stephen Lukasik served as Chief Scientist at the FCC from 1979 to 1982. He is a member of the International Institute for Strategic Studies, the American Physical Society, and the American Association for the Advancement of Science. He is a founder of The Information Society journal, and has served on the Boards of Trustees of Harvey Mudd College and Stevens Institute of Technology. (Source: Georgia Tech)

Larry Roberts was chief executive at Telenet until 1980. Telenet was sold to GTE in 1979 and subsequently became the data division of Sprint. In 1983 he became the CEO of NetExpress, an Asynchronous Transfer Mode (ATM) equipment company. Roberts then became president of ATM Systems from 1993 to 1998. He then went back to packet networking, founding Caspian Networks, which focused on IP flow management (IP as in “Internet Protocol”), until 2004. He founded Anagran in an effort to do what Caspian did, but more efficiently. (Source: Larry Roberts’s home page)

J.C.R. Licklider – In the late 1970s Lick visited Xerox PARC regularly. He served on the Committee on Government Relations at the Association for Computing Machinery (ACM), and as deputy chairman of the Social Security Administration’s Data Management System. He spent a year in 1978 on a task force commissioned by the Carter Administration, which examined the government’s data-processing needs. He served as president of the Boston Computer Society. He was an investor and advisor to a company called Infocom in 1979, which was founded by eight of his former students from his Dynamic Modeling group at MIT. He spent part of his time working at VisiCorp. He continued to work at MIT until he retired in 1985. He died on June 26, 1990.

Ed Feigenbaum – In 1984 he became a Fellow at the American College of Medical Informatics. From 1994 to 1997 he served as Chief Scientist of the U.S. Air Force. He founded the Knowledge Systems Laboratory at Stanford University, and is now professor emeritus at Stanford. He was co-founder of several start-ups, such as IntelliCorp, Teknowledge, and Design Power Inc. He has served on the National Science Foundation Computer Science Advisory Board, on the National Research Council’s Computer Science and Technology Board, and as a member of the Board of Regents of the National Library of Medicine. He is a Fellow of the Association for the Advancement of Artificial Intelligence, the American Institute of Medical and Biological Engineering, and the American Association for the Advancement of Science. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences. (Source: Feigenbaum’s Turing Award citation at the ACM)

George Heilmeier became vice president at Texas Instruments in 1977. In 1983 he was promoted to Senior Vice President and Chief Technical Officer. In that role, he was responsible for all TI research, development, and engineering activities. From 1991-1996 he also served as president and CEO of Bellcore (now Telcordia), ultimately overseeing its sale to Science Applications International Corporation (SAIC). He served as the company’s chairman and CEO from 1996-1997, and afterwards as its chairman emeritus. He serves on the board of trustees of Fidelity Investments and of Teletech Holdings, and the Board of Overseers of the School of Engineering and Applied Science of the University of Pennsylvania. He is a member of the National Academy of Engineering, the Defense Science Board, and is Chairman of the Technical Advisory Board of Southern Methodist University. (sources: Wikipedia, and IEEE Global History Network)

It should be noted that Heilmeier is credited with being the inventor of the Liquid Crystal Display (LCD), a technology Alan Kay hoped to use for his Dynabook in the 1970s, and which became the basis for the digital display most of us use today.

Alan Kay left PARC on sabbatical in 1980, and never came back. He came to Atari in 1981, as Chief Scientist, doing research on interactive computing (you can see some samples of it here). He then joined Apple as a research Fellow in 1984, where he worked on improving education in conjunction with technology. He joined Disney as a Fellow and Imagineer from 1996 to 2001. He founded Viewpoints Research Institute in 2001. He then became a research fellow at Hewlett-Packard from 2002 to 2005. During that time he was a Visiting Professor at Kyoto University, an adjunct professor of Computer Science at MIT, and was involved with the development of the XO Laptop, developed by the One Laptop Per Child program at MIT. Today he is an adjunct professor of Computer Science at UCLA, and he continues his work at Viewpoints. He is also on the advisory board of TTI/Vanguard. Since 2006 Viewpoints has been working on a project, sponsored by the National Science Foundation, to reinvent personal computing. (sources: The New York Times, Chap. 12 from Howard Rheingold’s “Tools For Thought,” Wikipedia, Bio. on Alan Kay at Answers.com, VPRI: Inventing Fundamental New Computing Technologies)

Bob Taylor went on to found the Systems Research Center (SRC) at Digital Equipment Corp. (DEC) in 1984. He retired from DEC in 1996. He died on April 13, 2017.

Butler Lampson also left PARC with Taylor, and joined him at Digital Equipment Corp. He became a Fellow of the ACM in 1992. He now works at Microsoft Research, and is an adjunct professor at MIT. He became a Fellow at the Computer History Museum in 2008.

Dan Ingalls left PARC to work at Apple in 1984. Beginning in 1987 he helped run the Homestead Hotel, a family business, until 1993. He stayed on with Apple until 1996. From 1996 to 2001 he was Principal Staff Director at Disney Imagineering. He worked as a consultant for Hewlett-Packard and at Viewpoints Research Institute from 2001 to 2005. He joined Sun Labs in 2005, where he developed his newest project, called Lively Kernel. He left Sun in 2010 to become a Fellow at SAP, where he continues to work today. He continues development of his Lively Kernel Project at the Hasso Plattner Institute. (Sources: Dan Ingalls’s Linkedin page, and his Wikipedia page)

Diana Merry – After leaving Xerox in 1986, she continued her work in Smalltalk with various employers. You can see her complete work history at her LinkedIn page.

Chuck Thacker left PARC at the same time Bob Taylor did, and joined DEC as a founder of its Systems Research Center. He then joined Microsoft in 1997 to help found Microsoft Research in Cambridge, UK. He returned to the U.S., and developed the technology which was used in Microsoft’s Tablet PC. He continued working at Microsoft Research in Silicon Valley on computer architecture for many years. He died on June 12, 2017. (source: Arstechnica)

Adele Goldberg served as president of the Association for Computing Machinery (ACM) from 1984 to 1986, and continued to have a long association with the organization, taking on various roles. She continued on at PARC until 1987. She wanted to make sure that Smalltalk technology got out to a wider audience, so she worked out a technology exchange agreement with Xerox in the late 1980s and founded ParcPlace Systems, which commercialized a version of Smalltalk. She served as CEO and chairwoman of ParcPlace until 1995, when the company merged with Digitalk, another Smalltalk vendor, to become ParcPlace-Digitalk. In 1997 the company changed its name to ObjectShare. In 1999 ObjectShare was sold to Cincom, which has continued to develop and sell a Smalltalk development suite. Goldberg was inducted as a Fellow at the ACM in 1994. She co-founded Neometron in 1999, and now also works at Bullitics. She is a board member of Cognito Learning Media. She has continued her interest in education, formulating computer science courses at community colleges in the U.S. and abroad.

Charles Simonyi went to work at Microsoft in 1981. He led their development efforts to create Word, Multiplan, and Excel. He left Microsoft in 2002 to found Intentional Software, where he’s been at work on what I’d call his “Domain-Oriented Development” software and techniques.

Larry Tesler went to work at Apple in 1980, becoming Vice President of the Advanced Technology Group, and Chief Scientist. He worked on the team that developed the Lisa computer. In 1990 he led the effort to develop the Apple Newton, one of the first of what we’d recognize as a personal digital assistant. Tesler left Apple in 1997 to co-found a company called Stagecast Software. In 2001 he joined Amazon.com as its Vice President of Shopping Experience. In 2005 he joined Yahoo! as Vice President of its User Experience and Design group. In 2008 he went to work for 23andMe, a personal genetics information company. Since 2009 he’s worked as an independent consultant.

Timothy Mott left Xerox PARC to co-found Electronic Arts in 1982. In 1990 he became Director at Electronic Arts, and co-founded Macromedia, staying with them until 1994. In 1995 he co-founded Audible.com, and stayed with them until 1998. He stayed on with Electronic Arts until 2007. You can see his complete work profiles at Bloomberg BusinessWeek and CrunchBase.

Doug Brotz – It’s been difficult finding much information on him. All I have is that he joined Adobe and was a co-developer of the PostScript laser printer control language.

Andrew Birrell – He left Xerox PARC in 1984 to join the Systems Research Center at DEC, where he worked on Network Objects, a distributed object system, Virtual Paper, a system for easy online document reading, and the Personal Jukebox, the first multi-gigabyte portable audio player. DEC was acquired by Compaq in 1998. He stayed with Compaq until 2001, when he went to Microsoft Research. He retired in 2014. He died on December 7, 2016. (Source: the ACM)

Roy Levin became a senior researcher at the DEC Systems Research Center in 1984. In 1996 he became its director. In 2001 he co-founded Microsoft Research in Silicon Valley, became a Distinguished Engineer, and its Managing Director. He continues work there today.

Roger Needham worked as a consultant at Xerox PARC from 1977-1984. He then worked at DEC’s Systems Research Center from 1984-1997, and at Hitachi Advanced Research Laboratory from 1994-1997. He became a Fellow at the Royal Academy of Engineering in 1993, and of the ACM in 1994. He was a Fellow at Wolfson College, in Cambridge, UK, from 1966-2002. He joined Microsoft Research in 1997, and founded their European Research Labs. He was a longtime member of the International Association for Cryptographic Research, the IEEE Computer Society Technical Committee on Security and Privacy, and the University Grants Committee, an advisory committee to the British government. He died on March 1, 2003. (source: Wikipedia)

Michael Schroeder – I haven’t found detailed information on Schroeder, except to say that as a professor at MIT he worked on the Multics project (I covered this project in Part 2 of this series), before coming to Xerox PARC, and that after PARC he worked at DEC’s Systems Research Center. He then came to Microsoft Research in 2001, where he continues to work today. He became a Fellow of the ACM in 2004.

Clarence Ellis – After leaving Xerox PARC, he became head of the Groupware Research Program at the Microelectronics Computer Consortium in Austin, TX. He also worked at Los Alamos Labs, and Argonne National Laboratory. He held academic positions at Stanford, the University of Texas, MIT, and the Stevens Institute of Technology. In 1991 he became the chief architect of the FlowPath workflow product at Bull S.A. In 1992 he came to the University of Colorado at Boulder as a professor of computer science, and, with Gary Nutt, formed the Collaborative Technology Research Group (CTRG) to work on project workflow systems. He became professor emeritus of computer science at CU in 2010. He was on the editorial board of various journals. He was a member of the National Science Foundation (NSF) Computer Science Advisory Board, the University of Singapore ISS International Advisory Board, and the NSF Computer Science Education Committee. He died on May 17, 2014. (sources: Computer Scientists of the African Diaspora, CTRG Groupware and Workflow Research, the Daily Camera)

A note I found in Ellis’s bio. also says that he was the first African American to receive a Ph.D. in computer science, in 1969, from the University of Illinois at Urbana-Champaign. He did his post-graduate work developing the Illiac IV supercomputer, an ARPA/IPTO project, and one of the first massively parallel computer systems.

Gary Nutt worked at Xerox PARC from 1978-1980. He worked on collaboration systems at Bell Labs in Denver, CO. from 1980-81. He took a couple of executive roles at technology firms in Boulder, CO. from 1981-1986. He returned to the University of Colorado at Boulder in 1986 as a CS professor (he was previously an associate CS professor at CU Boulder from 1972-1978). While on sabbatical in 1993, he worked for Group Bull in Paris, France, developing collaboration products. In 2000 he worked as VP of Engineering for Bookface.com, managing their intellectual property. He went to Inktomi in 2001 to work on content and media distribution over the internet. He also advised on managing their intellectual property. He retired from CU Boulder in 2010, and is now professor emeritus. You can see his work history in more detail at his web page, which I’ve linked to here.

Steve Jobs – I wrote about Jobs’s work in a separate post. He died on October 5, 2011.

Bob Belleville – I couldn’t find much on him. What I have is that he joined Apple in 1981, becoming the chief engineer on the Lisa, and later the Macintosh project. He later joined Silicon Graphics’s R&D Dept. (Source: Good-bye Woz and Jobs, Making the Macintosh)

You can watch a retrospective from 2001 given by Chuck Thacker and Butler Lampson on their days at PARC, and what they did, here, if you’re interested. It gets pretty technical, and is an hour and 20 minutes long.

You can see a 2010 interview with Adele Goldberg at the Computer History Museum, where she reviews her academic, educational, and business career here. It’s about an hour and 30 minutes.

Here’s a presentation given by the University of Texas at Austin in 2010, with Mitchell Waldrop, the author of¬†“The Dream Machine,”¬†and¬†Michael Hiltzik, the author of¬†“Dealers of Lightning,”¬†along with an interview with Bob Taylor. It’s a nice bookend to the history I’ve discussed here.

Epilogue

The closing chapter in Waldrop’s book is called, “Lick’s Kids.” It talks about the people who were mentored by Licklider, either through ARPA, or at MIT, and who went out into the world to bring their ideas to life. It also talked about his waning years. He lived to see “the wheel reinvented” with microcomputers. The companies that were going gangbusters with them were repeating many of the lessons he and his researchers had learned in the 1960s, working at ARPA. The implication being that if they had merely taken time to look at what had already been learned 20 years earlier they would’ve avoided the same mistakes. He also saw the first glimmers of his vision of the Multinet come into being with the internet.

When reflecting on his life’s work, Lick was humble. He didn’t give himself much credit for creating our digital world. He thought of himself as just happening to be at the right place at the right time while some very bright people did the real work. But those who were mentored and funded by him gave him a great deal of credit. They said if it wasn’t for him, their ideas would not have gotten off the ground, and often he was the only one who could see promise in them. He was indeed in the right place at the right time, but what was important was that he was in the right position to give them the support they needed to bring their dreams into reality.

I’ll talk more about Bob Kahn, and the development of the internet, in Part 4.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

A few months ago Apple made some controversial changes to its App Store developer terms, which were seen as overly restrictive. The scuttlebutt was that the changes were aimed at banning Adobe Flash from the iPhone, since Apple had already said that Flash was not going to be included, nor allowed on the iPad. The developer terms said that all apps. which could be downloaded through the App Store had to be originally written only in C, C++, Objective-C, or Javascript compatible with Apple’s WebKit engine. No cross-compiled code, private libraries, translation or compatibility layers were allowed.¬†Apple claimed they made these changes to increase the stability and security of the iOS environment for users. I considered this a somewhat dubious claim given that they were allowing C and C++. We all know the myriad security issues that Microsoft has had to deal with as a result of using C and C++ in Windows, and in the applications that run on it. A side-effect I noticed was that the terms also banned Squeak apps., since Squeak’s VM source code is written in Smalltalk and is translated to C for cross-compilation. In addition any apps. written in it are originally written in Smalltalk (typically), and are executed by the Squeak VM, which would be considered a translation layer. The reason this was relevant was that someone had ported Squeak to the iPhone a couple years ago, and had developed several apps. in it.

The Weekly Squeak revealed today that Apple has made changes to its App Store terms, and they just so happen to allow Squeak apps. As Daring Fireball has revealed, Apple has removed all programming language restrictions. They have even removed the ban on “intermediary translation or compatibility layers”. The one caveat is they do not allow App Store apps. to download code. So if you’re using an interpreter in your app., the interpreter and all of the code that will execute on it must be included in the package. (Update 9-12-10: Justin James pointed out that the one exception to this rule is Javascript code which is downloaded and run by the WebKit engine). This still restricts Squeak some, because it would disallow users or the app. from using something like SqueakMap or SqueakSource as a source for downloading code into a Squeak image, but it allows the typical stand-alone application case to work.

John Gruber, the author of Daring Fireball, speculates that these new rules could allow developers to use Adobe’s Flash cross-compiler, which Adobe had scuttled when Apple imposed the previous restrictions. John said, “If you can produce a binary that complies with the guidelines, how you produced it doesn’t matter.” Sounds right to me.

However, looking over the other terms that John excerpts from the license agreement gives me the impression that Apple still hasn’t figured everything out yet about what it will allow, and what it won’t allow, in the future. It has this capricious attitude of, “Just be cool, bro.” So things could still change. That’s the thing that would be disappointing to me about this if I were an iPhone developer right now. I got the impression when the last license terms came out that Apple hadn’t really thought through what they were doing. While I get a better impression about the recent changes, I still have a sense that they haven’t thought everything through. To me the question is why? I guess it’s like what Tom R. Halfhill once told me, that Steve Jobs never understood developers, even back in the days when the company was young. Steve Wozniak was the resident “master developer” in those days, and he had Jobs’s ear. Once he left Apple in the 1980s that influence was gone.

Read Full Post »

I was going through my list of links for this blog (some call it a “blogroll”) and I came upon a couple items that might be of interest.

The first is, it appears that Dolphin Smalltalk is getting back on its feet again. You can check it out at Object Arts. I had reported 3 years ago that it was being discontinued. So I’m updating the record about that. Object Arts says it’s working with Lesser Software to produce an updated commercial version of Dolphin that will run on top of Lesser’s Smalltalk VM. According to the Object Arts website this product is still in development, and there’s no release date yet.

Another item is since last year I’ve been hearing about a branch-off of Squeak called Pharo. According to its description it’s a version of Squeak designed specifically for professional developers. From what I’ve read, even though people have had the impression that the squeak.org release was also for professional developers, there were some things that the Pharo dev. team felt were getting in the way of making Squeak a better professional dev. tool, mainly the EToys package, which has caused consternation. EToys was stripped out of Pharo.

There’s a book out now called “Pharo by Example”, written by the same people who wrote “Squeak by Example”. Just from perusing the two books, they look similar. There were a couple differences I picked out.

The PbE book says that Pharo, unlike Squeak, is 100% open source. There’s been talk for some time now that while Squeak is mostly open source, there has been some code in it that was written under non-open source licenses. In the 2007-2008 time frame I had been hearing that efforts were under way to make it open source. I stopped keeping track of Squeak about a year ago, but last I checked this issue hadn’t been resolved. The Pharo team rewrote the non-open source code after they forked from the Squeak project, and I think they said that all code in the Pharo release is under a uniform license.

The second difference was that they had changed some of the fundamental architecture of how objects operate. If you’re an application developer I imagine you won’t notice a difference. Where you would notice it is at the meta-class/meta-object level.

Other than that, it’s the same Squeak, as best I can tell. According to what I’ve read Pharo is compatible with the Seaside web framework.

An introduction to the power of Smalltalk

I’m changing the subject some, but I couldn’t resist talking about this, because I read a neat thing in the PbE book. I imagine it’s in SbE as well. Coming from the .Net world, I had gotten used to the idea of “setters” and “getters” for class properties. When I first started looking at Squeak, I downloaded Ramon Leon‘s Squeak image. I may have seen this in a screencast he produced. I found out there was a modification to the browser in his image that I could use to have it set up default “setters” and “getters” to my class’s variables automatically. I thought this was neat, and I imagine other IDEs already had such a thing (like Eclipse). I used that feature for a bit, and it was a good time-saver.

PbE revealed that there’s a way to have your class set up its own “setters” and “getters”. You don’t even need a browser tool to do it for you. You just use the #doesNotUnderstand message handler (also known as “DNU”), and Smalltalk’s ability to “compile on the fly” with a little code generation. Keep in mind that this happens at run time. Once you get the idea, it’s not that hard, it turns out.

Assume you have a class called DynamicAccessors (though it can be any class). You add a message handler called “doesNotUnderstand” to it:

DynamicAccessors>>doesNotUnderstand: aMessage
	"If the unknown message names one of our instance variables,
	 compile a getter for it on the fly, then re-send the original message."
	| messageName |
	messageName := aMessage selector asString.
	(self class instVarNames includes: messageName)
		ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
			^aMessage sendTo: self].
	^super doesNotUnderstand: aMessage

This code traps any message sent to a DynamicAccessors instance for which no method currently exists. It extracts the name of the message being sent, looks to see if the class (DynamicAccessors) has an instance variable by the same name, and if so, compiles a method by that name, with a little boilerplate code that just returns the variable’s value. Once the method is created, the original message is re-sent to the instance, so that the now-compiled accessor can return the value. However, if no variable exists that matches the message name, it triggers the superclass’s “doesNotUnderstand” method, which will typically activate the debugger, halting the program and notifying the programmer that the class “doesn’t understand this message.”

Assuming that DynamicAccessors has a member variable “x”, but no “getter”, it can be accessed by:

myDA := DynamicAccessors new.
someValue := myDA x

If you want to set up “setters” as well, you could add a little code to the doesNotUnderstand method that looks for a parameter value being passed along with the message, and then compiles a default method for that.
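
For what it’s worth, here is one minimal sketch of how that might look, building on the method above. The trailing-colon check and the “aValue” parameter name are my own choices, not anything from PbE:

DynamicAccessors>>doesNotUnderstand: aMessage
	"Hypothetical extension: also generate a default setter when a message
	 like #x: arrives and 'x' is one of our instance variables."
	| selector varName |
	selector := aMessage selector asString.
	(selector notEmpty and: [selector last = $:])
		ifTrue: [varName := selector copyFrom: 1 to: selector size - 1.
			(self class instVarNames includes: varName)
				ifTrue: [self class compile: selector, ' aValue', String cr, ' ', varName, ' := aValue'.
					^aMessage sendTo: self]]
		ifFalse: [(self class instVarNames includes: selector)
			ifTrue: [self class compile: selector, String cr, ' ^ ', selector.
				^aMessage sendTo: self]].
	^super doesNotUnderstand: aMessage

After that, sending myDA x: 42 works just like a hand-written setter, and the compiled method shows up in the browser like any other. A multi-keyword selector (such as at:put:) simply fails the variable-name check and falls through to the normal doesNotUnderstand behavior.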

Of course, one might desire to have some member variables protected from external access and/or modification. I think that could be accomplished by having a variable naming convention, or some other convention, such as a collection that contains member variable names along with a notation specifying to the class how certain variables should be accessed. The above code could follow those rules, allowing access to some internal values and not others. A thought I had is you could set this up as a subclass of Object, and then just derive your own objects off of that. That way this action will apply to any classes you create, which you choose to have it apply to (otherwise, just have them derive from Object).
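
As a rough sketch of that second idea, the class could publish a list of the variables it is willing to expose, and the handler could consult it before compiling anything. The class-side method name here, publicVariableNames, is my own invention, not a standard Squeak protocol:

DynamicAccessors class>>publicVariableNames
	"Only these instance variables are allowed to get auto-generated accessors."
	^#('x')

DynamicAccessors>>doesNotUnderstand: aMessage
	| messageName |
	messageName := aMessage selector asString.
	((self class publicVariableNames includes: messageName)
		and: [self class instVarNames includes: messageName])
			ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
				^aMessage sendTo: self].
	^super doesNotUnderstand: aMessage

Any variable not on the list just falls through to the normal doesNotUnderstand behavior, so it stays effectively private.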

Once an accessor is compiled, the above code will not be executed for it again, because Smalltalk will know that the accessor exists, and will just forward the message to it. You can go in and modify the method’s code however you want in a browser as well. It’s as good as if you created the accessor yourself.

Edit 5-6-2010: Heard about this recently. Squeak 4.1 has been released. From what I’ve read on The Weekly Squeak, Squeak has been 100% open source since Version 4.0. I was talking about this earlier in this article in relation to Pharo. 4.1 features some “touch up” stuff. It sounds like this makes it nicer to use. The description says it includes some first-time user features and a nicer, cleaner visual interface.

Read Full Post »

See Part 1, Part 2, Part 3, Part 4, Part 5

What happened to my dreams?

I was working for a company as a contractor a few years ago, and the job was turning sour. I quit the job several months later. While I was working I started to wonder why I had gotten into this field. I didn’t ask this question in a cynical way. I mean, I wondered why I was so passionate about it in the first place. I knew there was a good reason. I wanted that back. I clearly remembered a time when I was excited about working with computers, but why? What did I imagine it would be like down the road? Maybe it was just a foolish dream, and what I had experienced was the reality.

I started reviewing stuff I had looked at when I was a teen, and reminiscing some (this is before I started this blog). I was trying to put myself back in that place where I felt like computing had an exciting, promising future.

Coming full circle

One day while reading a blog I frequented, I stumbled upon a line of inquiry that grabbed my interest. I pursued it on the internet, and got more and more engrossed. I was delving back into computer science, and realizing that it really was interesting to me.

When I first entered college I wasn’t sure that CS was for me, but I found it interesting enough to continue with it. For the most part I felt like I got through it, rather than engrossing myself in it by choice, though it had a few really interesting moments.

The first article I came upon that really captured my imagination was Paul Graham’s essay, “Beating The Averages”. I had never seen anyone talk about programming this way before. The essay was about a company he and a business partner developed and sold to Yahoo!, called ViaWeb, how they wrote the web application for it in Lisp, and how that was their competitive advantage over their rivals. It convinced me to give Lisp a second chance.

I decided to start this blog. I came up with the name “Tekkie” by thinking of two words: techie and trekkie. A desire had been building in me to write my thoughts somewhere about technology issues, particularly around .Net development and Windows technical issues. That was my original intent when I started it, but I was going through a transformative time. My new interest was too nascent for me to see it for what it was.

I dug out my old Lisp assignments from college. I still had them. I also pulled out my old Smalltalk assignments. They were in the same batch. Over the course of a few weeks I committed myself to learning a little Common Lisp and solving one of the Lisp problems that once gave me fits. I succeeded! For some reason it felt good, like I had conquered an old demon.

Next, I came upon Ruby via a podcast. A guest the host had on referenced an online Ruby tutorial. I gave it a try and thoroughly enjoyed it. I had had a taste of programming with dynamic languages (when I actually had some good material with which to learn), and I liked it. I wondered if I could find work developing web apps. with one. I watched an impressive podcast on Rails, and so decided to learn the language and try my hand at it.

I had a conversation with an old college friend about Lisp and Ruby. He had been trying to convince me for a few years to give Ruby a try. I told him that Ruby reminded me of Smalltalk, and I was interested in seeing what the full Smalltalk system looked like, since I had never seen it. He told me that Squeak was the modern version, and it was free and openly available.

As I was going along, continuing to learn about Ruby and Rails, I was discovering online video services, which were a new thing, YouTube and Google Video. I had always wanted to see the event where Steve Jobs introduced the first Macintosh, in full. I had seen a bit of it in Triumph of The Nerds. I found it on Google Video. Great! I think one of the “related” videos it showed me was on the Smalltalk system at Xerox PARC. I watched that with excitement. Finally I was going to see this thing! I think one of the “related” videos there was a presentation by Alan Kay. The name sounded familiar. I watched that, and then a few more of his presentations. It gradually dawned on me that this was the “mystery man” I had seen on that local cable channel back in 1997!

I had heard of Kay years ago when I was a teenager. His name would pop up from time to time in computer magazine articles (no pictures though), but nothing he said then made an impression on me. I remember using a piece of software he had written for the Macintosh classic, but I can’t for the life of me remember what it was.

I had heard about the Dynabook within a few years of when I started programming, but the concept of it was very elusive to me. It was always described as “the predecessor to the personal computer” or something like that. For years I thought it had been a real product. I wondered, “Where are these Dynabooks?” Back in the 1980s I remember watching an episode of The Computer Chronicles and some inventor came on with an early portable Mac clone he called the “Dynamac”. I thought maybe that had something to do with it…

“What have I been doing with my life?”

So I watched a few of Kay’s presentations. Most of them were titled The Computer Revolution Hasn’t Happened Yet. I guess he was on a speaking tour with this subject. He had given them all between 2003 and 2005. And then I came upon the one he did for ETech 2003:

This blew me away. By the end of it I felt devastated. There was a subtle, electric feeling running through me, and at the same time a feeling of my own insignificance. I just sat in stunned silence for a moment. What I had seen and heard was so beautiful. In a way it was as if fond hopes I had hardly dared to hold onto had come true. I saw Kay demonstrate Squeak, and I saw the meaning of what he was doing with it. I saw him demonstrate Croquet, and it reminded me of a prediction I had made about 11 years earlier, that 3D user interfaces would be created. It was so gratifying to see a prototype of that in the works. What amazed me is that it was all written in Squeak (Smalltalk).

I remembered that when I was a teenager I wished that software development would work like what I saw with Squeak (EToys), that I could imagine what I wanted and make it come to life in front of me quickly. I did this through my programming efforts back then, but it was a hard, laborious process. Kay’s phrase rang in my mind: “Why isn’t all programming like this?”

I also realized that nothing I had done in the past could hold a candle to what I had just seen, and for some reason I really cared about that. I had seen other amazing feats of software done in the past, but I used to just move on, feeling it was out of my realm. I couldn’t shake this demo. About a week later I was talking with a friend about it and the words for that feeling from that moment came out: “What have I been doing with my life?” The next thing I felt was, “I’ve got to find out what this is!”

Alan Kay captured the experience well in “The Early History of Smalltalk”:

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results. [my emphasis in bold]

Kay’s presentation of the ideas in Sketchpad and the work by Engelbart was also enlightening. Sketchpad was covered briefly in the textbook for a computer graphics course I took in college. It didn’t say much, just that it was the first interactive graphical environment that allowed the user to draw shapes with a light pen, and that it was the predecessor to CAD systems. Kay helped me realize through this presentation, and other materials he’s produced, that the meaning of Sketchpad was deeper than that. It was the first object-oriented system, and it had a profound influence on the design of the Smalltalk system.

I didn’t understand very much of what Kay said on the first viewing. I revisited it several times after having time to think about what I had seen, read some more, and discuss these ideas with other people. Each time I came back to it I understood what he meant a little more. Even today I get a little more out of it.

After having the experience of watching it the first time, I reflected on the fact that Kay had shown a clip of the very same “mouse demo” with Douglas Engelbart that I remembered seeing in The Machine That Changed The World. I thought, “I wonder if I can find the whole thing.” I did. I could see a wonderful post on modern computer history coming together, and so I wrote one, called “Great moments in modern computer history”.

Reacquiring the dream

So with this blog I have documented my journey of discovery. The line of inquiry I followed has brought me what I sought earlier: to get back what made me excited to get into this field in the first place. It feels different though. It’s not the same kind of adolescent excitement I used to have. Through this study I remembered my young, naive fantasies about positive societal change via computer technology, and I’ve been gratified that Alan Kay wants something similar. I still like the idea that we can change, and realize the computer’s full potential, and that those benefits I imagined earlier might still be possible. I have a little better sense now (emphasis on “little”) of the kind of work that will be required to bring that about.

Having experienced what I have in the business world, I now know that technology itself will not bring about that change. Reading Lewis Mumford helped me realize that. I found out about him thanks to Alan Kay’s reading list, and his admonition that technologists should read. The culture is shaping the computer, and in turn it is shaping us, and at least in the business world we think of computers the way our forebears thought of machinery in factories. Looking back on my experience in the work world, that was one of my disappointments.

About “The Machine That Changed The World”

The series I mentioned earlier, “The Machine That Changed The World”, showed that this transformation has happened in some ways, but I contend that it’s spotty. Alan Kay has pointed to two areas where promising progress has been made. The first is in the scientific community. He said several years ago that the computer revolution is happening in science today. He’s said that the computer has “revolutionized” science. The next place is in video games, though he hasn’t been that pleased with the content. What they have right is that they create interactive worlds, and the designers are very cognizant of the players’ “flow”, making interaction with the simulated environment as easy and natural as possible, considering current controller technology.

Looking back on this TV series now I can see its significance. It tried to blend a presentation of the important ideas that were discovered about computing with the reality of how computers were perceived and used. It showed the impact they had on our society, and how perceptions of them changed over time.

In the first episode, “Great Brains”, the narrator puts the emphasis on the computer being a new medium:

5,000 years ago mankind invented writing, a way to record and communicate ideas. These simple marks on clay and paper changed the world, becoming the cornerstone of our intellectual and commercial lives. Today we may be witnessing the emergence of a new medium whose influence may one day rival that of writing.

A modern computer fits on a desk, is affordable and simple enough for a child to play on. But what to the child is a computer game, is to the computer just patterns of voltages. Today we take it for granted that such patterns can help architects to draw, scientists to model complex phenomena, musicians to compose, and even aid scholars to search the literature of the past. Yet a machine with such powers of transformation is unlike any machine in history.

Computers don’t just do things, like other machines. They manipulate ideas. Computers conjure up artificial universes, and even allow people to experience them from the inside, exploring a new molecule, or walking through an unbuilt building.

Doron Swade of the London Science Museum adds (the first part of this quote was unintelligible. I’m making a guess that I think makes sense here):

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality. And it seems that that has a close affinity with the way minds and consciousness work. The mind manipulates processes, images, ideas, notions. Exactly how it does that is of course not yet known. But there seems to be a strong kindred relationship between the manner in which computers process information and the analogy that that has with the way minds, and thinking, and consciousness seem to work. So they have a very special place, because they’re the closest we have to a mind-like machine that we have yet had.

The final episode, “The World At Your Fingertips”, ends by bringing the series full circle, back to the idea that the computer is a new medium, in a way that I think is beautiful:

The computer is not a machine in the traditional sense. It is a new medium. Perhaps predicting the future of computers is as hard as it would have been to predict the consequences of the marks Sumerians made on clay 4,000 years ago.

Paul Ceruzzi, a computer historian, says:

It’s ironic when you look at the history of writing, to find that it began as a utilitarian method of helping people remember how much grain they had. From those very humble, utilitarian beginnings came poetry, and literature, and all the kinds of wonderful things that we associate with writing today. Now we bring ourselves up to the invention of the computer. The very same thing is happening. It was invented for a very utilitarian, prosaic purpose of doing calculations, relieving the tedium and drudgery of cranking out numbers for insurance companies, or something like that. And now we begin to see this huge culture that’s grown up of people who are discovering all the things you can do by playing with the computer. We may see a time in the not too distant future when people will look at the computer’s impact on society, and they’ll totally forget its humble beginnings as a calculating device, but rather as something that enriches their culture in undreamed of ways just as literature and art is perceived today in this world.

This TV series was made at a time before the internet became popular. The focus was on desktop computers and multimedia. As you can see there was a positive outlook towards what the computer could become. Since then I’m sad to say the computer has faded into the background in favor of a terminal metaphor, one that is not particularly good at being an authoring platform, though there are a few good efforts being made at changing that. What this TV series brought out for me is that we have actually taken some steps backwards with the form that the internet has taken.

Perhaps I am closing a chapter with this series of posts. I’ve been meaning to move on from just talking about the ideas I’ve been thinking about, to something more concrete. What form that will take I’m not sure yet. I’ll keep writing here about it.

Read Full Post »

My 4-year-old Windows laptop died on me about a week ago. I think it was the hard drive controller. I set up my Windows desktop machine, which I hadn’t used in a year or two. That took some work, getting it set up the way I wanted, getting all the security updates, and updating drivers. Anyway, I’m shopping for a new laptop now.

In the news, I heard last week that Circuit City would be closing stores or laying off employees. Something like that. Yesterday I heard they’re filing for bankruptcy. It’s reminiscent of what happened to CompUSA almost a year ago, except that the company isn’t being liquidated. They’re trying to restructure. Nevertheless it shows another decline in the PC/electronics space.

Best Buy seems to be doing okay. They’re opening a store in my town soon…in the same building where our CompUSA used to be. Poetic, no?

And then there’s Staples, another place where you can get a Windows computer.

I feel like Microsoft still doesn’t have its act together. It seems like they started going off the rails in terms of their technology offerings in 2006. I’ve been talking to friends and reading articles from technical folks, and it sounds to me like Microsoft ended support for Windows XP too soon. I know they’re always anxious to retire a version after so many years, but from what I’m hearing, getting a $600 (or less) Windows laptop is almost pointless now. Vista is a CPU and memory hog, and it runs applications slower than XP does unless you have a high-end machine costing $1,000+.

I’ve been eyeing a MacBook, considering giving it a chance. I’ve been hearing good things about Macs. I’m just cautious at the moment. I’ve been doing some calculations and it looks like I could actually get a decent, current MacBook, get Windows XP Pro to go with it, and spend slightly LESS than I paid for my last Windows laptop! So that may be an option.

I watched the online presentation of Leopard on the Apple website and I was impressed. A feature that brought back memories for me was its ability to set up “spaces”. You can set up multiple desktops and switch between them, allowing you to group your work into different projects you’re working on. The guy demonstrating it showed how it worked. He set up four desktops, and had something running in each. Then he switched between them. You could watch as one set of apps. (one desktop) “slid” off the screen, and the next set (the other desktop) “slid” on.

It reminded me of a system utility I used to run on a “Fat Mac” at my local library in the mid-1980s called “Switcher”. The classic Macs could not multitask. You could only load and run one application at a time. Andy Hertzfeld came up with a way to load multiple apps. in memory at once on a 512K Mac, and you could task-switch between them. You could even go back to the desktop if you wanted to, without quitting your currently running application. Bringing another app. into memory was easy. You just found its icon on the desktop and ran it like you would any other. When you task-switched, you could literally watch as one application “slid” off the screen, and the next one “slid” on. The one displayed on the screen would run. The others would go into suspended animation.

The reason I liked it at the time was that most Macs only had floppy drives, and loading apps. was slow. If you wanted to copy and paste something from one app. to the next, you literally had to select what you wanted and “copy” in one app., quit out of it, which would take you back to the desktop, then run the next app. and paste the item in. People kind of tolerated this back then because hardly any microcomputers multitasked. “Switcher” made things easier. You could copy from the source, select an app. that was already pre-loaded, and then paste the item into it. Since OS X multitasks you can have off-screen stuff running at the same time.

I remember in some talk I watched Alan Kay give (I believe it was his ETech 2003 presentation) he said that with the Smalltalk system at Xerox PARC they had the ability to create multiple desktops, because they figured people would want to group activities into separate spaces. It was part of the design. You can do the same thing in Squeak today using the Morphic interface, though now the multiple desktops are called “Projects”. Interesting how things are catching up to Smalltalk. 🙂

Read Full Post »

Update on Seaside hosting

I deleted a post I wrote in the past on Seaside hosting, since it was out of date. I brought one I wrote on June 10, 2007 up to date with new information, and gave it a current posting date. There are 4 comments on it that are dated from around June 10 of last year.

=========================

It looks like Seaside Parasol, the Canadian commercial Seaside hosting service set up by Chris Cunnington, is gone. Too bad. It would’ve been nice to see that take off. So we’re back to just having Seaside-Hosting, a free service based in Switzerland. They offer free hosting for non-commercial applications. This service does not host databases like MySQL or PostgreSQL. You can however store data inside your Squeak image. You can also store external files on the server filesystem, using them as support for your site (like graphics images), though they don’t want you using it for hosting files for download. From what I’ve heard, the people who set up Seaside-Hosting will try to accommodate you for a fee if you have requirements that go beyond what they typically offer. There’s an e-mail address you can use to contact them for this purpose on their home page.

A new feature they’ve added since I last looked is that you can now set up your own domain name (using a domain registration service) and use that instead of their seasidehosting.st domain.

There are a couple of screencasts here and here by Lukas Renggli, on how to register and use the features of the Seaside-Hosting service.

Where can I get Seaside?

There are a couple Seaside developers who have created their own preconfigured Squeak images you can use:

  • Ramon Leon
  • Damien Cassou’s “Squeak web” image. Look for “squeak-web” on this page. There you will find the image, and information on how to use it.

If you go to Ramon’s page, you get the Squeak VM and his preconfigured image in one Zip file. Ramon’s image contains a bunch of extras. I’ve used his image in the past, though it’s been a while, so some of this may have changed. In addition to Seaside it may contain Monticello (a version control system for Squeak), Magritte (a meta-language in Smalltalk for creating forms, reports, and graphs, which works with Seaside), Glorp (an object-relational framework for interfacing with an external database), SUnit (a unit testing framework), Albatross (a web app. testing framework, like Watir in Ruby), Mondrian (an app. that shows you graphically how classes are related, and their properties), Toothpick (a configurable logging library), SOAP-Client for interfacing with XML web services (including .Net), Scriptaculous (an AJAX framework for creating slick web UIs), VB-Regex (a regular expression pattern matching library), and some XML parsers/frameworks, among other things.

Damien Cassou’s image contains packages for syntax highlighting, code completion, different code browsers, Seaside, AIDA/Web (another web framework/application server), Magritte, and Pier (a content management system).

One thing about Ramon’s image is it has a few customizations for running on Windows. I think it’ll run on other platforms okay, but expect to get some error messages when you first load it up.

Storing data in the image

Since Seaside-Hosting does not offer database hosting, if you want to be able to store data, you should do it within the image. One possibility I’ve mentioned before is MinneStore, an object-oriented database written entirely in Smalltalk. I’m not sure how it works. It might just store data inside the image.

A simple way of storing read-only data is to create a class, and then create a class variable within it (equivalent to a static variable inside a class in languages like C++, C#, and Java). You can create your own data structure and put it in this variable using “class side” methods (again, equivalent to static methods in traditional languages). It will remain persistent. You do not need to instantiate an object to access it. It is globally available. If you want the data structure to be updateable and accessible from multiple Seaside sessions, then you’re going to have to use semaphores to make sure operations on the data structure are atomic. That’s when it can get hairy.
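To make that concrete, here’s a rough sketch of what I mean. The class and selectors are made up for the example, and I’m writing the class-side methods in the “ClassName class >> selector” style you see in books; in the real system you’d type them into the class side of a browser. The shared collection lives in a class variable, and a Semaphore guards updates so concurrent Seaside sessions don’t step on each other:

Object subclass: #SiteData
    instanceVariableNames: ''
    classVariableNames: 'Entries AccessLock'
    poolDictionaries: ''
    category: 'MyApp'

SiteData class >> entries
    "Lazily create the shared collection held in the Entries class variable."
    ^Entries ifNil: [Entries := OrderedCollection new]

SiteData class >> accessLock
    "One mutual-exclusion semaphore shared by all sessions."
    ^AccessLock ifNil: [AccessLock := Semaphore forMutualExclusion]

SiteData class >> addEntry: anObject
    "Serialize writes so each update is atomic across Seaside sessions."
    self accessLock critical: [self entries add: anObject]

From a Seaside component you’d then just send SiteData addEntry: something, and the data sticks around for as long as the image does (and gets written to disk whenever the image is saved).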

Another solution is SandstoneDB, a Smalltalk persistence framework Ramon Leon developed recently. It’s designed to make object persistence as easy as possible, getting around the limitations of many object databases.

Read Full Post »

In my guest post on Paul Murphy’s blog, called “The PC vision was lost from the get go,” I spoke to the concept, which Alan Kay had going back to the 1970s, that the personal computer is a new medium, like the book at the time the technology for the printing press was brought to Europe, around 1439 (I also spoke some about this in “Reminiscing, Part 6”). Kay made this realization upon witnessing Seymour Papert’s Logo system being used with children. More recently, Kay has spoken with 20/20 hindsight about how, as happened with the book historically, people have been missing what’s powerful about computing, because like the early users of the printing press we’ve been automating and reproducing old media onto the new medium. We’re even automating old processes with it that are meant for an era that’s gone.

Kay spoke about the evolution of thought about the power of the printing press in one or two of his speeches entitled The Computer Revolution Hasn’t Happened Yet. In them he said that after Gutenberg brought the technology of the printing press to Europe, the first use found for it was to automate the process of copying books. Before the printing press, books were copied by hand. It was a laborious process, and it made books expensive. Only the wealthy could afford them. In a documentary mini-series that came out around 1992 called “The Machine That Changed The World,” I remember an episode called “The Paperback Computer.” It said that there were such things as libraries, going back hundreds of years, but that all of the books were chained to their shelves. Books were made available to the public, but people had to read the books at the library. They could not check them out as we do now, because they were too valuable. Likewise today, with some exceptions to promote mobility, we “chain” computers to desks or some other anchored surface to secure them, because they’re too valuable.

Kay has said in his recent speeches that there were a few rare people during the early years of the printing press who saw its potential as a new emerging medium. Most of the people who knew about it at the time did not see this. They only saw it as, “Oh good! Remember how we used to have to copy the Bible by hand? Now we can print hundreds of them for a fraction of the cost.” They didn’t see it as an avenue for thinking new ideas. They saw it as a labor saving device for doing what they had been doing for hundreds of years. This view of the printing press predominated for more than 100 years afterward. Eventually a generation grew up not knowing the old toils of copying books by hand. They saw that with the printing press’s ability to disseminate information and narratives widely, it could be a powerful new tool for sharing ideas and arguments. Once literacy began to spread, what flowed from that was the revolution of democracy. People literally changed how they thought. Kay said that before this time people appealed to authority figures to find out what was true and what they should do, whether they be the king, the pope, etc. When the power of the printing press was realized, people began appealing instead to rational argument as the authority. It was this crucial step that made democracy possible. This alone did not do the trick. There were other factors at play as well, but this I think was a fundamental first step.

Kay has believed for years that the computer is a powerful new medium, but in order for its power to be realized we have to perceive it in such a way that enables it to be powerful to us. If we see it only as a way to automate old media (text, graphics, animation, audio, video) and old processes (data processing, filing, etc.), then we aren’t getting it. Yes, automating old media and processes enables powerful things to happen in our society via efficiency. It further democratizes old media and modes of thought, but that only addresses the tip of the iceberg. This brings the title of Alan Kay’s speeches into clear focus: The computer revolution hasn’t happened yet.

Below is a talk Alan Kay gave at TED (Technology, Entertainment, Design) in 2007, which I think gives some good background on what he would like to see this new medium address:

“A man must learn on this principle, that he is far removed from the truth” – Democritus

Squeak in and of itself will not automatically get you smarter students. Technology does not really change minds. The power of EToys comes from an educational approach that promotes exploration, called constructivism. Squeak/EToys creates a “medium to think with.” What the documentary “Squeakers” makes clear is that EToys is a tool, like a lever, that makes this approach more powerful, because it enables math and science to be taught better using this technique. (Update 10/12/08: I should add that whenever the nature of Squeak is brought up in discussion, Alan Kay says that it’s more like an instrument, one with which you can “mess around” and “play,” or produce serious art. I wrote about this discussion that took place a couple years ago, and said that we often don’t associate “power” with instruments, because we think of them as elegant but fragile. Perhaps I just don’t understand at this point. I see Squeak as powerful, but I still don’t think of an instrument as “powerful”. Hence my use of the term “tool” in this context.)

From what I’ve read in the past, constructivism has gotten a bad reputation, I think primarily because it’s fallen prey to ideologies. The goal of constructivism as Kay has used it is not total discovery-based learning, where you just tell the kids, with no guidance, “Okay, go do something and see what you find out.” What this video shows is that teachers who use this method lead students to certain subjects, give them some things to work with within the subject domain, things they can explore, and then set them loose to discover something about them. The idea is that by the act of discovery by experimentation (i.e., play) the child learns concepts better than if they are spoon-fed the information. There is guidance from the teacher, but the teacher does not lead them down the garden path to the answer. The children do some of the work to discover the answers themselves, once a focus has been established. And the answer is not just “the right answer,” as is often called for in traditional education, but what the student learned and how the student thought in order to get it.

Learning to learn; learning to think; learning the critical concepts that have gotten us to this point in our civilization is what education should be about. Understanding is just as important as the result that flows from it. I know this is all easier said than done with the current state of affairs, but it helps to have ideals that are held up as goals. Otherwise what will motivate us to improve?

What Kay thinks, and what the results he’s seen have convinced him of, is that the computer can enable children of young ages to grasp concepts that would be impossible for them to get otherwise. This keys right into a philosophy of computing that J.C.R. Licklider pioneered in the 1960s: human-computer symbiosis (“man-computer symbiosis,” as he called it). Through a “coupling” of humans and computers, the human mind can think about ideas it had heretofore not been able to think. The philosophers of symbiosis see our world becoming ever more complex, so much so that we are at risk of it becoming incomprehensible and getting away from us. I personally have seen evidence of that in the last several years, particularly because of the spread of computers in our society and around the world. The linchpin of this philosophy is, as Kay has said recently, “The human mind does not scale.” Computers have the power to make this complexity comprehensible. Kay has said that the reason the computer has this power is that it’s the first technology humans have developed that is like the human mind.

Expanding the idea

Kay has been focused on using this idea to “amp up” education, to help children understand math and science concepts sooner than they would in the traditional education system. But this concept is not limited to children and education. This is a concept that I think needs to spread to computing for teenagers and adults. I believe it should expand beyond the borders of education, to business computing, and the wider society. Kay is doing the work of trying to “incubate” this kind of culture in young students, which is the right place to start.

In the business computing realm, if this is going to happen we are going to have to view business in the presence of computers differently. I believe for this to happen we are going to have to literally think of our computers as simulators of “business models.” I don’t think the current definition of “business model” (a business plan) really fits what I’m talking about. I don’t want to confuse people. I’m thinking along the lines of schema and entities, forming relationships which are dynamic and therefore late-bound, but with an allowance for policy to govern what can change and how, with the end goal of helping business be more fluid and adaptive. Tying it all together, I would like to see a computing system that enables the business to form its own computing language and terminology for specifying these structures, so that as the business grows it can develop “literature” about itself, which can be used both by people who are steeped in the company’s history and current practices, and those who are new to the company and trying to learn about it.

What this requires is computing (some would say “informatics”) literacy on the part of the participants. We are a far cry from that today. There are millions of people who know how to program at some level, but the vast majority of people still do not. We are in the “Middle Ages” of IT. Alan Kay said that Smalltalk, when it was invented in the 1970s, was akin to Gothic architecture. As old as that sounds, it’s more advanced than what a lot of us are using today. We programmers, in some cases, are like the ancient pyramid builders. In others, we’re like the scribes of old.

This powerful idea of computing, that it is a medium, should come to be the norm for the majority of our society. I don’t know how yet, but if Kay is right that the computer is truly a new medium, then it should one day become as universal and influential as books, magazines, and newspapers have been historically.

In my “Reminiscing” post I referred to above, I talked about the fact that even though we appeal more now to rational argument than we did hundreds of years ago, we still get information we trust from authorities (called experts). I said that what I think Kay would like to see happen is that people will use this powerful medium to take information about some phenomenon that’s happening, form a model of it, and by watching it play out, inform themselves about it. Rather than appealing to experts, they can understand what the experts see, but see it for themselves. By this I mean that they can manipulate the model to play out other scenarios that they see as relevant. This could be done in a collaborative environment so that models could be checked against each other, and most importantly, the models can be checked against the real world. What I said, though, is that this would require a different concept of what it means to be literate, and a different model of education and research.

This is all years down the road, probably decades. The evolution of computing moves slowly in our society. Our methods of education haven’t changed much in 100 years. The truth is the future up to a certain point has already been invented, and continues to be invented, but most are not perceptive enough to understand that, and “old ways die hard,” as the saying goes. Alan Kay once told me that “the greatest ideas can be written in the sky” and people still won’t understand, nor adopt them. It’s only the poor ideas that get copied readily.

I recently read that the Squeakland site has been updated (it looks beautiful!), and that a new version of the Squeakland version of Squeak has been released on it. They are now just calling it “EToys,” and they’ve dropped the Squeak name. Squeak.org is still up and running, and they are still making their own releases of Squeak. As I’ve said earlier, the Squeakland version is configured for educational purposes. The squeak.org version is primarily used by professional Smalltalk developers. Last I checked it still has a version of EToys on it, too.

Edit: As I was writing this post I went searching for material for my “programmers” and “scribes” reference. I came upon one of Chris Crawford’s essays. I skimmed it when I wrote this post, but I reread it later, and it’s amazing! (Update 11/15/2012: I had a link to it, but it’s broken, and I can’t find the essay anymore.) It caused me to reconsider my statement that we are in the “Middle Ages” of IT. Perhaps we’re at a more primitive point than that. It adds another dimension to what I say here about the computer as medium, but it also expounds on what programming brings to the table culturally.

Here is an excerpt from Crawford’s essay. It’s powerful because it surveys the whole scene:

So here we have in programming a new language, a new form of writing, that supports a new way of thinking. We should therefore expect it to enable a dramatic new view of the universe. But before we get carried away with wild notions of a new Western civilization, a latter-day Athens with modern Platos and Aristotles, we need to recognize that we lack one of the crucial factors in the original Greek efflorescence: an alphabet. Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain — after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don’t have a damn thing to say to the world — they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglyphics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharaohs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones — and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

You can see the same thing in the output of the current generation of programmers, especially in the field of computer games. It’s lifeless. Sure, their stuff is technically very good, but it’s like the Egyptian statuary: technically very impressive, but the faces stare blankly, whereas Greek statuary ripples with the power of life.

What we need is a means of democratizing programming, of taking it out of the soulless hands of the programmers and putting it into the hands of a wider range of talents.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

Paul Murphy saw fit to give me another guest spot on his blog, for a post called “The tattered history of OOP”, talking about the history of OOP practice, where the idea came from, and how industry has implemented it. If you’ve been reading my blog this will probably be review. I’m just spreading the message a little wider.

Paul has an interesting take on the subject. He thinks OOP is a failure in practice because, with the way it’s been implemented, it’s just another way to make procedure calls. I agree with him for the most part. He’s informed me that he’s going to put up another post soon that gets further into why he thinks OOP is a failure. I’ll update this post when that’s available.

In short, where I’m coming from is that OOP, in the original vision that was created at Xerox PARC, still has promise. The current implementation that most developers use has architectural problems that the PARC version did not, and it still promotes a mode of thinking that’s compatible with procedural programming.
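To give a small taste of what I mean by the messaging style, here’s a tiny Squeak workspace sketch (the contents of the collection are arbitrary; the point is that each receiver decides for itself, at run time, how to respond to the message it’s sent):

| things |
things := OrderedCollection new.
things add: 3 / 4; add: 'a string'; add: (1 to: 5).
"Each object answers #printString in its own way; the sending code
 neither knows nor cares what kind of object it is talking to."
things do: [:each | Transcript show: each printString; cr].

It’s a trivial example, but that kind of late-bound message send is the core of what the PARC version was built around.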

Update 6/3/08: Paul Murphy’s response to my article is now up, called “Oddball thinking about OOP”. He argues that OOP is a failure because it’s an idea that was incompatible with digital computing to begin with, and is better suited to analog computing. I disagree that this is the reason for its failure, but to each their own.

Update 8/1/09: I realized later I may have misattributed a quote to Albert Einstein. Paul Murphy talked about this in a post sometime after “The tattered history of OOP” was published. I said that insanity is, “Doing the same thing over and over again and expecting a different result.” Murphy said he realized that this was misattributed to Einstein. I did a little research myself and it seems like there’s confusion about it. I’ve found sites of quotations that attribute this saying to Einstein. On Wikipedia though it’s attributed to Rita Mae Brown, a novelist, who wrote this in her 1983 book, Sudden Death. I don’t know. I had always heard it attributed to Einstein, though I agree with the naysayers that no one has produced a citation that would prove he said it.

Read Full Post »

The 2nd edition of Squeak by Example is out now (h/t to The Weekly Squeak for this). Like the first edition, it’s a book released as a PDF under the Creative Commons Share-Alike license. You can also get a hardcopy edition of it for $20.10 USD. They also welcome donations. Another tidbit of news is that the first edition was downloaded 20,000 times. Cool!

I wrote about the first edition here, along with links to tutorials that I thought readers would find useful after perusing it and learning some basics.

Read Full Post »

I found this item on reddit. An entry in the GNU Smalltalk FAQ, under the heading “Does GNU Smalltalk run Seaside?” says that it will support Seaside in a release scheduled for March 8th. Now, the FAQ posting that says this is dated June 20, 2007. I’ve asked about this on reddit. Has the release date slipped any since then? Anyone know?

Anyway, this is good news. It gives those who want to use Seaside more options. GNU Smalltalk offers operating options that some find more convenient for what they’re trying to do. For example it offers good scripting support. Of course, I’m not sure if the scripting support will be compatible with Seaside. I guess we’ll see.

I used GNU Smalltalk a bit in college many years ago. It was the first implementation of Smalltalk I encountered. It was really my first exposure to OOP. We used its scripting support then–just the Smalltalk language. We were told about the Smalltalk system that was developed at Xerox, but we did not use the whole system with the GUI, browsers, workspaces, transcript windows, etc. It was very compatible with the “Unix way”. You wrote your code in vi, Emacs, whatever, and then ran it through the interpreter on the command line. The code we wrote looked similar to a file-out of Squeak code. The metadata was a little simpler. One oddity (I think) was if you wanted to print something out to the console you had to wrap it inside of an array and use its printNl method. For example:

#('Hello world') printNl

Yep, we even had to wrap strings. Even so I thoroughly enjoyed the experience at the time. Ah, memory lane. 🙂
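For what it’s worth, here’s roughly what a standalone GNU Smalltalk script looks like these days, a minimal sketch using the newer 3.x bracket syntax for defining classes (the Greeter class is made up for the example, and I haven’t run this exact file, so treat it as the general shape rather than gospel):

"greeter.st -- run it from the command line with:  gst greeter.st"
Object subclass: Greeter [
    greet [ ^'Hello from GNU Smalltalk' ]
]
Greeter new greet printNl.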

Edit 2/27/08: I got a little more information. It turns out the FAQ posting about this on the GNU Smalltalk site originally said something vague about how “Seaside could be made to work with it”, but was updated just recently to say that this implementation will support Seaside on March 8th. So the release date is a firm one.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »
