

I’ve had it on my mind to post this for a while, though I do it with hesitation, since this has been so politicized. I wrote about the ban on incandescents in 2008, in “Say goodbye to incandescent bulbs. Say hello to the ‘new normal’.”

I found this interesting nugget as I was writing this. Back then, I had an idea for how to make CFLs safer. I don’t know why no one thought of it before I did. From a consumer safety standpoint, it would seem like an obvious solution:

[M]aybe they could put a plastic bulb around the coil that would absorb the shock, or at least contain the contents if the coil broke from jarring. This would require them to make the coils smaller, but the added safety would be worth it.

Several months ago, while shopping for bulbs at the grocery store, I noticed they were selling CFLs with a hard plastic bulb, just like I suggested! Too little, too late, I guess. I am happy to say that CFLs are going away at the end of this year.

I actually stocked up on incandescents on clearance. I haven’t been using CFLs for most of my lighting. I ended up using them for a couple overhead lights, just because an inspector came through a couple years ago to make sure the apartment complex I live in was “smart regs-compliant,” and I didn’t want to embarrass my landlord by creating any excuse to call the dwelling non-compliant. “I love my minders…”

This doesn’t mean incandescents are coming back. They’re still banned by legislation passed in 2007, but at least there are now better lighting options. For a few years now, those who prefer incandescents have been able to get halogen bulbs that are the same size as the old incandescents, and just as bright. They cost more, but they’re supposed to use less electricity and last longer. What I care about is that they operate on the same principle: they use a filament. So if you break one, all you have to do is clean up the glass, and vacuum or mop. No toxic spill. Yay!

As the guys in the video say, there are now very good LED lighting options, which are also non-toxic to you.

So I see this as a reason to celebrate. The market actually succeeded in getting rid of an expensive, hazardous form of lighting that, it seems to me, our government tried to force on us. I wish the ban would be lifted. I don’t see the point in it. I know people will say it reduces pollution, because we can lower the amount of electricity we use, but as I explained years ago, this is a dumb way to go about it. Lighting is not the major source of electricity usage. Our air conditioning is the elephant in the room each summer!

Aaron Renn had a good article on what was really going on with the light bulb ban, in “The Return of the Monkish Virtues,” written in 2012. It’s interesting reading. The idea always was to force people to think about their impact on the environment, not to actually do anything substantial about it.


Peter Foster, a columnist for the National Post in Canada, wrote what I think is a very insightful piece on a question that’s bedeviled me for many years, in “Why Climate Change is a Moral Crusade in Search of a Scientific Theory.” I have never seen a piece of such quality published on Breitbart.com. My compliments to them. It has its problems, but there is some excellent stuff to chew on, if you can ignore the gristle. One way I think I could have improved on what he wrote would have been to go deeper into identifying the philosophies that are used to “justify the elephantine motivations.” As it is, Foster uses readily identifiable political labels, “liberal,” “left,” etc., as identifiers. This will please some, and piss off others. I really admired Allan Bloom’s efforts to get beyond these labels: to understand and identify the philosophies, the philosophers who came up with them, the mistakes he perceived were made in “translating” them into an American belief system (mistakes that were at the root of his complaints), and the consequences that have been realized through their adherents. There’s much to explore there.

Foster also “trips” over an analogy that doesn’t really apply to his argument (though he thinks it does), in his reference to the supposed motivations for believing Earth was the center of the Universe, though the stubbornness of pre-Copernican thinking, to which he also refers, does apply.

He says a lot in this piece, and it’s difficult to fully grasp it without taking some time to think about what he’s saying. I will try to make your thinking a little easier.

He begins with trying to understand the reasons for, and the motivational consequences of, economic illiteracy in our society. He uses a notion from evolutionary psychology (perhaps from David Henderson, to whom he refers): that our brains have been shaped in part by hundreds of thousands of years (perhaps millions; who knows how far back it goes) of tribal society, and that our natural perceptions of human relations, regarding power and wealth, and what is owed as a consequence of social status, are influenced by our evolutionary past.

Here is a brief video from Reason TV on the field of evolutionary psychology, just to get some background.

Edit 2/16/2016: I’ve added 3 more paragraphs relating to another video I’m adding, since it relates more specifically to this topic.

The video, below, is intriguing, but I can’t help but wonder if the two researchers, Cosmides and Tooby, are reading current issues they’re hearing about in political discourse into “stone age thinking” unjustifiably, because how do we know what stone age thinking was? I have to admit, I have no background in anthropology or archaeology at this point. I might need that to give more weight to this inference. The topic they discuss here, about a common misunderstanding of market economics, relates back to something they discussed in the above video, about humans trying to detect and form coalitions, and how market mechanisms have the effect of appearing to interfere with coalition-building strategies. They say this leads to resentment against the market system.

What this would seem to suggest is that the idea that humans are drastically changing our planet’s climate system for the worse is a nice salve for that desire for coalition building, because it leads one to a much larger inference: that market economics (the perceived enemy of coalition strategies) is a problem that transcends national boundaries. The constant mantra of warmists that “we must act now to solve it” appears to demand a coalition, which to those who feel disconnected by markets feels very desirable.

One of the most frequent desires I’ve heard from those who believe that we are changing our climate for the worse is that they only want to deal with market participants “who care about me and my community.” What Cosmides and Tooby say is that this relates back to our innate desire to build coalitions, and is evidence that these people feel the market system is interfering with, or not cooperating in, that process. What they say, as Foster does, is that this reflects a lack of understanding of market economics, and a total ignorance of the benefits its effects bring to humanity.

Foster says that our modern political and economic system, which frustrates tribalism, has only been a brief blink of an eye in our evolutionary experience, by comparison. So we still carry an evolutionary heritage that forms our perceptions of fairness and social survival, and it emerges naturally in our perceptions of our modern systems. He says this argument is controversial, but he justifies using it by saying that there is an apparent pattern to the consequences of economic illiteracy. He notices a certain consistency in the arguments that are used to morally challenge our modern systems, one that does not seem to depend on how skilled people are in their niche areas of knowledge, or how unskilled they are in many other areas. It’s important to keep what he says about this in mind throughout the article, even though he goes on at length into other areas of research, because it ties in in an important way, though he does not refer back to it.

He doesn’t say this, but I will. At this point, I think that the only counter to the natural tendencies we have (regardless of whether Foster describes them accurately or not) is an education in the full panoply of outlooks that formed our modern society, and an understanding of how they have advanced since their formation. In our recent economy, there’s a tendency to think about narrowing the scope of education towards specialties, since each field is so complex, and takes a long time to master, but that will not do. If people don’t get the opportunity to explore, or at least experience, these powerful ways of understanding the world, then the natural tendency towards a “low-pass filter” will dominate what one sees, and our society will have difficulty advancing, and may regress. A key phrase Foster uses is, “Believing is seeing.” We think we see with our eyes, but we actually see with our beliefs (or, in scientific terms, our mental models). So it is crucial to be conscious of our beliefs, and be capable of examining and questioning them. Philosophy plays a part in “exercising” and developing this ability, but I think this really gets into the motivation to understand the scientific outlook, because this is truly what it’s about.

A second significant area Foster explores is a notion of moral psychology from Jonathan Haidt, who talks about “subconscious elephants,” which are “driving us,” unseen. We justify our actions using moral language, giving the illusion that we understand our motivations, but we really don’t. Our pronouncements are more like PR statements, convincing others that our motivations are good, and should be allowed to move forward, unrestricted. However, without examining and understanding our motivations, and their consequences, we can’t really know whether they are good or not. Understanding this, we should be cautious about giving anyone too much power–power to use money, and power to use force–especially when they appeal to our moral sensibility to give it to them.

Central to Foster’s argument is that “climate change” is a moral crusade, a moral argument, not a scientific one, which uses the authority that our society gives to science to push aside skepticism and caution regarding the actions that are taken in its name, and the technical premises that motivate them. Foster excuses the people who promote the hypothesis of anthropogenic global warming (AGW) as fact, saying they are not frauds who are consciously deceiving the public. They are riding pernicious, very large “elephants,” and they are not conscious of what is driving them. They are convinced of their own moral rightness, and they are honest, at least, in that belief. That should not, however, excuse their demands for more and more power.

I do not mean what I say here to be a summary of Foster’s article. I encourage you to read it. I only mean to make the complexity of what he said a bit more understandable.

Related posts: Foster’s argument has made me re-examine a bit what I said in the section on Carl Sagan in “The dangerous brew of politics, religion, technology, and the good name of science.”

I’m taking a flying leap with this, but I have a suspicion my post called “What a waste it is to lose one’s mind,” exploring what Ayn Rand said in her novel, “Atlas Shrugged,” perhaps gets into describing Haidt’s “motivational elephants.”


See Part 1, Part 2

This post has been a long time coming. I really thought I was going to get it done about this time last year, but I got diverted into other research that I hope to convey on this blog in the future. I also found more sources to explore for the portion on Xerox PARC. I’ve been as faithful as I can to the history, but there’s always a possibility I made some mistakes. I welcome corrections from those who know better. 🙂

This is not a complete history of computing in the period I cover. I mean to convey the closing years of ARPA’s “great leaps” in ideas about computing, and then cover the attempt to continue those “great leaps” privately, at Xerox PARC.

My primary source material for this part is the book, “The Dream Machine,” by M. Mitchell Waldrop. I’ll just refer to it as “TDM.”

I’ve been dividing up this series roughly into decades, or “eras.” Part 1 focused on the 1940s and 50s. Part 2 focused on the 1960s. This part is devoted to the 1970s.

In Part 2 I covered the creation of the Advanced Research Projects Agency (ARPA) at the Department of Defense, and the IPTO (Information Processing Techniques Office, a program within ARPA focused on computer research), and the many innovative projects in which it had been engaged during the 1960s.

Decline at the IPTO

There were signs of “trouble in paradise” at the IPTO beginning around 1967, with the escalation of the Vietnam War. Charles Herzfeld stepped down as ARPA director that year, and was replaced by Eberhardt Rechtin. There was talk within the Department of Defense of ending ARPA altogether. Defense Secretary Robert McNamara did not like what he saw happening in the program. Money was starting to get tight. People outside of ARPA had lost track of its mission. It was recognized for producing innovative technology, but it was the kind of thing academics got fascinated about; most of it was not being translated into technologies the military could use. Its work with the academic community created political problems for it within the military ranks, because academia was viewed as being opposed to the war.

Rechtin cut a lot of programs out of ARPA. Some projects were transferred to other funders, or were spun off into their own operations. He wanted to see operational technology come out of the agency. The IPTO had some allies in John Foster at the Department of Defense Research & Engineering (DDR&E), the direct supervisor of ARPA, and Stephen Lukasik, who had had the chance to use the Whirlwind computer (which I covered in Part 1). Lukasik was very excited about the potential of computer technology. He succeeded Rechtin as ARPA director in 1970. The IPTO’s budget remained protected, mainly, it seems, due to Lukasik pitching the Arpanet to his superiors as a critical technology for the military in the nuclear age (I covered the Arpanet in Part 2).

In 1970 a rule called the Mansfield Amendment, created by Democratic Senator Mike Mansfield, came into effect. For one year, it mandated that all money spent by the Defense Department had to have some stated relevance to the country’s military mission. Wikipedia says that this rule was renewed in 1973, specifically targeting ARPA. Waldrop claims it was really a round-about way of cutting defense spending, and that it didn’t mean anything in terms of the technology that was being developed at ARPA. Nevertheless, it had social reverberations within the agency’s working groups. First, in light of the controversy over the Vietnam War, it tarnished ARPA’s image in the academic community, because it required all projects funded through the agency to justify their existence in military terms. When non-military research proposals were sent to ARPA, military “relevance statements” would be written up and attached to them at the end of the approval process, unbeknownst to the researchers, in order to comply with the law. These relevance statements contained descriptions of the potential, or supposedly intended, uses of the technology for military applications. This caused embarrassing moments when students found out about these statements through disclosures obtained under the Freedom of Information Act. They would confront ARPA-funded researchers with them, and this was often the first time the researchers had seen them. They’d be left explaining that while, yes, the military was funding their project, the focus of their research was peaceful; the relevance statements made it look otherwise. Second, the amendment gave the working groups the impression that Congress was looking over their shoulder at everything they did. The sense that they were protected from interference had been pierced. This dampened enthusiasm all around, and made recruiting new students into ARPA projects more difficult.

ARPA’s name was changed to DARPA in 1972, adding the word “Defense” to its name. It didn’t mean anything as far as the agency was concerned, but given the academic community’s opposition to the Vietnam War, it didn’t help with student recruiting.

In 1972 the consulting firm Bolt Beranek and Newman (BBN), which had built the hardware for the Arpanet, wanted to commercialize its technology–to sell it to other customers besides the Defense Department. IPTO director Larry Roberts announced in 1973 that he was leaving DARPA to work for BBN’s new networking subsidiary, named Telenet. This created a big problem for Lukasik, because Roberts hadn’t groomed a deputy director, as past IPTO directors had done, so that there would be someone who could jump right in and replace him when he left. It was up to Lukasik to find a replacement, only that wasn’t so easy. He had created a bit of a monster within the IPTO. Every year he had pushed its budget higher, while other projects within DARPA were having their budgets cut. He tried to recruit a new director from within the IPTO’s research community, but everyone he thought would be suited for it turned him down. They were enjoying their research too much. Lukasik came upon J.C.R. Licklider, the IPTO’s founding director, as his last resort.

J.C.R. “Lick” Licklider, from ARS 327

Licklider (he liked to be called “Lick”), who I talked about extensively in Part 2, had returned to MIT and ARPA in 1968. He became a tenured professor of electrical engineering. He began his own ARPA project at MIT, called the Dynamic Modeling group, in 1971. He observed that the Multics project, which I covered in Part 2, nearly overwhelmed Project MAC’s software engineers, and almost resulted in Multics being cancelled. The goal of his project was to create software development systems that would make complex software engineering projects more comprehensible. As part of his work he got into computer graphics, looking at how virtual entities can be represented on the screen. He also studied human-computer interaction with a graphics display.

Larry Roberts tried to find someone besides Lick to be his replacement, more as an act of mercy on him, but in the end he couldn’t come up with anyone, either. He contacted Lick and asked him if he’d be his replacement. Lick reluctantly accepted, and returned to his old post as IPTO director in 1974.

Lick became an enthusiastic supporter of Ed Feigenbaum’s artificial intelligence/expert systems research. I talked a bit about Feigenbaum in Part 2. With Lick’s guidance, Feigenbaum founded Stanford’s Heuristic Programming Project. Feigenbaum’s work would become the basis for all expert systems produced during the 1980s. Lick also supported Feigenbaum’s effort to integrate artificial intelligence (AI) into medical research, granting an Arpanet connection to Stanford. This allowed a community of researchers to collaborate in this field.

The IPTO’s favored place within DARPA did not last. Lukasik didn’t get along with his new superior at DDR&E, Malcolm Currie. They had different philosophies about what DARPA needed to be doing. Waldrop wrote, “Currie wanted solutions out of ARPA now,” and he wanted to replace Lukasik with someone of like mind.

Bob Taylor at the Xerox Palo Alto Research Center had his eye on Lukasik. He needed someone who understood how they did research, and how to talk to business executives. Taylor had been an IPTO director in the late ’60s (I cover his tenure there in Part 2). He brought its style of computer research to Xerox, but he could see that they were not set up to create products out of it. He thought Lukasik would be a perfect fit. Taylor had been recruiting many of DARPA’s brightest minds into PARC, and it was a source of frustration at the agency. Nevertheless, Lukasik was intensely curious to find out just what they were up to. He came out to see what they were building. PARC was accomplishing things that Lick had only dreamed of years earlier. Lukasik was so impressed, he left DARPA to join Xerox in 1975.

George Heilmeier, from Wikipedia

His replacement was George Heilmeier. Waldrop characterized him as hard-nosed, someone who took an applied science approach to research. He had little patience for open-ended, basic research–the kind that had been going on at DARPA since its inception. He was a solutions man. He wanted the working groups to identify goals, where they expected their research to end up at some point in time. His emphasis was not exploration, but problem solving. Though it was jarring to everybody at first, most program directors came to accommodate it. Lick, however, was greatly dismayed that this was the new rule of the day. He thought Heilmeier’s methods went against everything DARPA stood for. Lick liked the idea of finding practical applications for research, but he wanted solutions that were orders of magnitude better than older ones–world changing–not just short-term goals of improving old methods by ten percent. Further, he could see that micromanagement had truly entered the agency. It wasn’t just a perception anymore.

Heilmeier explained in a 1991 interview,

For all the wonderful technology IPTO had sponsored, [Heilmeier insisted], it was the worst mess in the agency. And artificial intelligence was the worst mess in IPTO. “You see, there was this so-called DARPA community, and a large chunk of our money went to this community. But when I looked at the so-called proposals, I thought, Wait a second; there’s nothing here. Well, Lick and I tangled professionally on this issue. He said, ‘You don’t understand. What you do is give good people the money and they go off and do good things and that’s it.’ I said, ‘Lick, I understand that. And these may be good people, but for the life of me I can’t tell you what they’re going to do. And I don’t know whether they are going to reinvent the wheel, because there’s no discussion of the current practice and there’s no discussion of the implications, so I can’t tell whether this is a wise investment for DoD or not.'” (TDM, p. 402)

Lick had a very good track record of picking research projects, using a more subjective, intuitive sense about people, and he didn’t believe you could achieve the same quality of research by trying to use a more objective method of selecting people and projects.

Bob Kahn, who had joined DARPA in 1972, explained the differences between them,

[The] fact was that … both men were right. “There were some things that were more conducive to George’s directed style,” says Kahn. “Packet radio, for example, and maybe the Internet. But speech understanding was not amenable. In fact, almost none of AI fit in. The problem was that George kept looking for a kind of road map to the field of AI. He wanted to know what was going to happen, on schedule, into the future, to make the field a reality. And he thought it was quite reasonable to ask for that road map, because he had no idea how hard it would be to produce. Suppose you were Lewis and Clark exploring the West, where you had no idea what you were going to encounter, and people wanted to know exactly what routes you were going to take, where you would camp, and what you would do out there. Well, this was just not an engineering job, where you could work out the whole plan. So George was looking for something that Lick couldn’t provide.”

Unfortunately, when Lick tried to explain that to his boss, it was a bit like Seymour Papert’s or Alan Kay’s trying to explain exploratory education to a back-to-basics hard-liner. “IPTO really didn’t have a program-management structure,” declares Heilmeier. “They had a financial management structure, and they had a cheering section.” (TDM, p. 403)

Interestingly, I found this article in EETimes that saw this from a very different perspective, saying that what DARPA had been running was a “good-old-boys network” (and it’s difficult to see how it’s not implicating Lick in this), and that Heilmeier snapped them into shape.

Lick tried to see it from Heilmeier’s perspective, that researchers should not see applications as a threat to basic research, and that they should make common cause with engineers, to bring their work into the world of people who will use their technology. He thought this could provide an avenue where researchers could receive feedback that would be valuable for their work, and would hopefully soften the antagonism that was being directed at open-ended research. Meanwhile, he’d quietly try to keep Heilmeier from destroying what he had worked so hard to achieve.

Heilmeier seemed to agree with this approach. He issued some challenges to the IPTO working groups of technical problems he saw needed to be solved. He said to them, “Look, if some of you guys would sign up for these challenges I can justify more fundamental work in AI.” Some of them did just as he asked, and got to work on the challenges.

Lick declared in 1975 that the Arpanet, a project which began at DARPA in 1967, was fully operational, and handed control of the network over to the Defense Communications Agency.

Lick had to kill funding for some projects, which was never easy for him. One in particular was Doug Engelbart’s NLS project at Stanford Research Institute. I talked about this project in Part 2, and what ended up happening with it. A lot of Engelbart’s talent had left to pursue the development of personal computing at Xerox PARC. From Lick’s and DARPA’s perspective, he just wasn’t able to innovate the way he had before. From Engelbart’s perspective, Lick had been captured by the hype around AI, and just wasn’t able to see the good he was doing. It was a sad parting of the ways for both of them. (Source: Nerd TV interview with Doug Engelbart)

Heilmeier was pleased with the changes at the IPTO. They had “gotten with the program,” and he thought they were producing good results. For Lick, however, the change was exhausting, and depressing. He left DARPA for good in late 1975, and returned to MIT, where his spirit soared again. This didn’t mean that he left computing behind. He came up with his own projects, and he encouraged students to play with computers, as he loved to do.

Bob Kahn at DARPA asked an important question. He agreed with Heilmeier’s “wire brushing” of the agency, and that some projects had become self-indulgent, but he cautioned against “too much of a good thing” in the other direction.

ARPA’s current obsession with “relevance” had come dangerously close to destroying what made the agency so special. Remember, says Kahn, when it came to basic computer research–the kind of high-risk, high-payoff work that might not mature for a decade or more–ARPA was almost the only game in town. The computer industry itself was oriented much more toward products and services, he says, which meant that “there was actually very little research going on that was as innovative as ARPA’s.” And while there were certainly some shining exceptions to that rule–notably [Xerox] PARC, IBM and [AT&T] Bell Labs–“many of the leading scientists and researchers couldn’t be supported that way. So if universities didn’t do basic research, where would industry get its trained people?”

Yet it was ARPA’s basic research that was getting cut. “The budget for basic R&D was only about one third of what it was before Lick came in,” says Kahn. “Morale in the whole computer-science community was very low.” (TDM, p. 417)

Kahn pursued a research initiative, anticipating the future of VLSI (Very-Large-Scale Integration) design and fabrication for microchips, in order to try to revive the basic research culture at the IPTO. He won approval for it in 1977. Through it, methods were developed for creating computer languages for designing VLSI chips. Kahn was also the co-developer, with Vint Cerf at DARPA, of TCP/IP, the protocol suite on which the internet would be built.

Heilmeier left DARPA in 1977, and was replaced by Bob Fossum, who had a management style that was more amenable to basic research. The question was, though, “Okay. This is good, but what about the next director?” It had been common practice for people at DARPA to cycle out of administrative positions every few years. Was it too good to last?

Kahn began the Strategic Computing Initiative in 1983, a $1 billion program (about $2.3 billion in today’s money) to continue his work on advancing computer hardware design, and to advance research in artificial intelligence. Fossum was replaced by Robert Cooper that same year. Cooper took Kahn’s research goals and reimplemented them as an agency-wide program, giving every part of DARPA a piece of it. According to Waldrop, Kahn was disgusted by this, and took it as his cue to leave.

The era of revolutionary research in computing at the agency seems to have come to an end at this point.

Here’s a video talking about what else was going on at DARPA during the period I’ve just covered:

Lick’s vision of the future

He would happily sit for hours, spinning visions of graphical computing, digital libraries, on-line banking and E-commerce, software that would live on the network and move wherever it was needed, a mass migration of government, commerce, entertainment, and daily life into the on-line world–possibilities that were just mind-blowing in the 1970s. (TDM, p. 413)

Lick had long been a futurist, and a very reliable one. In a book published in 1979, “The Computer Age: A Twenty-Year View,” he looked ahead to the year 2000, at what he could see happening–if he thought optimistically–with a nationwide digital network that he called “the Multinet.” The term “Internet” was not in wide use yet, though work on TCP/IP, what Lick called the “Kahn-Cerf internetworking protocol,” had been in progress for several years. The internet wouldn’t come into being for another four years.

“Waveguides, optical fibers, rooftop satellite antennae, and coaxial cables, provide abundant bandwidth and inexpensive digital transmission both locally and over long distances. Computer consoles with good graphic display and speech input and output have become almost as common as television sets.”

Great. But what would all those gadgets add up to, Lick wondered, other than a bigger pile of gadgets? Well, he said, if we continued to be optimists and assumed that all this technology was connected so that the bits flowed freely, then it might actually add up to an electronic commons open to all, as “the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.” Indeed, he went on, looking back from the imagined viewpoint of the year 2000, “[the electronic commons] has supplanted the postal system for letters, the dial-tone phone system for conversations and tele-conferences, stand-alone batch processing and time-sharing systems for computation, and most filing cabinets, microfilm repositories, document rooms and libraries for information storage and retrieval.”

The Multinet would permeate society, Lick wrote, thus achieving the old MIT dream of an information utility, as updated for the decentralized network age: “many people work at home, interacting with coworkers and clients through the Multinet, and many business offices (and some classrooms) are little more than organized interconnections of such home workers and their computers. People shop through the Multinet, using its cable television and electronic funds transfer functions, and a few receive delivery of small items through adjacent pneumatic tube networks . . . Routine shopping and appointment scheduling are generally handled by private-secretary-like programs called OLIVERs which know their masters’ needs. Indeed, the Multinet handles scheduling of almost everything schedulable. For example, it eliminates waiting to be seated at restaurants.” Thanks to ironclad guarantees of privacy and security, Lick added, the Multinet would likewise offer on-line banking, on-line stock-market trading, on-line tax payment–the works.

In short, Lick wrote, the Multinet would encompass essentially everything having to do with information. It would function as a network of networks that embraced every method of digital communication imaginable, from packet radio to fiber optics–and then bound them all together through the magic of the Kahn-Cerf internetworking protocol, or something very much like it.

Lick predicted its mode of operation would be “one featuring cooperation, sharing, meetings of minds across space and time in a context of responsive programs and readily available information.” The Multinet would be the worldwide embodiment of equality, community, and freedom.

If, that is, the Multinet ever came to be. (TDM, p. 413)

He tended to be pessimistic that this all would come true. With the world as it was, he thought a more tightly controlled scenario was more likely, one where the Multinet did not get off the ground. He thought big technology companies would not get into networking, as it would invite government regulation. Communications companies like AT&T would see it as a threat to their business. Government, he thought, would not want to share information, and would rather use computer technology to keep proprietary files on people and corporations. He thought the only way his positive vision would come to pass was if a consensus emerged among hundreds of thousands, or millions, of people that an open Multinet was desirable. He felt this would require leadership from someone with a vision that agreed with this idea.

At this time there were packet switching network options offered by various technology companies, including IBM, Digital Equipment Corp. (DEC), and Xerox, but none of them offered openness. In fact, business customers wanted closed networks. They feared openness, due to possibilities of security leaks and industrial espionage. Things were not looking up for this “marketplace” vision of a future network, even in academia. Michael Dertouzos, who led the Laboratory of Computer Science (LCS) at MIT (formerly Project MAC), was very interested in Lick’s vision, but complained that his fellow academics in the program were not. In fact they were openly hostile to it. It felt too outlandish to them.

Microcomputers had taken off in 1975, with the introduction of the MITS Altair, created by electrical engineer Ed Roberts. (This was also the launching point for a small venture created by Bill Gates and Paul Allen, called “Micro Soft,” whose first product was a version of the BASIC programming language for the Altair.) Lick bought an IBM PC in the early 1980s, “but it never had the resources to do what he wanted,” said his son, Tracy.

Yes, Lick knew, these talented little micros had been good enough to reinvent the computer in the public mind, which was no small thing. But so far, at least, they had shown people only the faintest hint of what was possible. Before his vision of a free and open information commons could be a reality, the computer would have to be reinvented several more times yet, becoming not just an instrument for individual empowerment but a communication device, an expressive medium, and, ultimately, a window into on-line cyberspace.

In short, the mass market would have to give the public something much closer to the system that had been created a decade before at Xerox PARC. (TDM, p. 437)

Personal computing

The major sources I used for this part of the story were Alan Kay’s retrospective on his days at Xerox, called “The Early History of Smalltalk” (which I will call “TEHS” hereafter), and “The Alto and Ethernet Software,” by Butler Lampson (which I will call “TAES”).

Parallel to the events I describe at DARPA came a modern notion of personal computing. The vision of a personal computer existed at DARPA, but as I’ll describe, the concept we would recognize today was developed solely in the private sector, using knowledge and research methods that had been developed previously through DARPA funding.

Alan Kay

Alan Kay, from Wikipedia

The idea of a computer that individuals could buy and own had been around since 1961. Wes Clark had that vision with the LINC computer he invented that year at MIT. Clark invited others to become a part of that vision. One of them was Bob Taylor. What got in the way of this idea becoming something that less technical people could use was the large and expensive components that were needed to create sufficiently powerful machines, and some sense among developers about just how ordinary people would use computers. Most of the aspects that make up personal computing were invented on larger machines, and were later miniaturized, watered down in sophistication, and incorporated into microcomputers from the mid-1970s into the 1990s and beyond.

Most people in our society who are familiar with personal computers think the idea began with Steve Jobs and Steve Wozniak. The more knowledgeable might chime in and give credit to Ed Roberts at MITS, and Bill Gates. These people had their own ideas about what personal computers would be, and what they would represent to the world, but in my estimation, the idea of personal computing that we would recognize today really began with Alan Kay, a post-graduate student with a background in mathematics and molecular biology, who had enrolled in the IPTO’s computer science program at the University of Utah (which I talk a bit about in Part 2) in the late 1960s. A good deal of credit has to be given to Doug Engelbart as well, who was doing research during the 1960s at the Stanford Research Institute, funded by ARPA/IPTO, NASA, and the Air Force, on how to improve group knowledge processes using computers. He did not pursue personal computing, but he was the one who came up with the idea of a point-and-click interface, combining graphics and text, using a device he and his team had invented called a “mouse.” He also created the first system that enabled linked documents, which foreshadowed the web we’ve known for the last 20 years, and enabled collaborative computing with teleconferencing. Describing his work this way does not do it justice, but it’ll suffice for this discussion.

Doug Engelbart,
from ibiblio.com
Ivan Sutherland,
from Wikipedia
Seymour Papert, from MIT

Flex’s “self-portrait,” from lurvely.com

Kay began developing his ideas about personal computing in 1968. He had started on his first proof-of-concept desktop machine, called Flex, a year earlier. He was aware of Moore’s Law (from Gordon Moore), a prediction that as time passed, more and more transistors would fit in the same area of silicon. The implication of this is that more computing functionality could fit in a smaller space, which would thereby allow machines which filled up a room at the time to become smaller as time passed. As a graduate student, he imagined miniaturized technology that seemed unfathomable to other computer scientists of his day. For example, a hard drive as small as the crook of your finger. Back then, a hard drive was the size of a floor cabinet, or medium-sized refrigerator, and was typically used with mainframes that took up the space of a room.
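Just to make the arithmetic behind that intuition concrete, here’s a small sketch in Python (my own illustration; the two-year doubling period and the 1968 starting count are assumptions of mine, not figures from Kay or Waldrop):

def transistors(initial_count, years_elapsed, doubling_period=2.0):
    # Moore's Law as a rough rule of thumb: the transistor count that fits
    # in a given area doubles every `doubling_period` years.
    return initial_count * 2 ** (years_elapsed / doubling_period)

# A hypothetical chip with 2,000 transistors in 1968, projected forward.
for year in (1968, 1978, 1988, 1998):
    count = transistors(2000, year - 1968)
    print(f"{year}: roughly {count:,.0f} transistors in the same area")

Even with a made-up starting point, the shape of the curve is the point: a few doublings turn a roomful of hardware into something that fits on a desk, and eventually into something you can hold.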

Kay’s ideas about what personal computing could be were heavily influenced by Doug Engelbart, Ivan Sutherland (from his Sketchpad project), Tom Ellis and Gabriel Groner at RAND Corp. (from a system called GRAIL), and Seymour Papert, Wally Feurzeig, and Cynthia Solomon at BBN, with their work with children and Logo, among many others. (Source: TEHS)

Engelbart thought of computing as a “vehicle” for thinking about, and sharing information, and developing group knowledge processes. Kay got a profound sense of the personal computer’s place in the world from Papert’s work with children using Logo. He realized that personal computers could not just be a vehicle, but a new medium.

People can get confused about this concept. Kay wasn’t thinking that personal computers would allow people to use and manipulate text, images, audio, and video (movies and TV)–what most of us think of as “media”–for the sake of doing so. He wasn’t thinking that they would be a new way to store, transmit, and present old media, and the ideas they expressed most easily. Rather, they would allow people to explore a whole new category of ideas and expression that the other forms of media did not express as easily, if at all. It’s not that the old media couldn’t be part of the new, but they would be represented as models, as part of a knowledge system, and operate under a person’s control at whatever grain would facilitate what they wanted to understand.

Butler Lampson described the concept this way:

Kay was pursuing a different path to Licklider’s man-computer symbiosis: the computer’s ability to simulate or model any system, any possible world, whose behavior can be precisely defined. And he wanted his machine to be small, cheap, and easy for nonprofessionals to use. (TAES, p. 2)

Kay could see from Papert’s work that using computers as an interactive medium could enable children to understand subjects that would otherwise have been difficult, if not impossible to grasp at their age, particularly aspects of mathematics and science.

Xerox PARC

Xerox enters the story in 1970. They had a different set of priorities from Kay’s, and it’s interesting to note that even though this was true, they were willing to accommodate not only his priorities, but those of other researchers they brought in.

Xerox was concerned that as the photocopier market grew, competitors would come into the space, and Xerox didn’t want to put all their “eggs” in it. They thought computers would be a good way for them to diversify their product line. Their primary goal was to…

…develop the “architecture of information” and establish the technical foundation for electronic office systems that could become products in the 1980s. It seemed likely that copiers would no longer be a high-growth business by that time, and that electronics would begin to have a major effect on office systems, the firm’s major business. Xerox was a large and prosperous company with a strong commitment to basic research and a clear need for new technology in this area. (TAES, p. 2)

Xerox wanted to site their research facility near a community of computer research. They looked at a few places around the country, and ultimately decided on Palo Alto, CA, since it was near Berkeley and Stanford universities, which were centers of computer research. They wanted to bring in a research director who was familiar with the field, and would be respected by the research community. The right person for the job was not obvious. When they talked to computer researchers of the time, all paths eventually led back to Bob Taylor, who had been an ARPA/IPTO director in the late 1960s. The thing was he had no computer science research of his own under his belt. His background was in psychoacoustics, a field of psychology (though Taylor thinks of it as applied physics). What made him the right candidate in Xerox’s eyes was that everyone who was anyone in the field Xerox wanted to explore knew and respected him. So they invited Taylor in to assist in setting up their research group.  Thus was born the Xerox Palo Alto Research Center (PARC). It was totally funded by Xerox. Taylor was not officially given the job as PARC’s director, because of his lack of research background, but he was allowed to run the facility the same way that the IPTO had conducted computer research. You could say that PARC during the 1970s was “the ARPA way done privately.”

One of the research techniques used at ARPA and Xerox PARC was to anticipate the speed and memory capacities of future computers that would be in wide use. Engineers who were part of the research teams would either find hardware that fit these anticipated specifications, or would build their own, and then see what software they could develop on it that was significantly better in some capacity than the systems that were available in their present. As one can surmise, this was an expensive thing to do. In order to exceed the capacity of the machines that were in wide use, one had to not think about what was most economical, but rather go for the hardware that was only used by a relative few, if anyone was using it at all, because it would be prohibitively expensive for most to acquire. Butler Lampson described this with Xerox PARC’s Alto research system:

The Alto system was affected not only by the ideas its builders had about what kind of system to build, but also by their ideas about how to do computer systems research. In particular, we thought that it is important to predict the evolution of hardware technology, and start working with a new kind of system five to ten years before it becomes feasible as a commercial product.

Our insistence on working with tomorrow’s hardware accounts for many of the differences between the Alto system and the early personal computers that were coming into existence at the same time. (TAES, p. 3)

(To get an idea of what Lampson was talking about in that last sentence, I encourage people look at another post I wrote a while back, called “Triumph of the Nerds.”)

Taylor hired a bunch of people from the failing Berkeley Computer Corp., which was started by Butler Lampson, along with students that had joined him as part of Project Genie. Among the people Taylor brought in were Chuck Thacker, Charles Simonyi, and Peter Deutsch. I talk a little about Project Genie in Part 2. He also hired many people from ARPA’s IPTO projects, including Ed McCreight from Carnegie Mellon University, and some of the best talent from Engelbart’s NLS project at the Stanford Research Institute, such as Bill English. Jerry Elkind hired people from BBN, which had been an ARPA contractor, including Danny Bobrow, Warren Teitelman, and Bert Sutherland (Ivan Sutherland’s older brother). Another major “get” was hiring Allen Newell from CMU as a consultant. A couple of Newell’s students, Stuart Card and Tom Moran, came to PARC and pioneered the field we now know as Human-Computer Interaction.

As Larry Tesler put it, Xerox told the researchers, “Go create the new world. We don’t understand it. Here are people who have a lot of ideas, and tremendous talent.” Adele Goldberg said of PARC, “People came there specifically to work on 5-year programs that were their dreams.” (Source: Robert X. Cringely’s documentary, “Triumph of the Nerds”) In a recent interview, which I’ll refer to at the end of this post, she said that PARC invited researchers in to work on anything that they thought in 5 years could have an impact on the company.

Alan Kay came to work at PARC in 1970, started the Learning Research Group (LRG), and got them thinking about what would come to be known as “portable computers,” though Kay called them “KiddieKomps.” They also got working on font technology.

In 1972 Kay committed his thoughts on personal computing to paper in a document called, “A Personal Computer for Children of All Ages.” In it he described a conceptual model he called a “Dynabook,” or “dynamic book.” I’ll quote from a blog post I wrote in 2006, called, “Great moments in modern computer history,” as it summarizes what I mean to get across about this:

Kay envisioned the Dynabook as a portable computer, 9″ x 12″ x 3/4″, about the size of a modern laptop, with its own battery, and would weigh less than 4 pounds. In fact he said, “The size should be no larger than a notebook.” He envisioned that it would use removable media for file storage (about 1 MB in size, he said), that it might have a keyboard, and that it would record and play audio files, in addition to displaying text. He said that if no physical keyboard came with the unit, a software keyboard could be brought up, and the screen could be made touch-sensitive so that the user could just type on the screen. … Oh, and he made a wild guess that it would cost no more than $500 to the consumer.

He said, “The owner will be able to maintain and edit his own files of text and programs, when and where he chooses.”

He envisioned that the Dynabook would be able to “dock” with a larger computer system at work. The user could download data, and recharge its battery while hooked up. He figured the transfer rate would be 300 Kbps.

Mock up of the Dynabook,
from history-computer.com

He imagined the Dynabook being connected to an “information utility,” like what was called at the time “the ARPA network,” which later came to be called the Internet. He predicted this would open up online access to schools and libraries of information, “stores” (a.k.a. e-commerce sites), and would bring “billboards” (a.k.a. web ads) to the user. I LOVE this quote: “One can imagine one of the first programs an owner will write is a filter to eliminate advertising!”

Another forward-looking concept he imagined is that it might have a flat-panel plasma display. He wasn’t sure if this would work, since it would draw a lot of power, but he thought it was worth trying. … He thought an LCD flat-panel screen was another good option to consider.

Kay thought it essential that the machine make it possible to use different fonts. He and fellow researchers had already done some experiments with font technology, and he showed some examples of their results in his paper.


Alan Kay’s conception of children using the Dynabook

He doesn’t elaborate on this, but he hints at a graphical interface for the device. He describes in spots how the user can create and save “dynamic graphics.” In a scenario he illustrates in the paper, two children are playing a game like Space War on the Dynabook, involving graphics animation, experimenting with concepts of gravity. This scenario lays down the concept of it being a learning machine, and one that’s easy enough for children to manipulate through a programming language.

He also imagined that Dynabooks would be able to communicate with each other wirelessly, peer-to-peer, so that groups of students would be able to easily work on projects together, without needing an external network.

Working with Dan Ingalls, an electrical engineer with a background in physics, Diana Merry, and colleagues, the LRG began development of a system called “Smalltalk” in 1972. This was a first effort to create the Dynabook in software. It would come to formalize another of Kay’s concepts, of virtual objects in a computer. The idea was that entities (visual and non-visual) could be fashioned by the person using a computer, which are as versatile as tools that are used in the real world, and can be used in any combination at any time to accomplish tasks that may not have been evident when they were created.

Computer programming was an important part of Kay’s concept of how people would interact with this medium, though he developed doubts about it. What he really wanted was some way that people could build models in the computer. Programming just seemed to be a good way to do it at the time.

Objects could be networked together via a concept called “message passing,” with the goal of having them work together for some purpose. As Smalltalk was developed over 8 years, this idea would come to include everything from the desktop interface, to windows, to text characters, to buttons, to menus, to “paint” brushes, to drawing tools, to icons, and more. His idea of overlapping windows came from his desire to allow people to work on several projects at the same time, allowing them to make the most of limited screen space. (source: TEHS)
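Smalltalk code itself is beyond the scope of this post, but here is a tiny sketch in Python (my own illustration, not anything from PARC) of the “message passing” idea: the sender just sends a message, and each kind of object decides for itself how to respond.

class Window:
    def __init__(self, title):
        self.title = title

    def draw(self):
        # A window responds to the "draw" message by painting its frame.
        return f'window "{self.title}" drawn with an overlapping frame'

class Brush:
    def __init__(self, width):
        self.width = width

    def draw(self):
        # A brush responds to the same message in its own way.
        return f"paint stroke {self.width} pixels wide"

def redraw(screen_objects):
    # The "desktop" sends the same message to everything on the screen,
    # without knowing or caring what kind of object each one is.
    for obj in screen_objects:
        print(obj.draw())

redraw([Window("Smalltalk-76"), Brush(4)])

In Smalltalk the idea goes much further (even numbers and text characters are objects that respond to messages), but the basic picture of cooperating objects is the same.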

“The best way to predict the future is to invent it”

The graphical interface he and his team developed for Smalltalk was motivated by educational research on children, which had surmised that they have a strong visual sense, and that they relate to manipulating visual objects better than explicitly manipulating symbols, as older interactive computer systems had insisted upon. A key phrase in the paragraph below is, “doing with images makes symbols,” that is, symbols in our own minds, and, if you will, in the “mind” of the computer. His concept of “doing with images” was much more expansive and varied than has typically been allowed on computers consumers have used.

All of the elements eventually used in the Smalltalk user interface were already to be found in the sixties–as different ways to access and invoke the functionality provided by an interactive system. The two major centers were Lincoln Labs [at MIT] and RAND Corp–both ARPA funded. The big shift that consolidated these ideas into a powerful theory and long-lived examples came because the LRG focus was on children. Hence we were thinking about learning as being one of the main effects we wanted to have happen. Early on, this led to a 90 degree rotation of the purpose of the user interface from “access to functionality” to “environment in which users learn by doing”. This new stance could now respond to the echos of Montessori and Dewey, particularly the former, and got me, on rereading Jerome Bruner, to think beyond the children’s curriculum to a “curriculum of the user interface.”

The particular aim of LRG was to find the equivalent of writing–that is learning and thinking by doing in a medium–our new “pocket universe”. For various reasons I had settled on “iconic programming” as the way to achieve this, drawing on the iconic representations used by many ARPA projects in the sixties. My friend Nicolas Negroponte, an architect, was extremely interested in embedding the new computer magic in familiar surroundings. I had quite a bit of theatrical experience in a past life, and remembered Coleridge’s adage that “people attend ‘bad theatre’ hoping to forget, people attend ‘good theatre’ aching to remember“. In other words, it is the ability to evoke the audience’s own intelligence and experiences that makes theatre work.

Putting all this together, we want an apparently free environment in which exploration causes desired sequences to happen (Montessori); one that allows kinesthetic, iconic, and symbolic learning–“doing with images makes symbols” (Piaget & Bruner); the user is never trapped in a mode (GRAIL); the magic is embedded in the familiar (Negroponte); and which acts as a magnifying mirror for the user’s own intelligence (Coleridge). (source: TEHS)

A demonstration of Smalltalk-80


Dan Ingalls,
from Wikipedia
Diana Merry-Shapiro,
from LinkedIn
Chuck Thacker,
from Microsoft Research
Butler Lampson,
from Microsoft Research

Chuck Thacker, who has a background in physics and designing computer hardware, and a small team he assembled, created the first “Interim Dynabook,” as Kay called it, at PARC’s Computer Science Lab (CSL) in 1973. It was named “Bilbo.” It was not a handheld device, but more like the size of a desk, perhaps connected to a larger cabinet of electronics. It had a monitor with a bitmap display of 606 x 808 pixels (which gave it the profile of an 8-1/2″ x 11″ sheet of paper), and 128 kilobytes of memory. Smalltalk was brought over to it from its original “home,” a Data General Nova computer.

Thacker’s team created the first Alto models (named after Palo Alto) shortly thereafter. Butler Lampson, a computer scientist with a background in physics and electrical engineering, led a team which created the operating system for it. The Alto computer fit under a desk. It was the size of a small refrigerator. The keyboard, mouse, and display sat on top of the desk. It had the same size display and internal memory as “Bilbo,” ran at about 6 MHz, and had a removable hard drive that stored 2.5 MB per disk. (source: History of Computers)
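As a back-of-the-envelope check (my own arithmetic, not a figure from Lampson or Waldrop), it’s worth noting how much memory a full-page bitmap like that consumes at one bit per pixel:

# 606 x 808 pixels at 1 bit per pixel, as on "Bilbo" and the Alto.
width, height = 606, 808
bits = width * height
kilobytes = bits / 8 / 1024

print(f"{bits:,} pixels -> about {kilobytes:.0f} KB of framebuffer")
print(f"that's roughly {kilobytes / 128:.0%} of 128 KB of memory")

If the bitmap was held in main memory, just the screen image took up close to half of it, which gives a sense of why “working with tomorrow’s hardware” was such an expensive way to do research.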

As the years passed, CSL would develop bigger, more powerful machines, code-named Dolphin and Dorado.

Adele Goldberg, from PC Forum

Beginning in 1973, Adele Goldberg, who had a background in mathematics and information systems, and Alan Kay tried Smalltalk out on students from ages 12-15, who volunteered to take programming courses from them, and to come up with their own ideas for things to try out on it. Goldberg developed software design schema and curricular materials for these courses, and helped guide the education process as they got results.

They had some success with an approach developed by Goldberg where they had the students build more and more sophisticated models with graphical objects. They thought they were building the skill of students up to more sophisticated approaches to computing, and in some ways they were, though the influence these lessons were having on the students’ conception of computing was not as broad as they thought. Kay realized after teaching programming to some adult students that they could only get so far before they ran into a “literature” barrier. The same had been true with the teen and pre-teen students they had taught earlier, it turned out. From Kay’s description, the way I’d summarize the problem is that they required background knowledge in organizing their ideas, and they needed practice in doing this.

Kay said in retrospect that literature renders ideas. Any medium needs literature in order to be powerful. Literature’s purpose is to provide a body of ideas that can be discussed in the medium. (Source: TEHS) Without this literary background, the students were unable to write about or discuss the more sophisticated ideas through programming code. No matter how good a tool or instrument is, what’s produced by the people using it can only be as good as the ideas they use in fashioning the product. Likewise, the quality of what is written in text or notation by an author or composer, or produced in sound by a musician, or in imagery and sound for a movie, is only as good as a) the skill of the creators, b) the outlook they have on what they are expressing, and c) their knowledge, which they can apply to the effort.

Reconsidering the Dynabook

There came a “dividing point” at PARC in 1975. Kay met with other members of the LRG and discussed “starting over.” He could see that the developed ideas of Smalltalk were taking them away from the educational goals he set out to accomplish. Professional considerations had started to take hold with the group, though, and they saw potential with Smalltalk. The majority of the group wanted to stick with it. His argument for “starting over” with the educational project did not win out. He said of this point in time that while he disagreed with the decision of his colleagues, he held no ill will towards them for it. He knew them to be wonderful people. The sense I get from reading Kay’s account of this history is they saw potential perhaps in creating more sophisticated user environments that professionals would be interested in using. I infer this from the projects that PARC engaged in with Smalltalk thereafter.

Kay turned his focus to a new project, as if to pick up again his “KiddieKomps” idea. He designed a computer he called NoteTaker. Adele Goldberg said Kay’s purpose in doing this was to develop a proof of concept for the Dynabook’s hardware (see the interview with her at the end of this post). While Kay went off in this direction, Goldberg took over management of the Smalltalk project. The educational program at PARC faded away in 1976. (Source: TEHS)

The original concept for NoteTaker was a laptop design, with what Kay called a “tab mouse,” a physical control that was small, mounted on the computer, yet agile enough to allow a person using it to move a cursor around a graphical interface. This did not end up on the machine, but the NoteTaker was made usable with a keyboard, mouse, and a touch screen, and it had stereo sound output. (Source: “Joining the Mac Group,” by Bruce Horn)

Once Smalltalk-76 was done (each version of Smalltalk was named by the year it was completed), Dan Ingalls and Ted Kaehler ported it to the NoteTaker, and by around 1978 Kay had created the first “luggable” portable computer. Kay recalls it using three Intel 8086 processors (though others remember it using two Motorola 68000 processors), had 256 kilobytes of memory, and was able to run on batteries. It wasn’t a laptop (it was too big and heavy), but it could fit on a desktop. At first glance, it looks similar to the Osborne 1, the first commercially available portable computer, and there’s a reason for that. The Osborne’s case design was based on NoteTaker.

To put it through its paces, a team from PARC took the NoteTaker on an airplane trip, “running an object-oriented system with a windowed interface at 35,000 feet.”

Kay lamented that there wasn’t enough corporate will to use their own know-how to create better hardware for it (Kay considered the Intel processors barely sufficient), much less turn it into a commercial product.

The NoteTaker, from the Computer History Museum

Looking back on the experience, Kay was dismayed to see that PARC had pushed aside his educational goals, and had co-opted Smalltalk as a purely professional’s tool. The technologies that could have made his original Dynabook vision a physical reality were coming into being at just this time, but there was no will at Xerox to make it into an actual product. (source: TEHS)

He has pursued development of his educational ideas ever since. He sees computers in the same way that a few in the Middle Ages saw the invention of the printing press, enabling new ways of interacting and thinking, to, as Licklider would have said, allow people to “think as no one has ever thought before.”

Developing the office of the future

From left to right: Larry Tesler, from Twitter.com; Timothy Mott, from startupgenome.co; Charles Simonyi

A separate project at PARC also got started in the early 1970s, called POLOS (the PARC On-Line Office System), in the Systems Science Lab (SSL). The goal there was to develop a networked computer office system.

Edit 5/31/2016: I had suggested in the way I wrote the following paragraph that Larry Tesler had invented cut, copy, and paste actions in digital text editing. Upon further review, I think that historical interpretation is wrong. Doug Engelbart had copy and paste (and probably a “cut” action) in NLS, and there may have been earlier incarnations of it.

One of the first projects in this effort was a word processor called Gypsy, designed and implemented by Larry Tesler and Timothy Mott, both computer scientists. (Source: TAES) The unique thing about Gypsy was that Tesler tried to make it “modeless.” Its behavior was like our modern-day word processors, where you always have a cursor on the screen, and all actions mean the same thing at all times. Wherever the cursor is, that’s where text is entered. He also invented drag selection/highlighting with a mouse, which was incorporated into the cut, copy, and paste process. This has been a familiar feature of working with digital text ever since.

Later, Charles Simonyi, an electrical engineer, and Butler Lampson began work on the first What You See Is What You Get (WYSIWYG) text processor, called Bravo. They developed their first version in 1974. It had its own graphical interface, and allowed the use of fonts and basic document elements, like italics and boldface, but it worked in “modes.” The person using it used the computer’s keyboard either to edit text or to issue commands to manipulate text. The keyboard did something different depending on which mode was operating at any point in time. This was followed by BravoX, which was a “modeless” version. BravoX had a menu system, which allowed the use of a mouse for executing commands, making it more like what we’d recognize as a word processor today. (source: Wikipedia) Each team was trying out different capabilities of digital text, and trying to see how people worked with a text system most productively.
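
For readers who want to see the distinction in code, here is a tiny sketch in Python. It is my own illustration, not how Bravo or Gypsy were actually implemented: in the modal editor the same keystroke means different things depending on the current mode, while in the modeless editor every keystroke means one thing, and text always goes in at the cursor.

# A toy illustration of modal vs. modeless text editing.
# Loosely in the spirit of Bravo and Gypsy, but not their actual designs.

class ModalEditor:
    """Modal style: the keyboard's meaning depends on the current mode."""
    def __init__(self):
        self.text = ""
        self.mode = "command"      # either "command" or "insert"

    def key(self, ch):
        if self.mode == "command":
            if ch == "i":          # 'i' switches to insert mode
                self.mode = "insert"
            elif ch == "d":        # 'd' deletes the last character
                self.text = self.text[:-1]
        else:                      # insert mode: every key is literal text
            if ch == "\x1b":       # Escape returns to command mode
                self.mode = "command"
            else:
                self.text += ch


class ModelessEditor:
    """Modeless style: keys always mean the same thing; text goes in at the cursor."""
    def __init__(self):
        self.text = ""
        self.cursor = 0

    def key(self, ch):
        # Typing always inserts at the cursor; there is no mode to keep track of.
        self.text = self.text[:self.cursor] + ch + self.text[self.cursor:]
        self.cursor += 1


modal = ModalEditor()
for ch in "iab\x1bd":              # enter insert mode, type "ab", escape, delete one char
    modal.key(ch)
print(modal.text)                  # prints "a"

modeless = ModelessEditor()
for ch in "ab":
    modeless.key(ch)
print(modeless.text)               # prints "ab"

Notice how the modal version forces the person at the keyboard to keep track of state that the modeless version simply doesn't have, which is what made Gypsy and BravoX feel so much more natural.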

From left to right: Andrew Birrell, from Microsoft Research; Roy Levin, from Linkedin; Roger Needham, from ACM SIGSOFT; Michael Schroeder, from Microsoft Research

A couple of graphical e-mail clients, named “Laurel” and “Hardy,” were developed by a team led by Doug Brotz. They allowed easy review and filing of electronic mail messages. (source: The Xerox “Star”: A Retrospective) From Lampson’s description, they appeared to be “e-mail terminals.” They didn’t store e-mails, or allow one to write them. They just provided an organized display for them, and a means of telling a separate messaging system what you wanted to do with them, or that you wanted to create a new message to send. (Source: TAES) Andrew Birrell, Roy Levin, Roger Needham (who had a background in mathematics and philosophy), and Michael Schroeder, a computer scientist, developed that separate, distributed service, called Grapevine, which the e-mail clients used to compose, receive, and transmit network messages. From the description, it appeared to work on a distributed peer-to-peer basis, with no central server controlling its services for an organization. Grapevine also provided authentication, file access control, and resource location services. Its purpose was to provide a way to transmit messages, provide network security, and find things like computers and printers on a network. It had its own name service by which clients could identify other systems. (source: Grapevine: An Exercise in Distributed Computing) To get a sense of the significance of this last point, the Arpanet did not have a domain name service, and the internet did not get its Domain Name System (DNS) until the mid-1980s.
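
To give a rough sense of what a name and resource-location service buys you, here is a minimal sketch in Python. The names and structure are made up for illustration; Grapevine's real registration database, replication, and authentication were far more elaborate (see the Grapevine paper cited above).

# A toy name/resource registry, loosely in the spirit of what a service like
# Grapevine provided: clients register under a name, and other clients look
# names up to find mailboxes, printers, or other machines on the network.
# This is an illustration only, not Grapevine's actual design.

class NameRegistry:
    def __init__(self):
        self.entries = {}   # name -> dict of attributes (kind, address, ...)

    def register(self, name, kind, address):
        self.entries[name] = {"kind": kind, "address": address}

    def lookup(self, name):
        return self.entries.get(name)

    def find_all(self, kind):
        """Resource location: find every registered entry of a given kind."""
        return {n: e for n, e in self.entries.items() if e["kind"] == kind}


registry = NameRegistry()
registry.register("Birrell.pa", "mailbox", "ivy#42")    # hypothetical names and addresses
registry.register("Clover", "printer", "net3#7")

print(registry.lookup("Clover"))        # where to send print jobs
print(registry.find_all("mailbox"))     # all known mailboxes

The point of the example is the lookup step: a client only needs to know a name, not a machine address, which is the same convenience DNS later brought to the internet at large.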

By 1975, PARC had a complete office system going: all of this, plus networked file systems, print servers (using Ethernet, created by Bob Metcalfe and Chuck Thacker at PARC), and laser printing (invented at PARC by Gary Starkweather, with software written by Butler Lampson)! A year later they had created a digital scanner.

You can see one of the e-mail clients being demonstrated, with WYSIWYG technology/laser printing, on the Alto, in this Xerox promotional video:

Clarence Ellis,
from CU Boulder
Gary Nutt,
from CU Boulder

Clarence Ellis and Gary Nutt, both computer scientists, developed OfficeTalk, a prototype office automation system, at PARC. It tracked “job” documents as they went from person to person in an organization. (source: “The Xerox ‘Star’: A Retrospective”) In addition, Ellis came up with the idea of clicking on a graphical image to start up an application, or to issue a command to a computer, rather than typing out words to do the same thing. This is something we see in all modern user interfaces. (source: Answers.com)
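
Here is a toy sketch in Python of the idea credited to Ellis, with hypothetical icon names and coordinates, just to make the concept concrete: the system maps a click on an icon to the command it stands for, so nothing has to be typed.

# A toy illustration of clicking an icon to issue a command, rather than
# typing the command's name. Hypothetical names; not OfficeTalk's actual design.

ICONS = [
    # (x, y, width, height, command)
    (10, 10, 32, 32, "open mail"),
    (10, 50, 32, 32, "print document"),
]

def command_for_click(x, y):
    """Return the command bound to whatever icon the click landed on."""
    for ix, iy, w, h, command in ICONS:
        if ix <= x < ix + w and iy <= y < iy + h:
            return command
    return None

# A click at (15, 60) lands on the second icon, so it means "print document",
# exactly as if the person had typed the command, but without typing anything.
print(command_for_click(15, 60))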

Why Xerox missed the boat

For this section I go back to Waldrop’s book as my main source.

One might ask: if the future of the PC was invented at Xerox, sounding an awful lot like what is in use with business IT systems today, why didn’t Xerox own the PC industry? An even bigger question: why weren’t people back then using systems like what we’re using now? We use Windows, Mac, and Linux systems today, each with their own graphical interfaces, which descended from what PARC created. We have had word processing and networked LAN file servers since the 1980s, and later, e-mail over the internet, print servers, and later still, office “task” automation software. The future of personal computing as we would come to know it was sitting right in front of them in the mid-1970s. We can say that now, since all of these ideas have been made into products we use in the work world. If we had looked at this technology back then, we might’ve been clueless about its significance. The executives at Xerox were an example of this.

For several years management couldn’t understand the vision that was developing at PARC. The research culture there didn’t mesh that well with the executive culture, especially after the company’s founder, Joe Wilson, died in 1971. Wilson had been open to new ideas and venturing into new technologies. At the time of his death, the company was also struggling from its own growth. The demand for its copiers was outstripping its ability to produce them competently. So a new management team was brought in which understood how to manage big companies. These were “numbers men,” though, and their methodology didn’t allow them to understand how to translate the kind of deep R&D the company had been doing with computers into products, because of course they didn’t have metrics to make an accurate measurement of how much money they’d make with a technology that was totally unknown to them. The company had plenty of metrics about costs and revenue that came from copier development and sales, so it was no problem for them to make reliable estimates for a new copier model. The most profitable part of their business was copiers, so that’s where they put their product development resources. The old hands at Xerox who had set up PARC ran interference for their operation, to keep the “numbers men” from shutting down their work. The people at PARC kept trying to convince the higher-ups that what they had developed was useful for office productivity, but it was for naught. In a depressing anecdote, Waldrop wrote:

One top-level Xerox executive, after a day of being shown the wonders of PARC, had posed precisely one question to the researchers: “Where can I get some of those beanbag chairs?” (TDM, p. 408)

(A famous “feature” of PARC was their use of beanbag chairs for group discussions, called “dealer meetings.”)

Stephen Lukasik came to Xerox from DARPA in 1975. He set up what was called the Systems Development Division (SDD). Its purpose was to create new salable products out of the research that was going on inside Xerox. The problem for him was that Xerox’s executives were willing to give him the authority to set up the division, but they weren’t willing to listen to what he said was possible, nor were they willing to give him the funding to make it happen. Lukasik left Xerox in 1976. He said that though his time there was short, he valued the experience. He just didn’t see the point in staying longer. Nevertheless, SDD would become useful to Xerox. It just had to wait for the right management.

Xerox set up a big demo of its products in Boca Raton, FL, in 1977, which included the prototypes from PARC. All the company executives, their wives, family, and friends were invited. The people from PARC did the best bang-up job they could to make their stuff look impressive for business computing.

Back at PARC, says [Gary] Starkweather, he and his colleagues saw this as their last, best chance. “The feeling was, ‘If they don’t get this, we don’t know what we can do.'” So, he says, with John Ellenby coordinating an all-out effort, “We stripped everything out of PARC down to the power cords and set it all up again in Boca. Computers, networks, printers–the whole thing! I built a laser printer that did color. Bill English had a word processor that did Japanese. We were going to show them space flight!”

They certainly tried. [At the event], Xerox executives and their families swarmed through the Grenada Rooms of the Boca Raton Hotel for a hands-on demonstration of WYSIWYG editing in Bravo, graphical programming in Smalltalk, E-mailing in Laurel, artistry in Paint and Draw–the works. “The idea was a mental slam-dunk!” says Starkweather. “And some people did see it.” The executives’ wives, for example–many of them former secretaries who knew all about carbon paper, Wite-Out, and having to retype whole pages to correct a single mistake–took one look at Bravo and got it. “The wives were so ecstatic they came over and kissed me,” remembers Jack Goldman. “They said, ‘Wonderful things you’re doing!’ Years later, I’d see them and they’d still remember Boca.”

Then there were the delegates from Fuji Xerox, the company’s Japanese partner: they were beside themselves over Bill English’s word processor. “Fuji clamored, ‘Give us this! We’ll manufacture it!'” recalls Goldman. In fact, he says, that was a near-universal reaction: “People from Europe, people from South America, marketing groups around the U.S.–everyone who went out of that conference was excited by what they had seen.”

Everyone, that is, except the copier executives, the real power brokers in Xerox. You couldn’t miss them; they were the ones standing in the background with the puzzled, So-What? look on their faces.

Indeed, one of the corporation’s purposes in calling the conference was to rally the troops for the coming era of ever-more-ferocious competition. In fairness, those executives in the background had to worry about defending the homeland now, not ten years from now. Or maybe they were simply too bound by the culture of the executive suite, vintage 1977. The xerographers lived in a world in which typing was women’s work and keyboards were for secretaries. It was a rare executive who would even deign to touch one. (TDM, p. 408)

There was a change in leadership in 1978 which recognized that the management style they had been using wasn’t working. The Japanese had entered the copier market, and were taking market share away from them. This is just what Joe Wilson had anticipated eight years earlier. Secondly, the new management recognized the vision at PARC. They wanted to create products from it. Work on the Xerox Star system began that year at SDD.

SDD had previously created a machine called Dandelion, Xerox’s most powerful computer to date. The Star would be the Dandelion translated into a business system. (Source: TAES)

Xerox and Apple

There’s been increasing awareness about the history of Xerox and Apple as time has passed, but there are some misconceptions about it that deserve to be cleared up.

Xerox and Apple developed a brief business relationship in 1979. Apple got ideas about how to create the user experience with their Lisa and Macintosh computers from Xerox PARC. The misperceptions are in how this happened. This is how the meeting between Xerox and Apple in 1979 has been perceived among those who are generally familiar with this history (from the 1999 TNT made-for-TV movie “Pirates of Silicon Valley”):

There’s some truth to this, but some of it is myth. One could almost say it was trumped up to bolster Steve Jobs’s image as a visionary. It’s a story that’s been propagated for many years, including by yours truly. So I mean to correct the record.

Much of Waldrop’s account of what happened between Xerox and Apple is a retelling of an account from Michael Hiltzik’s book, “Dealers of Lightning.” From what he says, the Apple team’s visit at PARC was not a total revelation about graphical user interfaces, but it was an inspiration for them to improve on what they had developed.

The idea of a graphical user interface on a computer was already out there. People at Apple knew of it prior to visiting Xerox. Xerox had been publishing information about the concept, and had been holding public demos of some of the graphical technology PARC had developed. So the idea of a graphical user interface was not a secret at all, as has been portrayed. However, this does not mean that every aspect of Xerox’s user interface development was made public, as I’ll discuss below.

Apple had been working on a computer called Lisa since 1978. It was a 16-bit machine that had a high-resolution bitmapped display. The Lisa team called it a “graphical computer.” Apple was approached by the Xerox Development Corporation (XDC) to take a look at what PARC had developed. The director of XDC felt that the technology needed to be licensed to start-ups, or else it would languish. Steve Jobs rebuffed the offer at first, but was convinced by Jef Raskin at Apple to form a team to go to PARC for a demonstration.

Apple and Xerox made a trade. Xerox bought a stake in Apple that was worth about $1 million, in exchange for Apple getting access to PARC’s technology. Waldrop says that Jobs and a team of Apple engineers made two visits to PARC in December 1979. According to an article from Stanford University, Jobs was not with them for the first visit. Waldrop notes that by this point Apple had already added a graphical user interface to the Lisa, but that it was clunky. By Larry Tesler’s account, there were more visits by the Apple team than Waldrop talks about. He has a different recollection of some of the details of the story as well. I include these different sources to give a “spectrum” of this history.

Going by Waldrop’s account, at their first visit in December ’79, they got a “standard” demo of the Alto, given by Adele Goldberg, that many other visitors to PARC had already seen. The group from Apple saw it being used with a mouse, the Bravo word processor, some drawing programs, etc., and then they left, apparently satisfied with what they had seen. It should be noted that this demo did not show the Xerox “desktop interface” in the sense of what people have come to know on personal computers and laptops. Each program they saw was a separate entity, which took over the entire operation of the machine, using the Alto’s graphical capabilities to show graphics and text. The Apple team did not see the desktop metaphor, which was in Smalltalk, and which was being developed for the Xerox Star. Goldberg considered Smalltalk proprietary. Jobs and the team eventually realized that they hadn’t really seen “the good stuff.”

The Apple team came back to PARC, unannounced. This is the visit that’s usually talked about in Apple-PARC lore, and is portrayed in the Pirates of Silicon Valley clip above. Jobs demanded to see the Smalltalk system *NOW*. A heated argument ensued between PARC and Xerox headquarters over this. Xerox’s executives ordered the PARC team to demonstrate Smalltalk to the Apple team, citing the partnership with Apple brokered by XDC. Bob Taylor was out of town at the time. He later said that he had no respect for Steve Jobs, and if he had been there, he would’ve kicked the Apple team out of the building, no ifs, ands, or buts! He figured Xerox would’ve fired him for it, but that would’ve been fine with him.

This time the people from Apple were prepared. They asked very detailed technical questions, and they got to see what Smalltalk was capable of. They saw educational software written by Goldberg, programming tools written by Larry Tesler, and animation software written by Diana Merry that combined graphics with text in a single document. They got to see multiple tasks on the screen at the same time with overlapping windows, in a “desktop” metaphor. Jobs was beside himself. He exploded, “Why hasn’t this company brought this to market?! What’s going on here? I don’t get it!” This was when Jobs had his epiphany about the graphical user interface, though Waldrop doesn’t delve into what that was. The breakthrough for him may have been the desktop metaphor, how it allowed multiple, different views of information, and multiple tasks to be worked on at the same time, along with everything else he’d seen. The way Waldrop portrays it is that Jobs realized that using a computer should be a fulfilling and fun experience for the person using it.

According to Hiltzik, the partnership between Xerox and Apple, of which the Apple stock trade had been a part, quickly fell apart after this, due to a culture clash. The Lisa’s chief programmer, Bill Atkinson, had to go off of what he remembered seeing at PARC, since he no longer had access to their detailed technical information.

So the idea that Apple (and Microsoft for that matter) “stole” stuff from Xerox is not accurate, though there was trepidation at PARC that Apple would steal “the kitchen sink.” While the visit gave the Lisa team ideas about how to make computer interaction better, and what the potential of the GUI was, Hiltzik said it wasn’t that big of an influence on the Lisa, or the Macintosh, in terms of their overall system design.

What this account means is that the visits to PARC were tangentially influential on Apple’s version of the idea of a graphical user interface. They did not go in ignorant of what the concept was, and it did not give them all of the ideas they needed to create one. It just helped make their idea better.

The microcomputer “revolution”

The people at PARC were aware of the nascent microcomputer phenomenon that was occurring under their noses. Some of them went to the Homebrew Computer Club meetings, where Steve Jobs and Steve Wozniak used to hang out. They read the upstart computer press that was raving about what Bill Gates and Steve Jobs were doing with their new companies, Microsoft and Apple Computer. According to Waldrop, it galled them that this upstart industry was getting so much attention and adoration that they felt it didn’t deserve.

“It had never occurred to us that people would buy crap,” declares Alan Kay, who considered the hobbyists in their garages down the hill to be very bright and very creative ignoramuses–undisciplined kids who didn’t read and didn’t have a clue about what had already been done. They were successful only because their customers were just as unsophisticated. “What none of us was thinking was that there would be millions of people out there who would be perfectly happy with the McDonald’s hamburger approach.” (TDM, p. 437)

Steve Jobs was somewhat of an exception. At least he got a hint of “what had already been done.” It took some prodding, but once he got it, he paid attention. Still, he didn’t fully understand the significance of the research that went into what he saw. For example, Jobs was very hostile to the idea of computer networking at the time, because he thought that would deprive the personal computer user of their autonomy. Freedom from dependency on larger computer systems was a notion he held to ideologically. When he railed against IBM as “big brother,” it wasn’t just for show. If only he had been aware of Licklider’s vision for the internet (which was published at the time), the notion of a distributed network with independent units, and the DARPA work that was creating it, perhaps he would’ve cottoned to it the way he had to the graphical user interface. We’ll never know. He came to understand the importance of the network features that had been developed at PARC about six years later, when he left Apple and started up NeXT.

There’s a famous quote from Jobs about his visits to PARC that illustrates what I’m talking about, from the PBS mini-series, “Triumph of the Nerds” (I wrote a post about this mini-series here):

They showed me, really, three things, but I was so blinded by the first one that I didn’t really ”see” the other two. One of the things they showed me was object-oriented programming. They showed me that, but I didn’t even “see” that. The other one they showed me was really a networked computer system. They had over 100 Alto computers all networked, using e-mail, etc., etc. I didn’t even “see” that. I was so blinded by the first thing they showed me, which was the graphical user interface. I thought it was the best thing I had ever seen in my life. Now, remember it was very flawed. What we saw was incomplete. They had done a bunch of things wrong, but we didn’t know that at the time. Still, though, the germ of the idea was there, and they had done it very well. And within ten minutes it was obvious to me that all computers would work like this, someday.

Too much, too late for Xerox

Xerox released the Star in 1981 as the 8010 Information System.

It had a Smalltalk-like graphical user interface, anywhere from 384 kilobytes of memory up to 1.5 MB, an 8″ floppy drive, a 10 to 40 MB hard disk, a monitor measuring 17″ diagonally, a mouse, a bevy of system features, Ethernet, and laser printing. (source: The Xerox “Star”: A Retrospective) The 8010 cost $16,500 per unit, though a customer couldn’t purchase just the computer. It was designed to be an integrated system, with networking and laser printing included. A minimal installation cost $100,000 (about $252,438 in today’s money). Xerox was going for the Fortune 500, which could afford expensive, large-scale systems. Even though the designers thought of it as a “personal computer” system, the 8010 was marketed under the old mainframe business model.

The designers at SDD put the kitchen sink into it. It had every cool thing they could think of. The system was designed so that no third-party software could be installed on it at all. All of the hardware and software came from Xerox, and had to be installed by Xerox employees. This was also par for the course with typical mainframe setups.

In the Star operating system, all of the software was loaded into memory at boot-up, and kept memory-resident (very much like Smalltalk). Users didn’t worry about what applications to run. All they had to focus on was their documents, since all documents and data were implicitly linked to the appropriate software. It presented an object-oriented approach to information. It didn’t say to you, “You’re working with a word processor.” Instead, you worked with your document. The system software just accommodated the document by surreptitiously activating the appropriate part of the system for its manipulation, and presenting the appropriate interface.
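
A minimal sketch of that document-centric idea, in Python, with hypothetical names (the Star's actual implementation was of course nothing like this): the system keeps a mapping from document types to the software that handles them, so opening a document implicitly activates the right tool and presents its interface.

# A toy sketch of document-centric dispatch: the user opens a document,
# and the system activates whatever software handles that kind of document.
# Hypothetical names throughout; purely an illustration of the idea.

class TextEditor:
    def open(self, doc):
        print(f"Editing text document '{doc['name']}'")

class SpreadsheetTool:
    def open(self, doc):
        print(f"Opening spreadsheet '{doc['name']}'")

# The system, not the user, knows which tool goes with which document type.
HANDLERS = {
    "text": TextEditor(),
    "spreadsheet": SpreadsheetTool(),
}

def open_document(doc):
    handler = HANDLERS[doc["type"]]   # the implicit link from document to software
    handler.open(doc)

open_document({"name": "Q3 report", "type": "text"})
open_document({"name": "Budget", "type": "spreadsheet"})

Contrast this with the application-centric model most of us still use, where you pick the program first and then go looking for the document from inside it.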

Since the Xerox brass recognized they knew nothing about computers, they turned the Star’s design totally over to SDD. There appeared to have been no input from marketing. Even so, as Steve Jobs said of the executives, “They were copier-heads. They just had no clue about a computer, what it could do.” It’s unclear whether input from marketing would’ve helped with product development. I have a feeling “the innovator’s dilemma” applies here.

The designers at SDD were also told that the cost of the product was no object, since their target market was used to paying hundreds of thousands of dollars for large-scale systems. The Xerox executives miscalculated on this point. While it’s true that their target market had no problem paying the system’s price tag, personal computing was a new, untested concept to them, and they were wary about risking that much money on it. Xerox didn’t have a low-risk, low-cost entry configuration to offer. It was all or nothing. Microcomputers were a much easier sell in this environment, because their individual cost, by comparison, went unnoticed in corporate budgets. Corporate managers were able to sneak them in, buying them with petty cash, even though IT managers (who believed wholeheartedly in mainframes) tried their darnedest to keep them out.

The microcomputer market, which just about everyone at Xerox saw as a joke, was eating their lunch. Bob Frankston and Dan Bricklin (both alumni of the Project MAC/Multics project) had come out with VisiCalc, the world’s first commercial spreadsheet software, in 1979, sold through the company that would become VisiCorp, and it was only available for micros. Xerox didn’t have an equivalent on the 8010 when it was released. They had put all of their “eggs” into word processing, databases, and e-mail. These things were needed in business, but it was the same thing the researchers at PARC had run into when they tried to show the Alto to the Xerox brass in years past: people in the corporate hierarchy saw themselves as having certain roles, and they thought they needed different tools than what Xerox was offering. Word processing was what stenographers needed, in the minds of business customers. Managers didn’t want it. They wanted spreadsheets, which were the “killer applications” of the era. They would buy a microcomputer just so they could run a spreadsheet on it.

The designers at SDD didn’t understand their target market. The philosophy at PARC was “eat your own dog food” (though I imagine they used a different term for it). From what I’ve heard, listening to people who were part of the IPTO in the 1960s, it was Doug Engelbart who invented this development concept. The problem for Xerox was that this applied to product development as well as to overall quality control. From the beginning, they wanted to use all of the technology they developed, internally, with the idea that if they found any deficiencies, there would be no running away from them. It would motivate them to make their systems great.

Thacker and Lampson mentioned in a retrospective, which I refer to at the end of this post, that someone at PARC had come up with a spreadsheet for the Alto, but that nobody there wanted to use it. They were not accountants. Xerox eventually released a spreadsheet for the 8010, once they saw the demand for it, but the writing was already on the wall by then.

It’s hard for me to say whether this was intentional, but the way SDD designed the Star carried an implicit assumption that customers in their target market would want the capabilities that the designers thought were important. Their focus was on building a complete, integrated system, the likes of which no one had ever seen. The problem was that nothing in their strategy accounted for the needs and perceptions that business customers would assert. They may have assumed that customers would be so impressed by the innovativeness of the system that it would sell itself, and people would adjust their roles to it in a McLuhan-esque “the medium is the message” sort of way.

The researchers at PARC recognized that producing a microcomputer had always been an option. In 1979, Bob Belleville suggested going with a 16-bit microcomputer design, maybe using the Motorola 68000, or the Intel 8086 processor, instead of going forward with the Star. He built a prototype and showed that it could work, but the team working on the Star didn’t have the patience for it. They would’ve had to throw out everything they had done, hardware and software, and start over. Secondly, they wouldn’t have been able to do as much with it as they were able to do with the approach they had been pursuing. Thacker and Lampson saw as well that building a microcomputer would be more expensive than going ahead with their approach. It was, however, a fateful decision, because the opposite would be true two years later, when the 8010 Information System was released.

Lampson said, ironically, that this time, “the problem wasn’t a shortage of vision at headquarters. If anything, it was an excess of vision at PARC.”

When Apple released their Lisa computer in 1983, Xerox realized that they had missed their chance. The future belonged to IBM, Apple, and Microsoft.

Dispersion

From about 1980 on, people left PARC to join other companies. It was bleeding talent. Taylor was seen as part of the problem. He had an overriding vision, and it was his way or the highway, so others have said. He understood full well where computing was going; he was just ahead of his time. He could be very supportive if you agreed with his vision, but he had utter contempt for any ideas he didn’t approve of, and he would “take out the flamethrower” if you opposed him. This sounds a lot like what people once complained about with Steve Jobs, come to think of it. The director of PARC kindly asked Taylor to change his attitude. He took it as an insult, and left in frustration in 1983, taking some of PARC’s best engineers with him.

From looking at the story, it appears the failure of the Star project was “the last straw” for Xerox’s foray into computer research. After Taylor left, the era of innovative computer research ended at PARC.

The visions at Xerox were grandiose, and were too much, too late for the market. I don’t mean to take anything away from what they did by saying this. What they had was pretty good. It represented the future, but it was too far ahead of where the business computing market was. Competitors had grabbed the attention of customers, and they preferred what the competition was offering.

The PARC researchers lost out both ways. When they had the chance to develop products in 1975, ahead of the explosion in the microcomputer market, their vision wasn’t recognized by the company that housed them. By the time it was recognized, the market had changed such that much less powerful machines, backed by more innovative business models, won out.

Even though Alan Kay brought the idea of a small, handheld computer to PARC, something that ordinary people would love to use, he valued computing power over the size of the machine. As he would later say about that period, the idea of the Dynabook was more of a service metaphor. The size and form of the hardware was incidental to the concept. (source: An Interview with Computing Pioneer Alan Kay) Thacker and Lampson, the designers of the 8010, shared that value as well. By the late 1970s the market was thinking “small is beautiful,” and they were willing to tolerate clunky, low-powered machines, because their economies of scale met immediate needs, and they created less friction from corporate politics.

Apple and Microsoft used ideas developed at PARC to further develop the personal computer market, and so watered-down pieces of that vision got out to customers over about 20 years. Today we live in a world that resembles what Bob Taylor envisioned, with individual computers, networking, and digital printing, all using graphical systems.

PARC’s legacy

The outside world would come to know the desktop graphical user interface metaphor because of Smalltalk, though the metaphor was repurposed away from Alan Kay’s powerful ideas about a modeling system, into a system that made it easy to run applications, and that emulated older media by integrating them into virtual paper documents.

To me, the most interesting contributor to the desktop interface was Diana Merry. She was originally hired at PARC as a secretary. She just happened to understand Alan Kay’s goals, and so she was invited into the LRG. She and Kay created the first implementation of overlapping windows, in Smalltalk. She also wrote a lot of the basic software Smalltalk used to display and animate graphics. (Sources: “The Mouse and the Desktop — Designing Interactions,” by Bill Moggridge, and TEHS)

The Paint, Draw, and Write applications that appeared on the Apple Lisa and the first Macintosh systems were inspired by similar works that had been created years earlier at PARC.

Microsoft Windows and Microsoft Word benefited from the research that was done at PARC, though some inspiration came from the Macintosh as well. The look and feel of the first versions of Windows owed more to the X Window System on Unix. Where Microsoft copied Apple was in how people interacted with Windows (icons and menus), and in the application suite that came with the system. (source: The Secret Origin of Windows)

Software developers who have been working on apps for Apple products in the most recent generations of systems may be interested to know that the Objective-C language they’ve used was first created by a company called StepStone in the early 1980s. The design of the language and its runtime were somewhat influenced by the Smalltalk system.

Several companies licensed a version of Smalltalk, called Smalltalk-80, from Xerox during the 1980s. Among them were Tektronix, Hewlett-Packard, Digital Equipment Corp., and Apple. (source: “Smalltalk-80: Bits of History, Words of Advice”) Apple got a version running on the Macintosh XL in 1985. (source: The Long View) Apple’s version of Smalltalk was updated in the mid-1990s by Alan Kay and some of the original Learning Research Group gang from Xerox. They got Apple’s permission to release it into the public domain, and it has since been ported to many platforms. Professional developers call it by its new name, “Squeak.” An educational version also exists, maintained by the Squeakland Foundation, which goes by the name “Etoys.”

Charles Simonyi joined Microsoft. He brought his knowledge from developing Bravo with him, and it went into creating Microsoft Word. He became the lead developer on their Multiplan and Excel spreadsheet software.

Where are they now?

Stephen Lukasik became a Chief Scientist at the FCC from 1979-1982. He is a member of the International Institute for Strategic Studies, the American Physical Society, and the American Association for the Advancement of Science. He is a founder of The Information Society journal, and has served on the Boards of Trustees of Harvey Mudd College and Stevens Institute of Technology. (Source: Georgia Tech)

Larry Roberts was chief executive at Telenet until 1980. Telenet was sold to GTE in 1979 and subsequently became the data division of Sprint. In 1983 he became the CEO of NetExpress, an Asynchronous Transfer Mode (ATM) equipment company. Roberts then became president of ATM Systems from 1993 to 1998. He then went back to packet networking, founding Caspian Networks, which focused on IP flow management (IP as in “Internet Protocol”), where he stayed until 2004. He founded Anagran in an effort to do what Caspian did, but more efficiently. (Source: Larry Roberts’s home page)

J.C.R. Licklider – In the late 1970s Lick visited Xerox PARC regularly. He served on the Committee on Government Relations at the Association for Computing Machinery (ACM), and as deputy chairman of the Social Security Administration’s Data Management System. He spent a year in 1978 on a task force commissioned by the Carter Administration, which examined the government’s data-processing needs. He served as president of the Boston Computer Society. He was an investor in, and advisor to, a company called Infocom, founded in 1979 by eight of his former students from his Dynamic Modeling group at MIT. He spent part of his time working at VisiCorp. He continued to work at MIT until he retired in 1985. He died on June 26, 1990.

Ed Feigenbaum – In 1984 he became a Fellow at the American College of Medical Informatics. From 1994 to 1997 he served as Chief Scientist of the U.S. Air Force. He founded the Knowledge Systems Laboratory at Stanford University, and is now professor emeritus at Stanford. He was co-founder of several start-ups, such as IntelliCorp, Teknowledge, and Design Power Inc. He has served on the National Science Foundation Computer Science Advisory Board, on the National Research Council’s Computer Science and Technology Board, and as a member of the Board of Regents of the National Library of Medicine. He is a Fellow of the Association for the Advancement of Artificial Intelligence, the American Institute of Medical and Biological Engineering, and the American Association for the Advancement of Science. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences. (Source: Feigenbaum’s Turing Award citation at the ACM)

George Heilmeier became vice president at Texas Instruments in 1977. In 1983 he was promoted to Senior Vice President and Chief Technical Officer, responsible for all TI research, development, and engineering activities. From 1991-1996 he served as president and CEO of Bellcore (now Telcordia), ultimately overseeing its sale to Science Applications International Corporation (SAIC). He served as the company’s chairman and CEO from 1996-1997, and afterwards as its chairman emeritus. He serves on the boards of trustees of Fidelity Investments and of Teletech Holdings, and on the Board of Overseers of the School of Engineering and Applied Science of the University of Pennsylvania. He is a member of the National Academy of Engineering and the Defense Science Board, and is Chairman of the Technical Advisory Board of Southern Methodist University. (sources: Wikipedia and the IEEE Global History Network)

It should be noted that Heilmeier is credited with being the inventor of the Liquid Crystal Display (LCD), a technology Alan Kay hoped to use for his Dynabook in the 1970s, and which became the basis for the digital display most of us use today.

Alan Kay left PARC on sabbatical in 1980, and never came back. He came to Atari in 1981, as Chief Scientist, doing research on interactive computing (you can see some samples of it here). He then joined Apple as a research Fellow in 1984, where he worked on improving education in conjunction with technology. He joined Disney as a Fellow and Imagineer from 1996 to 2001. He founded Viewpoints Research Institute in 2001. He then became a research fellow at Hewlett-Packard from 2002 to 2005. During that time he was a Visiting Professor at Kyoto University, an adjunct professor of Computer Science at MIT, and was involved with the development of the XO Laptop, developed by the One Laptop Per Child program at MIT. Today he is an adjunct professor of Computer Science at UCLA, and he continues his work at Viewpoints. He is also on the advisory board of TTI/Vanguard. Since 2006 Viewpoints has been working on a project, sponsored by the National Science Foundation, to reinvent personal computing. (sources: The New York Times; Chap. 12 from Howard Rheingold’s “Tools For Thought”; Wikipedia; bio. on Alan Kay at Answers.com; VPRI: Inventing Fundamental New Computing Technologies)

Bob Taylor went on to found the Systems Research Center (SRC) at Digital Equipment Corp. (DEC) in 1984. He retired from DEC in 1996.

Butler Lampson also left PARC with Taylor, and joined him at Digital Equipment Corp. He became a Fellow of the ACM in 1992. He now works at Microsoft Research, and is an adjunct professor at MIT. He became a Fellow at the Computer History Museum in 2008.

Dan Ingalls left PARC to work at Apple in 1984. Beginning in 1987 he helped run the Homestead Hotel, a family business, until 1993. He stayed on with Apple until 1996. From 1996 to 2001 he was Principal Staff Director at Disney Imagineering. He worked as a consultant for Hewlett-Packard and at Viewpoints Research Institute from 2001 to 2005. He joined Sun Labs in 2005, where he developed his newest project, called Lively Kernel. He left Sun in 2010 to become a Fellow at SAP, where he continues to work today. He continues development of his Lively Kernel Project at the Hasso Plattner Institute. (Sources: Dan Ingalls’s Linkedin page, and his Wikipedia page)

Diana Merry – After leaving Xerox in 1986, she continued her work in Smalltalk with various employers. You can see her complete work history at her LinkedIn page.

Chuck Thacker left PARC the same time Bob Taylor did, and joined DEC as a founder of its Systems Research Center. He then joined Microsoft in 1997 to help found Microsoft Research in Cambridge, UK. He returned to the U.S., and developed the technology which was used in Microsoft’s Tablet PC. He is currently working at Microsoft Research in Silicon Valley on computer architecture.

Adele Goldberg served as president of the Association for Computing Machinery (ACM) from 1984 to 1986, and continued to have a long association with the organization, taking on various roles. She continued on at PARC until 1987. She wanted to make sure that Smalltalk technology got out to a wider audience, so she worked out a technology exchange agreement with Xerox in the late 1980s and founded ParcPlace Systems, which commercialized a version of Smalltalk. She served as CEO and chairwoman of ParcPlace until 1995, when the company merged with Digitalk, another Smalltalk vendor, to become ParcPlace-Digitalk. In 1997 the company changed its name to ObjectShare. In 1999 ObjectShare was sold to Cincom, which has continued to develop and sell a Smalltalk development suite. Goldberg was inducted as a Fellow of the ACM in 1994. She co-founded Neometron in 1999, and now also works at Bullitics. She is a board member of Cognito Learning Media. She has continued her interest in education, formulating computer science courses at community colleges in the U.S. and abroad.

Charles Simonyi went to work at Microsoft in 1981. He led their development efforts to create Word, Multiplan, and Excel. He left Microsoft in 2002 to found Intentional Software, where he’s been at work on what I’d call his “Domain-Oriented Development” software and techniques.

Larry Tesler went to work at Apple in 1980, becoming Vice President of the Advanced Technology Group, and Chief Scientist. He worked on the team that developed the Lisa computer. In 1990 he led the effort to develop the Apple Newton, one of the first of what we’d recognize as a personal digital assistant. Tesler left Apple in 1997 to co-found a company called Stagecast Software. In 2001 he joined Amazon.com as its Vice President of Shopping Experience. In 2005 he joined Yahoo! as Vice President of its User Experience and Design group. In 2008 he went to work for 23andMe, a personal genetics information company. Since 2009 he’s worked as an independent consultant.

Timothy Mott left Xerox PARC to co-found Electronic Arts in 1982. In 1990 he became Director at Electronic Arts, and co-founded Macromedia, staying with them until 1994. In 1995 he co-founded Audible.com, and stayed with them until 1998. He stayed on with Electronic Arts until 2007. You can see his complete work profiles at Bloomberg BusinessWeek and CrunchBase.

Doug Brotz – It’s been difficult finding much information on him. All I have is that he joined Adobe and was a co-developer of the PostScript page description language, used to control laser printers.

Andrew Birrell – You can find a summary of his work history here.

Roy Levin became a senior researcher at the DEC Systems Research Center in 1984. In 1996 he became its director. In 2001 he co-founded Microsoft Research in Silicon Valley, became a Distinguished Engineer, and its Managing Director. He continues work there today.

Roger Needham worked as a consultant at Xerox PARC from 1977-1984. He then worked at DEC’s Systems Research Center from 1984-1997, and at Hitachi Advanced Research Laboratory from 1994-1997. He became a Fellow at the Royal Academy of Engineering in 1993, and of the ACM in 1994. He was a Fellow at Wolfson College, in Cambridge, UK, from 1966-2002. He joined Microsoft Research in 1997, and founded their European Research Labs. He was a longtime member of the International Association for Cryptographic Research, the IEEE Computer Society Technical Committee on Security and Privacy, and the University Grants Committee, an advisory committee to the British government. He died on March 1, 2003. (source: Wikipedia)

Michael Schroeder – I haven’t found detailed information on Schroeder, except to say that as a professor at MIT he worked on the Multics project (I covered this project in Part 2 of this series), before coming to Xerox PARC, and that after PARC he worked at DEC’s Systems Research Center. He then came to Microsoft Research in 2001, where he continues to work today. He became a Fellow of the ACM in 2004.

Clarence Ellis – After leaving Xerox PARC, he became head of the Groupware Research Program at the Microelectronics Computer Consortium in Austin, TX. He also worked at Los Alamos Labs, and Argonne National Laboratory. He held academic positions at Stanford, the University of Texas, MIT, and the Stevens Institute of Technology. In 1991 he became the chief architect of the FlowPath workflow product at Bull S.A. In 1992 he came to the University of Colorado at Boulder as a professor of computer science, and, with Gary Nutt, formed the Collaborative Technology Research Group (CTRG) to work on project workflow systems. He became professor emeritus of computer science at CU in 2010. He was on the editorial board of various journals. He was a member of the National Science Foundation (NSF) Computer Science Advisory Board, the University of Singapore ISS International Advisory Board, and the NSF Computer Science Education Committee. He died on May 17, 2014. (sources: Computer Scientists of the African Diaspora; CTRG Groupware and Workflow Research; the Daily Camera)

A note I found in Ellis’s bio. also says that he was the first African American to receive a Ph.D. in computer science, in 1969, from the University of Illinois at Urbana-Champaign. He did his post-graduate work developing the Illiac IV supercomputer, an ARPA/IPTO project, and one of the first massively parallel computer systems.

Gary Nutt worked at Xerox PARC from 1978-1980. He worked on collaboration systems at Bell Labs in Denver, CO. from 1980-81. He took a couple executive roles at technology firms in Boulder, CO. from 1981-1986. He returned to the University of Colorado at Boulder in 1986 as a CS professor (he was previously an associate CS professor at CU Boulder from 1972-1978). While on sabbatical in 1993, he worked for Group Bull in Paris, France, developing collaboration products. In 2000 he worked as VP of Engineering for Bookface.com, managing their intellectual property. He went to Inktomi in 2001 to work on content and media distribution over the internet. He also advised on managing their intellectual property. He retired from CU Boulder in 2010, and is now professor emeritus. You can see his work history in more detail at his web page, which I’ve linked to here.

Steve Jobs – I wrote about Jobs’s work in a separate post. He died on October 5, 2011.

Bob Belleville – I couldn’t find much on him. What I have is that he joined Apple in 1981, becoming the chief engineer on the Lisa, and later the Macintosh project. He later joined Silicon Graphics’s R&D Dept. (Sources: Good-bye Woz and Jobs; Making the Macintosh)

You can watch a retrospective from 2001 given by Chuck Thacker and Butler Lampson on their days at PARC, and what they did, here, if you’re interested. It gets pretty technical, and is an hour and 20 minutes long.

You can see a 2010 interview with Adele Goldberg at the Computer History Museum, where she reviews her academic, educational, and business career here. It’s about an hour and 30 minutes.

Here’s a presentation given by the University of Texas at Austin in 2010, with Mitchell Waldrop, the author of “The Dream Machine,” and Michael Hiltzik, the author of “Dealers of Lightning,” along with an interview with Bob Taylor. It’s a nice bookend to the history I’ve discussed here.

Epilogue

The closing chapter in Waldrop’s book is called, “Lick’s Kids.” It talks about the people who were mentored by Licklider, either through ARPA, or at MIT, and who went out into the world to bring their ideas to life. It also talked about his waning years. He lived to see “the wheel reinvented” with microcomputers. The companies that were going gangbusters with them were repeating many of the lessons he and his researchers had learned in the 1960s, working at ARPA. The implication being that if they had merely taken time to look at what had already been learned 20 years earlier they would’ve avoided the same mistakes. He also saw the first glimmers of his vision of the Multinet come into being with the internet.

When reflecting on his life’s work, Lick was humble. He didn’t give himself much credit for creating our digital world. He thought of himself as just happening to be at the right place at the right time while some very bright people did the real work. But those who were mentored, and funded by him gave him a great deal of credit. They said if it wasn’t for him, their ideas would not have gotten off the ground, and often he was the only one who could see promise in them. He was indeed in the right place at the right time, but what was important was that he was in the right position to give them the support they needed to bring their dreams into reality.

I’ll talk more about Bob Kahn, and the development of the internet, in Part 4.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

The death of Neil Armstrong, the first man to walk on the Moon, on August 25 got me reflecting on what was accomplished by NASA during his time. I found a YouTube channel called “The Conquest of Space,” and it’s been wonderful getting acquainted with the history I didn’t know.

I knew about the Apollo program from the time I was a kid in the 1970s. I was born two months after Apollo 11, so I only remember it in hindsight. By the time I was old enough to be conscious of the Apollo program’s existence, it had been mothballed for four or five years. I could not be ignorant of its existence. It was talked about often on TV, and in the society around me. I lived in Virginia, near Washington, D.C., in my early childhood. I remember I used to be taken regularly to the Smithsonian Air and Space Museum. Of all of their museums, it was my favorite. There, I saw video of one of the moon walks, the space suits used for the missions (as mannequins), the Command Module, and Lunar Module at full scale, artifacts of a time that had come and gone. There was hope that someday we would go back to the Moon, and go beyond it to the planets. The Air and Space Museum had an IMAX movie that was played continuously, called “To Fly.” From what I’ve read, they still show it. It was produced for the museum in 1976. I remember watching it a bunch of times. It was beautifully done, though looking back on it, it had the feel of a “demo” movie, showing off what could be done with the IMAX format. It dramatizes the history of flight, from hot air balloons in the 19th century, to the jet age, to rockets to the Moon. A cool thing about it is it talked about the change in perspective that flight offered, a “new eye.” At the end it predicted that we would have manned space missions to the planets.

Why wouldn’t we have manned missions that venture to the planets, and ultimately, perhaps a hundred years off, to other star systems? It would just be an extension of the advancements in flight we had made on earth. The idea that we would keep pushing the boundaries of our reach seemed like a given, that this technological pace we had experienced would just keep going. That’s what everything that was science-oriented was telling me. Our future was in space.

In the late 1970s Carl Sagan produced a landmark series on science called “Cosmos.” He talked about the history of space exploration, mostly from the ground, and how our destiny was to travel into space. He said, introducing the series,

The surface of the earth is the shore of the cosmic ocean. On this shore we’ve learned most of what we know. Recently we’ve waded a little way out, maybe ankle deep, and the water seems inviting. Some part of our being knows this is where we came from. We long to return, and we can…

Winding down?

As I got into my twenties, in the 1990s, I started to worry about NASA’s robustness as a space program. It started to look like a one-trick pony that only knew how to launch astronauts into low-earth orbit. “When are we going to return to the Moon?” I’d ask myself. NASA sent probes out to Jupiter, Mars, and then Saturn, following in the footsteps of Voyager 1 and 2. Surely similar questions were being asked of NASA, because I’d often hear them say that the probes were forerunners to future manned space flight, that they were gathering information that we needed to know in advance for manned missions, holding out that hope that someday we’d venture out again.

The Space Shuttle was our longest-running space program, from 1981 to 2011, 30 years. Back around the year 2000 I remember Vice President Al Gore announcing the winner of the contract to build the next-generation space shuttle, which would take the place of the older models, but it never came to be. Under the administration of George W. Bush the Constellation program started in 2005, with the idea of further developing the International Space Station, returning astronauts to the Moon, establishing a base there for the first time, and then launching manned missions to Mars. This program was cancelled in 2010 in the Obama Administration, and there has been nothing to replace it. I heard some criticism of Constellation, saying that it was ill-defined, and an expensive boondoggle, though it was defended by Neil Armstrong and Gene Cernan, two Apollo astronauts. Perhaps it was ill-defined, and a waste of money, but it felt sad to see the Space Shuttle program end, and to see that NASA didn’t have a way to get into low-earth orbit, or to the International Space Station. The original idea was to have the first stage of the Constellation program follow after the space shuttles were retired. Now NASA has nothing but rockets to send out space probes and robotic rovers to bodies in space. Even the Curiosity rover mission, now on Mars, was largely developed during the Bush Administration, so I hear.

I have to remember at times that even in the 1970s, during my childhood, there was a lull in the manned space program. The Apollo program was ended in the Nixon Administration, before it was finished. There was a planned flight, with a rocket ready to go, to continue the program after Apollo 17, but it never left the ground. There’s a Saturn V rocket that was meant for one of the later missions that lies today as a display model on the grounds of the Kennedy Space Center. I have to remember as well that then, as now, the program was ended during a long, drawn-out war. Then, it was in Vietnam. Now, it’s in the Middle East.

American manned space flight wound down after the final Skylab mission ended in 1974, and stopped after the Apollo-Soyuz flight in 1975. It didn’t begin again for nearly six years, until the first launch of the Space Shuttle in 1981. The difference is the Shuttle was first conceptualized towards the end of the Apollo program. It was there as a goal. Perhaps we are experiencing the same gap in manned flight now, though I don’t have a sense that NASA has a “next mission” in mind. As best I can tell the Obama Administration has tasked NASA with supporting private space flight. There is good reason to believe that private space flight companies will be able to send astronauts into low-earth orbit soon. That’s a consolation. The thing is that’s likely all they’re going to do in the future–launch to low-earth orbit. They’re at the stage the Mercury program was at more than 50 years ago.

What I ask is: do we have anything beyond this in mind? Do we have a sense of building on the gains in knowledge that have been made, to venture out beyond what we now know? I grew up being told that “humans want to explore, to push the boundaries of what we know.” I guess we are still like that, but maybe we’re directing that impulse in new ways here on earth, rather than into space. I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence. Astrophysicist and vocal NASA advocate Neil deGrasse Tyson has worried about this, too.

I realized a few years ago, to my dismay, that what really drove the creation of the space program, and our flights to the Moon, was not an ambition to push our frontiers of knowledge just for the sake of gaining knowledge. There was a major political aspect to it: beating the Soviets in “the space race” of the 1960s, establishing higher ground for ourselves, in a military sense. Yes, some very valuable scientific and engineering work was done in the process, but as Tyson would say, “science hitched a ride on another agenda.” That’s what it’s often done in human history. Many non-military benefits to our society flowed from what NASA once did, none of which are widely recognized today. Most people think that our technological development came from innovators in the private sector alone. The private sector did a lot, but they also drew from a tremendous resource in our space and defense research and development programs, as I’ve documented in earlier posts.

I’ll close with this great quote. It echoes what Tyson has said, though it’s fleshed out in an ethical sense, too, which I think is impressive.

The great enemy of the human race is ignorance. It’s what we don’t know that limits our progress. And everything that we learn, everything that we come to know, no matter how esoteric it seems, no matter how ivory tower-ish, will fit into the general picture a block in its proper place that in the end will make it possible for mankind to increase and grow; become more cosmic, if you wish; become more than a species on Earth, but become a species in the Universe, with capacities and abilities we can’t imagine now. Nor do I mean greater and greater consumption of energy, or more and more massive cities.

It’s so difficult to predict, because the most important advances are exactly in the directions that we now can’t conceive, but everything we now do, every advance in knowledge we now make, contributes to that. And just because I can’t see it, and I’m an expert at this, … doesn’t mean it isn’t there. And if we refuse to take those steps, because we don’t see what the future holds, all we’re making certain of is that the future won’t exist, and that we will stagnate forever. And this is a dreadful thought. And I am very tired when people ask me, “What’s the good of it,” because the proper answer is, “You may never know, but your grandchildren will.”

— Isaac Asimov, 1973, from the NASA film “Small Steps, Giant Strides”

Then as now, this is the lament of the scientist, I think. Scientists must ask society’s permission to explore, because they usually need funds from others to do their work, and there is no immediate payback to be had from it. It is for this reason that justifying the funding of that work is tough, because scientific work goes outside the normal set of expectations people have about what is of value. If the benefits can’t be seen here and now, many wonder, “What’s the point?” What Asimov pointed out is that the pursuit of knowledge is its own reward, but to really gain its benefits you must be future-oriented. You have to think about and value the world in which your children and grandchildren will live, not your own. If your focus is on the here and now, you will not value the future, and so potential future benefits of scientific research will not seem valuable, and therefore will not seem worthy of pursuit. It is a cultural mindset that is at issue.

Edit 12-10-2012: Going through some old articles I’d saved, I came upon this essay about humanity’s capacity for intellectual thought, called “Why is there Anti-Intellectualism?”, by Steven Dutch at the University of Wisconsin-Green Bay. It provides some reasonable counter-notions to my own that seem to confirm what I’ve seen, but will still take some contemplation on my part.

There’s no science in the article. In terms of quality, at best, I’d call this an “executive summary.” Maybe there’s more detailed research behind it, but I haven’t found it yet. Dutch uses heuristics to provide his points of comparison, and uses a notion of evidence to provide some meat to the bones. He asks some reasonable questions that are worth contemplating, challenging the notion that “humans are naturally curious, and strive to explore.” He then makes observations that seem to come from his own experience. Overall, he provides a reasonable basis for answering a statement I made in this article: “I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence.” He comes down on the side of saying, in his opinion (paraphrasing), “Yes, some in the scientific community have fooled themselves on this issue.” He discusses the notion that “humans are naturally curious,” due to the behavior exhibited by children. He concludes by saying that children naturally display a shallow curiosity, which he calls “tinkering.” The harder task of creative, deep thought does not come naturally. It’s something that needs to be cultivated to take root. Hence the need for schools. The question I think we as citizens should be asking is whether our schools are actually doing this, or something else.

Read Full Post »

See Part 1

This turned into a much larger research project than I anticipated. I am publishing this part around the 1-year anniversary of when I started on this whole series! I published the first part of it in late June 2011, and then put the rest “on the shelf” for a while, because it was crowding out other areas of study I wanted to get into.

To recap, the purpose of this series is to educate the reader on government-funded research that led to the computer technology we use today. I was inspired to write it because of gross misconceptions I was seeing on the part of some who thought the government has had nothing to do with our technological advancement, though I think technologists will be interested in this history as well. This history is not meant to be read as a definitive guide to all computer history. I purposely have focused it on a specific series of events. I may have made mistakes, and I welcome corrections. The first part of this series covers government-funded research in computing during the 1940s and 50s. In this article I continue coverage of this research during the late 1950s and 1960s, and the innovations that research inspired.

Most of what I will talk about below is sourced from the book “The Dream Machine,” by M. Mitchell Waldrop. Some of it is sourced from a New York Times opinion blog article called “Did My Brother Invent E-mail with Tom Van Vleck?” It’s a great article, by the way, on the development of time-sharing systems, a major topic I cover here. There’s a video produced by DARPA at the end of this post that I use as a source, and I’ll just refer to it as “the DARPA video.” Quite a bit of material is sourced from Wikipedia as well. Knowing Wikipedia’s reputation for getting facts wrong, I have tried as much as possible to look at other sources to verify it. Occasionally I’ll relay some small anecdotes I’ve heard over the years from people who were in the know about these subjects.

Revving up interactive, networked computing

Wes Clark in 2002, from Wikipedia.org

Ken Olsen and Wes Clark, both electrical engineers, and Harlan Anderson, a physicist, all graduates of the Massachusetts Institute of Technology (MIT) and veterans of Project Whirlwind, formed the Advanced Computer Research Group at MIT in 1955. Since all of the interactive computers that had been made up to that point were committed to air defense work, due to the Cold War, they wanted to make their own interactive computer that would be like what they had experienced with Whirlwind. It would be a system they would control. The first computer they built was called the TX-0, completed in 1956. It was about the size of a room. It was one of the first computers to be built with transistors (invented at AT&T Bell Labs in 1947). It had an oscilloscope for a display, a speaker for making sound, and a light pen for manipulating and drawing things on the display. In 1956 they started work on a larger, more ambitious computer system they called the TX-2. It was completed in 1958, and took up an entire building. It would come to play an important role in computer history when an electrical engineer named Ivan Sutherland did his Sketchpad project on it in 1963 for his doctoral thesis (PDF). Sketchpad is considered to be the prototype for CAD (Computer-Aided Design) systems. It was also, in my view, the first proof-of-concept for the graphical user interface, particularly since one used gestures to manipulate objects on the screen with it.

Even though the idea behind building these computers was to build an interactive machine that these guys could control, away from the military, I’m not so sure they succeeded at that, at least for long. From what Alan Kay has said in his presentations on this history, Sutherland had to work on his Sketchpad project in the middle of the night, because the TX-2 was used for air defense during the day…

In the midst of the TX-2’s construction, Ken Olsen, his brother Stanley, and Harlan Anderson founded Digital Equipment Corporation (DEC) in 1957 out of frustration with the academic and business world. They had tried to tell their colleagues, and people in the computer industry, that interactive computing had a promising future, but they got nowhere. Hardly anybody believed them. And besides, the idea of interactive computing was considered immoral. Computers were expensive, and were for serious work, not for playing around. That was the thinking of the time. Olsen and the founders of DEC figured that to make interactive computing something people would pay attention to, they had to sell it.

In 1957, a pivotal moment happened for J.C.R. Licklider, a man I introduced in Part 1, which would have a major impact on the future of computing as we have come to know it. Wes Clark, seemingly by chance, walked into Licklider’s office, and invited him to try out the TX-2. It was an event that would change Licklider’s life. He had an epiphany from using it: that humans and computers could form an interactive system together–a system made up of the person using it, and the computer–that he called “symbiosis.” He had experienced interactive computing from working on the SAGE (Semi-Automated Ground Environment) project at MIT in the 1950s (covered in Part 1 of this series), but this is the moment when he really got what its significance was, what its societal impact would be.

The difference between the TX-2 and SAGE was that you couldn’t create anything with SAGE. Information was created for the person using it from signals that were sent to the system from radar stations and weapons systems. The person could only respond to the information they were given, and send out commands, which were then acted upon by other people or other systems. It was a communication/decision system. With the TX-2, the system could respond to the person using it, and allow them to build on those responses to create something more than where the person and the computer had started. This is elementary to many computer users now, but back then it was very new, and a revelation. It’s a foundational concept of interactive computing.

Licklider left his work in psychology, and MIT, to work at a consulting firm called Bolt Beranek and Newman (BBN) to pursue his new dream of realizing human-computer symbiosis with interactive computing. He wrote his seminal work, “Man-Computer Symbiosis,” in 1960, while working at BBN. DEC was an important part of helping Licklider understand his “symbiosis” concept. The first computer they produced was a commercialized version of the TX-0, called the PDP-1 (Programmed Data Processor). BBN bought the very first PDP-1 DEC produced, for Licklider to use in his research. With the right components, it was able to function as an interactive computer, and it was a lot less expensive than the mainframes that dominated the industry.

John McCarthy, from History of Computers

Another person comes into the story at this point: a mathematician at MIT named John McCarthy, who had developed a new field of computing studies there, which he called “Artificial Intelligence.” He had developed a theoretical computing language, or notation, that he called Lisp (for “List Processing language”), which he wanted to use as an environment in which he and others could explore ideas in this new area of study. He thought that ideas would need to be tried in an experimental setting, and that researchers would need to be allowed to quickly revise their ideas and try again. He needed an interactive computer.

A typical run-of-the-mill computer in the 1950s was not interactive at all. The Whirlwind and SAGE systems I talked about in Part 1 were the exceptions, not the rule. First off, there were very few computers around. They were designed to be used by an administrator, who would receive batches of punched cards from people, which contained program code on them. They would be fed through the computer one at a time in batches. The data that was to be processed was often also stored on punch cards, or magnetic tape. The government was heavily invested in systems like this. The IRS and the Census Bureau used this type of system to handle their considerable record keeping, and computation needs. Any business which owned a computer at the time had a system like this.

Each person who submitted a card batch would receive a printout of what their program did hours, or even days, later. That was the only feedback they’d get. There were no keyboards, no mice, no input devices of any kind that an ordinary person could use, and no screens to look at. Typical computation was like a combine that’s used on the farm. Raw material (programs and data) would go in, and grain (processed data) would come out the other end. This was called batch processing. McCarthy said this would not do. Researchers couldn’t be expected to wait hours or days for results from a single run of a Lisp program.

McCarthy developed a concept called “time sharing” in 1957. It was a strategy of taking a computer that was designed for batch processing and adapting it to have several ordinary people use it. The system would use the pauses that occurred within itself, where it was just waiting for something to happen, as opportunities to grant computer time to any one of the people using it, in a sort of round-robin fashion. This switching would typically happen fast enough that each person using it would notice only little pauses here and there; otherwise operation would appear continuous to them. McCarthy figured that this way, computer time could be used more efficiently, and he and his colleagues could do the kind of work they wanted to do, killing two birds with one stone.

The way time sharing worked was several people would sign on to a single computer using teletype terminals, and use their account on the computer to create files on a shared system. Each person would issue commands to the system through their terminal to tell it what they wanted it to do. They could run their own programs on the system as if the machine was their own, shall we say, “personal computer” (though the term didn’t exist at the time), since they wouldn’t need to think about what anyone else on the system was doing. The equivalent today would be like working on a Unix or Linux system from what’s called a “dumb terminal,” with several people logged in to it at the same time. The way in which our computers now can have several people using them at once, and doing a bunch of things seemingly simultaneously, has its roots in this time sharing strategy.
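
To make the round-robin idea a little more concrete, here is a minimal sketch, in modern Python purely for illustration (the historical systems were, of course, written in assembly language for the hardware of the day). It models jobs for several logged-in people and hands each one a short time slice in turn. All of the names here (Job, round_robin, the quantum size) are my own inventions for the example, not anything from CTSS or its contemporaries.

```python
from collections import deque

class Job:
    """One logged-in user's program on the shared machine."""
    def __init__(self, user, work_units):
        self.user = user
        self.remaining = work_units   # CPU work the job still needs

def round_robin(jobs, quantum=2):
    """Cycle through the jobs, giving each one a short time slice in turn.
    A real time-sharing monitor would also park jobs that are waiting on a
    slow teletype, and use those pauses to serve everyone else."""
    queue = deque(jobs)
    while queue:
        job = queue.popleft()
        used = min(quantum, job.remaining)
        job.remaining -= used
        print(f"{job.user}: ran {used} units, {job.remaining} to go")
        if job.remaining > 0:
            queue.append(job)   # not finished yet: back of the line

round_robin([Job("alice", 5), Job("bob", 3), Job("carol", 4)])
```

The essential observation McCarthy made is hiding in that loop: the machine spends most of its time waiting on slow human beings at teletypes, so cycling among them costs each person very little while keeping the expensive processor busy.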

McCarthy and a team he had assembled created a jury-rigged prototype of a time-sharing system out of MIT’s IBM 704 mainframe. They showed that it could work, in a demonstration to some top brass at MIT in 1961. (source: “The MAC System, A Progress Report,” by Robert Fano (PDF)) This was the first public demonstration of a working Lisp system as well.

Fernando Corbató (also known as “Corby”), who was then the deputy director of MIT’s Computation Center, undertook a project to build on McCarthy’s work. This project was funded initially by the Navy. He managed the retrofitting of the university’s then-new IBM 709 computer to create the Compatible Time-Sharing System (CTSS) in 1961. It was named “Compatible” for its compatibility with the mainframe’s existing operating system. This would come to be a very influential system in the years to come.

McCarthy set out a vision at the MIT centennial lecture series, also in 1961, that a computing utility industry would develop in the future that would allow public access to computer resources, and it would be based on his time sharing idea.

IBM was asked if it had any interest in time sharing for the masses, because it was the one company that had the ability to build computers powerful enough at the time to do it. They were unwilling to invest in it, calling the idea impractical. Time sharing wouldn’t have to wait long, however. Other researchers started work on their own time-sharing projects at places such as Carnegie Tech (which would become Carnegie-Mellon University), Dartmouth, and RAND Corp.

The “Big Mo” in time sharing, and in many other “big bang” concepts, came when ARPA invited Licklider to lead their efforts in computer research.

ARPA

Republican President Dwight Eisenhower created ARPA, the Advanced Research Projects Agency, under the Department of Defense, in 1958, in response to the Soviet Sputnik launch. I covered some background on the significance of Sputnik in Part 1. The idea was that the Soviets had surprised us technologically, and it was ARPA’s job to make sure we were not surprised again. A lot of ARPA’s initial work was on research for spacecraft, but this was moved to the National Aeronautics and Space Administration (NASA) and the National Reconnaissance Office (NRO) (source: “The DARPA video”). NASA was created the same year as ARPA. ARPA was left with the projects nobody else wanted, and it was in search of a mission.

Jack Ruina was ARPA’s third director, appointed in 1961. He set the pace for the future of the agency, encouraging the people under him to seek out innovative, cutting-edge scientific research projects that needed funding, whether they applied to military goals or not, and whether they were in government labs, university labs, or in private industry. He knew that the people in ARPA needed to understand the science and the technology thoroughly. He wanted them to seek out people with ideas that were cutting edge, to fund them generously, and to take risks with the funding. He made it clear that researchers who received ARPA funding would need to have the freedom to pursue their own goals, without micromanagement. The overall goal was to “assault the technological frontiers.”

By my reading of the history, ARPA was forced into researching computing by necessity, in 1961. The way Waldrop characterized it, they “backed into it.” Old but expensive leftover hardware from the SAGE project needed to be used. It was considered too valuable to junk. There was also the looming problem of command and control of military operations in the nuclear age, where military decisions and actions needed to happen quickly, and be based on reliable information. It had already been established via projects like SAGE that computers were the best way anyone could think of to accomplish these goals. So ARPA would take on the task of computer research.

J.C.R. “Lick” Licklider, from Wikipedia.org

J.C.R. Licklider (he preferred to be called “Lick”) was recruited by Ruina to be the director of research in developing command and control systems for the military. Ruina had known of Licklider through his work on SAGE, his career as a researcher in psychology, and the consulting he did on various military projects. It helped a lot that Lick was such an affable fellow. He made a lot of friends in the military ranks. They also needed a director of behavioral science research, which he was uniquely qualified to handle. Licklider was hesitant to accept this position, because he enjoyed his work at BBN, but he became convinced that he could pursue his vision for human-computer symbiosis more comprehensively at ARPA, since they had no problem with him having a dual mission of developing technology for military and civilian use, and they could give him much larger budgets. He started work at ARPA in 1962, in an office of the Pentagon that would come to be named the Information Processing Techniques Office (IPTO).

True to the military side of his mission, Lick created plans to computerize the military’s data gathering, and information storage infrastructure, including digital communication plans for information sharing. He would gradually come to realize the urgency of helping the military modernize. He became aware of some harrowing “near misses” our military experienced, where our nuclear forces were put on high alert, and were nearly ready to launch, all on a misleading signal.

I’ll be focusing on the civilian side of Lick’s research, though all of the research that was funded through the IPTO was always intended to have military applications.

Lick had met John McCarthy while at BBN, and was excited about his idea of time sharing. He wanted to use his position at ARPA to create a movement around this idea, and other innovative computer research. He began by working his existing contacts. He gathered together a prestigious group of computing figures (Marvin Minsky, John McCarthy, Alan Perlis, Ben Gurley, and Fernando Corbató) to convince the engineers at System Development Corporation (SDC) to start developing time-sharing technology. It was a hard sell at first, but they came around once these heavy hitters showed how the technology would really work.

He solicited project proposals from all quarters: in government, at universities, and at private companies. He particularly targeted individuals who he thought had the right stuff to do innovative computer research, and sent out funding for various research projects that he thought had merit.

In 1962 he sent $300,000 (more than $2 million in today’s money) to Allen Newell, Herbert Simon, and Alan Perlis at Carnegie Tech to fund whatever they wanted to do. He knew them, and the kinds of minds and motivations they had. In the years that followed, ARPA would send $1.3 million (more than $9 million in today’s money) per year to this team at Carnegie Tech, because they were recognized for having an expansive view of computer science: “The study of all phenomena surrounding computers,” including the impact of computing on human life in general. Lick had confidence that whatever knowledge, program, or technology they came up with would be great.

He was very interested in what RAND Corp. was doing. In 1962 they had developed the first computer with a “pen” (stylus/pad) interface, which was linked to a visual display. This jibed very much with what Lick envisioned as “symbiosis.”

John McCarthy left MIT for Stanford in 1962, where he created their Artificial Intelligence Laboratory. Lick funded McCarthy’s work at Stanford for the rest of his time at the IPTO.

Lick began funding an obscure researcher named Doug Engelbart, working at the Stanford Research Institute (SRI), in 1962. Engelbart had what was, at the time, a radical idea: to create an interactive computer system with graphical video displays that groups could use to deal with complex problems together.

In 1963 Lick funded both Project MAC, a time-sharing project at MIT overseen by Bob Fano, and Project Genie, a time-sharing project at Berkeley overseen by Harry Huskey. Fano started developing a curriculum for computer science in the Electrical Engineering Department at MIT at this time as well. Fano, by his own admission, had almost no technical knowledge about computers!

Under Project MAC, ARPA funded a project to create a new time-sharing system called Multics in 1963. This sub-project was overseen by Fernando Corbató.

In 1963, Lick envisioned a network of time-sharing systems, which he dubbed the “Intergalactic Network” to his working groups. The idea was to link computers together in a network, and he wanted people to think big about this; not just a few hundred computers together, but billions!

It was all coming together. Lick realized that he could not juggle all of these projects and continue to work at BBN, so he resigned from BBN in 1963.

Project MAC

Robert Fano came up with the name “Project MAC.” It stood for “Multi-Access Computer,” though people came up with other meanings for it as well, like “Machine-Aided Cognition.” Some artificial intelligence work at MIT, led by Marvin Minsky, was funded through it.

The working group used the existing CTSS project as its starting point, and the goal was to improve on it. CTSS was moved to an IBM 7094 mainframe in 1964, and the first information utility was made available to people at MIT around the clock. The 7094 ran at about the speed of an IBM PC (about 5 MHz). So if more than a dozen people were logged into it at the same time, it ran very slowly. Eventually they added the ability for members of the MIT Computation Center staff to connect to the time-sharing system using remote teletype terminals from their homes.

In the following video, Bob Fano lays out his vision for the research that was taking place with computers at MIT, and gives a demonstration of their time-sharing system. It’s thought that this film was made in 1964.

CTSS became a knowledge repository, as people were able to share information and programs with each other on it. If enough people found some programs useful, they were added to the standard library of programs in the system, so that everyone could use them. This method of growing the features of the system by adding software to it is seen in all modern systems, particularly Unix and Linux. It created the first software ecosystem, where software could use other software. This is also seen in the more commonly used commercial systems, like on Windows PCs, and Apple Macs. As a couple examples, Tom Van Vleck created a MAIL command for the system that allowed people to send text messages to each other. It was probably the first implementation of e-mail. Allan Scher created ARCHIVE, which allowed people to take a bunch of files and compress and package them into one file.
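
As a rough illustration of how a command like MAIL could work on a shared machine, here is a toy sketch in Python, assuming an invented layout in which each user has a mailbox file. Sending mail is just appending a message to the recipient’s file, which they read the next time they sit down at a terminal. This is my own guess at the flavor of it, not the actual CTSS code.

```python
import time
from pathlib import Path

MAIL_ROOT = Path("mailboxes")   # invented layout: one mailbox file per user

def send_mail(sender, recipient, text):
    """Append a message to the recipient's mailbox file."""
    MAIL_ROOT.mkdir(exist_ok=True)
    stamp = time.strftime("%Y-%m-%d %H:%M")
    with open(MAIL_ROOT / f"{recipient}.mbox", "a") as box:
        box.write(f"From {sender} on {stamp}:\n{text}\n---\n")

def read_mail(user):
    """Print the user's mailbox, like a 'you have mail' check at login."""
    box = MAIL_ROOT / f"{user}.mbox"
    print(box.read_text() if box.exists() else "No mail.")

send_mail("alice", "bob", "The 7094 is free tonight if you want to run your job.")
read_mail("bob")
```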

This was not an attempt to create the internet, but they ended up creating something like it on a smaller scale, under a different configuration. Machines were not networked together, but people were, through the workings of a single computer system.

Things that we take for granted today were discovered as the project went along. For example, they used a hierarchical file system that enabled people to organize their files into directories (think folders) and subdirectories (subfolders). The system didn’t have a system clock at first. They attached an electronic clock to the computer’s printer port, and software was added to take advantage of it. Now the system could time-stamp files, and processes could be run at specific times.

It was noticed that people came to depend on the system. They became agitated and irate when it went down. This was something that the researchers were looking for. They meant for it to become a computing utility, and people were coming to depend on it as if it was one. They took this as a positive sign that they were on the right track. It also motivated them to make the system as reliable as they could make it.

The CTSS team developed fault-tolerant features for it, like secure memory, so that programs that crashed didn’t interfere with other running programs, or the rest of the system. They developed an automated backup system, so that when a disk failed (all information was stored on disk), people’s work wasn’t lost. A backup to tape of newly created files ran every half-hour. It was also the first system to enforce passwords for access. This created the first instance of user revolt. Already the idea of “information wants to be free” had developed among people at MIT, and they resented the idea that access to the system could be restricted. Hackers committed various acts of “civil disobedience” (pranks in the system) to protest this, but the Project MAC staff insisted password protection was going to stay!
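
The half-hourly backup of newly created files is easy to picture with a sketch like this one, again in Python with invented names, and not how CTSS actually implemented it: remember when the last backup ran, and copy to “tape” (here just another directory) anything modified since then.

```python
import shutil
import time
from pathlib import Path

def incremental_backup(source_dir, tape_dir, last_run):
    """Copy every file changed since last_run (a Unix timestamp) from
    source_dir to tape_dir, preserving the directory layout, and return
    a new checkpoint time for the next pass."""
    src, tape = Path(source_dir), Path(tape_dir)
    tape.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if f.is_file() and f.stat().st_mtime > last_run:
            dest = tape / f.relative_to(src)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)
    return time.time()

# Run it every half hour, in the spirit of the CTSS tape backup:
# last = 0.0
# while True:
#     last = incremental_backup("users", "tape", last)
#     time.sleep(30 * 60)
```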

Time sharing comes of age

The vision of Project MAC was to promote “utility computing,” where private companies would rent computer time to businesses and ordinary people, the way a utility company charges for gas and electricity, so that they could experience interactive computing and make some use of it. This business model is used today in systems that make applications available on the web on a subscription basis, except now the concept is not so much renting computer time as renting application time and data storage space–the whole idea of SaaS (Software as a Service).

Unlike today, the sense of the time was that computers needed to be large, and computing power needed to be centralized, because it was so expensive. It was thought that this was the only way to get this power out to as many people as possible.

In 1963 the leaders of Project MAC wanted to create a model system. Most of the time-sharing systems that had been successfully created up to this point began as mainframes that were only meant for batch processing, and had been modified to do time sharing. They wanted a system that was designed for time sharing from the ground up. It was also decided that an operating system was needed which would be designed for a large scale time-sharing service. CTSS was good as an internal, experimental system, but it was still unstable, because it was an adaptation of a system that was not designed for time sharing. No time was taken to make it a clean design. It had lots of things added onto it as the team learned more about what features it needed. It was a prototype. It wasn’t good enough for the vision Project MAC had in mind of utility computing for the masses.

Project MAC partnered with General Electric to develop the hardware, and AT&T Bell Labs to develop the operating system. Fernando Corbató managed the development team at MIT for the new system they called “Multics,” for Multiplexed Information and Computing Service. The plan was to have it run on a multi-processor machine that had built-in redundancy, so that if one processor burned out, it would be able to continue with the good processors it had left, without missing a beat. Like CTSS, it would have sophisticated memory management so that processes and user data wouldn’t clobber each other while people were using it at the same time. It would also have a hierarchical file system. System security was a top priority. It was noted when I took computer science in college that Multics was the most secure operating system that had ever been created. It may still carry that distinction.

Licklider left ARPA in 1964 to work at IBM. According to Waldrop, it seemed he had a mission to “bring the gospel” of interactive computing to the “high temple of batch processing.” Ivan Sutherland replaced Lick as IPTO director.

The research at Project MAC was having payoffs out in the marketplace. The publicity from the project, and the deal with GE for Multics, inspired companies to develop their own commercial time-sharing systems. The first was from DEC, the PDP-6, which came out in 1965. General Electric and IBM came out with time-sharing service offerings, which usually provided a single programming language environment as their sole service. It was rather like the experience of using the early microcomputers in the late 1970s, come to think of it. When you turned them on, up would come the Basic programming language, and all you’d have was a prompt of some sort where you could type commands to make something happen, or write a program for the computer to run.

By 1968 there were 20 separate firms offering time-sharing services. GE’s service was operating in 20 cities. The University Computing Company was operating a time-sharing service in 30 states, and a dozen countries.

In 1968, Bill Gates, who was in the 8th grade, got his first chance to use the Basic programming language on a GE Mark II time-sharing system, using a teletype, through the Lakeside School in Seattle, Washington. Basic was invented at Dartmouth as part of its own time-sharing system. The main reason I mention this is that Basic would play a major role in the formation of Microsoft in the 1970s. The first product Gates and Paul Allen created that led to the founding of the company was a version of Basic for the Altair computer. Microsoft Basic went on to become the company’s first big-selling software product, before IBM came calling.

The first release of Multics came out in 1969. AT&T left Project MAC the same year. GE sold its computer division to Honeywell in 1970. Along with GE’s hardware came the Multics software. Other time-sharing systems outsold Multics, and it never caught up as a commercial product. This was mainly because it was 3 years late. Multics had been beset with delays. The complexity of it overwhelmed the software development team at MIT. ARPA considered ending the project. An evaluation team was created to look at its viability. They ultimately decided it should continue, because of the new technologies that were being developed out of the project, and the knowledge that was being generated as a result of working on it. ARPA continued funding development on Multics into the early 1970s.

Honeywell didn’t want to sell Multics to customers, and only did so after considerable prompting by Larry Roberts, who was IPTO director at this time. A major barrier for it was that it was basically off-loaded onto Honeywell. Most of the people in the company, particularly its sales staff, didn’t know what it was! This was a chronic problem for the life of the product. Honeywell sold 77 Multics systems, in all, to customers over the next 20+ years. They stopped selling it in the early 1990s. Wikipedia claims the last Multics system was retired at the Canadian Department of National Defence in the year 2000.

Despite Multics being a dud in the commercial market, much was learned from it. It was the inspiration for Unix (which was originally named “Unics”, a play on “Multics”). Ken Thompson and Dennis Ritchie, the creators of Unix, had worked on Multics at AT&T Bell Labs. They missed the kind of interaction they were able to have with the Multics system, though they disliked the operating system’s bigness, and bureaucratic restrictions, which were key to its security. They brought into Unix what they thought were the best ideas from Multics and Project Genie (which I describe below).

I once had a conversation with a computer science professor while I was in college regarding the design of Unix. From what he described, and what I know of the design of early time-sharing systems, my sense is the early design of Unix was not that different from a time-sharing system. The main difference was that it shared computer time between multiple programs running at the same time, as well as between people using the system. This scheme of operation was called “multitasking.”

Unix would spread into the academic community in the years that followed. The main reason was that under an antitrust agreement with the government, AT&T was not allowed to get into other businesses besides telecommunications at the time. So they retained ownership of the rights to Unix, and the trademark to its name, but gave away copies of it to customers under a free license. They made the binary distribution, as well as all source code, available to those who wanted it. This made it very attractive to universities. They liked getting the source code, because it was something that computer science students could investigate and use. It contributed to the academic environment.

Unix continues to be used today in IT environments, and in Apple products. It inspired the development of Linux, beginning in the early 1990s.

The Multics project taught the Project MAC team members valuable lessons in software engineering. These people went on to work at Digital Equipment Corp. (which was bought by Compaq in 1998, which was bought by Hewlett-Packard in 2002), Data General (which was bought by EMC in 1999), Prime (which was bought by Parametric Technology Corp. in 1998), and Apollo (which was bought by HP in 1989). (source: Wikipedia) A few of these companies created their own “lite” versions of Multics for commercial sale, under other product names.

When microcomputers/personal computers started being sold commercially in the late 1970s, the idea of the information utility was pared back as a widespread system strategy. It’s tempting to think that it died out, because computers got smaller and faster, and the idea of a central computing utility became old-hat. That didn’t really happen, though. It continued on in services like CompuServe, BIX, GEnie (offered by GE, no relation to Project Genie), and America Online in the late ’80s and into the ’90s. The idea of large, centralized computing services has recently had a resurgence in such areas as software-as-a-service, and cloud computing. Individually owned computers, in all their forms, are mainly seen now as portals, a way to consume a service offered by much larger, centralized computer systems.

Project Genie

The information on this project comes from the Wikipedia page on it, and its list of references. The section on Evans and Sutherland is sourced from Waldrop’s book and Wikipedia. I do not have much of the details of this project, nor of what Dave Evans and Ivan Sutherland did at the University of Utah. This is best read as a couple anecdotes about the kind of positive influence that ARPA had on institutions and students of computer science at the time.

Project Genie began in 1963 at Berkeley. Harry Huskey, who had been in charge of it, left the project in 1964, leaving Ed Feigenbaum, an associate professor with a background in electrical engineering, and Dave Evans, a junior professor with a background in physics and electrical engineering, to manage it. Both were dubious about their chances of accomplishing much of anything. I think it’s safe to say that what actually happened exceeded their expectations.

Feigenbaum wouldn’t stay with the project for long, though they had their system almost operational by the time he left. He joined John McCarthy’s Artificial Intelligence Laboratory at Stanford in 1964.

A team of graduate students who joined this project, which included Peter Deutsch, Butler Lampson, Chuck Thacker, and Ken Thompson, modified a Scientific Data Systems 930 minicomputer to do time sharing as their post-graduate project. This project was perhaps unique in its goal, since other time-sharing efforts had been done on mainframes, which were thought to be the only machines with enough computing power to share. Licklider wanted to fund time-sharing projects of various scales, presumably to see how each would work out.

Over the next 3 years the team wrote their own operating system for the 930, which came to be called the Berkeley Time Sharing System. The main technological advancements I can pick out of it were paged virtual memory, a particular design for multitasking (the idea of sharing computer time between programs loaded into memory, as well as between people), and state-restoring crash recovery.

Paged virtual memory is a scheme where the system can “swap” information in memory out to hard disk in standard-sized segments, and bring it back in again, at the system’s discretion. This is done to allow the system to operate on a collection of programs and data that, put all together, are greater in size than the computer’s memory. This scheme is used on all modern personal computer systems, and on Unix and Linux.
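
Here is a minimal sketch of the paging idea, with invented names and sizes: pretend physical memory holds only a few fixed-size page frames, keep the resident pages in a table, and when a program touches a page that isn’t resident, evict the least recently used page to the backing store and bring the needed one in. Real hardware does this with page tables and a memory management unit; this Python toy only shows the bookkeeping.

```python
from collections import OrderedDict

PAGE_FRAMES = 4   # pretend physical memory holds only 4 pages

class PagedMemory:
    """Toy demand-paging model: resident pages live in `frames`,
    everything else lives on the 'disk' (the backing store)."""
    def __init__(self):
        self.frames = OrderedDict()   # page number -> contents, in LRU order
        self.disk = {}                # swapped-out pages
        self.faults = 0

    def touch(self, page):
        """Access a page, swapping it in from disk if necessary."""
        if page in self.frames:
            self.frames.move_to_end(page)       # mark as most recently used
            return self.frames[page]
        self.faults += 1
        if len(self.frames) >= PAGE_FRAMES:     # memory full: evict LRU page
            victim, contents = self.frames.popitem(last=False)
            self.disk[victim] = contents
        self.frames[page] = self.disk.pop(page, f"data for page {page}")
        return self.frames[page]

mem = PagedMemory()
for p in [0, 1, 2, 3, 0, 4, 1]:    # a program touching more pages than fit
    mem.touch(p)
print(f"resident: {list(mem.frames)}, page faults: {mem.faults}")
```

In this run the program touches five distinct pages while only four fit in memory at once, and the fault counter shows how often a page had to be fetched from the backing store.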

State-restoring crash recovery is something we have yet to see on modern computer systems. It allows a system to restore itself to an operational state after a crash has occurred, picking up where it left off, without rebooting. It should be noted that this project did not invent this idea. The earliest version of it I’ve heard about is in Bob Barton’s Burroughs B5000 computer, which was created in 1961.

Not a lot has been said about this project in the literature I’ve researched, and I think it’s because the innovations that were developed were technical. As far as I can tell, no social study took place during the development of this system, like the one that took place with Project MAC at MIT, where many things about necessary time-sharing features were learned. Nevertheless, the technical talent that was developed during this project is legendary in the field of computing.

Scientific Data Systems eventually chose to market the Berkeley Time Sharing System as the SDS 940 computer, with some encouragement by Bob Taylor, who was IPTO director at the time. This system would go on to be used by Doug Engelbart for his NLS system, in the late 1960s (which I describe below).

Edit 8/5/2012: I added in “The trail to Xerox” as it provides background for the discussion of Xerox PARC in Part 3.

The trail to Xerox

Scientific Data Systems wasn’t enthusiastic about the 940, though. They considered it a one-off machine, and they had no intention of developing it further. Butler Lampson and others in Project Genie got impatient with SDS. They left Berkeley, and founded their own company, the Berkeley Computer Corporation (BCC), in 1969. Bob Taylor left ARPA the same year.

SDS was bought by Xerox in 1969, and its name was changed to Xerox Data Systems (XDS).

BCC didn’t last long. It was going broke in 1970. Taylor had founded Xerox Palo Alto Research Center (PARC) that year, and recruited a bunch of people from BCC to create their Computer Science Laboratory.

XDS lost so much money that Xerox shut it down in 1975.

Evans and Sutherland, and the University of Utah

David C. Evans

Dave Evans left Berkeley to chair a brand new computer science department at the University of Utah, his alma mater, in 1966. The department was one of ARPA/IPTO’s projects, and was fully funded by the IPTO.

Back then, the term “computer science” was new, and was considered an aspirational title for what the department was doing. It was expected that computation would become a science someday, but it was clearly acknowledged in Evans’s department that it was not a science then. I can safely say that it still isn’t, anywhere.

Ivan Sutherland, who had left ARPA to teach at Harvard, joined the computer science department at the University of Utah in 1968, serving as a professor. He came on the condition that Evans would join him in starting a new company. The new company was founded that year, called Evans & Sutherland. It was, and still is, a maker of computer graphics hardware.


Ivan Sutherland

Evans and Sutherland made a deliberate decision to stick to the philosophy of “doing one thing well,” and so the department specialized in computer graphics.

It should be noted that at the time their computer science department only took post-graduate students. There was no undergraduate computer science program. Dave Evans and Ivan Sutherland would have a who’s who of computing pass through their department. They had students participate in IPTO projects, and the IPTO fully funded many of their tuitions. Some of the most notable of their students (not all were shared between Evans and Sutherland) were Alan Kay, Nolan Bushnell, John Warnock, Edwin Catmull, Jim Clark, and Alan Ashton. (sources: Wikipedia, and utahstories.com)

I get more into the work of Alan Kay, Chuck Thacker, and Butler Lampson in Part 3.

Contributions of the students

Peter Deutsch, who had been with Project Genie at Berkeley, went on to work at Xerox PARC, and then Sun Microsystems. He is best known for his work in programming languages. In 1994 he became a Fellow of the Association for Computing Machinery (ACM). He developed Ghostscript, an open source PostScript and PDF interpreter. More recently he’s taken up music composition. (Sources: Wikipedia, and Coders at Work)

Nolan Bushnell and Ted Dabney founded the video game company Atari in 1972. They produced two arcade video games that were big hits, Pong and Breakout. Steve Jobs and Steve Wozniak briefly worked for Atari during this period, helping to develop Breakout. When Wozniak developed his first computer, built with parts that some Atari employees had “liberated” from Atari, which would become the Apple I, they showed it to Bushnell to see if he was interested in marketing it. He turned them down. The rest, as they say, is history… Bushnell created the Pizza Time (eventually named “Chuck E. Cheese”) chain of restaurants as a way of marketing Atari arcade games in 1977. Atari was sold to Warner Communications that same year. Bushnell founded Catalyst Technologies, a business incubator, in the early 1980s. Bushnell resigned from Chuck E. Cheese in 1984. (Source: Atari history timelines) More recently Bushnell has been part of a business venture called uWink that’s been trying to set up futuristic, technology-laden restaurants. In 2010, the third iteration of Atari (the most recent acquirer of the rights to the name and intellectual property of Atari is a company formerly known as Infogrames) announced that Bushnell had joined their board of directors. There are a bunch of other entrepreneurial ventures he was involved in that I will not list here. Check out his Wikipedia page for more details.

John Warnock worked for a time at Evans & Sutherland, and then at Xerox PARC, where he and a colleague, Charles Geschke, developed InterPress, a computer language to control laser printers. Warnock couldn’t convince Xerox management to market it, and so he and Geschke founded Adobe in 1982, where they marketed a newer laser printing control language called PostScript. In 1991, Warnock created a preliminary concept called “Camelot” that led to the Portable Document Format, otherwise known as “PDF.”

Edwin Catmull developed some fundamental concepts used in 3D graphics today, and went on to become Vice President of the computer graphics division at Lucasfilm in 1979. This division was eventually sold to Steve Jobs, and became Pixar, where Catmull became Chief Technical Officer. He was a key developer of Pixar’s RenderMan software, used in such films as Toy Story and Finding Nemo. He has received 4 Academy Awards for his technical work in cinematic computer graphics. He is today President of Walt Disney Animation Studios and Pixar Animation Studios.

Jim Clark worked for a time at Evans & Sutherland. He founded Silicon Graphics in 1982, Netscape in 1994, Healtheon in 1996 (which became WebMD), and myCFO in 1999 (an asset management company for Silicon Valley entrepreneurs, which is now called “Harris myCFO”). More recently, he co-produced the Academy award-winning documentary The Cove.

Alan Ashton co-founded Satellite Software International with one of his students, Bruce Bastian, in 1979. The company created the WordPerfect word processor. SSI was eventually renamed WordPerfect Corporation. Throughout the 1980s, WordPerfect was the leading word processing product sold on PCs. The company was bought by Novell in 1994. Ashton stayed on with Novell until 1996. He founded ASH Capital, a venture capital firm, in 1999.

For those who are interested, you can watch a 2004 presentation with Ivan Sutherland and his brother Bert, talking about their lives and work, here. The guest host for this presentation is Ivan’s long-time friend and business associate, Bob Sproull.

Dave Evans went on to help design the Arpanet, which I describe below.

Doug Engelbart and NLS

If, in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsive to every action you had, how much value could you derive from that?

— Doug Engelbart, introducing NLS at the Fall Joint Computer Conference in San Francisco in 1968

“Doug and staff using NLS to support 1967 meeting with sponsors — probably the first computer-supported conference. The facility was rigged for a meeting with representatives of the ARC’s research sponsors NASA, Air Force, and ARPA. A U-shaped table accommodated setup CRT displays positioned at the right height and angle. Each participant had a mouse for pointing. Engelbart could display his hypermedia agenda and briefing materials, as well as the documents in his laboratory’s knowledge base.”
— caption and photo from Doug Engelbart Institute

It’s a bit striking to me that after the interactive computing that was done in the 1950s, which I covered in Part 1, a lot of effort in the ARPA work was put into making interactive systems that operated only from a command line, rather than from a visual, graphical display. At one point Licklider asked the Project MAC team if it might be possible for everybody on the CTSS time-sharing system to have a visual display, but the answer was no. It was too expensive. In the scheme of projects that ARPA funded, NLS was in a class by itself.

In 1962, an electrical engineer named Doug Engelbart was having trouble getting his computer research project funded. The funders he approached couldn’t see the point of his idea. Most of the people he talked to were openly hostile to it. He wanted to work on interactive computing, but they’d say that computers were only meant to do accounting and the like, using batch processing. Engelbart’s vision was larger, and a little more disciplined than Licklider’s. He wanted to create a system that was specifically designed to help groups work together more effectively to solve complex problems.

Lick at ARPA/IPTO allocated funding for his project that lasted several years. Engelbart also got funding from NASA (via Bob Taylor), and the Air Force’s Rome Air Development Center. With this, he established the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI).

Engelbart envisioned word processing at a time when it had just been invented, and even then only on an experimental basis. He spent a lot of time exploring how to enhance human capability with information systems. In the mid-1960s, he envisioned the potential for online research libraries, online address books, online technical support, online discussion forums, online transcripts of discussions, etc., all cross-linked with each other. He drew inspiration from Vannevar Bush’s Atlantic Monthly article “As We May Think,” which was published in 1945 (Vannevar Bush is no relation to the Bush political family), and Ivan Sutherland’s Sketchpad project. Engelbart said that he came up with the idea of hyperlinked documents on his own, though he later recalled reading Bush’s article, which focused on a theoretical concept of a stored library of linked documents that Bush called a “Memex.”

The system that was ultimately created out of the ARC’s work was built on an SDS 940 time-sharing system, the system software for which was developed by Project Genie. Engelbart’s software ran in 192 kilobytes of memory, and 8 MB of hard disk space, if I recall correctly. He called the end result NLS, for “oN-Line System.” He demonstrated the system at the Fall Joint Computer Conference in San Francisco in 1968, with his team supporting the demo remotely from SRI in Menlo Park. It was all recorded. The full-length annotated video is here. It’s interesting, but gets very technical.

What Engelbart and his team accomplished was so far ahead of its time, there were people in the audience who didn’t think it was real, because, you see…computers couldn’t do this kind of stuff back then! It was real alright. What Engelbart had created was the world’s first computer mouse (that’s what people in the computing field give him credit for today, but there was so much more), one of the first systems to implement rudimentary windows on a screen, the first system to integrate graphics and text together on a screen, the first system to implement hyperlinking (these were like links on a web page, though much more sophisticated), and the first collaborative computing platform that was networked, all in one system. He demonstrated live for the audience how he and a worker in another office, in a separate building, could collaboratively work on the same system, at the same time, on a shared workspace that appeared on both of their screens. Each person was able to use their own mouse and keyboard to issue commands to the system, and edit documents. They could see each other on their terminals, since there were cameras set up in both locations to capture their images. They each had headsets with microphones, and a speaker system set up, so that they could hear each other.

The whole thing was a prototype, and a lot of infrastructure had to be set up just for the demo, to make it all work. Not all of it was digital. In the collaborative computing part of the demo, the video display of the remote worker, and the audio system, were analog. Everything else was digital. Even so, it was a revolutionary event in the history of computers. This was 16 years before the first Apple Macintosh, which couldn’t do most of this! NLS cost a lot more, too, more than a million dollars (more than $6 million in today’s money)! There is still technology Engelbart developed (which I haven’t discussed here) that has not made it into computer products yet, nor the typical web environment most of us use.

After the demo, Bob Taylor, who was the director at IPTO at this time, agreed to continue funding Engelbart’s work. However, other government priorities reduced the funding available to him, mainly due to the cost of the Vietnam War.

Considering the magnitude of this achievement, one would think there would be nothing but better and better things coming out of this project, but that was not to be. A lot of Engelbart’s technical talent left his project to work at Xerox PARC, pursuing the idea of personal computing, in the 1970s. Innovation with NLS slowed, and the system became more bloated and unwieldy with each update. ARPA could see this directly, since they ended up using NLS to do their own work. The IPTO decided to cut off Engelbart’s funding in 1975, since their goals had diverged.

NLS was transferred out of SRI to a company called Tymshare in 1978. They attempted to make it into a commercial product called "Augment." Tymshare was bought by McDonnell Douglas in 1984. They authorized the transfer of Augment to the Bootstrap Alliance, founded by Doug Engelbart, which is now known as the Doug Engelbart Institute. NLS/Augment was ported to a modern computer system, thanks to funding from ARPA in the mid-1990s, and continues to be used at the Doug Engelbart Institute today! (source: article on NLS/Augment at The Doug Engelbart Institute)

His efforts to get his ideas out into the world would be somewhat vindicated in the research and development that was carried out by Xerox PARC, and ultimately in the computers we use today, but these are only a pale shadow of what he had in mind. Still, we owe Engelbart a great debt of gratitude. It is not a stretch to say that the interactive computers we've been using for the past 26 years would've come into existence quite a bit later in our history had it not been for his work. The Apple Lisa and the Apple Macintosh wouldn't have come out in 1983 and 1984, respectively. Something like Microsoft Windows would've been a more recent invention. It's possible most of us would still be using command-line interfaces today, on a text display rather than a graphical one, were it not for him. It also should be noted that the IPTO's funding (by Licklider) of Engelbart's ARC project at SRI was crucial to NLS happening at all. It was this funding that lent the project credibility, so that Engelbart could pursue funding from the other sources I mentioned. Were it not for Licklider, Engelbart's dream would have remained just a dream.

To really drive home the significance of Engelbart’s accomplishment, I’m including this video, created by Smeagol Studios, which demonstrates some of the technology Engelbart invented, but was eventually brought to us by others.

The Arpanet

The other projects that ARPA funded generated ideas that we can see reflected in the technology we use today; they were inspirations to others. The Arpanet is the one ARPA project that directly resulted in a technology we use now, in the sense that you can trace the internet we're using, both technically and managerially, directly back to ARPA initiatives.

The basis for the Arpanet, and later the internet, is a concept called packet switching, which divides information up into standard-sized pieces, called "packets," each with a "from" and "to" address attached to it, so that they can be routed on a network. It turned out ARPA did not invent this concept. It had gone through a couple of false starts before ARPA implemented it.
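To make the concept concrete, here is a tiny sketch in Python (my own illustration, written for this post; it is not based on any actual Arpanet software, and the host names are made up) of what packet switching boils down to: chop a message into fixed-size pieces, stamp each piece with "from" and "to" addresses and a sequence number, send the pieces independently, and let the receiver put them back in order.

# A toy illustration of packet switching (not actual Arpanet code).
# Each packet carries "from"/"to" addresses and a sequence number,
# so the pieces can travel independently and be reassembled on arrival.

PACKET_SIZE = 8  # characters of payload per packet (tiny, for illustration)

def packetize(src, dst, message):
    """Split a message into addressed, numbered packets."""
    chunks = [message[i:i + PACKET_SIZE]
              for i in range(0, len(message), PACKET_SIZE)]
    return [{"from": src, "to": dst, "seq": n, "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Rebuild the message, regardless of the order the packets arrived in."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

if __name__ == "__main__":
    packets = packetize("host-A", "host-B", "A MESSAGE SPLIT ACROSS PACKETS")
    received = list(reversed(packets))  # simulate out-of-order arrival
    print(reassemble(received))         # prints the original message

The point of the exercise is only to show why the addressing matters: because every piece knows where it is going, no dedicated circuit has to be held open between the two endpoints, which is exactly what made the idea so foreign to the telephone engineers of the day.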

The earliest known version of this idea was created by Paul Baran (pronounced “Barren”) at RAND Corporation in 1964. He had done an amazing amount of work designing a packet switching network for a digital phone system (think voice over IP, as an analogy!) for the U.S. government. The big difference between his idea and what we eventually got was that he designed his system only for transmitting digitized sound information over a network, not generic computer data. The amount of design documentation he produced filled 11 volumes. This is where the idea came from to design a network that could survive a nuclear attack. Baran’s goals were dovish. He wanted to avoid all-out nuclear war with the Soviets. His rationale was that if the government’s conventional communication system was wiped out in a first strike, our government would have no choice but to launch all its nuclear missiles, because it would be deaf and blind, unable to communicate effectively with its forces, or with the Soviet government. In his scheme, the communication network would route around the damage, and continue to allow the leaders of both countries to communicate, hopefully allowing them to decelerate the conflict. It didn’t get anywhere. Baran tried working through the Defense Communications Agency to get it built, but the engineers there, and at AT&T (Baran’s system was designed to work over the existing long-distance phone network), didn’t understand the concept. Baran even proposed that the U.S. government offer his system to the Soviets, so that if the roles were reversed, they could still communicate with the U.S. That didn’t convince anybody to go ahead with it, either. It was shelved and forgotten, until it was discovered by Donald Davies, a researcher at the National Physical Laboratory (NPL) in Teddington, UK.

Davies’s idea to develop a packet switching network (Davies is the one who invented the term) was inspired by a conversation he had with Larry Roberts and Licklider at a conference on time sharing Davies hosted in 1965. Davies found Baran’s project later while researching how to build his own system. Davies had come up with the concept of using separate message processing computers to route traffic on the network, what we call “routers” today. He finished his design in 1966. He got one node going, where a bunch of terminals communicated with each other through a single message processor, demonstrating that the idea could work. Like with Baran’s system, his idea was to use the UK’s phone network. He ran into a similar problem: The British Postal Service, which at the time was in charge of the country’s phone network, didn’t see the point in it, didn’t want to fund it, and so it died. Just imagine if things had turned out differently. The Brits might’ve taken all the credit!

Bob Taylor, from Stanford University

ARPA eventually got going with its packet switching network, but it took some searching, and leadership, on the part of a psychologist named Bob Taylor. Taylor moved from NASA to the IPTO in 1965. He served under Ivan Sutherland, who had replaced Licklider as IPTO director. Taylor and Licklider had worked in the same area of psychological research, called psychoacoustics, in the 1950s, and they were both passionate about developing interactive computing.

One of the first things Taylor noticed was that the IPTO had several teletype terminals set up to communicate with their different working groups, because each working group’s computer system had its own way of communicating. He thought this was something that needed to be remedied. The thought occurred to him that there should be just one network to connect these groups.

Wes Clark once again enters the story, and introduces a new idea into this mix. I’ll give a little background.

In 1961, at MIT, Clark had come up with what was then a "backwater" concept: individual computing, the idea that computers should be small, and something that individuals could afford and own. Clark had developed what was considered a "small" computer at the time. It had a portion that fit on a desk (though it took up almost the entire desk), which contained a bunch of components fitted together. It had a keyboard and a 6″ screen. The part on the desk was hooked up to a single cabinet of electronics that stood on its own. It might've been the world's first minicomputer. Clark dubbed his creation the LINC, after the place where he invented it, Lincoln Lab, which is part of MIT.

It was a highly technical box. It wasn’t something that most people would want to sit down and use for the stuff we use computers to do today, but it was the beginning of a concept that would come to define the first 25 years of personal computing. The LINC came to be manufactured and sold, and was used in scientific laboratories.

Clark obtained funding from the IPTO in 1965 to continue his work in developing his individual computing technology, at Washington University in St. Louis. From what I've read of his history, Clark did not end up playing a role in the development of the personal computer as most people would come to know it, though he nevertheless contributed to the vision of the networked world we have today, in more ways than one.

Sutherland and Taylor met with Clark in 1965. Taylor tried out the LINC, typing out characters and seeing them appear on its small screen. As he used it, he and Clark talked about networking. The idea of the “Intergalactic Network” that Lick had talked about a couple years earlier came into Taylor’s mind. Clark regarded time sharing as a mistake. To him, the future was in networking small machines like his. Taylor could see the validity of Clark’s argument. Clark hadn’t incorporated the idea of networking into the LINC, but he thought that was something that would come along later. Taylor observed that what made time sharing a valid concept was the community it created. He asked himself questions like, “Why not expand that community into a cross-country network,” and “Why couldn’t the network serve the same function as the time-sharing system in forming communities?” Clark agreed with this idea, and urged Taylor to go for it. Clark didn’t like the idea of linking time-sharing systems together, but link thousands, or millions of individually owned computers together? Yes!

Licklider encouraged Taylor to pursue the idea of a network as well. Lick said the only reason he didn’t try to implement his idea while at ARPA was the technology wasn’t ready for it. They went on to discuss ideas about how the network might be used. They thought about online collaboration, digital libraries, electronic commerce, etc. Electronic commerce is now well established, but the other areas they talked about are still being developed on the internet today.

When Taylor got back together with the IPTO’s working groups, he discussed the idea of creating a nationwide network, with them involved. Most of them didn’t like the idea at all. The only one who was excited about it was Doug Engelbart. It was one thing to talk about a network, as they had in the past. It was another to actually want to do it, primarily because most of them didn’t want to share computer time with outsiders. They wanted to keep it within their own communities, because they were having enough trouble keeping computer performance up with the users they had. Plus, they didn’t want to contribute funding to the project. All projects before this had come from the bottom up. Researchers put forward proposals, and ARPA would pick which to fund. This was a project coming from ARPA, top-down.

Taylor talked with Charles Herzfeld, who was the director of ARPA at the time, about the idea of building the network. He liked it, and they agreed on an initial funding amount of $1 million (about $7 million in today’s money). This allayed some fears so that investigations for how to build the network could move forward. Taylor assured the working groups that if they needed additional computing power to take on the load of being a part of the network, ARPA would find a way to provide it.

Ivan Sutherland left ARPA to become a professor of electrical engineering at Harvard in 1966, and Taylor became IPTO director. Taylor would come to play a role in the development of the Arpanet, the ancestor to the internet, and would later lead the effort to develop modern personal computers at Xerox PARC.

Larry Roberts, from his home page at http://www.packet.cc/

Larry Roberts, an electrical engineer from MIT, was brought into the Arpanet project after some arm-twisting by Taylor and Herzfeld. He was made a chief scientist at ARPA, to lead the technical team that would design and build the network. Roberts, Len Kleinrock, and Dave Evans developed a preliminary design for the Arpanet, independently of Donald Davies and Paul Baran, in 1967. Roberts noticed that, in addition to the systems associated with the IPTO, all of the time-sharing systems that were sprouting up around the country had no way to communicate with each other. People who used one time-sharing system could communicate with each other, but they couldn't easily share information, programs, and data with others who used a different system. Roberts wanted to help them all communicate with each other, furthering the goals of Project MAC.

Like the other ideas that came before it, the Arpanet was designed to work over AT&T’s long-distance phone network. The idea was to open dedicated phone connections and never hang up, so that data could be sent as soon as possible at all times. Roberts ran into hostile resistance at the Defense Communications Agency over this idea from some of the same people who told Baran his idea wouldn’t work. The difference this time was the man who wanted to implement it worked for the Pentagon, and ARPA was fully behind it. Part of ARPA’s mission was to cut through the bureaucratic red tape to get things done, and that’s what happened.

The original idea Roberts had was to have each time-sharing system on the network devote time to routing data packets on it. One of the objections raised by the ARPA working groups was that most or all of their computer time would be taken up routing packets. Unbeknownst to them at the time, this problem had already been solved by Davies over in England. Roberts changed his mind about this arrangement after he had a conversation with Wes Clark. Clark suggested having dedicated "interchange" computers, so that the computers used by people wouldn't have to handle routing traffic (again, routers). Clark got this idea by thinking about the network as a highway system. He observed that highways had specialized ramps, interchanges, for getting from one highway to another where they intersect. Thus were born the Arpanet's Interface Message Processors (IMPs).
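Continuing the toy example from earlier, here is a minimal sketch (again my own illustration, with a made-up three-node network; the real IMP software was far more elaborate) of what Clark's "interchange computer" idea amounts to: each IMP keeps a table saying which neighboring IMP is the next hop toward a given destination, and passes each packet along one hop at a time until it reaches the IMP the destination host is attached to.

# A toy sketch of the "interchange computer" (IMP) idea: store-and-forward
# routing with a next-hop table, over a hypothetical three-node network.

ROUTING_TABLES = {
    "IMP-A": {"host-C": "IMP-B"},    # from IMP-A, go via IMP-B to reach host-C
    "IMP-B": {"host-C": "IMP-C"},
    "IMP-C": {"host-C": "deliver"},  # host-C is attached to IMP-C
}

def forward(packet, imp):
    """Follow next-hop entries until the packet reaches its destination IMP."""
    hops = [imp]
    while True:
        next_hop = ROUTING_TABLES[imp][packet["to"]]
        if next_hop == "deliver":
            return hops              # hand the packet to the attached host
        hops.append(next_hop)
        imp = next_hop

if __name__ == "__main__":
    packet = {"from": "host-A", "to": "host-C", "payload": "hello"}
    print(forward(packet, "IMP-A"))  # -> ['IMP-A', 'IMP-B', 'IMP-C']

The design choice this illustrates is the one the working groups cared about: the routing burden lives in the dedicated IMPs, not in the time-sharing machines people were actually using.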

Roberts’s first inclination was to make the data rate on the Arpanet 9.6Kbps. He thought that would be plenty fast for what they would need. He reconsidered, though, upon meeting with Roger Scantlebury and his colleagues from the UK’s National Physical Laboratory. Scantlebury had worked with Donald Davies. They suggested a faster data rate. Roberts saw that a speed of 56Kbps was achievable, and decided that should be the data rate for the network. This is the same speed that’s used when people now use a dial-up connection to get on the internet, though this was the network’s speed, not just the speed at which the people who used it connected with it.

Three groups worked on the implementation details for the Arpanet: The ARPA group led by Roberts, a second group at BBN led by Robert Kahn, a mathematician and electrical engineer, and a third group made up of graduate students from many different universities (really whoever wanted to participate), who were part of what became the Network Working Group, which was coordinated by Steve Crocker at UCLA. For the techies reading this, Crocker was the one who invented the term “RFC” for the internet’s design documentation, which stands for “Request For Comments.” He was looking for something to call the design that the graduate students had done, without coming across as superior to the people at ARPA leading the effort.

BBN won the contract from ARPA in a competitive bidding process to manufacture and install the IMPs. BBN chose the Honeywell 516 computer as the basis for them. It was customized to do the job. Each IMP was the size of a refrigerator. Graduate students and employees at the sites where these IMPs were installed were expected to come up with their own software (their own network stack), based on a specification that had been hammered out by the aforementioned working groups, to send and receive packets to and from the IMPs.

Crocker and the Network Working Group came up with several basic protocols for how computers would interact with each other over the Arpanet, including “telnet” and “file transfer protocol” (FTP). BBN created the system and protocol that enabled e-mail over the Arpanet in 1972. It became the most popular service on the network. These protocols would become the foundation for the protocols on the internet, as it developed several years later.

Along the way, the Arpanet’s designers had an idea that we would recognize as cloud computing today, where computers would have distributed, dedicated functionality, so that it would not have to be duplicated from machine to machine. A computer application that needed functionality could access it through the network interface. You could say that the idea of “the network is the computer” was starting to form, though from what I can tell, no one has pursued this idea with vigor until recently.

Also of note, ARPA had contracts with BBN in 1969 to research how medical information could be used and accessed over a network, to see what doctors could do with online access to medical records. Perhaps this research would be relevant today.

The first node on the Arpanet was set up at UCLA in September 1969. Bob Taylor left his position as IPTO director soon after. He was replaced by Larry Roberts. Another five sites were set up at SRI, UC Santa Barbara, the University of Utah, BBN, and MIT, by the end of 1970. The Arpanet became fully operational in 1975, and its management was transferred from ARPA to, ironically, the Defense Communications Agency.

Here’s a good synopsis on the history of the internet I’ve just covered, done by a couple of students at Vanderbilt University.

People often look askance at nostalgic notions that there was ever such a thing as a “golden age” of anything, but Bob Taylor said of this period of computer research, “Goddammit there was a Golden Age!” It was a special time when great leaps were made in computing. A lot of it had to do with changing the dominant paradigm in how they could be used.

What happened to the people involved?

Edit 11-23-2013: As I hear more news on these individuals, since the writing of this post, I will update this section.

Ken Olsen, who I introduced at the beginning of this post, retired from Digital Equipment Corp. in 1992. As noted earlier, DEC was purchased by Compaq in 1998, and Compaq was purchased by Hewlett-Packard in 2002. Olsen died on February 6, 2011.

Harlan Anderson was fired from DEC by Olsen in 1966 over a disagreement on what to do about the company's growth. He founded the Anderson Investment Company in 1969. He was a trustee of Rensselaer Polytechnic Institute for 16 years. He served as a trustee of the King School of Stamford, Connecticut, and the Norwalk Community Technical College Foundation. He is currently a member of the Board of Advisors for the College of Engineering at the University of Illinois. He is also a trustee of the Boston Symphony Orchestra, and the Harlan E. Anderson Foundation. He has written a book called "Learn, Earn & Return: My Life as a Computer Pioneer." (Sources: Wikipedia, the Boston Globe, and Rensselaer Board of Trustees)

Wes Clark left Washington University in St. Louis in 1972, and founded Clark, Rockoff, and Associates, a consulting firm in Brooklyn, New York, with his wife, Maxine Rockoff. His oldest son, Douglas Clark, is a professor of computer science at Princeton University. (Source: IEEE Computer Society) Wes Clark died on February 22, 2016.

John McCarthy worked at Stanford in the field he invented, artificial intelligence research, from 1962 until he retired in 2000. He remained a professor emeritus at Stanford until his death on October 24, 2011.

Fernando Corbató became a professor at the Department of Electrical Engineering and Computer Science at MIT in 1965. (see my note about Robert Fano below re. this department) He was an Associate Department Head of Computer Science and Engineering at MIT from 1974-1978, and 1983-1993. He retired in 1996. (Source: MIT Computer Science and Artificial Intelligence Laboratory)

Jack Ruina served as a professor of electrical engineering at MIT from 1963 to 1997, and is currently professor emeritus. During a two-year leave of absence from MIT he served as president of the Institute for Defense Analyses. In addition, he served as deputy for research to the assistant secretary of research and engineering of the U.S. Air Force, and as Assistant Director of Defense Research and Engineering for the Office of the Secretary of Defense. He also served on the General Advisory Committee under a couple of presidential appointments, from 1969 to 1977, and as senior consultant to the White House Office of Science and Technology Policy from 1977 to 1980.

Ivan Sutherland left the University of Utah in 1976 to help create the computer science department at the California Institute of Technology (Caltech). He served as professor of computer science there until 1980. In that year he founded a consulting firm with one of his students, Bob Sproull, called Sutherland, Sproull and Associates. Sproull is the son of Dr. Robert Sproull, who was one of ARPA's directors. (Sources: BookRags.com and "the DARPA video") The firm was purchased by Sun Microsystems in 1990, becoming Sun Labs. Sutherland became a Fellow and Vice President at Sun Microsystems. Sun was purchased by Oracle in 2010, and Sun Labs was renamed Oracle Labs. (Source: Wikipedia) Sutherland and his wife, Marly Roncken, are currently involved with computer science research at Portland State University. (Sources: Wikipedia and digenvt.com)

Bob Fano, who supervised Project MAC, stayed with the project until 1968. He became an associate department head for computer science in MIT's Department of Electrical Engineering in 1971. Soon after, the Electrical Engineering department was renamed the Department of Electrical Engineering and Computer Science, and Project MAC was renamed the Laboratory for Computer Science (LCS). He was a member of the National Academy of Sciences, and the National Academy of Engineering. He was a fellow of the American Academy of Arts and Sciences, and of the Institute of Electrical and Electronics Engineers (IEEE). (Sources: Wikipedia.org and "Did My Brother Invent E-Mail With Tom Van Vleck?" from the New York Times opinion blog) He died on July 13, 2016. (source: "Robert Fano, computing pioneer and founder of CSAIL, dies at 98")

Ken Thompson was elected to the National Academy of Engineering in 1980 for his work on Unix. Bell Labs was spun off into Lucent in 1996. He retired from Lucent in the year 2000. He joined Entrisphere, Inc. as a fellow, and worked there until 2006. He now works at Google as a Distinguished Engineer.

Dennis Ritchie – I feel I would be remiss if I did not note that Ritchie developed a computer programming language called "C" as part of his work on Unix, in 1972. It came to be known and used by professional software developers, and became pervasive in the software industry in the 1990s and beyond. Most of the software people have used on computers for the past 20+ years was at some level written in C. The language continues to be used to this day, though it's gone through some revisions, and its influence is felt in many of the programming languages professional developers use now. At some level, some of the software you are using to view this web page was likely written in C, or some derivative of it.

Dennis Ritchie worked in the Computing Science Research Center at Bell Labs throughout his career. Bell Labs became Lucent in 1996. Lucent and Alcatel merged in 2006. Ritchie retired from Alcatel-Lucent in 2007. He died on October 12, 2011. (sources: Wikipedia.org and Dennis Ritchie’s home page)

Dave Evans – Aside from his computer science work at the University of Utah, he was a devout member of the Church of Jesus Christ of Latter-day Saints for 27 years. I do not have documentation on when he left the University of Utah. All I’ve found is that he retired from Evans & Sutherland in 1994, and that he died on October 3, 1998.

Doug Engelbart went into management consulting after Tymshare was sold. He tried to spread his information technology vision in large organizations that he thought would benefit from having their knowledge generation and storage/retrieval processes improved. He continued his work at the Doug Engelbart Institute at SRI until his death on July 2, 2013. There are many more details about his professional career at his Wikipedia page.

Len Kleinrock joined UCLA in the mid-1960s, and never left, becoming a member of the faculty. He served as Chairman of their Computer Science Department from 1991-1995. He was President and Co-founder of Linkabit Corporation, the co-founder of Nomadix, Inc., and Founder and Chairman of TTI/Vanguard, an advanced technology forum organization. He is today a Distinguished Professor of Computer Science. He is a member of the National Academy of Engineering, the National Academy of Arts and Sciences, an IEEE fellow, an ACM fellow (Association for Computing Machinery), an INFORMS fellow (INstitute For Operations Research and the Management Sciences), an IEC fellow (International Electrotechnical Commission), a Guggenheim fellow (a fellowship awarded by the John Simon Guggenheim Memorial Foundation), and a founding member of the Computer Science and Telecommunications Board of the National Research Council. (Source: Len Kleinrock's home page)

Steve Crocker has worked in the internet community since its inception. Early in his career he served as the first area director of security for the Internet Engineering Task Force (IETF). He also served on the Internet Architecture Board (IAB), the IETF Administrative Support Activity Oversight Committee (IAOC), the Board of the Internet Society, and the Board of The Studio Theatre in Washington, DC. Other items on his resume are that he conducted research management at the Information Sciences Institute at the University of Southern California, and The Aerospace Corporation. He was once vice-president of Trusted Information Systems, and co-founded CyberCash, Inc., and Longitude Systems. He was Chair of ICANN's Security and Stability Advisory Committee (SSAC) from its inception in 2002 until December 2010. He is currently chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN). He is also CEO and co-founder of Shinkuro, Inc., a start-up focusing on dynamic sharing of information on the internet, and on improved security protocols for the internet. (Source: ICANN Biographical Data)

I’ll close this post with a video that gives a wider view of what ARPA was doing during the late ’50s, when it was founded, into the mid-70s. Note that “Defense” was added to ARPA’s name in 1972, so it became “DARPA.” This is “the DARPA video” I’ve referred to a couple times. There is a brief segment with Licklider in it.

In Part 3 I cover the later events and operating prerogatives at DARPA that are mentioned in this video. I will talk a little more about Larry Roberts, Ed Feigenbaum, and Bob Kahn; more about Licklider and the IPTO; and about Robert Taylor's work at Xerox PARC, along with Alan Kay, Chuck Thacker, and Butler Lampson, in the invention of personal computing.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

Luigi Zingales wrote an excellent article for City Journal, called "Who Killed Horatio Alger?" He lays out very clearly how a meritocratic society works, and the diversions that can take us off that track. He says this is what's happened with the bubbles and bailouts, and that they risk ending our belief in a meritocracy. He identifies a crucial agenda for reasserting it, which he calls, I think aptly, "pro-market." It is neither anti-business nor pro-business. Rather, it asserts that the rules of the market should govern the economy: when you succeed, the reward you receive from the market should not be pillaged; you get to keep the lion's share of it. When you fail, it's all on you. Do not expect the government to back you up. He points out that a "pro-market" agenda does not have a lot of political allies right now. Pro-business interests, particularly big business, and anti-business interests are on the same page: neither of them likes the market approach.

Zingales does a great job of describing what’s different about the political realm vs. the market, and how culturally similar business monopolies are to political institutions.

I think he puts forward a legitimate warning: if we don't stop doing the bailouts when future market crashes happen (and they will happen), the free market principles this country has used since its founding will go away, and may never come back, because people will see that acting on free market assumptions in their own lives doesn't pay off when society no longer believes in them.

The alternative in such a scenario is to see success as purely a matter of luck, not merit, and that there is nothing fair about it. This provides a rationale to redistribute income to the less lucky, and for government to pick winners and losers in the economy, favoring some businesses or industries over others. The position of industries in the market would be seen as purely arbitrary, a matter of luck. He points out that redistribution leads to economic stagnation, because there is no incentive to create new wealth in that scenario.

Here are some salient quotes from the article:

[T]his rosy picture obscures a hard fact: meritocracy is a difficult principle to sustain in a democracy. Any system that allocates rewards on the basis of merit inevitably gives higher compensation to the few, leaving the majority potentially envious. In a democracy, the majority generally rules. Why should that majority agree to grant a minority disproportionate power and rewards?

America … encouraged meritocracy from its inception. In the eighteenth century, the social order throughout the world was based on birthrights: nobles ruled Europe and Japan, the caste system prevailed in India, and even in England, where merchants were gaining economic and political strength, the aristocracy wielded most of the political power. The American Revolution was a revolt against aristocracy and the immobility of European society, but unlike the French Revolution, which emphasized the principle of equality, it championed the freedom to pursue happiness. In other words, America was founded on equality of opportunities, not of outcomes.

When the difference between a comfortable retirement and an indigent one is determined not by hard work or by a frugal lifestyle but by lucky timing in buying or selling your house, people start questioning the fairness of the market system. The fact that the real-estate bubble was the second large bubble to pop in less than a decade further undermined trust in markets as a good indicator of where to invest resources.

Though he doesn’t get into this, in my mind the third quote relates to monetary policy. The Fed kept interest rates low in the 1990s for an extended period of time. Fed Chairman Alan Greenspan said that there was no fear of inflation, due to increases in productivity from the computer/internet revolutions. The Fed’s easy money policy likely was a major contributor to the dot-com and telecom bubbles that burst in 2000. The Fed returned to an easy money policy in 2001, which fueled the housing bubble, and led to its subsequent collapse. It continues to implement an easy money policy to this day. That needs to change if Zingales’s pro-market agenda is going to work. All this policy has done is led us to ruin.

Read Full Post »

When President Obama gave his State of the Union address on January 25, 2011, I noticed he was criticized specifically for his so-called “Sputnik moment,” his call for “investment” in certain areas involving science and engineering. Here is some of what he said on that:

Half a century ago, when the Soviets beat us into space with the launch of a satellite called Sputnik, we had no idea how we would beat them to the moon. The science wasn’t even there yet. NASA didn’t exist. But after investing in better research and education, we didn’t just surpass the Soviets; we unleashed a wave of innovation that created new industries and millions of new jobs.

This is our generation’s Sputnik moment. Two years ago, I said that we needed to reach a level of research and development we haven’t seen since the height of the Space Race. And in a few weeks, I will be sending a budget to Congress that helps us meet that goal. We’ll invest in biomedical research, information technology, and especially clean energy technology, an investment that will strengthen our security, protect our planet, and create countless new jobs for our people.

People within the computer science, science, and engineering communities have been pining for a “Sputnik moment” for several years now, hoping that something, anything, will inspire our society to make math, science, engineering, and computer science a higher priority, and promote more funding for those fields in academia. It’s not just academics who are worried about this. The industries that count on having a supply of good scientists, engineers, and mathematicians to draw from in the U.S. are worried about the decline in interest among American students as well.

The hope for this “moment” is misguided, in my opinion, because what they’re really hoping for is something contrived. It also speaks to a lack of faith in America, that we cannot be inspired to pursue these fields without some foreign enemy that we feel we have to compete against. Obama kind of tried to create this “Sputnik moment” in his speech, talking about how the Chinese and South Koreans have better technology than we do. This is true. In the case of South Korea and Singapore, it’s been true for almost ten years. Back in 2002 I was hearing about how in Singapore people were reserving movie tickets for specific movies at specific theaters, and carrying out cash and credit card transactions at retail outlets, with their cell phones. All they needed to enter on their phone was their PIN. You can watch a Computer Chronicles episode where they talk about this. A couple years later I heard about how South Koreans had higher speed internet service than we did, back when the fastest broadband service to which most American consumers had access was probably 1.5 megabits per second. Koreans were already doing HD video conferencing as a matter of course. It’s come to this country just recently. I’m not convinced, though, that Americans are going to commit themselves to spending years studying technical subjects just to beat the Chinese and Koreans over issues like this.

Sputnik prompted a cultural realization in 1957 about the implications of the Soviets, our Cold War adversary, getting ahead of the U.S. in space technology, a realization driven primarily by anti-communist sentiment and concern for national security. The government didn't have to tell people to be worried about this. In fact it was the citizenry that was banging on the doors of the government, demanding, "Why didn't you see this coming," and, "Why weren't we first?" The Soviets had developed their own nuclear weapon in 1949, and now they had the ability to launch things into space. People made the connection that "they could drop nuclear bombs on us from space," a strategic high ground that we were not even close to occupying at the time. Our own test rockets were blowing up on the launch pad on a regular basis. The feeling was the Soviets had beaten us in a game of one-upsmanship, when we least expected it, and there was an alarming sense that we needed to "catch up."

According to what we know from history, the so-called "missile gap" was a political myth. Nevertheless, something good was able to come out of this. Math and science, which our culture had largely ignored and neglected, became a higher priority. We produced more scientists and engineers. Some of them went into university research. Others went into private industry. One by-product of this was a heightened interest in getting more women into computer science in the 1970s, an effort that actually succeeded at the time, but only temporarily. It also spurred the explosion in commercial personal computers in the late 1970s, which continued into the 1990s. There was a cultural understanding which said that learning about computers at a technical level, and how to use them, was important for our future.

That culture has changed. We value computer technology, because now it’s everywhere, and it’s hard not to learn about it, but now it’s all about learning the technology’s interface, not how to build it, or manipulate its internals. There is much less interest in what makes our technology possible, and in learning what’s necessary to drive it forward.

The criticism of Obama’s speech where he talks about our “Sputnik moment,” has bothered me somewhat, because the critics missed the history of government research funding. We really shouldn’t dismiss it, and let ourselves be ignorant of it, because I think this is one of the relatively few areas where government has done a good job, even with the occasional misguided notions that have been promoted through it under the label of “science”.

Here is some of what was said against Obama’s speech.

From Investors.com, published by Investor’s Business Daily, “Obama’s Tribute to Big Government”:

Are you impressed with the Internet? With your iPad? With that gadget on your car’s dashboard that gets you back on the Interstate after you get lost in a strange city?

Thank Washington, because according to the president Washington “planted the seeds for the Internet. That’s what helped make possible things like computer chips and GPS. Just think of all the good jobs — from manufacturing to retail — that have come from these breakthroughs.”

If you don’t remember Presidents Carter or Reagan or Clinton bragging about the initiatives of their administrations that would one day bear the fruit of global social networks like Facebook, handheld communication devices like BlackBerrys, and the mobility revolution of Wi-Fi, it’s because they didn’t.

Government didn’t bring us any of those things; the private sector did. Does anyone really believe the federal government — which can’t find, let alone police, the 13 million-plus illegal aliens within our borders — can find and finance entrepreneurs like Bill Gates and Steve Jobs while they are in the embryonic stages of their careers and make them successes?

The part of Obama’s speech that was quoted makes it sound like he was making a bit of a non-sequitor. Here is the full quote:

Our free enterprise system is what drives innovation. But because it’s not always profitable for companies to invest in basic research, throughout our history, our government has provided cutting-edge scientists and inventors with the support that they need. That’s what planted the seeds for the Internet. That’s what helped make possible things like computer chips and GPS. Just think of all the good jobs — from manufacturing to retail — that have come from these breakthroughs.

On these points, Obama was right, but it takes an understanding of the history of this technology to get that. Defense spending was an important part of establishing the internet (through ARPA). It should be noted that the internet I refer to here is not the web, but rather the basic hardware and software infrastructure that the web uses in order to work. NASA provided R&D funding for the development of computer chips around the time they were invented, and without defense spending there would not be a GPS satellite system, which is essential for GPS technology to work here on the ground. It should also be noted that GPS was initially used exclusively by the military, its technology was kept secret, and I think it was only made available for consumer use within the last 10+ years. These have been essential technology platforms for the consumer products and services we use.

Just a side note: The example of the level of illegal immigration exhibiting government incompetence is a bad one. It’s not that our government can’t police and regulate this better. It’s that it doesn’t want to, and there are interests in this country who like it that way. Anyway…

There was also this from Glenn Beck shortly after the speech:

The President also announced he would be “investing” in biomedical research. Can anyone find that one in the Constitution? Oh, and information technology, especially clean energy technology. Alright. Let’s take these apart one at a time, shall we?

Biomedical: Maybe it’s just me as an American citizen, but I don’t want the government now dabbling with creating our drugs. Period.

Information technology? I dunno. I thought we were doing pretty good with Steve Jobs. Bill Gates [is] doing good, you know. Newcomers [are] coming into that field all the time. I think we’re good! Have you seen the iPhone…next to the Post Office? I’m going to go with Apple on that one.

After seeing these comebacks from conservatives, I shook my head a bit in disgust. To me, these people clearly didn’t understand what they were talking about with regard to how the products we see today came to be. Well, I hope to remedy that a bit in this series.

Obama wasn’t talking about starting a government-owned biomedical or IT company, or the government becoming a venture capitalist that would fund startups (though, as you’ll see later in this series, the government did that in a round-about way during the “middle stage” of the internet’s development), but rather that the government would provide funding for basic scientific research of the same type that developed the technology platforms which Facebook, Twitter, GPS devices, smartphones, and PCs were able to build upon, or were able to use. It’s not a stretch at all to say that most of these platforms would not exist today, or would’ve come later, had government-financed research not delved into creating them.

I will use several sources for this series. The primary one is “The Dream Machine,” by M. Mitchell Waldrop. He tells the story of where the basis–the ideas and infrastructure–for a lot of the technology we use today came from. Other sources I will use are Wikipedia, historical videos I’ve found on the internet, and a documentary called “The Machine That Changed The World.” Occasionally I will cite anecdotes I have heard from people who were either involved in this work, or who have been close to the people who were.

“The Dream Machine” follows the career of one man, J.C.R. Licklider, and several other computing luminaries, who made great contributions to the field of computer science, and particularly to the goal of creating interactive, networked computing, the kind of computing we do every day when we use technology.

What follows is a series of posts on this subject. I’ve broken it up into parts, because it’s extensive. The goal of what I’m writing here is not to be an authoritative source, but to provide a primer which you, the reader, can use to further your own research of this topic, if you feel so inclined. I will not cover the full history of computer technology (though there is still a lot here). What I will cover is the history of government-funded research into certain computer technologies, and what flowed from that research. I’ve made a point to try as much as I can to include the contributions of private companies that were created from government projects, or which worked with government agencies, or were beneficiaries of ideas generated by government-sponsored research, and which were critical in bringing us the technology we use today.

Getting computing off the ground

The first technological breakthrough described in "The Dream Machine" was ENIAC (the Electronic Numerical Integrator And Computer), the first general-purpose computer. There were other computers invented by other creators before this, but they were designed for a specific purpose. It was either impossible, or very difficult, to program them for other purposes. This computer could be programmed to use a variety of calculation methods.

J. Presper Eckert, from the Computer History Museum

ENIAC was designed by J. Presper Eckert and John Mauchly (pronounced "mock-lee"). It was a digital computer, and its electronics were made up of vacuum tubes, also known as "valves." It was a government project, financed by the U.S. Army, at the Moore School of Engineering at the University of Pennsylvania, during WW II. It was completed in 1946, and took up a whole room. It was also the first high-profile project where women were important to the operation of a computer. They were called "Rosies," and they were the programmers of ENIAC. This was no mean feat for anyone to do, since no programming language existed. It was programmed via panels of "switches" (though the controls were actually dials), and the only code the programmers had to read were wiring diagrams. Before this, the "Rosies" had worked as human computers, calculating firing tables for artillery shells for the war effort by hand, with the help of mechanical calculators. Human computation was a problem, though. Errors were inevitable, and they might go undetected. It took a long time for people to compute the tables, and the sense of the time was that they needed the firing tables yesterday; they couldn't produce them fast enough. So building an electronic computer to do this work was seen as essential. As was the case in all sorts of male-dominated fields of the day, since most of the young men in the country were off fighting the war, women were brought in to do both kinds of work (computation and programming), and they showed themselves to be up to the task, though their contribution went unrecognized by society for many years.

John Mauchly, from the Computer History Museum

There’s a new documentary out on DVD now called “Top Secret Rosies: The Female Computers of World War II” (h/t to Mark Guzdial) for anyone who’s interested in learning more about this. There’s also a segment on this history in the documentary, “The Machine That Changed The World,” in its first episode, called “Great Brains.”

The following video of ENIAC, and the people who operated it, is National Archives footage digitized to the web. Note: There is no sound in this video. Eckert and Mauchly appear in the video at 4:55.

The blinking light display on the ENIAC was set up solely for the computer’s public unveiling. The display is a panel of lights covered with ping pong balls, with numbers painted on them. This panel was not used for normal operation. Nevertheless, it established the precedent of computers having panels of blinking lights.

There were two limitations with ENIAC. One was, while it could be programmed, it could not store its program. Programming was a matter of “wiring” the program into the machine, which was accomplished by adjusting the aforementioned panels of dials. Second, it was not interactive. The computer loaded data into itself, processed it, and produced printed tables of calculations. That was it. This would more or less become the template for most of the computers that would be produced and used for the next 25 years.

Taking what they had learned from building ENIAC, Eckert and Mauchly formed the world's first computer company, the Electronic Control Company, later called the Eckert-Mauchly Computer Corporation, in 1946. They developed the Univac (the Universal Automatic Computer). The following ad for Univac was produced after the Eckert-Mauchly company was bought by Remington Rand in 1950.

Univac was made famous when it was featured on a CBS News broadcast, giving the first statistical computer prediction for an election, in the 1952 presidential race. The prediction was based on a sample of actual vote tallies on election night, and was very close to the actual result, though CBS did not air the prediction, because it differed significantly from polls taken before the election.

IBM decided to get into the computer business after hearing requests for computers from the U.S. military during the Korean War. There was resistance within IBM to getting into this new market. Nevertheless, it produced its first computer model, the 701, in 1953. Its design was based on Dr. John von Neumann's paper, "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument," which documented his work on creating a computer that could store its program, at the Institute for Advanced Study in Princeton.

IBM came to dominate the field with its superior marketing, and backward compatibility with the punch card system of its older mechanical tabulation machines.

The first glimmers of interactive computing

What I covered above is one path that computers took, which came to dominate computer use for a long time. Another path that was very significant, though it was not experienced by the public at the time, was interactive computing. For many years, interactive computing would only be known in the military. Eventually it would come to be known by the whole world.

Jay Forrester, from Wikipedia

Going back in time to before ENIAC was created, the Navy wanted to build a flight simulator for pilot training. They funded a project at MIT called Whirlwind, starting in 1944. It was designed by Jay Forrester. The system he ultimately built was the first real-time digital computer. Like ENIAC, it used vacuum tubes. What was different was it gave constant feedback to an operator (the pilot trainee) as it received input (movements of a joystick by the trainee), and it gave the operator something to look at, a simple graphical display, generated using an oscilloscope.

Whirlwind was never really “finished,” though it was put into military service in 1951. The scientists and engineers who worked on it were constantly tinkering with it, and changing its purpose. Under “Project Claude,” Whirlwind was tasked with displaying the positions of a real drone and a real aircraft flying in the sky, taking radar data as input in real time, allowing the operator to use the joystick that was part of the computer’s setup to remotely control the drone. The goal was to see if the person sitting at the computer could use the equipment to pilot the drone to intercept the aircraft. In tests it worked. This is not unlike the way military drones are piloted today.

A government-deputized committee that was formed shortly after the Soviets tested their first nuclear bomb in 1949, headed by MIT physicist George Valley, saw Whirlwind in action in 1950. It was a decisive proof-of-concept for them about what computers were capable of, and how they could be used for air defense. The Valley committee was tasked with assessing our existing defenses in the face of a Soviet nuclear threat, and making recommendations about any changes that were needed. They found that our defenses were woefully inadequate for defending against a bomber raid, the only major threat that was imagined at the time.

The committee’s report was delivered to the Pentagon in 1950. It recommended that the government install more radar stations to fill in the gaps in the radar net, and that an automated radar tracking and response system–a computer system–was needed. Seeing Whirlwind in action gave them confidence that this was possible. The U.S. government was very nervous about the situation with the Soviets. The thinking was the cold war could turn into a hot war quickly, and so they accepted the Valley committee’s recommendations without question, even though this was a radical, new idea at the time, only supported by experimental technology.

A man by the name of J.C.R. Licklider comes into the story here. He was a psychologist, with a background in mathematics and physics, who was very interested in computers. He is a major figure in the history that follows.

Licklider came to MIT in 1950 to continue his work in psychology. While there, he developed custom-built analog computers to simulate isolated neurological activity in the human brain, as part of his research. He also contributed to the SAGE (Semi-Automatic Ground Environment) project in 1951, working on human factors research for the displays that operators would be working with. He helped determine when and how much information should be displayed to the operators, and what the intensity of the displays should be, so that the operators would be comfortable, would not be overwhelmed with information, and could make clear decisions.

SAGE was, in essence, the system that was recommended in the Valley Committee report. It was designed to be a real-time system, like Whirlwind, which would monitor our defensive airspace for Soviet bombers, and our own air force flights. It was developed at MIT, with several corporations contributing to its construction and planning, including IBM, System Development Corporation (spun off from RAND Corp.; it managed all of the programmers for the SAGE project), Burroughs, and Western Electric.

The programming task for SAGE was very complex. It was estimated they needed 2,000 programmers. There weren’t enough in the whole country to work on it. So the SAGE project undertook an education program to recruit and train new programmers. All comers were invited, men and women, from all walks of life. It was not easy to tell who the best programmers would be. Scientists, engineers, mathematicians, the typical candidates one would think were ideal, were not sure bets. Music teachers tended to be the best candidates. It was found that women were better than men at paying attention to minute details while simultaneously not losing track of the big picture. One of the best programming groups in the project was 80% female.

The Navy lost interest in Whirlwind in 1954, and stopped work on it. The Air Force began construction of SAGE installations that same year. The SAGE system was brought online in 1958. Each installation was huge, because, like ENIAC, its electronics were made up of large arrays of vacuum tubes. It took up a few floors of a building at each installation. Each one had two computers, so that one could always be taken down for maintenance if need be. It was networked so that information gathered about detected objects in the sky could be relayed to other stations. The first phone modems were created during this project, for this information transfer.

SAGE was fully built by 1963. The thing was, it was obsolete by this point. ICBMs (Inter-Continental Ballistic Missiles) had been developed, and the nuclear threat situation had become more a matter of nuclear deterrence than strategic defense. SAGE could not deal with ICBMs at all. Still, Whirlwind and SAGE would have a huge impact on the future of computing in the minds of the scientists and engineers who worked on them. SAGE was the first big proof-of-concept that interactive, networked computing was a viable concept. Still, it would take a lot of convincing to get more players in the computer industry to believe in it. Computers were still big and expensive, and computer time was expensive and precious. The idea of interactive computing was seen by many as pie in the sky, a waste of time, and SAGE was regarded as the exception that proved the rule that computers were only for data processing, not for interaction.

What came of it

One of MIT’s working groups on the SAGE project was spun off as the MITRE Corporation in 1958. Its purpose was to integrate new weapons systems into SAGE. In addition, in the coming years MITRE would develop the National Airspace System for air traffic control, which is still in use today, and AWACS (the Airborne Warning and Control System) (sources: Wikipedia, The MITRE Digest).

The SAGE project enhanced IBM’s ability to deliver computer systems that could handle big, real-time data processing jobs, because its staff had learned how to do this through working on the project, and by bringing in experienced people from places like MIT. IBM developed SABRE (Semi-Automated Business Research Environment) for American Airlines, a simplified commercial version of SAGE designed for making airline reservations. The first experimental SABRE system went online in 1960. It was the largest real-time commercial data processing network in the world at the time, ultimately linked by phone lines to 1,200 teletype terminals across the country.

American made SABRE available to independent travel agents in 1976, and in the year 2000 spun it off as SABRE Holdings, which exists today. It’s divided into four business units: Travelocity (the e-commerce site for people to make their own travel reservations), Sabre Travel Network (a global distribution system providing travel information to agencies, corporations, and travelers), Sabre Airline Solutions (providing airline reservations systems and financial management), and Sabre Hospitality Solutions (providing technology solutions to hotels).

The SAGE system continued to be used by the military until 1984. Fortunately it was never tested in combat.

J. Presper Eckert, a co-inventor of the ENIAC, became an executive with Remington Rand when the Eckert-Mauchly Computer Corporation was purchased by Remington Rand in 1950. He stayed on through the company’s many transitions. Remington Rand merged with Sperry in 1955 to become Sperry Rand, and later just Sperry, until the company merged with Burroughs to form Unisys in 1986. Unisys is still in existence today. Eckert retired from Unisys in 1989. He died on June 3, 1995.

John Mauchly, co-inventor of the ENIAC, became a founding member, and president, of the Association for Computing Machinery (ACM), and helped found the Society for Industrial and Applied Mathematics (SIAM). He stayed on at Remington Rand for about 10 years after it bought the Eckert-Mauchly Computer Corporation, leaving in 1959 to form the consulting firm Mauchly Associates. He later formed another consulting firm, Dynatrend, in 1967. He died on January 8, 1980. (sources: Wikipedia, the Association for Computing Machinery, Ohio History Central)

Jay Forrester, creator of Whirlwind, moved to the Sloan School of Management at MIT in 1956, leaving the development of digital computing for good. He created a new field of research called “system dynamics,” which looks at the interactions of objects in dynamic systems, with an emphasis on simulating those systems. He developed ideas that have led to modern notions of supply chain management. (Update 11/21/2016: He died on November 16, 2016. Source: New York Times)

In Part 2, I describe the efforts that began in the 1960s to bring interactive computing to the masses.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I saw this from Star Parker today, and I think she makes an excellent point about the role of government:

There is no way around the fact that freedom and prosperity only exist when government protects property, and this includes our money.

I would add to this, “protects the lives of individuals, and respects contracts,” as well as property, but she is on to something. Since the financial crash of 2008 our government has taken on the role of protector of our economy, even though the majority of Americans have not wanted it to do this. It has propped up companies, at best providing a cushion to tide them through their restructuring, and at worst giving them a false sense of value. It has been focused for too long on the conceit that it can produce favorable outcomes, and I mean that far beyond the scope of this discussion. This goes above and beyond protecting property. Protecting property means protecting it from harm inflicted by fraud, thieves, saboteurs, and vandals–external threats, not harm that is self-inflicted. What the government has been presuming is that it can save us from ourselves.

Even though the Federal Reserve has wanted to deny that it is devaluing the Dollar, it has been doing so by creating money and exchanging it for Treasuries and/or toxic mortgage securities, financing our dramatically expanding public debt. We haven’t been feeling the effects of this so much, because banks have not been lending much into the private economy, but someday this will change. This policy has had effects abroad, which we have been feeling indirectly. Nevertheless, this is not protecting our property–specifically, our money!

What I’ve been increasingly realizing is that if we are to get back to prosperity, our government should stop trying to save us from ourselves, stop trying to manage the economy, and get back to its raison d’être, and the Fed should adopt a new policy of protecting the value of the Dollar. If both were to do this, it would likely cause some scary and unpleasant results in the short term, but if people can see where the real problems in the economy are, those problems are more likely to be resolved.

Read Full Post »

This is kind of a follow-up post to one I wrote in March, 2009, talking about what I anticipated in our economic future. This takes a look back at the past, and brings us up to the present, showing us a bit of how we got here.

I saw an interview with economist and Wall Street Journal columnist Judy Shelton a couple months ago on C-SPAN. It was meant to be a nice sit-down chat, but she gives an interesting history of what’s happened to world monetary systems since the 1930s. She said that the global economy went from instability in the 1930s, where every nation was trying to undercut every other country’s currency, to stability with the creation of the International Monetary Fund (IMF). Then the IMF betrayed its own original mission, President Nixon took the U.S. off the Bretton Woods system in the early 1970s, and now we’re back to where we were in the 1930s in terms of international finance–each country trying to undercut everyone else’s currencies.

Quite a bit of this interview is a profile of Shelton and her interests, particularly her love of Russian culture. The interview is an hour long.

The most intriguing thing to me is that she likes the idea of a Bretton Woods-style gold standard for the U.S. dollar, an idea that most economists would consider anachronistic. This is an idea that I got into shortly after I left college in the early 1990s, but later abandoned, because I confused it with the gold standard that existed prior to Bretton Woods. The older one was partly blamed for the Great Depression.

The most interesting parts of the video are at 42 minutes and 55 minutes in. At minute 42 she explains her rationale for an international gold standard, saying she’s not a “gold bug,” I guess meaning that she doesn’t like it for its own sake. Rather she sees it as a relatively stable guarantor of value, and she dislikes the fact that world currencies are now not comparable to anything that seems real to most people. Instead, what people get is a sliding measure of value in the international currency market that varies from month to month and year to year. She said that now world currency trading is just a gamble, not unlike derivatives, and this has a debilitating effect on international trade. She also advocates it so that people can count on a dollar still being worth approximately what it was when they earned it, rather than its value wasting away because of inflation.

At minute 55 she gets very frank about the fiscal and economic situation that we are now in, and how we got into it. She utterly rejects the notion that supply-side economics and the free market created the mess. She puts the blame squarely on government interference in the real estate market, and the dysfunctional incentives this created, with the government on the one hand tacitly guaranteeing mortgage-backed securities, while on the other allowing speculators to use them in freewheeling bets, which ultimately made the crash as bad as it was. This put the government (in other words, the tax-paying public) in the position of having to back the bets to try to contain an economic collapse.

On the lighter side, I noted that Mrs. Shelton said she grew up in “The Valley” (San Fernando Valley in CA). “I’m really kind of the classic Valley girl,” she said. Wow! You wouldn’t know it by listening to her talk. Note this is not “The Valley” I sometimes refer to. To those of us with a computer/tech background, “The Valley” means Silicon Valley, centered in San Jose, CA. San Fernando is a northern suburb of Los Angeles. Anyway, this took me back to the early 1980s when I used to hear “Valley Girl” by Frank Zappa on the radio.

Valley girl
she’s a Valley girl
Valley girl
she’s a Valley girl
Okay fine
fer sure, fer sure
She’s a Valley girl
in a clothing store …

Like, totally. Good memories. 🙂

Read Full Post »

“Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. … The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
— from President Dwight Eisenhower’s farewell address, Jan. 17, 1961

Before I begin, I’m going to recommend right off a paper called “Climate science: Is it currently designed to answer questions?”, by Dr. Richard Lindzen, as an accompaniment to this post. It really lays out a history of what’s happened to climate science, and a bit of what’s happened to science generally, during the post-WW II period. I was surprised that some of what he said was relevant to explaining what happened to ARPA computer science research as it entered the decade of the 1970s, and thereafter, though he doesn’t specifically talk about it. The footnotes make interesting reading.

The issue of political influence in science has been around for a long time. Several presidential administrations in the past have been accused of distorting science to fit their predilections. I remember years ago, possibly during the Clinton Administration, hearing about a neuroscientist who was running into resistance for trying to study the difference between male and female brains. Feminists objected, because it’s their belief that there are no significant differences between men and women. President George W. Bush’s administration was accused of blocking stem cell research for religious reasons, and of altering the reports of government scientists, particularly on the issue of global warming. When funding for narrow research areas is blocked, it doesn’t bother me so much. There are private organizations that fund science. What irks me more is when the government, or any organization, alters the reports of its scientists. What bothers me most is when scientists themselves have chosen to distort the scientific process for an agenda.

One example of this was in 2001 when the public learned that a few activist scientists had planted lynx fur on rubbing sticks that were set out by surveyors of lynx habitat. The method was to set out these sticks, and lynx would come along and rub them, leaving behind a little fur, thereby revealing where their habitat was. The intent was to determine where it would be safe to allow development in or near wilderness areas so as to not intrude on this habitat. A few scientists who were either involved in the survey, or knew of it, decided to skew the results in order to try to prevent development in the area altogether. This was caught, but it shows that not all scientists want the evidence to lead them to conclusions.

The most egregious example of the confluence of politics and science that I’ve found to date, and I will be making it the “poster child” of my concern about this, is the issue of catastrophic human-caused global warming in the climate science community. I will use the term anthropogenic global warming, or “AGW” for short. I’m not going to give a complete exposition of the case for or against this theory. I leave it to the reader to do their own research on the science, though I will provide some guidance that I consider helpful. This post is going to assume that you’re already familiar with the science and some of the “atmospherics” that have been occurring around it. The purpose of this post is to illustrate corruption in the scientific process, its consequences, and how our own societal ignorance about science allows this to happen.

There is legitimate climate research going on. I don’t want to besmirch the entire field. There is, however, a significant issue in the field that is not being dealt with honestly, and it cannot be until the influences of politics, and indeed of a religious mindset, are acknowledged and addressed, however unfortunate and distasteful that is. The issue I refer to is the corruption of science in order to promote non-scientific agendas.

I felt uncomfortable with the idea of writing this post, because I don’t like discussing science together with politics. The two should not mix to this degree. I’d much prefer it if everyone in the climate research field respected the scientific method, and were about exploring what the natural world is really doing, and let the chips fall where they may. What prompted me to write this is I understand enough about the issue now to be able to speak somewhat authoritatively about it, and my conclusions have been corroborated by the presentations I’ve seen a few climate scientists give on the subject. I hate seeing science corrupted, and so I’ve felt a need to speak up about it.

I will quote from Carl Sagan’s book, The Demon-Haunted World, from time to time, referring to it as “TDHW,” to provide relevant descriptions of science to contrast against what’s happening in the field of climate science.

Scientists are insistent on testing . . . theories to the breaking point. They do not trust the intuitively obvious. The truth may be puzzling or counter-intuitive. It may contradict deeply held beliefs.

— TDHW

I think it is important to give some background on the issue as I talk about it. Otherwise I fear my attempt at using this as an example will be too esoteric for readers. There are two camps battling out this issue of the science of AGW. For the sake of description I’ll use the labels “warmist” and “skeptic” for them. They may seem inaccurate, given the nuances of the issue, but they’re the least offensive labels I could find in the dialogue.

The warmists claim that increasing carbon dioxide from human activities (factories, energy plants, and vehicles) is causing our climate to warm up at an alarming rate. If this is not curtailed, they predict that the earth’s climate and other earth systems will become inhospitable to life. They point to the rising levels of CO2, and various periods in the temperature record to make their point, usually the last 30 years. The predictions of doom resulting from AGW that they have communicated to the public are more based on conjecture and scenarios produced by computer models than anything else. This is the perspective that we all most often hear on the news.

The skeptics claim that climate has always changed on earth, naturally. It has never been constant, and the most recent period is no exception. They also say that while CO2 is a greenhouse gas, it is not that important, first because the amount of it in the atmosphere is so small (probably around 390 parts per million now), and second because its impact is not linear. It’s logarithmic, so the more CO2 is added to the atmosphere, the less impact each addition has over what existed previously. From what I hear, even warmists agree on this point. Skeptics say that water vapor (H2O) is the most influential greenhouse gas; it is the most voluminous, from measurements that have been taken. Some challenge the idea that increased CO2 has caused the warming we’ve seen at all, whether it be from human or natural sources. Some say it has probably had some small influence, but not one big enough to matter, and that there must be other, not yet discovered reasons for the warming that’s occurred. Others don’t care either way. They say that warming is good, and certainly better than dramatic cooling, as was seen in the Little Ice Age. Most say that the human contribution of CO2 is tiny compared to its natural sources. I haven’t seen any scientific validation of this claim yet, so I don’t put a lot of weight in it. As you’ll see, I don’t consider it that relevant, either.

In any case, they don’t see what the big deal is. They often point to geologic CO2 records and temperature proxies going back thousands of years to make their point, but they have some recent evidence on their side as well. They also use the geologic record and historical records to show that past warming periods (the Medieval Warm Period being the most recent–1,000 years ago) were not catastrophic, but in fact beneficial to humanity.
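
To make the skeptics’ “logarithmic” point concrete, here is a minimal sketch in Python. It is my own illustration, not something taken from either camp’s literature, and it assumes the simplified expression for CO2 radiative forcing that is often cited in the literature (on the order of 5.35 × ln(C/C0) watts per square meter, with a pre-industrial baseline of roughly 280 ppm). The exact numbers are only for illustration; the point is that each additional increment of CO2 adds less forcing than the one before it.

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Approximate extra radiative forcing (W/m^2) relative to the baseline
    # concentration, using the simplified logarithmic expression.
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each additional 100 ppm adds less forcing than the previous 100 ppm did.
previous = 0.0
for c in (280, 380, 480, 580):
    f = co2_forcing(c)
    print(c, "ppm:", round(f, 2), "W/m^2, increment:", round(f - previous, 2))
    previous = f

Note that this only describes the direct effect of CO2 by itself. The real argument, as the rest of this post gets into, is over the feedbacks that are assumed to multiply that direct effect.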

Some sober climate scientists say that there is a human influence on local climate, and I find that plausible, just from my own experience of traveling through different landscapes. They say that the skylines of our cities alter airflow over large areas, and the steel, stone, and asphalt/cement we use all absorb and radiate heat. This can have an effect on regional weather patterns.

Not everyone involved in distributing this information to the public is a scientist. There are many people in other roles, such as journalists, reviewers of scientific papers, and climate modelers, who may have some scientific knowledge, but who do not participate in obtaining observational data from Nature, or in analyzing it.

So what is the agenda of the warmists? Well, that’s a little hard to pin down, because there are many interests involved. The common agenda seems to be more government control of energy use, a desire to make a major move to alternative energy sources, such as wind and solar (and maybe natural gas), and a desire to set up a transfer-of-payments system from the First World to the Third World, a.k.a. carbon trading, which as best I can tell has more to do with international politics than climate. The issue of population control seems to be deeply entwined in their agenda as well, though it’s rarely discussed. Taking a broader view, the people who hold this position are critics of our civilization as it’s been built. They would like to see it reoriented towards one that they believe would be more environmentally friendly, and more “socially just.”

The sense I get from listening to them is they believe that our society is destroying the earth, and as our sins against the environment build up, the earth will one day make our lives a living hell (an “apocalypse,” if you will). Some will not admit to this description, but will instead prefer a more technical explanation that still amounts to a faith-based argument. Michael Crichton said in 2003 that this belief seemed religious. Lately there’s some evidence he was right. A lot of the AGW arguments I hear sound like George Michael’s “Praying for Time” from the early 1990s.

Crichton made a well-reasoned argument that environmentalism as religion does not serve us well.

Let me be clear. I am not anti-religion in all aspects of life. My concern here is not that people have religious beliefs, of whatever kind. What concerns me is the attempt to use religious beliefs as justification for government policy. I understand that environmentalism is not officially recognized as a religion in the U.S….yet. We can, however, recognize something that “walks like a duck, quacks like a duck,” etc., for ourselves. I agree with Crichton. The consciousness that environmentalism provides, that we have a role to play in the development of the natural world, a responsibility to be good stewards, is good. However, it should not be a religion. Despite the more alarmist environmentalists who try to scare people with phantoms, there are some sober environmentalists who act based on real scientific findings rather than a religious notion of how nature behaves, or how it would behave if we weren’t around to influence it. In my view those people should be supported.

To be fair, the skeptics have some political views of their own. Often they seem to have a politically conservative bent, with a belief in greater freedom and capitalism, though I think to a person they are environmentally conscious. The difference I’ve seen with them is they’re not coy, and are more willing to show what they’ve found in the evidence, and discuss it openly. They seem to act like scientists rather than proselytizers.

My experience with warmists is they want to control the message. They don’t want to discuss the scientific evidence. They seem to care more about whether people agree with them or not. The most I get out of them for “evidence” of AGW is anecdotes, even if their findings have been scientifically derived. I’m sure their findings are useful for something, but not for proving AGW. I’d be more willing to consider their arguments if they’d act like scientists. My low opinion of these people is driven not by the positions they take, but by how they behave.

We need science to be driven by the search for truth, and for that to happen we need people seeking evidence, being willing to share it openly, as well as their analysis, and allow it to be criticized and defended on its merits. Some climate scientists have been trying to do this. Some have been successful, but from what I’ve seen they represent only gradations of the “skeptic” position. Warmists have forfeited the debate by disclosing only as much information as they say supports their argument, restricting as much information as they can on areas that might be useful for disproving their argument (this gets to the issue of falsifiability, which is essential to science), and basically refusing to debate the data and the analysis, with a few rare exceptions.

The influence that warmists have had on culture, politics, and climate science has been tremendous. Skeptics have faced an uphill battle to be heard on the issue within their discipline since about the mid-1990s. Whole institutions have been set up under the assumption that AGW is catastrophic. Their mission is to fund research projects into the actual and possible effects of AGW, not its cause. Nevertheless, the people who work for these institutions, or are funded by them, are frequently cited as the “thousands of scientists around the world who have proved catastrophic AGW is real.” The thing is, from what I’ve heard, not much effort is going into looking at what’s causing global climate change, because the thinking is “everybody knows we’re the ones causing it.” It’s the consensus view, but it’s not based on strong evidence that validates the proposition.

“Consensus” might as well be code in the scientific community for “belief in the absence of evidence,” also known as “faith,” because that’s what “consensus” tends to be. Unfortunately this happens in the scientific community in general from time to time. It’s not unique to climate science.

Science is far from a perfect instrument of knowledge. It’s just the best we have. In this respect, as in many others, it’s like democracy. Science by itself cannot advocate courses of human action, but it can certainly illuminate the possible consequences of alternative courses of action.

The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which best fit the facts. It urges on us a delicate balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous skeptical scrutiny of everything–new ideas and established wisdom. This kind of thinking is also an essential tool for a democracy in an age of change.

One of the reasons for its success is that science has built-in, error correcting machinery at its very heart. Some may consider this an overbroad characterization, but to me every time we exercise self-criticism, every time we test our ideas against the outside world, we are doing science. When we are self-indulgent and uncritical, when we confuse hopes and facts, we slide into pseudoscience and superstition.

— TDHW

In light of the issue I’m discussing I would revise that last sentence to say, “when we confuse hopes and fears with facts, we slide into pseudoscience and superstition.” Continuing…

Every time a scientific paper presents a bit of data, it’s accompanied by an error bar–a quiet but insistent reminder that no knowledge is complete or perfect. It’s a calibration of how much we trust what we think we know. … Except in pure mathematics, nothing is known for certain (although much is certainly false).

I thought I should elucidate the distinction that Sagan makes here between science and mathematics. Mathematics is a pure abstraction. I’ve heard those more familiar with mathematics than myself say that it’s the only thing that we can really know. However, things that are true in mathematics are not necessarily true in the real world. Sometimes people confuse mathematics with science, particularly when objects from the real world are symbolically brought into formulas and equations. Scientists make a point of trying to avoid this confusion. Any mathematical formulas that are created in scientific study, because they seem to make sense, must be tested by experimentation with the actual object that’s being studied, to see if the formulas are a good representation of reality. Mathematics is used in science as a way of modeling reality. However, this does not make it a substitute for reality, only a means for understanding it better. Tested mathematical formulas create a mental scaffolding around which we can organize and make sense of our thoughts about reality. Once a model is validated by a lot of testing, it’s often used for prediction, though it’s essential to keep in mind the limitations of the model, as much as they are known. Sometimes a new limitation is discovered even when a well established prediction is tested.

Continuing with TDHW…

Moreover, scientists are usually careful to characterize the veridical status of their attempts to understand the world–ranging from conjectures and hypotheses, which are highly tentative, all the way up to laws of Nature which are repeatedly and systematically confirmed through many interrogations of how the world works. But even laws of Nature are not absolutely certain.

Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science–by far the most successful claim to knowledge accessible to humans–teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us.

We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.

The following paragraphs are of particular interest to what I will discuss next:

One of the great commandments of science is, “Mistrust arguments from authority.” (Scientists, being primates, and thus given to dominance hierarchies, of course do not always follow this commandment.) Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else. This independence of science, its occasional unwillingness to accept conventional wisdom, makes it dangerous to doctrines less self-critical, or with pretensions of certitude.

Because science carries us toward an understanding of how the world is, rather than how we would wish it to be, its findings may not in all cases be immediately comprehensible or satisfying. It may take a little work to restructure our mindsets. Some of science is very simple. When it gets complicated, that’s usually because the world is complicated, or because we’re complicated. When we shy away from it because it seems too difficult (or because we’ve been taught so poorly), we surrender the ability to take charge of our future. We are disenfranchised. Our self-confidence erodes.

But when we pass beyond the barrier, when the findings and methods of science get through to us, when we understand and put this knowledge to use, many feel deep satisfaction.

— TDHW

Sagan believed that science is the province of everyone, given that we understand what it’s about. In our society we often think of science as a two-tiered thing. There are the scientists who are authorities we can trust, and then there’s the rest of us. Sagan argued against that.

In the case of the AGW issue, what I often see with warmists is the promotion of blind trust: “The science says this,” or, “The world’s scientists have spoken,” and, “therefore we must act.” That is a note of certainty that science, in reality, does not offer. Whether we should act or not is a value judgment, and I argue that a cost/benefit analysis should be applied to such decisions as well, taking the scientific evidence and analysis into account, along with other considerations.

Breaking it wide open

There have been a few really meaty exposés this year of what’s been going on in the climate science community around the issue of AGW. One of them, which I’ll include here, is a presentation by Dr. Richard Lindzen, a climate scientist at MIT, sponsored by the Competitive Enterprise Institute. He also addressed an issue related to what Sagan talked about: the lack of critical thinking on the part of leaders and decision makers. Instead there are appeals to authority.

Up until a few hundred years ago, we in the West appealed to authority–monarchs and popes–for answers about how we should be governed, and how we should live. Thousands of years ago, the geometers (meaning “earth measurers”) of Egypt, who could measure and calculate angles so that great structures could be built, were worshipped. Temples were built for them. What created democracy was an appeal to rational argument among the people. A significant part of this came from habits formed in the discipline of science. Unfortunately with today’s social/political/intellectual environment, to discuss the climate issue rationally is to, in effect, commit heresy! What Lindzen showed in his presentation is the unscientific thinking that is passing for legitimate reasoning in climate science, along with a little of the science of climate.

I can vouch for most of the “trouble areas” that Dr. Lindzen talks about, with regard to the arguments warmists make, because I have seen them as I have studied this issue for myself, and discussed it with others. It’s as disconcerting as it looks.

The slides Lindzen used in his presentation are still available here. The notes below are from the video.

It’s ironic that we should be speaking of “ignorance” among the educated. Yet that seems to be the case. The leaders of universities should be scratching their heads and wondering why that is. Perhaps it has something to do with C. P. Snow’s “two cultures,” which I’ve brought up before. People in positions of administrative leadership seem to be more comfortable with narratives and notions of authorship than critically examining material that’s presented to them. If they are critical, they look at things only from a perspective of political priorities.

What’s interesting is that this has been a persistent problem for ages. Dr. Sallie Baliunas talked about how some of the educated elite in Europe during the Little Ice Age persecuted, tortured, and executed people suspected of witchcraft after severe weather events, because it was thought that the climate could be “cooked” by sorcery. In other words, the weather was blamed on a group of people who were seen as evil. Since the weather events were “unnatural,” they had to be supernatural in origin; according to the beliefs of the day that could only happen by sorcery, and the people who caused it had to be eradicated. Skeptics who challenged the idea of weather “cooking” were marginalized and silenced.

Edit 3-14-2014: After being prompted to do some of my own research on this, I got the sense that Baliunas’s presentation was somewhat inaccurate. In my research I found there was a rivalry that went on between two prominent individuals around this issue, which Baliunas correctly identifies, one accusing witches of causing the aberrant weather, and another arguing that this was impossible, because the Bible said that only God controlled the weather. However, according to the sources I read, the accusations and prosecutions for witchcraft/sorcery only happened in rural areas, and were carried out by locals. If elites were involved, it was only the elites in those areas. The Church, and political leadership of Europe did not buy the idea that witches could alter the weather. Perhaps Baliunas had access to source material I didn’t, and that’s how she came to her conclusions. Some of what I found was behind a “pay wall,” and I wasn’t willing to put up money to research this topic.

The sense I get after looking at the global warming debate for a while is there’s disagreement between warmists and skeptics about where we are along the logarithmic curve for CO2 impact, and what coefficient should be applied to it. What Lindzen says, though, is that the idea of a “tipping point” with respect to CO2 is spurious, because you don’t get “tipping points” in situations with diminishing returns, which is what the logarithmic model tells us we will get. Some might ask, “Okay, but what about the positive feedbacks from water vapor and other greenhouse gases?” Well, I think Lindzen answered that with the data he gathered.

To clarify the graph that Lindzen showed towards the end, what he was saying is that as surface temperatures increased, so did the radiation that went back out into space. This contradicts the prediction made by computer models that as the earth warms, the greenhouse effect will be enhanced by a “piling on” effect, where warming will cause more water vapor to enter the atmosphere, and more ice to melt, causing more radiation to be trapped and absorbed–a positive feedback.
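
To see why the assumed feedbacks matter so much, here is another minimal sketch, again my own illustration rather than a reproduction of Lindzen’s analysis or of any IPCC model. In the simplest textbook treatment, a no-feedback warming dT0 is amplified to dT0 / (1 - f), where f is the net feedback factor: f greater than zero amplifies the warming, f less than zero damps it. The models assume f is strongly positive; the outgoing-radiation measurements Lindzen presented suggest it is small or negative.

def amplified_warming(dT0, f):
    # Equilibrium warming (deg C) given a no-feedback warming dT0 and a
    # net feedback factor f (dimensionless; this formula assumes f < 1).
    return dT0 / (1.0 - f)

dT0 = 1.1  # roughly the no-feedback figure often quoted for a doubling of CO2
for f in (-0.5, 0.0, 0.3, 0.6):
    print("f =", f, "->", round(amplified_warming(dT0, f), 2), "deg C")

The numbers are placeholders, but they show how the headline warming projections are driven mostly by the value assumed for f, not by the direct CO2 effect itself.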

This study was just recently completed. Based on the scientific data that’s been published, and this presentation by Lindzen, it seems to me that the computer models the IPCC was using were not based on actual observations, but instead represent untested theories–speculation.

The audio at the end of the Q & A section gets hard to hear, so I’ve quoted it. This is Lindzen:

The answer to this is unfortunately one that Aaron Wildavsky gave 15-20 years ago before he died, which is, the people who are interested in the policy (and we all are to some extent, but some people, like you–foremost) have to genuinely familiarize themselves with the science. I’ll help. Other people will help. But you’re going to have to break a certain impasse. That impasse begins with the word “skeptic.” Whenever I’m asked, am I a climate skeptic? I always answer, “No. To the extent possible I am a climate denier.” That’s because skepticism assumes there is a good a priori case, but you have doubts about it. There isn’t even a good a priori case! And so by allowing us to be called skeptics, they have forced us to agree that they have something.

Despite Dr. Lindzen’s attempt to clarify his position from “skeptic” to “denier,” I think that’s a bad use of rhetoric, because “denier” in climate science circles has the political connotation of “holocaust denier,” which indicates that “the other side has something, and you have nothing.” Personally, I think that people like Lindzen should recognize that “climate skeptic” is a loaded term, and answer instead, “I am a skeptic of most everything, because that’s what good scientists are.” One can be skeptical of spurious claims.

It’s difficult for climate scientists from one side to even debate the other, as they should, because politics is inevitably introduced. This is a symptom of the corruption of science. Scientists should not have to defend their political or industrial affiliations with respect to scientific issues. This is tantamount to guilt by association and “attack the messenger” tactics, which are irrelevant as far as Nature is concerned. I’ve heard more than one scientist say, “Nature doesn’t give a damn about our opinions,” and it’s true. Science depends on the validity of observed data, and skeptical, probing analysis of that data. When the subject of study is human beings themselves, or products that could affect humans and the environment, then ethics comes into play, but this only extends so far as how to design experiments, or whether to do them at all, not what is discovered from observation.

This is old news by now, but a ton of e-mails and source code were stolen from the University of East Anglia’s Climatic Research Unit (CRU), which produces the HadCRUT temperature record jointly with the Met Office’s Hadley Centre, on November 19, 2009, and made public. I hadn’t heard about it until the last week in November. WattsUpWithThat.com has been publishing a series of articles on what’s being discovered in the e-mails, which provides a good synopsis. I picked out some of them that I thought summed up their contents and implications: here, here, here, and here. The following two interviews with retired climatologist Dr. Tim Ball also summed it up pretty well:

There are no forbidden questions in science, no matters too sensitive, or delicate to be probed, no sacred truths. Diversity and debate are valued. Opinions are encouraged to contend–substantively and in depth.

We insist on independent and–to the extent possible–quantitative verification of proposed tenets of belief. We are constantly prodding, challenging, seeking contradictions or small, persistent, residual errors, proposing alternate explanations, encouraging heresy. We give our highest rewards to those who convincingly disprove established beliefs.

— TDHW

[my emphasis in bold italics — Mark]

Ball referred to the following sites as good sources of information on climate science:

ClimateAudit

WattsUpWithThat

Here are articles written by Ball for the Canada Free Press

In the above interview Ball gets to one of the crucial issues that has frustrated skeptics for years: the publishing of scientific findings and peer review. He said the disclosed e-mails reveal that a small group of warmists exerted a tremendous amount of control over the process. He said he was mystified about why some climate scientists were emphasizing “peer review” 20 years ago (the peer-reviewed literature). He realizes now, after having reviewed the e-mails, that they were in effect promoting their own group. If you weren’t in their club, it’s likely you wouldn’t get published (they’d threaten editors if you did), and you wouldn’t get the coveted “peer review” that they touted so much. Of course, if you didn’t toe their line, you weren’t allowed in their club. No wonder former Vice-President Al Gore could say, “The debate is over.”

It goes without saying that publishing is the lifeblood of academia. If you don’t get published, you don’t get tenure, or you might even lose it. You might as well find another career if you can’t find another sponsor for your research.

The video below is called “Climategate: The backstory.” It looks like this interview with Ball was done earlier, probably in August or September.

The “damage control” from the CRU e-mails incident was apparent in the media in December, around the time of the Copenhagen conference. There was an effort to distract people from the real issues by focusing their attention on the nasty personalities involved. What galls me is that this effort betrays a contempt for the public, taking advantage of the notion that we have little knowledge of, or interest in, how science works, and so can be easily distracted with personality issues.

I have to say the media reporting on this incident was pretty disappointing. If they talked about it at all, they frequently had pundits on who were not familiar with the science. They simply applied their reading skills to the e-mails and jumped to conclusions about what they read. In other cases they invited on PR flacks to give some counterpoint to the controversy. Warmists had a field day playing with the ignorance of correspondents and pundits. Some of the pundits were “in the ballpark.” At least their conclusions on the issue were sometimes correct, even if the reasoning behind them was not. A couple shows actually invited on real scientists to talk about the issue. What a concept!

On a lighter note, check out this clip from the Daily Show…

Here’s an explanatory article about the significance of the “hide the decline” comment, along with background information which gives context for it. Here’s a Finnish TV documentary that touched on the major issues that were revealed in the CRU e-mails (The link is to part 1. Look for the other two parts on the right sidebar at the linked page).

Carl Sagan saw this pattern of thought before:

“A fire-breathing dragon lives in my garage.”

Suppose (I’m following a group therapy approach by the psychologist Richard Franklin) I seriously make such an assertion to you. Surely you’d want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

“Show me,” you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle–but no dragon.

“Where’s the dragon?” you ask.

“Oh, she’s right here,” I reply, waving vaguely. “I neglected to mention that she’s an invisible dragon.”

You propose spreading flour on the floor of the garage to capture the dragon’s footprints.

“Good idea,” I say, “but this dragon floats in the air.”

Then you’ll use an infrared sensor to detect the invisible fire.

“Good idea, but the invisible fire is also heatless.”

You’ll spray-paint the dragon and make her visible.

“Good idea, except she’s an incorporeal dragon, and the paint won’t stick.”

And so on. I counter every physical test you propose with a special explanation of why it won’t work.

Now, what’s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there’s no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I’m asking you to do comes down to believing, in the absence of evidence, on my say-so.

The only thing you’ve really learned from my insistence that there’s a dragon in my garage is that something funny is going on inside my head.

Now another scenario: Suppose it’s not just me. Suppose that several people of your acquaintance, including people who you’re pretty sure don’t know each other, all tell you they have dragons in their garages–but in every case the evidence is maddeningly elusive. All of us admit we’re disturbed at being gripped by so odd a conviction so ill-supported by the physical evidence. None of us is a lunatic. We speculate about what it would mean if invisible dragons were really hiding out in our garages all over the world, with us humans just catching on. I’d rather it not be true, I tell you. But maybe all those ancient European and Chinese myths about dragons weren’t myths at all…

Gratifyingly, some dragon-size footprints in the flour are now reported. But they’re never made when a skeptic is looking. An alternative explanation presents itself: On close examination it seems clear that the footprints could have been faked. Another dragon enthusiast shows up with a burnt finger and attributes it to a rare physical manifestation of the dragon’s fiery breath. But again, other possibilities exist. We understand that there are other ways to burn fingers besides the breath of fiery dragons. Such “evidence”–no matter how important the dragon advocates consider it–is far from compelling. Once again, the only sensible approach is tentatively to reject the dragon hypothesis, to be open to future physical data, and to wonder what the cause might be that so many apparently sane and sober people share the same strange delusion.

— TDHW

[my emphasis in bold italics — Mark]

In an earlier part of the book he said:

The hard but just rule is that if the ideas don’t work, you must throw them away. Don’t waste neurons on what doesn’t work. Devote those neurons to new ideas that better explain the data. The British physicist Michael Faraday warned of the powerful temptation

“to seek for such evidence and appearances as are in the favour of our desires, and to disregard those which oppose them . . . We receive as friendly that which agrees with [us], we resist with dislike that which opposes us; whereas the very reverse is required by every dictate of common sense.”

Meanwhile, the risks we are ignoring

The obsession with catastrophic human-caused global warming, driven by ideology and a kind of religious groupthink, and the flow of money to the tune of tens of billions of dollars, represents a misplacement of priorities. It seems to me that if we should be focusing on any catastrophic threat from Nature, we should be putting more resources into a scientifically validated one that hardly anyone is paying attention to: the possibility of the extinction of the human race, or an extreme culling, not to mention the extinction of most life on Earth, from a large asteroid or comet impact. Science has revealed that large impacts have happened several times before in Earth’s history. A large impactor will come our way again someday, and we currently have no realistic method for averting such a disaster, even if we spotted a body heading for us months in advance. The number of scientists who monitor bodies in space that cross Earth’s orbit could literally fit around a table at McDonald’s! Yet there are thousands of these missiles. These scientists say it is very difficult to make a case in Congress for an increase in funding for their efforts, because the likelihood of an impact seems so remote to politicians.

The New Madrid fault zone represents a huge, known risk to the Midwestern part of the U.S. Scientists have tried to warn cities along the zone about updating their building codes to withstand the next quake that will inevitably occur. But so far they have gotten a cool reception.

Alan Kay commented on Bill Kerr’s blog that regardless of what’s caused global warming (he leaves that as an open question), what we should really be worried about is a “crash” of our climate system, where it suddenly changes state from even a small “nudge.” It could even come about as a result of natural forces. I hadn’t thought about the issue from that perspective, and I’m glad he brought it up. He cited an example of such a crash (though on a smaller scale), pointing to “dead zones” in coastal waters all over the world, resulting from agricultural effluents. The example distracts a bit from his main point, but I see what he’s getting at. He said that governments have not been focused on how to prepare for this scenario of a climate “system crash,” and are instead distracted by meaningless “countermeasures.”

The implications for science and our democratic republic

The values of science and the values of democracy are concordant, in many cases indistinguishable. Science and democracy began–in their civilized incarnations–in the same time and place, Greece in the seventh and sixth centuries B.C. … Science thrives on, indeed requires, the free exchange of ideas; its values are antithetical to secrecy. Science holds to no special vantage points or privileged positions. Both science and democracy encourage unconventional opinions and vigorous debate. … Science is a way to call the bluff of those who pretend to knowledge. It is a bulwark against mysticism, against superstition, against religion misapplied to where it has no business being. If we’re true to our values, it can tell us when we’re being lied to. The more widespread its language, rules, and methods, the better chance we have of preserving what Thomas Jefferson and his colleagues had in mind. But democracy can also be subverted more thoroughly through the products of science than any pre-industrial demagogue ever dreamed.

— TDHW

I’m going to jump ahead a bit with this quote from an interview with Carl Sagan on Charlie Rose (shown further below):

If we are not able to ask skeptical questions, to interrogate those who tell us that something is true–to be skeptical of those in authority–then we’re up for grabs for the next charlatan, political or religious, who comes ambling along. It’s a thing that Jefferson laid great stress on. It wasn’t enough, he said, to enshrine some rights in a constitution or bill of rights. The people had to be educated, and they had to practice their skepticism in their education. Otherwise, we don’t run the government. The government runs us.

On December 7, 2009, the EPA came out with its endangerment finding, declaring that carbon dioxide is a pollutant that threatens public health. The agency will proceed to impose restrictions on CO2 emitters itself, since Congress has not acted to impose its own. What is all this based on now?

President Obama, you said, “We will restore science to its rightful place.” I’m still waiting for that to happen.

Science helped birth democracy. Its shadow is now being used to create conditions for a more authoritarian government. This isn’t the first time this has happened. The pseudo-science of eugenics, which was once regarded as scientific since it was ostensibly based on the theory of evolution, was used as justification for the slaughter of millions in Europe in the 1930s and 40s. It was also used as justification for shameful actions and experiments performed by our government on certain groups of people in the U.S.

Global warming has been blown up into a huge issue. There aren’t too many people who haven’t at least heard of it. We are seriously considering taking actions that could cost ordinary people, the poor in particular, and businesses a lot of money. When the stakes are this high, we’d better have a good reason. This is like the craze over bran muffins and foods with oats in them, driven by the belief (supported by scientific studies that were misreported to the public) that they prevented cancer, only it’s more serious. I worry about what this does to science, because if people can get away with debauching it now, why wouldn’t they keep doing it in the future?

One worry I have about the debauching of science is that it will delegitimize science in the eyes of the public, and encourage the same superstition and magical thinking that marked the Middle Ages. Who could blame us for rejecting it after it’s been perceived as “crying wolf” too many times?

The public has valued science up to now, because of the information it can bring us. The problem is we don’t care to understand what it is or how it works. “Just give us the facts,” is our attitude. We have blindly given the name of science a legitimacy that, like other things I’ve talked about on this blog, doesn’t take into account the quality of the findings, or the way they were obtained. It reminds me of a reference Alan Kay made to Neil Postman:

Our [scientific] artifacts are everywhere, but most people, as Neil Postman said once, have to take more things on faith now in the 20th century than they did in the Middle Ages. There’s more knowledge that most people have to believe in dogmatically or be confused about.

As a result, we have set up scientists as authorities. Some purport to tell us what to believe, and how to behave, and we as a society expect this of them. The problem with this is that when a “scientific fact” is later revealed to be wrong, people feel jilted. Science itself is thought of as a collection of facts, written by our scientific “priesthood.” We expect this “priesthood” to do right by the rest of us. Science was never meant to take on this role. I think a good part of the reason for this passive attitude towards science in the public sphere is that the quality and methodology of findings are not reported to the general public. Most journalists don’t understand the criteria well enough to explain them to the public in terms people would grasp.

The other part of the problem is that science is presented in our educational system as something that’s not very interesting. In fact most students only experience a small sliver of science, if that. It’s rather like mathematics (or arithmetic and calculation that’s called “mathematics”) for them, something they’re required to take. They just want to “get through it,” and they’re thankful when it’s over.

An issue I’m not even addressing here, though it’s worth noting, is that science is often perceived as heartless and cold, a discipline that has allowed us as a society to act without a moral sense of responsibility. This I’m sure has also contributed to the public’s aversion to science. And I can see that because of this, people might prefer “the science of global warming alarm” to “the science of skepticism.” One seems to be promoting “good action,” while the other seems like a bunch of backward, out of touch folks, who don’t care about the earth. These are emotional images, a way of thought that a lot of people the world over are prone to. However, as Sagan said in the interview below, “Science is after how the Universe really is, and not what makes us feel good.” These images of one group and the other are stereotypes, not so much the truth.

Of course a moral sense is necessary for a self-governing society like ours, but morality can be misapplied. By trying to do good we could in fact be hurting people if the solution we implement is not thought through. We may act on incomplete information, all the while thinking that we have the complete picture, thereby ignoring important factors that may require a very different solution to resolve. Our understanding of complex systems and the effects of tampering with them may also be grossly incomplete. While attempting to shape and direct a system that is behaving in a way we don’t like, we may make matters worse. Intent matters, but results matter, too. What appear to be moral actions will not always result in moral outcomes, especially in systems that are huge in scale and complexity. This applies to the environment and our economy.

As we’ve seen in our past, people eventually do figure out that the science behind a spurious claim was flawed, but it tends to take a while. By that point the damage has already been done. Perhaps scientists need to take a more active public service role in informing the public about claims that are made through news outlets. What would be better is if people understood scientific thinking, but in the absence of that, scientists could do the public a service by explaining issues from a scientific perspective, and perhaps educating the audience about what science is along the way. This would need to be done carefully, though. A real effort at this would probably expose people to notions that they are uncomfortable with. Without a sufficient grounding in the importance of science, that is, the importance of listening and considering these uncomfortable ideas, most people will just change the channel when that happens. In order for this to work, people need to be willing to think, because the activity is interesting, and sometimes produces useful results. Science cannot just be regarded as a vocation in our society. It is an essential part of the health of our democratic republic.

The danger of our two-tiered knowledge society

In all uses of science, it is insufficient–indeed it is dangerous–to produce only a small, highly competent, well-rewarded priesthood of professionals. Instead, some fundamental understanding of the findings and methods of science must be available on the broadest scale.

— TDHW

I’m going to turn the subject now to the matter of science and technology, and our collective ignorance, because it also has bearing on this “dangerous brew.” I found this interview with Carl Sagan, which was done shortly before he died in 1996. He talked with Charlie Rose about his then-new book, The Demon-Haunted World. He had some very prescient things to say which add to the quotes I’ve been using from his book. I found myself agreeing with what Sagan said in this interview regarding science, scientific awareness, and science vs. faith and emotions, but context is everything. He may not have been arguing from the same point of view I am, as I reveal further below. I find it interesting, though, that his quotes seem to apply very nicely to my argument. I’ve been reading this book, and I don’t see how I might be quoting him out of context. You’ll see why I’m hedging as you read further.

This is a poignant interview, because they talk about death and what that means. It’s a bit sad and ironic to see his optimism about his good health. He died of pneumonia, a complication of the bone marrow transplant he received as treatment for his myelodysplasia. His final accomplishment was completing work on a movie version of his novel, Contact, which came out in 1997. Interestingly, the movie touched on some themes from The Demon-Haunted World.

My jaw dropped when I heard Charlie Rose read that fewer than half of American adults in 1996 knew that our planet takes a year to orbit the Sun! I did a quick check of science surveys on the internet, and it doesn’t look like the situation has gotten any better since then.

Sagan’s point was not that magical thinking in human beings was growing. He said it’s always been with us, but in the technological society we have built, the prominence of this kind of thinking is dangerous. This is partly because we are wielding great power without knowing it, and partly because it makes us as a people impotent on issues of science and technology. We will feel it necessary to just leave decisions about these issues up to a scientific-technological elite. I’ve argued before that we have an elite that has been making technological decisions for us, but not at a public policy level. It’s been at the level of IT administrators and senior engineers within organizations. In the realm of science, however, we clearly have an elite which has gladly taken over decisions about science at the policy level.

The climate issue points to another aspect of this. As Dr. Lindzen pointed out in his presentation (above), we have people who are misusing climate models (and it’s anyone’s guess whether it’s on purpose or due to ignorance) as a substitute for the natural phenomenon itself! I’ve talked to a few climate modelers who believe that human activities are causing catastrophic climate change, and this is how they view it: since we do not have another Earth to use as a “control,” or as a means for “repeatability,” we use computer models as the “control,” or as the way to repeat an “experiment.” It’s absurd. Talking to these people is like entering the Twilight Zone. They argue as if they’re the professionals who know what they’re doing. The truth is they’re ignorant of the scientific method and its value, yet their theories of computer modeling and methodology carry a high level of legitimacy in the field of climate science. It’s what a lot of the prognosticating in the IPCC (Intergovernmental Panel on Climate Change) assessment reports is based on. This gives you an idea of the ignorance that at times passes for knowledge and wisdom in this field!

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality.

— Doron Swade, curator of the London Science Museum

As I’ve talked about before, a computer model is only a theory. That’s it. It’s a representation of reality created by imperfect human beings (programmers, though in principle it’s not that different from scientists creating theories and mathematical models of reality). It’s irrational to use a theory as a “control,” or as a proxy for the real thing in an “experiment.” It goes against what science is about, which is an acknowledgment that human beings are ignorant and flawed observers of Nature. Even if we have a theory that seems to work, there is always the possibility that in some circumstance that we cannot predict it will be wrong. This is because our knowledge of Nature will always be incomplete to some degree. What science offers, when applied rigorously, is very good approximations. Within the boundaries of those approximations we can find ideas that are useful and which work. There are no shortcuts to this, though.

Theories are of course welcome in science, but the only rational thing to do with them while using the scientific method is to test them against the real thing, and to pay attention to how well theory and reality match, in as many aspects as can be discerned.

Climate modelers who back the idea of catastrophe claim they do this when forming their models, but I’ve heard first-hand accounts from scientists about how modelers will “tweak” parameters to make a model do something “interesting.” This gets them attention, and I detect some techno-cultish behavior in it. I’ve also heard second-hand accounts from scientists about how modelers will input unrealistic parameters to make the models closely match the temperature record, which they term “validating the model.” As Dr. John Christy, a scientist at the University of Alabama in Huntsville who studies temperature in the atmosphere and at the surface, once remarked, “They already knew what the correct answer was.” This is an illegitimate methodology, because it’s no better than forming a conclusion from a data correlation. I’m sure that if I worked hard enough at it, I could create a computer model that also closely tracked the temperature record, just drawing lines on the screen and/or producing numbers, from a standpoint of total ignorance of how the climate works. I suppose that by their criteria my model would be “validated.”
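
To make that concrete, here is a minimal sketch in Python of the kind of know-nothing model I mean. It fits a high-degree polynomial to a made-up stand-in for a temperature record (synthetic numbers, not real data), contains no physics at all, and still matches the record closely.

import numpy as np

# A stand-in "temperature record": a gentle trend plus noise.
# Synthetic numbers, not real data -- the point doesn't depend on them.
rng = np.random.default_rng(0)
years = np.arange(1950, 2010)
record = 0.01 * (years - 1950) + 0.1 * rng.standard_normal(years.size)

# A "model" that knows nothing about climate: a 10th-degree polynomial.
x = (years - years.mean()) / years.std()   # rescale for numerical stability
coeffs = np.polyfit(x, record, deg=10)
model = np.polyval(coeffs, x)

# It tracks the record closely...
rmse = np.sqrt(np.mean((model - record) ** 2))
print(f"RMSE against the record: {rmse:.3f} degrees")

# ...but it embodies no understanding, and extrapolating it is nonsense.
x_future = (2030 - years.mean()) / years.std()
print("Year 2030 'projection':", np.polyval(coeffs, x_future))

Matching a record after the fact is cheap. The real test of a model is whether it predicts something it wasn’t tuned to.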

I’ve cited this quote from Alan Kay before (though he did not specifically address the issue of climate modeling, or anything having to do with climate science when he said it):

You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

To clarify, Kay was talking about the application of computing to non-computational sciences.
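
As a toy illustration of Kay’s point (my own example, not something he gave), a computer will simulate an “inverse cube law of gravity” just as readily as the real inverse square law. Nothing in the program objects; only going out and looking at actual orbits tells you which exponent Nature uses.

import numpy as np

def simulate_orbit(exponent, steps=5000, dt=0.001):
    # Crude Euler integration of a small body orbiting a unit mass at the
    # origin, with gravitational force proportional to 1/r**exponent.
    pos = np.array([1.0, 0.0])
    vel = np.array([0.0, 1.0])
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -(pos / r) / r**exponent   # unit direction times 1/r**exponent
        vel += acc * dt
        pos += vel * dt
    return pos

# The computer is equally content with either law.
print("final position, inverse square:", simulate_orbit(2))
print("final position, inverse cube:  ", simulate_orbit(3))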

Beware of those who come bearing predictions

I have praised Carl Sagan for what he talked about in the Charlie Rose interview above (I praise him for some of his other work as well), but I feel I would be remiss if I didn’t talk about a portion of his career where he fell into doing what I’m complaining about in this post. He promoted an untested prediction for a political agenda. I’m going to talk about this, because it illustrates a temptation that scientists (who are flawed human beings like the rest of us) can succumb to.

Sagan was one of the chief proponents of the theory of nuclear winter in the 1980s during the Cold War between the U.S. and the Soviet Union. As Michael Crichton pointed out (you may want to search on Sagan’s name in the linked article to reach the relevant part), like with catastrophic AGW, this was based on a prediction that was supported by flimsy evidence. In fact, computer climate modeling had a central role in the prediction’s supposed legitimacy.

Sagan exhibited a fallacy in thinking on a number of occasions that I’ll call “belief in the mathematical scenario”: believing in a scenario because it can be dressed up as a technical, mathematical model. Here’s the thing. Is the scenario even plausible? Does the fact that we can imagine it in a plausible way justify believing that it’s real? Does a moral belief that something is right or wrong justify promoting an unwavering belief in an untested theory that supports the moral rule, because it will cause people to “do the right thing”? How is this different from aspects of organized religion? Do these questions matter? From where I sit, it’s the academic equivalent of people making up their own myths, using the technical tool of mathematics as a legitimizer, mistaking mathematical precision for objective truth about the real world. This is a behavior that science is supposed to help us avoid!

On one level, trying to apply the scientific method to the nuclear winter prediction sounds absurd: “You want evidence confirming that a nuclear war would result in nuclear winter?? Are you nuts?” First of all, we don’t have to resort to that extreme. Scientists have found ways to physically model a scenario by using materials from Nature, but at a small scale, in order to arrive at approximations that are quite good. We don’t have to experience the real thing at full scale to get an idea of what will really happen. It’s just a matter of arriving at a realistic model, and in the case of this prediction that might’ve been difficult.

The point is that a prediction, an assertion, must be tested before it can be considered scientifically valid. It’s not science to begin with unless it’s falsifiable. And what’s worse, Sagan knew this! Without falsifiability, the appropriate scientific answer is, “We don’t know,” but that’s not what he said about this scenario. He at least admitted it was a prediction, but he also called it “science.” It was disingenuous, and he should’ve known better.

Without testing our notions, our assumptions, and our models, we are left with superstition–irrational fears of the unknown, and irrational hopes for things that defy what is possible in Nature (whether we know what’s possible is beside the point), even though they are dispensed using ideas that sound modern, and comport with what we think intelligent, educated people should know.

It doesn’t matter if the untested prediction is made to seem plausible by mathematics, or by a computer model (which is another form of mathematics). That’s mere hand waving. Prediction, mathematical or otherwise, is not science, and therefore it’s not nearly as reliable as analysis derived from the scientific method. Our predictions are hopefully derived from science, but even so, an untested prediction is really only as reliable as the experience of the person making it.

The same sort of political dynamic that has existed in climate science came into play when the nuclear winter theory was popular: if you were skeptical about the theory, that meant you were in the “minority” (or so they had people believe), not with the “consensus.” You were accused of supporting nuclear arms and our government’s tough “cowboy” anti-Soviet policy, and of being a bad person. Such smears were unjustified, but they were used to shame and silence dissent. I don’t mean to suggest that Sagan was a communist sympathizer, or anything of that sort. I think he wanted to prevent nuclear war, period. Not a bad motive in itself, but it seems to me he was willing to sacrifice the legitimacy of science for it.

A lot of scientists who didn’t know too much about the science at issue, but didn’t want to ruffle feathers, went along with it to be a part of the accepted group. The whole thing was desperate and cynical. It’s my understanding from history that the fears exhibited by the promoters of this theory were unfounded, and I think they came about because of a fundamental misunderstanding of realpolitik.

It’s not as if this scare tactic was really necessary. The consequences of nuclear war that we knew about were horrifying enough. It’s apparent from the interview with Ted Turner in the above video that there were worries about the escalation of the nuclear arms race, and perhaps in particular about the U.S. developing a first-strike nuclear capability against the Soviet Union under the Reagan Administration. You’ll notice that Sagan talks about (I’m paraphrasing) “one nation bombing another before the other can respond, the attacker thinking that they will remain untouched.” People like Sagan didn’t want the U.S., or perhaps the Soviets for that matter, to think it could carry out a first strike and wipe out the other side with impunity (because the climate would “get” the attacker in return). Surely a nuclear war with a large number of blasts would’ve caused some changes in climate, but how much was anyone’s guess.

The only evidence that could’ve realistically tested the theory, to a degree, would’ve been from above ground nuclear tests, or the bombs that were dropped on Hiroshima and Nagasaki, Japan. To my knowledge, none of them gave results that would’ve contributed to the prediction’s validity.

Before the very first nuclear bomb was tested there was at least one scientist in the Manhattan Project who thought that a single nuclear blast might ignite our atmosphere. That would be a fate worse than the predicted nuclear winter. Imagine everything charred to a crisp! Others thought that while it was possible, the probability was remote. Still, the scenario was terrifying. The bomb was tested. Many other above-ground atom bomb tests followed, and we’re all still here. Not to say that nuclear testing is good for us or the environment, but the prediction didn’t come true. The point is, yes, there are terrible scenarios that can be imagined. These scenarios are made plausible based on things that we know are real, and a knowledge of mathematics, but that does not mean any of these terrible scenarios will happen.

You’ll notice if you watch the interview with Turner that Sagan even talks about catastrophic AGW! Again, what he spoke of was a prediction, not a scientifically validated conclusion. It’s hard to know what his motivation was with that, but it sounded like he was uncomfortable with the idea that our civilization was not consciously thinking about the environment, and what consequences that might have down the line. Western governments began to understand environmental issues in the 1980s, and implemented regulations to clean up what was our highly polluted environment. From what I understand though, this did not happen in other parts of the world.

Not to say that all environmental problems have been solved in the U.S. There are real environmental issues that science can inform us about today, and that will need to be acted upon. One example is “dead zones,” which I referred to earlier, where coastal waters are losing their oxygen due to an interaction between algae and the nitrogen-rich compounds that agricultural operations release into streams. It’s killing off all marine life in the affected areas, and these “dead zones” exist all over the world. There’s a Frontline documentary called “Poisoned Waters” that talks about it. Another is an issue that does have to do with human-induced climate forcing, but not strictly in the sense of warming or cooling the planet. The PBS show Nova talked about it in an episode called “Dimming the Sun.” Huge quantities of sooty pollution have been found to affect relative humidity on Earth, which has a significant effect on our weather. Aside from the show’s application of this new finding to the issue of AGW, which to me was rather irrelevant, it was a very interesting episode. They gave what I thought was a very thorough and compelling exposition of the science behind the “dimming” effect.

The legacy of Carl Sagan

Based on what I’ve read in Sagan’s book, if he were still alive today, he would probably still be promoting the theory of catastrophic AGW. That is something I find hard to understand, given the understanding of science that he had, its implications for our society, and the seemingly innate need for humans to create their own myths, all of which he seemed to know about. Perhaps he was not one to look inward, as well as outward. Though it’s impossible to do this now, this is an issue I’d dearly like to ask him about.

I can’t help but think that Sagan and his cohorts created the template for the pseudo-science that bedevils climate science today. Richard Lindzen’s paper, which I referred to at the beginning of this post, paints the picture of what’s happened more fully, and points to some other motivations besides politics. One of Sagan’s phrases that I still remember is, “Extraordinary claims require extraordinary evidence.” Too bad he didn’t always follow that maxim himself.

Despite this, I respect the fact that he really did try to bring scientific understanding to the masses. Sagan in my mind was a great man, but like all of us he was flawed, and even he was willing to set aside his scientific thinking and participate in the promotion of pseudo-science for non-scientific goals.

I could rant that Sagan was a hypocrite, because he cynically exploited the very ignorance he expressed concern about. However, my guess is that he saw what he thought was a dangerous situation developing in an ignorant world–a “demon-haunted” one at that. Perhaps the only way he knew how to deal with it in such “dire circumstances” was to “take what existed,” promote a scenario that was not based on much, which we ignoramuses would believe, and cynically exploit the good name of science (and his own good name) so that we would pull back from the brink. It is elitist, though I can understand the temptation.

If we really want to bring people out of ignorance it’s best to try to educate them, even though that can be hard. Sometimes people just don’t want to hear it. But if this approach is not taken, then it’s just a bunch of elites messing with people’s heads so we’ll give them a response they want. We won’t be any more enlightened. There’s too much of that already.

I guess another lesson is that even though we can see ignorance in people, when the human spirit is brought out, it can produce solutions to problems in ways that people like me would not anticipate, and things work out okay. That, in essence, is the genius of semi-autonomous systems like ours that have diffused power structures: they acknowledge that no one person, or group, has all the right answers. The same is true of science, when it’s done well, and of relatively free markets. It’s best if we respect that, even though we may be tempted to subvert these systems for causes we ourselves deem noble.

Even so, I feel as though we put too much faith in our semi-autonomous, diffused systems. Some of us think they will solve all problems, and it’s not necessary to worry about being well educated. I think people push aside the idea too casually that more sophisticated ways of thinking and perceiving would help all of us (not just a few) make those systems more optimal.

So what are we to do?

The tenets of skepticism do not require an advanced degree to master, as most successful used car buyers demonstrate. The whole idea of a democratic application of skepticism is that everyone should have the essential tools to effectively and constructively evaluate claims to knowledge. All science asks is to employ the same levels of skepticism we use in buying a used car or in judging the quality of analgesics or beer from their television commercials.

But the tools of skepticism are generally unavailable to the citizens of our society. They’re hardly ever mentioned in the schools, even in the presentation of science, its most ardent practitioner, although skepticism repeatedly sprouts spontaneously out of the disappointments of everyday life. Our politics, economics, advertising, and religions (New Age and Old) are awash in credulity. Those who have something to sell, those who wish to influence public opinion, those in power, a skeptic might suggest, have a vested interest in discouraging skepticism.

— TDHW

I noted as I wrote this post that both Sallie Baliunas and Carl Sagan said that science needed special protection in our society. Richard Lindzen indicates that this protection is paper thin. He said it’s unfortunately easy to co-opt science in our society. The only way science can be protected, in my view, is if we value it for what it really is. Students need to be taught that, and shown its beauty. Sagan said a key thing in the Charlie Rose interview: “Science is more than a body of knowledge. It’s a way of thinking.” I must admit some ignorance on this point, but what little I’ve heard about science education in schools now indicates that it’s taught almost strictly as a body of knowledge. I suspect this is because of No Child Left Behind and standardized testing. I remember a CS professor saying a while back that in his kid’s science class there was a lot of workbook material, but very little experimentation, because the teachers were afraid to let their students do experiments. He didn’t explain why. My suspicion is they didn’t want the students to come to their own conclusions about what they had seen, and possibly get “confused” about what they’re supposed to know for their tests. In any case, I remember exclaiming to the professor that he should get his child out of that class! I asked rhetorically, “What do they think they’re teaching?”

Even when I look at my own science education, I realize that I wasn’t given a complete sense of what science is. The hypotheses were practically given to us. When we did an experiment, the steps for it were always given to us. We were always given the “correct” answer at the end, so we could compare it against the answer we came up with. This is how we calculated error: we compared our answer against the “correct” answer. One thing that was valuable was that we were asked to think of reasons why the error occurred. Some error always existed (it always does in real science), and we could speculate that maybe our instruments introduced some error, that we may have done the procedure a bit wrong, etc. This was teaching only one part of real science: observation, being skeptical of our observations, and recognizing human fallibility. That’s valuable. On the other hand, it also taught the fallacy that there was a perfectly correct answer, achieved via mathematics formulated by “masters of science.” There’s so much more to it than that. In real science, scientists come up with their own hypotheses. They design their own experiments. When they get their results, they have nothing to compare them against, unless they’re reproducing someone else’s experiment. Even then they can’t just say “the results of the original experiment are the correct answer,” because the other experiment may have had unrecognized flaws, too. The process by which the mathematical formulas we used became so good was not a “one shot” deal. They came about from making a lot of mistakes, recognizing what they were, and correcting for them.

I wonder how real scientists figure out what error figures/bars to put in their results. My guess is they come from some combination of the instrument’s rated accuracy and the statistical spread of repeated measurements, judged against the scale of what’s being observed.
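
Here is a minimal sketch of one common approach (my illustration, with made-up readings and a hypothetical instrument spec): take repeated measurements, compute the standard error of the mean from their spread, and combine it with the instrument’s rated uncertainty.

import math

# Five repeated measurements of the same quantity (made-up numbers).
readings = [9.78, 9.83, 9.81, 9.79, 9.84]

n = len(readings)
mean = sum(readings) / n

# Sample standard deviation of the readings, then standard error of the mean.
std_dev = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
std_err = std_dev / math.sqrt(n)

# Hypothetical instrument spec, combined with the statistical error in quadrature.
instrument_err = 0.02
total_err = math.sqrt(std_err ** 2 + instrument_err ** 2)

print(f"result: {mean:.3f} +/- {total_err:.3f}")

The error bar then reflects both how much the readings scatter and how good the instrument itself is claimed to be.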

Anyway, science is really about wondering, exploring, being curious, being skeptical of your own observations, as well as those of others. It also takes into account what’s been discovered previously. Alan Kay has talked about how there’s also a kind of critical argument that goes on in science, where the weak ideas are identified and set aside, and the strong ones are allowed to rise to the top.

So much stress is put on the need for math and science education for our country’s future economic health. It’s necessary for our society’s general health, too. I hope someday we will recognize that.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »
