Taking a look at NLS on the 45th Anniversary of “The Mother of All Demos”

Christina Engelbart posted a page on her site noting places that commemorated the 45th anniversary of her father Doug Engelbart's "Mother of All Demos." The Computer History Museum in Mountain View, CA held an event on Dec. 9, the date of the demo, to talk about the demo itself, what came out of it, and what from it has yet to be brought into our digital world. There have been other anniversary events for this demo in the past, but I find this one particularly poignant, because Doug Engelbart passed away in July. Maybe C-SPAN will be showing it?

I was intrigued by this line in the description of the CHM event:

Some of the main records of his laboratory at SRI are in the Museum’s collection, and form a crucial part of the CHM Internet History Program.

Ever since I heard what an admirable job the Augmentation Research Center did in building NLS architecturally, I've been curious to know, "How can today's developers take a look at what they did, so they could learn something from it?"

The reason I particularly wanted to point out this commemoration, though, is that one of the pages Christina referenced was a post by Brad Neuberg demonstrating HyperScope, a modern, web-based system that implements some of the document and linking features of NLS and Engelbart's original Augment system (NLS was renamed "Augment" in the late 1970s). Watching Engelbart's demo gives one a flavor of what it was like to use the system and what it was capable of, but it doesn't do much to help the audience understand how it dealt with information and how it operated, which is what makes it an important artifact beyond the dazzle of what was accomplished 45 years ago. Watching Brad's videos gives one a better sense of these things.

Getting beyond paper and linear media

“Information has structure. The computer should enable that structure to be modeled, and for that model to be manipulated in any representation that is appropriate to our understanding of it.” — just a thought I had

I came upon a few videos that were all basically about the same thing: we should be getting beyond paper with our computers, because paper is the old medium; the computer is a new medium. A rare few people have been working to find out how it can be uniquely beneficial to society, to find things that the computer can do but paper can't. I will focus on two of them in this post: Doug Engelbart and Ted Nelson. They came up with some answers 40+ years ago, but old habits die hard. For the most part these ideas were ignored. Most people couldn't understand them. There is a strong temptation with new technology to optimize the old, to make it work better. This is what society has chosen to do with computers thus far.

This could be wrong, but it seems to me the reason this state of affairs exists is that most people don't understand computing concepts as a set of ideas. Most of us know how to use a computer, but we don't have a real literacy in computing. This, I think, has created the bane of real computer scientists' existence: applications, or what Ted Nelson calls "lumps". A lot of people will say they "don't understand computers". They just use them. The argument has been made for decades that people don't need to understand computers, because that would be like people needing to understand the engine and transmission in the car they drive. Most people now understand computers the way they understand other machines: a machine is something that gets a specific task done. The "realization" that's been made is that with a computer you can "change it into any machine you want". So we go from "virtual machine" to "virtual machine", each one carrying out a specific task. There's no sense of generality beyond the GUI, because that would require understanding some things that people trained in computer science understand, for example, that information can be subdivided into small structures, and that links can be made between these structures. Computer science typically emphasizes doing this for efficient access to information. Engelbart and Nelson used the same principle for a different purpose: recognizing that information has structure, and that organizing it with relationships and links makes it easier to understand.
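To illustrate that last point, here is a minimal sketch in Python (my own invention for illustration; it is not NLS's or Nelson's actual design) of information kept as small, addressable structures with links between them, rather than as one undifferentiated lump:

```python
# A toy model of structured, linked information: each statement is a
# small, individually addressable node; links are first-class references
# between nodes. (Illustrative only, not how NLS actually worked.)

class Statement:
    def __init__(self, ident, text):
        self.ident = ident      # stable address, so links don't break
        self.text = text
        self.children = []      # hierarchical structure
        self.links = []         # cross-references to other statements

    def add_child(self, stmt):
        self.children.append(stmt)
        return stmt

    def link_to(self, stmt):
        self.links.append(stmt)

    def outline(self, depth=0):
        # One possible "view" of the structure; other views could filter,
        # truncate, or reorder without changing the stored information.
        print("  " * depth + self.ident + ": " + self.text)
        for child in self.children:
            child.outline(depth + 1)

doc = Statement("1", "Computers are a new medium")
a = doc.add_child(Statement("1a", "Paper is serial and linear"))
b = doc.add_child(Statement("1b", "Structured information supports many views"))
b.link_to(a)    # a cross-link, traversable independently of the hierarchy
doc.outline()
```

Because each statement has a stable address, any number of views (outlines, filtered lists, link traversals) can be built over the same underlying structure. That is the sense in which one model of the information can be manipulated through many representations.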

Business has been pursuing the "paperless office" for 30 years, but what businesses are really doing is moving from physical paper to "digital paper"; in effect, their work process has not changed. Doug Engelbart and Ted Nelson point the finger at Xerox PARC for this concept of computing. A sad irony. While there were some great ideas developed at PARC, as far as I know only a couple of them, the bitmapped display and Ethernet, have been adopted by the wider industry. Alan Kay said recently that the GUI developed at PARC, once heralded as a crowning achievement by the computer industry, was really just a means to an end: to give children an interface so they could learn computing ideas, according to their capabilities. He never intended for adults to use that interface, or at least not the style of it his team had developed. It wasn't the really important idea that came out of the Learning Research Group.

The video below demonstrates some functions with Nelson's data structure that would typically be done today by function-specific applications. The potential I see is that a scheme like this could replace data-oriented applications, making information more relevant and cohesive, without having to duplicate information into different application formats, and without having to open different applications to access the same information in different hierarchies. This would of course depend on people understanding computing concepts, rather than treating the computer as the arbiter of how information is stored and retrieved. That's not too far-fetched, given how many people have learned to use spreadsheets over the last few decades.
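To make the idea a little more concrete, here is a toy sketch in Python in the spirit of Nelson's multi-dimensional structures (my own drastic simplification for illustration, not his actual data model): a single pool of cells, threaded along named dimensions, so the same information can be read in different orderings without ever being copied into a second format.

```python
# One pool of cells, linked along named dimensions, so the same cell
# participates in several orderings without being duplicated.
# (A simplification for illustration, not Nelson's actual design.)

class Cell:
    def __init__(self, value):
        self.value = value
        self.next = {}          # dimension name -> neighboring cell

    def connect(self, dimension, other):
        self.next[dimension] = other

def walk(cell, dimension):
    """Traverse one dimension, yielding each cell's value in turn."""
    while cell is not None:
        yield cell.value
        cell = cell.next.get(dimension)

# The same three cells, threaded through two different "hierarchies":
alice = Cell("Alice: paid $30 on 3/1")
bob   = Cell("Bob: paid $45 on 2/15")
carol = Cell("Carol: paid $20 on 3/7")

alice.connect("by-name", bob); bob.connect("by-name", carol)    # alphabetical
bob.connect("by-date", alice); alice.connect("by-date", carol)  # chronological

print(list(walk(alice, "by-name")))   # Alice, Bob, Carol
print(list(walk(bob, "by-date")))     # Bob, Alice, Carol
```

The point of the toy is that an "application" here amounts to nothing more than a choice of dimension to traverse; the underlying information exists exactly once.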

When I first got into studying Alan Kay's work 4 years ago, I remember he said that when he was working at PARC, he and his team thought they had "safely killed off applications", and in my opinion, after viewing the above video, it's easy to see why. The idea they had was to look at what computers could deal with as a medium, find an intersection with human capabilities that could serve their goal of societal advancement, and come up with a generalized computing structure that would fill the bill, and then some. The problem was that the wider culture had no idea what these people had been up to. It was illiterate in computing ideas, so the very notion of what they were doing was alien. Not to mention that a few people at PARC were actively contributing ideas that supported the concept of applications, making computers model what could be done with paper. Applications kept chugging along out in industry, like they always had.

As Kay has pointed out, this is not the first time this sort of thing has happened in our history. It happened to books after the introduction of the printing press. We are now rapidly moving to a technology world where old media is being digitized: books, magazines, images, video, and audio. The technology being created to support this doesn't provide structured access to the underlying bits of information. As Nelson says, they are just provided as "lumps". The point is that this material is being digitized in its original format: serial, linear. As with everything else in technology, this is happening because it's what we're used to, not because we're really recognizing the computer's full potential, even though many people think we are. We can understand this limited view by recognizing that most consumers use computers as data processing engines whose sole purpose is to compress/decompress data, convert a stream of bits into readable type, convert from digital to analog (and vice versa), persist "blobs" of information, and transmit them over networks. There is some structure to what we do with computers, but as Nelson points out, it's simplistic. This forces us to work harder once we start dealing with complexity.

What I like about Nelson’s emphasis is he wants to make it easier for people to study content in a deep and engaging way.

The computer industry has not yet really tried to create a new medium. The companies that have won out historically don’t have a concept for what that would be. Even if they did, they would know that it would alienate their customers if they made it into a product.

Doug Engelbart built upon the same ideas Nelson discussed to try to achieve a higher goal: to improve the effectiveness of groups in pursuing knowledge and figuring things out. The way he tried to do this was to use a computer to take the complexity of an issue and make it clear enough that we could understand it more fully than we otherwise would. His ultimate goal was to create a systemic process for improving the group's effectiveness (i.e., a process for improving the group's improvement process) in understanding information and carrying on constructive arguments about it. What's kind of a mind-bender is that he applied this same principle of group improvement to the process of learning how to more fully understand and create NLS itself!

An idea I’m giving short shrift here (undeservedly) is the “bootstrapping” concept that Engelbart talked about. I have talked about it some in a previous post. I’ve had a feeling for a while now that there’s something very powerful about it, but I have not come to completely understand it yet. Engelbart explains it some in the video.

Edit 5/18/2010: Christina Engelbart, Doug Engelbart's daughter and Executive Director of the Doug Engelbart Institute, left a comment complimenting my post here. I'd like to highlight the post she wrote on her blog, called "Collective IQ", which she says was inspired by mine. Gosh, I'm flattered. 🙂 Anyway, be sure to give her article a read. She goes more into the subject of augmenting the human intellect than I did here.

—Mark Miller, https://tekkie.wordpress.com

Tales of inventing the future

I've been coming across videos lately that get into the creative ideas behind research projects that were out of this world for their time, and that give me feelings of inadequacy even today.

Two anniversaries happened in November 2008. One was the 40th anniversary of the idea of the Dynabook. The other was the 40th anniversary of Douglas Engelbart’s NLS demo.

Alan Kay – the Dynabook concept, 1968

Alan Kay gave a historical background on the ideas that led to the Dynabook concept. It’s 1 hour, 44 minutes.

Douglas Engelbart – NLS demo, 1968

Here is a collection of video clips from the 40th anniversary event for the NLS demo, held by SRI International (formerly the Stanford Research Institute). The original members of the NLS development team were in attendance to talk about the experience of building this amazing system, and they give more details about how it was constructed. One of Engelbart's daughters, Christina, talks about the conceptual framework her father implemented through the process of building NLS: incremental improvement of both the group and the system. NLS was intended to increase the working power of professional groups through a concept Engelbart called "augmentation", augmenting the human intellect. His goals were similar to J.C.R. Licklider's concept of human-computer symbiosis.

My thanks go to Rosemary Simpson of Brown University for providing these links. This is great stuff.

In an interview on Nerd TV a few years ago, Douglas Engelbart talked about the struggle he went through to implement his vision. It's a sad tale, with occasional triumphs. It's good to see his efforts getting public recognition in the present day.

You can learn more about Engelbart's work at the Doug Engelbart Institute. Of particular interest, I think, is the library section, where you can view the complete 1968 demo along with the papers he wrote. It's interesting to note when the papers were written, because his concepts of what was possible with the system he envisioned will sound familiar to people accustomed to today's computer technology. Considering what the norm in computing was at the time, this is amazing.

Related post: Great moments in modern computer history