Getting beyond paper and linear media

“Information has structure. The computer should enable that structure to be modeled, and for that model to be manipulated in any representation that is appropriate to our understanding of it.” — just a thought I had

I came upon a few videos that were all basically about the same thing: We should be getting beyond paper with our computers, because paper is the old medium. The computer is a new medium. A rare few people have been working to find out how it can be uniquely beneficial to society, to find things that the computer can do, but paper can’t. I will focus on two of them in this post: Doug Engelbart, and Ted Nelson. They came up with some answers 40+ years ago, but old habits die hard. For the most part these ideas were ignored. Most people couldn’t understand them. There is a strong temptation with new technology to optimize the old, to make it work better. This is what society has chosen to do with computers thus far.

This could be wrong, but it seems to me the reason this state of affairs exists is that most people don’t understand computing concepts as a set of ideas. Most of us know how to use a computer, but we don’t have a real literacy in computing. This, I think, has given rise to the bane of real computer scientists’ existence: applications, or what Ted Nelson calls “lumps”. A lot of people will say they “don’t understand computers”; they just use them. The argument has been made for decades that people don’t need to understand computers, any more than they need to understand the engine and transmission in the car they drive. Most people now understand computers the way they understand other machines: a machine is something that gets a specific task done. The “realization” that’s been made is that with a computer you can “change it into any machine you want”. So we go from “virtual machine” to “virtual machine”, each one carrying out specific tasks. There’s no sense of generality beyond the GUI, because that would require understanding some things that people trained in computer science understand, for example that information can be subdivided into small structures, and links can be made between those structures. Computer science typically emphasizes doing this for efficient access to information. Engelbart and Nelson used the same principle for a different purpose: recognizing that information has structure, and that organizing it with relationships and links makes it easier to understand.
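That core idea, information subdivided into small structures with links between them, can be sketched in a few lines of code. This is only an illustration under names of my own invention, not anything Engelbart or Nelson actually built:

```python
# A minimal sketch of "small structures with links between them":
# each node holds one fragment of content, and typed links relate
# nodes to each other. Illustrative only; the names are invented.
class Node:
    def __init__(self, content):
        self.content = content
        self.links = []              # list of (relation, target_node)

    def link(self, relation, target):
        self.links.append((relation, target))

    def related(self, relation):
        """Follow all links of one type to the connected fragments."""
        return [t for r, t in self.links if r == relation]

idea = Node("Information has structure.")
support = Node("Links make that structure navigable.")
idea.link("supported-by", support)

for n in idea.related("supported-by"):
    print(n.content)    # prints: Links make that structure navigable.
```

Efficient access is the usual computer-science motivation for structures like this; the point Engelbart and Nelson add is that the same links can also serve human understanding.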

Business has been going for the “paperless office” for 30 years, but what they’re really doing is evolving from using physical paper to “digital paper”, and in effect their work process has not changed. Doug Engelbart and Ted Nelson point the finger at Xerox PARC for this concept of computing. A sad irony. While there were some great ideas developed at PARC, as far as I know only a couple, the bitmapped display and Ethernet, have been adopted by the wider industry. Alan Kay said recently that the GUI, developed at PARC, which was once heralded as a crowning achievement by the computer industry, was really just a means to an end: to give children an interface so they could learn computing ideas, according to their capabilities. He didn’t intend to have adults use this interface, or at least the style of it that they had developed. It wasn’t the really important idea that came out of the Learning Research Group.

The video below demonstrates some functions with Nelson’s data structure that would typically be done today by function-specific applications. The potential I see is that a scheme like this could replace data-oriented applications, and make information more relevant and cohesive, without having to duplicate information into different application formats, and without having to open different applications to access the same information in different hierarchies. This would, of course, depend on people understanding computing concepts, not treating the computer as a guide for how to store and retrieve information. That’s not too far-fetched, given that plenty of people have learned how to use spreadsheets over the last few decades.

When I first got into studying Alan Kay’s work 4 years ago, I remember he said that when he was working at PARC, he and his team thought they had “safely killed off applications”, and in my opinion, after viewing the above video, it’s easy to see why. The idea they had was to look at what computers could deal with, as a medium, find an intersection with human capabilities which could be used for their goal of societal advancement, and come up with a generalized computing structure that would fill the bill, and then some. The problem was the wider culture had no idea what these people had been up to. It was illiterate of computing ideas. So the very notion of what they were doing was alien. Not to mention that a few people at PARC were actively contributing ideas that supported the concept of applications, and making computers model what could be done with paper. Applications kept chugging along out in industry, like they always had.

As Kay has pointed out, this is not the first time this sort of thing has happened in our history. It happened to books after the introduction of the printing press. We are now rapidly moving to a technology world where old media is being digitized: books, magazines, images, video, and audio. The technology that is being created to support this doesn’t provide structured access to the underlying bits of information. As Nelson says, they are just provided as “lumps”. The point is this material is being digitized in its original format. It’s serial, linear. As with everything else in technology, it’s happening because this is what we’re used to, not because we’re really recognizing what the computer’s full potential is, even though many people think we are. We can understand this limited view by recognizing that most consumers use computers as data processing engines whose sole purpose is to compress/decompress data, convert a stream of bits into readable type, convert from digital to analog (and vice versa), persist “blobs” of information, and transmit them over networks. There is some structure to what we do with computers, but as Nelson points out, it’s simplistic. This makes us have to work harder when we start dealing with complexity.

What I like about Nelson’s emphasis is he wants to make it easier for people to study content in a deep and engaging way.

The computer industry has not yet really tried to create a new medium. The companies that have won out historically don’t have a concept for what that would be. Even if they did, they would know that it would alienate their customers if they made it into a product.

Doug Engelbart built upon the same ideas that Nelson discussed to try to achieve a higher goal: to improve the effectiveness of groups in pursuing knowledge and figuring things out. The way in which he tried to do this was to use a computer to take the complexity of an issue and make it clear enough so we could understand it more fully than we otherwise would. His ultimate goal was to create a systemic process for improvement of the group’s effectiveness (ie. a process to improve the group’s improvement process) in understanding information and carrying on constructive arguments about it. What’s kind of a mind-bender is he applied this same principle of group improvement to the process of learning how to more fully understand and create NLS itself!

An idea I’m giving short shrift here (undeservedly) is the “bootstrapping” concept that Engelbart talked about. I have talked about it some in a previous post. I’ve had a feeling for a while now that there’s something very powerful about it, but I have not come to completely understand it yet. Engelbart explains it some in the video.

Edit 5/18/2010: Christina Engelbart, Doug Engelbart’s daughter, and Executive Director of the Doug Engelbart Institute left a comment, complimenting my post here. I’d like to highlight the post she wrote on her blog, called “Collective IQ”, which she says was inspired by mine here. Gosh, I’m flattered. 🙂 Anyway, be sure to give her article a read. She goes more into the subject of augmenting the human intellect than I did here.

—Mark Miller,

13 thoughts on “Getting beyond paper and linear media”

  1. “Information has structure. The computer should enable that structure to be modeled, and for that model to be manipulated in any representation that is appropriate to our understanding of it.”

    This describes a blank piece of paper better than anything else. I think what you are looking for is something like a tablet PC which is like super-smart paper. For example, if you were to draw a grid on it, it could immediately square up the grid and make it nice. Or if you were to write a column of numbers on it, it would keep a running total at the bottom; perhaps you could circle the appropriate numbers, draw a line from the circle to a new area and choose a formula or calculation to be performed on that column. Or you could start drawing and have Illustrator- and Photoshop-like functionality available. Basically, I would love for it to combine what Office’s “Clippy” intended to do (“it looks like you are writing a letter”), with enough accuracy to not need to ask me questions, the Office (again, back to Office) concept of contextuality that they started in version 2007, awesome, dead-accurate OCR/handwriting recognition, and the convergence of a modern smartphone (for example, anything you write you could just send to a file, or send to an email recipient, or an SMS destination, etc.). Basically, what OneNote is the very, very bare beginning of. Add on top of that the Smalltalk/Squeak/Dynabook/etc. concept of “everything is changeable”. For example, if I am working on a photo and I use a filter, let me examine the filter’s code and change it to suit my needs.


  2. Hi Justin.

    What you describe would certainly be part of it, in terms of the Squeak concept of “everything is changeable” with some strategies for making the content meaningful, but what I want to emphasize here is the strategies need to be generalized enough that they allow the user to manipulate the structure to what THEY want, not what the software developer thought were the “typical use cases”, or “the way the typical user would understand how to use it”. I’ve tended to not like the way Office would try to “intrude” on what I was doing, and “correct” stuff that I did not want corrected! Though I’ve found a few of its assistive features helpful. It took me a while to figure out how to turn off the assistive features I didn’t want.

    It would require the user to understand some high level computing principles, like I described, because the user would literally be involved in organizing the information, though the structure would be simple enough so as to not be too intimidating and restricting.

    More importantly I was trying to get at how information is stored and organized, and the kind of control the user has over that content’s appearance, representation, and organization, and the kind of access that the user has to the content, not just in terms of permission, but also being able to access it through a structure that shows the content’s relationship to other content.

    I thought Ted Nelson’s talk about “files” was particularly profound. What he’s really saying in the above videos is that with computers we have the power to persist bits of information, create relationships between these bits, search for them, and “bring them into view” when needed. Squeak has some of this as well, by the way. Classes and methods are just “bits of code” that are indexed and searchable, and persisted as part of a larger system. There are no monolithic files, except for the saved image of the system, and the “changeset”, which is a kind of version control system for Smalltalk code. Other than that there are no files for classes and methods.

    In Nelson’s model of an information system, documents are just bits of original content linked to other bits of original content (perhaps by other authors), brought together into different views that are customized to the viewer’s preferences at any point in time. There’s no such thing as a monolithic document in a singular file, with footnotes and endnotes, formatted with 1″ margins like on paper. All of these conventions were ways that people devised to deal with putting information on paper. All of these strategies are implemented in a new way that maintains the integrity of the information, but in a way the computer makes possible. The idea is to have the computer assist the reader in understanding a wider context for content, an idea or set of ideas; to allow the reader to shape their perception of the information in a way that helps them understand it more deeply; and to allow them to highlight material and write their own thoughts alongside the original content, again to help them understand what they’re reading in a deep and engaging way, all without really changing the original content (just making it appear so), but adding new relationships to it in the system.
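    To make that concrete, here’s a toy sketch of the idea (my own naming, not Xanadu’s actual design): a “document” is just an ordered list of references into a shared pool of original fragments, so two documents can present the same fragment without copying it.

```python
# Toy model of transclusion: documents reference fragments, they don't
# contain copies of them. Editing a fragment updates every view of it.
pool = {
    "f1": "Paper is the old medium.",
    "f2": "The computer is a new medium.",
}

doc_a = ["f1", "f2"]   # one author's view
doc_b = ["f2"]         # another author's view transcludes f2

def render(doc):
    """Assemble a view on demand from the original fragments."""
    return " ".join(pool[fid] for fid in doc)

print(render(doc_a))   # both fragments, in this author's chosen order
print(render(doc_b))   # the same f2, never duplicated
```

    The original content stays intact; only the references and relationships around it multiply.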

    For example, this web page is basically read-only, WYSIWYG, except for the comments. You get it the way I, or the blog software, want you to see it. It comes as a “blob”, take it or leave it. It isn’t structured so that individual parts are addressable (in the computing sense of the word) by others. Though its format can be copied and manipulated, HTML doesn’t make that easy. HTML is designed as a presentation format, but it assumes that the author controls the presentation and the content. It’s not designed to make it easy for others to manipulate it for their own viewing, or to include it in their own content. The video featuring Engelbart mentioned that Greasemonkey (I think that’s the name) on Firefox enables some user control over web content. It can reformat a web page to the user’s liking, controlling its presentation, but it doesn’t (I don’t think) facilitate bringing that presentation into their own content if they want. It only provides its service if you are just consuming content.

    There’s no two-way linkage in our current web infrastructure so that someone could easily sample my content into their own, format it the way they’d like, and comment on it, all the while automatically providing a link back to my original content. The web infrastructure that exists doesn’t provide me a way to see what the person who excerpted my content did with it (unless I diligently use a search engine to find that out), and to do the same with their content, allowing me to comment on it, and carry on a conversation with them about it. There are what are called “trackbacks” on blogs, which emulate this two-way linking in a simplistic way, but it’s really a feature of blog software, and is only around if two blog systems interact with each other. You have to search the comments to see who’s linked to a blog post, and you can’t tell where the link is until you read the other article. It breaks the stream of consciousness for the reader.
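    Here’s a sketch of what two-way linking could look like (again, my own simplification, not Xanadu’s or NLS’s actual mechanism): the links live in a registry outside the documents, so either end can discover the other.

```python
# Two-way links: unlike the web's one-way <a href>, where the target
# never knows it was linked to, a shared registry records both directions.
from collections import defaultdict

links_from = defaultdict(set)
links_to = defaultdict(set)

def link(source, target):
    links_from[source].add(target)
    links_to[target].add(source)    # the back-link comes for free

link("their-excerpt", "my-post")

# I could now see who excerpted my content without a search engine:
print(links_to["my-post"])          # prints: {'their-excerpt'}
```

    Trackbacks approximate this, but only between cooperating blog systems; a registry at the infrastructure level would make the back-link universal.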

    So the web, as it was conceived some 17 or so years ago has really been a one-way street, and there have been some efforts to try to provide some sense of dynamism, of changeable content, in the last 8 years, but the efforts have been pretty timid. It’s provided simplistic linking, so that we can go from document to document in a non-linear fashion, but the documents have been monolithic “blobs” that give little indication that the information they contain can exist in a larger context, unless the author, or other authors that link to his/her information, make a deliberate effort to make that larger context visible. We are still limiting ourselves when we don’t need to.

    Nelson talked about this in a broader sense than just text. He talked about audio and video as well. They also come as monolithic “blobs” that are not addressable internally. There’s no dynamic linking in these formats to other content, such as text, audio, video, because no one thought to make that possible. Yes, today’s movies are segmented into chapters, and there’s extra content, but the whole idea is the same as HTML. It’s designed so that whoever produced the video controls how it appears, and how it’s used. It’s a movie, right? On the computer we should be able to ask the question, “Is that the point? Is that it?”

    If you watch Ted Nelson’s video on Xanadu Spaces, you’ll see that he’s able to view multiple documents side by side, and create relationships between them. He says we should be able to do the same with audio and video. I can see the applications that would be possible in an educational sense. For example, when you watch a scene in a movie, you could also see the classical inspiration for that scene from another movie, a play, or poetry. You could actually get a deeper sense of what was going on inside the movie makers’ heads as they made their production. The same effect could be achieved with documents.

    Nelson talks about being able to view history “in parallel”, which gets across the idea that events in the world happen simultaneously, and occasionally “meet” at certain points in time.

    As I was viewing what Nelson and Engelbart were talking about, the issue of copyright crept into my head, and I think Engelbart’s system provided an answer to that, because along with its ability to store information in bits with links to other content, it also implemented a sense of “who has permission to see and modify” those bits of information. I think that copyrights should be respected, and the computer, and a better web infrastructure, can help with this as well. Rather than allowing content to merely be copied and pasted (or not), content could contain metadata carrying all the relevant author/production company information, even implementing hard and fast rules saying “This copyright notice must be displayed whenever any part of this content is used somewhere else,” embedded in the content, so that the author wouldn’t have to make a conscious effort to properly credit the source. The computer would do that automatically.

    An idea that Alan Kay talked about years ago is that content on the internet should not exist as just data, but should in fact be a package of information, an object, that contains content, but also code that provides access and presentation controls for that content, so that people don’t have to worry about formats anymore. But we can’t do that, because people are squeamish about downloading code to their computers, since it may be infected with something malicious, a product of our bad system architectures that allow this myriad of vulnerabilities.
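    As a rough sketch of what “the notice travels with the content” could look like (the class, names, and notice are all hypothetical):

```python
# Sketch: a fragment carries its attribution, and any reuse of it
# automatically includes the notice -- no conscious effort required.
class Fragment:
    def __init__(self, text, notice):
        self.text = text
        self.notice = notice            # e.g. "(c) 2010 A. Author"

def quote(fragment):
    """Reuse a fragment elsewhere; the notice is always attached."""
    return f'"{fragment.text}" -- {fragment.notice}'

src = Fragment("Information has structure.", "(c) 2010 A. Author")
print(quote(src))
```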

    The idea of storing information I was trying to get across is a bit like the way IT people handle data storage. IT creates links between bits of information in a database, though Nelson and Engelbart used a more sophisticated architecture to link information together; it’s more in that direction, rather than just a bunch of characters in a “lump”. They also envisioned that the stored information wouldn’t just be “data”. It would be text, discussions, visual information, etc.

    So while you’re close on the presentation and functionality side of things, what I meant to get across was broader, covering networking of content on the internet, and how content is stored and accessed.

  3. Mark – greatly appreciate your spotlighting these important ideas. In fact, you can find deep thinking on these points in my father Doug Engelbart’s 1962 report “Augmenting Human Intellect: A Conceptual Framework” (see esp. section II.C.4)

    He was convinced from the beginning that the incredible power of the human mind has been seriously underserved and limited by the ways we’ve developed to express ourselves, by our very language, and even more so by the technologies we developed throughout history for recording and making available what’s in our mind – stone tablets, scrolls, printed paper, etc. The opportunity he saw for computers was to bring us much more advanced ways to conceptualize and express, store, access, and broadly share, exchange, and otherwise leverage our intellect capacity. So for example, if I were to make the suggestion “think of your car”, you would have an instant view in your mind of your car, “now picture the interior, front seat, dash board, what’s inside your glove compartment” your mind just bombs around the information you have stored away at any level of detail, in any combination, from any vantage point, depending on what you’re thinking about at a given moment.

    Pages that you scroll through don’t offer this agility, although they are useful for following information presented in a certain context from start to finish. Search engines offer a bit more help, although (1) search hits typically point you to the top of a page or file, rather than directly to the piece of information you are searching on, so after you click on the link you then need to Find or scroll your way down through the (in this moment) extraneous stuff to finally arrive at what the search engine found potentially relevant, and (2) there are typically multiple hits, and sorting through them is laborious. If the author thought ahead to put anchor points at the places someone might in future want to link to, that helps. But what really helps is for our tools to provide more granular addressability — you’ll find a crude example in the “purple numbers” in his 1962 paper cited above.
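    The purple-numbers idea can be sketched very simply (this is only an illustration; the real scheme kept the identifiers stable across edits):

```python
# Sketch of granular addressability: every paragraph gets a stable
# anchor, so a link can land on the exact passage, not the page top.
paragraphs = [
    "The mind is underserved by the media we record it in.",
    "Computers can offer more advanced ways to express and share.",
]

# Assign anchors once; a real system keeps them fixed as the text evolves.
doc = {f"p{i + 1}": text for i, text in enumerate(paragraphs)}

def resolve(anchor):
    """A granular address like 'p2' lands directly on one paragraph."""
    return doc[anchor]

print(resolve("p2"))    # the second paragraph, with no scrolling or Find
```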

    Needless to say, my father was frustrated and disappointed by mainstream directions in information systems after he demonstrated and published his own work in 1968 — personal computers with no provision for networking or shared knowledge spaces, office automation (why would you automate how you used to work?), desktop publishing and WYSIWYG (easy to learn is great, as long as it doesn’t also mean funneling advanced users into lowest common denominator “what you see is ALL you get” paper-based paradigms).

    What’s missing in today’s information technology? A fundamental paradigm shift. Over the years my father identified a set of key functional and architectural attributes that are crucial for advancing how computers can really begin to augment rather than automate or otherwise bypass the untapped potential of our individual and collective intellect.

    See About an Open Hyperdocument System (OHS) for highlights and links to more detail.

    Christina Engelbart
    Executive Director
    Doug Engelbart Institute

  4. Pingback: More on getting beyond paper and linear media « collective iq

  5. Pingback: SICP: What is meant by “data”? « Tekkie

  6. Pingback: The 150th post « Tekkie

  7. thanks for this article Mark. I just finished reading it and watched all the videos except the first one, and I also read Christina’s article you linked to at the end.

    I’ve been looking for some sort of more dynamic software interface for a long time, wondering why as far and wide as I search, I’m always disappointed by the software’s lack of functionality. I really would like to transcend the page, and until I read this article I hadn’t realized the drastic implications of decisions made in the 70’s which are the reason I still can’t find a decent interface.

    I think it’s critical that we don’t give up and let this page-centric paradigm exterminate the chance for something better. I’m sure you’d agree, and that you know much more than I about all this. Although I have thoughts and ideas, I won’t waste your time right now since you’ve probably been through it all.

    all I want to say is that I’m very interested in this. I want the world to have access to better software that simulates more closely the way information is connected in our brain and in the real world.

    I’m just an undergrad student right now, but is there anything I can do to help make this a reality? people or groups I can contact? things I should learn and/or read? if there’s a chance that I could help improve the way we handle information, I’d be willing to spend a lot of time and energy to make it happen.

    thanks for the help,
    Blase Masserant

  8. Also, if there’s currently any software available representative of this paradigm, I’d greatly appreciate any information about how I can get it

  9. Hi Blase.

    I can see that you and I share a similar passion to see beyond the limitations of the current computer interfaces we have, and to strive to contribute something better. I think you are in a better position than I am to have an impact. The fact that you’re an undergrad (I assume you’re of the traditional age of an undergrad–18 or so), and recognize these issues is encouraging. I’ve only come to understand this stuff in the last few years, and I’m now in my 40s.

    I have met other people who have attributed greater knowledge to me than what I probably have. They assume that I have some advanced knowledge of all this stuff. On the level of ideas, it’s clear to me that I do have some knowledge that is beyond what most people in the field of computing have about what’s possible. Having said that, I’m open to hearing what other people think, because I may be missing something.

    On the technical level, I rate myself only a little above what a good IT software programmer knows. So my ideas are quite a ways ahead of my technical ability right now, and that’s the way it’s been for a while. I’m trying to improve that. Maybe I’m underrating myself, but that’s my own sense of my ability. Mostly what I bring to my blog is a perspective that points to some avenues that are possible. Bringing them to reality is another matter entirely, and I hope that someday I will have the ability to contribute something tangible that “proves” these ideas.

    I am not involved in any project at this point to create what I’ve talked about here, and I do not know of any projects that directly relate to it, except for what Ted Nelson is doing.

    You already know about Christina Engelbart’s blog. That, I think, is a good one to follow. Here are some other places I think you should check out:

    The Doug Engelbart Institute

    Ted Nelson’s home page

    Project Xanadu home page. This one looks more up your alley.

    You can also look up “Engelbart” and “Ted Nelson” on Google for other information about their work.

    You might be interested in this, “Active essays on the web”, by Takashi Yamamiya, Alessandro Warth, and Ted Kaehler. This is from Viewpoints Research Institute, which is getting more into the guts of looking at computers as a new medium, not just focusing on how computers enable information to be segmented and linked together, but also how concepts of computation play into that, and how content’s presentation can be manipulated via computing concepts.

    You might want to check out Viewpoints Research Institute’s home page. They’re more into the stuff I’m interested in.

    I wrote another post a while back on the idea of the “computer as medium,” which gives highlights on what this is about.

    An item I found encouraging is about a month after I wrote this post, I found an article in my local paper titled “Tech-savvy students still favor print textbooks”, which said that students don’t like e-books that much, because they can highlight material and write notes in the margins of hardcopy books, whereas they can’t do this with e-books. There’s a message here!

    When it comes to various packages of e-books and textbooks, there “doesn’t seem to be a single victor yet,” she said.

    “Students appreciate materials that are designed to be interactive,” Mills said.

    She said they are more resistant to read material that’s simply scanned into digital format.

    This is an irony that needs to be emphasized: The computer industry is so used to imitating paper, or in the broader sense, old media, that it doesn’t even think about it! E-books are worse than paper, even though by the appearances of the people who produce them, they’re better! They can say, “Look, they read like a book, but they’re not bulky and heavy. They’re cheaper to produce, and thereby cheaper to purchase. They’re environmentally friendly,” but the students get it: You can do less with media that’s merely digitized than with the real thing!

    So I think there will someday be a recognition in the market to make media that’s more like what I’m talking about here. It’ll take people like us, I suppose, to bring these new ideas to the makers of these materials.

  10. Pingback: Doug Engelbart has passed away | Tekkie

  11. Just thought I’d leave a note to anyone who comes along and would be interested in contacting Ted Nelson. I don’t know if this was public information when I wrote this blog post (I don’t remember seeing it), but I just saw that his e-mail address is (I’m just going to spell it out to avoid spammers): ted at It’s also on his Xanadu web page.

  12. Pingback: Alan Kay: Rethinking CS education | Tekkie

  13. Pingback: Generally Interesting Links – Jan 2021 – Abacus Noir
