I found a version of Smalltalk-72 online through Gilad Bracha on Google+. It’s being written by Dan Ingalls, the original implementor of Smalltalk, on his relatively new web authoring platform called Lively Kernel (or Lively Web). I don’t know what Ingalls has planned for it, or whether it will be up for long. I’m writing about it because I found it an interesting and enlightening experience, and I wanted to share it with others so that hopefully you’ll gain similar value from trying it out. You can use ST-72 here. I have some words of warning about it, which I’ll gradually describe in this post.

First of all, I’ve noticed as of this writing that as it loads, it puts up a scary “error occurred” message, and a bunch of “loading” messages get spewed all over the screen. However, if you are patient, all will be well. The error screen clears after a bit, and you can start using ST-72. This looks to be a work in progress, and only some of it works.

What you’ll see is a white vertical rectangle surrounded by a yellowish box that contains some buttons. The white rectangle represents the screen area you work in. At the bottom of that screen area is a command line, which starts off with “Welcome to SMALLTALK.”

This environment was created using an original saved image of a running Smalltalk-72 system from 40 years ago. Even though the title says “ALTO Smalltalk-72,” it’s apparently running on a Nova emulator. The Nova was a minicomputer from Data General, which was the original platform Smalltalk was written on at Xerox PARC in the early 1970s. I’m unclear on what the connection is to the Alto’s hardware. You can even look at a Nova assembly monitor display by clicking on the “Show Nova” button, which will show you a disassembly of the machine instructions that the emulator is executing as you use Smalltalk. (To get back to Smalltalk, click on “Show Smalltalk.”) The screen display you see is as it was originally.

Smalltalk-72 is not the first version of Smalltalk, but it comes pretty close. According to Alan Kay’s retelling in “The Early History of Smalltalk,” there was an earlier version (which was the result of a bet he made with his PARC colleagues), but it was rudimentary.

This version does not work like the versions of Smalltalk most people have used, which were based on Smalltalk-80. There are no workspace windows (yet), other than the command-line interface at the bottom of the screen area, though there is a class editor (which, as of this writing, I’d say is non-functional). It doesn’t look like there’s a debugger. Any time an error occurs, a subwindow pops up with an error message (which can be dismissed by pressing ctrl-D). You can still get some things done with the environment. To really get an idea of how to use it, you need to click on “Open ST-72 Manual.” This will bring up a PDF of the original Smalltalk-72 documentation.

What you should keep in mind, though (again, as of this writing), is that some features will not work at all. File functions do not work. There’s a keystroke the manual calls “’s”, which is supposed to recall a named attribute of an object; for example, “title’s string” accesses a variable called “string” in an object called “title.” (“title” receives “’s” and “string” as separate messages, but “’s” is supposed to be a single character, not the two shown.) I have not been able to find this keystroke on the keyboard, though inside a class you can substitute a different message for it (whatever you want to use).

The biggest thing missing at this point is mouse input: while the environment can read the mouse pointer’s position, it does not detect any mouse button presses (the original mouse used with the system had several buttons). This disables the built-in class editor, which makes using ST-72 a chore for anything more than “hello world”-type stuff, because besides adding a method to a class (which is easy to do), it seems the only way to change code inside a class is to erase the class (assign nil to its name) and completely rewrite it. I also found that this made the later code examples in the documentation (which get into more sophisticated graphics) impossible to try out, as much as I tried to use keyboard actions to substitute for mouse clicks.

What I liked about seeing this version was that it helped explain what Kay was talking about when he described the first versions of Smalltalk as “iconic” programming languages, and what he meant when he described Smalltalk’s parsing technique, in “The Early History of Smalltalk” (which I’ll call “TEHS” hereafter).

To use this version, you’ll need to get familiar with how to use your keyboard to type out the right symbols. It’s not cryptic like APL, but it’s impossible to do much without using some symbolic characters in the code.

The most fascinating aspect to me is that parsing is a major part of how objects operate in this environment. If you’re familiar with Lisp, ST-72 will seem a bit Lisp-like. In fact, Kay says in TEHS that one of his objectives was to improve on Lisp’s use of “special forms” by making them unnecessary. He did this by making parsing an integral part of how objects operate, and by making what appear to be syntactic elements (what would be reserved words or characters in other languages) classes in their own right, which receive messages that are just the other elements of a Smalltalk expression in your code. The parsing action goes pretty deep into how the whole system operates.

Objects in ST-72 exist in what I’d call a “nether world” between the way Lisp functions and Lisp macros behave (without the code generation part). Rather than substituting bound values and then evaluating, as Lisp does, an object parses the tokens of its message and evaluates as it goes (though it does have bound values in class, instance, and temporary variables).

It’s possible to create a class which has no methods (or just an implicit, anonymous one). Done this way, it looks almost like a lambda (an anonymous function) in Lisp with no parameters. To pass parameters to an object, though, the preferred method is to execute some parsing actions inside the class’s code, which ST-72 makes pretty easy. Once you start doing this, using the “eyeball” character, classes start to look a bit like Lisp functions with a COND expression inside.
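To give a flavor of this, here’s a toy class of my own devising (it’s not from the manual, so take it as a sketch of the style rather than gospel). I’ll transliterate the special glyphs: ʘ for the eyeball (typed shift-5), ⇒ for implies (shift-/), ← for assignment (shift-_), ⇑ for return (shift-1), and a plain : for the character that fetches the next token of the message and evaluates it:

    to dbl x (
        ʘof ⇒ ('x ← :. ⇑ (x + x)))

If I have the semantics right, evaluating “dbl of 7” runs dbl’s code: the eyeball looks for the literal token “of” in the incoming message, and since it finds it, the implies clause fires; “:” fetches and evaluates the next token of the message (7), which gets bound to x, and x + x is returned. That’s the COND-like shape I mean: a class body reads as a list of “does the message look like this? ⇒ then do this” clauses.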

Distinguishing between using a class and an instance of it can be confusing in this environment. If you reference a class and pass it a message, that executes the class’s code using its class variable(s). (In ST-80 this would be called using “class-side code,” though in ST-72 the only distinction between class and instance is which variables inside the class specification get used. The code that gets executed is exactly the same in both cases.) To use an object instance (with its instance variable(s)), you have to assign the class to a variable first, and then pass your message to that variable. Assigning a class to a variable automatically creates an instance of the class; there is no “new” operator. The semantics of assignment look similar to those of SETQ in Lisp.
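Here’s a sketch of how I understand it, using the same transliterations (again, my own toy example, not one from the manual). If I recall correctly, classes in the manual test ISNEW to run initialization code at the moment an assignment creates the instance:

    to counter : n (
        ISNEW ⇒ ('n ← 0)
        ʘbump ⇒ ('n ← n + 1)
        ʘcount ⇒ (⇑ n))

    c ← counter.
    c bump. c bump.
    c count

The “:” after the class name marks n as an instance variable (see the separator entry in the key map below). “c ← counter” creates the instance and runs the ISNEW clause; “c count” should then answer 2, using c’s copy of n. Sending “counter bump” instead would run exactly the same code, but against the class’s own variable.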

A major difference I notice between ST-72 and ST-80 is that in ST-80 you generally set up a method with a signature that lays out what message it will receive, and its parameter values, and this is distinct from the functionality that the method executes. In ST-72 you create a method by setting up a parsing expression (using the “eyeball” character) within the body of the class code, followed by an “implies” (conditional) expression, which then may contain additional parsing/implies expressions within it to get the message’s parameters, and somewhere in the chain of these expressions you execute an action once the message is fully parsed. I imagine this could get messy with sophisticated interactions, but at the same time I appreciated it, because it can allow a very straightforward programming style when using the class.

Creating a class is easy. You use the keyword “to” (“to” is itself a class, though I won’t get into that). If you’ve ever used the Logo language, this will look familiar; one of Kay’s inspirations for Smalltalk was Logo. After the “to”, you give the class a name, and enter any temporary, instance, and/or class variables. After that, you start a vector (a list) which contains your code for the class, and then you close off the vector to complete the class specification.
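Putting these pieces together, here’s my transliteration of the “box” example Kay presents in TEHS (the shape is his; any transcription errors are mine). I’ve written the variable declaration using this environment’s “:” separator, and “square” stands for a separately defined class that draws a square of a given size:

    to box : x y size tilt (
        ISNEW ⇒ ('x ← 256. 'y ← 256. 'size ← 50. 'tilt ← 0. SELF draw)
        ʘdraw ⇒ (@ place x y turn tilt. square size.)
        ʘundraw ⇒ (@ white. SELF draw. @ black)
        ʘturn ⇒ (SELF undraw. 'tilt ← tilt + :. SELF draw)
        ʘgrow ⇒ (SELF undraw. 'size ← size + :. SELF draw))

The @ stands for the smiley/turtle character (shift-2); “@ white” and “@ black” set the ink, so “undraw” erases the box by redrawing it in white. Once this is defined, “joe ← box” draws a new box, and “joe turn 30” or “joe grow 15” parse their arguments out of the message stream, as described above.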

All graphics in ST-72 are done using a “turtle” (though it is possible to position it with x-y coordinates, and to direct it to draw lines), so it’s easy to do turtle graphics in this version of Smalltalk. If you go all the way through the documentation, you’ll see that Kay and Adele Goldberg, who co-wrote it, do some sophisticated things with the graphics capabilities, such as creating windows and menus, though since mouse button presses are non-functional at this point, I found this part difficult, if not impossible, to deal with.
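Even with the mouse problems, the basic turtle works from the command line. If I have its vocabulary right (“go” and “turn” are the basic movement messages, and the turtle keeps reading from the message stream, so commands chain), something like this should trace a square, with \ pressed at the end to “do it”:

    @ go 100 turn 90 go 100 turn 90 go 100 turn 90 go 100 turn 90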

I have at times found that I’ve wanted to just start the emulator over and erase what I’ve done. You can do this by hitting the “Show Nova” button, hitting the “Restart” button, and then hitting “Show Smalltalk.”

Here is the best I could do for an ST-72 key map. Some of this is documented. Some of it I found through trial and error. You’ll generally get the gist of what these keys do as you go through the ST-72 documentation.

function                 keystroke          description
! (“do it”)              \ (backslash)      evaluates/executes the expression entered on the command line
hand                     shift-’            quotes a literal (equivalent to “quote” in Lisp)
eyeball                  shift-5            “look for”; used for parsing tokens
open colon               ctrl-C             fetches the next token, but does not evaluate it (looks like two small circles, one on top of the other)
keyhole                  ctrl-K             peeks at the input stream, but does not fetch a token from it
implies                  shift-/            used to create if…then…else… expressions
return                   shift-1            returns a value
smiley/turtle            shift-2            for turtle graphics
open square              shift-7            bitwise operation, followed by:
                                              a * (number) = a AND (number)
                                              a + (number) = a OR (number)
                                              a - (number) = a XOR (number)
                                              a / (number) = a left-shift by (number)
?                        shift-~ (tilde)    question mark character
’s                       (unknown)
done                     ctrl-D             exits a subwindow
unary minus              -
less-than-or-equal       ctrl-A
greater-than-or-equal    ctrl-Z
not equal                ctrl-N
%                        ctrl-V             percent character
@                        ctrl-F             at-sign character
!                        ctrl-Q             exclamation point character
double quote             ctrl-O             double quote character
zero(?)                  shift-4
up-arrow                 shift-6
assignment               shift-_            displays a left-arrow; equivalent to “=” in Algol-derived languages
variable separator       : (colon)          separates temporary, instance, and class variables (this character looks like “/” in the ST-72 documentation, but you should use “:” instead)

When entering Smalltalk code, the Return key does not enter the code into the system. It just does a line-feed so you can enter more code. To execute code you must “do it,” by pressing ‘\’.
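So a first session looks something like this, typing “3 + 4” and then \ (if I remember the echo behavior right, the \ displays as the “do it” character, and the result prints below):

    3 + 4 !
    7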

You’ll notice some of the keystrokes are just for typing out a literal character, like ‘?’. This is because the emulator re-maps some of the keystrokes on your keyboard to do something else, so it was necessary to re-map the displaced characters to other keystrokes so they could still be typed.

Using ST-72 feels like playing around in a sandbox. It’s fun seeing what you can find out. Enjoy.

As I’ve been blogging here, I’ve identified various issues that I think need to be remedied in the world of computing, though they’re ideas that have occurred to me as I’ve written about other subjects. I went through a period, early on in writing here, of feeling lost, but exploring, looking at whatever interested me in the moment. My goal was to rediscover, after spending several years in college getting my CS degree, and then several more years in professional IT services, why I thought computers were so interesting so many years ago, when I was a teenager.

I had this idea in my head back then that computers would lead society to an enlightened era, a new plateau of existence, where people could try out ideas in a virtual space, the way I had experienced on early computers, with semi-interactive programming. I saw the computer as always being an authoring platform, and that the primary method of creating with it would be through programming, and then through various apparatus. I imagined that people would gain new insight about their ideas through this work, the way I had. I did not see this materialize in my professional career. So I decided to leave that world for a while.

I make it sound here like it was a very clear decision to me, but I’m really saying this in hindsight. Leaving that world was more of a gradual process, and for a while I wasn’t sure I wanted to do it. It felt scary to leave the only work I knew as a professional, which gave me a sense of pride and meaning. I saw too many problems in it, though, and it eventually got to me. I expected IT to turn out better than it did, to become more sane, to make more sense. Instead the problems in IT development seemed to get worse as time passed. It made me wonder how much of what I’d done had really improved anything, or if I was just going through the motions, pretending that I was doing something worthwhile, and getting a paycheck for it.

On top of that, I saw a major shift occurring in software development, perhaps for the better, but it was disorienting nevertheless. I felt the need to explore in my own way, rather than going down the beaten path that someone else, or some group, had laid out for me.

I spent as much time as I could trying to find better ideas for what I wanted to do. I also spent a lot of time trying to understand Alan Kay’s perspective on computing, as I was, and am, deeply taken with it. That’s what this blog has largely been about.

When people have asked me, “What are you doing,” “What are you up to,” I’ve been at a loss to explain it. I could explain generically, “I’ve been doing some reading and writing,” “I’m looking at an old programming language,” or, “I’m studying an operating system.” They’re humdrum explanations, but they’re the best I could do. I didn’t have any other words for it. I felt like I couldn’t say I was wandering, going here and there, sampling this and that. I thought that would sound dangerously aimless to most of the people I know and love.

I found this video by Bret Victor, “Inventing on Principle,” and finally I felt as though someone had put accurate, concise words to what I’ve been trying to do all this time.

http://vimeo.com/36579366

His examples of interactive programming are wonderful. I first experienced this feeling of a fond wish come true, watching such a demonstration, when Alan Kay showed EToys on Squeak back in 2003. I thought it was one of the most wonderful things I had witnessed. Here was what I had hoped things would progress to. It was too bad such a principle wasn’t widespread throughout the industry. It should be.

What really excited me, watching Bret’s video, is that at 18 minutes in he demonstrated a programming environment that looks similar to a programming language design I sketched out last fall, except what I had in mind looked more like the “right side” of his interactive display, which showed example code, than the “left side.”

I was intrigued by an idea I heard Alan Kay express, by way of J.C.R. Licklider, of “communicating with aliens” as a general operating principle in computing. Kay said that one can think of it more simply as trying to communicate with another human being who doesn’t understand English. You can still communicate somewhat accurately by making gestures in a context. If I had to describe my sketch in concise terms, I’d call it “programming by suggestion.” I had a problem in mind of translating data from one format into another, and I thought rather than using a traditional approach of inputting data and then hard-coding what I wanted it translated into, “Why not try to express what I want to do using suggestions,” saying, “I want something like this…,” using some mutually recognizable patterns as primitives, and having the computer figure out what I want by “taking the hints,” and creating inferences about them. It sounds more like AI, though I was trying to avoid going down that path if I could, because I have zero skill in AI. Easier said than done. I haven’t given up on the idea, though as usual I’ve gone off on some other interesting diversions since then. I had an idea of building up to what I want from simpler system designs. I may yet continue to pursue that avenue.

I’ve used each problem I’ve encountered in pursuit of my technical goals as an opportunity to pursue an unexamined idea I consider “advanced” in difficulty. So I’m not throwing any of the ideas away. I’m just putting them on the shelf for a while, while I address what I think are some underlying issues, both with the idea itself, and my own knowledge of computing generally.

Bret hit it on the head for me about 35 minutes into his talk. He said that pursuing the way of life he described is a kind of social activism using technology. It’s not in the form of writing polemics, or trying to create a new law, or change an existing one. Rather, it’s creating a new way for computers to operate as a statement of principles. This gets to what I’ve been trying to do.

He said finding your principle is a journey of your own making, and it can take time to define. I really liked that he addressed the question of “why you’re here.” I’ve felt that pursuit very strongly since I was in high school, and I’ve gone down some dead ends since then. He said it took him 10 years to find his principle. Along the way he felt unsure just what he was pursuing. He saw things that bothered him, and things that interested him, and he paid attention to these things. It took time for him to figure out whether something was just interesting for a time, or whether it fit into his reason for being.

For me, I’ve been at this in earnest since 2006, when I was in my mid-30s, and I still don’t have a defined sense of what my principle of work is. One reason for that is I only started working along these lines a year and a half ago. I feel like a toddler. I have a sense of what my interest centers around, though: I want to see system design improved. I’ve stated the different aspects of this in my blog from time to time. Still, I find new avenues to explore, lately in non-technical contexts.

I’ve long had an interest in what makes a modern civilization tick. I thought of minoring in political science when I was in college. My advisor joked, “So, you’ll get your computer science degree, and then run for office, eh?” Not exactly what I had in mind… I had no idea how the two related in the slightest. I ended up not going through with the minor, as it required more reading than I could stomach. I took a few poli-sci courses in the meantime, though. In hindsight, I was interested in politics (and continue to be) because I saw that it was the way our society made decisions about how we were going to relate and live with each other in a future society. I think I also saw it as a way for society to solve its problems, though I’m increasingly dubious of that view. That interest still tugs at me, though recently it’s drawn me closer to studying the different ways that humans think, and/or perhaps how we can think better, as this relates a great deal to how we conduct politics, which affects how we relate to/live with each other as a society. I see this as mattering a great deal, so it may become a major avenue as I progress in trying to define what I’m doing. I don’t see politics as a goal of mine anymore. It’s more a means for getting to other pursuits I am developing on societal issues.

When Bret talked about what “hurt” him to see, this focus on politics came into view for me. When I think about the one thing that pains me, it’s seeing people use weak arguments (created using a weak outlook for the subject) to advance major causes that involve large numbers of people’s lives, because I think the end result can only be misguided, and perhaps dangerous to a free society in the long run. When I see this I feel compelled to intervene in the discussion in some way, and to try to educate the people involved in a better way of seeing the issue they’re concerned about, though I’m a total amateur at it. Words often fail to explain the ideas I’m trying to get across, because the people receiving them don’t understand the outlook I’m assuming, which suggests that I either need a different way of expressing those ideas, or I need to get into educating children about powerful outlooks, or both. Secondly, most adults don’t like to have their ideas challenged, so their minds are closed to new ideas right from the start. And I’ve realized that while it’s legitimate to try to address this principle which I try to act on, I need to approach how I apply it differently. Beating one’s head against a wall is not a productive use of time.

It has sometimes gotten me reflecting on how societies going back more than 100 years degenerated into regimented autocracies, which also used terribly weak arguments to justify their existence, with equally terrible consequences. We are not immune from such a fate. Such regimes were created by human beings just like us. I have some ideas for what I can do to address this concern, and that of computing at the same time, though I feel I now have a more realistic sense of the breadth of the effect I can accomplish within my lifetime, which is quite limited, even if I were to fully develop the principles for my work. A way has already been paved for what I describe by another techno-social activist, but I don’t have a real sense yet of what going down that avenue means, or even if reconciling the two is really what I want to do. As I said, I’m still in the process of finding all this out.

Thinking on this, a difference between myself and Bret (or at least what he’s expressed so far) is that like Alan Kay and Doug Engelbart, I think I see my work in a societal context that goes beyond the development of technology. Kay has dedicated himself to developing a better way to educate people in the great ideas humans have invented. He uses computers as a part of that, but it’s not based in techno-centrism. His goal is not just to create more powerful “mind amplifiers” or media. The reason he does it is as part of a larger goal of creating the locus for a better future society.

I am grateful to Bret for talking so openly about his principle, and how he arrived at it. It’s reassuring to be able to put a term to what I’m trying to achieve.

Related posts:

Coding like writing

The “My Journey” series

Why I do this

Christina Engelbart posted a page on her site mentioning places that commemorated the 45th anniversary of Doug Engelbart’s (her father’s) “Mother of All Demos.” The Computer History Museum in Mountain View, CA did an event on Dec. 9, the date of the demo, to talk about it, what came of it, and what from it has yet to be brought into our digital world. There have been other anniversary events for this demo in the past, but this one is kind of poignant, I think, because Doug Engelbart passed away in July. Maybe C-SPAN will be showing it?

I was intrigued by this line in the description of the CHM event:

Some of the main records of his laboratory at SRI are in the Museum’s collection, and form a crucial part of the CHM Internet History Program.

Since I heard about what an admirable job the Augmentation Research Center had done in building NLS architecturally, I’ve been curious to know: how can today’s developers take a look at what ARC did, so they can learn something from it?

The reason I particularly wanted to point out this commemoration, though, is that one of the pages Christina referenced was a post by Brad Neuberg demonstrating HyperScope, a modern, web-based system that implements some of the document and linking features of NLS and Engelbart’s original Augment system (NLS was renamed “Augment” in the late 1970s). Watching Engelbart’s demo gives one a flavor of what it was like to use it, and of its capabilities, but it doesn’t help the audience understand how it deals with information and how it operates (why it’s an important artifact) beyond being dazzled by what was accomplished 45 years ago. Watching Brad’s videos gives one a better sense of these things.

This presentation by Bret Victor could alternately be called “Back To The Future,” but he called it “The Future of Programming.” What’s striking is that this is still a possible future for programming, even though all of it was conceivable back in 1973, the year in which Bret set his talk.

I found this video through Mark Guzdial. It is the best presentation on programming and computing I’ve seen since I watched what Alan Kay has had to say on the subject. Bret did a “time machine” performance set in 1973 on what was accomplished by some great engineers working on computers in the 1960s and ’70s, and generally what those accomplishments meant to computing. I’ve covered some of this history in my post “A history lesson in government R&D, Part 2,” though I de-emphasized the programming aspect.

Bret posed as someone presenting new information in 1973 (complete with a faux overhead projector), trying to predict what the future of programming and computing would be like if the then-current thinking progressed, reasoning from those accomplishments.

The main theme, which has been a bit of a revelation to me recently, is this idea of programming being an exercise in telling the computer what you want, and having the computer figure out how to deliver it, and even computers figuring out how to get what they need from each other, without the programmer having to spell out how to do these things step by step. What Bret’s presentation suggested to me is that one approach to doing this is having a library of “solvers,” operating under a set of principles (or perhaps a single principle), that the computing system can invoke at any time, in a number of combinations, in an attempt to accomplish that goal; that operational parameters are not fixed, but relatively fluid.

Alan Kay talked about Licklider’s idea of “communicating with aliens,” and how it relates to computing, in this presentation he gave at SRII a couple of years ago. A “feature” of many of Alan’s videos is that you don’t get to see the slides he’s showing, which can make it difficult to follow what he’s talking about. So I’ll provide a couple of reference points. At about 17 minutes in, the “this guy” he talks about is Licklider, and a paper Licklider wrote called “Man-Computer Symbiosis.” At about 44 minutes in, I believe Alan is talking about the Smalltalk programming environment.

http://vimeo.com/22463791

I happened to find this passage in “The Dream Machine,” by M. Mitchell Waldrop, as well. It’s sourced from an article written by J.C.R. Licklider and Robert Taylor called “The Computer as a Communication Device,” published in Science & Technology in 1968, and reprinted in “In Memoriam: J.C.R. Licklider, 1915-1990,” Digital Systems Research Center Reports, vol. 61, 1990:

“Modeling, we believe, is basic and central to communications.” Conversely, he and Taylor continued, “[a successful communication] we now define concisely as ‘cooperative modeling’–cooperation in the construction, maintenance, and use of a model. [Indeed], when people communicate face to face, they externalize their models so they can be sure they are talking about the same thing. Even such a simple externalized model as a flow diagram or an outline–because it can be seen by all the communicators–serves as a focus for discussion. It changes the nature of communication: When communicators have no such common framework, they merely make speeches at each other; but when they have a manipulable model before them, they utter a few words, point, sketch, nod, or object.”

I remember many years ago hearing Alan Kay say that what’s happened in computing since the 1970s “has not been that exciting.” Bret Victor ably justified that sentiment. What he got across (to me, anyway) was a sense of tragedy that the thinking of the time did not propagate and progress from there. Academic computer science, and the computer industry acted as if this knowledge barely existed. The greater potential tragedy Bret expressed was, “What if this knowledge was forgotten?”

The very end of Bret’s presentation made me think of Bob Barton, a man Alan has sometimes referred to when talking about this subject. Barton was someone Alan seemed very grateful to have met as a graduate student, because Barton disabused the computer science students at the University of Utah of their notions of what computing was. Alan said that Barton freed their minds on the subject, which opened up a world of possibilities they could not previously see. In a way Bret tried to do the same thing by leaving the ending of his presentation open-ended. He did not say that meta-programming “is the future,” just that it’s one really interesting idea that was developed decades ago. Many people in the field think they know what computing and programming are, but these are still unanswered questions. I’d add to that message that what’s needed is a spirit of adventure and exploration. Like our predecessors, we’ll find some really interesting answers along the way, which will be picked up by others and incorporated into the devices the public uses, as has happened before.

I hope that students just entering computer science will see this and carry the ideas from it with them as they go through their academic program. What Bret presents is a perspective on computer science that is not shared by much of academic CS today, but if CS is to be revitalized one thing it needs to do is “get” what this perspective means, and why it has value. I believe it is the perspective of a real computer scientist.

Related post: Does computer science have a future?

—Mark Miller, http://tekkie.wordpress.com

I’ve been following José A. Ortega-Ruiz (he goes by “jao” for short) for about six years now, on his blog Programming Musings. Recently I noticed he wasn’t posting much, if at all. Finally he said he was starting up a new blog here, using a static web page tool written in Racket (a Scheme implementation) called “Frog,” for “frozen blog.” I found his second post, “Where my mouth is,” inspirational.

For many years, I’ve been convinced that programming needs to move forward and abandon the Algol family of languages that, still today, dampens the field. And that that forward direction has been signaled for decades by (mostly) functional, possibly dynamic languages with an immersive environment. But it wasn’t until recently that I was able to finally put my money where my mouth has been all these years.

A couple of years ago I finally became a co-founder and started working on a company I could call my own (I had been close in the past, but not really there), and was finally in a position to really influence our development decisions at every level. Even at the most sensitive of them all: language choice.

He says he’s working with a team of people who are curious, and are willing to take the risk of trying out less traditional, but as they perceive, more powerful programming environments, such as Clojure (a Lisp variant that runs on the JVM, and is pronounced “closure”). Read the rest of it.

I once aspired to work with a team like this. I had been working in IT services for several years, and I thought I’d continue on in that vein, with something like Smalltalk, starting my own little “web shop,” or just happening to find a company that was already creating IT systems with something like it. More recently I’ve been finding computing more interesting as a field of study, not in the typical sense that academic computer science teaches, but as a phenomenon unto itself: a machine that’s capable of generating, and acting like, a machine of a different kind from itself, simply by being fed patterns of a certain nature. As Alan Kay has demonstrated, looking at computing this way, you can still arrive at an environment that looks recognizable, but is structured vastly differently under the covers from the typical user-oriented system of today.

I’ve put two links in my links sidebar for jao’s blogs, “Programming Musings” (his old blog) and “Programming Musings 2,” since his old blog is rather like an archive of really interesting and valuable knowledge in computer science, and I encourage people to scan through it for anything they find of value.

I wish jao the best of luck in his new venture!

If, in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsive to every action you have, how much value could you derive from that?

— Doug Engelbart, introducing his NLS system in 1968

Doug Engelbart, from Wikipedia

I got the news from Howard Rheingold on Google+ that Doug Engelbart died last night. It is not a stretch to say that without Engelbart the experience we’ve had with computers for the last 30+ years would be very different. He was a pioneer in interactive computing, bringing computers out of the era of batch processing with punch cards and teletypes as the only means of using them, into an age where you could directly interact with one. He wanted so much more to come out of that, and he put that desire into his NLS system.

He is best known today as the man who invented the computer mouse, but in my opinion that makes light of what he did. He gave us mouse interaction with a graphical interface. He gave us the idea of a visual pointer (he called it a “bug,” a little something that flies or crawls around the screen), and information schemas that one could change on the fly to manipulate and organize information, save it to a personal file, retrieve it later, and share it with others in your group. He gave us hyperlinking, long before most of us understood what the concept was. His hyperlinks were more than what we have today: they not only pointed to information, but also specified how the creator of the link wanted that information displayed. He gave us a mixture of graphics and text on the screen at the same time, to enhance our ability to communicate. He gave us the idea of sub-second access to information, so that our brains could be “in flow,” without mental breaks in our thoughts caused by slow technology.

He also gave us collaborative computing with video conferencing. In an amazing display, when he demoed NLS, he brought in other participants, using a combination of analog technology and a digital network, who could work with Doug on the same document, live, at the same time. Using video cameras, television displays, microphones, and speakers, they could see each other, and talk to each other as they worked.

During the 1960s he thought about word processing at a time when it was just experimental, and how the idea could be used to make forming documents easier. He thought about online discussions, address books, online technical support, and online research libraries. He brought some of what he thought about into NLS.

If you’d like to watch the full presentation, you can look through an annotated version here.

All of this was created in 1968, before the Arpanet, the predecessor to the internet, came into being. The audience that saw this demo was dazzled. Some questioned whether it was real, whether it was all a mock up, because hardly anybody expected that computers could do these things. Another computing pioneer, Chuck Thacker, said of Engelbart that day that he was “dealing lightning with both hands.” Indeed he did.

There are more ideas he had, which he came up with more than 45 years ago, and were implemented in NLS, that have yet to be introduced into our digital world. He wanted systems like NLS to be a “vehicle,” as he called it, for implementing a method for improving “collective intelligence.”

Smeagol Studios came up with what I thought was a nice video summary of Engelbart’s work, and how it was expressed in products that we in the mass market came to use.

Thank you, Doug, for all that you gave us. You took a great leap in making computers approachable, which helped me fall in love with them.

Related links:

The Doug Engelbart Institute

From my blog:

A history lesson in government R&D, Part 2

Getting beyond paper and linear media

Does computer science have a future?

Tales of inventing the future

Great moments in modern computer history

Edit: A couple other links I thought to include:

Doug Engelbart and Ted Nelson come to dinner - This happened on Engelbart’s birthday last year. Howard Rheingold invited Engelbart and Nelson over, and just had them chew the fat on what they thought of the web.

More on getting beyond paper and linear media - Christina Engelbart, who is one of Doug’s daughters, and the director of the Doug Engelbart Institute, wrote a follow-up post to “Getting beyond paper and linear media,” on her blog, Collective IQ, talking about Doug’s concept of augmenting the human intellect.

I came across this interview with Lesley Chilcott, the producer of “An Inconvenient Truth” and “Waiting For Superman.” Kind of extending her emphasis on improving education, she produced a short 9-minute video selling the idea of “You should learn to code,” both to adults and children. It addresses two points: 1) the anticipated shortage of programmers needed to write software in the future, and 2) the increasing ubiquity of programming in all sorts of fields where people would think it wouldn’t exist, such as manufacturing and agriculture.

The interview gets interesting at 3 minutes 45 seconds in.

Michelle Fields, the interviewer, asked what I thought were some insightful questions. She started things off with:

It seems as though the next generation is so fluent in technology. How is it that they don’t know what computer programming is?

Chilcott said:

I think the reason is, you know, we all use technology every day. It’s surrounding us. Like, we can debate the pro’s and con’s of technology/social media, but the bottom line is it’s everywhere, right? So I think a lot of people know how to read it. They grow up playing with an iPhone or something like that, but they don’t know how to write it. And so when you say, “Do you know what this is,” specifically, or what this job is–and you know, those kids are in first, second, fifth grade–they know all about it, but they don’t know what the job is.

I found this answer confusing. She’s kind of on the right track, thinking of programming as “writing.” I cut her some slack, because as she admits in the interview, she’s just started programming herself. However, as I’ve said before, running software is not “reading.” It’s really more like being read to by a machine, like listening to an audio book, or someone else reading to you. You don’t have to worry about the mental tasks of pronunciation, sentence construction, or punctuation. You can just listen to the story. Running software doesn’t communicate the process that the code is generating, because there’s a lot that the person using it is not shown. This is on purpose, because most people use software to accomplish some utilitarian task unrelated to how a computer works. They’re not using it to understand a process.

The last sentence came across as muddled. I think what she meant was they know all about using technology, but they don’t know how to create it (“what the job is”).

Fields then asked,

There was this study which found that 56% of students would rather eat broccoli than learn math. Do you think that since computer programming is somewhat related to math, that that’s the reason children and students shy away from it?

Chilcott said:

It could be. That is one of the myths that exist. There is some, you know, math, but as Bill Gates and some other people said, you know, addition, subtraction–It’s much more about problem solving, and I think people like to problem-solve, they like mysteries, they like decoding things. It’s much more about that than complicated algorithms.

She’s right that there is problem solving involved with programming, but she’s either mistaken or confusing math with arithmetic when she says that the relationship between math and programming is a “myth.” I can understand why she tries to wave it off, because, as Fields pointed out, most students don’t like math. I contend, as do some mathematicians, that this is due to the way it’s taught in our schools. The essence of math gets lost. Instead it’s presented as a tool for calculation, and possibly as a cognitive development discipline for problem solving, neither of which communicates what it really is, and both of which strip away a lot of its beauty.

In reality math is pervasive in programming, but to understand why I say this, you have to understand that math is not arithmetic (addition, subtraction), as she suggests. This confusion is common in our society. I talk more about this here. Having said that, it does not mean that programming is hard right off the bat. The math involved has more to do with logic and reasoning. I like the message in the video below from a couple of the programmers interviewed: “You don’t have to be a genius to know how to code. … Do you have to be a genius to do math? No.” I think that’s the right way to approach this. Math is important to programming, but it’s not just about calculating a result. There’s some memorization involved in understanding a programming language’s rules and knowing what different things are called, but that’s not a big part of it.

The cool thing is you can accomplish some simple things in programming, to get started, without worrying about math at all. It becomes more important if you want to write complex programs, but that’s something that can wait.

My current understanding is the math in programming is about understanding the rules of a system and what statements used in that system imply, and then understanding the effects of those implications. That sounds complicated, but it’s just something that has to be learned to do anything significant with programming, and once learned will become more and more natural. I liken it to understanding how to drive a car on the road. You don’t have to learn this concept right away, though. When first starting out, you can just look at and enjoy the effects of trying out different things, exploring what a programming environment offers you.

Where Chilcott shines in the interview above is when she becomes the “organizer.” She said that even though 95% of the schools have computers and internet access, only 10% have what she calls a “computer science” course. (I wish they’d go back to calling it a “programming course.” Computer science is more than what most of these schools teach, but I’m being nit-picky.) The cool thing about Code.org, a web site she promotes, is that it tries to locate a school near you that offers programming courses. If there aren’t any, no problem. You can learn some basics of programming right inside your browser using the online tools that it offers on the site.

The video Chilcott produced is called “Code Stars” in the above interview, but when I went looking for it I found it under the name “the Code.org film,” or, “What Most Schools Don’t Teach.”

Here is the full 9-minute video:

If you want the shorter videos, you can find them here.

The programming environment you see kids using in these videos is called “Scratch.”

Gabe Newell said of programming:

When you’re programming, you’re teaching possibly the stupidest thing in the entire universe–a computer–how to do something.

I see where Newell is going with this, but from my perspective it depends on what programming environment you’re using. Some programming languages have the feel of you “teaching” the system when you’re programming. Others have the feel of creating relationships between simple behaviors. Others, still, have the feel of using relationships to set up rules for a new system. Programming comes in a variety of approaches. However, the basic idea that Newell gets across is true: computers only come with a set of simple operations, and that’s it. They don’t do very much by themselves, or even in combination. It’s important for those new to programming to learn this early on.

Some of my early experiences in programming match those of new programmers even today. One of them is that, when using a programming language, one is tempted to assume that the computer will infer the meaning of some programming expression from context. There is some context used in programming, but not much, and it’s highly formalized. It’s not intuitive. I remember that the first time I learned this, it was like the joke where someone introduces a friend to a dumb, witless character in a skit: “Say hi to my friend, Frank,” and the dummy says, “Hi to my friend Frank.” “NO! I mean… say hello,” with a hand gesture trying to get the two to connect, and the dummy might look at the friend and say, “Hello,” but that’s it. That’s kind of a realization to new programmers. Yeah, the computer has to have almost everything explained to it (or modeled), even things we do without thinking about it. It’s up to the programmer to make the connections between the few things the computer knows how to do, to make something larger happen.

Jack Dorsey talked about programming in a way that I think is important. His ultimate goal when he started out was to model something, and make the model malleable enough that he could manipulate it, because he wanted to use it for understanding how cities work.

Bill Gates emphasized control. This is a common early motivation for programmers. Not necessarily controlling people, but controlling the computer. What Gates was talking about was what I’d call “making your own world,” like Dorsey was saying, but he wanted to make it real. When I was in high school (late 1980s) it was a rather common project for aspiring programming students to create “matchmaking” programs, where boys and girls in the whole school would answer a simple questionnaire, and a computer program that a student had written would try to match them up by interests, not unlike some of the online dating sites that are out there now. I never heard of any students finding their true love through one of these projects, but it was fun for some people.

Vanessa Hurst said, “You don’t have to be a genius to know how to code. You need to be determined.” That’s pretty much it in a nutshell. In my experience everything else flowed from determination when I was learning how to do this. It will drive you to learn what you need to learn to get it, even if sometimes it’s subject matter you find tedious and icky. You learn to just push through it to get to the glorious feeling at the end of having accomplished what you set out to do.

Newell said at the end of the video,

The programmers of tomorrow are the wizards of the future. You’re going to look like you have magic powers compared to everybody else.

That’s true, but this has been true for a long time. In my professional work developing custom database solutions for business customers, I had the experience of being viewed like a magician, because customers didn’t know how I did what I did. They just appreciated the fact that I could do it. I really don’t mean to discourage anyone, because I still enjoy programming today, and I want to encourage people to learn programming, but I feel the need to say something here, because I don’t want people to get disillusioned: this status of “wizard” or “magician” is not always what it’s cracked up to be. It can feel great, but there is a flip side to it that can be downright frustrating. This is because people who don’t know a whit of what you do can get confused about what your true abilities are, and they can develop unrealistic expectations of you. I’ve found that wherever possible, the most pleasurable work environment is working among those who also know how to code, because we’re able to size each other up and assign tasks appropriately. I encourage those who are pursuing software development as a career to shoot for that.

A couple things I can say for being able to code are:

  • It makes you less of a “victim” in our technology world. Once you know how to do it, you have an idea about how other programs work, and the pitfalls they can fall into that might compromise your private information, allow a computer cracker to access it, or take control of your system. You don’t have to feel scared at the alarming “hacking” or phishing reports you hear on the news, because you can be choosy about what software you use based on how it was constructed, what it’s capable of, and how much power it gives you (not someone else), rather than basing a decision just on the features it has, or cool graphics and promotion. You can become a discriminating user of software.
  • You gain the power to create the things that suit you. You don’t have to use software that you don’t like, or that you think is being offered on unreasonable terms. You can create your own, and it can be whatever you want. It’s just a matter of the knowledge you’re willing to gather and the amount of energy you’re willing to put into developing the software.

Edit 5-20-2013: While I’m on this subject, I thought I should include this video by Mitch Resnick, who has been involved in creating Scratch at MIT. Similar to what Lesley Chilcott said above, he said, “It’s almost as if [users of new technologies] can read, but not write,” referring to how people use technology to interact. I disagreed with the notion, above, that using technology is the same as reading. Resnick hedged a bit on that. I can kind of understand why he might say this, because by running a Scratch program, it is like reading it, because you can see how code creates its results in the environment. This is not true, however, of much of the technology people use today.

Mark Guzdial asked a question a while back that I thought was important, because it brings this issue down to where a lot of people live. If the kind of literacy I’m going to talk about below is going to happen, the concept needs to be able to come down “out of the clouds” and become more pedestrian. Not to say that literacy needs to be watered down in toto (far from it), but that it should be possible to read and write to communicate everyday ideas and experiences without being super sophisticated about it. What Mark asked was, in the context of a computing medium, what would be the equivalent of a “note to grandma”? I remember suggesting Dan Ingalls’s prop-piston concept from his Lively Kernel demos as one candidate. Resnick provided what I thought were some other good ones, but in the context of Mother’s Day.

Context reversal

The challenge that faces new programmers today is different from when I learned programming as a child in one fundamental way. Today, kids are introduced to computers before they enter school. They’re just “around.” If you’ve got a cell phone, you’ve got a computer in your pocket. The technology kids use presents them with an easy-to-use interface, but the emphasis is on use, not authoring. There is so much software around it seems you can just wish for it, and it’s there. The motivation to get into programming has to be different than what motivated me.

When I was young the computer industry was still something new. It was not widespread. Most computers that were around were big mainframes that only corporations and universities could afford and manage. When the first microcomputers came out, there wasn’t much software for them. It was a lot easier to be motivated to learn programming, because if you didn’t write it, it probably didn’t exist, or it was too expensive to get (depending on your financial circumstances). The way computers operated was more technical than they are today. We didn’t have graphical user interfaces (at first). Everything was done from some kind of text command line interface that filled the entire screen. Every computer came with a programming language as well, along with a small manual giving you an introduction on how to use it.

PC-DOS, from Wikipedia

It was expected that if you bought a computer you’d learn something about programming it, even if it was just a little scripting. Sometimes the line between what was the operating system’s command line interface, and what was the programming language was blurred. So even if all you wanted to do was manipulate files and run programs, you were learning a little about programming just by learning how to use the computer. Some of today’s software developers came out of that era (including yours truly).

Computer and operating system manufacturers had stopped including programming languages with their systems by the mid-1990s. Programming languages had also been taken over by professionals. The typical languages used by developers were much harder to learn for beginners. There were educational languages around, but they had fallen behind the times. They were designed for older personal computer systems, and when the systems got more sophisticated no one had come around to update them. That began to be remedied only in the last 10 years.

Computer science was still a popular major at universities in the 1990s, due to the dot-com craze. When that bubble burst in 2000, that went away, too. So in the last 18 years we’ve had what I’d call an “educational programming winter.” Maybe we’ll see a revival. I hope so.

Literacy reconsidered

I’m directing the rest of this post to educators, because there are some issues around a programming revival I’d like to address. I’m going to share some more detailed history, and other perspectives on computer programming.

What many may not know is that we as a society have already gone through this once. From the late 1970s to the mid-1980s there was a major push to teach programming in schools as “computer literacy.” This was the regime that I went through. The problem was some mistakes were made, and this caused the educational movement behind it to collapse. I think the reason this happened was due to a misunderstanding of what’s powerful about programming, and I’d like educators to evaluate their current thinking in light of this, so that hopefully they do not repeat the mistakes of the past.

As I go through this part, I’ll mostly be quoting from a Ph.D. thesis written by John Maxwell in 2006 called “Tracing the Dynabook: A Study of Technocultural Transformations.” (h/t Bill Kerr)

Back in the late 1970s, microcomputers/personal computers were taking off like wildfire, with Apple IIs and Commodore VIC-20s, and later, Commodore 64s and IBM PCs. They were seen as “the future.” Parents didn’t want their children to be “left behind” as “technological illiterates.” This was the first time computers were being brought into the home. It was also the first time many schools were able to grant students access to computers.

Educators thought about the “benefits” of using a computer for certain cognitive and social skills.  Programming spread in public school systems as something to teach students. Fred D’Ignazio wrote in an article called “Beyond Computer Literacy,” from 1983:

A recent national “computers in the schools” survey conducted by the Center for the Social Organization of Schools at Johns Hopkins University found that most secondary schools are using computers to teach programming. … According to the survey, the second most popular use of computers was for drill and practice, primarily for math and language arts. In addition, the majority of the teachers who responded to the survey said that they looked at the computer as a “resource” rather than as a “tool.”

…Another recent survey (conducted by the University of Maryland) echoes the Johns Hopkins survey. It found that most schools introduce computers into the curriculum to help students become literate in computer technology. But what does this literacy entail?

Because of the pervasive spread of computers throughout our society, we have all become convinced that computers are important. From what we read and hear, when our kids grow up almost everyone will have to use computers in some aspect of their lives. This makes computers, as a subject, not only important, but also relevant.

An important, relevant subject like computers should be part of a school’s curriculum. The question is how “Computers” ought to be taught.

Special computer classes are being set up so that students can play with computers, tinker with them, and learn some basic programming. Thus, on a practical level, computer literacy turns out to be mere computer exposure.

But exposure to what? Kids who are now enrolled in elementary and secondary schools are exposed to four aspects of computers. They learn that computers are programmable machines. They learn that computers are being used in all areas of society. They learn that computers make good electronic textbooks. And (something they already knew), they learn that computers are terrific game machines.

… According to the surveys, real educational results have been realized at schools which concentrate on exposing kids to computers. … Kids get to touch computers, play with them, push their buttons, order them about, and cope with computers’ incredible dumbness, their awful pickiness, their exasperating bugs, and their ridiculous quirks.

The main benefits D’Ignazio noted were ancillary. Students stayed at school longer, came in earlier, and stayed late. They were more attentive to their studies, and the computers fostered a sense of community, rather than competition and rivalry. If you read his article, you get a sense that there was almost a “worship” of computers on the part of educators. They didn’t understand what they were, or what they represented, but they were so interesting! There’s a problem there… When people are fascinated by something they don’t understand, they tend to impose meanings on it that are not backed by evidence, and so miss the point. The mistaken perceptions can be strengthened by anecdotal evidence (one of the weakest kinds). This is what happened to programming in schools.

The success of the strategy of using computers to try to improve higher-order thinking was illusory. John Maxwell’s telling of the “life and death of Logo” (my phrasing) serves as a useful analog to what happened to programming in schools generally. For those unfamiliar with it, the basic concept of Logo was a programming environment in which the student manipulates an object called a “turtle” via commands. The student can ask the turtle to rotate and move, and as it moves it drags a pen behind it, tracing its trail (see the sketch below for the flavor of it). Other versions of the language added more capabilities, allowing further exploration of the concepts for which it was created. Seymour Papert, who taught children using Logo, originally intended it as a way to teach young children sophisticated math concepts, but our educational system imposed a very different definition and purpose on it. Just because something is created on a computer with the intent of it being used for a specific purpose doesn’t mean that others can’t use it for completely different, and possibly less valuable, purposes. We’ve seen this a lot with computers over the years; people “misusing” them for both constructive and destructive ends.
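To give a flavor of what working with the turtle was like, here’s a minimal sketch using Python’s built-in turtle module, which is directly modeled on Logo’s turtle graphics (the numbers are arbitrary, and this is Python rather than Logo, so take it as an illustration of the idea, not the original system):

    import turtle

    # Create a turtle; as it moves, it drags a pen, tracing its trail.
    t = turtle.Turtle()

    # Ask the turtle to move and rotate four times, drawing a square.
    for _ in range(4):
        t.forward(100)  # move forward 100 units, drawing a line
        t.right(90)     # rotate 90 degrees clockwise

    turtle.done()       # keep the drawing window open

Part of Papert’s insight was that a child who already knows how to walk a square can “teach” the turtle to draw one, making the geometry of the figure something to think with rather than something to memorize.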

As I go forward with this, I just want to put out a disclaimer: I don’t have answers to the problems I point out here. I point them out to make people aware of them, to give pause to those pursuing putting students through this again, and to point to some people who are working on finding answers. I present some of their learned opinions. I encourage interested readers to read up on what these people have had to say about the use of computers in education, and perhaps contact them to learn more about what they’ve found out.

I ask the reader to pay particular attention to the “benefits” that educators imposed on the idea of programming during the period Maxwell talks about, via what Papert called “technocentrism.” You hear this being echoed in the videos above. As you go through this, I also want you to notice that Papert, and another educator by the name of Alan Kay, who have both thought a lot about what computers represent, have a very different idea about the importance of computers and programming than is typical in our school system, and in the computer industry.

The spark that started Logo’s rise in the educational establishment was the publication of Papert’s book, “Mindstorms: Children, Computers, and Powerful Ideas” in 1980. Through the process of Logo’s promotion…

Logo became in the marketplace (in the broad sense of the word) [a] particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes “computer literacy.” These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the “thinking skills” strategy was never part of Papert’s vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon.

It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm, there would come a backlash. It was also perhaps inevitable given the weight that was put on it: Logo had come, within educational circles, to represent computer programming in the large, despite Papert’s frequent and eloquent statements about Logo’s role as an epistemological resource for thinking about mathematics. [my emphasis -- Mark] In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo “didn’t work as promised,” as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history. Papert indeed found himself searching for higher ground, as he accused Logo’s growing numbers of critics of technocentrism:

“Egocentrism for Piaget does not mean ‘selfishness’—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like ‘What is THE effect of THE computer on cognitive development?’ or ‘Does Logo work?’ … such turns of phrase often betray a tendency to think of ‘computers’ and ‘Logo’ as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology.”

But by 1990, the damage was done: Logo’s image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named “Logo-as-Latin” a past paradigm of educational computing. The blunt idea that “programming” was an activity which could lead to “higher order thinking skills” (or not, as it were) had obviated Papert’s rich and subtle vision of an ego-syntonic mathematics.

By the early 1990s … Logo—and with it, programming—had faded.

The message–or black box–resulting from the rise and fall of Logo seems to have been the notion that “programming” is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. (pp. 183-185)

To be clear, the last part of the quote refers only to the educational value placed on programming by our school system. When educators attempted to formally study and evaluate programming’s benefits on higher-order thinking and the like, they found it wanting, and so most schools gradually dropped teaching programming in the 1990s.

Maxwell addresses the conundrum of computing and programming in schools, and I think what he says is important to consider as people try to “reboot” programming in education:

[The] critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage with the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary ‘humanist’ stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of “the technology must be in the service of the educational objectives and not the other way around.” The determinist critique, in turn, says, ‘the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life’ (for variations, see, Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

“Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become ‘comfortable’ using it. [...] We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things–writing and speech, images and sound–every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity?”

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between ‘expert’ and ‘end-user;’ for the ‘end-user’ is profoundly she who is unchanged, uninitiated, unempowered.

A seemingly endless literature describes study after study, project after project, trying to identify what really ‘works’ or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time.

But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don’t amount to much. What is really wrong with computers in education is that for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had are empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things.

But clearly, it would be inane to try to claim that computing technology–one of the most influential defining forces in Western culture of our day, and which shows no signs of slowing down–has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don’t really understand how. And so what shall we do, right now?

It is not that there haven’t been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology “paradigm” as Timothy Koschmann put it. (pp. 18-19)

The answer is not a “reboot” of programming, but rather a rethinking of it. Maxwell makes a humble suggestion: that educators stop being so blinded by “the shiny new thing,” or some so-called “new” idea, that they lose their ability to think clearly about what’s being done with computers in education, and that they engage with history and historicism. He said the technology field has had a problem with its own history, and this tends to bleed over into how educators regard it. The tendency is to forget the past, and to downplay it (“That was neat then, but it’s irrelevant now”).

In my experience, people have associated technology’s past with memories of using it. They’ve given little if any thought to what it represented. They take for granted what it enabled them to do, and do not consider what that meant. Maxwell said that this…

…makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics. We are faced with a tangle of hobbles–instrumentalism, ahistoricism, fear of transformation, Snow’s “two cultures,” and a consumerist subjectivity.

An examination of the history of educational technology–and educational computing in particular–reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert’s philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. (p. 20)

Maxwell went into what Alan Kay thought about the subject. Kay has spent almost as many years as Papert working on a meaningful context for computing and programming within education. Some of the quotes Maxwell uses are from “The Early History of Smalltalk,” (h/t Bill Kerr) which I’ll also refer to. The other sources for Kay’s quotes are included in Maxwell’s bibliography:

What is Literacy?

“The music is not in the piano.” — Alan Kay

The past three or four decades are littered with attempts to define “computer literacy” or something like it. I think that, in the best cases, at least, most of these have been attempts to establish some sort of conceptual clarity on what is good and worthwhile about computing. But none of them have won large numbers of supporters across the board.

Kay’s appeal to the historical evolution of what literacy has meant over the past few hundred years is, I think, a much more fruitful framing. His argument is thus not for computer literacy per se, but for systems literacy, of which computing is a key part.

That this is a massive undertaking is clear … and the size of the challenge is not lost on Kay. Reflecting on the difficulties they faced in trying to teach programming to children at PARC in the 1970s, he wrote that:

“The connection to literacy was painfully clear. It is not just enough to learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate the mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas.”

Because literature is about ideas, Kay connects the notion of literacy firmly to literature:

“What is literature about? Literature is a conversation in writing about important ideas. That’s why Euclid’s Elements and Newton’s Principia Mathematica are as much a part of the Western world’s tradition of great books as Plato’s Dialogues. But somehow we’ve come to think of science and mathematics as being apart from literature.”

There are echoes here of Papert’s lament about mathophobia, not fear of math, but the fear of learning that underlies C.P. Snow’s “two cultures,” and which surely underlies our society’s love-hate relationship with computing. Kay’s warning that too few of us are truly fluent with the ways of thinking that have shaped the modern world finds an anchor here. How is it that Euclid and Newton, to take Kay’s favourite examples, are not part of the canon, unless one’s very particular scholarly path leads there? We might argue that we all inherit Euclid’s and Newton’s ideas, but in distilled form. But this misses something important … Kay makes this point with respect to Papert’s experiences with Logo in classrooms:

“Despite many compelling presentations and demonstrations of Logo, elementary school teachers had little or no idea what calculus was or how to go about teaching real mathematics to children in a way that illuminates how we think about mathematics and how mathematics relates to the real world.” (Maxwell, pp. 135-137)

Just a note of clarification: I refer back to what Maxwell said regarding Logo and mathematics. Papert did not use his language to teach programming as an end in itself. His goal was to use the computer to teach mathematics to children; programming with Logo was the means for doing it. This is an important concept to keep in mind as one considers what role computer programming plays in education.

The problem, in Kay’s portrayal, isn’t “computer literacy,” it’s a larger one of familiarity and fluency with the deeper intellectual content; not just that which is specific to math and science curriculum. Kay’s diagnosis runs very close to Neil Postman’s critiques of television and mass media … that we as a society have become incapable of dealing with complex issues.

“Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in [Thomas] Paine’s Common Sense and being able (and happy) to fluently write a critique or defense of it.” (Maxwell, p. 137)

Extending this quote (from “The Early History of Smalltalk”), Kay went on to say:

Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

At the liberal arts level we would expect that connections between each of the fluencies would form truly powerful metaphors for considering ideas in the light of others.
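To make Kay’s example concrete: the “systems simulation of the disease” he describes really can be tiny. Here is a minimal sketch in Python (all the numbers are invented for illustration); the point is that a reader can change the growth rate, or add assumptions, and immediately see the ramifications, which is exactly what a static essay can’t offer:

    # A toy epidemic model: each day the number of infected people
    # grows by a fixed factor, producing an exponential progression.
    # All parameters are made up for illustration.
    population = 1_000_000
    infected = 1.0
    growth_rate = 1.4  # 40% more infections each day

    for day in range(1, 31):
        infected = min(infected * growth_rate, population)
        print(f"day {day:2d}: about {int(infected):,} infected")

Run it, and the “disastrous exponential relationship” announces itself: one case becomes tens of thousands within a month.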

Continuing with Maxwell (and Kay):

“Many adults, especially politicians, have no sense of exponential progressions such as population growth, epidemics like AIDS, or even compound interest on their credit cards. In contrast, a 12-year-old child in a few lines of Logo [...] can easily describe and graphically simulate the interaction of any number of bodies, or create and experience first-hand the swift exponential progressions of an epidemic. Speculations about weighty matters that would ordinarily be consigned to common sense (the worst of all reasoning methods), can now be tried out with a modest amount of effort.”

Surely this is far-fetched; but why does this seem so beyond our reach? Is this not precisely the point of traditional science education? We have enough trouble coping with arguments presented in print, let alone simulations and modeling. Postman’s argument implicates television, but television is not a techno-deterministic anomaly within an otherwise sensible cultural milieu; rather it is a manifestation of a larger pattern. What is wrong here has as much to do with our relationship with print and other media as it does with television. Kay noted that “In America, printing has failed as a carrier of important ideas for most Americans.” To think of computers and new media as extensions of print media is a dangerous intellectual move to make; books, for all their obvious virtues (stability, economy, simplicity) make a real difference in the lives of only a small number of individuals, even in the Western world. Kay put it eloquently thus: “The computer really is the next great thing after the book. But as was also true with the book, most [people] are being left behind.” This is a sobering thought for those who advocate public access to digital resources and lament a “digital divide” along traditional socioeconomic lines. Kay notes,

“As my wife once remarked to Vice President Al Gore, the ‘haves and have-nots’ of the future will not be caused so much by being connected or not to the Internet, since most important content is already available in public libraries, free and open to all. The real haves and have-nots are those who have or have not acquired the discernment to search for and make use of high content wherever it may be found.” (Maxwell, pp. 138-139)

I’m still trying to understand myself what exactly Alan Kay means by “literature” in the realm of computing. He said that it is a means for discussing important ideas, but in the context of computing, what ideas? I suspect from what’s been said here he’s talking about what I’d call “model content,” thought forms, such as the idea of an exponential progression, or the concept of velocity and acceleration, which have been fashioned in science and mathematics to describe ideas and phenomena. “Literature,” as he defined it, is a means of discussing these thought forms–important ideas–in some meaningful context.

In prior years he had worked on this in his Squeak environment, with some educators. They would show children a car moving across the screen, dropping dots as it went, illustrating velocity, and then, by modifying the model, acceleration. Then they would show them Galileo’s experiment of dropping heavy and light balls from the roof of a building (real balls from a real building), record the ball dropping, and let the children view the video while simultaneously modeling it via programming, discovering that the same principle of acceleration applied there as well. Thus they could see in a couple of contexts how the principle worked, how they could recognize it, and how it related to the real world. The idea was that they could grasp the concepts that make up the idea of acceleration, and then integrate it into their thinking about other important matters they would encounter in the future.
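For the gist of those models, here’s a minimal sketch, assuming a simple per-tick update (the Squeak lessons did this graphically, with a car dropping dots on the screen; here the “dots” are just printed positions, and drop_dots is a name I’ve made up):

    # Each tick the object moves by its current speed, then the speed
    # is increased by the acceleration. With accel=0 the positions are
    # evenly spaced (constant velocity); with accel>0 the spacing grows
    # each tick, which is Galileo's result for falling bodies.
    def drop_dots(speed, accel, ticks=8):
        position = 0
        for tick in range(ticks):
            position += speed
            speed += accel
            print(f"tick {tick}: position {position}")

    drop_dots(speed=10, accel=0)   # uniform velocity: even spacing
    drop_dots(speed=0, accel=10)   # uniform acceleration: spacing grows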

Maxwell quoted the author Andrea diSessa to get deeper into the concept of literacy, specifically what literacy in a type of media offers our understanding of issues:

The hidden metaphor behind transparency–that seeing is understanding–is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don’t present an unadulterated “picture” of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than “seeing and directly manipulating.”

What’s a good goal for computing?

The temptation in teaching and learning programming is to get students familiar enough with the concepts and a language that they can start creating things with it. But create what? The typical approaches are to let students tinker, and/or to have them create applications that gradually become more complex and feature-rich, with the idea of building confidence and competence with increasing complexity. The latter is not a bad idea in itself, but listening to Alan Kay has led me to believe that starting off with this is the equivalent of jumping to a conclusion too quickly, and misses the point of what’s powerful about computers and programming.

I like what Kay said in “The Early History of Smalltalk” about this:

A twentieth century problem is that technology has become too “easy.” When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

Edit 4-5-2013: I thought I should point out that this quote has some nuance to it that people might miss. I don’t believe Kay is saying that “programming should be hard.” Quite the contrary; one can observe from his designs that he’s advocated the opposite. That’s not to say technology should simply mold itself to what is “natural” for humans. It might require some training and practice, but once mastered, it should magnify or enhance human capabilities, making previously difficult or tedious tasks easier to accomplish and incorporate into a larger goal.

Kay was making an observation about the history of technology’s relationship to society: when useful technology was hard to build, enough time and effort went into it that its creators usually made it well. What he’s pointing out is that people generally treat the presence of technology as a crutch, making immediate use of it toward some other goal that has little to do with what the technology represents, rather than as an invitation to revisit it, criticize its design, and try to make it better. The crutch is an easy sell, because everyone likes something that makes their lives easier (or seems to), but we rob ourselves of something important if that becomes the only end goal. What I see him proposing is that people with some skill should impose a high standard of design on themselves, drawing inspiration from how the best art humanity has produced was developed and nurtured, while guarding against feeling small, inadequate, and overwhelmed by the challenge.

Maxwell (and Kay) explain further why this idea of “literacy” as being able to understand and communicate important ideas, which includes ideas about complexity, is something worth pursuing:

“If we look back over the last 400 years to ponder what ideas have caused the greatest changes in human society and have ushered in our modern era of democracy, science, technology and health care, it may come as a bit of a shock to realize that none of these is in story form! Newton’s treatise on the laws of motion, the force of gravity, and the behavior of the planets is set up as a sequence of arguments that imitate Euclid’s books on geometry.”

The most important ideas in modern Western culture in the past few hundred years, Kay claims, are the ones driven by argumentation, by chains of logical assertions that have not been and cannot be straightforwardly represented in narrative. …

But more recent still are forms of argumentation that defy linear representation at all: ‘complex’ systems, dynamic models, ecological relationships of interacting parts. These can be hinted at with logical or mathematical representations, but in order to flesh them out effectively, they need to be dynamically modeled. This kind of modeling is in many cases only possible once we have computational systems at our disposal, and in fact with the advent of computational media, complex systems modeling has been an area of growing research, precisely because it allows for the representation (and thus conception) of knowledge beyond what was previously possible. In her discussion of the “regime of computation” inherent in the work of thinkers like Stephen Wolfram, Edward Fredkin, and Harold Morowitz, N. Katherine Hayles explains:

“Whatever their limitations, these researchers fully understand that linear causal explanations are limited in scope and that multicausal complex systems require other modes of modeling and explanation. This seems to me a seminal insight that, despite three decades of work in chaos theory, complex systems, and simulation modeling, remains underappreciated and undertheorized in the physical sciences, and even more so in the social sciences and humanities.”

Kay’s lament too is that though these non-narrative forms of communication and understanding–both in the linear and complex varieties–are key to our modern world, a tiny fraction of people in Western society are actually fluent in them.

“In order to be completely enfranchised in the 21st century, it will be very important for children to become fluent in all three of the central forms of thinking that are now in use. [...] the question is: How can we get children to explore ways of thinking beyond the one they’re ‘wired for’ (storytelling) and venture out into intellectual territory that needs to be discovered anew by every thinking person: logic and systems ‘eco-logic?’” …

In this we get Kay’s argument for ‘what computers are good for’ … It does not contradict Papert’s vision of children’s access to mathematical thinking; rather, it generalizes the principle, by applying Kay’s vision of the computer as medium, and even metamedium, capable of “simulating the details of any descriptive model.” The computer was already revolutionizing how science is done, but not general ways of thinking. Kay saw this as the promise of personal computing, with millions of users and millions of machines.

“The thing that jumped into my head was that simulation would be the basis for this new argument. [...] If you’re going to talk about something really complex, a simulation is a more effective way of making your claim than, say, just a mathematical equation. If, for example, you’re talking about an epidemic, you can make claims in an essay, and you can put mathematical equations in there. Still, it is really difficult for your reader to understand what you’re actually talking about and to work out the ramifications. But it is very different if you can supply a model of your claim in the form of a working simulation, something that can be examined, and also can be changed.”

The computer is thus to be seen as a modeling tool. The models might be relatively mundane–our familiar word processors and painting programs define one end of the scale–or they might be considerably more complex. [my emphasis -- Mark] It is important to keep in mind that this conception of computing is in the first instance personal–”personal dynamic media”–so that the ideal isn’t simulation and modeling on some institutional or centralized basis, but rather the kind of thing that individuals would engage in, in the same way in which individuals read and write for their own edification and practical reasons. This is what defines Kay’s vision of a literacy that encompasses logic and systems thinking as well as narrative.

And, as with Papert’s enactive mathematics, this vision seeks to make the understanding of complex systems something to which young children could realistically aspire, or that school curricula could incorporate. Note how different this is from having a ‘computer-science’ or an ‘information technology’ curriculum; what Kay is describing is more like a systems-science curriculum that happens to use computers as core tools:

“So, I think giving children a way of attacking complexity, even though for them complexity may be having a hundred simultaneously executing objects–which I think is enough complexity for anybody–gets them into that space in thinking about things that I think is more interesting than just simple input/output mechanisms.” (Maxwell, pp. 132-135)

I wanted to highlight the part about “word processors” and “paint programs,” because the idea being discussed is not limited to simulating real-world phenomena. It could be incorporated into simulating “artificial phenomena” as well. It’s a different way of looking at what you are doing and creating when you are programming. It takes you away from asking, “How do I get this thing to do what I want?” and redirects you to, “What entities do we want to make up this desired system, what are they like, and how can they interact to create something that we can recognize, or that otherwise leverages human capabilities?”

Maxwell said that computer science is not the important thing. Rather, what’s important is what computer science makes possible: “the study and engagement with complex or dynamic systems–and it is this latter issue which is of key importance to education.” Think about this in relation to what we do with reading and writing. We don’t learn to read and write just to be able to write characters in some sequence, and for others to read what we’ve written. We have events and ideas, and, perhaps more esoteric to this subject, emotions and poetry, that we write about. That’s why we learn to read and write. It’s the same with computer science. If we as a society value it for communicating ideas, it’s pretty worthless if it’s just about learning to read and write code. To make the practice truly valuable to society, we need to have content, ideas, to read and write about in code. There’s a lot that can be explored with that idea in mind.

Characterizing Alan Kay’s vision for personal computing, Maxwell talked about Kay’s concept of the Dynabook:

Alan Kay’s key insight in the late 1960s was that computing would become the practice of millions of people, and that they would engage with computing to perform myriad tasks; the role of software would be to provide a flexible medium with which people could approach those myriad tasks. … [The] Dynabook’s user is an engaged participant rather than a passive, spectatorial consumer—the Dynabook’s user was supposed to be the creator of her own tools, a smarter, more capable user than the market discourse of the personal computing industry seems capable of inscribing—or at least has so far, ever since the construction of the “end-user” as documented by Bardini & Horvath. (p. 218)

Kay’s contribution begins with the observation that digital computers provide the means for yet another, newer mode of expression: the simulation and modeling of complex systems. What discursive possibilities does this new modality open up, and for whom? Kay argues that this latter communications revolution should in the first place be in the hands of children. What we are left with is a sketch of a possible new literacy; not “computer literacy” as an alternative to book literacy, but systems literacy—the realm of powerful ideas in a world in which complex systems modelling is possible and indeed commonplace, even among children. Kay’s fundamental and sustained admonition is that this literacy is the task and responsibility of education in the 21st century. The Dynabook vision presents a particular conception of what such a literacy would look like—in a liberal, individualist, decentralized, and democratic key. (p. 262)

I would encourage interested readers to read Maxwell’s paper in full. He gives a rich description of the problem of computers in the educational context, giving a much more detailed history of it than I have here, and what the best minds on the subject have tried to do to improve the situation.

The main point I want to get across is this: if we as a society really want the greatest impact out of what computers can do for us, beyond just being tools that do canned but useful things, I implore educators to see computers and programming environments more as apparatus, instruments, and media, rather than as agents and tools. By media I mean the computers and programming environments themselves, not what’s “played” on them; languages and metaphors are the media’s means of expression, not just a means to some non-expressive end. Sure, there will be room for them to function as agents and tools, but the main focus I see as important in this subject area is how the machine helps facilitate substantial pedagogies and illuminates epistemological concepts that would otherwise be difficult or impossible to communicate.

—Mark Miller, http://tekkie.wordpress.com

Looking back at 2012

I spent a lot of time helping out my mother last year. A good part of it has been that she’s shifting to doing research online, using my computer. It’s not so much that she wants to do this. She’s been a techno-phobe for as long as I’ve known her. It’s that our society has moved a lot online, so it’s now a requirement for her. She’s been getting used to this, and has even come to like it a little. So I’ve been assisting her with it, and many other things not related to the techie world. She and I have looked into getting her a low-cost laptop of some sort, though she hasn’t settled on anything yet.

In addition, I’ve been trying to wrap my mind around an actual computing artifact (an 8-bit computer and its operating system–starting small), and some concepts of creating a simple virtual machine and a compiled language to go with it.

All of this is explaining why I’ve had less time to write. :) I looked back at my posts for 2012, and was a bit surprised to find I had only written six. I’m definitely slowing down on my writing, though I have no intention of discontinuing it.

Here are the top 5 most read posts for 2012. As I’ve said before, this only reflects what’s been getting attention. None of these are posts I wrote last year:

1. Exploring the meaning of Tron Legacy, 10 comments

2. Does computer science have a future?, 13 comments

3. A history of the Atari ST and Commodore Amiga, 5 comments

4. Remembering Steve Jobs and Apple Computer, 0 comments

5. Great moments in modern computer history, 8 comments

I discovered this week that a bunch of videos I had embedded in past posts had “broken.” They were all from Google Video. Some of them were ones I had posted to it.

Many times the videos I embed in my posts are crucial to their meaning, so I was wondering what I was going to do with the posts that used them. I wanted to be able to work on them without my incomplete edits going “public,” so I took down the following posts for a few days:

The death of certainty and the birth of computer science

I’m not a scientist, but I play one on TV…

The computer as medium

“Reminiscing” series, parts 1, 2, 3, and 4

Saying goodbye to someone I never knew

Redefining computing, Part 2

Exploring Squeak and Seaside

I have revamped them, getting rid of videos that have disappeared, and dead links. I found many of the same videos I had used before, posted somewhere else. I also revised some of the text. I’ve re-posted the above articles.

I frequent YouTube, and I remember seeing an invitation on there a while back to merge my Google videos into my YouTube account. YouTube didn’t mention a thing about Google Video going away. From what I remember, I didn’t take their offer, because I had assumed YouTube limited the length of most videos to 10 minutes. The whole reason I had posted videos to Google’s service was they didn’t have a length limit.

Doing some research on this, I discovered that Google had shut down its video service entirely this past May, and had stopped accepting new video uploads a few years before that. There is still a “Google Video” service now, but it functions like a normal Google search; it just limits its results to other video sites, much like its blog search.

As I was working on my posts, I was pleasantly surprised to discover that the videos I had posted to Google’s service had in fact been merged into my YouTube account, as private videos, and none of them had been truncated. I don’t recall being notified of this. Even so, I was thankful to see they were still around. One less thing I had to think about.

There is still one video I would’ve liked to “recover” in all this, but I can’t find it anywhere: Alan Kay’s presentation to a group of teachers, called “What is Squeak?”

The death of Neil Armstrong, the first man to walk on the Moon, on August 25 got me reflecting on what was accomplished by NASA during his time. I found a YouTube channel called “The Conquest of Space,” and it’s been wonderful getting acquainted with the history I didn’t know.

I knew about the Apollo program from the time I was a kid in the 1970s. I was born two months after Apollo 11, so I only remember it in hindsight. By the time I was old enough to be conscious of the Apollo program, it had been mothballed for four or five years, but I could not be ignorant of it. It was talked about often on TV, and in the society around me. I lived in Virginia, near Washington, D.C., in my early childhood, and I remember being taken regularly to the Smithsonian Air and Space Museum. Of all of the Smithsonian’s museums, it was my favorite. There, I saw video of one of the moon walks, the space suits used for the missions (displayed on mannequins), and the Command Module and Lunar Module at full scale: artifacts of a time that had come and gone. There was hope that someday we would go back to the Moon, and go beyond it to the planets. The Air and Space Museum had an IMAX movie that played continuously, called “To Fly.” From what I’ve read, they still show it. It was produced for the museum in 1976, and I remember watching it a bunch of times. It was beautifully done, though looking back on it, it had the feel of a “demo” movie, showing off what could be done with the IMAX format. It dramatizes the history of flight, from hot air balloons in the 19th century, to the jet age, to rockets to the Moon. A cool thing about it is it talked about the change in perspective that flight offered, a “new eye.” At the end it predicted that we would have manned space missions to the planets.

Why wouldn’t we have manned missions that venture to the planets, and ultimately, perhaps a hundred years off, to other star systems? It would just be an extension of the advancements in flight we had made on earth. The idea that we would keep pushing the boundaries of our reach seemed like a given, that this technological pace we had experienced would just keep going. That’s what everything that was science-oriented was telling me. Our future was in space.

In the late 1970s Carl Sagan produced a landmark series on science called “Cosmos.” He talked about the history of space exploration, mostly from the ground, and how our destiny was to travel into space. He said, introducing the series,

The surface of the earth is the shore of the cosmic ocean. On this shore we’ve learned most of what we know. Recently we’ve waded a little way out, maybe ankle deep, and the water seems inviting. Some part of our being knows this is where we came from. We long to return, and we can…

Winding down?

As I got into my twenties, in the 1990s, I started to worry about NASA’s robustness as a space program. It started to look like a one-trick pony that only knew how to launch astronauts into low-earth orbit. “When are we going to return to the Moon,” I’d ask myself. NASA sent probes out to Jupiter, Mars, and then Saturn, following in the footsteps of Voyager 1 and 2. Surely similar questions were being asked of NASA, because I’d often hear them say that the probes were forerunners to future manned space flight, that they were gathering information that we needed to know in advance for manned missions, holding out that hope that someday we’d venture out again.

The Space Shuttle was our longest-running space program, from 1981 to 2011: 30 years. Back around the year 2000, I remember Vice President Al Gore announcing the winner of the contract to build the next-generation space shuttle, which would take the place of the older models, but it never came to be. Under the administration of George W. Bush, the Constellation program started in 2005, with the idea of further developing the International Space Station, returning astronauts to the Moon, establishing a base there for the first time, and then launching manned missions to Mars. This program was cancelled in 2010 by the Obama Administration, and nothing has replaced it. I heard some criticism of Constellation, saying that it was ill-defined, and an expensive boondoggle, though it was defended by Neil Armstrong and Gene Cernan, two Apollo astronauts. Perhaps it was ill-defined, and a waste of money, but it felt sad to see the Space Shuttle program end, and to see that NASA no longer had a way to get to low-earth orbit, or to the International Space Station. The original idea was to have the first stage of the Constellation program follow after the space shuttles were retired. Now NASA has nothing but rockets to send space probes and robotic rovers out to bodies in space. Even the Curiosity rover mission, now on Mars, was largely developed during the Bush Administration, so I hear.

I have to remember at times that even in the 1970s, during my childhood, there was a lull in the manned space program. The Apollo program was ended during the Nixon Administration, before it was finished. There was a planned flight, with a rocket ready to go, to continue the program after Apollo 17, but it never left the ground. A Saturn V rocket that was meant for one of the later missions lies today as a display model on the grounds of the Kennedy Space Center. I have to remember as well that then, as now, the program was ended during a long, drawn-out war. Then, it was in Vietnam. Now, it’s in the Middle East.

Manned space flight paused after the Apollo-Soyuz mission in 1975, which followed the SL-4 mission to the Skylab space station in 1974. It didn’t begin again for another six years, with the first launch of the Space Shuttle in 1981. The difference is the Shuttle was first conceptualized towards the end of the Apollo program; it was there as a goal. Perhaps we are experiencing the same kind of gap in manned flight now, though I don’t have a sense that NASA has a “next mission” in mind. As best I can tell, the Obama Administration has tasked NASA with supporting private space flight. There is good reason to believe that private space flight companies will be able to send astronauts into low-earth orbit soon. That’s a consolation. The thing is, that’s likely all they’re going to do for the foreseeable future: launch to low-earth orbit. They’re at the stage the Mercury program was at more than 50 years ago.

What I ask is: do we have anything beyond this in mind? Do we have a sense of building on the gains in knowledge that have been made, to venture out beyond what we now know? I grew up being told that “humans want to explore, to push the boundaries of what we know.” I guess we still are that way, but maybe we’re directing that impulse in new ways here on earth, rather than into space. I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence. Astrophysicist, and vocal advocate for NASA, Neil deGrasse Tyson has worried about this, too.

I realized a few years ago, to my dismay, that what really drove the creation of the space program, and our flights to the Moon, was not an ambition to push our frontiers of knowledge just for the sake of gaining knowledge. There was a major political aspect to it: beating the Soviets in “the space race” of the 1960s, establishing higher ground for ourselves, in a military sense. Yes, some very valuable scientific and engineering work was done in the process, but as Tyson would say, “science hitched a ride on another agenda.” That’s what it’s often done in human history. Many non-military benefits to our society flowed from what NASA once did, none of which are widely recognized today. Most people think that our technological development came from innovators in the private sector alone. The private sector did a lot, but they also drew from a tremendous resource in our space and defense research and development programs, as I’ve documented in earlier posts.

I’ll close with this great quote. It echoes what Tyson has said, though it’s fleshed out in an ethical sense, too, which I think is impressive.

The great enemy of the human race is ignorance. It’s what we don’t know that limits our progress. And everything that we learn, everything that we come to know, no matter how esoteric it seems, no matter how ivory tower-ish, will fit into the general picture, a block in its proper place, that in the end will make it possible for mankind to increase and grow; become more cosmic, if you wish; become more than a species on Earth, but become a species in the Universe, with capacities and abilities we can’t imagine now. Nor do I mean greater and greater consumption of energy, or more and more massive cities.

It’s so difficult to predict, because the most important advances are exactly in the directions that we now can’t conceive, but everything we now do, every advance in knowledge we now make, contributes to that. And just because I can’t see it, and I’m an expert at this, … doesn’t mean it isn’t there. And if we refuse to take those steps, because we don’t see what the future holds, all we’re making certain of is that the future won’t exist, and that we will stagnate forever. And this is a dreadful thought. And I am very tired when people ask me, “What’s the good of it,” because the proper answer is, “You may never know, but your grandchildren will.”

– Isaac Asimov, 1973, from the NASA film “Small Steps, Giant Strides”

Then as now, this is the lament of the scientist, I think. Scientists must ask society’s permission to explore, because they usually need funds from others to do their work, and there is no immediate payback to be had from it. For this reason, justifying the funding of that work is tough, because scientific work falls outside the normal set of expectations people have about what is of value. If the benefits can’t be seen here and now, many wonder, “What’s the point?” What Asimov pointed out is that the pursuit of knowledge is its own reward, but to really gain its benefits you must be future-oriented. You have to think about and value the world in which your children and grandchildren will live, not just your own. If your focus is on the here and now, you will not value the future, and so the potential future benefits of scientific research will not seem valuable, and therefore will not seem worthy of pursuit. It is a cultural mindset that is at issue.

Edit 12-10-2012: Going through some old articles I’d saved, I came upon an essay about humanity’s capacity for intellectual thought, called “Why is there Anti-Intellectualism?”, by Steven Dutch at the University of Wisconsin-Green Bay. It provides some reasonable counter-notions to my own that seem to confirm what I’ve seen, though they will still take some contemplation on my part.

There’s no science in the article. In terms of quality, I’d call it, at best, an “executive summary.” Maybe there’s more detailed research behind it, but I haven’t found it yet. Dutch uses heuristics as his points of comparison, and a notion of evidence to put some meat on the bones. He asks some reasonable questions that are worth contemplating, challenging the notion that “humans are naturally curious, and strive to explore.” He then makes observations that seem to come from his own experience. Overall, he provides a reasonable basis for answering a statement I made in this article: “I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence.” He comes down on the side of saying, in his opinion (paraphrasing), “Yes, some in the scientific community have fooled themselves on this issue.” He discusses the notion that “humans are naturally curious,” based on the behavior exhibited by children, and concludes that children naturally display only a shallow curiosity, which he calls “tinkering.” The harder task of creative, deep thought does not come naturally; it’s something that needs to be cultivated to take root. Hence the need for schools. The question I think we as citizens should be asking is whether our schools are actually doing this, or something else.
