A couple updates from the Smalltalk world

I was going through my list of links for this blog (some call it a “blogroll”) and I came upon a couple items that might be of interest.

The first is that Dolphin Smalltalk appears to be getting back on its feet. You can check it out at Object Arts. I reported three years ago that it was being discontinued, so I’m updating the record on that. Object Arts says it’s working with Lesser Software to produce an updated commercial version of Dolphin that will run on top of Lesser’s Smalltalk VM. According to the Object Arts website the product is still in development, and there’s no release date yet.

The other item: since last year I’ve been hearing about a fork of Squeak called Pharo. According to its description, it’s a version of Squeak designed specifically for professional developers. From what I’ve read, even though people have had the impression that the squeak.org release was also meant for professional developers, the Pharo team felt that some things were getting in the way of making Squeak a better professional development tool, chiefly the EToys package, which has been a source of consternation. EToys was stripped out of Pharo.

There’s a book out now called “Pharo by Example”, written by the same people who wrote “Squeak by Example”. Just from perusing the two books, they look similar. There were a couple differences I picked out.

The PbE book says that Pharo, unlike Squeak, is 100% open source. There’s been talk for some time now that while Squeak is mostly open source, there was some code in it that had been written under non-open-source licenses. In the 2007-2008 time frame I had been hearing that efforts were under way to make it fully open source. I stopped keeping track of Squeak about a year ago, but last I checked this issue hadn’t been resolved. The Pharo team rewrote the non-open-source code after they forked from the Squeak project, and I think they said that all code in the Pharo release is now under a uniform license.

The second difference was that they had changed some of the fundamental architecture of how objects operate. If you’re an application developer I imagine you won’t notice a difference. Where you would notice it is at the meta-class/meta-object level.

Other than that, it’s the same Squeak, as best I can tell. According to what I’ve read Pharo is compatible with the Seaside web framework.

An introduction to the power of Smalltalk

I’m changing the subject some, but I couldn’t resist talking about this, because I read a neat thing in the PbE book. I imagine it’s in SbE as well. Coming from the .Net world, I had gotten used to the idea of “setters” and “getters” for class properties. When I first started looking at Squeak, I downloaded Ramon Leon’s Squeak image. I may have seen this in a screencast he produced. I found out there was a modification to the browser in his image that I could use to have it set up default “setters” and “getters” for my class’s variables automatically. I thought this was neat, and I imagine other IDEs (like Eclipse) already had such a feature. I used it for a bit, and it was a good time-saver.

PbE revealed that there’s a way to have your class set up its own “setters” and “getters”. You don’t even need a browser tool to do it for you. You just use the #doesNotUnderstand: message handler (also known as “DNU”), plus Smalltalk’s ability to “compile on the fly”, with a little code generation. Keep in mind that this happens at run time. Once you get the idea, it turns out it’s not that hard.

Assume you have a class called DynamicAccessors (though it can be any class). You add a message handler called “doesNotUnderstand:” to it:

DynamicAccessors>>doesNotUnderstand: aMessage
    | messageName |
    messageName := aMessage selector asString.
    (self class instVarNames includes: messageName)
        ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
                 ^aMessage sendTo: self].
    ^super doesNotUnderstand: aMessage

This code traps any message sent to a DynamicAccessors instance for which no method currently exists. It extracts the name of the method being called, looks to see whether the class (DynamicAccessors) has a variable by the same name, and if so, compiles a method by that name with a little boilerplate code that just returns the variable’s value. Once it’s created, it resends the original message to the instance, so that the now-compiled accessor can return the value. However, if no variable exists that matches the message name, it triggers the superclass’s “doesNotUnderstand:” method, which will typically activate the debugger, halting the program and notifying the programmer that the class “doesn’t understand this message.”

Assuming that DynamicAccessors has a member variable “x”, but no “getter”, it can be accessed by:

myDA := DynamicAccessors new.
someValue := myDA x

If you want to set up “setters” as well, you could add a little code to the doesNotUnderstand: method that looks for a parameter value being passed along with the message, and then compiles a default method for that.
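For instance, a variation along those lines might look something like this. It’s just a sketch (I haven’t tested it against any particular image), and it detects a “setter” by checking whether the selector ends with a colon, that is, it’s a one-argument message whose name matches a variable:

DynamicAccessors>>doesNotUnderstand: aMessage
    | selector varName |
    selector := aMessage selector asString.
    (selector last = $:)
        ifTrue: ["A one-argument 'setter' such as #x: ends with a colon; strip it to get the variable name."
                 varName := selector allButLast.
                 (self class instVarNames includes: varName)
                     ifTrue: [self class compile: selector, ' aValue', String cr, ' ', varName, ' := aValue'.
                              ^aMessage sendTo: self]]
        ifFalse: [(self class instVarNames includes: selector)
                     ifTrue: [self class compile: selector, String cr, ' ^ ', selector.
                              ^aMessage sendTo: self]].
    ^super doesNotUnderstand: aMessage

With something like that in place, sending myDA x: 42 would compile and run a default setter for “x” the first time it’s sent.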

Of course, one might want some member variables protected from external access and/or modification. I think that could be accomplished with a variable naming convention, or some other convention, such as a collection that contains member variable names along with a notation telling the class how each variable may be accessed. The above code could then follow those rules, allowing access to some internal values and not others. Another thought I had is that you could put this behavior in a subclass of Object and derive your own classes from that; the behavior would then apply to whichever of your classes you choose to derive from it (any class you don’t want it applied to can derive from Object as usual).
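As a minimal sketch of that “collection of names” idea (the accessibleVariables method, and the variable names in it, are just made up for illustration), the class could declare which variables it’s willing to expose, and doesNotUnderstand: could consult that list before compiling anything:

DynamicAccessors class>>accessibleVariables
    "Answer the names of the instance variables that are allowed to get auto-generated accessors."
    ^#('x' 'y')

DynamicAccessors>>doesNotUnderstand: aMessage
    | messageName |
    messageName := aMessage selector asString.
    ((self class accessibleVariables includes: messageName)
        and: [self class instVarNames includes: messageName])
        ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
                 ^aMessage sendTo: self].
    ^super doesNotUnderstand: aMessage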

Once an accessor is compiled, the above code will not be executed for it again: Smalltalk finds the compiled method during normal message lookup and dispatches to it directly, and doesNotUnderstand: is only sent when that lookup fails. You can also go in and modify the method’s code however you want in a browser. It’s as good as if you had created the accessor yourself.

Edit 5-6-2010: I heard about this recently: Squeak 4.1 has been released. From what I’ve read on The Weekly Squeak, Squeak has been 100% open source since version 4.0 (the issue I talked about earlier in this article in relation to Pharo). 4.1 adds some “touch up” work that sounds like it makes the system nicer to use: the description mentions some first-time user features and a nicer, cleaner visual interface.

SICP Exercise 1.11: “There is no spoon”

This is the first in what I hope will be many posts talking about Structure and Interpretation of Computer Programs, by Abelson and Sussman. This and future posts on this subject will be based on the online (second) edition that’s available for free. I started in on this book last year, but I haven’t been posting about it because I hadn’t gotten to any interesting parts. Now I finally have.

After I solved this problem, I really felt that this scene from The Matrix, “There is no spoon” (video), captured the experience of what it was like to finally realize the solution. First realize the truth, that “there is no spoon”–separate form from appearance (referring to Plato’s notion of “forms”). Once you do that, you can come to see, “It is not the spoon that bends. It is only yourself.” I know, I’m gettin’ “totally trippin’ deep, man,” but it was a real trip to do this problem!

This exercise is an interesting look at how programmers such as myself have viewed how we should practice our craft. It starts off innocuously:

A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n – 1) + 2f(n – 2) + 3f(n – 3) if n >= 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by an iterative process.

(Update 5-22-2010: I’ve revised some of the wording below. I’m just trying to be more precise in my description.) Writing it recursively is pretty easy. You just do a translation from the classical algebraic notation to Scheme code. It speaks for itself. The challenging part is writing a solution that gives you the same result using an iterative process. Before you say it can’t be done, it can!

I won’t say never, but you probably won’t see me describing how I solve any of the exercises, because that makes it too easy for CS students to just read this and copy it. I want people to learn this stuff for themselves. I will, however, try to give some “nudges” in the right direction.

  • I’ll say this right off the bat: this is not a (classical) algebra problem you’re dealing with. It’s easy to fall into thinking it is, especially since you’re presented with some algebra as the thing to implement. This is a computing problem. I’d venture to say it’s more about the mathematics of computing than it is about algebra. As developers we often think of the ideal as “expressing the code in a way that means something to us.” While this is ideal, it can get in the way of writing something optimally. This is one of those cases. I spent several hours trying to optimize the algebraic operations, and looking for mathematical patterns that might optimize the recursive algorithm into an iterative one. Most of the patterns I thought I had fell to pieces, and it was a waste of time anyway. I did a fair amount of fooling myself into thinking that I had created an iterative algorithm when I hadn’t. It turned out to be the same recursive algorithm done differently.
  • Pay attention to the examples in Section 1.2.1 (Linear Recursion and Iteration), and particularly Section 1.2.2 (Tree Recursion). Notice the difference, in the broadest sense, in the design between the recursive and iterative algorithms in the sample code.
  • As developers we’re used to thinking about dividing code into component parts, and that this is the right way to do it. The recursive algorithm lends itself to that kind of thinking. Think about information flow instead for the iterative algorithm. Think about what computers do beyond calculation, and get beyond the idea of calling a function to get a desired result.
  • It’s good to take a look at how the recursive algorithm works to get some ideas about how to implement the iterative version. Try some example walk-throughs with the recursive code and watch what develops.
  • Here’s a “you missed your turn” signpost (using a driving analogy): if you’re repeating calculation steps in your algorithm, you’re probably not writing an iterative procedure. The authors described the characteristics of recursive and iterative procedures earlier in the chapter; see how well your procedure fits either description. There’s a fine line between what’s “iterative” and what’s “recursive” in Scheme, because in both cases you have a function calling itself. The difference is that in a recursive procedure you tend to have the function calling itself more than once in the same expression, whereas in an iterative procedure you tend to have a simpler calling scheme where the function only calls itself once in an expression (though it may call itself once in more than one expression inside the function), and all you’re doing is computing an intermediate result, and incrementing a counter, to feed into the next step. As a rule you should not have operators that are “lingering,” waiting for a function call to return, though the authors did show an example earlier of a function that was mostly iterative, with a little recursion thrown in now and then. With this exercise I found I was able to write a solution that was strictly iterative.
  • As I believe was mentioned earlier in this chapter (in the SICP book), remember to use the parameter list of your function as a means for changing state.
  • I will say that both the recursive and the iterative functions are pretty simple, though the iterative function uses a couple concepts that most developers are not used to. That’s why it’s tricky. When I finally got it, I thought, “Oh! There it is! That’s cool.” When you get it, it will just seem to flow very smoothly.

Happy coding!

My journey, Part 5

See Part 1, Part 2, Part 3, Part 4

Moments of inspiration

It was sometime in 1997, I think. One day I was flipping channels on my TV, and I happened upon an interview with a man who fascinated me. I didn’t recognize him. It was on a local cable channel. I caught the interview in the middle, and at no point did anyone say who he was. I didn’t care. I sat and watched with rapt attention. I was so impressed with what he was talking about I hit Record on my VCR (I might still have the tape somewhere). The man appeared to be looking at an interviewer as he spoke, but I didn’t hear any questions asked. He seemed to be talking about the history of Western civilization, how it developed from the Middle Ages onward. He wove in the development of technology and how it influenced civilization. This was amazing to me.

I remember he said that children were considered adults at the age of 7, hundreds of years ago. When students went to college they wrote their own textbooks, which were their lecture notes. When they completed their textbooks, they got their degrees. I think he said after that they were considered worthy to become professors, and this was how knowledge perpetuated from one generation to the next.

Moving up into recent history, I remember he talked about his observation of societal trends in the 1990s. He said something about how in a democracy the idea was we should be able to discuss issues with each other, no matter how controversial, as if the argument was separate from ourselves. The idea was we were to consider arguments objectively. This way we could criticize each other’s arguments without making it personal. He said at the time that our society had entered a dangerous phase, and I think he said it was reminiscent of a time in Western civilization centuries ago, where we could not talk about certain issues without others considering those who brought them up a threat. I remember he said something like, “I’ve talked to President Clinton, and he understands this.”

He shifted to talking about technology concepts that were written about 40-50 years earlier. He talked about Vannevar Bush, and his paper “As We May Think”. He talked about Bush’s conceptual Memex machine, that it would be the size of a desk, and that he had come up with a concept we would now call hyperlinking. He remarked that Bush was a very forward-thinking man. He said Bush envisioned that information would be stored on “optical discs”. This phrase really jumped out at me, and it blew me away. I knew that the laser wasn’t invented until the 1950s. “How could Bush have imagined laserdiscs?”, I thought (I misunderstood).

He ended his talk with the idea of agents in a computer system, a concept that was written about in the 1960s. The idea was these would be programs that would search systems and networks for specific sets of information on behalf of a user. He named the author of a paper on it, but I can’t remember now. I think he said something about how even though these ideas were thought about and written about years ago, long before there was technology capable of implementing them, they were still being developed in the 1990s. The span of history he spoke about had an impact on me. In terms of the history related to technology concepts, all I could grasp was the idea of hyperlinking. The rest felt too esoteric.

Once I started interacting with customers in my work I wanted to please them above all. Working on something and having the customer reject it was the biggest downer, even if the software’s innards were like a work of art to me. There was this constant tension within me, between creating an elegant solution and getting s__t done. I was convinced by my peers that my desire to create elegant solutions was just something eccentric about me. All code was good for was creating the end product. Who cared if it looked like spaghetti code? The customer certainly didn’t. I became convinced they were right. My operating philosophy became, “We may try for elegance, but ultimately the only thing that matters is delivering the product.” Really what it boiled down to was making the computer do something. We engineers cared how that happened, how it was done, but no one else did, and they still don’t.

I quit my job in 1999. I became interested in C++, because I saw that most programming want ads were requiring it. As an exercise to learn it I decided to port a program I had written in C for my Atari STe back in 1993 to C++ for DOS. Around 1992 I had watched a program on the formation of our solar system. My memory is it went into more detail than I had seen before. It talked about how the solid inner planets were formed from rocky material, and that they grew via meteor collisions. Previous explanations I’d seen had just focused on the “big picture,” saying that a cloud of gas and dust formed into a disc, that eddies formed in it, and that eventually planets condensed out of that. Very vague. I decided in 1993 to try to write a simulator that would model the “collision” interactions I heard described in the 1992 show, to see if it would work. I called it “orbit”. I created a particle system, where each object was the same mass, and interacted with every other object gravitationally, using Newton’s formula. Unfortunately my Atari was too slow to really do much with this. I was able to get a neat thing going where I could get one object to orbit another one that was stationary. A college friend of mine, who got a degree in aeronautical engineering, helped me out with this. When I got into multiple objects, it wasn’t that interesting. My Atari could run at 16 MHz, but this enabled maybe ten gravitational objects to be on the screen. If I did more than that the computer would really bog down.
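To give a rough idea of what the core of it was doing, here’s a sketch in Smalltalk rather than the C I actually wrote. The GravitySim and Body names (with mass, position, and velocity accessors) are just for illustration, and a simple Euler integration step is assumed:

GravitySim>>gravityConstant
    "A made-up value, just for illustration; a real simulation would tune or scale this."
    ^0.005

GravitySim>>stepBodies: bodies dt: dt
    "Accumulate the gravitational acceleration on each body from every other body
     (Newton: F = G * m1 * m2 / r^2), then advance velocities and positions by dt.
     No guard against two bodies occupying the same point; this is just a sketch."
    bodies do: [:each |
        | accel |
        accel := 0 @ 0.
        bodies do: [:other |
            other == each ifFalse: [
                | offset rSquared |
                offset := other position - each position.
                rSquared := offset x squared + offset y squared.
                accel := accel + (offset * (self gravityConstant * other mass / (rSquared * rSquared sqrt)))]].
        each velocity: each velocity + (accel * dt)].
    bodies do: [:each |
        each position: each position + (each velocity * dt)]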

Re-writing it in C++ was nice. I could really localize functionality so that I didn’t have to worry about side-effects, and I enjoyed the ability to derive an object from another one and really leverage the base class. I got the simulator working at first on an old 33 MHz 386, and then on a 166 MHz Pentium I. I started with about 30-50 objects on screen at once. When I moved it up to the Pentium I was able to put a few hundred objects in it at one time, and it ran at a decent speed. I randomized the masses as well, to make things a little more interesting.

I would just sit and watch the interactions. Out of the mass of dots on the screen I could focus on a few that were circling around each other, dancing almost. It was delightful to watch! It was the first time I was actually fascinated by something I had written. I had put some collision code in it so that if two or more objects hit each other they would form one mass (the mass being the sum of the colliders), and the new momentum would be calculated. I would sit and watch as a lot of objects bumped into each other, forming new masses. What tended to happen was not what I expected: eventually all of the masses would drift off the screen. I tried various ways of stopping this from happening, like making the objects wrap around (this just resulted in a few objects zipping by at a zillion miles per hour), or causing them to stop at the edges of the screen and then let gravity draw them back in (this ultimately didn’t work–they’d tend to congregate at the edges). The solution I finally hit upon was to “re-materialize” them at the center of the screen each time they drifted off. This seemed to create a “thriving” system that didn’t die. I had to concede that such a system was no longer realistic. It was still interesting though. Sometimes I’d let the simulator run for a few hours, and then I’d check back to see what had happened. In one case it spontaneously formed a stable orbiting system of a couple “planets”, with several objects scattered around the screen that were so massive they never moved. It didn’t form a solar system as we know it, but it did form a couple centers of gravity. Interesting.

I had ideas about creating a scalable graphics display so that I could create a larger “universe” than just the dimensions of the screen, and perhaps see if things would work out without me having to resort to these tricks, but I didn’t get around to it.

Fast forward three years…

As I was trying to acquire skills and keep up, I’d listen to podcasts recorded by developers. This was just coming on the scene. It was one way I tried to keep up on the current trends. You don’t want to get behind the trend, lest you become irrelevant in the field. In a couple podcasts I heard the host ask a guest, “What software product that you’ve worked on would you like put on your tombstone? What would you like to be in your epitaph?” These were kind of profound questions. I tried asking them of myself, and I couldn’t answer them. Nothing I’d worked on for my work felt that significant compared to what else I saw out there in the commercial market. I tried asking myself, “If I could work anyplace I wanted, that I could think of, is there anything they’re working on that I’d like to be remembered for, if I had worked on it?” I couldn’t think of anything. They just didn’t seem that interesting. I put the question aside and continued on with my work.

Inside, though, I knew I wanted what I wrote (in code) to mean something, not just to the people who used it, but to other programmers as well. I did not wish this for myself in order to receive kudos. It was part of my own personal integrity. I was capable of just grinding out sloppy code if that was required of me, but I was embarrassed by it in the end. I could use development IDE tools and frameworks, which I initially embraced, but in the end I felt like a plumber. I was spending half my time with the technologies just connecting things together, and converting pieces of data from one thing to another.

I had a couple experiences in the work world that made me feel like my heart wasn’t in it anymore. There were good times as well. I had an opportunity to work with an excellent team of people for a few years at one place I worked in the 1990’s. It wasn’t enough though. The only thing I could think to do was to continue my IT work. I didn’t have any alternative careers that I looked forward to, but dissatisfaction was growing within me.

Part 6

My journey, Part 4

See Part 1, Part 2, Part 3

The real world

Each year while I was in school I looked for summer internships, but had no luck. The economy sucked. In my final year of school I started looking for permanent work, and I felt almost totally lost. I asked CS grads about it. They told me “You’ll never find an entry level programming job.” They had all landed software testing jobs as their entree into corporate software production. Something inside me said this would never do. I wanted to start with programming. I had the feeling I would die inside if I took a job where all I did was test software. About a year after I graduated I was proved right when I took up test duties at my first job. My brain became numb with boredom. Fortunately that’s not all I did there, but I digress.

In my final year of college I interviewed with some major employers who came to my school: Federal Express, Tandem, Microsoft, NCR. I wasn’t clear on what I wanted to do, which was a bit earth-shattering. I had gone into CS because I wanted to program computers for my career. I didn’t face the “what” (what specifically did I want to do with this skill?) until I was about ready to graduate. I had so many interests. When I entered school I wanted to do application development. That seemed to be my strength. But since I had gone through the CS program, and found some things about it interesting, I wasn’t sure anymore. I told my interviewer from Microsoft, for example, that I was interested in operating systems. What was I thinking? I had taken a course on linguistics, and found it pretty interesting. I had taken a course called Programming Languages the previous year, and had a similar level of interest in it. I had gone through the trouble of preparing for a graduate level course on language compilers, which I was taking at the time of the interview. It just didn’t occur to me to mention any of that.

None of my interviews panned out. Looking back on it, it was good this happened. Most of them didn’t really suit my interests. The problem was, who did?

Once I graduated with my Bachelor’s in CS in 1993, and had an opportunity to relax, some thoughts settled in my mind. I really enjoyed the Programming Languages course I had taken in my fourth year. We covered Smalltalk for two weeks. I thoroughly enjoyed it. At the time I had seen many want ads for Smalltalk, but they were looking for people with years of experience. I looked for Smalltalk want ads after I graduated. They had entirely disappeared. Okay. Scratch that one off the list. The next thought was, “Compilers. I think I’d like working on language compilers.” I enjoyed the class and I reflected on the fact that I enjoyed studying and using language. Maybe there was something to that. But who was working on language compilers at the time? Microsoft? They had rejected me from my first interview with them. Who else was there that I knew of? Borland. Okay, there’s one. I didn’t know of anyone else. I got the sense very quickly that while there used to be many companies working on this stuff, it was a shrinking market. It didn’t look promising at the time.

I tried other leads, and thought about other interests I might have. There was a company nearby called XVT that had developed a multi-platform GUI application framework (for an analogy, think wxWindows), which I was very enthusiastic about. While I was in college I talked with some fellow computer enthusiasts on the internet, and we wished there was such a thing, so that we didn’t have to worry about what platform to write software for. I interviewed with them, but that didn’t go anywhere.

For whatever reason it never occurred to me to continue with school, to get a master’s degree. I was glad to be done with school, for one thing. I didn’t see a reason to go back. My undergrad advisor subtly chided me once for not wanting to advance my education. He said, “Unfortunately most people can find work in the field without a master’s,” but he didn’t talk with me in depth about why I might want to pursue that. I had this vision that I would get my Bachelor’s degree, and then it was just a given that I was going to go out into private industry. It was just my image of how things were supposed to go.

Ultimately, I went to work in what seemed like the one industry that would hire me, IT software development. My first big job came in 1995. At first it felt like my CS knowledge was very relevant, because I started out working on product development at a small company. I worked on adding features to, and refactoring a reporting tool that used scripts for report specification (what data to get and what formatting was required). Okay. So I was working on an interpreter instead of a compiler. It was still a language project. That’s what mattered. Besides developing it on MS-DOS (UGH!), I was thrilled to work on it.

It was very complex compared to what I had worked on before. It was written in C. It created more than 20 linked lists, and some of them linked with other lists via pointers! Yikes! It was very unstable. Any time I made a change to it I could predict that it was going to crash, freezing up my PC and requiring a reboot every time. And we think now that Windows 95 was bad about this… I got so frustrated with this that I spent weeks trying to build some robustness into it. I finally hit on a way to make it crash gracefully, using a macro that checked every single pointer reference before it got used.

I worked on other software that required a knowledge of software architecture, and the ability to handle complexity. It felt good. As in school, I was goal-oriented. Give me a problem to solve, and I’d do my best to do so. I liked elegance, so I’d usually try to come up with what I thought was a good architecture. I also made an effort to comment well to make code clear. My efforts at elegance usually didn’t work out. Either it was impractical or we didn’t have time for it.

Fairly quickly my work evolved away from doing product development. The company I worked for ended up discarding a whole system they’d worked two years on developing. The reporting tool I worked on was part of that. We decided to go with commodity technologies, and I got more into working with regular patterns of IT software production.

I got a taste for programming for Windows, and I was surprised. I liked it! I had already developed a bias against Microsoft software at the time, because my compatriots in the field had nothing but bad things to say about their stuff. I liked developing for an interactive system, though, and Windows had a large API that seemed to handle everything I needed to deal with, without me having to invent much of anything to make a GUI app work. This was in contrast to GEM on my Atari STe, which was the only GUI API I knew about before this.

My foray into Windows programming was short lived. My employer found that I was more proficient in programming for Unix, and so pigeon-holed me into that role, working on servers and occasionally writing a utility. This was okay for a while, but I got bored of it within a couple years.

Triumph of the Nerds

Around 1996 PBS showed another mini-series, on the history of the microcomputer industry, focusing on Apple, Microsoft, and IBM. It was called Triumph of the Nerds, by Robert X. Cringely. This one was much easier for me to understand than The Machine That Changed The World. It talked about a history that I was much more familiar with, and it described things in terms of geeky fascination with technology, and battles for market dominance. This was the only world I really knew. There weren’t any deep concepts in the series about what the computer represented, though Steve Jobs added some philosophical flavor to it.

My favorite part was where Cringely talked about the development of the GUI at Xerox PARC, and then at Apple. Robert Taylor, Larry Tesler, Adele Goldberg, John Warnock, and Steve Jobs were interviewed. The show talked mostly about the work environment at Xerox (how the researchers worked together, and how the executives “just didn’t get it”), and the Xerox Alto computer. There was a brief clip of the GUI they had developed (Smalltalk), and Adele Goldberg briefly mentioned the Smalltalk system in relation to the demo Steve Jobs saw, though you’d have to know the history better to really get what was said about it. Superficially one could take away from it that Xerox had developed the GUI, and Apple used it as inspiration for the Mac, but there was more to the story than that.

Triumph of the Nerds was the first time I had seen footage of the unveiling of the first Macintosh in 1984. I had read about it shortly after it happened, but I had seen no pictures and no video. It was really neat to see. Cringely managed to give a feel for the significance of that moment.

Part 5

My journey, Part 3

See Part 1, Part 2

College

I went to Colorado State University in 1988. As I went through college I forgot about my fantasies of computers changing society. I was focused on writing programs that were more sophisticated than I had ever written before, appreciating architectural features of software and emulating them in my own projects, and learning computer science theory.

At the time, I thought my best classes were some basic hardware class I took, Data Structures, Foundations of Computer Architecture, Linguistics (a non-CS course), a half-semester course on the C language, Programming Languages, and a graduate level course on compilers. Out of all of them the last two felt the most rewarding. I had the same professor for both. Maybe that wasn’t a coincidence.

In my second year I took a course called Comparative Programming Languages, where we surveyed Icon, Prolog, Lisp, and C. My professor for the class was a terrible teacher. There didn’t appear to be much point to the course besides exposing us to these languages. To make things interesting (for my professor, I think), he assigned problems that were inordinately hard when compared to my other CS courses. I got through Icon and C fine. Prolog gave me a few problems, but I was able to get the gist of it. I was taking the half-semester C course at the same time, which was fortunate for me. Otherwise, I doubt I would’ve gotten through my C assignments.

Lisp was the worst! I had never encountered a language I couldn’t tackle before, but Lisp confounded me. We got some supplemental material in class on it, but it wasn’t that good in terms of helping me relate to it. What made it even harder is that our professor insisted we use it in the functional style: no set, setq, etc., nor anything that used them, was allowed. All loops had to be recursive. We had two assignments in Lisp and I didn’t complete either one. I felt utterly defeated by it and I vowed never to look at it again.

My C class was fun. Our teacher had a loose curriculum, and the focus was on just getting us familiar with the basics. In a few assignments he would say “experiment with this construct.” There was no hard goal in mind. He just wanted to see that we had used it in some way and had learned about it. I loved this! I came to like C’s elegance.

I took Programming Languages in my fourth year. My professor was great. He described a few different types of programming languages, and he discussed some runtime operating models. He described how functional languages worked. Lisp made more sense to me after that. We looked at Icon, SML, and Smalltalk, doing a couple assignments in each. He gave us a description of the Smalltalk system that stuck with me for years. He said that in its original implementation it wasn’t just a language. It literally was the operating system of the computer it ran on. It had a graphical interface, and the system could be modified while it was running. This was a real brain twister for me. How could the user modify it while it was running?? I had never seen such a thing. The thought of it intrigued me though. I wanted to know more about it, but couldn’t find any resources on it.

I fell in love with Smalltalk. It was my very first object-oriented language. We only got to use the language, not the system. We used GNU Smalltalk in its scripting mode. We’d edit our code in vi, and then run it through GNU Smalltalk on the command line. Any error messages or “transcript” output would go to the console.

I learned what I think I would call a “Smalltalk style” of programming, of creating object instances (nodes) that have references to each other, each doing very simple tasks, working cooperatively to accomplish a larger goal. I had the experience in one Smalltalk assignment of feeling like I was creating my own declarative programming language of sorts. Nowadays we’d say I had created a DSL (Domain-Specific Language). Just the experience of doing this was great! I had no idea programming could be this expressive.

I took compilers in my fifth year. Here, CS started to take on the feel of math. Compiler design was expressed mathematically. We used the red “Dragon book”, Compilers: Principles, Techniques, and Tools, by Aho, Sethi, and Ullman. The book impressed me right away with this introductory acknowledgement:

This book was phototypeset by the authors using the excellent software available on the UNIX system. The typesetting command read:

pic files | tbl | eqn | troff -ms

pic is Brian Kernighan’s language for typesetting figures; we owe Brian a special debt of gratitude for accommodating our special and extensive figure-drawing needs so cheerfully. tbl is Mike Lesk’s language for laying out tables. eqn is Brian Kernighan and Lorinda Cherry’s language for typesetting mathematics. troff is Joe Ossanna’s program for formatting text for a phototypesetter, which in our case was a Mergenthaler Linotron 202/N. The ms package of troff macros was written by Mike Lesk. In addition, we managed the text using make due to Stu Feldman. Cross references within the text were maintained using awk created by Al Aho, Brian Kernighan, and Peter Weinberger [“awk” was named after the initials of Aho, Weinberger, and Kernighan — Mark], and sed created by Lee McMahon.

I thought this was really cool, because it felt like they were “eating their own dog food.”

We learned at the time about the concepts of bootstrapping, cross-compilers for system development, LR and LALR parsers, bottom-up and top-down parsers, parse trees, pattern recognizers (lexers), stack machines, etc.

For our semester project we had to implement a compiler for a Pascal-like language, and it had to be capable of handling recursion. Rather than generate assembly or machine code, we were allowed to generate C code, but it had to be generated as if it were 3-address code. We were allowed to use a couple C constructs, but by and large it had to read like an assembly program. A couple other rules were we had to build our own symbol table (in the compiler), and call stack (in the compiled program).

We worked on our projects in pairs. We were taught some basics about how to use lex and yacc, but we weren’t told the whole story… My partner and I ended up using yacc as a driver for our own parse-tree-building routines. We wrote all of our code in C. We made the thing so complicated. We invented stacks for various things, like handling order of operations for mathematical expressions. We went through all this trouble, and then one day I happened to chat with one of my other classmates and he told me, “Oh, you don’t have to do all that. Yacc will do that for you.” I was dumbfounded. How come nobody told us this before?? Oh well, it was too late. It was near the end of the semester, and we had to turn in test results. My memory is that even though it was an ad hoc design, our compiler got 4 out of 5 tests correct. The 5th one, the one that did recursion, failed. Anyway, I did okay in the course, and that felt like an accomplishment.

I wanted to do it up right, so I took the time after I graduated to rewrite the compiler, fully using yacc’s abilities. At the time I didn’t have the necessary tools available on my Atari STe to do the project, so I used Nyx, a free, publicly available Unix system that I could access via a modem over a straight serial connection (PPP access wasn’t commonly available yet). It was just like calling up a BBS except I had shell access.

I structured everything cleanly in the compiler, and I got the bugs worked out so it could handle recursion.

A more sophisticated perspective

Close to the time I graduated a mini-series came out on PBS called “The Machine That Changed The World.” What interested me about it was its focus on computer history. It filled in more of the story from the time when I had researched it in Jr. high and high school.

My favorite episode was “The Paperback Computer,” which focused on the research efforts that went into creating the personal computer, and the commercial products (primarily the Apple Macintosh) that came from them.

It gave me my first glimpse ever of the work done by Douglas Engelbart, though it only showed a small slice–the invention of the mouse. Mitch Kapor, one of the people interviewed for this episode, pointed out that most people had never heard of Engelbart, yet he is the most important figure in computing when you consider what we are using today. This episode also gave me my first glimpse of the research done at Xerox PARC on GUIs, though there was no mention of the Smalltalk system (even though that’s the graphics display you see in that segment).

I liked the history lessons and the artifacts it showed. The deeper ideas lost me. By the time I saw this series, I had already heard of the idea that the computer was a new medium. It was mentioned sometimes in computer magazines I read. I was unclear on what this really meant, though.

I had already experienced some aspects of this idea without realizing it, especially when I used 8-bit computers with Basic or Logo, which gave me a feeling of interactivity. The responsiveness towards the programmer was pretty good for the limited capabilities they had. It felt like a machine I could mold and change into anything I wanted via programming. It was what I liked most about using a computer. Being unfamiliar with what a medium really is, though, when digital video and audio came along, along with the predictions about digital TV over the “Information Superhighway,” I thought that was what it was all about. I had fallen into the mindset a lot of people had at the time: the computer was meant to automate old media.

Part 4

My journey, Part 2

See Part 1

A sense of history

I had never seen a machine before that I could change so easily (relative to other machines). I had this sense that these computers represented something big. I didn’t know what it was. It was just a general feeling I got from reading computer magazines. They reported on what was going on in the industry. All I knew was that I really liked dinking around with them, and I knew there were some others who did, too, but they were at most ten years older than me.

The culture of computing and programming was all over the place as well, though the computer was always portrayed as this “wonder machine” that had magical powers, and programming it was a mysterious dark art which made neat things happen after typing furiously tappy-tappy on the keyboard for ten seconds. Still, it was all encouragement to get involved.

I got a sense early on that it was something that divided the generations. Most adults I ran into knew hardly anything about computers, much less how to program them, like I did. They had no interest in learning about them either. They felt they were too complicated, threatening, mysterious. They didn’t have much of an idea about their potential, just that “it’s the future,” and they encouraged me to learn more about them, because I was going to need that knowledge, though their idea of “learn more about them” meant “how to use one,” not how to program one. If I told them I knew how to program them they’d say, “Wow! You’re really on top of it then.” They were kind of amazed that a child could have such advanced knowledge that seemed so beyond their reach. They couldn’t imagine it.

A few adults, including my mom, asked me why I was so fascinated by computers. Why were they important? I would tell them about the creative process I went through. I’d get a vision in my mind of something I thought would be interesting, or useful to myself and others. I’d get excited enough about it to try to create it. The hard part was translating what I saw in my mind, which was already finished, into the computer’s language, to get it to reproduce what I wanted. When I was successful it was the biggest high for me. Seeing my vision play out in front of me on a computer screen was electrifying. It made my day. I characterized computers as “creation machines”. What I also liked about them is they weren’t messy. Creating with a computer wasn’t like writing or typing on paper, or painting, where if you made a mistake you had to live with it, work around it, or start over somewhere to get rid of the mistake. I could always fix my mistakes on a computer, leaving no trace behind. The difference was with paper it was always easy to find my mistakes. On the computer I had to figure out where the mistakes were!

The few adults I knew at the time who knew how to use a computer tended to not be so impressed with programming ability, not because they knew better, but because they were satisfied being users. They couldn’t imagine needing to program the computer to do anything. Anything they needed to do could be satisfied by a commercial package they could buy at a computer store. They regarded programming as an interesting hobby of kids like myself, but irrelevant.

As I became familiar with the wider world of what represented computing at the time (primarily companies and the computers they sold), I got a sense that this creation was historic. Sometimes in school I would get an open-ended research assignment. Each chance I got I’d do a paper on the history of the computer, each time trying to deepen my knowledge of it.

The earliest computer I’d found in my research materials was a mechanical adding machine that Blaise Pascal created in 1645, called the Pascaline. There used to be a modern equivalent of it as late as 25-30 years ago that you could buy cheap at the store. It was shaped like a ruler, and it contained a set of dials you could stick a pencil or pen point into. All dials started out displaying zeros. Each dial represented a power of ten (starting at 10⁰, the ones place). If you wanted to add 5 + 5, you would “dial” 5 in the ones place, and then “dial” 5 in the same place again. Each “dial” action added to the quantity in each place. The dials had some hidden pegs in them to handle carries. So when you did this, the ones place dial would return to “0”, and the tens place dial would automatically shift to “1”, producing the result “10”.

The research material I was able to find at the time only went up to the late 1960s. They gave me the impression that there was a dividing line. Up to about 1950, all they talked about were the research efforts to create mechanical, electric, and finally electronic computers. The exception being Herman Hollerith, who I think was the first to find a commercial application for an electric computer, to calculate the 1890 census, and who built the company that would ultimately come to be known as IBM. The last computers they talked about being created by research institutions were the Harvard Mark I, ENIAC, and EDSAC. By the 1950s (in the timeline) the research material veered off from scientific/research efforts, for the most part, and instead talked about commercial machines that were produced by Remington Rand (the Univac), and IBM. Probably the last thing they talked about were minicomputer models from the 1960s. The research I did matched well with the ethos of computing at the time: it’s all about the hardware.

High school

When I got into high school I joined a computer club there. Every year club members participated in the American Computer Science League (ACSL). We had study materials that focused on computer science topics. From time to time we had programming problems to solve, and written quizzes. We earned individual scores for these things, as well as a combined score for the team.

We would have a week to come up with our programming solutions on paper (computer use wasn’t allowed). On the testing day we would have an hour to type in our programs, test and debug them, at the end of which the computer teacher would come around and administer the official test for a score.

What was neat was we could use whatever programming language we wanted. A couple of the students had parents who worked at the University of Colorado, and had access to Unix systems. They wrote their programs in C on the university’s computers. Most of us wrote ours in Basic on the Apples we had at school.

Just as an aside, I was alarmed to read Imran’s article in 2007 about using FizzBuzz to interview programmers, because the problem he posed was so simple, yet he said “most computer science graduates can’t [solve it in a couple minutes]”. It reminded me of an ACSL programming problem we had to solve called “Buzz”. I wrote my solution in Basic, just using iteration. Here is the algorithm we had to implement:

input 5 numbers
if any input number contains the digit "9", print "Buzz"
if any input number is evenly divisible by 8, print "Buzz"
if the sum of the digits of any input number is divisible by 4,
   print "Buzz"

for every input number
   A: sum its digits (we'll use "sum" as a variable in this
      loop for a sum of digits)
   if sum >= 10
      get the digits for sum and go back to Step A
   if sum equals 7
      print "Buzz"
   if sum equals any digit in the original input number
      print "Buzz"
end loop

test input   output
198          Buzz Buzz
36
144          Buzz
88           Buzz Buzz Buzz
10           Buzz

In my junior and senior year, based on our regional scores, we qualified to go to the national finals. The first year we went we bombed, scoring in last place. Ironically we got kudos for this. A complimentary blurb in the local paper was written about us. We got a congratulatory letter from the superintendent. We got awards at a general awards assembly (others got awards for other accomplishments, too). A bit much. I think this was all because it was the first time our club had been invited to the nationals. The next year we went we scored in the middle of the pack, and heard not a peep from anybody!

(Update 1-18-2010: I’ve had some other recollections about this time period, and I’ve included them in the following seven paragraphs.)

By this point I was starting to feel like an “experienced programmer”. I felt very comfortable doing it, though the ACSL challenges were at times too much for me.

I talk about a couple of software programs I wrote below. You can see video of them in operation here.

I started to have the desire to write programs in a more sophisticated way. I began to see that there were routines that I would write over and over again for different projects, and I wished there was a way for me to write code in what would now be called “components” or DLLs, so that it could be generalized and reused. I also wanted to be able to write programs where the parts were isolated from each other, so that I could make major revisions without having to change parts of the program which should not be concerned with how the part I changed was implemented.

I even started thinking of things in terms of systems a little. A project I had been working on since Jr. high was a set of programs I used to write, edit, and play Mad Libs on a computer. I noticed that it had a major emphasis on text, and I wondered if there was a way to generalize some of the code I had written for it into a “text system”, so that people could not only play this specific game, but they could also do other things with text. I didn’t have the slightest idea how to do that, but the idea was there.

By my senior year of high school I had started work on my biggest project yet, an app I had been wanting for a while, called “Week-In-Advance”. It was a weekly scheduler, but I wanted it to be user friendly, with a windowing interface. I was inspired by an Apple Lisa demo I had seen a couple years earlier. I spent months working on it. The code base was getting so big that I realized I had to break it up into subroutines, rather than one big long program, to make it manageable. I wrote it in Basic, back when all it had for branching was Goto, Gosub, and Return commands. I used Gosub and Return to implement the subroutines.

I learned some advanced techniques in this project. One feature I spent a lot of time on was how to create expanding and closing windows on the screen. I tried a bunch of different animation techniques. Most of them were too slow to be useable. I finally figured out one day that a box on the screen was really just a set of four lines, two vertical, and two horizontal, and that I could make it expand efficiently by just taking the useable area of the screen, and dividing it horizontally and vertically by the number of times I would allow the window to expand, until it reached its full size. The horizontal lines were bounded by the vertical lines, and the vertical lines were bounded by the horizontals. It worked beautifully. I had generalized it enough so that I could close a window the same way, with some animated shrinking boxes. I just ran the “window expanding” routine in reverse by negating the parameters. This showed me the power of “systematizing” something, rather than using code written in a narrow-minded fashion to make one thing happen, without considering how else the code might be used.
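The idea translates into very little code. Here’s a sketch of it in Smalltalk (the WindowAnimator name and the use of Rectangle are mine, just for illustration, and growing from the center is an assumption; the original was Basic drawing four lines per step):

WindowAnimator>>expansionStepsFor: finalBounds steps: n
    "Answer the sequence of intermediate boxes, growing evenly from the center
     of finalBounds to its full size in n steps. Reversing the sequence gives
     the shrinking animation for closing a window."
    | center |
    center := finalBounds center.
    ^(1 to: n) collect: [:i |
        Rectangle center: center extent: finalBounds extent * i / n]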

The experience I had with using subroutines felt good. It kept my code under control. Soon after I wanted to learn Pascal, so I devoted time to doing that. It had procedures and functions as built-in constructs. It felt great to use them compared to my experience with Basic. The Basic I used had no scoping rules whatsoever. Pascal had them, and programming felt so much more manageable.

Getting a sense of the future

As I worked with computers more and talked to people about them in my teen years I got a wider sense that they were going to change our society, I thought in beneficial ways. I’ve tried to remember why I thought this. I remember my mom told me this in one of the conversations I had with her about them. Besides this, I think I was influenced by futurist literature, artwork, and science fiction. I had this vague idea that somehow in the future computers would help us understand our world better, and to become better thinkers. We would be smarter, better educated. We would be a better society for it. I had no idea how this would happen. I had a magical image of the whole thing that was deterministic and technology-centered.

It was a foregone conclusion that I would go to college. Both my mom and my maternal grandparents (the only ones I knew) wanted me to do it. My grandparents even offered to pay full expenses so long as I went to a liberal arts college.

In my senior year I was trying to decide what my major would be. I knew I wanted to get into a career that involved computer programming. I tried looking at my past experience, what kinds of projects I liked to work on. My interest seemed to be in application programming. I figured I was more business-oriented, not a hacker. The first major I researched was CIS (Computer Information Science/Systems). I looked at their curriculum and was uninspired. I felt uncertain about what to do. I had heard a little about the computer science curriculum at a few in-state universities, but it felt too theoretical for my taste. Finally, I consulted with my high school’s computer teacher, and we had a heart to heart talk. She (yes, the computer teacher was a woman) had known me all through my time there, because she ran the computer club. She advised me to go into computer science, because I would learn about how computers worked, and this would be valuable in my chosen career.

She had a computer science degree that she earned in the 1960s. She had an interesting life story. I remember she said she raised her family on a farm. At some point, I don’t know if it was before or after she had kids, she went to college, and was the only woman in the whole CS program. She told me stories about that time. She experienced blatant sexism. Male students would come up to her and say, “What are you doing here? Women don’t take computer science,” and, “You don’t belong here.” She told me about writing programs on punch cards (rubber bands were her friends 🙂 ), taking a program in to be run on the school’s mainframe, and waiting until 3am for it to run and to get her printout. I couldn’t imagine it.

I took her advice that I should take CS, kind of on faith that she was right.

Part 3.

My journey, Part 1

This series of posts has been brewing within me for more than a year, though for some reason it just never felt like the right time to talk about it. Two months ago I found The Machine That Changed The World online, a mini-series I had seen about 15 years ago. I was going to just write about it here, but then all this other stuff came out of me, and it became about that, though I’ve included it in this series.

I go through things chronologically, though I only reference a few dates, so one subject will tend to abruptly transition into another, just because of what I was focused on at a point in time. I call this “my journey”.

Getting introduced

From when I was 6 or so, in the mid-1970s, I had had a couple experiences using computing devices, and a couple brief chances to use general purpose computers. My interest in computers began in earnest in 1981. I was 11 going on 12. My mother and I had moved to a new town in Colorado a year earlier. One day we both went by the local public library and I saw a man sitting in front of an Atari 400 computer that was set up with a Sony Trinitron TV set. I saw him switch between a blue screen with cryptic code on it, and a low-rez graphics screen that had what looked like a fence and some blocky “horses” set up on a starting line. And then I saw the horses multiply across the screen, fast. The colorful graphics really caught my eye. I was already used to the blocky graphics of the Atari 2600 VCS. I had seen those around in TV ads and stores. I sat and watched the man work on his project. Each time he ran the program the same thing happened. After watching this for a while I got the impression that he was trying to write a horse racing game, but that something was wrong with it.

My mother noticed my interest and tried to get me to ask the librarian about the computer, to see if I could use it. I was reluctant at first. I assumed that only “special” people could use it. I figured the man worked at the library or something, or that only adults could use it. I asked the librarian. She asked my age. They had a minimum age requirement of ten years old. She said all I needed to do was sign up for an orientation that lasted for 15 minutes, and then sign up for computer time when it was available. So at my mother’s urging I signed up for the orientation. Several days later I went to orientation with about five other people of different ages (children and adults), and got a brief run-through of how to operate the Atari, how to sign up for time, and what software they had available for it behind the desk.

I was interested right off in learning to do what I saw that man do: program the computer. My first stab at this was running a tutorial called An Invitation to Programming that came on 3 cassette tapes, double-sided. The tape drive I put the tapes into looked like an ordinary tape recorder, except that it had a cable running right to the computer. I thought the tutorial was the neatest thing. At first I think I was distracted by how it worked and paid hardly any attention to what it was attempting to teach. The fascinating thing about it was that the first part of each tape had a voice track on it that would play over the TV’s speaker while the tutorial loaded. This was really clever. Rather than sitting there waiting for the program to load, I could listen to a male narrator talk about what I was going to learn with some flashy music playing in the background. By the time he was done, the tutorial ran. Once it started running, the voice track started up again, this time with a female narrator. What appeared on the screen was coordinated with the audio track. When the tutorial would stop to give me a quiz, the tape drive would stop. When I was ready to continue, the tape drive started up again. “How is it doing this?” I wondered with awe.

Introduction to “An Invitation to Programming” by Atari

Part 4 of “An Invitation to Programming”

By the way, this is not me using the tutorial. I found these videos on YouTube.

The tutorial was about the Basic programming language. I went through the whole thing two or three times, in multiple sessions, because I could tell as I got towards the more advanced stuff that I wasn’t getting it. I only understood the simple things, like how to print on the screen, and how to use the Goto command. Other stuff like DIMming variables, getting input, and forming loops gave me a lot of trouble. They had the manual for Atari BASIC behind the desk and I tried reading that. It really made my head hurt, but I came to understand more.

Eventually I found out there was an Atari 800 in another part of the library that I could sign up for, so I’d sometimes use that one. There seemed to be more people who would hang around it. I found a couple people who were more knowledgeable about Basic and I’d ask them to help me, which they did.

I fell into a LOT of pitfalls. I couldn’t write a complete program that worked, though I kept trying. Every time I’d encounter an error I’d guess at what the problem was and try changing something at random. I had no real sense of what the computer was doing with my code. I felt lost, and I needed a lot of help from others to understand what the different parts of my program were doing.

It took me a month or two before I had written my first program that went beyond “hello world” stuff. It was a math quiz. Over several more months I stumbled a lot on Basic but kept learning. I remember I used to get SO frustrated! Time and again I would think I knew what I was doing, but something would go wrong. Debugging felt so hard. I’d go home fuming. I had to practice relaxing.

As time passed a strange thing started happening. I’d go through my “frustration cycle,” focus on something else for the day, have dinner with my mom and talk about it, watch TV, do my homework, or something. I’d go to bed thinking that this problem I was obsessed with was insurmountable. I’d wake up the next morning, and the answer would just come to me, like it was the most obvious thing in the world. I couldn’t try it out right away, because I had to get to school, but all day I’d feel anxious to try out my solution. Finally, I’d get my chance, and it would work! Wow! What a feeling! This process totally mystified me. How was it that at one point I could be dealing with a problem that felt intractable, and then at another time, when I had the opportunity to really relax, the problem was as easy to solve as brushing my teeth? I have heard recently that the brain uses sleep time to “organize” stuff that’s been absorbed while awake. Maybe that’s it.

When I entered Jr. high school I eventually discovered that they had some computer magazines in their library. A few of them had program listings. One of them was a magazine called Compute!. I fell in love with it almost immediately. The first thing that drew me in was the cover art. It looked fun! Secondly, the content was approachable. It had complete listings for games. I went through their stack of Compute! issues with a passion. I was checking them out often, taking them to the public library to type into the Atari, debugging typing mistakes, and having fun seeing the programs come to life.

We had two Apple II’s at my school, one in the library that I could sign up for, and one that a math teacher had requested for his office. In my first year there he started a computer club. I signed up for the club immediately. Each Friday after school we would get together. There were about four of us, plus the math teacher, huddled around his computer. He would teach us about programming in Basic, or have us try to solve some programming problem.

Along the way I got my own ideas for projects to do. I wrote a few of my own programs on the Atari at the library. By my eighth grade year my school installed a computer lab filled with Apple II’s, and they offered computer use and programming classes. I signed up for both. In the programming course we covered Basic and Logo.

Logo was the first language I worked with where programming felt easy. Quite frankly, the language felt well designed, too, though my memory is that it didn’t get much respect from the other programmers my age. They saw it as a child’s language, too immature for us teenagers. Then again, they thought “real programming” was making the computer do cool stuff in hexadecimal. Every construct we used in Logo was relevant to the platform we were using. Unfortunately all we learned about with Logo was procedural programming. We didn’t learn what it was intended for: to teach children math. After this course I got some more project ideas, which I finished successfully after a lot of work.

In the computer use course we learned about a couple of word processors (Apple Writer and Bank Street Writer), VisiCalc, and a simple database program whose name I can’t remember. I took it because I thought it would teach useful skills. I had no idea how to use any of these tools before then.

Part 2

The computer as medium

In my guest post on Paul Murphy’s blog called “The PC vision was lost from the get go” I spoke to the concept, which Alan Kay has held going back to the 1970s, that the personal computer is a new medium, like the book at the time the technology for the printing press was brought to Europe, around 1439 (I also spoke some about this in “Reminiscing, Part 6”). Kay came to this realization upon witnessing Seymour Papert’s Logo system being used with children. More recently, with 20/20 hindsight, Kay has spoken about how, as with the book historically, people have been missing what’s powerful about computing: like the early users of the printing press, we’ve been automating and reproducing old media onto the new medium. We’re even automating old processes with it that are meant for an era that’s gone.

Kay spoke about the evolution of thought about the power of the printing press in one or two of his speeches entitled The Computer Revolution Hasn’t Happened Yet. In them he said that after Gutenberg brought the technology of the printing press to Europe, the first use found for it was to automate the process of copying books. Before the printing press books were copied by hand. It was a laborious process, and it made books expensive. Only the wealthy could afford them. In a documentary mini-series that came out around 1992 called “The Machine That Changed The World,” I remember an episode called “The Paperback Computer.” It said that there were such things as libraries, going back hundreds of years, but that all of the books were chained to their shelves. Books were made available to the public, but people had to read the books at the library. They could not check them out as we do now, because they were too valuable. Likewise today, with some exceptions to promote mobility, we “chain” computers to desks or some other anchored surface to secure them, because they’re too valuable.

Kay has said in his recent speeches that there were a few rare people during the early years of the printing press who saw its potential as a new emerging medium. Most of the people who knew about it at the time did not see this. They only saw it as, “Oh good! Remember how we used to have to copy the Bible by hand? Now we can print hundreds of them for a fraction of the cost.” They didn’t see it as an avenue for thinking new ideas. They saw it as a labor-saving device for doing what they had been doing for hundreds of years. This view of the printing press predominated for more than another 100 years. Eventually a generation grew up not knowing the old toils of copying books by hand. They saw that with the printing press’s ability to disseminate information and narratives widely, it could be a powerful new tool for sharing ideas and arguments. Once literacy began to spread, what flowed from that was the revolution of democracy. People literally changed how they thought. Kay said that before this time people appealed to authority figures to find out what was true and what they should do, whether they be the king, the pope, etc. When the power of the printing press was realized, people began appealing instead to rational argument as the authority. It was this crucial step that made democracy possible, though this alone did not do the trick. There were other factors at play as well, but this, I think, was a fundamental first step.

Kay has believed for years that the computer is a powerful new medium, but in order for its power to be realized we have to perceive it in a way that enables it to be powerful to us. If we see it only as a way to automate old media (text, graphics, animation, audio, video) and old processes (data processing, filing, etc.), then we aren’t getting it. Yes, automating old media and processes enables powerful things to happen in our society through efficiency. It further democratizes old media and modes of thought, but that only addresses the tip of the iceberg. This brings the title of Alan Kay’s speeches into clear focus: The computer revolution hasn’t happened yet.

Below is a talk Alan Kay gave at TED (Technology, Entertainment, Design) in 2007, which I think gives some good background on what he would like to see this new medium address:

“A man must learn on this principle, that he is far removed from the truth” – Democritus

Squeak in and of itself will not automatically get you smarter students. Technology does not really change minds. The power of EToys comes from an educational approach that promotes exploration, called constructivism. Squeak/EToys creates a “medium to think with.” What the documentary “Squeakers” makes clear is that EToys is a tool, like a lever, that makes this approach more powerful, because it enables math and science to be taught better using this technique. (Update 10/12/08: I should add that whenever the nature of Squeak is brought up in discussion, Alan Kay says that it’s more like an instrument, one with which you can “mess around” and “play,” or produce serious art. I wrote about this discussion that took place a couple years ago, and said that we often don’t associate “power” with instruments, because we think of them as elegant but fragile. Perhaps I just don’t understand at this point. I see Squeak as powerful, but I still don’t think of an instrument as “powerful”. Hence the reason I used the term “tool” in this context.)

From what I’ve read in the past, constructivism has gotten a bad reputation, I think primarily because it’s fallen prey to ideologies. The goal of constructivism as Kay has used it is not total discovery-based learning, where you just tell the kids, with no guidance, “Okay, go do something and see what you find out.” What this video shows is that teachers who use this method lead students to certain subjects, give them some things to work with within the subject domain, things they can explore, and then set them loose to discover something about them. The idea is that by the act of discovery by experimentation (i.e., play) the child learns concepts better than if they are spoon-fed the information. There is guidance from the teacher, but the teacher does not lead them down the garden path to the answer. The children do some of the work to discover the answers themselves, once a focus has been established. And the answer is not just “the right answer” as is often called for in traditional education, but what the student learned and how the student thought in order to get it.

Learning to learn, learning to think, and learning the critical concepts that have gotten us to this point in our civilization are what education should be about. Understanding is just as important as the result that flows from it. I know this is all easier said than done with the current state of affairs, but it helps to have ideals that are held up as goals. Otherwise what will motivate us to improve?

What Kay thinks, and what the results he’s seen have convinced him of, is that the computer can enable children of young ages to grasp concepts that would be impossible for them to get otherwise. This keys right into a philosophy of computing that J.C.R. Licklider pioneered in the 1960s: human-computer symbiosis (“man-computer symbiosis,” as he called it). Through a “coupling” of humans and computers, the human mind can think about ideas it had heretofore not been able to think. The philosophers of symbiosis see our world becoming ever more complex, so much so that we are at risk of it becoming incomprehensible and getting away from us. I personally have seen evidence of that in the last several years, particularly because of the spread of computers in our society and around the world. The linchpin of this philosophy is, as Kay has said recently, “The human mind does not scale.” Computers have the power to make this complexity comprehensible. Kay has said that the reason the computer has this power is that it’s the first technology humans have developed that is like the human mind.

Expanding the idea

Kay has been focused on using this idea to “amp up” education, to help children understand math and science concepts sooner than they would in the traditional education system. But this concept is not limited to children and education. This is a concept that I think needs to spread to computing for teenagers and adults. I believe it should expand beyond the borders of education, to business computing, and the wider society. Kay is doing the work of trying to “incubate” this kind of culture in young students, which is the right place to start.

In the business computing realm, if this is going to happen we are going to have to view business in the presence of computers differently. I believe we are going to have to literally think of our computers as simulators of “business models.” I don’t think the current definition of “business model” (a business plan) really fits what I’m talking about. I don’t want to confuse people. I’m thinking along the lines of schema and entities, forming relationships which are dynamic and therefore late-bound, but with an allowance for policy to govern what can change and how, with the end goal of helping business be more fluid and adaptive. Tying it all together, I would like to see a computing system that enables the business to form its own computing language and terminology for specifying these structures, so that as the business grows it can develop “literature” about itself, which can be used both by people who are steeped in the company’s history and current practices and by those who are new to the company and trying to learn about it.
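To give just a flavor of what I mean, here is a rough sketch of my own in Smalltalk. It is only an illustration of the terms above (the class and message names are hypothetical, not a design I’m proposing): an entity whose attributes and relationships live in dictionaries rather than fixed fields, so they can be added and rewired while the system runs, with a policy deciding which changes are allowed.

    "Hypothetical sketch: a late-bound 'entity' whose attributes and
     relationships are held in Dictionaries, guarded by a policy block."
    Object subclass: #Entity
        instanceVariableNames: 'attributes relations policy'
        classVariableNames: ''
        package: 'Sketch'

    Entity >> initialize
        "Pharo sends #initialize from #new. Start with empty maps and a
         permissive default policy; a real system would install something
         more discriminating."
        attributes := Dictionary new.
        relations := Dictionary new.
        policy := [ :key :value | true ]

    Entity >> at: key put: value
        "Change an attribute only if the policy allows it."
        (policy value: key value: value)
            ifTrue: [ attributes at: key put: value ]
            ifFalse: [ self error: 'change not allowed by policy' ]

    Entity >> relate: key to: anEntity
        "Add or rewire a named relationship at run time."
        relations at: key put: anEntity

The point of the sketch is only that the schema is something the running system can talk about and change under policy, rather than something frozen into the code, which is the “fluid and adaptive” quality I have in mind.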

What this requires is computing (some would say “informatics”) literacy on the part of the participants. We are a far cry from that today. There are millions of people who know how to program at some level, but the vast majority of people still do not. We are in the “Middle Ages” of IT. Alan Kay said that Smalltalk, when it was invented in the 1970s, was akin to Gothic architecture. As old as that sounds, it’s more advanced than what a lot of us are using today. We programmers, in some cases, are like the ancient pyramid builders. In others, we’re like the scribes of old.

This powerful idea of computing, that it is a medium, should come to be the norm for the majority of our society. I don’t know how yet, but if Kay is right that the computer is truly a new medium, then it should one day become as universal and influential as books, magazines, and newspapers have been historically.

In the “Reminiscing” post I referred to above, I talked about the fact that even though we appeal more now to rational argument than we did hundreds of years ago, we still get information we trust from authorities (called experts). I said that what I think Kay would like to see happen is that people will use this powerful medium to take information about some phenomenon that’s happening, form a model of it, and by watching it play out, inform themselves about it. Rather than appealing to experts, they can understand what the experts see, but see it for themselves. By this I mean that they can manipulate the model to play out other scenarios that they see as relevant. This could be done in a collaborative environment so that models could be checked against each other and, most importantly, against the real world. What I said, though, is that this would require a different concept of what it means to be literate, and a different model of education and research.

This is all years down the road, probably decades. The evolution of computing moves slowly in our society. Our methods of education haven’t changed much in 100 years. The truth is the future up to a certain point has already been invented, and continues to be invented, but most are not perceptive enough to understand that, and “old ways die hard,” as the saying goes. Alan Kay once told me that “the greatest ideas can be written in the sky” and people still won’t understand, nor adopt them. It’s only the poor ideas that get copied readily.

I recently read that the Squeakland site has been updated (it looks beautiful!), and that a new version of the Squeakland version of Squeak has been released on it. They are now just calling it “EToys,” and they’ve dropped the Squeak name. Squeak.org is still up and running, and they are still making their own releases of Squeak. As I’ve said earlier, the Squeakland version is configured for educational purposes. The squeak.org version is primarily used by professional Smalltalk developers. Last I checked it still has a version of EToys on it, too.

Edit: As I was writing this post I went searching for material for my “programmers” and “scribes” reference. I came upon one of Chris Crawford‘s essays. I skimmed it when I wrote this post, but I reread it later, and it’s amazing! (Update 11/15/2012: I had a link to it, but it’s broken, and I can’t find the essay anymore.) It caused me to reconsider my statement that we are in the “Middle Ages” of IT. Perhaps we’re at a more primitive point than that. It adds another dimension to what I say here about the computer as medium, but it also expounds on what programming brings to the table culturally.

Here is an excerpt from Crawford’s essay. It’s powerful because it surveys the whole scene:

So here we have in programming a new language, a new form of writing, that supports a new way of thinking. We should therefore expect it to enable a dramatic new view of the universe. But before we get carried away with wild notions of a new Western civilization, a latter-day Athens with modern Platos and Aristotles, we need to recognize that we lack one of the crucial factors in the original Greek efflorescence: an alphabet. Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain — after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don’t have a damn thing to say to the world — they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglypics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharoahs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones — and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

You can see the same thing in the output of the current generation of programmers, especially in the field of computer games. It’s lifeless. Sure, their stuff is technically very good, but it’s like the Egyptian statuary: technically very impressive, but the faces stare blankly, whereas Greek statuary ripples with the power of life.

What we need is a means of democratizing programming, of taking it out of the soulless hands of the programmers and putting it into the hands of a wider range of talents.

Related post: The necessary ingredients for computer science

—Mark Miller, https://tekkie.wordpress.com

The culture of “air guitar”

Since I started listening to Alan Kay’s ideas I’ve kept hearing him use the phrase “air guitar” to describe what he sees as shallow ideas, in both educational and industry practice, which are promoted by a pop culture. Kay is a musician, among other things, so I can see where he’d come up with this term. My impression is he’s referring to an almost exclusive focus on technique, or even just on using a tool, and on looking confident and stylish while doing it, combined with an almost total lack of focus on what is actually being worked with.

I recently watched a video of another one of his presentations on Squeak, this time in front of an audience of educators. He brought up the issues of math and science education, and said that in many environments they teach kids to calculate, to do “math,” not mathematics. Students are essentially trained to “appreciate” math, but not taught how to be real mathematicians.

He’s also used the term “gestures” to characterize this shallowness. In the field of software development he alluded to this idea in his 1997 keynote speech at OOPSLA, titled, “The Computer Revolution Hasn’t Happened Yet”:

I think the main thing about doing OOP work, or any kind of programming work, is that there has to be some exquisite blend between beauty and practicality. There’s no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those I don’t think really get what computing is all about. It’s like saying I have really great ideas for paintings, but I’m just going to use a brush, but no paint. You know, so my ideas will be represented by the gestures I make over the paper; and don’t tell any 20th century artists that, or they might decide to make a videotape of them doing that and put it in a museum.

Edit 4-5-2012: Case in point. I found this news feature segment about some air guitar enthusiasts who actually hold a national competition to see who’s the “best” at it. The winners go on to an international competition in Finland! Okay, it’s performance art, but this is like asking, “How great can you fake it?” It’s a glorified karaoke competition, except it’s worse, because there’s no attempt to express anything but gestures. When I first heard about this, I thought it was satire like the movie Spinal Tap, but no, it’s real.

I’ve been surprised when I’ve seen some piece of media come along that comes pretty darn close to illustrating one of Kay’s ideas. A while back I found a video clip of a Norwegian comedy show taking the learning curve of today’s novice computer users and making a media analogy to the “introduction of the book” after Gutenberg brought the printing press to Europe in the Middle Ages. Kay had always said that personal computers are a new form of media, and I thought this skit got the message across in a way that most people could understand, at least from having experienced the version of “personal computer media” that you can buy at a retail outlet or through mail order.

South Park is a show that’s been a favorite of mine for many years. It’s an odd mix of “pop culture with a message.” Despite the fact that it’s low brow and often offensive, on a few occasions it has been surprisingly poetic about real issues in our society. The show is about a group of kids going through life, misunderstanding things, playing pranks on each other, and getting in trouble. It also shows them trying to be powerful, trying to help, and trying to learn. Maybe that’s what interests me about it. It’s unclear what “grade level” the kids are at. There was one season where they were in “4th grade.” So I guess that gives you an idea.

I’ve included links to some clips of an episode I’ll talk about. The clips are from Comedy Central’s site. Fair warning: If you are easily offended, I would not encourage you to watch them. There is some language in the video that could offend.

Season 11, episode 13 is one where the kids buy a console video game system and play a game on it called “Guitar Hero”. The game is played with controllers that look like small electric guitars; you press their switches and buttons in the right sequence, and with the right timing, along to real music played by the game console.

What I’m going to say about it is my own interpretation, based on my own life experience.

What’s interesting to me is a parent of one of the kids tries to engage the group in learning how to play real music on a real instrument, but the kids are not interested.

http://www.southparkstudios.com/clips/155857/guitar-hero

The dad wonders what’s so special about the game, and that night sneaks down and tries it himself, showing what a bad fit a real musician is in this “air guitar” culture (or showing what a piece of crap it is).

http://www.southparkstudios.com/clips/155858/randy-sucks

I have the feeling this episode is based on a movie, though I don’t know which one. There are other parts not shown in these clips that dramatize betrayal between two friends, and reconciliation. Kind of your typical “buddy movie” plot line. These next clips show the “wider world” discovering the talent of these kids playing the game.

http://www.southparkstudios.com/clips/155861/rock-stars

This gets to what I think the pop culture promotes. Even though it’s pretty empty, it makes you feel like you are accomplishing something, and getting something out of it. You are rewarded for “going through the motions,” “making the right gestures.”

If this next clip doesn’t scream “air guitar”, I don’t know what does.

http://www.southparkstudios.com/clips/163728/acoustic-guitar-hero

I won’t show the ending, but it shows how utterly empty and worthless the whole “air guitar” exercise is. It’s not real!

I think the reason this episode had some meaning for me is that what plays out feels kind of like my past experience as a software developer. Not that software development is an “air guitar” exercise in and of itself. Far from it. What I’m getting at is that the wider computing culture with respect to software development is like this. Those of us who care about our craft are trying to play “good music,” often with bad instruments. In my case, I’m still learning what “good music” is. We may be with a good “band,” but most “bands” have “band managers” who don’t know a thing about “good music.” That’s why it seems like such a struggle.

“Real music” with good instruments in computing is available for those who seek it. You won’t find it in most programming languages, programming web sites, symposia, or tools. The idea of “good music” with “good instruments” doesn’t get much support, so it’s hard to find. Unfortunately the reality is that in order to be truly educated you have to seek out a real education. Just “going along for the ride” in the school system will usually leave you thinking the pop culture is the real thing. Seeking a real education, being your own learner, is much more rewarding.

Is OOP all it’s cracked up to be?: My guest post on ZDNet

Paul Murphy saw fit to give me another guest spot on his blog, with a post called “The tattered history of OOP”, talking about the history of OOP practice, where the idea came from, and how industry has implemented it. If you’ve been reading my blog this will probably be review. I’m just spreading the message a little wider.

Paul has an interesting take on the subject. He thinks OOP is a failure in practice because, the way it’s been implemented, it’s just another way to make procedure calls. I agree with him for the most part. He’s informed me that he’s going to put up another post soon that gets further into why he thinks OOP is a failure. I’ll update this post when that’s available.

In short, where I’m coming from is that OOP, in the original vision created at Xerox PARC, still has promise. The implementations most developers use today have architectural problems that the PARC version did not, and they still promote a mode of thinking that’s compatible with procedural programming.
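To make that last point a bit more concrete, here is a minimal sketch in Smalltalk. The Account class and its messages are my own invention for illustration; they’re not from Murphy’s post or from any PARC code, and the methods are written with the usual ClassName >> selector convention for showing where a method lives. The first snippet below treats the object as a record that outside code reads from before making the decision itself, which is procedural thinking wearing object syntax. The second sends one message and lets the receiver decide.

    "A hypothetical class, defined Pharo-style, with one instance variable."
    Object subclass: #Account
        instanceVariableNames: 'balance'
        classVariableNames: ''
        package: 'Sketch'

    "Plain accessors, so outside code can read and set the balance."
    Account >> balance
        ^ balance

    Account >> balance: aNumber
        balance := aNumber

    "The object's own behavior: the decision lives behind the message."
    Account >> discountRate
        ^ balance > 10000
            ifTrue: [ 0.1 ]
            ifFalse: [ 0 ]

    "In a workspace:"
    | account rate |
    account := Account new.
    account balance: 15000.

    "Style 1: procedural thinking with object syntax. The caller pulls the
     data out through an accessor and makes the decision itself."
    rate := 0.
    account balance > 10000
        ifTrue: [ rate := 0.1 ].

    "Style 2: message passing. The caller says what it wants; the receiver
     decides how to answer, and a different kind of account could answer
     the same message in its own way."
    rate := account discountRate.

The difference isn’t the syntax; it’s where the decision lives. In the second style the logic sits behind the message, so it can vary by receiver, which is the part of the original vision that tends to get lost when objects are used mainly as passive data structures operated on from the outside.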

Update 6/3/08: Paul Murphy’s response to my article is now up, called “Oddball thinking about OOP”. He argues that OOP is a failure because it’s an idea that was incompatible with digital computing to begin with, and is better suited to analog computing. I disagree that this is the reason for its failure, but to each their own.

Update 8/1/09: I realized later I may have misattributed a quote to Albert Einstein. Paul Murphy talked about this in a post sometime after “The tattered history of OOP” was published. I said that insanity is, “Doing the same thing over and over again and expecting a different result.” Murphy said he realized that this was misattributed to Einstein. I did a little research myself and it seems like there’s confusion about it. I’ve found sites of quotations that attribute this saying to Einstein. On Wikipedia though it’s attributed to Rita Mae Brown, a novelist, who wrote this in her 1983 book, Sudden Death. I don’t know. I had always heard it attributed to Einstein, though I agree with the naysayers that no one has produced a citation that would prove he said it.