My journey, Part 3

See Part 1, Part 2

College

I went to Colorado State University in 1988. As I went through college, I forgot about my fantasies of computers changing society. I was focused on writing programs more sophisticated than anything I had written before, appreciating architectural features of software and emulating them in my own projects, and learning computer science theory.

At the time, I thought my best classes were a basic hardware class, Data Structures, Foundations of Computer Architecture, Linguistics (a non-CS course), a half-semester course on the C language, Programming Languages, and a graduate-level course on compilers. Of all of them, the last two felt the most rewarding. I had the same professor for both. Maybe that wasn’t a coincidence.

In my second year I took a course called Comparative Programming Languages, where we surveyed Icon, Prolog, Lisp, and C. My professor for the class was a terrible teacher. There didn’t appear to be much point to the course besides exposing us to these languages. To make things interesting (for himself, I think), he assigned problems that were inordinately hard compared to my other CS courses. I got through Icon and C fine. Prolog gave me a few problems, but I was able to get the gist of it. Fortunately, I was taking the half-semester C course at the same time; otherwise, I doubt I would’ve gotten through my C assignments.

Lisp was the worst! I had never before encountered a language I couldn’t tackle, but Lisp confounded me. We got some supplemental material on it in class, but it didn’t do much to help me relate to the language. What made it even harder was that our professor insisted we use it in the functional style: no set, setq, or anything like them was allowed, and all loops had to be written recursively. We had two assignments in Lisp, and I didn’t complete either one. I felt utterly defeated and vowed never to look at the language again.
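
To give a sense of what that constraint felt like, here’s a small sketch in C (the language that figures later in this story), not one of the actual assignments. The iterative version updates an accumulator in place; the functional-style version threads the running total through its arguments instead:

    #include <stdio.h>

    /* Iterative sum: mutates an accumulator, the kind of thing the
       functional-style rules forbade. */
    int sum_iterative(const int *xs, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += xs[i];
        return total;
    }

    /* Recursive sum: no mutation; the "loop state" lives entirely in
       the arguments, the way our Lisp assignments required. */
    int sum_recursive(const int *xs, int n)
    {
        if (n == 0)
            return 0;
        return xs[0] + sum_recursive(xs + 1, n - 1);
    }

    int main(void)
    {
        int xs[] = { 1, 2, 3, 4 };
        printf("%d %d\n", sum_iterative(xs, 4), sum_recursive(xs, 4));
        return 0;
    }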

My C class was fun. Our teacher had a loose curriculum, and the focus was on just getting us familiar with the basics. In a few assignments he would say “experiment with this construct.” There was no hard goal in mind. He just wanted to see that we had used it in some way and had learned about it. I loved this! I came to like C’s elegance.

I took Programming Languages in my fourth year. My professor was great. He described a few different types of programming languages and discussed some of their runtime models. He explained how functional languages worked, and Lisp made more sense to me after that. We looked at Icon, SML, and Smalltalk, doing a couple of assignments in each. He gave us a description of the Smalltalk system that stuck with me for years: in its original implementation it wasn’t just a language. It literally was the operating system of the computer it ran on. It had a graphical interface, and the system could be modified while it was running. This was a real brain twister for me. How could the user modify it while it was running?? I had never seen such a thing. The thought of it intrigued me, though. I wanted to know more, but couldn’t find any resources on it.

I fell in love with Smalltalk. It was my very first object-oriented language. We only got to use the language, not the system. We used GNU Smalltalk in its scripting mode. We’d edit our code in vi, and then run it through GNU Smalltalk on the command line. Any error messages or “transcript” output would go to the console.

I learned what I’d call a “Smalltalk style” of programming: creating object instances (nodes) that hold references to each other, each doing a very simple task, all working cooperatively to accomplish a larger goal. In one Smalltalk assignment I felt like I was creating my own declarative programming language of sorts. Nowadays we’d say I had created a DSL (Domain-Specific Language). Just the experience of doing this was great! I had no idea programming could be this expressive.
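
Smalltalk code wouldn’t mean much out of context here, but a rough C analogue of that node style, entirely my reconstruction rather than anything from the assignment, might look like this. Each node knows one small operation and holds a reference to a collaborator it hands its result to:

    #include <stdio.h>

    /* Each node does one simple job and delegates to the next, a rough
       C analogue of cooperating Smalltalk objects. */
    typedef struct Node Node;
    struct Node {
        int (*apply)(Node *self, int x);  /* this node's small task */
        Node *next;                       /* collaborator it defers to */
        int arg;
    };

    /* Hand the intermediate result to the next node, if there is one. */
    static int pass(Node *self, int x)
    {
        return self->next ? self->next->apply(self->next, x) : x;
    }

    static int add(Node *self, int x) { return pass(self, x + self->arg); }
    static int mul(Node *self, int x) { return pass(self, x * self->arg); }

    int main(void)
    {
        /* Wiring the nodes together reads almost declaratively:
           "times 3, then plus 1". */
        Node plus1  = { add, NULL,   1 };
        Node times3 = { mul, &plus1, 3 };
        printf("%d\n", times3.apply(&times3, 5));  /* (5 * 3) + 1 = 16 */
        return 0;
    }

Composing simple nodes like this is what gave the assignments their declarative, DSL-like feel.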

I took compilers in my fifth year. Here, CS started to take on the feel of math. Compiler design was expressed mathematically. We used the red “Dragon book”, Compilers: Principles, Techniques, and Tools, by Aho, Sethi, and Ullman. The book impressed me right away with this introductory acknowledgement:

This book was phototypeset by the authors using the excellent software available on the UNIX system. The typesetting command read:

pic files | tbl | eqn | troff -ms

pic is Brian Kernighan’s language for typesetting figures; we owe Brian a special debt of gratitude for accommodating our special and extensive figure-drawing needs so cheerfully. tbl is Mike Lesk’s language for laying out tables. eqn is Brian Kernighan and Lorinda Cherry’s language for typesetting mathematics. troff is Joe Ossanna’s program for formatting text for a phototypesetter, which in our case was a Mergenthaler Linotron 202/N. The ms package of troff macros was written by Mike Lesk. In addition, we managed the text using make due to Stu Feldman. Cross references within the text were maintained using awk created by Al Aho, Brian Kernighan, and Peter Weinberger [“awk” was named after the initials of Aho, Weinberger, and Kernighan — Mark], and sed created by Lee McMahon.

I thought this was really cool, because it felt like they were “eating their own dog food.”

We learned about the concepts of bootstrapping, cross-compilers for system development, LR and LALR parsers, bottom-up and top-down parsing, parse trees, pattern recognizers (lexers), stack machines, and so on.
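
Of those concepts, the stack machine is the easiest to show in a few lines. Here’s a toy evaluator in C for postfix expressions, my own illustration rather than anything from the course: operands are pushed, and each operator pops two values and pushes its result.

    #include <ctype.h>
    #include <stdio.h>

    /* A toy stack machine: evaluates a postfix (RPN) expression of
       single-digit operands, e.g. "34+2*" => (3 + 4) * 2 = 14. */
    int eval_rpn(const char *p)
    {
        int stack[64], top = 0;
        for (; *p; p++) {
            if (isdigit((unsigned char)*p)) {
                stack[top++] = *p - '0';   /* push an operand */
            } else {
                int b = stack[--top];      /* pop two operands... */
                int a = stack[--top];
                switch (*p) {              /* ...apply the operator */
                case '+': stack[top++] = a + b; break;
                case '-': stack[top++] = a - b; break;
                case '*': stack[top++] = a * b; break;
                case '/': stack[top++] = a / b; break;
                }
            }
        }
        return stack[0];
    }

    int main(void)
    {
        printf("%d\n", eval_rpn("34+2*"));  /* prints 14 */
        return 0;
    }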

For our semester project we had to implement a compiler for a Pascal-like language, and it had to be capable of handling recursion. Rather than generate assembly or machine code, we were allowed to generate C code, but it had to be generated as if it were 3-address code. We could use a couple of C constructs, but by and large it had to read like an assembly program. A couple of other rules: we had to build our own symbol table (in the compiler) and our own call stack (in the compiled program).
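
I no longer have the project, but “C generated as if it were 3-address code” meant something roughly like the following. This is a hypothetical reconstruction, not our actual output; the frame layout and names are invented for illustration:

    #include <stdio.h>

    int stack[1024];  /* the compiled program's own call stack */
    int sp = 0;       /* its stack pointer */

    /* Hypothetical generated code for "c := (a + b) * 4": one operation
       per statement, named temporaries, an explicit frame on the stack. */
    void compiled_body(void)
    {
        int t1, t2;
        t1 = stack[sp - 2];    /* t1 := a       */
        t2 = stack[sp - 1];    /* t2 := b       */
        t1 = t1 + t2;          /* t1 := t1 + t2 */
        t1 = t1 * 4;           /* t1 := t1 * 4  */
        stack[sp - 3] = t1;    /* c  := t1      */
    }

    int main(void)
    {
        sp = 3;                /* frame layout: [c, a, b] */
        stack[1] = 5;          /* a := 5 */
        stack[2] = 7;          /* b := 7 */
        compiled_body();
        printf("%d\n", stack[0]);  /* prints 48 */
        return 0;
    }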

We worked on our projects in pairs. We were taught some basics about how to use lex and yacc, but we weren’t told the whole story… My partner and I ended up using yacc as a driver for our own parse-tree-building routines. We wrote all of our code in C. We made the thing so complicated. We invented stacks for various things, like handling order of operations in mathematical expressions. We went through all this trouble, and then one day I happened to chat with one of my classmates, and he told me, “Oh, you don’t have to do all that. Yacc will do that for you.” I was dumbfounded. How come nobody told us this before?? Oh well, it was too late. It was near the end of the semester, and we had to turn in test results. As I remember it, even though the design was ad hoc, our compiler passed 4 out of 5 tests. The 5th, the one that exercised recursion, failed. Anyway, I did okay in the course, and that felt like an accomplishment.
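
What my classmate meant, I understood later, is that yacc has operator-precedence declarations built in. A minimal sketch of the idea (not our grammar, and it would still need a lexer and a main to run): the %left lines, listed from lowest precedence to highest, tell yacc to resolve order of operations itself, no hand-built stacks required.

    /* %left declares left-associative operators; later lines bind
       tighter, so '*' and '/' outrank '+' and '-'. */
    %token NUMBER
    %left '+' '-'
    %left '*' '/'

    %%
    expr : expr '+' expr   { $$ = $1 + $3; }
         | expr '-' expr   { $$ = $1 - $3; }
         | expr '*' expr   { $$ = $1 * $3; }
         | expr '/' expr   { $$ = $1 / $3; }
         | '(' expr ')'    { $$ = $2; }
         | NUMBER          { $$ = $1; }
         ;
    %%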

I wanted to do it up right, so after I graduated I took the time to rewrite the compiler, fully using yacc’s abilities. At the time I didn’t have the necessary tools on my Atari STe for the project, so I used Nyx, a free, publicly available Unix system that I could access via a modem over a straight serial connection (PPP hadn’t been invented yet). It was just like calling up a BBS, except I had shell access.

I structured everything cleanly in the compiler, and I got the bugs worked out so it could handle recursion.

A more sophisticated perspective

Close to the time I graduated, a mini-series called “The Machine That Changed The World” came out on PBS. What interested me about it was its focus on computer history. It filled in more of the story than I had found when I researched the subject in junior high and high school.

My favorite episode was “The Paperback Computer,” which focused on the research efforts that went into creating the personal computer, and the commercial products (primarily the Apple Macintosh) that came from them.

It gave me my first glimpse ever of the work done by Douglas Engelbart, though it showed only a small slice: the invention of the mouse. Mitch Kapor, one of the people interviewed for the episode, pointed out that most people had never heard of Engelbart, yet he is the most important figure in computing when you consider what we are using today. The episode also gave me my first glimpse of the research done at Xerox PARC on GUIs, though there was no mention of the Smalltalk system (even though that’s the system you see on screen in that segment).

I liked the history lessons and the artifacts it showed, but the deeper ideas lost me. By the time I saw the series, I had already heard the idea that the computer was a new medium; it was mentioned sometimes in the computer magazines I read. I was unclear on what it really meant, though.

I had already experienced some aspects of this idea without realizing it, especially when I used 8-bit computers with Basic or Logo, which gave me a feeling of interactivity. Their responsiveness to the programmer was pretty good given their limited capabilities. Each felt like a machine I could mold and change into anything I wanted via programming, and that was what I liked most about using a computer. Not really understanding what a medium was, though, I assumed that when digital video and audio came along, with the predictions of digital TV over the “Information Superhighway,” that was what it was all about. I had fallen into the mindset a lot of people had at the time: the computer was meant to automate old media.

Part 4
