Archive for January, 2009

I’ve been coming across videos lately that delve into the creative ideas that inspired research projects which were out of this world for their time, projects that give me feelings of inadequacy even today.

Two anniversaries happened in November 2008. One was the 40th anniversary of the idea of the Dynabook. The other was the 40th anniversary of Douglas Engelbart’s NLS demo.

Alan Kay – the Dynabook concept, 1968

Alan Kay gave a historical background on the ideas that led to the Dynabook concept. It’s 1 hour, 44 minutes.

Douglas Engelbart – NLS demo, 1968

At the Program for the Future conference, Alan Kay and Andy van Dam met to discuss the significance of Engelbart’s work and what still needs to be done. It’s 1 hour, 26 minutes, and it’s a great talk.

Here is a collection of video clips from the 40th anniversary event for the NLS demo, held by the Stanford Research Institute (SRI). The original members of the NLS development team were in attendance to talk about the experience of building this amazing system, and they give more details about how it was constructed. One of Engelbart’s daughters, Christina, talks about the conceptual framework her father implemented through the process of building NLS: incremental improvement of both the group and the system. NLS was intended to increase the working power of professional groups through a concept Engelbart called “augmentation”, augmenting the human intellect. His goals were similar to Licklider’s concept of human-computer symbiosis.

My thanks go to Rosemary Simpson of Brown University for providing these links. This is great stuff.

In an interview on NerdTV a few years ago, Douglas Engelbart talked about the struggle he went through to implement his vision. It’s a sad tale, with occasional triumphs. It’s good to see his efforts getting public recognition in the present day.

You can learn more about Engelbart’s work at the Doug Engelbart Institute. Of particular interest, I think, is the library section. You can view the complete 1968 demo there, along with the papers he wrote. It’s interesting to note when the papers were written, because his concepts of what was possible with the system he envisioned will sound familiar to people who are accustomed to today’s computer technology. Considering what was the norm in computing at the time, this is amazing.

Related post: Great moments in modern computer history


See Part 1, Part 2, Part 3, Part 4, Part 5

What happened to my dreams?

I was working for a company as a contractor a few years ago, and the job was turning sour. I quit the job several months later. While I was working I started to wonder why I had gotten into this field. I didn’t ask this question in a cynical way. I mean, I wondered why I was so passionate about it in the first place. I knew there was a good reason. I wanted that back. I clearly remembered a time when I was excited about working with computers, but why? What did I imagine it would be like down the road? Maybe it was just a foolish dream, and what I had experienced was the reality.

I started reviewing stuff I had looked at when I was a teen, and reminiscing some (this was before I started this blog). I was trying to put myself back in that place where I felt like computing had an exciting, promising future.

Coming full circle

One day while reading a blog I frequented, I stumbled upon a line of inquiry that grabbed my interest. I pursued it on the internet, and got more and more engrossed. I was delving back into computer science, and realizing that it really was interesting to me.

When I first entered college I wasn’t sure that CS was for me, but I found it interesting enough to continue with it. For the most part I felt like I got through it, rather than engrossing myself in it by choice, though it had a few really interesting moments.

The first article I came upon that really captured my imagination was Paul Graham’s essay, “Beating the Averages”. I had never seen anyone talk about programming this way before. The essay was about Viaweb, a company he and a business partner built and sold to Yahoo!; how they wrote its web application in Lisp; and how that was their competitive advantage over their rivals. It convinced me to give Lisp a second chance.

A desire had been building in me to write my thoughts somewhere about technology issues, particularly around .Net development and Windows technical issues. So I decided to start this blog, and came up with the name “Tekkie” by combining two words: techie and trekkie. Writing about those topics was my original intent when I started it, but I was going through a transformative time. My new interest was too nascent for me to see it for what it was.

I dug out my old Lisp assignments from college. I still had them. I also pulled out my old Smalltalk assignments. They were in the same batch. Over the course of a few weeks I committed myself to learning a little Common Lisp and solving one of the Lisp problems that once gave me fits. I succeeded! For some reason it felt good, like I had conquered an old demon.

Next, I came upon Ruby via a podcast. A guest on the show referenced an online Ruby tutorial. I gave it a try and thoroughly enjoyed it. I had had a taste of programming with dynamic languages (when I actually had some good material with which to learn), and I liked it. I wondered if I could find work developing web apps with one. I watched an impressive podcast on Rails, and so decided to learn the language and try my hand at it.

I had a conversation with an old college friend about Lisp and Ruby. He had been trying to convince me for a few years to give Ruby a try. I told him that Ruby reminded me of Smalltalk, and I was interested in seeing what the full Smalltalk system looked like, since I had never seen it. He told me that Squeak was the modern version, and it was free and openly available.

As I was going along, continuing to learn about Ruby and Rails, I was discovering online video services, which were a new thing at the time: YouTube and Google Video. I had always wanted to see the event where Steve Jobs introduced the first Macintosh, in full. I had seen a bit of it in Triumph of the Nerds. I found it on Google Video. Great! I think one of the “related” videos it showed me was on the Smalltalk system at Xerox PARC. I watched that with excitement. Finally I was going to see this thing! I think one of the “related” videos there was a presentation by Alan Kay. The name sounded familiar. I watched that, and then a few more of his presentations. It gradually dawned on me that this was the “mystery man” I had seen on that local cable channel back in 1997!

I had heard of Kay years ago when I was a teenager. His name would pop up from time to time in computer magazine articles (no pictures though), but nothing he said then made an impression on me. I remember using a piece of software he had written for the Macintosh classic, but I can’t for the life of me remember what it was.

I had heard about the Dynabook within a few years of when I started programming, but the concept of it was very elusive to me. It was always described as “the predecessor to the personal computer” or something like that. For years I thought it had been a real product. I wondered, “Where are these Dynabooks?” Back in the 1980s I remember watching an episode of The Computer Chronicles and some inventor came on with an early portable Mac clone he called the “Dynamac”. I thought maybe that had something to do with it…

“What have I been doing with my life?”

So I watched a few of Kay’s presentations. Most of them were titled The Computer Revolution Hasn’t Happened Yet. I guess he was on a speaking tour with this subject. He had given them all between 2003 and 2005. And then I came upon the one he did for ETech 2003:

This blew me away. By the end of it I felt devastated. There was a subtle, electric feeling running through me, and at the same time a feeling of my own insignificance. I just sat in stunned silence for a moment. What I had seen and heard was so beautiful. In a way it was as if my fondest hopes, ones I used to dare not hope, had come true. I saw Kay demonstrate Squeak, and I saw the meaning of what he was doing with it. I saw him demonstrate Croquet, and it reminded me of a prediction I had made about 11 years earlier that 3D user interfaces would be created. It was so gratifying to see a prototype of that in the works. What amazed me was that it was all written in Squeak (Smalltalk).

I remembered that when I was a teenager I wished that software development would work like what I saw with Squeak (EToys), that I could imagine what I wanted and make it come to life in front of me quickly. I did this through my programming efforts back then, but it was a hard, laborious process. Kay’s phrase rang in my mind: “Why isn’t all programming like this?”

I also realized that nothing I had done in the past could hold a candle to what I had just seen, and for some reason I really cared about that. I had seen other amazing feats of software done in the past, but I used to just move on, feeling it was out of my realm. I couldn’t shake this demo. About a week later I was talking with a friend about it and the words for that feeling from that moment came out: “What have I been doing with my life?” The next thing I felt was, “I’ve got to find out what this is!”

Alan Kay captured the experience well in “The Early History of Smalltalk”:

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results. [my emphasis in bold]

Kay’s presentation of the ideas in Sketchpad and in Engelbart’s work was also enlightening. Sketchpad was covered briefly in the textbook for a computer graphics course I took in college. It didn’t say much, just that it was the first interactive graphical environment that allowed the user to draw shapes with a light pen, and that it was the predecessor to CAD systems. Kay helped me realize, through this presentation and other materials he’s produced, that the meaning of Sketchpad was deeper than that. It was the first object-oriented system, and it had a profound influence on the design of the Smalltalk system.

I didn’t understand very much of what Kay said on the first viewing. I revisited it several times after having time to think about what I had seen, read some more, and discuss these ideas with other people. Each time I came back to it I understood what he meant a little more. Even today I get a little more out of it.

After having the experience of watching it the first time, I reflected on the fact that Kay had shown a clip of the very same “mouse demo” with Douglas Engelbart that I remembered seeing in The Machine That Changed The World. I thought, “I wonder if I can find the whole thing.” I did. I was seeing a wonderful post on modern computer history come together, and so I wrote one called “Great moments in modern computer history”.

Reacquiring the dream

So with this blog I have documented my journey of discovery. The line of inquiry I followed has brought me what I sought earlier: to get back what made me excited to get into this field in the first place. It feels different though. It’s not the same kind of adolescent excitement I used to have. Through this study I remembered my young, naive fantasies about positive societal change via computer technology, and I’ve been gratified that Alan Kay wants something similar. I still like the idea that we can change, and realize the computer’s full potential, and that those benefits I imagined earlier might still be possible. I have a little better sense now (emphasis on “little”) of the kind of work that will be required to bring that about.

Having experienced what I have in the business world, I now know that technology itself will not bring about that change. Reading Lewis Mumford helped me realize that; I found out about him thanks to Alan Kay’s reading list, and his admonition that technologists should read. The culture is shaping the computer, and it in turn is shaping us, and at least in the business world we think of computers the way our forebears thought of machinery in factories. Looking back on my experience in the work world, that was one of my disappointments.

About “The Machine That Changed The World”

The series I mentioned earlier, “The Machine That Changed The World”, showed that this transformation has happened in some ways, but I contend that it’s spotty. Alan Kay has pointed to two areas where promising progress has been made. The first is the scientific community; he has said that the computer revolution is happening in science today, that the computer has “revolutionized” science. The second is video games, though he hasn’t been that pleased with the content. What they get right is that they create interactive worlds, and the designers are very cognizant of the players’ “flow”, making interaction with the simulated environment as easy and natural as possible, considering current controller technology.

Looking back on this TV series now I can see its significance. It tried to blend a presentation of the important ideas that were discovered about computing with the reality of how computers were perceived and used. It showed the impact they had on our society, and how perceptions of them changed over time.

In the first episode, “Great Brains”, the narrator puts the emphasis on the computer being a new medium:

5,000 years ago mankind invented writing, a way to record and communicate ideas. These simple marks on clay and paper changed the world, becoming the cornerstone of our intellectual and commercial lives. Today we may be witnessing the emergence of a new medium whose influence may one day rival that of writing.

A modern computer fits on a desk, is affordable and simple enough for a child to play on. But what to the child is a computer game, is to the computer just patterns of voltages. Today we take it for granted that such patterns can help architects to draw, scientists to model complex phenomena, musicians to compose, and even aid scholars to search the literature of the past. Yet a machine with such powers of transformation is unlike any machine in history.

Computers don’t just do things, like other machines. They manipulate ideas. Computers conjure up artificial universes, and even allow people to experience them from the inside, exploring a new molecule, or walking through an unbuilt building.

Doron Swade of the London Science Museum adds (the first part of this quote was unintelligible. I’m making a guess that I think makes sense here):

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality. And it seems that that has a close affinity with the way minds and consciousness work. The mind manipulates processes, images, ideas, notions. Exactly how it does that is of course not yet known. But there seems to be a strong kindred relationship between the manner in which computers process information and the analogy that that has with the way minds, and thinking, and consciousness seem to work. So they have a very special place, because they’re the closest we have to a mind-like machine that we have yet had.

The final episode, “The World At Your Fingertips”, ends by bringing the series full circle, back to the idea that the computer is a new medium, in a way that I think is beautiful:

The computer is not a machine in the traditional sense. It is a new medium. Perhaps predicting the future of computers is as hard as it would have been to predict the consequences of the marks Sumerians made on clay 4,000 years ago.

Paul Ceruzzi, a computer historian, says:

It’s ironic when you look at the history of writing, to find that it began as a utilitarian method of helping people remember how much grain they had. From those very humble, utilitarian beginnings came poetry, and literature, and all the kinds of wonderful things that we associate with writing today. Now we bring ourselves up to the invention of the computer. The very same thing is happening. It was invented for a very utilitarian, prosaic purpose of doing calculations, relieving the tedium and drudgery of cranking out numbers for insurance companies, or something like that. And now we begin to see this huge culture that’s grown up of people who are discovering all the things you can do by playing with the computer. We may see a time in the not too distant future when people will look at the computer’s impact on society, and they’ll totally forget its humble beginnings as a calculating device, but rather as something that enriches their culture in undreamed of ways just as literature and art is perceived today in this world.

This TV series was made before the internet became popular. The focus was on desktop computers and multimedia. As you can see, there was a positive outlook towards what the computer could become. Since then, I’m sad to say, the computer has faded into the background in favor of a terminal metaphor, one that is not particularly good at being an authoring platform, though there are a few good efforts being made at changing that. What this TV series brought out for me is that we have actually taken some steps backwards with the form that the internet has taken.

Perhaps I am closing a chapter with this series of posts. I’ve been meaning to move on from just talking about the ideas I’ve been thinking about, to something more concrete. What form that will take I’m not sure yet. I’ll keep writing here about it.


See Part 1, Part 2, Part 3, Part 4

Moments of inspiration

It was sometime in 1997, I think. One day I was flipping channels on my TV, and I happened upon an interview with a man who fascinated me. I didn’t recognize him. It was on a local cable channel. I caught the interview in the middle, and at no point did anyone say who he was. I didn’t care. I sat and watched with rapt attention. I was so impressed with what he was talking about that I hit Record on my VCR (I might still have the tape somewhere). The man appeared to be looking at an interviewer as he spoke, but I didn’t hear any questions asked. He seemed to be talking about the history of Western civilization, how it developed from the Middle Ages onward. He wove in the development of technology and how it influenced civilization. This was amazing to me.

I remember he said that children were considered adults at the age of 7, hundreds of years ago. When students went to college they wrote their own textbooks, which were their lecture notes. When they completed their textbooks, they got their degrees. I think he said after that they were considered worthy to become professors, and this was how knowledge perpetuated from one generation to the next.

Moving up into recent history, I remember he talked about his observation of societal trends in the 1990s. He said something about how in a democracy the idea was we should be able to discuss issues with each other, no matter how controversial, as if the argument was separate from ourselves. The idea was we were to consider arguments objectively. This way we could criticize each other’s arguments without making it personal. He said at the time that our society had entered a dangerous phase, and I think he said it was reminiscent of a time in Western civilization centuries ago, where we could not talk about certain issues without others considering those who brought them up a threat. I remember he said something like, “I’ve talked to President Clinton, and he understands this.”

He shifted to talking about technology concepts that were written about 40-50 years earlier. He talked about Vannevar Bush, and his paper “As We May Think”. He talked about Bush’s conceptual Memex machine, that it would be the size of a desk, and that he had come up with a concept we would now call hyperlinking. He remarked that Bush was a very forward-thinking man. He said Bush envisioned that information would be stored on “optical discs”. This phrase really jumped out at me, and it blew me away. I knew that the laser wasn’t invented until the 1950s. “How could Bush have imagined laserdiscs?”, I thought (I misunderstood).

He ended his talk with the idea of agents in a computer system, a concept that was written about in the 1960s. The idea was these would be programs that would search systems and networks for specific sets of information on behalf of a user. He named the author of a paper on it, but I can’t remember now. I think he said something about how even though these ideas were thought about and written about years ago, long before there was technology capable of implementing them, they were still being developed in the 1990s. The span of history he spoke about had an impact on me. In terms of the history related to technology concepts, all I could grasp was the idea of hyperlinking. The rest felt too esoteric.

Once I started interacting with customers in my work, I wanted to please them above all. Working on something and having the customer reject it was the biggest downer, even if the software’s innards were like a work of art to me. There was this constant tension within me, between creating an elegant solution and getting s__t done. I was convinced by my peers that my desire to create elegant solutions was just something eccentric about me. All code was good for was creating the end product. Who cared if it looked like spaghetti code? The customer certainly didn’t. I became convinced they were right. My operating philosophy became, “We may try for elegance, but ultimately the only thing that matters is delivering the product.” Really what it boiled down to was making the computer do something. We engineers cared how that happened, how it was done, but no one else did, and they still don’t.

I quit my job in 1999. I became interested in C++, because I saw that most programming want ads were requiring it. As an exercise to learn it, I decided to port a program I had written in C for my Atari STe back in 1993 to C++ for DOS.

Around 1992 I had watched a program on the formation of our solar system. My memory is it went into more detail than I had seen before. It talked about how the solid inner planets were formed from rocky material, and that they grew via meteor collisions. Previous explanations I’d seen had just focused on the “big picture”, saying that a cloud of gas and dust formed into a disc, that eddies formed in it, and that eventually planets condensed out of that. Very vague. I decided in 1993 to try to write a simulator that would model the “collision” interactions I heard described in the 1992 show, to see if it would work. I called it “orbit”. I created a particle system, where each object had the same mass and interacted with every other object gravitationally, using Newton’s formula. Unfortunately my Atari was too slow to really do much with this. I was able to get a neat thing going where I could get one object to orbit another one that was stationary. A college friend of mine, who got a degree in aeronautical engineering, helped me out with this. When I got into multiple objects, it wasn’t that interesting. My Atari could run at 16 MHz, but this enabled maybe ten gravitational objects to be on the screen. If I did more than that the computer would really bog down.

Re-writing it in C++ was nice. I could really localize functionality so that I didn’t have to worry about side-effects, and I enjoyed the ability to derive an object from another one and really leverage the base class. I got the simulator working at first on an old 33 MHz 386, and then on a 166 MHz Pentium I. I started with about 30-50 objects on screen at once. When I moved it up to the Pentium I was able to put a few hundred objects in it at one time. It ran at a decent speed. I randomized the masses as well, to make things a little more interesting.

I would just sit and watch the interactions. Out of the mass of dots on the screen I could focus on a few that were circling around each other, dancing almost. It was delightful to watch! It was the first time I was actually fascinated by something I had written. I had put some collision code in it so that if two or more objects hit each other they would form one mass (the mass being the sum of the colliders), and the new momentum would be calculated. I would sit and watch as a lot of objects bumped into each other, forming new masses. What tended to happen was not what I expected: eventually all of the masses would drift off the screen. I tried various ways of stopping this from happening, like making the objects wrap around (this just resulted in a few objects zipping by at a zillion miles per hour), or causing them to stop at the edges of the screen and then let gravity draw them back in (this ultimately didn’t work–they’d tend to congregate at the edges). The solution I finally hit upon was to “re-materialize” them at the center of the screen each time they drifted off. This seemed to create a “thriving” system that didn’t die. I had to concede that such a system was no longer realistic. It was still interesting though. Sometimes I’d let the simulator run for a few hours, and then I’d check back to see what had happened. In one case it spontaneously formed a stable orbiting system of a couple “planets”, with several objects scattered around the screen that were so massive they never moved. It didn’t form a solar system as we know it, but it did form a couple centers of gravity. Interesting.
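For anyone curious what that kind of update loop looks like, here’s a rough sketch of the approach I’ve been describing: pairwise Newtonian gravity, merging colliding objects into one mass with momentum conserved, and “re-materializing” strays at the center of the screen. This isn’t the original code, and the constants here are arbitrary, but the structure is the same idea:

```cpp
// A sketch (not the original "orbit" code) of one simulation step:
// pairwise gravity, Euler integration, merge-on-collision, re-materialize.
#include <cmath>
#include <vector>

struct Body {
    double x, y;    // position
    double vx, vy;  // velocity
    double mass;
};

const double G = 1.0;          // gravitational constant (arbitrary units)
const double DT = 0.01;        // time step
const double MERGE_DIST = 2.0; // collision radius
const double W = 640, H = 480; // "screen" bounds

void step(std::vector<Body>& bodies) {
    // Accumulate pairwise gravitational accelerations (Newton's formula).
    for (size_t i = 0; i < bodies.size(); ++i) {
        double ax = 0, ay = 0;
        for (size_t j = 0; j < bodies.size(); ++j) {
            if (i == j) continue;
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            double r2 = dx * dx + dy * dy + 1e-6; // soften to avoid blow-ups
            double r  = std::sqrt(r2);
            double a  = G * bodies[j].mass / r2;  // F/m_i = G*m_j/r^2
            ax += a * dx / r;
            ay += a * dy / r;
        }
        bodies[i].vx += ax * DT;
        bodies[i].vy += ay * DT;
    }
    for (auto& b : bodies) {
        b.x += b.vx * DT;
        b.y += b.vy * DT;
        // "Re-materialize" anything that drifts off screen back at the center.
        if (b.x < 0 || b.x > W || b.y < 0 || b.y > H) {
            b.x = W / 2;
            b.y = H / 2;
        }
    }
    // Merge colliding bodies: sum the masses, conserve momentum.
    for (size_t i = 0; i < bodies.size(); ++i) {
        for (size_t j = i + 1; j < bodies.size(); ) {
            double dx = bodies[j].x - bodies[i].x;
            double dy = bodies[j].y - bodies[i].y;
            if (std::sqrt(dx * dx + dy * dy) < MERGE_DIST) {
                double m = bodies[i].mass + bodies[j].mass;
                bodies[i].vx = (bodies[i].vx * bodies[i].mass + bodies[j].vx * bodies[j].mass) / m;
                bodies[i].vy = (bodies[i].vy * bodies[i].mass + bodies[j].vy * bodies[j].mass) / m;
                bodies[i].mass = m;
                bodies.erase(bodies.begin() + j);
            } else {
                ++j;
            }
        }
    }
}
```

Even at this level of simplicity you can see why my Atari bogged down: the gravity step is O(n²) in the number of objects.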

I had ideas about creating a scalable graphics display so that I could create a larger “universe” than just the dimensions of the screen, and perhaps see if things would work out without me having to resort to these tricks, but I didn’t get around to it.

Fast forward three years…

As I was trying to acquire skills and keep up, I’d listen to podcasts recorded by developers. This was just coming on the scene. It was one way I tried to keep up on the current trends. You don’t want to get behind the trend, lest you become irrelevant in the field. In a couple podcasts I heard the host ask a guest, “What software product that you’ve worked on would you like put on your tombstone? What would you like to be in your epitaph?” These were kind of profound questions. I tried asking them of myself, and I couldn’t answer them. Nothing I’d worked on for my work felt that significant compared to what else I saw out there in the commercial market. I tried asking myself, “If I could work anyplace I wanted, that I could think of, is there anything they’re working on that I’d like to be remembered for, if I had worked on it?” I couldn’t think of anything. They just didn’t seem that interesting. I put the question aside and continued on with my work.

Inside, though, I knew I wanted what I wrote (in code) to mean something, not just to the people who used it, but to other programmers as well. I did not wish this for myself in order to receive kudos. It was part of my own personal integrity. I was capable of just grinding out sloppy code if that was required of me, but I was embarrassed by it in the end. I could use development IDE tools and frameworks, which I initially embraced, but in the end I felt like a plumber. I was spending half my time with the technologies just connecting things together, and converting pieces of data from one thing to another.

I had a couple experiences in the work world that made me feel like my heart wasn’t in it anymore. There were good times as well. I had an opportunity to work with an excellent team of people for a few years at one place I worked in the 1990s. It wasn’t enough though. The only thing I could think to do was to continue my IT work. I didn’t have any alternative careers that I looked forward to, but dissatisfaction was growing within me.

Part 6


See Part 1, Part 2, Part 3

The real world

Each year while I was in school I looked for summer internships, but had no luck. The economy sucked. In my final year of school I started looking for permanent work, and I felt almost totally lost. I asked CS grads about it. They told me, “You’ll never find an entry-level programming job.” They had all landed software testing jobs as their entree into corporate software production. Something inside me said this would never do. I wanted to start with programming. I had the feeling I would die inside if I took a job where all I did was test software. About a year after I graduated I was proved right, when I took up test duties at my first job. My brain became numb with boredom. Fortunately that’s not all I did there, but I digress.

In my final year of college I interviewed with some major employers who came to my school: Federal Express, Tandem, Microsoft, NCR. I wasn’t clear on what I wanted to do, and realizing that was a bit earth-shattering. I had gone into CS because I wanted to program computers for my career. I didn’t face the “what” (what specifically did I want to do with this skill?) until I was about ready to graduate. I had so many interests. When I entered school I wanted to do application development. That seemed to be my strength. But since I had gone through the CS program, and found some things about it interesting, I wasn’t sure anymore. I told my interviewer from Microsoft, for example, that I was interested in operating systems. What was I thinking? I had taken a course on linguistics, and found it pretty interesting. I had taken a course called Programming Languages the previous year, and had a similar level of interest in it. I had gone through the trouble of preparing for a graduate-level course on language compilers, which I was taking at the time of the interview. It just didn’t occur to me to bring any of that up.

None of my interviews panned out. In hindsight, it was good this happened. Most of them didn’t really suit my interests. The problem was, who did?

Once I graduated with my Bachelor’s in CS in 1993, and had an opportunity to relax, some thoughts settled in my mind. I really enjoyed the Programming Languages course I had taken in my fourth year. We covered Smalltalk for two weeks. I thoroughly enjoyed it. At the time I had seen many want ads for Smalltalk, but they were looking for people with years of experience. I looked for Smalltalk want ads after I graduated. They had entirely disappeared. Okay. Scratch that one off the list. The next thought was, “Compilers. I think I’d like working on language compilers.” I enjoyed the class and I reflected on the fact that I enjoyed studying and using language. Maybe there was something to that. But who was working on language compilers at the time? Microsoft? They had rejected me from my first interview with them. Who else was there that I knew of? Borland. Okay, there’s one. I didn’t know of anyone else. I got the sense very quickly that while there used to be many companies working on this stuff, it was a shrinking market. It didn’t look promising at the time.

I tried other leads, and thought about other interests I might have. There was a company nearby called XVT that had developed a multi-platform GUI application framework (for an analogy, think wxWindows), which I was very enthusiastic about. While I was in college I talked with some fellow computer enthusiasts on the internet, and we wished there was such a thing, so that we didn’t have to worry about what platform to write software for. I interviewed with them, but that didn’t go anywhere.

For whatever reason it never occurred to me to continue with school, to get a master’s degree. I was glad to be done with school, for one thing. I didn’t see a reason to go back. My undergrad advisor subtly chided me once for not wanting to advance my education. He said, “Unfortunately most people can find work in the field without a masters,” but he didn’t talk with me in depth about why I might want to pursue that. I had this vision that I would get my Bachelor’s degree, and then it was just a given that I was going to go out into private industry. It was just my image of how things were supposed to go.

Ultimately, I went to work in what seemed like the one industry that would hire me: IT software development. My first big job came in 1995. At first it felt like my CS knowledge was very relevant, because I started out working on product development at a small company. I worked on adding features to, and refactoring, a reporting tool that used scripts for report specification (what data to get and what formatting was required). Okay, so I was working on an interpreter instead of a compiler. It was still a language project. That’s what mattered. Aside from having to develop it on MS-DOS (UGH!), I was thrilled to work on it.

It was very complex compared to what I had worked on before. It was written in C. It created more than 20 linked lists, and some of them linked to other lists via pointers! Yikes! It was very unstable. Anytime I made a change to it I could predict that it was going to crash on me, freezing up my PC every time and requiring me to reboot the machine. And we think now that Windows 95 was bad about this… I got so frustrated with this that I spent weeks trying to build some robustness into it. I finally hit on a way to make it fail gracefully by using a macro to check every single pointer reference before it got used.
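The macro itself is easy enough to reconstruct in spirit. Something along these lines (a sketch of the general idea, not the actual code from that project) checks a pointer before it gets followed, and bails out of the current function with an error message instead of letting the whole program, and the machine with it, go down:

```cpp
// A sketch (not the original project's macro): validate a pointer before
// dereferencing it; on failure, log the offending expression and location,
// then return an error value from the current function.
#include <cstdio>

#define CHECK_PTR(p, ret)                                              \
    do {                                                               \
        if ((p) == nullptr) {                                          \
            std::fprintf(stderr, "Null pointer: %s at %s:%d\n",        \
                         #p, __FILE__, __LINE__);                      \
            return (ret);                                              \
        }                                                              \
    } while (0)

// Hypothetical record type, standing in for the report tool's linked lists.
struct Record {
    Record* next;
    const char* name;
};

// Example use: every pointer gets checked before it's followed.
int print_names(Record* head) {
    CHECK_PTR(head, -1);
    for (Record* r = head; r != nullptr; r = r->next) {
        CHECK_PTR(r->name, -1);
        std::printf("%s\n", r->name);
    }
    return 0;
}
```

It’s crude, but sprinkling checks like this through code that juggles twenty-odd interlinked lists turns a hard machine freeze into a log message you can actually act on.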

I worked on other software that required a knowledge of software architecture, and the ability to handle complexity. It felt good. As in school, I was goal-oriented. Give me a problem to solve, and I’d do my best to do so. I liked elegance, so I’d usually try to come up with what I thought was a good architecture. I also made an effort to comment well to make code clear. My efforts at elegance usually didn’t work out. Either it was impractical or we didn’t have time for it.

Fairly quickly my work evolved away from product development. The company I worked for ended up discarding a whole system they’d spent two years developing. The reporting tool I worked on was part of that. We decided to go with commodity technologies, and I got more into the regular patterns of IT software production.

I got a taste of programming for Windows, and I was surprised. I liked it! I had already developed a bias against Microsoft software at the time, because my compatriots in the field had nothing but bad things to say about their stuff. I liked developing for an interactive system though, and Windows had a large API that seemed to handle everything I needed to deal with, without me having to invent much of anything to make a GUI app work. This was in contrast to GEM on my Atari STe, which was the only GUI API I knew about before this.

My foray into Windows programming was short lived. My employer found that I was more proficient in programming for Unix, and so pigeon-holed me into that role, working on servers and occasionally writing a utility. This was okay for a while, but I got bored of it within a couple years.

Triumph of the Nerds

Around 1996 PBS showed another mini-series, on the history of the microcomputer industry, focusing on Apple, Microsoft, and IBM. It was called Triumph of the Nerds, by Robert X. Cringely. This one was much easier for me to understand than The Machine That Changed The World. It talked about a history that I was much more familiar with, and it described things in terms of geeky fascination with technology, and battles for market dominance. This was the only world I really knew. There weren’t any deep concepts in the series about what the computer represented, though Steve Jobs added some philosophical flavor to it.

My favorite part was where Cringely talked about the development of the GUI at Xerox PARC, and then at Apple. Robert Taylor, Larry Tesler, Adele Goldberg, John Warnock, and Steve Jobs were interviewed. The show talked mostly about the work environment at Xerox (how the researchers worked together, and how the executives “just didn’t get it”), and about the Xerox Alto computer. There was a brief clip of the GUI they had developed (Smalltalk), and Adele Goldberg briefly mentioned the Smalltalk system in relation to the demo Steve Jobs saw, though you’d have to know the history better to really get what was said about it. Superficially one could take away from it that Xerox had developed the GUI and Apple used it as inspiration for the Mac, but there was more to the story than that.

Triumph of the Nerds was the first time I had seen the 1984 unveiling of the first Macintosh. I had read about it shortly after it happened, but I saw no pictures and no video. It was really neat to see. Cringely managed to give a feel for the significance of that moment.

Part 5
