“Pascal is for building pyramids–imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms–imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place.”
– from the foreword written by Alan Perlis for
Structure and Interpretation of Computer Programs
In this post I’m going to talk mostly about a keynote speech Alan Kay gave at the 1997 OOPSLA conference, called “The Computer Revolution Hasn’t Happened Yet,” how he relates computing to biological systems, and his argument that software systems, particularly OOP, should emulate biological systems. In part 1 I talked a bit about Peter Denning’s declaration that “Computing is a Natural Science”.
Edit 11/15/2012: A video of Alan Kay’s speech existed for the past 6 years or so, but sadly it’s now gone. Fortunately, I can still talk about it, as I quoted him verbatim.
Please note: There is a bit of mature language in this speech. It’s one sentence. If you are easily offended, errm…avert your eyes, whatever. It’s where he’s talking about Dr. Dijkstra.
This speech reminds me that when I started hearing about gene therapies being used to treat disease, I got the idea that this was a form of programming–reprogramming cells to operate differently, using the “machine code” of the cell: DNA.
While scientists are discovering that there’s computation going on around us in places we did not suspect, Kay was, and still is, trying to get us developers to make our software “alive,” capable of handling change and growth–to scale–without being disturbed to a significant degree. That’s what his conception of OOP was based on to begin with. A key part of this is a late-bound system which allows changes to be made dynamically. Kay makes the point as well that the internet is like this, though he only had a tiny hand in it.
One of the most striking things to me that he’s touched on in this speech, which he’s given several times, though it’s a bit different each time, is that computing introduces a new kind of math, a finite mathematics that he says is more practical and more closely resembles things that are real, as opposed to the unbounded quality of classical mathematics, which often does not resemble them. Maybe I’m wrong, but it seems to me from what he says that he sees this finite math as better than classical math at describing the world around us. As noted in my previous post, Peter Denning says in his article “Computing is a Natural Science”:
Stephen Wolfram proclaimed [in his book A New Kind of Science] that nature is written in the language of computation, challenging Galileo’s claim that it is written in mathematics.
So what’s going on here? It definitely sounds like CS and the other sciences are merging somewhat. I learned certain finite math disciplines in high school and college, which related to my study of computer science. I never expected them to be substitutes for classical math in the sciences. If someone can fill me in on this I’d be interested to hear more. I don’t know why, but a memory just flashed through my head: that scene in “Xanadu” where a band from the 40s and a band from the 80s merge together into one performance. They’re singing different songs, but they mesh (video link). Crazy what one’s mind can come up with, huh. I know, I know. “Xanadu” was a cheesy movie, but this scene was one of the better ones in it, if you can tolerate the fashions.
I think this speech has caused me to look at OOP in a new way. When I was first training myself in OOP years ago, I went through the same lesson plan I’m sure a lot of you did, which was “Let’s make a class called ‘Mammal’. Okay, now let’s create a derived class off of that called ‘Canine’, and derive ‘Dog’ and ‘Wolf’ off of that,” or, “Let’s create a ‘Shape’ class. Now derive ‘Triangle’ and ‘Ellipse’ off of that, and then derive ‘Circle’ from ‘Ellipse’.” What these exercises were really doing was teaching the structure and techniques of OOP, and showing some of their practical advantages, but none of it taught you why you were doing it. Kay’s presentation gave me the “why”, and he makes a pretty good case that OOP is not merely a neat new way to program, but is in fact essential for developing complex systems, or at least the best model discovered so far for doing it. He also points out that the way OOP is implemented today is missing one critical element–dynamism, due to the early-bound languages/systems we’re using, and that a lot of us have misunderstood what he intended OOP to be about.
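For anyone who hasn’t been through that style of lesson, here’s a minimal sketch of the second exercise in Python (the class names come from the lesson plan described above; the constructors and area methods are my own illustration):

```python
import math

# Hypothetical lesson-plan hierarchy: Shape is the abstract base,
# and Circle is derived from Ellipse, just as in the exercise.
class Shape:
    def area(self):
        raise NotImplementedError

class Ellipse(Shape):
    def __init__(self, a, b):
        self.a, self.b = a, b          # semi-axes
    def area(self):
        return math.pi * self.a * self.b

class Circle(Ellipse):
    def __init__(self, r):
        super().__init__(r, r)         # a circle is an ellipse with equal axes

class Triangle(Shape):
    def __init__(self, base, height):
        self.base, self.height = base, height
    def area(self):
        return 0.5 * self.base * self.height

# Polymorphism: each object answers the same message in its own way.
shapes = [Circle(1), Ellipse(2, 3), Triangle(4, 5)]
areas = [s.area() for s in shapes]
```

Exercises like this teach the structure and the mechanics of inheritance and polymorphism, but, as I said, they never explained why you were doing it.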
Alan Kay relates the metaphor of biological systems to software systems, from his speech at the 1997 OOPSLA conference
If you don’t want to listen to the whole speech, I’ve printed some excerpts from it below, only including the parts that fit the biological metaphor he was getting across. I’ve included my own annotations in [brackets], and commentary in between excerpts. I’ve also included some links.
Dr. Kay gets into this discussion, using a spat he and Dr. Dijkstra got into some years before over who were the better computer scientists, the Europeans or the Americans, as a launching point. Dijkstra made the point that European computer scientists had more of a math background than Americans did, and they got their positions by working harder at it than Americans, etc. Kay responded with, “Huh, so why do we write most of the software, then?” (I’m paraphrasing). He continues:
Computers form a new kind of math. They don’t really fit well into classical math. And people who try to do that are basically indulging in a form of masturbation…maybe even realizing it. It was a kind of a practical math. The balance was between making structures that were supposed to be consistent of a much larger kind than classical math had ever come close to dreaming of attempting, and having to deal with [the] exact same problems that classical math of any size has to deal with, which is being able to be convincing about having covered all of the cases.
There’s a mathematician by the name of Euler [pronounced “Oiler”] whose speculations about what might be true formed 20 large books, and most of them were true. Most of them were right. Almost all of his proofs were wrong. And many PhDs in mathematics in the last and this century have been formed by mathematicians going to Euler’s books, finding one of his proofs, showing it was a bad proof, and then guessing that his insight was probably correct and finding a much more convincing proof. And so debugging actually goes on in mathematics as well.
And I think the main thing about doing OOP work, or any kind of programming work, is that there has to be some exquisite blend between beauty and practicality. There’s no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those I don’t think really get what computing is all about. It’s like saying I have really great ideas for paintings, but I’m just going to use a brush, but no paint. You know, so my ideas will be represented by the gestures I make over the paper; and don’t tell any 20th century artists that, or they might decide to make a videotape of them doing that and put it in a museum. . . .
Kay used a concept, called bisociation, from a book called “The Act of Creation”, by Arthur Koestler. Koestler draws out a “pink plane” with a trail of ants on it, which represents one paradigm, one train of thought, that starts out with an idea and works towards continuous improvement upon it. Kay said about it:
If you think about that, it means that progress in a fixed context is almost always a form of optimization, because if you were actually coming up with something new it wouldn’t have been part of the rules or the context for what the pink plane is all about. So creative acts generally are ones that don’t stay in the same context that they’re in. So he says every once in a while, even though you have been taught carefully by parents and by school for many years, you have a “blue” idea.
This introduces a new context.
[Koestler] also pointed out that you have to have something “blue” to have “blue” thoughts with, and I think this is something that is generally missed in people who specialize to the extent of anything else. When you specialize you’re basically putting yourself into a mental state where optimization is pretty much all you can do. You have to learn lots of different kinds of things in order to have the start of these other contexts.
This is the reason he says a general education is important. Next, he gets into the biological metaphor for computing:
One of my undergraduate majors was in molecular biology, and my particular interest was both in cell physiology and in embryology–morphogenesis they call it today. And this book, The Molecular Biology of the Gene had just come out in 1965, and–wonderful book, still in print, and of course it’s gone through many, many editions, and probably the only words that are common between this book and the one of today are the articles, like “the” and “and”. Actually the word “gene” I think is still in there, but it means something completely different now. But one of the things that Watson did in this book is to make an assay–first assay of an entire living creature, and that was the E. Coli bacterium. So if you look inside one of these, the complexity is staggering.
He brings up the slide of an E. Coli bacterium as seen through an (electron?) microscope.
Those “popcorn” things are protein molecules that have about 5,000 atoms in them, and as you can see on the slide, when you get rid of the small molecules like water, and calcium ions, and potassium ions, and so forth, which constitute about 70% of the mass of this thing, the 30% that remains has about 120 million components that interact with each other in an informational way, and each one of these components carries quite a bit of information [my emphasis]. The simple-minded way of thinking of these things is it works kind of like OPS5 [OPS5 is an AI language that uses a set of condition-action rules to represent knowledge. It was developed in the late 1970s]. There’s a pattern matcher, and then there are things that happen if patterns are matched successfully. So the state that’s involved in that is about 100 Gigs., and you can multiply that out today. It’s only 100 desktops, or so [it would be 1/2-1/3 of a desktop today], but it’s still pretty impressive as [an] amount of computation, and maybe the most interesting thing about this structure is that the rapidity of computation seriously rivals that of computers today, particularly when you’re considering it’s done in parallel. For example, one of those popcorn-sized things moves its own length in just 2 nanoseconds. So one way of visualizing that is if an atom was the size of a tennis ball, then one of these protein molecules would be about the size of a Volkswagen, and it’s moving its own length in 2 nanoseconds. That’s about 8 feet on our scale of things. And can anybody do the arithmetic to tell me what fraction of the speed of light moving 8 feet in 2 nanoseconds is?…[there's a response from the audience] Four times! Yeah. Four times the speed of light [he moves his arm up]–scale. So if you ever wondered why chemistry works, this is why. The thermal agitation down there is so unbelievably violent, that we could not imagine it, even with the aid of computers.
There’s nothing to be seen inside one of these things until you kill it, because it is just a complete blur of activity, and under good conditions it only takes about 15 to 18 minutes for one of these to completely duplicate itself. Okay. So that’s a bacterium. And of course, lots more is known today.
Another fact to relate this to us, is that these bacteria are about 1/500th the size of the cells in our bodies, which instead of 120 million informational components, have about 60 billion, and we have between 10^12, maybe 10^13, maybe even more of these cells in our body. And yet only 50 cell divisions happen in a 9-month pregnancy. It only takes 50 cell divisions to make a baby. And actually if you multiply it out, you realize you only need around 40. And the extra 10 powers of 10 are there because during the embryological process, many of the cells that are not fit in one way or another for the organism as a whole are killed. So things are done by over-proliferating, testing, and trimming to this much larger plan. And then of course, each one of these structures, us, is embedded in an enormous biomass.
So to a person whose “blue” context might have been biology, something like a computer could not possibly be regarded as particularly complex, or large, or fast. Slow. Small. Stupid. That’s what computers are. So the question is how can we get them to realize their destiny?
So the shift in point of view here is from–There’s this problem, if you take things like doghouses, they don’t scale [in size] by a factor of 100 very well. If you take things like clocks, they don’t scale by a factor of 100 very well. Take things like cells, they not only scale by factors of 100, but by factors of a trillion. And the question is how do they do it, and how might we adapt this idea for building complex systems?
He brings up a slide of a basic biological cell. He uses the metaphor of the cell to talk about OOP, as he conceived it, and building software systems:
Okay, this is the simple one. This is the one by the way that C++ has still not figured out, though. There’s no idea so simple and powerful that you can’t get zillions of people to misunderstand it. So you must, must, must not allow the interior of any one of these things to be a factor in the computation of the whole, okay. And this is only part of this story. It’s not just the cell–the cell membrane is there to keep most things out, as much as it is there to keep certain things in.
And a lot of, I think, our confusion with objects is the problem that in our Western culture, we have a language that has very hard nouns and verbs in it. So our process words stink. So it’s much easier for us when we think of an object–I have apologized profusely over the last 20 years for making up the term “object-oriented”, because as soon as it started to be misapplied, I realized that I should’ve used a much more process-oriented term for it. Now, the Japanese have an interesting word, which is called “ma”, which spelled in English is M-A, “ma”. And “ma” is the stuff in between what we call objects. It’s the stuff we don’t see, because we’re focused on the noun-ness of things, rather than the process-ness of things, whereas Japanese has a more process/feel-oriented way of looking at how things relate to each other. You can always tell that by looking at the size of a word it takes to express something that is important. So “ma” is very short. We have to use words like “interstitial”, or worse, to approximate what the Japanese are talking about.
I’ve seen him use this analogy of the Japanese word “ma” before in more recent versions of this speech. What he kind of beats around the bush about here is that the objects themselves are not nearly as important as the messages that are sent between them. This is the “ma” he’s talking about–what is going on between the objects, their relationships to each other. To me, he’s also emphasizing architecture here. In a different version of this speech (I believe), he said that the true abstraction in OOP is in the messaging that goes on between objects, not the objects themselves.
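One way to make the messaging point concrete (this is my own toy illustration in Python, not anything from the speech): wrap an object so that every message sent to it is recorded. The wrapper shifts our attention from what the object is to the conversation going on between objects, the “ma”.

```python
class MessageLogger:
    """Wraps any object and records the messages sent to it.
    The point: what we can observe and design around is the
    traffic between objects, not their insides."""
    def __init__(self, target):
        self._target = target
        self.log = []

    def __getattr__(self, name):
        # Called only for attributes not found on the wrapper itself,
        # i.e. for messages meant for the wrapped object.
        def send(*args, **kwargs):
            self.log.append(name)
            return getattr(self._target, name)(*args, **kwargs)
        return send

account = MessageLogger([])   # wrap a plain list as a stand-in "object"
account.append(100)
account.append(250)
# account.log now records the conversation: ['append', 'append']
```

The wrapper never needs to know what class it holds; it only forwards messages, which is the sense in which the messaging, not the object, is the abstraction.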
So, the realization here–and it’s not possible to assign this realization to any particular person, because it was in the seeds of Sketchpad, and in the seeds of the Air Training Command file system, and in the seeds of Simula–and that is, that once you have encapsulated in such a way that there is an interface between the inside and the outside, it is possible to make an object act like anything, and the reason is simply this: that what you have encapsulated is a computer [my emphasis]. So you have done a powerful thing in computer science, which is to take the powerful thing you’re working on and not lose it by partitioning up your design space. This is the bug in data procedures–data and procedure languages. And I think this is the most pernicious thing about languages like C++ and Java, [which] is they think they are helping the programmer by looking as much like the old thing as possible, but in fact they’re hurting the programmer terribly, by making it difficult for the programmer to understand what’s really powerful about this new metaphor.
The “Air Training Command file system” refers to a way that data was stored on tapes back when Kay was in the Air Force, I think in the 1950s or early 1960s. It was effective and efficient. Some have pointed to it as the first example Kay had seen of an object-oriented system. The first section of tape contained pointers to routines (probably hard addresses to memory locations, in those days) whose code was stored in a second section of the tape, which read/manipulated the data on the third section of the tape. This way, it didn’t matter what format the data came in, since that was abstracted by the routines in the second section. No one knows who came up with the scheme.
And again, people who were doing time-sharing systems had already figured this out as well. Butler Lampson’s thesis in 1965 was about, that what you want to give a person on a time-sharing system is something that is now called a virtual machine, which is not the same as what the Java VM is, but something that is as much like the physical computer as possible, but give one separately to everybody. Unix had that sense about it, and the biggest problem with that scheme is that a Unix process had an overhead of about 2,000 bytes just to have a process. And so it’s going to be difficult in Unix to let a Unix process just be the number 3. So you’d be going from 3 bits to a couple of thousand bytes, and you have this problem with scaling.
So a lot of the problem here is both deciding that the biological metaphor [my emphasis] is the one that is going to win out over the next 25 years or so, and then committing to it enough to get it so it can be practical at all of the levels of scale that we actually need. Then we have one trick we can do that biology doesn’t know how to do, which is we can take the DNA out of the cells, and that allows us to deal with cystic fibrosis much more easily than the way it’s done today. And systems do have cystic fibrosis, and some of you may know that cystic fibrosis today for some people is treated by infecting them with a virus, a modified cold virus, giving them a lung infection, but the defective gene for cystic fibrosis is in this cold virus, and the cold virus is too weak to actually destroy the lungs like pneumonia does, but it is strong enough to insert a copy of that gene in every cell in the lungs. And that is what does the trick. That’s a very complicated way of reprogramming an organism’s DNA once it has gotten started.
He really mixes metaphors here, going from “cystic fibrosis” in computing to cystic fibrosis in a biological system. I got confused by what he said about the gene therapy. Perhaps he meant that the virus carries the “non-defective gene”? I thought what the therapy did was substitute a good gene for the bad one, since cystic fibrosis is a genetic condition, but then, I don’t really know how this works.
When he talks about “taking the DNA out of the cell,” and dealing with “cystic fibrosis,” I’m pretty sure he’s talking about a quality of dynamic OOP systems (like Smalltalk) that work like Ivan Sutherland’s Sketchpad program from the 1960s, where each object instance was based on a master object. If you modified the master object, that change got reflected automatically in all of the object instances.
He brought up another slide that looks like a matrix (chart) of characteristics of some kind. I couldn’t see it too well. I recognized the subject though. He begins to talk about his conception of the semantic web, using the same biological and object-oriented metaphor:
So here’s one that is amazing to me, that we haven’t seen more of. For instance, one of the most amazing things to me, of people who have been trying to put OOP on the internet, is that I do not–I hope someone will come up afterwards and tell me of an exception to this–but I do not know of anybody yet who has realized that at the very least every object should have a URL, because what the heck are they if they aren’t these things. And I believe that every object on the internet should have an IP, because that represents much better what the actual abstractions are of physical hardware to the bits. So this is an early insight, that objects basically are like servers. And this notion of polymorphism, which used to be called generic procedures, is a way of thinking about classes of these servers. Everybody knows about that.
He brought up another slide, showing the picture of a building crane on one side, and a collection of biological cells on the other. More metaphors.
And here’s one that we haven’t really faced up to much yet, that now we’ll have to construct this stuff, and soon we’ll be required to grow it. So it’s very easy, for instance, to grow a baby 6 inches. They do it about 10 times in their life. You never have to take it down for maintenance. But if you try and grow a 747, you’re faced with an unbelievable problem, because it’s in this simple-minded mechanical world in which the only object has been to make the artifact in the first place, not to fix it, not to change it, not to let it live for 100 years.
So let me ask a question. I won’t take names, but how many people here still use a language that essentially forces you–the development system forces you to develop outside of the language [perhaps he means "outside the VM environment"?], compile and reload, and go, even if it’s fast, like Virtual Cafe (sic). How many here still do that? Let’s just see. Come on. Admit it. We can have a Texas tent meeting later. Yeah, so if you think about that, that cannot possibly be other than a dead end for building complex systems, where much of the building of complex systems is in part going to go to trying to understand what the possibilities for interoperability is with things that already exist.
Now, I just played a very minor part in the design of the ARPANet. I was one of 30 graduate students who went to systems design meetings to try and formulate design principles for the ARPANet, also about 30 years ago, and if you think about–the ARPANet of course became the internet–and from the time it started running, which is around 1969 or so, to this day, it has expanded by a factor of about 100 million. So that’s pretty good. Eight orders of magnitude. And as far as anybody can tell–I talked to Larry Roberts about this the other day–there’s not one physical atom in the internet today that was in the original ARPANet, and there is not one line of code in the internet today that was in the original ARPANet. Of course if we’d had IBM mainframes in the original ARPANet that wouldn’t have been true. So this is a system that has expanded by 100 million, and has changed every atom and every bit, and has never had to stop! That is the metaphor we absolutely must apply to what we think are smaller things. When we think programming is small, that’s why your programs are so big! . . .
There are going to be dozens and dozens–there almost already are–dozens and dozens of different object systems, all with very similar semantics, but with very different pragmatic details. And if you think of what a URL actually is, and if you think of what an HTTP message actually is, and if you think of what an object actually is, and if you think of what an object-oriented pointer actually is, I think it should be pretty clear that any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made. That’s the whole point of not being able to see inside. And so a semantic interoperability is possible almost immediately by simply taking that stance. So this is going to change, really, everything. And things like Java Beans and CORBA are not going to suffice, because at some point one is going to have to start really discovering what objects think they can do. And this is going to lead to a universal interface language, which is not a programming language, per se. It’s more like a prototyping language that allows an interchange of deep information about what objects think they can do. It allows objects to make experiments with other objects in a safe way to see how they respond to various messages. This is going to be a critical thing to automate in the next 10 years. . . .
I think what he meant by “any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made” is that any OOP language can abstract its pointers so that there’s no distinction between pointers to local objects and URLs to remote objects. This would make dealing with local and remote objects seamless, assuming you’re using a late-bound, message-passing metaphor throughout your system. The seamlessness would not be perfect in an early-bound system, because the compiler would ensure that all classes for which there are messages existed before runtime, but there’s no way to ensure that every object on the internet is always going to be available, and always at the same URL/address.
You might ask, “Why would you want that kind of ‘perfect’ seamlessness? Don’t you want that insurance of knowing that everything connects up the way it should?” My answer is, things change. One of the things Kay is arguing here is that our software needs to be adaptable to change, both in itself, and in the “outside world” of the internet. He uses the internet as an example of a system that has been very robust in the face of change. Early-bound systems have a tendency to push the programmer towards assuming that everything will connect up just as it is today, and that nothing will change.
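That local/remote seamlessness is essentially the remote-proxy pattern. Here’s a toy sketch in Python, where the “network” is just a dictionary (all names here are made up; a real system would serialize messages over HTTP and handle targets moving or disappearing):

```python
class LocalRef:
    """A pointer to an in-process object."""
    def __init__(self, obj):
        self._obj = obj
    def send(self, message, *args):
        return getattr(self._obj, message)(*args)

class RemoteRef:
    """Stands in for an object living at a URL. Here the 'network'
    is just a class-level dict; a real proxy would serialize the
    message over the wire and cope with the target vanishing."""
    registry = {}   # url -> object: our pretend internet
    def __init__(self, url):
        self.url = url
    def send(self, message, *args):
        target = self.registry[self.url]  # resolved late, at send time
        return getattr(target, message)(*args)

# Client code sends messages without caring which kind of ref it holds:
RemoteRef.registry["obj://example/counter"] = [1, 2, 3]
refs = [LocalRef([10, 20]), RemoteRef("obj://example/counter")]
lengths = [r.send("__len__") for r in refs]
```

The client’s code is identical either way; only the failure modes differ, which is exactly where the early-bound assumption that everything will always connect up gives false comfort.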
As I heard him talk about objects on the web, the first thing that came to mind was REST (the whole “each object should have a URL” idea). It seems like the first try at an implementation of this idea of objects “experimenting” and “figuring out” what other objects on the network can do was started with the idea of WSDL and UDDI, but that flopped, from what I hear. Microsoft has made a bit of a start with this, in my opinion, with WCF, but it only determines what type of object is at the other end, as in “Is it DCOM?”, “Is it an XML web service?”, etc. It frees the programmer from having to do that. It still allows the client program to interrogate the object, but it’s not going to do “experiments” on it for you. It may allow the client program to do them more easily than before. I don’t know.
Kay ends with a note of encouragement:
Look for the “blue” thoughts. And I was trying to think of a way–how could I stop this talk, because I’ll go on and on, and I remembered a story. I’m a pipe organist. Most pipe organists have a hero whose name was E. Power Biggs. He kind of revived the interest in the pipe organ, especially as it was played in the 17th and 18th centuries, and had a tremendous influence on all of us organists. And a good friend of mine was E. Power Biggs’s assistant for many years, back in the 40s and 50s. He’s in his 80s now. When we get him for dinner, we always get him to tell E. Power Biggs stories. The organ that E. Power Biggs had in those days for his broadcasts was a dinky little organ, neither fish nor fowl, in a small museum at Harvard, called the Busch-Reisinger Museum; but in fact all manner of music was played on it. And one day this assistant had to fill in for Biggs, and he asked Biggs, “What is the piece [to be] played?” And he said, “Well I had programmed César Franck’s heroic piece [the Pièce héroïque]”. And if you know this piece it is made for the largest organs that have ever been made, the loudest organs that have ever been made, in the largest cathedrals that have ever been made, because it’s a 19th century symphonic type organ work. And Biggs was asking my friend to play this on this dinky little organ, and he said, “How can I play it on this?”, and Biggsy said, “Just play it grand! Just play it grand!” And the way to stay with the future as it moves, is to always play your systems more grand than they seem to be right now. Thank you.