Redefining computing, Part 2

“Pascal is for building pyramids–imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms–imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place.”

— from the foreword written by Alan Perlis for
Structure and Interpretation of Computer Programs

In this post I’m going to talk mostly about a keynote speech Alan Kay gave at the 1997 OOPSLA conference, called “The Computer Revolution Hasn’t Happened Yet,” how he relates computing to biological systems, and his argument that software systems, particularly OOP, should emulate biological systems. In part 1 I talked a bit about Peter Denning’s declaration that “Computing is a Natural Science”.

Please note: There is a bit of mature language in this speech. It’s one sentence. If you are easily offended, errm…plug your ears, avert your eyes, whatever. It’s where he’s talking about Dr. Dijkstra.

This speech reminds me that when I started hearing about gene therapies being used to treat disease, I got the idea that this was a form of programming–reprogramming cells to operate differently, using the “machine code” of the cell: DNA.

While scientists are discovering that there’s computation going on around us in places we did not suspect, Kay was, and still is, trying to get us developers to make our software “alive,” capable of handling change and growth–to scale–without being disturbed to a significant degree. That’s what his conception of OOP was based on to begin with. A key part of this is a late-bound system which allows changes to be made dynamically. Kay makes the point as well that the internet is like this, though he only had a tiny hand in it.
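To make the late-binding idea concrete, here's a minimal sketch in Ruby (itself a late-bound, Smalltalk-influenced language): the method an object runs is looked up at the moment of the message send, so behavior can be redefined while the "system" keeps running, and existing objects pick up the change. The Greeter class is my own made-up example, not something from the speech.

```ruby
# A minimal sketch of late binding: the method an object runs is looked up
# at call time, so we can change behavior while "the system" keeps running.
class Greeter
  def greet(name)
    "Hello, #{name}"
  end
end

g = Greeter.new
original_greeting = g.greet("world")   # uses the original definition

# "Upgrade" the live system: reopen the class and redefine the method.
# The existing instance g picks up the change -- no recompile, no reload.
class Greeter
  def greet(name)
    "Greetings, #{name}!"
  end
end

updated_greeting = g.greet("world")    # same object, new behavior
```

This is the property Kay says the internet has at a much larger scale: the parts can be replaced without stopping the whole.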

One of the most striking things to me that he's touched on in this speech, which he's given several times (it's a bit different each time), is that computing introduces a new kind of math: a finite mathematics that he says is more practical and more closely resembles real things, as opposed to classical mathematics, whose unbounded quality often does not. Maybe I'm wrong, but from what he says it seems he sees this finite math as better than classical math at describing the world around us. As noted in my previous post, Peter Denning says in his article "Computing is a Natural Science":

Stephen Wolfram proclaimed [in his book A New Kind of Science] that nature is written in the language of computation, challenging Galileo’s claim that it is written in mathematics.

So what's going on here? It definitely sounds like CS and the other sciences are merging some. I learned certain finite math disciplines in high school and college, which related to my study of computer science. I never expected them to be substitutes for classical math in the sciences. If someone can fill me in on this I'd be interested to hear more. I don't know, but this memory just flashed through my head: that scene in "Xanadu" where a band from the 40s and a band from the 80s merge together into one performance. They're singing different songs, but they mesh. Crazy what one's mind can come up with, huh. 🙂 I know, I know. "Xanadu" was a cheesy movie, but this scene was one of the better ones in it, if you can tolerate the fashions.

I think this speech has caused me to look at OOP in a new way. When I was first training myself in OOP years ago, I went through the same lesson plan I’m sure a lot of you did, which was “Let’s make a class called ‘Mammal’. Okay, now let’s create a derived class off of that called ‘Canine’, and derive ‘Dog’ and ‘Wolf’ off of that,” or, “Let’s create a ‘Shape’ class. Now derive ‘Triangle’ and ‘Ellipse’ off of that, and then derive ‘Circle’ from ‘Ellipse’.” What these exercises were really doing was teaching the structure and techniques of OOP, and showing some of their practical advantages, but none of it taught you why you were doing it. Kay’s presentation gave me the “why”, and he makes a pretty good case that OOP is not merely a neat new way to program, but is in fact essential for developing complex systems, or at least the best model discovered so far for doing it. He also points out that the way OOP is implemented today is missing one critical element–dynamism, due to the early-bound languages/systems we’re using, and that a lot of us have misunderstood what he intended OOP to be about.
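For anyone who hasn't been through that lesson plan, here is what it looks like sketched in Ruby. The class names mirror the exercises above; the method bodies are my own filler:

```ruby
# The classic OOP lesson plan: structure and technique (inheritance,
# overriding), without the "why".
class Shape
  def area
    raise NotImplementedError, "subclasses define area"
  end
end

class Ellipse < Shape
  def initialize(a, b)
    @a, @b = a, b    # semi-major and semi-minor axes
  end

  def area
    Math::PI * @a * @b
  end
end

class Circle < Ellipse
  def initialize(r)
    super(r, r)      # a circle is just an ellipse with equal axes
  end
end
```

It teaches you that `Circle.new(2).area` works without Circle defining `area` itself, but, as I say above, nothing about why you'd build systems this way.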

Alan Kay relates the metaphor of biological systems to software systems, from his speech at the 1997 OOPSLA conference

If you don’t want to listen to the whole speech, I’ve printed some excerpts from it below, only including the parts that fit the biological metaphor he was getting across. I’ve included my own annotations in []’s, and commentary in between excerpts. I’ve also included some links.

Dr. Kay gets into this discussion, using a spat he and Dr. Dijkstra got into some years before over who were the better computer scientists, the Europeans or the Americans, as a launching point. Dijkstra made the point that European computer scientists had more of a math background than Americans did, and they got their positions by working harder at it than Americans, etc. Kay responded with, “Huh, so why do we write most of the software, then?” (I’m paraphrasing). He continues:

Computers form a new kind of math. They don’t really fit well into classical math. And people who try to do that are basically indulging in a form of masturbation…maybe even realizing it. It was a kind of a practical math. The balance was between making structures that were supposed to be consistent of a much larger kind than classical math had ever come close to dreaming of attempting, and having to deal with [the] exact same problems that classical math of any size has to deal with, which is being able to be convincing about having covered all of the cases.

There’s a mathematician by the name of Euler [pronounced “Oiler”] whose speculations about what might be true formed 20 large books, and most of them were true. Most of them were right. Almost all of his proofs were wrong. And many PhDs in mathematics in the last and this century have been formed by mathematicians going to Euler’s books, finding one of his proofs, showing it was a bad proof, and then guessing that his insight was probably correct and finding a much more convincing proof. And so debugging actually goes on in mathematics as well.

And I think the main thing about doing OOP work, or any kind of programming work, is that there has to be some exquisite blend between beauty and practicality. There’s no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those I don’t think really get what computing is all about. It’s like saying I have really great ideas for paintings, but I’m just going to use a brush, but no paint. You know, so my ideas will be represented by the gestures I make over the paper; and don’t tell any 20th century artists that, or they might decide to make a videotape of them doing that and put it in a museum. . . .

Kay used a concept called bisociation, from a book called “The Act of Creation”, by Arthur Koestler. Koestler draws out a “pink plane” with a trail of ants on it, which represents one paradigm, one train of thought, that starts out with an idea and works towards continuous improvement upon it. Kay said about it:

If you think about that, it means that progress in a fixed context is almost always a form of optimization, because if you were actually coming up with something new it wouldn’t have been part of the rules or the context for what the pink plane is all about. So creative acts generally are ones that don’t stay in the same context that they’re in. So he says every once in a while, even though you have been taught carefully by parents and by school for many years, you have a “blue” idea.

This introduces a new context.

[Koestler] also pointed out that you have to have something “blue” to have “blue” thoughts with, and I think this is something that is generally missed in people who specialize to the exclusion of anything else. When you specialize you’re basically putting yourself into a mental state where optimization is pretty much all you can do. You have to learn lots of different kinds of things in order to have the start of these other contexts.

This is the reason he says a general education is important. Next, he gets into the biological metaphor for computing:

One of my undergraduate majors was in molecular biology, and my particular interest was both in cell physiology and in embryology–morphogenesis they call it today. And this book, The Molecular Biology of the Gene had just come out in 1965, and–wonderful book, still in print, and of course it’s gone through many, many editions, and probably the only words that are common between this book and the one of today are the articles, like “the” and “and”. Actually the word “gene” I think is still in there, but it means something completely different now. But one of the things that Watson did in this book is to make an assay–first assay of an entire living creature, and that was the E. Coli bacterium. So if you look inside one of these, the complexity is staggering.

He brings up the slide of an E. Coli bacterium as seen through an (electron?) microscope.

Those “popcorn” things are protein molecules that have about 5,000 atoms in them, and as you can see on the slide, when you get rid of the small molecules like water, and calcium ions, and potassium ions, and so forth, which constitute about 70% of the mass of this thing, the 30% that remains has about 120 million components that interact with each other in an informational way, and each one of these components carries quite a bit of information [my emphasis]. The simple-minded way of thinking of these things is it works kind of like OPS5 [OPS5 is an AI language that uses a set of condition-action rules to represent knowledge. It was developed in the late 1970s]. There’s a pattern matcher, and then there are things that happen if patterns are matched successfully. So the state that’s involved in that is about 100 Gigs., and you can multiply that out today. It’s only 100 desktops, or so [it would be 1/2-1/3 of a desktop today], but it’s still pretty impressive as [an] amount of computation, and maybe the most interesting thing about this structure is that the rapidity of computation seriously rivals that of computers today, particularly when you’re considering it’s done in parallel. For example, one of those popcorn-sized things moves its own length in just 2 nanoseconds. So one way of visualizing that is if an atom was the size of a tennis ball, then one of these protein molecules would be about the size of a Volkswagen, and it’s moving its own length in 2 nanoseconds. That’s about 8 feet on our scale of things. And can anybody do the arithmetic to tell me what fraction of the speed of light moving 8 feet in 2 nanoseconds is?…[there’s a response from the audience] Four times! Yeah. Four times the speed of light [he moves his arm up]–scale. So if you ever wondered why chemistry works, this is why. The thermal agitation down there is so unbelievably violent, that we could not imagine it, even with the aid of computers.
There’s nothing to be seen inside one of these things until you kill it, because it is just a complete blur of activity, and under good conditions it only takes about 15 to 18 minutes for one of these to completely duplicate itself. Okay. So that’s a bacterium. And of course, lots more is known today.
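To illustrate the OPS5 comparison Kay makes in the bracketed note above, here's a toy condition-action rule engine in Ruby. It's only a sketch of the pattern-matcher-plus-actions idea; the rule name and fact shape are made up for illustration.

```ruby
# A toy condition-action rule engine in the spirit of the OPS5 comparison:
# each rule is a pattern (condition) plus an action; asserting a fact
# matches it against every rule and fires the actions that match.
class RuleEngine
  Rule = Struct.new(:name, :condition, :action)

  def initialize
    @rules = []
  end

  def add_rule(name, condition, action)
    @rules << Rule.new(name, condition, action)
  end

  def assert_fact(fact)
    # Pattern-match every rule against the new fact; fire on success.
    @rules.each { |r| r.action.call(fact) if r.condition.call(fact) }
  end
end

engine = RuleEngine.new
fired = []
engine.add_rule("high-temp",
                ->(f) { f[:type] == :temperature && f[:value] > 100 },
                ->(f) { fired << "cool down (#{f[:value]})" })

engine.assert_fact(type: :temperature, value: 120)   # matches, fires
engine.assert_fact(type: :temperature, value: 50)    # no match, no action
```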

Another fact to relate this to us, is that these bacteria are about 1/500th the size of the cells in our bodies, which instead of 120 million informational components, have about 60 billion, and we have between 10^12, maybe 10^13, maybe even more of these cells in our body. And yet only 50 cell divisions happen in a 9-month pregnancy. It only takes 50 cell divisions to make a baby. And actually if you multiply it out, you realize you only need around 40. And the extra 10 powers of 10 are there because during the embryological process, many of the cells that are not fit in one way or another for the organism as a whole are killed. So things are done by over-proliferating, testing, and trimming to this much larger plan. And then of course, each one of these structures, us, is embedded in an enormous biomass.
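It's easy to check the cell-division arithmetic here: n synchronized doublings yield 2^n cells, and 40 doublings already exceed the roughly 10^12 cells he mentions.

```ruby
# 40 synchronized doublings already exceed ~10^12 cells; the 50-division
# figure shows the scale of the over-proliferation Kay describes.
cells_after_40 = 2**40    # 1,099,511,627,776 -- just over a trillion
cells_after_50 = 2**50    # 1,024 times more than that
```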

So to a person whose “blue” context might have been biology, something like a computer could not possibly be regarded as particularly complex, or large, or fast. Slow. Small. Stupid. That’s what computers are. So the question is how can we get them to realize their destiny?

So the shift in point of view here is from–There’s this problem, if you take things like doghouses, they don’t scale [in size] by a factor of 100 very well. If you take things like clocks, they don’t scale by a factor of 100 very well. Take things like cells, they not only scale by factors of 100, but by factors of a trillion. And the question is how do they do it, and how might we adapt this idea for building complex systems?

He brings up a slide of a basic biological cell. He uses the metaphor of the cell to talk about OOP, as he conceived it, and building software systems:

Okay, this is the simple one. This is the one by the way that C++ has still not figured out, though. There’s no idea so simple and powerful that you can’t get zillions of people to misunderstand it. So you must, must, must not allow the interior of any one of these things to be a factor in the computation of the whole, okay. And this is only part of this story. It’s not just the cell–the cell membrane is there to keep most things out, as much as it is there to keep certain things in.

And a lot of, I think, our confusion with objects is the problem that in our Western culture, we have a language that has very hard nouns and verbs in it. So our process words stink. So it’s much easier for us when we think of an object–I have apologized profusely over the last 20 years for making up the term “object-oriented”, because as soon as it started to be misapplied, I realized that I should’ve used a much more process-oriented term for it. Now, the Japanese have an interesting word, which is called “ma”, which spelled in English is M-A, “ma”. And “ma” is the stuff in between what we call objects. It’s the stuff we don’t see, because we’re focused on the noun-ness of things, rather than the process-ness of things, whereas Japanese has a more process/feel-oriented way of looking at how things relate to each other. You can always tell that by looking at the size of a word it takes to express something that is important. So “ma” is very short. We have to use words like “interstitial”, or worse, to approximate what the Japanese are talking about.

I’ve seen him use this analogy of the Japanese word “ma” before in more recent versions of this speech. What he kind of beats around the bush about here is that the objects themselves are not nearly as important as the messages that are sent between them. This is the “ma” he’s talking about–what is going on between the objects, their relationships to each other. To me, he’s also emphasizing architecture here. In a different version of this speech (I believe), he said that the true abstraction in OOP is in the messaging that goes on between objects, not the objects themselves.
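Ruby, which inherited Smalltalk's message-sending model, makes the "ma" fairly visible: a call is a message, and an object can intercept messages it has no method for. This wrapper that logs the traffic between objects is my own illustration, not something from the talk.

```ruby
# A sketch of the "messaging, not objects" idea: every call is a message
# send, and an object can intercept messages it doesn't have methods for.
class MessageLogger
  def initialize(target)
    @target = target
    @log = []
  end

  attr_reader :log

  # Every message sent to this object that it doesn't understand is
  # recorded (the "ma", the stuff in between) and forwarded on.
  def method_missing(name, *args, &block)
    @log << name
    @target.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

wrapped = MessageLogger.new([1, 2, 3])
total = wrapped.sum   # the message :sum passes through and is logged
```

The wrapper never cares what the target is; it only cares about the messages. That's the shift in emphasis I read Kay as arguing for.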

He continues:

So, the realization here–and it’s not possible to assign this realization to any particular person, because it was in the seeds of Sketchpad, and in the seeds of the Air Training Command file system, and in the seeds of Simula–and that is, that once you have encapsulated in such a way that there is an interface between the inside and the outside, it is possible to make an object act like anything, and the reason is simply this: that what you have encapsulated is a computer [my emphasis]. So you have done a powerful thing in computer science, which is to take the powerful thing you’re working on and not lose it by partitioning up your design space. This is the bug in data procedures–data and procedure languages. And I think this is the most pernicious thing about languages like C++ and Java, [which] is they think they are helping the programmer by looking as much like the old thing as possible, but in fact they’re hurting the programmer terribly, by making it difficult for the programmer to understand what’s really powerful about this new metaphor.

The “Air Training Command file system” refers to a way that data was stored on tapes back when Kay was in the Air Force, I think in the late 1950s or early 1960s. It was effective and efficient. Some have pointed to it as the first example Kay had seen of an object-oriented system. The first section of tape contained pointers to routines (probably hard addresses to memory locations, in those days) whose code was stored in a second section of the tape, which read/manipulated the data on the third section of the tape. This way, it didn’t matter what format the data came in, since that was abstracted by the routines in the second section. No one knows who came up with the scheme.
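As a sketch of how I understand the scheme, here's the three-section tape idea in Ruby: the data travels with routines that know its format, and a consumer only ever goes through the routine table. The record formats and routine names here are hypothetical.

```ruby
# A sketch of the tape scheme: the "tape" bundles its own read routines
# with its data, so a consumer never touches the raw format.
class Tape
  def initialize(raw, routines)
    @raw = raw            # section 3: the data, in whatever format
    @routines = routines  # section 2: code that knows that format
  end

  # Section 1: the "pointer table" -- look up a routine by name, run it.
  def call(routine_name, *args)
    @routines.fetch(routine_name).call(@raw, *args)
  end
end

# Two tapes with different record formats behind the same interface:
csv_tape = Tape.new(
  "alice,30\nbob,25",
  read_names: ->(raw) { raw.lines.map { |line| line.split(",").first } }
)
fixed_tape = Tape.new(
  "alice  30bob    25",
  read_names: ->(raw) { raw.scan(/([a-z]+)\s+\d+/).flatten }
)

names = csv_tape.call(:read_names)   # the caller never sees the format
```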

And again, people who were doing time-sharing systems had already figured this out as well. Butler Lampson’s thesis in 1965 was about–that what you want to give a person on a time-sharing system is something that is now called a virtual machine, which is not the same as what the Java VM is, but something that is as much like the physical computer as possible, but give one separately to everybody. Unix had that sense about it, and the biggest problem with that scheme is that a Unix process had an overhead of about 2,000 bytes just to have a process. And so it’s going to be difficult in Unix to let a Unix process just be the number 3. So you’d be going from 3 bits to a couple of thousand bytes, and you have this problem with scaling.

So a lot of the problem here is both deciding that the biological metaphor [my emphasis] is the one that is going to win out over the next 25 years or so, and then committing to it enough to get it so it can be practical at all of the levels of scale that we actually need. Then we have one trick we can do that biology doesn’t know how to do, which is we can take the DNA out of the cells, and that allows us to deal with cystic fibrosis much more easily than the way it’s done today. And systems do have cystic fibrosis, and some of you may know that cystic fibrosis today for some people is treated by infecting them with a virus, a modified cold virus, giving them a lung infection, but the defective gene for cystic fibrosis is in this cold virus, and the cold virus is too weak to actually destroy the lungs like pneumonia does, but it is strong enough to insert a copy of that gene in every cell in the lungs. And that is what does the trick. That’s a very complicated way of reprogramming an organism’s DNA once it has gotten started.

He really mixes metaphors here, going from “cystic fibrosis” in computing to cystic fibrosis in a biological system. I got confused by what he said about the gene therapy. Perhaps he meant that the virus carries the “non-defective gene”? I thought what the therapy did was substitute a good gene for the bad one, since cystic fibrosis is a genetic condition, but then, I don’t really know how this works.

When he talks about “taking the DNA out of the cell,” and dealing with “cystic fibrosis,” I’m pretty sure he’s talking about a quality of dynamic OOP systems (like Smalltalk) that work like Ivan Sutherland’s Sketchpad program from the 1960s, where each object instance was based on a master object. If you modified the master object, that change got reflected automatically in all of the object instances.

He brought up another slide that looks like a matrix (chart) of characteristics of some kind. I couldn’t see it too well. I recognized the subject though. He begins to talk about his conception of the semantic web, using the same biological and object-oriented metaphor:

So here’s one that is amazing to me, that we haven’t seen more of. For instance, one of the most amazing things to me, of people who have been trying to put OOP on the internet, is that I do not–I hope someone will come up afterwards and tell me of an exception to this–but I do not know of anybody yet who has realized that at the very least every object should have a URL, because what the heck are they if they aren’t these things. And I believe that every object on the internet should have an IP, because that represents much better what the actual abstractions are of physical hardware to the bits. So this is an early insight, that objects basically are like servers. And this notion of polymorphism, which used to be called generic procedures, is a way of thinking about classes of these servers. Everybody knows about that.

He brought up another slide, showing the picture of a building crane on one side, and a collection of biological cells on the other. More metaphors.

And here’s one that we haven’t really faced up to much yet, that now we’ll have to construct this stuff, and soon we’ll be required to grow it. So it’s very easy, for instance, to grow a baby 6 inches. They do it about 10 times in their life. You never have to take it down for maintenance. But if you try and grow a 747, you’re faced with an unbelievable problem, because it’s in this simple-minded mechanical world in which the only object has been to make the artifact in the first place, not to fix it, not to change it, not to let it live for 100 years.

So let me ask a question. I won’t take names, but how many people here still use a language that essentially forces you–the development system forces you to develop outside of the language [perhaps he means “outside the VM environment”?], compile and reload, and go, even if it’s fast, like Virtual Cafe (sic). How many here still do that? Let’s just see. Come on. Admit it. We can have a Texas tent meeting later. Yeah, so if you think about that, that cannot possibly be other than a dead end for building complex systems, where much of the building of complex systems is in part going to go to trying to understand what the possibilities for interoperability is with things that already exist.

Now, I just played a very minor part in the design of the ARPANet. I was one of 30 graduate students who went to systems design meetings to try and formulate design principles for the ARPANet, also about 30 years ago, and if you think about–the ARPANet of course became the internet–and from the time it started running, which is around 1969 or so, to this day, it has expanded by a factor of about 100 million. So that’s pretty good. Eight orders of magnitude. And as far as anybody can tell–I talked to Larry Roberts about this the other day–there’s not one physical atom in the internet today that was in the original ARPANet, and there is not one line of code in the internet today that was in the original ARPANet. Of course if we’d had IBM mainframes in the original ARPANet that wouldn’t have been true. So this is a system that has expanded by 100 million, and has changed every atom and every bit, and has never had to stop! That is the metaphor we absolutely must apply to what we think are smaller things. When we think programming is small, that’s why your programs are so big! . . .

There are going to be dozens and dozens–there almost already are–dozens and dozens of different object systems, all with very similar semantics, but with very different pragmatic details. And if you think of what a URL actually is, and if you think of what an HTTP message actually is, and if you think of what an object actually is, and if you think of what an object-oriented pointer actually is, I think it should be pretty clear that any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made. That’s the whole point of not being able to see inside. And so a semantic interoperability is possible almost immediately by simply taking that stance. So this is going to change, really, everything. And things like Java Beans and CORBA are not going to suffice, because at some point one is going to have to start really discovering what objects think they can do. And this is going to lead to a universal interface language, which is not a programming language, per se. It’s more like a prototyping language that allows an interchange of deep information about what objects think they can do. It allows objects to make experiments with other objects in a safe way to see how they respond to various messages. This is going to be a critical thing to automate in the next 10 years. . . .

I think what he meant by “any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made” is that any OOP language can abstract its pointers so that there’s no distinction between pointers to local objects and URLs to remote objects. This would make dealing with local and remote objects seamless, assuming you’re using a late-bound, message-passing metaphor throughout your system. The seamlessness would not be perfect in an early-bound system, because the compiler would ensure that all classes for which there are messages existed before runtime, but there’s no way to ensure that every object on the internet is always going to be available, and always at the same URL/address.

You might ask, “Why would you want that kind of ‘perfect’ seamlessness? Don’t you want that insurance of knowing that everything connects up the way it should?” My answer is, things change. One of the things Kay is arguing here is that our software needs to be adaptable to change, both in itself, and in the “outside world” of the internet. He uses the internet as an example of a system that has been very robust in the face of change. Early-bound systems have a tendency to push the programmer towards assuming that everything will connect up just as it is today, and that nothing will change.
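Here's one way to picture the "pointers to any object in the world" idea, sketched in Ruby. A reference holds either a local object or a URL and forwards every message the same way, so callers can't tell the difference. The URL, transport, and wire format are entirely my own invention, just to show the shape of it.

```ruby
require "json"
require "net/http"

# A reference that "internalizes" both local pointers and world pointers:
# callers send messages without knowing (or caring) which kind it holds.
class ObjectRef
  def initialize(local: nil, url: nil)
    @local = local
    @url = url
  end

  def method_missing(name, *args)
    if @local
      @local.public_send(name, *args)   # ordinary local message send
    else
      # Remote send: POST the message name and args to the object's URL.
      # (Hypothetical wire format -- JSON args, one path segment per message.)
      uri = URI("#{@url}/#{name}")
      resp = Net::HTTP.post(uri, JSON.generate(args),
                            "Content-Type" => "application/json")
      JSON.parse(resp.body)
    end
  end

  def respond_to_missing?(name, include_private = false)
    true   # in this sketch we optimistically accept any message
  end
end

# A local object behind the same interface a remote one would have:
ref = ObjectRef.new(local: [10, 20, 30])
total = ref.sum
```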

As I heard him talk about objects on the web, the first thing that came to mind was REST (the whole “each object should have a URL” idea). It seems like the first try at an implementation of this idea of objects “experimenting” and “figuring out” what other objects on the network can do was started with the idea of WSDL and UDDI, but that flopped, from what I hear. Microsoft has made a bit of a start with this, in my opinion, with WCF, but it only determines what type of object is at the other end, as in “Is it DCOM?”, “Is it an XML web service?”, etc. It frees the programmer from having to do that. It still allows the client program to interrogate the object, but it’s not going to do “experiments” on it for you. It may allow the client program to do them more easily than before. I don’t know.

Kay ends with a note of encouragement:

Look for the “blue” thoughts. And I was trying to think of a way–how could I stop this talk, because I’ll go on and on, and I remembered a story. I’m a pipe organist. Most pipe organists have a hero whose name was E. Power Biggs. He kind of revived the interest in the pipe organ, especially as it was played in the 17th and 18th centuries, and had a tremendous influence on all of us organists. And a good friend of mine was E. Power Biggs’s assistant for many years, back in the 40s and 50s. He’s in his 80s now. When we get him for dinner, we always get him to tell E. Power Biggs stories. The organ that E. Power Biggs had in those days for his broadcasts was a dinky little organ, neither fish nor fowl, in a small museum at Harvard, called the Busch-Reisinger Museum; but in fact all manner of music was played on it. And one day this assistant had to fill in for Biggs, and he asked Biggs, “What is the piece [to be] played?” And he said, “Well I had programmed César Franck’s Pièce héroïque”. And if you know this piece it is made for the largest organs that have ever been made, the loudest organs that have ever been made, in the largest cathedrals that have ever been made, because it’s a 19th century symphonic type organ work. And Biggs was asking my friend to play this on this dinky little organ, and he said, “How can I play it on this?”, and Biggsy said, “Just play it grand! Just play it grand!” And the way to stay with the future as it moves, is to always play your systems more grand than they seem to be right now. Thank you.

21 thoughts on “Redefining computing, Part 2”

  1. Mark –

    No idea what odd brain wavelength we’re both on today… I was reading an article about LINQ today (BTW, I am increasingly down on it, not a fan of having app writers sticking dynamic SQL in their code), and a thunderbolt hit me, catalyzed by the LINQ article and Part 1 of this. An OO/functional language with a LOT more emphasis on the OO than most functional languages that support OO have. Essentially, just as functional languages have labels instead of variables, functions, etc., have an OO language where all class members are labels/lambda expressions. It would actually be relatively easy to implement this in C# 3.0.

    It would be insanely fast to have what you wrote about a few weeks ago, a “programming medium” like this that smoothly transitions from prototype to real code, because you lay out your classes at once to get the architecture, hard-code their functionality up front to prototype, and slowly fill in the right code to bring it to fruition, with no changes needed above or below in the code.

    Just a thought I had. Would it be slow? Probably. But I think development in it would be insanely fast.


  2. @Justin:

    Re: Linq introducing dynamic queries

    I’ve been hearing this complaint from others as well. It’s caused their opinion of Linq to sour. I can’t remember the reason why. How is it any different from the old method of the programmer creating Command or Adapter objects with select statements in them? Those were defined in strings, which are dynamic queries. Maybe the problem is that Linq doesn’t support stored procedures yet? I think that was one of the objections.

    Re: lambdas in member variables, late binding, etc.

    You can have lambdas (in effect) now, using anonymous methods in C# 2.0. The only difference is the return value from them is strongly typed. Maybe that’s what you’re trying to avoid. C# doesn’t have “duck typing” yet. Anonymous methods are more wordy than lambdas, but they work the same way. So maybe you could have what you want right now. You’d have lazy evaluation, and I guess you’d get late binding due to the new way of deploying ASP.Net apps, where you just put the source code of the code-behinds in the directory, right?

    As I read your description of the language you imagined, the ones that came to mind were Smalltalk and Ruby. Both have functional language features (they both support lambdas natively), but they’re primarily OO. Maybe Ruby has the same late binding model as Smalltalk in the sense that you can modify objects while the program is running. Not sure though.

    I’m curious what having lambdas for all member variables would get you though. Even if you were going for a dynamic language quality (“duck typing”), method calls on any objects you have are still statically bound to them. It’s not as if you can change types on your method calls and expect them to still work unless the new types conform to the same interface.
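    For what it’s worth, here’s roughly what I picture your “all class members are lambdas” idea looking like, sketched in Ruby rather than C#. The PriceService class and its behavior are hypothetical; the point is that a stubbed prototype body can be swapped for real code at runtime without touching callers.

```ruby
# Class members held as lambdas: a hard-coded prototype body can be
# replaced by the real implementation at runtime, callers unchanged.
class PriceService            # hypothetical example class
  def initialize
    # Prototype phase: hard-code the behavior in a lambda member.
    @price_for = ->(sku) { 9.99 }
  end

  attr_writer :price_for

  def price(sku)
    @price_for.call(sku)      # callers always go through the lambda
  end
end

svc = PriceService.new
prototype_price = svc.price("ABC")         # stubbed answer

# Later, "fill in the right code" without touching callers:
svc.price_for = ->(sku) { sku.length * 2.5 }
real_price = svc.price("ABC")
```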

  3. Mark –

    On the LINQ issue, there are a number of items at play here. I do not want developers using “Command” or “Adapter” classes either!

    * Any changes at the DB level require a rewrite, retest, recompile, redeploy, and so on.
    * Most “programmers” produce pretty bad SQL. Leave it to a DBA, who knows what they are doing. This is primarily because programmers do not understand that SQL is a declarative language, and try writing it like OO or procedural code.
    * If LINQ could use stored procedures, it would defeat much of the purpose of LINQ, IMHO.
    * Separation of powers and responsibilities. I like to have the DB do all of the work at that level, so this way, work does not need to be duplicated when someone else’s code accesses the database too; they just use the same stored procedures.
    * As a security model, writing tightly defined stored procs, and giving the apps only enough permissions to run them, is much more secure than giving full read/write access to the entire database to an application, particularly apps that run as anonymous users.
    * Debugging is much easier; with stored procs, the code is centralized.
    * Code writing enforcement. With dynamic queries, there is always the temptation to bypass the standards, and instead of asking the person who maintains the data objects to add a needed function, to just slap it in yourself. As a result, the data objects quickly lose status as the sole keeper of the keys and rogue SQL gets sprinkled all throughout the DB.

    On the other stuff, I will get back later tonight on it, I need to get going!


  4. @Justin:

    I guess I should’ve said: what’s the difference between using Linq and using Command or Adapter objects (under .Net 1.1)? I’m sure somebody had to do that. I’m not sure how that works in an organization like yours though. I haven’t worked at a place where the developers weren’t skilled enough to handle SQL, much less write reasonably tight code.

    I had heard of people working at places where the DBAs wouldn’t allow you to write your own SQL. They’d do it for you. I was shocked the first time I heard that. I just assumed all developers would be able to do that sufficiently themselves. SQL is not that hard to pick up, but it is good to have some set theory under your belt.

    The one time I ran into a situation where a query I wrote ran too slowly was because the client hadn’t indexed a column. I advised them on which one needed to be indexed. Problem solved.

    You could still use Linq for collection filtering. In other words, fill a collection with business objects, and then databind only to a filtered set for a particular function, use case, or screen. I imagine these would be special cases. No dynamic SQL involved.
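    The same kind of in-memory filtering, sketched in Ruby since that’s what we’ve been comparing (the Order class here is invented, purely for illustration):

```ruby
# Hypothetical business object, just for illustration.
Order = Struct.new(:id, :total, :shipped)

orders = [
  Order.new(1, 50.0, true),
  Order.new(2, 200.0, false),
  Order.new(3, 125.0, true)
]

# Filter the already-loaded collection for one screen or use case --
# no SQL, dynamic or otherwise, is involved.
shipped_over_100 = orders.select { |o| o.shipped && o.total > 100 }
puts shipped_over_100.map(&:id).inspect  # prints [3]
```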

  5. Pingback: Top Posts «

  6. Mark-

    Another point about LINQ, which I forgot earlier: it is great for a DAL, but if you bring the SQL code (via dynamic queries or LINQ) into the app layer, now every developer working on the code needs to understand the database design. If you create a DAL, then the developers just need to know how to access that. It also makes changing the database less painful for the programmers.

    Regarding that language idea that I had, yes, duck typing was one side benefit of it; the other was to treat the whole code tree as a bunch of nested hashtables. This would allow the attachment of metadata to everything. Imagine if, for example, exception handling was a defined attribute on a piece of code, rather than code itself? It would also allow the code to be represented (finally!) as something other than meaningless, structureless (other than whitespace and block delimiters) plain text.

    At this point, one of my biggest frustrations with coding is the plain text paradigm. IDEs need to become applications in and of themselves. Programming is far and away the computing task with the poorest data format (in terms of it lacking functionality) out there. I do not think we will EVER achieve a “programming medium” while we are still pushing ASCII text around in text editors that are hyped up to comprehend the difficult concepts we are trying to express. Why not have a language where the source itself is a better data format? The idea of a data tree containing lambda expressions would be the cornerstone of a system in which the entire concept shifts from “typing letters on a keyboard to explicitly spell out precise instructions” to “using textual directions to give precision to broad logic and instructions contained within metadata.”
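    A toy sketch of that idea in Ruby (all the names here are invented for illustration): the “source” is a tree of hashes, the leaves are lambdas, and every node can carry metadata as plain data.

```ruby
# Each node is a hash: :meta holds attached metadata (e.g. an
# exception-handling policy declared as data, not code), and :body
# holds either child nodes or a lambda.
program = {
  meta: { on_error: :log_and_continue },
  body: {
    greet: {
      meta: { author: "justin" },
      body: lambda { |name| "hello, #{name}" }
    }
  }
}

# The "source" can be walked and queried like any data structure,
# instead of being parsed out of plain text.
node = program[:body][:greet]
puts node[:meta][:author]       # prints justin
puts node[:body].call("world")  # prints hello, world
```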

    Just a thought. 🙂

    On a side note, typing into this box is now dropping every 5th character or so, it is extremely painful. I even rebooted my PC. I hate WordPress more and more every day, I am sure that it is due to some Web 2.0 nonsense, since my keyboard works fine in every other app and browser tab…


  7. @Justin:

    Re: lambda expressions

    I understand why you would want to program in a functional way. I just had trouble picturing what you were talking about with lambda expressions in each member variable of your classes. Would it be like lambda calculus, where these functions would represent values? Would they be there to substitute for methods? What does an OO language add to this scheme?

    Re: WordPress losing characters

    Hmm. I am not having that problem. Haven’t run into it at any point in the past either. I’ve looked at the source for this page, and the comment editor is just a text area. There’s some Javascript on the page, but it has to do with other stuff. Maybe it got into an infinite loop somehow or tried to access something on the server but couldn’t get through, tying it up. WordPress does occasionally change stuff, and sometimes it makes the blog more buggy than it used to be.

  8. Why OO? Well, mostly because OO’s big advantages actually lie outside of the realm of the code itself. Architecturally and organizationally, OO is a winner. It is easier to work with once the team expands beyond a few people. Humans seem to have a good amount of stack space and low amounts of heap space: we can trace and write individual trains of functionality, but it is difficult for us to follow the “big picture” of code. OO is the best organizational model out there to help with that. My disdain of mainstream OO languages has more to do with the languages themselves than with the concept. For planning a project, OO works well too.

    WP is still dropping characters as I type, it is indeed truly bizarre. I saw nothing in the source to indicate why either…


  9. @Loz:

    Ah. Must be one of those words I consistently misspell. Thanks anyway. I corrected it. Apparently the blog spell checker doesn’t work. Or maybe it’s the browser I’m using. I use a browser-based editor to write the posts.

  10. @Justin:

    I wasn’t exactly asking “why OO?” More to the point, I was asking “Why lambdas in all of your member variables?” Don’t you want some scalar values, or are you just thinking of going completely all-out functional inside of classes? 🙂

    What would these lambdas represent? Methods?

    Re: you losing characters in the comments editor

    What browser are you using? I was going to lodge a complaint with WordPress about it, but I figured I should give them something to work with.

  11. I want a 100% functional language within my classes, mainly because I think that the speed of development, late binding, and “edit in place” will be of huge help. I envision a debugger that can tap into the running image and edit it in place, for no-downtime deployments and in-memory patching. .Net is better than J2EE at patching/deployments, but they are both a pretty unpleasant experience.
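    Ruby, which came up earlier in the thread, can at least gesture at that “edit the running image” idea (a minimal sketch of mine, not a real deployment story): a method can be redefined while the program runs, and existing objects pick up the new definition immediately.

```ruby
class Service
  def version; "v1"; end
end

s = Service.new
puts s.version  # prints v1

# "Patch" the running object model in place: reopen the live class
# and redefine the method. The existing instance s sees the new
# behavior immediately -- no restart, no redeploy.
class Service
  def version; "v2"; end
end

puts s.version  # prints v2
```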

    Oddly, the character dropping seems specific to this one thread… I almost wonder if the Google video widget is the culprit?


  12. hi Mark,

    Thanks for a very interesting post. I like reading about Alan Kay’s ideas.

    I liked your early comments about the need for the big-picture overview of why we should learn OOP – and the detailed ongoing elaboration of the biological metaphor.

    If you haven’t seen it, Alan Kay talks more about the origins in ‘The Early History of Smalltalk’ – you can get the pdf from:

    He talks about initially barely seeing the pattern; then it assumes cosmic significance, and then eventually turns into an inflexible religion.

  13. @Bill:

    Glad you liked it. It makes a little more sense if you read Part 1 of this as well. In this part I was segueing from talking about science disciplines (like physics) integrating computer science, and decided to show that CS can learn something from science as well. In this case molecular biology. Then it occurred to me: that’s what Alan Kay figured out. Biological systems are computation in action, just using an analog chemical medium. The symmetry is beautiful to me.

    I’m pretty sure I have seen this “Early History of Smalltalk” paper you speak of. I referred to an online paper by that name in one of my first posts on this blog, “Great moments in modern computer history”. I look back at that post from time to time, and I still get a small sense of awe from it. It was a celebration of the fact that for the first time, these crucial events in modern computer history could be seen by all. I had heard about them through magazines, documentaries, and courses I took, but I didn’t see most of them whole. I just saw bits and pieces of them. Media access wasn’t what it is today. To my great delight I was finding these videos online. So I decided to put links to them all in one place. It’s still a powerful story.

    I didn’t read the whole history of Smalltalk paper, but I don’t think I understood enough at the time to find it totally interesting. I was mainly after the history of the GUI, though my interest in revisiting Smalltalk, the language and the system, was just beginning then. What you say about the paper is interesting. I’ll take another look at it.

    BTW, I checked out the old link I had to the Smalltalk history paper in my “great moments” post and it’s gone stale, so I substituted the link to the PDF version of it that came from the abstract you pointed me to. So thanks! 🙂

  14. Pingback: The beauty of mathematics denied « Tekkie

  15. Pingback: SICP: What is meant by “data”? « Tekkie

  16. Pingback: Exploring the meaning of Tron Legacy « Tekkie

  17. Pingback: The 150th post « Tekkie

  18. Thanks Mark. I came to read this post from a link posted in the computer science Google+ group, and I am glad I read it and liked it.
    I have pursued this question of the foundations of computing for a while in my own humble way – not being associated with any institution, I have over 25 years of software development and research as my background. I have made some small notes about what I feel it should be; I would welcome anyone interested in those issues to read and comment.

    The concept is mainly that computation can be looked upon as a plain relation-building process between words, with multiple categories or levels reflecting our own emotions about how we perceive and react to the world around us, or to the various “scenes” we find ourselves in as we grow up.

    In this view, I find there are only five basic instincts in computing that can act as the foundation on which any complex system can be built, around a simple concept of a dynamic dictionary for every one of us.

    Here is the note about the basic instincts in computing:

    Here is the link to the Google+ group where you can read those notes; please join the group if you get a good vibe (:

    All my articles are here:

