Yep, it’s McCarthy. Seriously, somebody should make a poster out of it and sell it, optionally framed. It would make a nice counterpoint to those inspirational posters they put up in offices. At a place I worked 6 years ago they had about four of them. I understand the benefits of positive thinking, but these posters never did it for me. If I’m going to think positively about something it needs to be more concrete than a one-sentence message about courage, etc.
Management pep rallies never did that much for me, either. We had a bit of that where I worked, though we didn’t get carried away with it. Nevertheless it didn’t help with employee morale. Middle management was pumped, I guess. It didn’t help us succeed. Within a year most of the employees in my division were laid off, including me. It’s what was happening at a lot of companies then. Just goes to show that inspiration can’t substitute for insight and perception.
Huh. I never saw refactoring that way. I thought it was to make code clearer, more maintainable or flexible. This isn’t in the same vein though.
The only other people I can think of who would fit well with the message, besides Dijkstra, are Richard Gabriel and Alan Kay, though Kay’s mug wouldn’t fit well with it. Every picture I’ve seen of him has this nice friendly smile. 🙂
That’s true about McCarthy… to him, “programming” seems to have been a distraction from “computing”. Alan Kay seems to believe the same thing.
Refactoring is defined the way you describe it, but the process plays out the way I describe it: you’re rewriting something that was already “done”, yet it becomes much better (“complete”) in the process!
On that note, back to “refactoring” the TechRepublic article I am working on (“10 Things that let you know you aren’t cut out to be a developer”) and then after that, to rejoin the parent process that I forked out of earlier to finish the refactoring + multithreading of a little project. 🙂 For whatever reason, it never occurred to me that an image that is 16,000 x 12,000 pixels and has 32 bits per pixel would take up an insane amount of RAM… doh!
I think a more accurate way of saying it is they think of programming as a description of computation directed toward a particular goal.
Paul Murphy had an article on device computing not too long ago, and it occurred to me that maybe this is what Alan Kay has been driving at. You could think of “device computing” as “specialized computation”.
An idea that’s been creeping into my head is that when Kay is talking about using an object system for development, he’s saying that when we have a system in mind we should create a “universe” for that system to exist in. You can think of it in cosmological terms, about all of the substances and forces that make what we see possible. So rather than thinking linearly of getting from point A to point B, which creates a specialized and inflexible system, instead focus on “carving up nature at its joints”, decomposing the system into elemental objects, which have their own unique responsibilities, and think of how those objects should interact. Then build up the system you want out of that. By doing this you have effectively created your own computing system.
The part I still don’t quite get is when Kay says that each object is a computer in and of itself. I guess what this enables is the ability to have objects within objects. So not only do objects exist and interact within the larger system, they can contain and run their own little systems as well. The image that comes to mind is “Powers of Ten”–worlds within worlds, to infinity.
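Here’s a toy sketch of what I mean, in Python rather than Smalltalk (all the names here are my own invention, not from any real system): each object handles the messages sent to it, and an object can itself be a little “universe” that forwards messages to objects inside it.

```python
# Toy message-passing sketch: objects respond to messages, and an object
# can contain a whole sub-system of other objects ("worlds within worlds").

class Account:
    """A leaf object: responds to 'deposit' and 'balance' messages."""
    def __init__(self):
        self._balance = 0

    def receive(self, message, *args):
        if message == "deposit":
            self._balance += args[0]
            return self
        if message == "balance":
            return self._balance
        raise ValueError(f"doesNotUnderstand: {message}")


class Bank:
    """A container object: a little universe of Account objects.
    It only knows how to route messages, not how accounts work."""
    def __init__(self):
        self._accounts = {}

    def receive(self, message, *args):
        if message == "open":
            self._accounts[args[0]] = Account()
            return self
        if message == "send":  # forward a message to an inner object
            name, inner_msg, *rest = args
            return self._accounts[name].receive(inner_msg, *rest)
        raise ValueError(f"doesNotUnderstand: {message}")


bank = Bank()
bank.receive("open", "alice")
bank.receive("send", "alice", "deposit", 100)
print(bank.receive("send", "alice", "balance"))  # prints 100
```

The point of the sketch is that `Bank` doesn’t reach into `Account`’s state; it just passes messages along, which is (as far as I understand it) the sense in which each object is its own computer.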
The reason I thought this related to device computing is Kay has often said he wanted Smalltalk to be used as a bootstrapping system. So you could effectively customize the entire environment: the VM, the compiler, the way objects interact, the whole thing. You can describe this new system entirely in terms of objects and messages inside Smalltalk, and then bootstrap this description into its own working system. This is a known technique in other languages, such as C. For example there are C and C++ compilers written in C. There are whole operating systems written in it. The same technique is used for them, using cross-compilers. So you could in effect create your own customized computing environment whose description and semantics are completely relevant to the domain you’re working in. The goal being that you end up with a system where you’re not thinking in terms of “How do I get this system to do what I want?”, but rather, “How do I describe my intent using formal logic?”, and you’re able to put your own stamp on what that formal logic looks like. You are in effect building your own abstractions, your own description of computation, and then building your system out of them.
I think what Kay has been trying to lead people away from is this idea that there should be one general description of computing, or just a few, and every solution to a problem to which a computer can be applied should be adapted to these descriptions. Instead he’d like to see people think about the problem and the solution on its own terms, and bring computing to it. So when you’re programming the computer, you are programming in the domain you’re working with. This relates directly to the idea of DSLs. Embedded DSLs are a compromise. With them you are still programming in a language that provides a general description of computation, but you’re relabeling stuff so it’s more semantically relevant.
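To make the “relabeling” point concrete, here’s a tiny embedded-DSL sketch in Python (the domain and all the names are hypothetical): underneath it’s still a general-purpose language, but the vocabulary has been relabeled so the code reads in the terms of the domain.

```python
# A minimal embedded DSL: ordinary Python classes and methods, relabeled
# so that building a recipe reads almost like the domain language itself.

class Recipe:
    def __init__(self, name):
        self.name = name
        self.steps = []

    def add(self, ingredient, amount):
        self.steps.append(f"add {amount} of {ingredient}")
        return self  # returning self lets the calls chain like prose

    def bake(self, minutes):
        self.steps.append(f"bake for {minutes} minutes")
        return self


bread = (Recipe("bread")
         .add("flour", "500 g")
         .add("water", "300 ml")
         .bake(40))
print(bread.steps)
```

It’s still Python’s general description of computation doing the work, which is exactly the compromise: the semantics are domain-flavored, but you haven’t built a new computing system underneath.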
I think a problem that Kay was trying to solve with what’s been called “OOP”, though I’ll call it “MOP”, is the problem of context in computing. We humans think and communicate in terms of context. Computers have a difficult time with this. Whenever I’ve been involved with discussions on programming language design, we talk in terms of context-free grammars. This way nothing, as far as the computer is concerned, is implied. Everything can at least be deduced deterministically. What MOP allows you to do is set up some deterministic structures that provide some context to what you’re writing. So it doesn’t look completely like English, but it comes closer than other programming systems.
Kay is currently working on a project to try to create a complete end-user system within 20KLOC. That’s the NSF project (PDF) I told you about. He’s said that Squeak took about 200KLOC. He hasn’t finished this new project, so it’s yet to be seen if he and his fellow researchers can pull it off. I think the problem he’s trying to solve here is this whole idea of creating your own semantically relevant system feels too daunting to most people. So by shrinking the amount of code necessary to do it, it’ll seem more feasible.
Re: the graphics
Yeah, I’ve found it’s prudent to do some math when working with large bitmaps. As long as you have 1 GB+ of RAM to spare you should have no problem with that graphic. 🙂
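For the record, the back-of-the-envelope math on that bitmap works out like this: 16,000 × 12,000 pixels at 32 bits (4 bytes) per pixel, uncompressed in RAM.

```python
# Memory footprint of an uncompressed 16,000 x 12,000 image at 32 bpp.
width, height, bytes_per_pixel = 16_000, 12_000, 4

total_bytes = width * height * bytes_per_pixel
print(total_bytes)            # 768000000
print(total_bytes / 2**20)    # 732.421875 (MiB)
```

So it’s roughly 732 MiB for the pixel data alone, which is why having 1 GB+ free makes it workable.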