Learning Squeak is easier now

If you’re a software developer and you’ve wanted to learn Squeak, it’s been a struggle for a while now.

Edit 4-10-2017: Re. the paragraph below on Guzdial’s books, I now recommend “Squeak: Object-Oriented Design with Multimedia Applications” with some caution, because the Mac version of Squeak 2.8 no longer runs on modern Macs. In fact, it hasn’t worked on a Mac since OS X Lion (version 10.7), when Apple got rid of Rosetta. The Windows version of 2.8 may still work on a modern version of Windows. If you’re a Mac user, you can run Windows inside a virtual machine, like VirtualBox, or a commercial VM. The Windows version of 2.8 might also work with Wine (a multi-platform, open source package that allows you to run some Windows programs in OS X or Linux), but I haven’t tried it.

It used to be possible to get Squeak 2.8 online, but I’m not sure if that’s possible anymore. Many of the examples in the book will probably still work on a modern Squeak version, since they just deal with basic Smalltalk features, but some examples will likely no longer work.

In 2001 and 2002 Mark Guzdial wrote two books on Squeak: Squeak: Object-Oriented Design with Multimedia Applications, and Squeak: Open Personal Computing and Multimedia (co-authored with Kim Rose). These are the two books I found that approach Squeak from a system/programmer perspective. They’re both out of print, but last I checked they were still available from Amazon.com. A lot of the knowledge in these two books is still relevant to Squeak, but they are in a sense out of date: they were written for earlier versions, some of the features Guzdial talks about are either broken now or non-existent in the current version, and there are tools in common use in Squeak today that didn’t exist when he wrote them. By the way, if you get these books I strongly recommend that you get Squeak version 2.8, because it’s the version the books talk about. If you can, you might want to get the CD with the book (some of the used editions don’t have it); it has version 2.8 on it.

Edit 10-19-2011: Just for clarification, I’ll add that “Squeak: Object-Oriented Design with Multimedia Applications” is a book you can get if you’re trying to learn how to use Squeak, and learn the Smalltalk language. It guides you through the features of the system, and contains exercises to help you learn what you can do in it. “Squeak: Open Personal Computing and Multimedia” is more of a philosophical book, talking about why Squeak was created, and what it represents in the world.

Just today (Sept. 14, 2007) I heard about a new book that’s been published called “Squeak By Example”, written by Andrew Black, Stephane Ducasse, Oscar Nierstrasz, Damien Pollet, Damien Cassou, and Marcus Denker. It’s available as a free PDF. The book is released under the Creative Commons Attribution-ShareAlike license. It is also open source.

This is a developer-focused book. If you’re looking for a book that looks at Squeak from a child’s/educator’s perspective, I’d recommend any of the other books on Squeak, such as Squeak: Programming With Robots, by Stephane Ducasse.

I’ve done a quick look-through of “Squeak By Example,” and my own assessment is that it teaches you about the system features, from both a UI and a programmatic perspective. While there is plenty of Smalltalk code in it, it’s not intended as a book for learning the Smalltalk language. For that, the authors suggest you take a look at the collection of free books on Smalltalk that are available online. As far as I know those books are out of print, but they’re still useful. As an introduction I’d suggest “I Can Read C++ and Java But I Can’t Read Smalltalk” (PDF), by Wilf LaLonde.

As I’ve said before, there are tools you can run inside of Squeak. “Squeak by Example” discusses many of the ones developers will need to know about. Each tool also has an object model that is accessible from inside your own code, which is also discussed.

It covers the system, or kernel, objects, such as collections, streams, and the meta-object model. It covers message passing, the mechanism by which you, and your objects, communicate with other objects.
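To give a flavor of what that looks like, here’s a small sketch you could evaluate in a Workspace (the class and message names are standard Squeak kernel fare; everything here happens by sending messages):

| numbers stream |
numbers := OrderedCollection withAll: #(3 1 2).
numbers add: 4.                                          "grow the collection by sending add:"
stream := WriteStream on: String new.
stream nextPutAll: 'Sorted: '; print: numbers asSortedCollection.
Transcript show: stream contents; cr.                    "write the result to the Transcript"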

It teaches the basics of how to use Squeak in a nice level of detail. One of the things I haven’t liked about most of the Squeak tutorials out there is that they skip the basic, beginner-level knowledge someone new to Squeak needs: how to bring up menus, what menus are available and what you can do with them, not to mention how to install and run the system (though that part is pretty easy). This book lets you ease into it, showing you how to install Squeak and how to use the mouse with it.

I’ve been itching to recommend this tutorial, done by Stephan Wessels, but I haven’t because it skips the beginner basics. Now I can, since you can learn these basics elsewhere. He takes you step by step through a complete development example, creating a “laser” game in Morphic, a graphical objects/UI framework in Squeak.

Just as a side note, when I first saw Stephan’s tutorial it reminded me of Laser Chess, by Mike Duppong, a game that was originally published in Compute!’s Atari ST Disk & Magazine 20 years ago (the multi-platform Compute! Magazine published ports of it for Commodore, Atari 8-bit, and Apple II computers).

Anyway, enough reminiscing. Here are some other tutorials on the Squeak Wiki. Like I said, they typically don’t cover the beginner basics, but once you know them, you’ll probably find other useful information here.

For you web developers out there, once you learn the stuff I mentioned above, you can move on to the Seaside tutorials.

Go forth, and enjoy Squeak!

—Mark Miller, https://tekkie.wordpress.com

Dolphin Smalltalk is discontinued

I learned this past weekend, via a posting on James Robertson’s blog, that the Professional edition of Dolphin Smalltalk has been discontinued. The free version will continue to be available. According to Object Arts, the costs outpaced the profits.

When I used to see discussions on blogs about Seaside, often the subject of Squeak would come up, since that was the first platform Seaside appeared on. Most commenters didn’t like Squeak much, but they raved about Dolphin Smalltalk. Incidentally I heard about a month ago that Seaside had been ported to Dolphin, and VisualWorks (which is sold by Cincom).

Dolphin was a Windows-only implementation. People described it as having a nice native look and feel on Windows, and a consistent UI. It did not run a bytecode-executing VM, unlike most other Smalltalk implementations. Instead it was an interpreter.

It’s sad to see it go. I imagine that this will drive some people away from Smalltalk, since some considered Dolphin the best of the lot.

One of the comments in the Object Arts announcement that jumped out at me was:

There will no doubt be a number of you who would suggest that we Open Source Dolphin. Of course, you are free [to] harbour such opinions and to discuss the idea on the newsgroup but please do not expect us to be persuaded. It simply will not happen! Both Blair and I dislike the Open Source movement intensely and we would rather see Dolphin gradually disappear into the sands of time than instantly lose all commercial value in one fell swoop.

My emphasis. Why the hostility towards open source? Andy Bower follows with a possible answer:

The best, and probably only, way in which the future of Dolphin could be assured would be a sale of the assets to another company. Whilst we are not actively seeking buyers, serious negotiations can be started by writing to me at my e-mail address.

Well, if it does get bought, that would be a relief, because then it would have a new life. One can hope. Personally, I think it would have a better chance at a new life if it were open sourced. It would have the attraction of being free (as in beer), in addition to being a good development environment. It would be a waste of the effort that went into porting Seaside to Dolphin if it faded into oblivion.

There are other commercial implementations out there. There’s VisualWorks, as I mentioned (also known as VW Smalltalk), and VA Smalltalk which is currently sold by Instantiations, Inc. These are the major ones I’ve heard about often, but there are several others.

This event with Dolphin should not be taken as a sign of decline for the Smalltalk realm. As James Robertson over at Cincom often points out they are in no danger of dropping Smalltalk. It remains solidly profitable for them. I wish I could comment on the health of VA Smalltalk, but I have no source of information on that.

A history of VisualWorks (VW) and VisualAge (VA) Smalltalk

Both VisualWorks and VisualAge Smalltalk have interesting histories.

From what I’ve been able to gather so far in my research, VisualWorks is a direct descendant of the Smalltalk-80 developed at Xerox PARC, much as Squeak is also a descendant (through Apple Computer’s licensing of ST-80). I’m not clear on the distinction between the two, but apparently there is one. My recollection is a bit fuzzy, but it sounds to me like each was based on a different revision of ST-80.

I based the following history on three sources. Update 5-24-2013: VisualWorks was originally called ObjectWorks, and was introduced in 1991 by ParcPlace Systems, a spin-off of Xerox. ParcPlace and another company, by the name of Digitalk, were the two largest Smalltalk companies in the world at the time (from what I hear). Digitalk had introduced the first commercial implementation of Smalltalk, called Methods, in 1983, which eventually became Visual Smalltalk. ParcPlace and Digitalk decided to merge in 1995 and form ObjectShare, Inc. ObjectWorks was renamed VisualWorks.

As is often the case with mergers, the two corporate cultures were incompatible, and the company imploded in 1997. From what I’ve come to understand, this implosion led to “the death of Smalltalk,” as it’s often called. One of the Answers.com sources I found cited a few technical issues that turned customers off to it. Others blame the current state of Smalltalk on Java’s success (here’s the video Steve Yegge refers to; his link is broken). The impression I’ve gotten from reading the stories of others who were around then, though no one has directly put their finger on it, is that one of the conflicts at ObjectShare was over whether to adapt Smalltalk to the internet, and how to go about it. Java was seen as “made for the internet”. I think that was a key differentiator. Stephan Wessels blames the decline on the corporate failure. It sounds to me like both theories are right.

As is evident with Seaside, Smalltalk can deal with the internet quite well.

Anyway, Cincom bought the Smalltalk intellectual property from ObjectShare in 1999, making it part of its ObjectStudio package, which it still sells today.

According to Answers.com, VisualAge Smalltalk was the first VisualAge product from IBM, released in the late 1980s/early 1990s. In the 1990s other languages were added to the VisualAge line, like C++ and Java. According to Wikipedia, no matter which VisualAge programming language the programmer was using, the development environment for it was always written in Smalltalk. VisualAge for Java actually ran Java programs on a hybrid VM that executed both Smalltalk and Java bytecodes. The Java primitives were written in Smalltalk.

It looks like IBM stopped supporting VisualAge Smalltalk in April 2006; Instantiations, Inc. had already taken over product development completely in 2004. According to Wikipedia it is currently licensed by IBM to Instantiations, and has been renamed VA Smalltalk. In fact IBM no longer sells any VisualAge products for any language. They’ve been replaced by the WebSphere line, or the XL compiler line.

So there are still versions of Smalltalk around with a long heritage and a visible user base. I have no idea what they’re like since I haven’t used them. I found Squeak and I’ve been quite happy with it.

Disclaimer: I do not work for any of the companies I have mentioned above. I have no sales relationship with them. I’m just writing this to inform about the goings on in the Smalltalk community.

Note: Since it’s come to my attention that some of my posts are being pilfered by a “web search honeypot” trying to siphon some ad revenue from Google, I’m going to try signing my posts from now on so at least people can accurately determine authorship.

—Mark Miller, https://tekkie.wordpress.com

Straying into dangerous territory

This is the 2nd part to my previous post, Squeak is like an operating system.

When programming in Smalltalk, it’s easy to get lulled into the notion that you can use it like a programming language that is separate from the OS, that uses libraries, and is only linked in by a loader. Indeed Smalltalk is a programming language, but more in the sense that the Bourne shell is a programming language on Unix, and you’re using it logged in as “root”. That’s a bit of a bad analogy, because I don’t mean to imply that Smalltalk is a scripting language that uses terse symbols, but the power you have over the system (in this case, the Squeak image) is the same.

In Unix, you manipulate processes and files. In Squeak, you manipulate objects. Everything is an object.

I had an interesting “beginner root” experience with Squeak recently, and it highlights a difference between Ruby (file-based) and Smalltalk (image-based). Some might judge it in Ruby’s favor. I was doing some experimenting in a Workspace. To further the analogy, think of a Workspace as a command line shell instance in X-Windows. I had been using a variable called “duration” in my code. It was late. I was tired (really interested in what I was doing), and I got to a point where I didn’t need this variable. Cognizant of the fact that in certain workspaces variables hang around, I decided I wanted to get rid of the object it was holding, but I was a bit spacey and couldn’t remember the exact spelling of it. I had already changed the code that used it. So I evaluated:

duration := nil

and

Duration := nil

just to be sure I got both spellings. At first nothing weird happened, but then I tried to save the image and quit.

I had forgotten that 1) all variables in Smalltalk begin in lower-case, and 2) “Duration” (with the capital D) is the name of a class. Smalltalk is case-sensitive. Instead of saving, I got an exception dialog saying “Message Not Understood”. I found the source of the error, and for some mysterious reason, a message being passed to the Duration class seemed to be causing the error. I inspected “Duration” in the code and Smalltalk said “Unknown object”. Alright. So it wasn’t the message. Still, I thought, “Huh. Wonder what that’s about?” I ended up just quitting without saving. I realized the next day the reason I got the error is I had deleted the Duration class from the image (in memory, not on disk)! I confirmed this by trying it again, bringing up a code browser and trying to search for the Duration class. Nothing. I thought, “Wow. That’s pretty dangerous,” just being able to make a spelling error and delete a critical class without warning.
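If I understand the mechanism correctly, a capitalized name like Duration resolves to an entry in the Smalltalk system dictionary, so my typo amounted to something like the first line below (don’t evaluate it in an image you care about!):

Smalltalk at: #Duration put: nil.        "clobbers the global binding for the class"
Smalltalk includesKey: #Duration.        "a safer probe: answers whether the binding still exists"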

What happened?

Say you’re using Unix, and you’re at the command line, and there’s a critical executable that Unix uses to calculate the passage of time, called Duration (I know this would normally involve using a time structure in C and some API calls, but bear with me). What I did would be the equivalent to saying, “mv Duration /dev/null”, and when the system tried to shut down, it couldn’t find it, and aborted the shutdown. I know this sounds weird, because I just said the image on disk was unaltered. Anything that is actively available as an object in Squeak is in memory, not on disk. The disk image is roughly equivalent to the file that’s created when you put your computer into hibernation. It’s just a byte for byte copy of what was in memory.

Unix treats directories/files, processes, and shell variables differently. It treats them each as separate “species”. One does not work like the other, and has no relationship with the other. I know it’s often said of Unix that “everything is a file”, but there are these distinctions.

Another fundamental difference between Unix and Squeak is that Unix is process-oriented. In Unix, to start up a new process entity, you run an executable with command line arguments. This can be done via GUI menus, but that is still what’s going on. In Squeak you run something by passing a message to a class that instantiates one or more new objects. This is typically done via icons and menus, but at heart this is what’s going on.

Anyway, back to my dilemma. I was looking to see if there was a way to easily fix this situation, since I considered this behavior to be a bug. I didn’t like that it did this without warning. It turned out to be more complicated than I thought. I realized that it would take a heuristic analyzer to determine that what I was doing was dangerous.

I did a little research, using Google to search the Squeak developers list, looking up the keywords “Squeak ‘idiot proof’”, and interestingly, the first entry that came up was from 1998, where Paul Fernhout had done something very similar. He asked, “Shouldn’t we prevent this?” Here’s what he did:

Dictionary become: nil

Dictionary is, again, a class. become: is a message handled by ProtoObject, the root class of every class in the system. In this case it makes every variable in the entire system that used to point to Dictionary point to nil. The method description also says, “and vice versa”, which I take to mean that every variable in the system that pointed to the argument in become: would point to Dictionary (to use the example). I’m guessing that nil is a special case, but I don’t know.
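To see the two-way swap with less drastic objects, here’s a sketch of become: applied to two ordinary strings:

| a b |
a := 'first' copy.
b := 'second' copy.
a become: b.                  "every reference to a's old object now sees b's object, and vice versa"
Transcript show: a; cr.       "the variable a should now answer 'second'"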

Anyway, this caused Squeak to crash. The consensus that ensued was that this was okay. Squeak was designed to be a fully modifiable system. In fact, Alan Kay has often said that he wishes that people would take Squeak and use it to create a better system than itself. “Obsolete the darn thing,” he says. The argument on the developers list was if it was made crash-proof, anticipating every possible way it could crash, it would become less of what it was intended to be. Someone feared that, “if you make it idiot proof, then only idiots will use it.” The conclusion seemed to be that Squeak programmers should just follow “the doctor’s rule”: The patient says, “It hurts when I do this.” The doctor says, “Well, don’t do that.”

I can see that this argument has validity, but in this particular instance, I don’t like this explanation. It seems to me that this is an area where some error checking could be easily implemented. Isn’t it erroneous to put nil in for become:? Maybe the answer is no. I at least have an excuse, since I was using the := operator, which is a primitive, and it’s much harder to check for this sort of thing in that context.

I also realized that the reason that code browsers can warn you about doing such things easily is they have menu options for doing things like deleting a class, and they throw up a warning if you try to do that. The browser can clearly make out what you’re trying to do, and check if there are dependencies on what you’re deleting. Checking for “Duration := nil” would be akin to a virus scanner looking for suspicious behavior. I don’t believe even a browser would pick up on this piece of code.

This example drove home the point for me that Squeak is like an operating system, and I as a developer am logged in as root–I can do anything, which means I can royally screw the system up (the image) if I don’t know what I’m doing.

I do have a way out of a mess, though, with Squeak. Fortunately, since it’s image-based, if I had managed to save a screwed-up image, I could just substitute a working image. It’s a good idea to keep a backup “clean” image around for such occasions, that you can just make a copy of. I wouldn’t need to reinstall the OS (to use the analogy). It’s like what people who use virtualization software do to manage different operating systems. They save disk images of their sessions. If one becomes screwed up or infected with a virus, they can just substitute a good one and delete the bad one. Problem solved.

For the most part, I wouldn’t lose any source code either, because that’s all saved in changesets, which I could recover (though Workspaces are only saved if you explicitly save them).

Another thing I could’ve done is opened up another instance of Squeak on a good image, filed out the Duration class (have Squeak serialize the class definition to a text file), and then filed it in (deserialized it), in the corrupted image. I imagine that would’ve solved the problem in my case.
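In code, that rescue would look roughly like this, if I have the messages right:

"In the healthy image: serialize the class definition to a .st file"
Duration fileOut.

"In the damaged image: read the definition back in"
(FileStream readOnlyFileNamed: 'Duration.st') fileIn.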

Having said all this, I don’t mean to say that if you’re a Squeak user working at a more abstract level, say with eToys, you’re likely to experience a crash in the system. That level of use is pretty stable. Individual apps may crash, and people have complained about apps getting messed up in the newer versions, but the system will stay up.

I can understand why some like Ruby. You get access to Smalltalk-like language features without having “root” access, and thereby the ability to mess things up in the runtime environment. The core runtime is written in C, which conveys a “none but the brave enter here” message. It’s compatible with the file-based paradigm most developers are familiar with. From what I hear, with the way programming teams are staffed these days at large firms, a technology like Squeak in unskilled hands would probably lead to problems. It’s like the saying from Spiderman: “With great power comes great responsibility.” Squeak gives the programmer a lot of power, including the power to mess things up. Interestingly, in the 1998 discussion I link to, a distinction was made between “shooting yourself in the foot” in Smalltalk vs. doing the same in a language like C. The commenter said that a C programmer has to be much more careful than a Squeak programmer, because a) pointers in C are dangerous, and b) you can’t avoid using them, whereas in Squeak you don’t have to use dangerous facilities to get something useful done. In fact, if you do screw up, Squeak offers several ways to recover.

In closing, I’ll say that just as a seasoned Unix administrator would never run “rm -r *” from the root directory (though Unix would allow such a command to be executed, destructive as it is), people learning Squeak come to realize that they have the full power of the system in their hands, and it behooves them to use it with discipline. I know that in a Unix shell you can set up an alias for rm that runs “rm -i”, which forces a warning. I suppose such an option could be built into Squeak if there was the desire for it. It would depend a lot on context, though, and Squeak doesn’t really embody that concept at this point.

Squeak is like an operating system

Squeak desktop

The Squeak 3.9 desktop

In foreground is a System Browser (what I call a “code browser”–the default developer IDE), and one of the desktop “flaps” on the right (like a “quick launch” bar–it retracts). Behind the System Browser is the Monticello Browser, a version control system for Squeak. The bar at the top contains some system menus and desktop controls. The wallpaper is called “Altitude”.

Smalltalk is misunderstood. It’s often criticized as one big lock-in platform (like Microsoft Windows, I guess), or as I saw someone say, a “walled garden”, or more often a language that’s “in its own little world.” I think all of these interpretations come about because of this misunderstanding: people think Smalltalk is just a language running on a VM. Period. They don’t understand why it has its own development tools, and its own way of interacting. That doesn’t seem like a typical language to them, but they can’t get past this notion. The truth is Smalltalk is unlike anything most programmers have ever seen before. It doesn’t seem to fit most of the conventions that programmers are trained to see in a language, or a system.

We’re trained that all an operating system represents to a user or a programmer is a manager of resources, and an interaction environment. Programs are stored inactive on disk, along with our data files. We learn that we initiate our programs by starting them from a command line, or clicking on an icon or menu item, through the interaction environment of the OS. When a program is run, it is linked into the system via a loader, assigned an area of memory, and given access to certain OS resources. Programs are only loaded into memory when they’re running, or put into a state of stasis for later activation.

We’re trained that a “virtual machine,” in the .Net or JVM sense, is a runtime that executes bytecodes and manages its own garbage-collected memory environment, and is strictly there to run programs. It may manage its own threads, or not. It becomes active when it’s running programs, and becomes inactive when we shut those programs down. It’s a bit like a primitive OS without its own user interaction model. It remains hidden to most people.

We’re trained that programming languages are compilers or interpreters that run on top of the OS. Each is a separate piece: the OS, the development tools, and the software we run. They are linked together only occasionally, and only so long as a tool or a piece of software is in use. Smalltalk runs against the grain of most of this conventional thinking.

Like its predecessor, Smalltalk-80, Squeak functions like an operating system (in Smalltalk-80’s case, it really was an OS, running on the Xerox Alto). The reason I say “like” an OS is that while it implements many of the basic features we expect from an OS, it doesn’t do everything. In its current implementation it does not become the OS of your computer, nor do any of the other Smalltalk implementations. It doesn’t have its own file system, nor does it natively handle devices. It needs an underlying OS. What distinguishes it from runtimes like .Net and the JVM is that it has its own GUI–its own user interaction environment (which can be turned off). It has its own development tools, which run inside this interaction environment. It multitasks its own processes. It allows the user to interactively monitor and control background threads through the Process Monitor that also runs inside the environment. Like Lisp’s runtime environment, everything is late-bound, unlike .Net and the JVM. Everything is more integrated: the kernel, the Smalltalk compiler, the UI, the development tools, and applications. All are implemented as objects running in one environment.
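Forking a background process, for instance, is itself just message passing. A small sketch (the process shows up in the system’s process list while it runs):

[ 10 timesRepeat:
    [ Transcript show: 'tick'; cr.
      (Delay forSeconds: 1) wait ] ]
  forkAt: Processor userBackgroundPriority.    "run the block as its own low-priority process"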

Creating a unified language/OS system was intentional. Alan Kay, the designer of the Smalltalk language, was in a discussion recently on operating systems on the Squeak developers list. He said that an operating system is just the part that’s not included in a programming language. In his opinion, a language should be an integral part of what we call an OS. His rationale was that in order for a program to run, it needs the facilities of the OS. The OS governs I/O, and it governs memory management. What’s the point in keeping a language separate from these facilities if programs are always going to need these things, and would be utterly useless without them? (Update 12-14-2010: Just a note of clarification about this paragraph. Though Alan Kay’s utterance of this idea was the first I’d heard of it, I’ve since seen quotes from Dan Ingalls (the implementor of Smalltalk), from material written a few decades ago, which suggest that the idea should be attributed to him. To be honest I’m not real sure who came up with it first.)

Another reason that there’s misunderstanding about Smalltalk is people think that the Smalltalk language is trying to be the “one language to rule them all” in the Squeak/Smalltalk environment. This is not true.  If you take a look at other systems, like Unix/Linux, Windows, or Mac OS, most of the code base for them was written in C. In some OSes, some of the code is probably even written in assembly language. What are the C/C++ compilers, the Java and .Net runtimes, and Python and Ruby written in? They’re all written in C. Why? Is C trying to be the “one language to rule them all” on those systems? No, but why are they all written in the same language? Why aren’t they written in a variety of languages like Pascal, Lisp, or Scheme? I think any of them would be capable of handling the task of translating source code into machine code, or creating an interpreter. Sure, performance is part of the reason. The main one, though, is that all of the operating systems are fundamentally oriented around the way C does things. The libraries that come with the OS, or with the development systems, are written in it. Most of the kernel is probably written in it. Pascal, assuming it was not written in C, could probably handle dealing with the OS, but it would have a rough time, due to the different calling conventions that are used between it and C. Could Lisp, assuming its runtime wasn’t written in C, either, interact with the OS with no help from C? I doubt it, not unless the OS libraries allowed some kind of universal message passing scheme that it could adapt to.

A major reason Smalltalk seems to be its “own little world” is that, like any OS, it needs a large variety of tools, applications, and supporting technologies to feel useful to most users, and Smalltalk doesn’t have that right now. It does have its own way of doing things, but nothing says it can’t be changed. As an example, before Monticello came along, versioning used to take place only in the changesets, or via filing out classes to text files and then saving them in a version control system.

Saying Smalltalk is isolating is equivalent to saying that BeOS is a “walled garden,” and I’m sure it would feel that way to most. Is any OS any less a “walled garden”? It may not seem like it, but try taking a Windows application and running it on a Mac or Linux without an emulator, or vice versa. You’ll see what I mean. It’s just a question of whether the walled garden is so big it doesn’t feel confining.

Having said this, Squeak does offer some opportunities for interoperating with the “outside world.” Squeak can open sockets, so it can communicate via network protocols with other servers and databases. There’s a web service library, so it can communicate in SOAP. Other commercial implementations have these features as well. Squeak also has FFI (Foreign Function Interface) and OSProcess libraries, which enable Squeak programmers to initiate processes that run on the underlying OS. As I mentioned a while ago, someone ported the wxWidgets library to it, called wxSqueak, so people can write GUI apps that run on the underlying OS. There’s also Squeak/.Net Bridge, which allows interaction with the .Net runtime in a late-bound way (though .Net makes you work harder to get what you want in this mode). So Squeak, at least, is not an island in principle. Its system support could be better; other languages like Ruby have more developed OS interoperability libraries at this point.
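As a quick sketch of the socket support, Squeak ships with an HTTPSocket class that can fetch a page, if I recall the API correctly:

| response |
response := HTTPSocket httpGet: 'http://example.com/'.
Transcript show: response contents; cr.    "dump the raw HTTP response"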

The “C/C++ effect” you see on most other systems is the equivalent of what you see in Squeak. The Smalltalk language is the “C” of the system. The reason most software out there, even open source, can’t run on it, is at a fundamental level, they’re all written in C or C++. The system languages are so different. Smalltalk also represents a very different system model.

What Smalltalk, and especially Squeak, has achieved that no other system has achieved to date, with the exception of the Lisp machines of old, is a symbolic virtual machine environment that is more relevant to its own use than any other runtime environment out there. In .Net and the JVM, you just use their runtimes to run programs. They’re rather like the old operating systems at the beginning of computing, which were glorified program loaders, though they have some sophisticated technical features the old OSes didn’t have. With Squeak, you’re not just running software on it, you’re using the VM as well, interacting with it. Squeak can be stripped down to the point that it is just a runtime for running programs. There are a few Squeak developers who have done that.

In Smalltalk, rather than machine code and C being software’s primary method of interacting with the machine, bytecode and the Smalltalk language are the basic technologies you work with. This doesn’t mean at all that more languages couldn’t be written to run in the Smalltalk environment. I have no doubt that Ruby or Python could be written in Smalltalk, and used inside the environment, with their syntax and libraries intact. The way in which they’d be run (initiated) would be different, but at heart they would be the same as they are now.

Both Apple and Microsoft are moving towards the Smalltalk model, with the Cocoa development environment on the Mac, and .Net/WPF on Windows. Both have you write software that can run in a bytecode-executing runtime, though the OS is still written in large part in compiled-to-binary code. If desktop OSes have a future, I think we’ll see them move more and more to this model. Even so, there remain a couple of major differences between these approaches: Smalltalk uses a late-binding model for everything it does, so it doesn’t have to be shut off every time you want to change what’s running inside it. Even programs that are executing in it don’t have to be stopped. .Net uses an early-binding model, though it emulates late binding pretty well in ASP.Net now. I can’t comment on Cocoa and Objective C, since I haven’t used them. Another difference is, like Linux, Squeak is completely open source. So everything is modifiable, right down to the kernel. As I’ll get into in my next post, this is a double-edged sword.
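The late-binding difference is easy to demonstrate in any dynamic language. Here’s a minimal sketch in Python (used just for illustration; Smalltalk’s version of this goes much further, since the whole environment lives in the image): a method is redefined while an instance is live, and the existing instance picks up the new behavior at its next message send, with nothing shut off or reloaded.

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()          # a live instance
print(g.greet())       # -> hello

# "Patch" the running system: rebind the method on the class.
# The existing instance sees the change at its next message send;
# nothing had to be stopped, recompiled, or reloaded.
def greet_v2(self):
    return "hello, world"

Greeter.greet = greet_v2
print(g.greet())       # -> hello, world
```

An early-bound system would make you stop, recompile, and restart to get the same effect, losing whatever state the running program had built up.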

Update 7/20/07: I found this in the reddit comments to this blog post. Apparently, there is a version of Squeak that IS an operating system, called SqueakNOS, which allows you to boot your computer into Squeak as if it were an OS. The name stands for “Squeak No Operating System” (underneath it). It comes as an ISO image that you can burn to CD. The screenshot shows that it has a concept of devices, displaying tabs for the CD-ROM drive, serial port, and NIC.

I think I had heard the name of SqueakNOS before this, but I didn’t know what it was. The version of Squeak that most people have is the kind that rides on top of an existing OS, not this, but it’s nice to know that this is out there. Personally, I like the kind I have, since it allows me to do what I like doing in Squeak, and if I want resources it doesn’t have available, then I can go to the OS I’m running. I used to hear about a version of Linux that ran this way, so people didn’t have to dual boot, or use virtualization software. There’s no “dishonor” in it.

One person in the reddit comments said they wished it had a “good web browser that supported AJAX.” Me too. There is an HTML browser called “Scamper”, available from SqueakMap. It used to be included in the image, but in the latest version from squeak.org it looks like it’s been taken out. It seems that lately, Squeak developers have preferred having Squeak control the default browser on your system (IE, Safari, Firefox, etc.) over using a browser internal to Squeak. The last I saw Scamper, it was a bare-bones HTML browser that just displayed text in a standard system font. It could be better.

Update 1/18/20: I happened to research SqueakNOS last year for a bit, and found out it didn’t technically run as the computer’s operating system. It just acted like one. What actually happened is the computer booted into a stripped down version of Linux, which automatically ran Squeak as its sole application at boot up. This is how Chromebooks operate. My guess is the device management in SqueakNOS manipulated drivers in Linux.

Redefining computing, Part 2

“Pascal is for building pyramids–imposing, breathtaking, static structures built by armies pushing heavy blocks into place. Lisp is for building organisms–imposing, breathtaking, dynamic structures built by squads fitting fluctuating myriads of simpler organisms into place.”

— from the foreword written by Alan Perlis for
Structure and Interpretation of Computer Programs

In this post I’m going to talk mostly about a keynote speech Alan Kay gave at the 1997 OOPSLA conference, called “The Computer Revolution Hasn’t Happened Yet,” how he relates computing to biological systems, and his argument that software systems, particularly OOP, should emulate biological systems. In part 1 I talked a bit about Peter Denning’s declaration that “Computing is a Natural Science”.

Please note: There is a bit of mature language in this speech. It’s one sentence. If you are easily offended, errm…plug your ears, avert your eyes, whatever. It’s where he’s talking about Dr. Dijkstra.

This speech reminds me that when I started hearing about gene therapies being used to treat disease, I got the idea that this was a form of programming–reprogramming cells to operate differently, using the “machine code” of the cell: DNA.

While scientists are discovering that there’s computation going on around us in places we did not suspect, Kay was, and still is, trying to get us developers to make our software “alive,” capable of handling change and growth–to scale–without being disturbed to a significant degree. That’s what his conception of OOP was based on to begin with. A key part of this is a late-bound system which allows changes to be made dynamically. Kay makes the point as well that the internet is like this, though he only had a tiny hand in it.

One of the most striking things he’s touched on in this speech, which he’s given several times (it’s a bit different each time), is that computing introduces a new kind of math: a finite mathematics that he says is more practical and more closely resembles things that are real, as opposed to classical mathematics, whose unbounded quality often does not. Maybe I’m wrong, but it seems to me from what he says that he sees this finite math as better than classical math at describing the world around us. As noted in my previous post, Peter Denning says in his article “Computing is a Natural Science”:

Stephen Wolfram proclaimed [in his book A New Kind of Science] that nature is written in the language of computation, challenging Galileo’s claim that it is written in mathematics.

So what’s going on here? It definitely sounds like CS and the other sciences are merging some. I learned certain finite math disciplines in high school and college, which related to my study of computer science. I never expected them to be substitutes for classical math in the sciences. If someone can fill me in on this I’d be interested to hear more. I don’t know, but this memory just flashed through my head; that scene in “Xanadu” where a band from the 40s, and a band from the 80s merge together into one performance. They’re singing different songs, but they mesh. Crazy what one’s mind can come up with, huh. 🙂 I know, I know. “Xanadu” was a cheesy movie, but this scene was one of the better ones in it, if you can tolerate the fashions.

I think this speech has caused me to look at OOP in a new way. When I was first training myself in OOP years ago, I went through the same lesson plan I’m sure a lot of you did, which was “Let’s make a class called ‘Mammal’. Okay, now let’s create a derived class off of that called ‘Canine’, and derive ‘Dog’ and ‘Wolf’ off of that,” or, “Let’s create a ‘Shape’ class. Now derive ‘Triangle’ and ‘Ellipse’ off of that, and then derive ‘Circle’ from ‘Ellipse’.” What these exercises were really doing was teaching the structure and techniques of OOP, and showing some of their practical advantages, but none of it taught you why you were doing it. Kay’s presentation gave me the “why”, and he makes a pretty good case that OOP is not merely a neat new way to program, but is in fact essential for developing complex systems, or at least the best model discovered so far for doing it. He also points out that the way OOP is implemented today is missing one critical element–dynamism, due to the early-bound languages/systems we’re using, and that a lot of us have misunderstood what he intended OOP to be about.
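For readers who didn’t go through that lesson plan, the “Shape” exercise the paragraph describes looks roughly like this (sketched in Python rather than Smalltalk, with the class names taken from the text’s example):

```python
import math

class Shape:
    def area(self):
        raise NotImplementedError   # each subclass answers this message its own way

class Triangle(Shape):
    def __init__(self, base, height):
        self.base, self.height = base, height
    def area(self):
        return self.base * self.height / 2

class Ellipse(Shape):
    def __init__(self, a, b):
        self.a, self.b = a, b       # semi-axes
    def area(self):
        return math.pi * self.a * self.b

class Circle(Ellipse):
    def __init__(self, r):
        super().__init__(r, r)      # a circle is an ellipse with equal axes

# Polymorphism: every Shape answers the same "area" message.
shapes = [Triangle(3, 4), Circle(1)]
print([round(s.area(), 2) for s in shapes])   # -> [6.0, 3.14]
```

This teaches the structure and techniques well enough, but, as the paragraph says, it doesn’t teach the “why”.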

Alan Kay relates the metaphor of biological systems to software systems, from his speech at the 1997 OOPSLA conference

If you don’t want to listen to the whole speech, I’ve printed some excerpts from it below, only including the parts that fit the biological metaphor he was getting across. I’ve included my own annotations in []’s, and commentary in between excerpts. I’ve also included some links.

Dr. Kay gets into this discussion, using a spat he and Dr. Dijkstra got into some years before over who were the better computer scientists, the Europeans or the Americans, as a launching point. Dijkstra made the point that European computer scientists had more of a math background than Americans did, and they got their positions by working harder at it than Americans, etc. Kay responded with, “Huh, so why do we write most of the software, then?” (I’m paraphrasing). He continues:

Computers form a new kind of math. They don’t really fit well into classical math. And people who try to do that are basically indulging in a form of masturbation…maybe even realizing it. It was a kind of a practical math. The balance was between making structures that were supposed to be consistent of a much larger kind than classical math had ever come close to dreaming of attempting, and having to deal with [the] exact same problems that classical math of any size has to deal with, which is being able to be convincing about having covered all of the cases.

There’s a mathematician by the name of Euler [pronounced “Oiler”] whose speculations about what might be true formed 20 large books, and most of them were true. Most of them were right. Almost all of his proofs were wrong. And many PhDs in mathematics in the last and this century have been formed by mathematicians going to Euler’s books, finding one of his proofs, showing it was a bad proof, and then guessing that his insight was probably correct and finding a much more convincing proof. And so debugging actually goes on in mathematics as well.

And I think the main thing about doing OOP work, or any kind of programming work, is that there has to be some exquisite blend between beauty and practicality. There’s no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those I don’t think really get what computing is all about. It’s like saying I have really great ideas for paintings, but I’m just going to use a brush, but no paint. You know, so my ideas will be represented by the gestures I make over the paper; and don’t tell any 20th century artists that, or they might decide to make a videotape of them doing that and put it in a museum. . . .

Kay used a concept called bisociation, from the book “The Act of Creation”, by Arthur Koestler. Koestler draws out a “pink plane” with a trail of ants on it, which represents one paradigm, one train of thought, that starts out with an idea and works towards continuous improvement upon it. Kay said about it:

If you think about that, it means that progress in a fixed context is almost always a form of optimization, because if you were actually coming up with something new it wouldn’t have been part of the rules or the context for what the pink plane is all about. So creative acts generally are ones that don’t stay in the same context that they’re in. So he says every once in a while, even though you have been taught carefully by parents and by school for many years, you have a “blue” idea.

This introduces a new context.

[Koestler] also pointed out that you have to have something “blue” to have “blue” thoughts with, and I think this is something that is generally missed in people who specialize to the extent of anything else. When you specialize you’re basically putting yourself into a mental state where optimization is pretty much all you can do. You have to learn lots of different kinds of things in order to have the start of these other contexts.

This is the reason he says a general education is important. Next, he gets into the biological metaphor for computing:

One of my undergraduate majors was in molecular biology, and my particular interest was both in cell physiology and in embryology–morphogenesis they call it today. And this book, The Molecular Biology of the Gene had just come out in 1965, and–wonderful book, still in print, and of course it’s gone through many, many editions, and probably the only words that are common between this book and the one of today are the articles, like “the” and “and”. Actually the word “gene” I think is still in there, but it means something completely different now. But one of the things that Watson did in this book is to make an assay–first assay of an entire living creature, and that was the E. coli bacterium. So if you look inside one of these, the complexity is staggering.

He brings up the slide of an E. coli bacterium as seen through an (electron?) microscope.

Those “popcorn” things are protein molecules that have about 5,000 atoms in them, and as you can see on the slide, when you get rid of the small molecules like water, and calcium ions, and potassium ions, and so forth, which constitute about 70% of the mass of this thing, the 30% that remains has about 120 million components that interact with each other in an informational way, and each one of these components carries quite a bit of information [my emphasis]. The simple-minded way of thinking of these things is it works kind of like OPS5 [OPS5 is an AI language that uses a set of condition-action rules to represent knowledge. It was developed in the late 1970s]. There’s a pattern matcher, and then there are things that happen if patterns are matched successfully. So the state that’s involved in that is about 100 Gigs., and you can multiply that out today. It’s only 100 desktops, or so [it would be 1/2-1/3 of a desktop today], but it’s still pretty impressive as [an] amount of computation, and maybe the most interesting thing about this structure is that the rapidity of computation seriously rivals that of computers today, particularly when you’re considering it’s done in parallel. For example, one of those popcorn-sized things moves its own length in just 2 nanoseconds. So one way of visualizing that is if an atom was the size of a tennis ball, then one of these protein molecules would be about the size of a Volkswagen, and it’s moving its own length in 2 nanoseconds. That’s about 8 feet on our scale of things. And can anybody do the arithmetic to tell me what fraction of the speed of light moving 8 feet in 2 nanoseconds is?…[there’s a response from the audience] Four times! Yeah. Four times the speed of light [he moves his arm up]–scale. So if you ever wondered why chemistry works, this is why. The thermal agitation down there is so unbelievably violent, that we could not imagine it, even with the aid of computers.
There’s nothing to be seen inside one of these things until you kill it, because it is just a complete blur of activity, and under good conditions it only takes about 15 to 18 minutes for one of these to completely duplicate itself. Okay. So that’s a bacterium. And of course, lots more is known today.
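Kay’s scaled-up arithmetic checks out, by the way: light travels roughly 0.98 feet per nanosecond, so 8 feet in 2 nanoseconds is about four times the speed of light at that scale. A quick check:

```python
C_FEET_PER_NS = 0.9836   # speed of light is roughly 0.98 feet per nanosecond
distance_ft = 8          # a "Volkswagen-sized" protein moving its own scaled length
time_ns = 2

speed = distance_ft / time_ns            # 4.0 ft/ns
print(round(speed / C_FEET_PER_NS, 1))   # -> 4.1, i.e. about "four times the speed of light"
```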

Another fact to relate this to us, is that these bacteria are about 1/500th the size of the cells in our bodies, which instead of 120 million informational components, have about 60 billion, and we have between 10^12, maybe 10^13, maybe even more of these cells in our body. And yet only 50 cell divisions happen in a 9-month pregnancy. It only takes 50 cell divisions to make a baby. And actually if you multiply it out, you realize you only need around 40. And the extra 10 powers of 10 are there because during the embryological process, many of the cells that are not fit in one way or another for the organism as a whole are killed. So things are done by over-proliferating, testing, and trimming to this much larger plan. And then of course, each one of these structures, us, is embedded in an enormous biomass.

So to a person whose “blue” context might have been biology, something like a computer could not possibly be regarded as particularly complex, or large, or fast. Slow. Small. Stupid. That’s what computers are. So the question is how can we get them to realize their destiny?

So the shift in point of view here is from–There’s this problem, if you take things like doghouses, they don’t scale [in size] by a factor of 100 very well. If you take things like clocks, they don’t scale by a factor of 100 very well. Take things like cells, they not only scale by factors of 100, but by factors of a trillion. And the question is how do they do it, and how might we adapt this idea for building complex systems?

He brings up a slide of a basic biological cell. He uses the metaphor of the cell to talk about OOP, as he conceived it, and building software systems:

Okay, this is the simple one. This is the one by the way that C++ has still not figured out, though. There’s no idea so simple and powerful that you can’t get zillions of people to misunderstand it. So you must, must, must not allow the interior of any one of these things to be a factor in the computation of the whole, okay. And this is only part of this story. It’s not just the cell–the cell membrane is there to keep most things out, as much as it is there to keep certain things in.

And a lot of, I think, our confusion with objects is the problem that in our Western culture, we have a language that has very hard nouns and verbs in it. So our process words stink. So it’s much easier for us when we think of an object–I have apologized profusely over the last 20 years for making up the term “object-oriented”, because as soon as it started to be misapplied, I realized that I should’ve used a much more process-oriented term for it. Now, the Japanese have an interesting word, which is called “ma”, which spelled in English is M-A, “ma”. And “ma” is the stuff in between what we call objects. It’s the stuff we don’t see, because we’re focused on the noun-ness of things, rather than the process-ness of things, whereas Japanese has a more process/feel-oriented way of looking at how things relate to each other. You can always tell that by looking at the size of a word it takes to express something that is important. So “ma” is very short. We have to use words like “interstitial”, or worse, to approximate what the Japanese are talking about.

I’ve seen him use this analogy of the Japanese word “ma” before in more recent versions of this speech. What he kind of beats around the bush about here is that the objects themselves are not nearly as important as the messages that are sent between them. This is the “ma” he’s talking about–what is going on between the objects, their relationships to each other. To me, he’s also emphasizing architecture here. In a different version of this speech (I believe), he said that the true abstraction in OOP is in the messaging that goes on between objects, not the objects themselves.
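You can see the “messages over objects” idea in miniature in any late-bound language: the sender commits only to a message, not to a receiver’s class. Here’s a hypothetical Python sketch (the FileLog/NetworkLog names are mine, just for illustration):

```python
class FileLog:
    def write(self, msg):
        return f"file: {msg}"

class NetworkLog:
    def write(self, msg):
        return f"net: {msg}"

def report(logger, msg):
    # The sender knows nothing about the receiver's class; the entire
    # relationship between the two objects is the message "write".
    return logger.write(msg)

print(report(FileLog(), "boot"))     # -> file: boot
print(report(NetworkLog(), "boot"))  # -> net: boot
```

The interesting design lives in the message protocol between the two objects, not inside either one of them, which is roughly what the “ma” analogy is pointing at.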

He continues:

So, the realization here–and it’s not possible to assign this realization to any particular person, because it was in the seeds of Sketchpad, and in the seeds of the Air Training Command file system, and in the seeds of Simula–and that is, that once you have encapsulated in such a way that there is an interface between the inside and the outside, it is possible to make an object act like anything, and the reason is simply this: that what you have encapsulated is a computer [my emphasis]. So you have done a powerful thing in computer science, which is to take the powerful thing you’re working on and not lose it by partitioning up your design space. This is the bug in data procedures–data and procedure languages. And I think this is the most pernicious thing about languages like C++ and Java, [which] is they think they are helping the programmer by looking as much like the old thing as possible, but in fact they’re hurting the programmer terribly, by making it difficult for the programmer to understand what’s really powerful about this new metaphor.

The “Air Training Command file system” refers to a way that data was stored on tapes back when Kay was in the Air Force, I think in the late 1950s or early 1960s. It was effective and efficient. Some have pointed to it as the first example Kay had seen of an object-oriented system. The first section of tape contained pointers to routines (probably hard addresses to memory locations, in those days) whose code was stored in a second section of the tape, which read/manipulated the data on the third section of the tape. This way, it didn’t matter what format the data came in, since that was abstracted by the routines in the second section. No one knows who came up with the scheme.

And again, people who were doing time-sharing systems had already figured this out as well. Butler Lampson’s thesis in 1965 was about–that what you want to give a person on a time-sharing system is something that is now called a virtual machine, which is not the same as what the Java VM is, but something that is as much like the physical computer as possible, but give one separately to everybody. Unix had that sense about it, and the biggest problem with that scheme is that a Unix process had an overhead of about 2,000 bytes just to have a process. And so it’s going to be difficult in Unix to let a Unix process just be the number 3. So you’d be going from 3 bits to a couple of thousand bytes, and you have this problem with scaling.

So a lot of the problem here is both deciding that the biological metaphor [my emphasis] is the one that is going to win out over the next 25 years or so, and then committing to it enough to get it so it can be practical at all of the levels of scale that we actually need. Then we have one trick we can do that biology doesn’t know how to do, which is we can take the DNA out of the cells, and that allows us to deal with cystic fibrosis much more easily than the way it’s done today. And systems do have cystic fibrosis, and some of you may know that cystic fibrosis today for some people is treated by infecting them with a virus, a modified cold virus, giving them a lung infection, but the defective gene for cystic fibrosis is in this cold virus, and the cold virus is too weak to actually destroy the lungs like pneumonia does, but it is strong enough to insert a copy of that gene in every cell in the lungs. And that is what does the trick. That’s a very complicated way of reprogramming an organism’s DNA once it has gotten started.

He really mixes metaphors here, going from “cystic fibrosis” in computing, to it in a biological system. I got confused on what he said about the gene therapy. Perhaps he meant that the virus carries the “non-defective gene”? I thought what the therapy did was substitute a good gene for the bad one, since cystic fibrosis is a genetic condition, but then, I don’t really know how this works.

When he talks about “taking the DNA out of the cell,” and dealing with “cystic fibrosis,” I’m pretty sure he’s talking about a quality of dynamic OOP systems (like Smalltalk) that work like Ivan Sutherland’s Sketchpad program from the 1960s, where each object instance was based on a master object. If you modified the master object, that change got reflected automatically in all of the object instances.
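A rough sketch of that master/instance idea, in Python rather than Sketchpad’s terms (the delegation scheme here is my own illustration, not how Sketchpad was actually implemented): instances hold no behavior of their own, they defer to a master object, so one edit to the master is reflected everywhere at once.

```python
class Instance:
    def __init__(self, master):
        self._master = master
    def __getattr__(self, name):
        # Delegate anything not defined locally to the master object,
        # so edits to the master show up in every instance.
        return getattr(self._master, name)

class Master:
    color = "black"

master = Master()
a, b = Instance(master), Instance(master)
print(a.color, b.color)   # -> black black

master.color = "red"      # edit the master object once...
print(a.color, b.color)   # ...and every instance reflects it: red red
```

Class-based languages get a similar effect through method lookup, since instances defer to their class at each message send rather than copying behavior from it.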

He brought up another slide that looks like a matrix (chart) of characteristics of some kind. I couldn’t see it too well. I recognized the subject though. He begins to talk about his conception of the semantic web, using the same biological and object-oriented metaphor:

So here’s one that is amazing to me, that we haven’t seen more of. For instance, one of the most amazing things to me, of people who have been trying to put OOP on the internet, is that I do not–I hope someone will come up afterwards and tell me of an exception to this–but I do not know of anybody yet who has realized that at the very least every object should have a URL, because what the heck are they if they aren’t these things. And I believe that every object on the internet should have an IP, because that represents much better what the actual abstractions are of physical hardware to the bits. So this is an early insight, that objects basically are like servers. And this notion of polymorphism, which used to be called generic procedures, is a way of thinking about classes of these servers. Everybody knows about that.

He brought up another slide, showing the picture of a building crane on one side, and a collection of biological cells on the other. More metaphors.

And here’s one that we haven’t really faced up to much yet, that now we’ll have to construct this stuff, and soon we’ll be required to grow it. So it’s very easy, for instance, to grow a baby 6 inches. They do it about 10 times in their life. You never have to take it down for maintenance. But if you try and grow a 747, you’re faced with an unbelievable problem, because it’s in this simple-minded mechanical world in which the only object has been to make the artifact in the first place, not to fix it, not to change it, not to let it live for 100 years.

So let me ask a question. I won’t take names, but how many people here still use a language that essentially forces you–the development system forces you to develop outside of the language [perhaps he means “outside the VM environment”?], compile and reload, and go, even if it’s fast, like Virtual Cafe (sic). How many here still do that? Let’s just see. Come on. Admit it. We can have a Texas tent meeting later. Yeah, so if you think about that, that cannot possibly be other than a dead end for building complex systems, where much of the building of complex systems is in part going to go to trying to understand what the possibilities for interoperability is with things that already exist.

Now, I just played a very minor part in the design of the ARPANet. I was one of 30 graduate students who went to systems design meetings to try and formulate design principles for the ARPANet, also about 30 years ago, and if you think about–the ARPANet of course became the internet–and from the time it started running, which is around 1969 or so, to this day, it has expanded by a factor of about 100 million. So that’s pretty good. Eight orders of magnitude. And as far as anybody can tell–I talked to Larry Roberts about this the other day–there’s not one physical atom in the internet today that was in the original ARPANet, and there is not one line of code in the internet today that was in the original ARPANet. Of course if we’d had IBM mainframes in the original ARPANet that wouldn’t have been true. So this is a system that has expanded by 100 million, and has changed every atom and every bit, and has never had to stop! That is the metaphor we absolutely must apply to what we think are smaller things. When we think programming is small, that’s why your programs are so big! . . .

There are going to be dozens and dozens–there almost already are–dozens and dozens of different object systems, all with very similar semantics, but with very different pragmatic details. And if you think of what a URL actually is, and if you think of what an HTTP message actually is, and if you think of what an object actually is, and if you think of what an object-oriented pointer actually is, I think it should be pretty clear that any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made. That’s the whole point of not being able to see inside. And so a semantic interoperability is possible almost immediately by simply taking that stance. So this is going to change, really, everything. And things like Java Beans and CORBA are not going to suffice, because at some point one is going to have to start really discovering what objects think they can do. And this is going to lead to a universal interface language, which is not a programming language, per se. It’s more like a prototyping language that allows an interchange of deep information about what objects think they can do. It allows objects to make experiments with other objects in a safe way to see how they respond to various messages. This is going to be a critical thing to automate in the next 10 years. . . .

I think what he meant by “any object-oriented language can internalize its own local pointers to any object in the world, regardless of where it was made” is that any OOP language can abstract its pointers so that there’s no distinction between pointers to local objects and URLs to remote objects. This would make dealing with local and remote objects seamless, assuming you’re using a late-bound, message-passing metaphor throughout your system. The seamlessness would not be perfect in an early-bound system, because the compiler would ensure that all classes for which there are messages existed before runtime, but there’s no way to ensure that every object on the internet is always going to be available, and always at the same URL/address.
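One way to picture “internalizing” pointers to remote objects is a reference type that hides where the receiver lives. This is a hypothetical Python sketch (the names and the stubbed-out transport are mine, purely for illustration); the point is that the sender’s code is identical whether the object is local or behind a URL:

```python
class LocalRef:
    def __init__(self, obj):
        self._obj = obj
    def send(self, message, *args):
        # Local delivery: a direct method call.
        return getattr(self._obj, message)(*args)

class RemoteRef:
    def __init__(self, url):
        self._url = url
    def send(self, message, *args):
        # Remote delivery would go over the network (HTTP, etc.);
        # stubbed out here so the sketch stays self-contained.
        return f"POST {self._url}/{message} args={args}"

class Counter:
    def __init__(self):
        self.n = 0
    def increment(self, by):
        self.n += by
        return self.n

# The sender doesn't know or care which kind of reference it holds.
refs = [LocalRef(Counter()), RemoteRef("http://example.org/counter")]
for r in refs:
    print(r.send("increment", 5))
```

In a fully late-bound system this indirection is the normal case for every message send, which is why local/remote seamlessness comes cheaply there.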

You might ask, “Why would you want that kind of ‘perfect’ seamlessness? Don’t you want that insurance of knowing that everything connects up the way it should?” My answer is, things change. One of the things Kay is arguing here is that our software needs to be adaptable to change, both in itself, and in the “outside world” of the internet. He uses the internet as an example of a system that has been very robust in the face of change. Early-bound systems have a tendency to push the programmer towards assuming that everything will connect up just as it is today, and that nothing will change.

As I heard him talk about objects on the web, the first thing that came to mind was REST (the whole “each object should have a URL” idea). It seems like the first try at an implementation of this idea of objects “experimenting” and “figuring out” what other objects on the network can do was started with the idea of WSDL and UDDI, but that flopped, from what I hear. Microsoft has made a bit of a start with this, in my opinion, with WCF, but it only determines what type of object is at the other end, as in “Is it DCOM?”, “Is it an XML web service?”, etc. It frees the programmer from having to do that. It still allows the client program to interrogate the object, but it’s not going to do “experiments” on it for you. It may allow the client program to do them more easily than before. I don’t know.

Kay ends with a note of encouragement:

Look for the “blue” thoughts. And I was trying to think of a way–how could I stop this talk, because I’ll go on and on, and I remembered a story. I’m a pipe organist. Most pipe organists have a hero whose name was E. Power Biggs. He kind of revived the interest in the pipe organ, especially as it was played in the 17th and 18th centuries, and had a tremendous influence on all of us organists. And a good friend of mine was E. Power Biggs’s assistant for many years, back in the 40s and 50s. He’s in his 80s now. When we get him for dinner, we always get him to tell E. Power Biggs stories. The organ that E. Power Biggs had in those days for his broadcasts was a dinky little organ, neither fish nor fowl, in a small museum at Harvard, called the Busch-Reisinger Museum; but in fact all manner of music was played on it. And one day this assistant had to fill in for Biggs, and he asked Biggs, “What is the piece [to be] played?” And he said, “Well I had programmed César Franck’s heroic piece”. And if you know this piece it is made for the largest organs that have ever been made, the loudest organs that have ever been made, in the largest cathedrals that have ever been made, because it’s a 19th century symphonic type organ work. And Biggs was asking my friend to play this on this dinky little organ, and he said, “How can I play it on this?”, and Biggsy said, “Just play it grand! Just play it grand!” And the way to stay with the future as it moves, is to always play your systems more grand than they seem to be right now. Thank you.

Did some “housecleaning”

I occasionally go in and fix past blog posts as a matter of course. I decided to go through all of my past blog posts today and fix any links that have gotten out of date or don’t work anymore. In a few cases I deleted links because the sites they refer to have disappeared. In one case I got rid of a link because it turned out to be misleading.

A while back I got a complaint from a reader that old posts were showing up as “new” in his RSS reader. I don’t know what’s causing that, but it may be my occasional edits in old posts. Just a guess. Sometimes I’ll make frequent edits to posts I’ve just put up. I try to proofread my posts before putting them up, but in the long ones something will usually slip by me, or I’ll realize a spot doesn’t say what I intended. Hopefully it’s not annoying. Just trying to keep this blog informative and useful, and occasionally a little fun. 🙂

Thoughts on the 3D GUI

I got into a discussion with Giles Bowkett last week about 3D GUIs (see the comments on his post). It provoked some thought for me. Giles pointed me to an article on Jakob Nielsen’s assessment of 3D UIs, though it’s almost 10 years old now. Nielsen makes some good points, but it sounded to me like his assessment was of what was out there at the time, not so much the technology’s potential.

When I imagined what a 3D GUI would be like, around the time this article was written, I tried to think of uses for it from a home user’s perspective. The first thing that came to mind was extended “wallpaper”: rather than having a background image on a desktop, you’d have a whole environment you could fashion to your taste–to actually decorate. That doesn’t sound very useful, and in fact sounds like a huge waste of productive time.

When I thought about it last week, I came to the conclusion that it would allow for a more flexible environment. One of the applications Nielsen talked about for 3D was modeling interior decorating. What if an interior decorating app. wasn’t its own app. like we know it today, but instead added capabilities to the existing environment? What about these 3D role-playing games? They could extend the environment as well, rather than just being an app. in a window. Think of the “holodeck” as a metaphor. Everything that’s 3D could live in the environment, or in a sub-environment. Everything that’s 2D could live in a window in the 3D space, but could be found immediately. It wouldn’t have the problem in real 3D space where you can actually lose something :). A question that comes to mind is: if this is the way things are done, how do you switch apps like you can in 2D today? I think that would necessitate having multiple environments you could switch between quickly.

It could make apps. more extensible, because there would be a common programming metaphor for adding new elements to the environment. Say some third-party company wanted to create an add-on to a game. They wouldn’t have to conform to the game developer’s API, because the original game would be just a modification of the standard environment. The third-party mod could do the same.

I referenced Croquet as an example of what I imagined. It’s not a pure 3D environment. There are still 2D elements that show up right in front of you no matter where you are in the space. I think that’s practical. I think 3D UIs can add more capabilities to what we’d call “thick client” computing today, but I think some shortcuts need to be added to the environment so that you don’t have to look and travel everywhere just to get something done. I agree that makes it harder than 2D today.

A question I have about it is, is it easier for people to think of “travelling around” to look at a collection of things, or to scroll a window up and down, and side to side?

Corrections to C# and Smalltalk case study #1

I’ve uploaded some corrections to both the C# and Smalltalk versions of my first case study of these two. I’ve put up a new C# version. The old one is still there (and can be referenced from the post I just linked to). The link to the new version is below. I’ve replaced the Smalltalk code with a new version.

Edit 10/18/2017: I’ve hosted the code samples on DropBox. DropBox will say that the files cannot be previewed, but it will display a blue “Download” button. Click that to download each zip file.

Case study code

C# EmployeeTest

Smalltalk Employees-Test package

I was inspired to make these changes by a blogger’s scathing post on my case study (Update 5-24-2013: This blog has apparently disappeared). By the way, if you want to see what this same app. would look like using C#/.Net 2.0, read his code. My code is in C#/.Net 1.x. Leaving aside his pompous language, he found a bug in my C# exception code in the Employee class, which I’ve fixed, and he found that I was using overly complex code to convert some of the data to strings. As I said in my defense, “My C#-fu is not what it used to be.” I’ve optimized those places. I also changed the exception’s message so it’s more descriptive.

He criticized the way I named things. He had some good suggestions for changing the names of things, which I have not followed here. I figured I should fix the bug and the needlessly complex code, but I wasn’t going to rewrite the app. I want to move on. I’ll try to be more conscious of how I name things in code in the future.

The only change I made to the Smalltalk code was making the exception message (for the same reason as in the C# code) more descriptive.

The end result, in terms of lines of code, is that the C# and Smalltalk versions came out virtually the same size. The Smalltalk code increased to 71 LOC (from 70), and the C# code decreased from 75 LOC to 73. The difference in size is negligible now. I haven’t bothered to look, but maybe C#/.Net 2.0 would’ve required fewer LOC than the Smalltalk solution in this case. As I said in my first post, this was not a straight line-of-code count. I used certain criteria, which I discussed in my first post on this (at the first link in this post).

I think what this exercise put into relief is that there are certain problems that a dynamic language and a statically typed language are equally good at solving. What I’ll be exploring the next time I feel inspired to do this is if there are problems Smalltalk solves better than C# can.

Edit 9/29/2015: Since writing this, I lost interest in doing such comparisons. I think the reason is I realized I’m not skilled at doing this sort of thing. I had the idea at the time that I wanted to somehow apply Smalltalk to business cases. I’m less interested in doing such things now. I’ve been applying myself toward studying language/VM architecture. Perhaps at some point I’ll revisit issues like this.

C# (1.x) and Smalltalk case study #1

As promised, here is the first case study I did comparing the “stylings” of C# and Smalltalk. I used C# 1.0 code on the .Net CLR 1.0/1.1. I set up a programming problem for myself, creating a business object, and looked at how well each language handled it. I was inspired to do this by Hacknot’s “Invasion of the Dynamic Language Weenies,” (it has to be viewed as a series of slides) where he said there was no discernible difference in lines of code used in a static vs. a dynamic language, nor were there any advantages in programmer productivity (my response to his article is here). I figured I’d give it my own shot and see if he was right. Up to that point I hadn’t bothered to question my impressions.

What follows is the programming problem I set up for myself:

Create a program that records name, home address, work phone, home phone, salary, department, and employment class of employees. The program must store this information in some fashion (like a data structure).

On input, it must validate the department and employment class of employees when their information is entered. The departments are as follows:

– Manufacturing
– Sales
– Accounting
– Board

The employment classes are as follows:

– Employee
– MidLevel <- Management designation
– Executive <- Management designation

Workers with “Employee” or “MidLevel” class designations can be placed in Manufacturing, Sales, or Accounting departments. They cannot be placed in the Board department. Executives can only be placed in the Board department.

The program will restrict access to the employee information by the following criteria. “all access” means all employment classes may see the information:

name – all access
home address – MidLevel and Executive only
work phone – all access
home phone – MidLevel and Executive only
salary – Executive only
department – all access
class – all access

No elaborate keys are necessary. For the purposes of this exercise using the labels “Employee”, “MidLevel”, and “Executive” as “access tokens” will be sufficient evidence for the information store that the user is of that employment class, and can have commensurate access to the data.
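To make the rules in the spec concrete, here is a rough sketch of the business object in Java. This is not the author’s C# 1.x or Smalltalk code–all class and method names here are my own invention, and the null-for-restricted-fields convention is just one way to model the access tokens:

```java
// Sketch of the Employee spec: placement validation plus field-level
// access keyed on the viewer's employment class ("access token").
enum Department { MANUFACTURING, SALES, ACCOUNTING, BOARD }
enum EmploymentClass { EMPLOYEE, MIDLEVEL, EXECUTIVE }

class EmployeeRecord {
    private final String name, homeAddress, workPhone, homePhone;
    private final double salary;
    private final Department department;
    private final EmploymentClass empClass;

    EmployeeRecord(String name, String homeAddress, String workPhone,
                   String homePhone, double salary,
                   Department department, EmploymentClass empClass) {
        // Placement rule: Executives may only be placed on the Board;
        // Employee and MidLevel workers may be placed anywhere but the Board.
        boolean isExecutive = (empClass == EmploymentClass.EXECUTIVE);
        boolean onBoard = (department == Department.BOARD);
        if (isExecutive != onBoard) {
            throw new IllegalArgumentException(
                empClass + " may not be placed in the " + department + " department");
        }
        this.name = name;
        this.homeAddress = homeAddress;
        this.workPhone = workPhone;
        this.homePhone = homePhone;
        this.salary = salary;
        this.department = department;
        this.empClass = empClass;
    }

    // Access control: restricted fields return null for viewers who
    // lack the required employment class.
    String getName(EmploymentClass viewer)      { return name; }      // all access
    String getWorkPhone(EmploymentClass viewer) { return workPhone; } // all access
    String getHomeAddress(EmploymentClass viewer) {
        return viewer == EmploymentClass.EMPLOYEE ? null : homeAddress; // MidLevel/Executive only
    }
    String getHomePhone(EmploymentClass viewer) {
        return viewer == EmploymentClass.EMPLOYEE ? null : homePhone;   // MidLevel/Executive only
    }
    Double getSalary(EmploymentClass viewer) {
        return viewer == EmploymentClass.EXECUTIVE ? salary : null;     // Executive only
    }
    Department getDepartment(EmploymentClass viewer)        { return department; } // all access
    EmploymentClass getEmploymentClass(EmploymentClass v)   { return empClass; }   // all access
}
```

Since the spec says the labels themselves are sufficient evidence of employment class, no real authentication is modeled here–the viewer just passes its own class in.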


Edit 10/18/2017: Here are the two versions of the project. I use DropBox to host the files. DropBox will say that the files cannot be previewed, but will display a blue “Download” button. Press that to download each file:

C#/.Net 1.x Employee test

Smalltalk Employee test

The C# zip file contains the test project I used.

The Smalltalk zip file contains the “file out” of the Employees-Test package I created, the test code (as a separate file), and the test output I got. It contains two sets of these files. The files in the main directory have been converted to use CR-LF for easy reading (by you) on PCs. They have the suffix “.txt”. The Squeak folder contains the files in their original format, which use LF as the end-of-line character. This will be human-readable on Unix-compatible systems, and within the Squeak environment. They have the suffix “.text” or “.st”.

For those not familiar with Squeak, the “file out” operation causes Squeak to create a single text file that contains the source code for a package, plus its metadata. A package is a collection of classes. The only code that a programmer would normally see in the Squeak environment are the class definitions, class comments, the method names and parameters, and the method code and comments. There’s some other code and comments that Squeak puts out when it does its file out operation. This is done so that when the code is read by Squeak (or some other Smalltalk system) in what’s called a “file in” operation, it can reconstitute the classes and their methods as they were intended. Normally code is accessed by the programmer strictly via a “code browser”, which is the equivalent of a developer IDE within the Squeak environment. All classes in the whole system, including this package, are accessible via a code browser. There’s no need to look up code via directories and filenames. Code is accessed via package name, category, class, and method.

The code that carries out the test for the Smalltalk version is included in a separate file in the zip, called “Employee test data.text” in the Squeak folder (“Employee test data.txt” if you want to display it on the screen on a PC). The test code for the C# version is included in main.cs in the C# project zip file.

I timed myself on each version, evaluated how many lines of code (LOC) it took to accomplish the task in each one, and I looked at code readability. I’ll comment on the other factors, but I’ll leave it up to you to judge which is more readable.

As I said earlier, the Smalltalk version came out slightly ahead in using fewer lines of code, despite the fact that a little more code had to be written for it to accomplish the effect you get with enumerations in C# (validating that a symbol–rough equivalent to an enumeration–is in the set of legal symbols). To keep things fair between them I did not count comments, or lines of code taken by curly (C#) or square (Smalltalk) braces. I also did not count test code, just the code for the business object and helper classes. Within these parameters I counted every line that was broken by an end of line character.
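The counting rule above can be sketched as a small heuristic. This is my own illustration, not the method or script the author actually used–in particular, treating lines that start with “//” or a double quote as comment-only lines is a simplification for C# and Smalltalk respectively:

```java
import java.util.List;

// Heuristic LOC counter: skip blank lines, comment-only lines, and lines
// consisting solely of curly or square braces; count everything else.
public class LocCounter {
    public static long countLoc(List<String> lines) {
        return lines.stream()
            .map(String::trim)
            .filter(s -> !s.isEmpty())                               // skip blank lines
            .filter(s -> !s.startsWith("//") && !s.startsWith("\"")) // skip comment-only lines
            .filter(s -> !s.matches("[{}\\[\\]]+"))                  // skip brace/bracket-only lines
            .count();
    }
}
```

A real counter would also have to handle block comments and braces that share a line with code, which this sketch glosses over.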

I wrote the C# version first. It came to 75 LOC, and took me 125 minutes to code, debug, and test.

The Smalltalk version came to 70 LOC, and took me 85 minutes to code, debug, and test. Another way of looking at it is 93% the LOC of the C# version, but I wrote it in 68% of the time.

A possible factor in the shorter time it took with the Smalltalk version is I wrote the solution first in C#. I can’t deny that doing it once already probably factored into the shorter time. I made a point, however, of not looking at the C# code at all while I was writing the Smalltalk version, and I wrote them on different days. Though I didn’t break it down, my guess is that debugging the code may have taken longer in the C# version. Since I haven’t worked in .Net for a while I’m a bit rusty. Despite having worked out the basic scheme of how to do it the first time, I was writing it in a different language. So I don’t chalk it up to me just repeating what I had already written.

This is my first crack at doing this. I hope to present more test cases in the future. Since I consider Smalltalk worth using for real-world work, I think it should be evaluated seriously, which means taking measurements like this. Though this isn’t scientific, I’m doing this for myself, to check my own perceptions. I just thought I’d share it with the reading audience as well. Something I’ll try in the future is writing a test case in Smalltalk first, and then in C#, and I’ll see if that has an impact on how much time it takes me with each. I may also do future C# test cases in C#/.Net 2.0, just to be more current.

If you’d like to try out my sample projects:

I explain below how to run the Smalltalk and C# tests. Smalltalk first.

I wrote and tested my Smalltalk code using Squeak 3.8.1, from Ramon Leon’s site, onsmalltalk.com. I’ve tested it on the standard Squeak 3.8 version as well (which I got from squeak.org), and it works. Ramon has updated his public version of Squeak to 3.9, which still works with my Employee test package. So you can use any of these versions.

The best thing to do would be to unzip the Employee test files into the same directory as where you installed Squeak. My instructions below will assume you have done this.

Here’s a little Squeak primer. For those who are new, you’ll need this to try out the Employee test. These instructions will work on PCs and Unix/Linux machines. For Mac owners, I’m not sure how this would work for you. My guess is clicking the single mouse button would be equivalent to left-clicking.

When you run Squeak you will see a new desktop interface appear. It’s inside an application window frame supplied by your operating system. As long as your mouse stays inside this window, and the window has focus, most of your mouse and keyboard actions will be directed at objects and activities in the Squeak desktop space.

What follows are instructions for helper applications which you’ll be asked to open in the Employee test instructions (below), to load the necessary files, and execute the test.

File list – This is kind of like Windows Explorer. You access it by left-clicking on the Squeak desktop. This brings up what’s called the “World menu”. Within this menu select “Open…”. A second menu will appear. From it, select “file list”. You will see a new window open, which is the File list app. It has a top row containing a text box on the left, where filter expressions go, followed by a series of buttons. Below this are 3 panes. The middle-left pane is the directory tree of the drive you’re running Squeak on. The middle-right pane is a listing of all the files in the currently selected directory. The lower, 3rd pane contains the text of whatever file is selected in the middle-right pane.

Workspace window – This gives you a text workspace to work in. You can work with code, and assign variable values within it, and it will keep them persistent for you (the variables won’t disappear when you’re done executing a task), for as long as the window is kept on the desktop. You bring up a Workspace by clicking on the right tab on the Squeak desktop (called “Tools”). You should then find the Workspace icon, and click and drag it onto the Squeak desktop.

Transcript window – This is like an active log viewer. It’s basically output-only. You carry out your tasks in the Workspace window, but the output (for this test) goes to the Transcript window. You bring up a Transcript window by clicking on the “Tools” tab. You should then find the Transcript icon, and click and drag it onto the Squeak desktop.

DoIt (action) – This is how you interactively evaluate an expression in Squeak. It’s a universal command. Anywhere, in any window on the Squeak desktop, where you have legal Smalltalk code, you can select it with the mouse, just as you would with text in your native operating system, and hit Alt-D. This will evaluate the expression, carrying out the instruction(s) you selected with the mouse.

Running the Smalltalk Employee test

Run Squeak, and open the File list. Go to the middle-left pane and click on the small triangle to the left of the currently selected directory’s name (this is the Squeak directory). This will expand its directory tree. Select the Squeak sub-folder (this came from the zip file), and select “Employees-Test.st” in the middle-right pane. You will see a button appear in the top row of the window that says “filein”. Click on this button. This will load the Employees-Test package into Squeak.

Click on “Employee test data.text” in the middle-right pane of File list. Click on the lower pane, where the text of this file is displayed, then press Alt-A, and then press Alt-C. Open a Workspace window from the “Tools” tab, and then press Alt-V. (Note that many of the keyboard commands are the same as on Windows, but they use Alt rather than Ctrl).

You should also open a Transcript window (from the “Tools” tab).

Select the Workspace window. Select and evaluate every line of code in the Workspace. You can do this by pressing Alt-A, and then pressing Alt-D. This will “select all” of the text in the window, and evaluate it, which will initialize the collection of Employee objects, and run the test. Look at the Transcript window. That’s where the output will be.

The employee class symbol (in the last line of the Workspace) is set to #Executive. This will output all of the input data to the Transcript window. You can change the employee class symbol (to #Employee or #MidLevel) to get different results. All you need to do hereafter is select and evaluate the very last line of code in the Workspace to rerun the test.

Running the C# Employee test

Unzip the project files into a directory, load the project into Visual Studio.Net (2002 or 2003), build the project, and run it. The output will show up in a console window. The test code is initialized to use the Employee.zClass.EXECUTIVE enumeration. This will output all input data. You can change the employee class enumeration passed in to emp.PrintUsingViewerToken() on line 178 of main.cs (to Employee.zClass.EMPLOYEE or Employee.zClass.MIDLEVEL) to get different results.

Edit 5/16/07: I’ve made corrections to the C# version. See my post here.