Apple changes its iOS developer terms again, allowing Squeak apps.

A few months ago Apple made some controversial changes to its App Store developer terms, which were seen as overly restrictive. The scuttlebutt was that the changes were aimed at banning Adobe Flash from the iPhone, since Apple had already said that Flash was not going to be included, nor allowed on the iPad. The developer terms said that all apps which could be downloaded through the App Store had to be originally written only in C, C++, Objective-C, or JavaScript compatible with Apple’s WebKit engine. No cross-compiled code, private libraries, translation layers, or compatibility layers were allowed. Apple claimed it made these changes to increase the stability and security of the iOS environment for users. I considered this a somewhat dubious claim, given that they were allowing C and C++. We all know the myriad security issues that Microsoft has had to deal with as a result of using C and C++ in Windows, and in the applications that run on it. A side effect I noticed was that the terms also banned Squeak apps, since Squeak’s VM source code is written in Smalltalk and is translated to C for cross-compilation. In addition, any apps written in it are (typically) written in Smalltalk themselves, and are executed by the Squeak VM, which would be considered a translation layer. The reason this was relevant was that someone had ported Squeak to the iPhone a couple of years ago, and had developed several apps in it.

The Weekly Squeak revealed today that Apple has made changes to its App Store terms, and they just so happen to allow Squeak apps. As Daring Fireball has reported, Apple has removed all programming language restrictions. They have even removed the ban on “intermediary translation or compatibility layers”. The one caveat is they do not allow App Store apps to download code. So if you’re using an interpreter in your app, the interpreter and all of the code that will execute on it must be included in the package. (Update 9-12-10: Justin James pointed out that the one exception to this rule is JavaScript code which is downloaded and run by the WebKit engine.) This still restricts Squeak some, because it would disallow users or the app from using something like SqueakMap or SqueakSource as a source for downloading code into a Squeak image, but it allows the typical stand-alone application case to work.

John Gruber, the author of Daring Fireball, speculates that these new rules could allow developers to use Adobe’s Flash cross-compiler, which Adobe had scuttled when Apple imposed the previous restrictions. John said, “If you can produce a binary that complies with the guidelines, how you produced it doesn’t matter.” Sounds right to me.

However, looking over the other terms that John excerpts from the license agreement gives me the impression that Apple still hasn’t figured out what it will allow, and what it won’t allow, in the future. It has this capricious attitude of, “Just be cool, bro.” So things could still change. That’s what would be disappointing to me about this if I were an iPhone developer right now. I got the impression when the last license terms came out that Apple hadn’t really thought through what it was doing. While I get a better impression of the recent changes, I still have a sense that they haven’t thought everything through. To me the question is, why? I guess it’s like what Tom R. Halfhill once told me: that Steve Jobs never understood developers, even back in the days when the company was young. Steve Wozniak was the resident “master developer” in those days, and he had Jobs’s ear. Once Wozniak left Apple in the 1980s, that influence was gone.

A couple updates from the Smalltalk world

I was going through my list of links for this blog (some call it a “blogroll”) and I came upon a couple items that might be of interest.

The first is that Dolphin Smalltalk appears to be getting back on its feet again. You can check it out at Object Arts. I reported three years ago that it was being discontinued, so I’m updating the record on that. Object Arts says it’s working with Lesser Software to produce an updated commercial version of Dolphin that will run on top of Lesser’s Smalltalk VM. According to the Object Arts website the product is still in development, and there’s no release date yet.

The other item: since last year I’ve been hearing about a branch-off of Squeak called Pharo. According to its description, it’s a version of Squeak designed specifically for professional developers. Even though people have had the impression that the squeak.org release was also meant for professional developers, from what I’ve read there were some things the Pharo team felt were getting in the way of making Squeak a better professional development tool, chiefly the EToys package, which has caused some consternation. EToys was stripped out of Pharo.

There’s a book out now called “Pharo by Example”, written by the same people who wrote “Squeak by Example”. Just from perusing the two books, they look similar. There were a couple differences I picked out.

The PbE book says that Pharo, unlike Squeak, is 100% open source. There’s been talk for some time now that while Squeak is mostly open source, there has been some code in it that was written under non-open source licenses. In the 2007-2008 time frame I had been hearing that efforts were under way to make it open source. I stopped keeping track of Squeak about a year ago, but last I checked this issue hadn’t been resolved. The Pharo team rewrote the non-open source code after they forked from the Squeak project, and I think they said that all code in the Pharo release is under a uniform license.

The second difference was that they had changed some of the fundamental architecture of how objects operate. If you’re an application developer I imagine you won’t notice a difference. Where you would notice it is at the meta-class/meta-object level.

Other than that, it’s the same Squeak, as best I can tell. According to what I’ve read Pharo is compatible with the Seaside web framework.

An introduction to the power of Smalltalk

I’m changing the subject some, but I couldn’t resist talking about this, because I read a neat thing in the PbE book. I imagine it’s in SbE as well. Coming from the .Net world, I had gotten used to the idea of “setters” and “getters” for class properties. When I first started looking at Squeak, I downloaded Ramon Leon’s Squeak image. I may have seen this in a screencast he produced. I found out there was a modification to the browser in his image that I could use to have it set up default “setters” and “getters” for my class’s variables automatically. I thought this was neat, and I imagine other IDEs (like Eclipse) already had such a feature. I used that feature for a bit, and it was a good time-saver.

PbE revealed that there’s a way to have your class set up its own “setters” and “getters”. You don’t even need a browser tool to do it for you. You just use the #doesNotUnderstand: message handler (also known as “DNU”), and Smalltalk’s ability to “compile on the fly” with a little code generation. Keep in mind that this happens at run time. It turns out that once you get the idea, it’s not that hard.

Assume you have a class called DynamicAccessors (though it can be any class). You add a message handler called “doesNotUnderstand:” to it:

DynamicAccessors>>doesNotUnderstand: aMessage
    | messageName |
    messageName := aMessage selector asString.
    (self class instVarNames includes: messageName)
        ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
                 ^aMessage sendTo: self].
    ^super doesNotUnderstand: aMessage

This code traps any message sent to a DynamicAccessors instance for which no method currently exists. It extracts the name of the method being called, checks whether the class (DynamicAccessors) has an instance variable by the same name, and if so, compiles a method by that name with a little boilerplate code that just returns the variable’s value. Once the method is created, it resends the original message to itself, so that the now-compiled accessor can return the value. If no variable matches the message name, it falls through to the superclass’s doesNotUnderstand: method, which will typically activate the debugger, halting the program and notifying the programmer that the class “doesn’t understand this message.”

Assuming that DynamicAccessors has a member variable “x”, but no “getter”, it can be accessed by:

myDA := DynamicAccessors new.
someValue := myDA x

If you want to set up “setters” as well, you could add a little code to the doesNotUnderstand method that looks for a parameter value being passed along with the message, and then compiles a default method for that.
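Here’s a minimal sketch of how that might look, extending the getter example above. The argument name aValue and the exact structure are my own illustration, not something taken from PbE:

DynamicAccessors>>doesNotUnderstand: aMessage
    "Sketch: compile getters and setters on demand (my own extension of the PbE getter example)."
    | messageName varName |
    messageName := aMessage selector asString.
    (self class instVarNames includes: messageName)
        ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
                 ^aMessage sendTo: self].
    (messageName last = $:)
        ifTrue: [varName := messageName allButLast.
                 (self class instVarNames includes: varName)
                     ifTrue: [self class compile: messageName, ' aValue', String cr,
                                  ' ', varName, ' := aValue'.
                              ^aMessage sendTo: self]].
    ^super doesNotUnderstand: aMessage

With that in place, sending something like myDA x: 42 would compile an “x:” setter the first time it’s used, just as the getter case does.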

Of course, one might want some member variables protected from external access and/or modification. I think that could be accomplished with a variable naming convention, or some other convention, such as a collection of member variable names along with a notation telling the class how each variable may be accessed. The above code could follow those rules, allowing access to some internal values and not others. A thought I had is that you could set this up in a subclass of Object, and then derive your own classes from that. That way the behavior applies to any classes you choose to have it apply to (the rest can just derive from Object directly).
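As one illustration of the collection idea (the method name privateInstVarNames below is purely my own invention, just to show the shape of the convention):

DynamicAccessors class>>privateInstVarNames
    "Class-side list of variables that should never get auto-generated accessors (illustrative convention)."
    ^ #('secretKey' 'cache')

The compile step in doesNotUnderstand: would then only run when (self class instVarNames includes: messageName) and: [(self class privateInstVarNames includes: messageName) not] holds.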

Once an accessor is compiled, the above code will not be executed for it again, because Smalltalk will know that the accessor exists, and will just forward the message to it. You can go in and modify the method’s code however you want in a browser as well. It’s as good as if you created the accessor yourself.

Edit 5-6-2010: I heard about this recently: Squeak 4.1 has been released. From what I’ve read on The Weekly Squeak, Squeak has been 100% open source since version 4.0 (an issue I discussed earlier in this article in relation to Pharo). Version 4.1 features some “touch up” work that sounds like it makes it nicer to use; the description says it includes some first-time user features and a cleaner visual interface.

My journey, Part 6

See Part 1, Part 2, Part 3, Part 4, Part 5

What happened to my dreams?

I was working for a company as a contractor a few years ago, and the job was turning sour. I quit the job several months later. While I was working I started to wonder why I had gotten into this field. I didn’t ask this question in a cynical way. I mean, I wondered why I was so passionate about it in the first place. I knew there was a good reason. I wanted that back. I clearly remembered a time when I was excited about working with computers, but why? What did I imagine it would be like down the road? Maybe it was just a foolish dream, and what I had experienced was the reality.

I started reviewing stuff I had looked at when I was a teen, and reminiscing some (this is before I started this blog). I was trying to put myself back in that place where I felt like computing had an exciting, promising future.

Coming full circle

One day while reading a blog I frequented, I stumbled upon a line of inquiry that grabbed my interest. I pursued it on the internet, and got more and more engrossed. I was delving back into computer science, and realizing that it really was interesting to me.

When I first entered college I wasn’t sure that CS was for me, but I found it interesting enough to continue with it. For the most part I felt like I got through it, rather than engrossing myself in it by choice, though it had a few really interesting moments.

The first article I came upon that really captured my imagination was Paul Graham’s essay, “Beating the Averages”. I had never seen anyone talk about programming this way before. The essay was about Viaweb, a company he and a business partner developed and sold to Yahoo!; how they wrote its web application in Lisp; and how that was their competitive advantage over their rivals. It convinced me to give Lisp a second chance.

I decided to start this blog. I came up with the name “Tekkie” by thinking of two words: techie and trekkie. A desire had been building in me to write my thoughts somewhere about technology issues, particularly around .Net development and Windows technical issues. That was my original intent when I started it, but I was going through a transformative time. My new interest was too nascent for me to see it for what it was.

I dug out my old Lisp assignments from college. I still had them. I also pulled out my old Smalltalk assignments. They were in the same batch. Over the course of a few weeks I committed myself to learning a little Common Lisp and solving one of the Lisp problems that once gave me fits. I succeeded! For some reason it felt good, like I had conquered an old demon.

Next, I came upon Ruby via a podcast. A guest on the show referenced an online Ruby tutorial. I gave it a try and thoroughly enjoyed it. I had had a taste of programming with dynamic languages (when I actually had some good material to learn from), and I liked it. I wondered if I could find work developing web apps with one. I watched an impressive podcast on Rails, and so decided to learn the language and try my hand at it.

I had a conversation with an old college friend about Lisp and Ruby. He had been trying to convince me for a few years to give Ruby a try. I told him that Ruby reminded me of Smalltalk, and I was interested in seeing what the full Smalltalk system looked like, since I had never seen it. He told me that Squeak was the modern version, and it was free and openly available.

As I was going along, continuing to learn about Ruby and Rails, I was discovering online video services, which were a new thing at the time: YouTube and Google Video. I had always wanted to see the event where Steve Jobs introduced the first Macintosh, in full. I had seen a bit of it in Triumph of the Nerds. I found it on Google Video. Great! I think one of the “related” videos it showed me was on the Smalltalk system at Xerox PARC. I watched that with excitement. Finally I was going to see this thing! I think one of the “related” videos there was a presentation by Alan Kay. The name sounded familiar. I watched that, and then a few more of his presentations. It gradually dawned on me that this was the “mystery man” I had seen on that local cable channel back in 1997!

I had heard of Kay years ago when I was a teenager. His name would pop up from time to time in computer magazine articles (no pictures though), but nothing he said then made an impression on me. I remember using a piece of software he had written for the Macintosh classic, but I can’t for the life of me remember what it was.

I had heard about the Dynabook within a few years of when I started programming, but the concept of it was very elusive to me. It was always described as “the predecessor to the personal computer” or something like that. For years I thought it had been a real product. I wondered, “Where are these Dynabooks?” Back in the 1980s I remember watching an episode of The Computer Chronicles and some inventor came on with an early portable Mac clone he called the “Dynamac”. I thought maybe that had something to do with it…

“What have I been doing with my life?”

So I watched a few of Kay’s presentations. Most of them were titled The Computer Revolution Hasn’t Happened Yet. I guess he was on a speaking tour with this subject. He had given them all between 2003 and 2005. And then I came upon the one he did for ETech 2003:

This blew me away. By the end of it I felt devastated. There was a subtle, electric feeling running through me, and at the same time a feeling of my own insignificance. I just sat in stunned silence for a moment. What I had seen and heard was so beautiful. In a way it was like my fondest hopes that I used to dare not hope had come true. I saw Kay demonstrate Squeak, and I saw the meaning of what he was doing with it. I saw him demonstrate Croquet, and it reminded me of a prediction I made about 11 years ago about 3D user interfaces being created. It was so gratifying to see a prototype of that in the works. What amazed me is it was all written in Squeak (Smalltalk).

I remembered that when I was a teenager I wished that software development would work like what I saw with Squeak (EToys), that I could imagine what I wanted and make it come to life in front of me quickly. I did this through my programming efforts back then, but it was a hard, laborious process. Kay’s phrase rang in my mind: “Why isn’t all programming like this?”

I also realized that nothing I had done in the past could hold a candle to what I had just seen, and for some reason I really cared about that. I had seen other amazing feats of software done in the past, but I used to just move on, feeling it was out of my realm. I couldn’t shake this demo. About a week later I was talking with a friend about it and the words for that feeling from that moment came out: “What have I been doing with my life?” The next thing I felt was, “I’ve got to find out what this is!”

Alan Kay captured the experience well in “The Early History of Smalltalk”:

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results. [my emphasis in bold]

Kay’s presentation of the ideas in Sketchpad, and the work by Engelbart was also enlightening. Sketchpad was covered briefly in the textbook for a computer graphics course I took in college. It didn’t say much, just that it was the first interactive graphical environment that allowed the user to draw shapes with a light pen, and that it was the predecessor to CAD systems. Kay helped me realize through this presentation, and other materials he’s produced, that the meaning of Sketchpad was deeper than that. It was the first object-oriented system, and it had a profound influence on the design of the Smalltalk system.

I didn’t understand very much of what Kay said on the first viewing. I revisited it several times after having time to think about what I had seen, read some more, and discuss these ideas with other people. Each time I came back to it I understood what he meant a little more. Even today I get a little more out of it.

After having the experience of watching it the first time, I reflected on the fact that Kay had shown a clip of the very same “mouse demo” with Douglas Engelbart that I remembered seeing in The Machine That Changed The World. I thought, “I wonder if I can find the whole thing.” I did. I was seeing a wonderful post on modern computer history come together, and so I wrote one called, “Great moments in modern computer history”.

Reacquiring the dream

So with this blog I have documented my journey of discovery. The line of inquiry I followed has brought me what I sought earlier: to get back what made me excited to get into this field in the first place. It feels different, though. It’s not the same kind of adolescent excitement I used to have. Through this study I remembered my young, naive fantasies about positive societal change via computer technology, and I’ve been gratified that Alan Kay wants something similar. I still like the idea that we can change, and realize the computer’s full potential, and that those benefits I imagined earlier might still be possible. I have a little better sense now (emphasis on “little”) of the kind of work that will be required to bring that about.

Having experienced what I have in the business world, I now know that technology itself will not bring about that change. Reading Lewis Mumford helped me realize that. I found out about him thanks to Alan Kay’s reading list, and his admonition that technologists should read. The culture is shaping the computer, and in turn it is shaping us, and at least in the business world we think of computers the way our forebears thought of machinery in factories. Looking back on my experience in the work world, that was one of my disappointments.

About “The Machine That Changed The World”

The series I mentioned earlier, “The Machine That Changed The World”, showed that this transformation has happened in some ways, but I contend that it’s spotty. Alan Kay has pointed to two areas where promising progress has been made. The first is in the scientific community. He said several years ago that the computer revolution is happening in science today. He’s said that the computer has “revolutionized” science. The next place is in video games, though he hasn’t been that pleased with the content. What they have right is they create interactive worlds, and the designers are very cognizant of the players’ “flow”, making interaction with the simulated environment easy and as natural as is possible, considering current controller technology.

Looking back on this TV series now I can see its significance. It tried to blend a presentation of the important ideas that were discovered about computing with the reality of how computers were perceived and used. It showed the impact they had on our society, and how perceptions of them changed over time.

In the first episode, “Great Brains”, the narrator puts the emphasis on the computer being a new medium:

5,000 years ago mankind invented writing, a way to record and communicate ideas. These simple marks on clay and paper changed the world, becoming the cornerstone of our intellectual and commercial lives. Today we may be witnessing the emergence of a new medium whose influence may one day rival that of writing.

A modern computer fits on a desk, is affordable and simple enough for a child to play on. But what to the child is a computer game, is to the computer just patterns of voltages. Today we take it for granted that such patterns can help architects to draw, scientists to model complex phenomena, musicians to compose, and even aid scholars to search the literature of the past. Yet a machine with such powers of transformation is unlike any machine in history.

Computers don’t just do things, like other machines. They manipulate ideas. Computers conjure up artificial universes, and even allow people to experience them from the inside, exploring a new molecule, or walking through an unbuilt building.

Doron Swade of the London Science Museum adds (the first part of this quote was unintelligible. I’m making a guess that I think makes sense here):

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality. And it seems that that has a close affinity with the way minds and consciousness work. The mind manipulates processes, images, ideas, notions. Exactly how it does that is of course not yet known. But there seems to be a strong kindred relationship between the manner in which computers process information and the analogy that that has with the way minds, and thinking, and consciousness seem to work. So they have a very special place, because they’re the closest we have to a mind-like machine that we have yet had.

The final episode, “The World At Your Fingertips”, ends by bringing the series full circle, back to the idea that the computer is a new medium, in a way that I think is beautiful:

The computer is not a machine in the traditional sense. It is a new medium. Perhaps predicting the future of computers is as hard as it would have been to predict the consequences of the marks Sumerians made on clay 4,000 years ago.

Paul Ceruzzi, a computer historian, says:

It’s ironic when you look at the history of writing, to find that it began as a utilitarian method of helping people remember how much grain they had. From those very humble, utilitarian beginnings came poetry, and literature, and all the kinds of wonderful things that we associate with writing today. Now we bring ourselves up to the invention of the computer. The very same thing is happening. It was invented for a very utilitarian, prosaic purpose of doing calculations, relieving the tedium and drudgery of cranking out numbers for insurance companies, or something like that. And now we begin to see this huge culture that’s grown up of people who are discovering all the things you can do by playing with the computer. We may see a time in the not too distant future when people will look at the computer’s impact on society, and they’ll totally forget its humble beginnings as a calculating device, but rather as something that enriches their culture in undreamed of ways just as literature and art is perceived today in this world.

This TV series was made at a time before the internet became popular. The focus was on desktop computers and multimedia. As you can see there was a positive outlook towards what the computer could become. Since then I’m sad to say the computer has faded into the background in favor of a terminal metaphor, one that is not particularly good at being an authoring platform, though there are a few good efforts being made at changing that. What this TV series brought out for me is that we have actually taken some steps backwards with the form that the internet has taken.

Perhaps I am closing a chapter with this series of posts. I’ve been meaning to move on from just talking about the ideas I’ve been thinking about, to something more concrete. What form that will take I’m not sure yet. I’ll keep writing here about it.

Update on Seaside hosting

I deleted a post I wrote in the past on Seaside hosting, since it was out of date. I brought one I wrote on June 10, 2007 up to date with new information, and gave it a current posting date. There are 4 comments you’ll see on it that are dated from around June 10 last year.

=========================

It looks like Seaside Parasol, the Canadian commercial Seaside hosting service set up by Chris Cunnington, is gone. Too bad. It would’ve been nice to see that take off. So we’re back to just having Seaside-Hosting, a free service based in Switzerland. They offer free hosting for non-commercial applications. This service does not host databases like MySQL or PostgreSQL. You can, however, store data inside your Squeak image. You can also store external files on the server filesystem, using them as support for your site (like graphics images), though they don’t want you using it for hosting files for download. From what I’ve heard, the people who set up Seaside-Hosting will try to accommodate you for a fee if you have requirements that go beyond what they typically offer. There’s an e-mail address you can use to contact them for this purpose on their home page.

A new feature they’ve added since I last looked is you can now set up your own domain name (using a domain registration service) and use that instead of their seasidehosting.st domain.

There are a couple of screencasts here and here by Lukas Renggli, on how to register and use the features of the Seaside-Hosting service.

Where can I get Seaside?

There are a couple Seaside developers who have created their own preconfigured Squeak images you can use:

  • Ramon Leon
  • Damien Cassou’s “Squeak web” image. Look for “squeak-web” on this page. There you will find the image, and information on how to use it.

If you go to Ramon’s page, you get the Squeak VM and his preconfigured image in one Zip file. Ramon’s image contains a bunch of extras. I’ve used his image in the past, though it’s been a while, so some of this may have changed. In addition to Seaside it may contain Monticello (a version control system for Squeak), Magritte (a meta-language in Smalltalk for creating forms, reports, and graphs, which works with Seaside), Glorp (an object-relational framework for interfacing with an external database), SUnit (a unit testing framework), Albatross (a web app testing framework, like Watir in Ruby), Mondrian (an app that shows you graphically how classes are related, and their properties), Toothpick (a configurable logging library), SOAP-Client (for interfacing with XML web services, including .Net), Scriptaculous (an AJAX framework for creating slick web UIs), VB-Regex (a regular expression pattern matching library), and some XML parsers/frameworks, among other things.

Damien Cassou’s image contains packages for syntax highlighting, code completion, different code browsers, Seaside, AIDA/Web (another web framework/application server), Magritte, and Pier (a content management system).

One thing about Ramon’s image is it has a few customizations for running on Windows. I think it’ll run on other platforms okay, but expect to get some error messages when you first load it up.

Storing data in the image

Since Seaside-Hosting does not offer database hosting, if you want to be able to store data, you should do it within the image. One possibility I’ve mentioned before is MinneStore, an object-oriented database written entirely in Smalltalk. I’m not sure how it works. It might just store data inside the image.

A simple way of storing read-only data is to create a class, and then create a class variable within it (equivalent to a static variable inside a class in languages like C++, C#, and Java). You can create your own data structure and put it in this variable using “class side” methods (again, equivalent to static methods in traditional languages). It will remain persistent. You do not need to instantiate an object to access it. It is globally available. If you want the data structure to be updateable and accessible from multiple Seaside sessions, then you’re going to have to use semaphores to make sure operations on the data structure are atomic. That’s when it can get hairy.
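Here’s a rough sketch of what that could look like, assuming nothing beyond base Squeak. The class name SiteData and the variable names Entries and AccessLock are just examples of mine:

Object subclass: #SiteData
    instanceVariableNames: ''
    classVariableNames: 'AccessLock Entries'
    poolDictionaries: ''
    category: 'MyApp-Data'

SiteData class>>entries
    "Lazily initialize and answer the shared collection. It is globally accessible and persists with the image."
    ^ Entries ifNil: [Entries := OrderedCollection new]

SiteData class>>addEntry: anObject
    "Guard updates with a semaphore so concurrent Seaside sessions don't interleave operations."
    AccessLock ifNil: [AccessLock := Semaphore forMutualExclusion].
    AccessLock critical: [self entries add: anObject]

Reads can just go through SiteData entries from anywhere in your Seaside components; only the mutating operations need to go through the critical: block.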

Another solution is SandstoneDB, a Smalltalk persistence framework Ramon Leon developed recently. It’s designed to make object persistence as easy as possible, getting around the limitations of many object databases.

The computer as medium

In my guest post on Paul Murphy’s blog, “The PC vision was lost from the get go,” I spoke to the concept, which Alan Kay has had going back to the 1970s, that the personal computer is a new medium, like the book at the time the technology of the printing press was brought to Europe, around 1439 (I also spoke some about this in “Reminiscing, Part 6”). Kay came to this realization upon witnessing Seymour Papert’s Logo system being used with children. More recently Kay has spoken, with 20/20 hindsight, about how people have historically missed what’s powerful about computing: like the early users of the printing press, we’ve been automating and reproducing old media onto the new medium. We’re even automating old processes with it that were meant for an era that’s gone.

Kay spoke about the evolution of thought about the power of the printing press in one or two of his speeches entitled The Computer Revolution Hasn’t Happened Yet. In them he said that after Gutenberg brought the technology of the printing press to Europe, the first use found for it was to automate the process of copying books. Before the printing press books were copied by hand. It was a laborious process, and it made books expensive. Only the wealthy could afford them. In a documentary mini-series that came out around 1992 called “The Machine That Changed The World,” I remember an episode called “The Paperback Computer.” It said that there were such things as libraries, going back hundreds of years, but that all of the books were chained to their shelves. Books were made available to the public, but people had to read the books at the library. They could not check them out as we do now, because they were too valuable. Likewise today, with some exceptions to promote mobility, we “chain” computers to desks or some other anchored surface to secure them, because they’re too valuable.

Kay has said in his recent speeches that there were a few rare people during the early years of the printing press who saw its potential as a new emerging medium. Most of the people who knew about it at the time did not see this. They only saw it as, “Oh good! Remember how we used to have to copy the Bible by hand? Now we can print hundreds of them for a fraction of the cost.” They didn’t see it as an avenue for thinking new ideas. They saw it as a labor saving device for doing what they had been doing for hundreds of years. This view of the printing press predominated for more than 100 years still. Eventually a generation grew up not knowing the old toils of copying books by hand. They saw that with the printing press’s ability to disseminate information and narratives widely, it could be a powerful new tool for sharing ideas and arguments. Once literacy began to spread, what flowed from that was the revolution of democracy. People literally changed how they thought. Kay said that before this time people appealed to authority figures to find out what was true and what they should do, whether they be the king, the pope, etc. When the power of the printing press was realized, people began appealing instead to rational argument as the authority. It was this crucial step that made democracy possible. This alone did not do the trick. There were other factors at play as well, but this I think was a fundamental first step.

Kay has believed for years that the computer is a powerful new medium, but in order for its power to be realized we have to perceive it in a way that enables it to be powerful to us. If we see it only as a way to automate old media (text, graphics, animation, audio, video) and old processes (data processing, filing, etc.), then we aren’t getting it. Yes, automating old media and processes enables powerful things to happen in our society via efficiency. It further democratizes old media and modes of thought, but it only addresses the tip of the iceberg. This brings the title of Alan Kay’s speeches into clear focus: The computer revolution hasn’t happened yet.

Below is a talk Alan Kay gave at TED (Technology, Entertainment, Design) in 2007, which I think gives some good background on what he would like to see this new medium address:

“A man must learn on this principle, that he is far removed from the truth” – Democritus

Squeak in and of itself will not automatically get you smarter students. Technology does not really change minds. The power of EToys comes from an educational approach that promotes exploration, called constructivism. Squeak/EToys creates a “medium to think with.” What the documentary “Squeakers” makes clear is that EToys is a tool, like a lever, that makes this approach more powerful, because it enables math and science to be taught better using this technique. (Update 10/12/08: I should add that whenever the nature of Squeak is brought up in discussion, Alan Kay says that it’s more like an instrument, one with which you can “mess around” and “play,” or produce serious art. I wrote about this discussion that took place a couple of years ago, and said that we often don’t associate “power” with instruments, because we think of them as elegant but fragile. Perhaps I just don’t understand at this point. I see Squeak as powerful, but I still don’t think of an instrument as “powerful”. Hence my use of the term “tool” in this context.)

From what I’ve read in the past, constructivism has gotten a bad reputation, I think primarily because it’s fallen prey to ideologies. The goal of constructivism as Kay has used it is not total discovery-based learning, where you just tell the kids, with no guidance, “Okay, go do something and see what you find out.” What this video shows is that teachers who use this method lead students to certain subjects, give them some things to work with within the subject domain, things they can explore, and then set them loose to discover something about them. The idea is that by the act of discovery through experimentation (i.e., play) the child learns concepts better than if they are spoon-fed the information. There is guidance from the teacher, but the teacher does not lead them down the garden path to the answer. The children do some of the work to discover the answers themselves, once a focus has been established. And the answer is not just “the right answer,” as is often called for in traditional education, but what the student learned and how the student thought in order to get it.

Learning to learn; learning to think; learning the critical concepts that have gotten us to this point in our civilization is what education should be about. Understanding is just as important as the result that flows from it. I know this is all easier said than done with the current state of affairs, but it helps to have ideals that are held up as goals. Otherwise what will motivate us to improve?

What Kay thinks, and is convinced by the results he’s seen, is that the computer can enable children of young ages to grasp concepts that would be impossible for them to get otherwise. This keys right into a philosophy of computing that J.C.R. Licklider pioneered in the 1960s: human-computer symbiosis (“man-computer symbiosis,” as he called it). Through a “coupling” of humans and computers, the human mind can think about ideas it had heretofore not been able to think. The philosophers of symbiosis see our world becoming ever more complex, so much so that we are at risk of it becoming incomprehensible and getting away from us. I personally have seen evidence of that in the last several years, particularly because of the spread of computers in our society and around the world. The linchpin of this philosophy is, as Kay has said recently, “The human mind does not scale.” Computers have the power to make this complexity comprehensible. Kay has said that the reason the computer has this power is it’s the first technology humans have developed that is like the human mind.

Expanding the idea

Kay has been focused on using this idea to “amp up” education, to help children understand math and science concepts sooner than they would in the traditional education system. But this concept is not limited to children and education. This is a concept that I think needs to spread to computing for teenagers and adults. I believe it should expand beyond the borders of education, to business computing, and the wider society. Kay is doing the work of trying to “incubate” this kind of culture in young students, which is the right place to start.

In the business computing realm, if this is going to happen, we are going to have to view business in the presence of computers differently. I believe we will have to literally think of our computers as simulators of “business models.” I don’t think the current definition of “business model” (a business plan) really fits what I’m talking about, and I don’t want to confuse people. I’m thinking along the lines of schemas and entities forming relationships which are dynamic and therefore late-bound, but with an allowance for policy to govern what can change and how, with the end goal of helping business be more fluid and adaptive. Tying it all together, I would like to see a computing system that enables a business to form its own computing language and terminology for specifying these structures, so that as the business grows it can develop “literature” about itself, which can be used both by people who are steeped in the company’s history and current practices and by those who are new to the company and trying to learn about it.

What this requires is computing (some would say “informatics”) literacy on the part of the participants. We are a far cry from that today. There are millions of people who know how to program at some level, but the vast majority of people still do not. We are in the “Middle Ages” of IT. Alan Kay said that Smalltalk, when it was invented in the 1970s, was akin to Gothic architecture. As old as that sounds, it’s more advanced than what a lot of us are using today. We programmers, in some cases, are like the ancient pyramid builders. In others, we’re like the scribes of old.

This powerful idea of computing, that it is a medium, should come to be the norm for the majority of our society. I don’t know how yet, but if Kay is right that the computer is truly a new medium, then it should one day become as universal and influential as books, magazines, and newspapers have been historically.

In my “Reminiscing” post I referred to above, I talked about the fact that even though we appeal more now to rational argument than we did hundreds of years ago, we still get information we trust from authorities (called experts). I said that what I think Kay would like to see happen is that people will use this powerful medium to take information about some phenomenon that’s happening, form a model of it, and by watching it play out, inform themselves about it. Rather than appealing to experts, they can understand what the experts see, but see it for themselves. By this I mean that they can manipulate the model to play out other scenarios that they see as relevant. This could be done in a collaborative environment so that models could be checked against each other, and most importantly, the models can be checked against the real world. What I said, though, is that this would require a different concept of what it means to be literate; a different model of education, and research.

This is all years down the road, probably decades. The evolution of computing moves slowly in our society. Our methods of education haven’t changed much in 100 years. The truth is the future up to a certain point has already been invented, and continues to be invented, but most are not perceptive enough to understand that, and “old ways die hard,” as the saying goes. Alan Kay once told me that “the greatest ideas can be written in the sky” and people still won’t understand, nor adopt them. It’s only the poor ideas that get copied readily.

I recently read that the Squeakland site has been updated (it looks beautiful!), and that a new version of the Squeakland version of Squeak has been released on it. They are now just calling it “EToys,” and they’ve dropped the Squeak name. Squeak.org is still up and running, and they are still making their own releases of Squeak. As I’ve said earlier, the Squeakland version is configured for educational purposes. The squeak.org version is primarily used by professional Smalltalk developers. Last I checked it still has a version of EToys on it, too.

Edit: As I was writing this post I went searching for material for my “programmers” and “scribes” reference. I came upon one of Chris Crawford‘s essays. I skimmed it when I wrote this post, but I reread it later, and it’s amazing! (Update 11/15/2012: I had a link to it, but it’s broken, and I can’t find the essay anymore.) It caused me to reconsider my statement that we are in the “Middle Ages” of IT. Perhaps we’re at a more primitive point than that. It adds another dimension to what I say here about the computer as medium, but it also expounds on what programming brings to the table culturally.

Here is an excerpt from Crawford’s essay. It’s powerful because it surveys the whole scene:

So here we have in programming a new language, a new form of writing, that supports a new way of thinking. We should therefore expect it to enable a dramatic new view of the universe. But before we get carried away with wild notions of a new Western civilization, a latter-day Athens with modern Platos and Aristotles, we need to recognize that we lack one of the crucial factors in the original Greek efflorescence: an alphabet. Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain — after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don’t have a damn thing to say to the world — they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglyphics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharaohs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones — and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

You can see the same thing in the output of the current generation of programmers, especially in the field of computer games. It’s lifeless. Sure, their stuff is technically very good, but it’s like the Egyptian statuary: technically very impressive, but the faces stare blankly, whereas Greek statuary ripples with the power of life.

What we need is a means of democratizing programming, of taking it out of the soulless hands of the programmers and putting it into the hands of a wider range of talents.

Related post: The necessary ingredients for computer science

—Mark Miller, https://tekkie.wordpress.com