
Archive for the ‘Programming’ Category

In my time on Quora.com, I’ve answered a bunch of questions on object-oriented programming (OOP). In my answers, I’ve tried to hew as closely as I can to what I’ve understood of Alan Kay’s older notion of OOP from Smalltalk. The theme of most of them is, “What you think is OOP is not OOP.” I did this partly because it’s what’s most familiar to me, and partly because writing about it helped me think more clearly about this older notion. I hoped that other Quora readers would be jostled out of their complacency about OO architecture, and it seems I succeeded with a few of them. It recently occurred to me that I should share some of my answers on this topic with readers here, since they are more specific descriptions than what I’ve shared previously. I’ve linked to these answers below (they are titled as the questions to which I responded).

What is Alan Kay’s definition of object-oriented? (Quora member Andrea Ferro had a good answer to this as well.)

A big thing I realized while writing this answer is that Kay’s notion of OOP doesn’t really have to do with a specific programming language, or a programming “paradigm.” As I said in my answer, it’s a method of system organization. One can use an object-oriented methodology in setting up a server farm. It’s just that Kay has used the same idea in isolating and localizing code, and setting up a system of communication within an operating system.

Another idea that had been creeping into my brain as I answered questions on OOP is that Kay’s notion of interfaces was really an abstraction. Interfaces were the true types in his message-passing notion of OOP, and interfaces can and should span classes. So, types were supposed to span classes as well! The image that came to mind is that interfaces can be thought of as sitting between communicating objects. Messages, in a sense, pass through them, to dispatch logic in the receiving object, which then determines what actual functionality is executed as a result. Even the concept of behavior is an abstraction, a step removed from classes, because the whole idea of interfaces is that you can replace the class that carries out a behavior with a completely new implementation (supporting the same interface), and the expected behavior will be exactly the same. In one of my answers I said that objects in OOP “emulate” behavior. That word choice was deliberate.
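To make this concrete, here is a minimal sketch in Smalltalk-80 syntax (the classes and messages are my own invention for illustration, not from any of my answers; class definitions are abbreviated, and I use the common ClassName >> selector convention for showing methods). Two classes that share nothing but Object each answer the same message, so either one can stand behind the same interface:

    "Two hypothetical classes that both answer #report:. The message,
     not any class, is the informal interface."
    Object subclass: #ConsoleLogger.
    ConsoleLogger >> report: aString
        "Write the entry to the Transcript."
        Transcript show: aString; cr.

    Object subclass: #RemoteLogger.
    RemoteLogger >> report: aString
        "A stand-in body; imagine it forwarding aString over a network."
        ^self

    "The sender is written against the message, not against a class.
     Swapping the object behind 'logger' changes the behavior, not this code."
    | logger |
    logger := ConsoleLogger new.
    logger report: 'disk almost full'.
    logger := RemoteLogger new.
    logger report: 'disk almost full'.

The sending code never names an implementation; the message is bound to one only when it is received, which is the late binding Kay emphasized.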

This finally made clearer to me than ever before why Kay said that OOP is about what goes on between objects, not what goes on within the objects themselves (which are just endpoints for messages). The abstraction is interstitial: it is in the messaging that passes between objects, and in the interfaces.

This is a conceptual description. The way interfaces were implemented in Smalltalk was as collections of methods in classes. They were strictly a “gentleman’s agreement,” both in terms of the messages to which they matched, and their behavior. They did not have any sort of type identifiers, except for programmers recognizing a collection of method headers (and their concomitant behaviors) as an interface.

Are utility classes good OO design?

Object-Oriented Programming: In layman’s terms, what is the difference between abstraction, encapsulation and information hiding?

What is the philosophical genesis of object-oriented programming methodology?

Why are C# and Java not considered as Object Oriented Language compared to the original concept by Alan Kay?

The above answer also applies to C++.

What does Alan Kay mean when he said: “OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP”?

What does Darwin’s theory of evolution bring to object oriented programming methodology?

This last answer gets to what I think are some really interesting thoughts that one can have about OOP. I posted a video in my answer from a presentation by philosopher Daniel Dennett on “Free Will Determinism and Evolution.” Dennett takes an interesting approach to philosophy. He seems almost like a scientist, but not quite. In this presentation, he uses Conway’s game of “life” to illustrate a point about free will in complex, very, very, very large scale systems. He proposes a theory that determinism is necessary for free will (not contrary to it), and that as systems survive, evolve, and grow, free will becomes more and more possible, whereby multiple systems that are given the same inputs will make different decisions (once they reach a structured complexity that makes it possible for them to make what could be called “decisions”). I got the distinct impression after watching this that this idea has implications for where object-orientation can go in the direction of artificial intelligence. It’s hard for me to say what this would require of objects, though.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I found a version of Smalltalk-72 online through Gilad Bracha on Google+. It’s being written by Dan Ingalls, the original implementor of Smalltalk, on his relatively new web authoring platform called Lively Kernel (or Lively Web). I don’t know what Ingalls has planned for it, or whether it will be up for long. I’m just writing about it because I found it an interesting and enlightening experience, and I wanted to share it with others so that hopefully you’ll gain similar value from trying it out. You can use ST-72 here. I have some words of warning about it, as I’ll gradually describe in this post.

First of all, I’ve noticed as of this writing that as it’s loading up it puts up a scary “error occurred” message, and a bunch of “loading” messages get spewed all over the screen. However, if you are patient, all will be well. The error screen clears after a bit, and you can start using ST-72. This looks to be a work in progress, and some of it works.

What you’ll see is a white vertical rectangle surrounded by a yellowish box that contains some buttons. The white rectangle represents the screen area you work in. At the bottom of that screen area is a command line, which starts off with “Welcome to SMALLTALK.”

This environment was created using an original saved image of a running Smalltalk-72 system from 40 years ago. Even though the title says “ALTO Smalltalk-72,” it’s apparently running on a Nova emulator. The Nova was a minicomputer from Data General, which was the original platform Smalltalk was written on at Xerox PARC in the early 1970s. I’m unclear on what the connection is to the Alto’s hardware. You can even look at a Nova assembly monitor display by clicking on the “Show Nova” button, which will show you a disassembly of the machine instructions that the emulator is executing as you use Smalltalk. (To get back to Smalltalk, click on “Show Smalltalk.”) The screen display you see is as it was originally.

Smalltalk-72 is not the first version of Smalltalk, but it comes pretty close. According to Alan Kay’s retelling in “The Early History of Smalltalk,” there was an earlier version (which was the result of a bet he made with his PARC colleagues), but it was rudimentary.

This version does not run like the versions of Smalltalk most people who know the language have used, which were based on Smalltalk-80. There are no workspace windows (yet), other than the command line interface at the bottom of the screen area, though there is a class editor (which as of this writing I’d say is non-functional). It doesn’t look like there’s a debugger. Anytime an error occurs, a subwindow with an error message pops up (which can be dismissed by pressing ctrl-D). You can still get some things done with the environment. To really get an idea of how to use it you need to click on “Open ST-72 Manual.” This will bring up a PDF of the original Smalltalk-72 documentation.

What you should keep in mind, though (again, as of this writing), is that some features will not work at all. File functions do not work. There’s a keystroke the manual calls “‘s”, which is supposed to recall a specified object attribute, such as, for example, “title’s string” to access a variable called “string” out of an object called “title.” (“title” receives “‘s” and “string” as separate messages, but “‘s” is supposed to be received as a single character, not the two shown here.) I have not been able to find this keystroke on the keyboard, though you can substitute a different message for it (whatever you want to use) inside a class. The biggest thing missing at this point is that while it can read the mouse pointer’s position, this ST-72 environment does not detect any mouse button presses (the original mouse that was used with it had several mouse buttons). This disables any ability to use the built-in class editor, which makes using ST-72 a chore for anything more than “hello world”-type stuff, because besides adding a method to a class (which is easy to do), it seems the only way to change code inside a class is to erase the class (assign nil to its name) and then completely rewrite it. I also found this made later code examples in the documentation (which got more into sophisticated graphics) impossible to try out, as much as I tried to use keyboard actions to substitute for mouse clicks.

What I liked about seeing this version was that it helped explain what Kay was talking about when he described the first versions of Smalltalk as “iconic” programming languages, and what he described of Smalltalk’s parsing technique, in “The Early History of Smalltalk” (which I will call “TEHS” hereafter).

To use this version, you’ll need to get familiar with how to use your keyboard to type out the right symbols. It’s not cryptic like APL, but it’s impossible to do much without using some symbolic characters in the code.

The most fascinating aspect to me is that parsing was a major part of how objects operated in this environment. If you’re familiar with Lisp, ST-72 will seem a bit Lisp-like. In fact, Kay says in TEHS that one of his objectives was to improve on Lisp’s use of “special forms” by making them unnecessary. He did this by making parsing an integral part of how objects operate, and by making what appear to be syntactic elements (what would be reserved words or characters in other languages) their own classes, which receive messages that are just the other elements of a Smalltalk expression in your code, so the parsing action goes pretty deep into how the whole system operates.

Objects in ST-72 exist in what I’d call a “nether world” between the ways that Lisp functions and Lisp macros behave (without the code generation part). Rather than substituting bound values and evaluating, as Lisp does, an object parses tokens and evaluates them (though it does have bound values in class, instance, and temporary variables).

It’s possible to create a class which has no methods (or just an implicit anonymous one). Done this way, it looks almost like a lambda (an anonymous function) in Lisp with no parameters. However, to pass parameters to an object, the preferred method is to execute some parsing actions inside the class’s code. ST-72 makes this pretty easy. Once you start doing it, using the “eyeball” character, classes start to look a bit like Lisp functions with a COND expression inside.

Distinguishing between using a class and an instance of it can be confusing in this environment. If you reference a class and pass it a message, that’s executing a class’s code using its class variable(s). (In ST-80 this would be called using “class-side code,” though in ST-72 the only distinction between class and instance is what variables inside the class specification get used. The code that gets executed is exactly the same in both cases.) In order to use an object instance (using its instance variable(s)), you have to assign the class to a variable first, and then pass your message to that variable. Assigning a class to a variable automatically creates an instance of the class. There is no “new” operator. The semantics of assignment look similar to that of SETQ in Lisp.
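For example (rendered in ASCII, with <- standing for the assignment left-arrow typed as shift-_, and assuming a box class along the lines of the one in the ST-72 manual and TEHS, so take the message names as approximate):

    joe <- box.     "assigning the class creates and binds a new instance"
    joe turn 30.    "send 'turn 30' to the instance; uses joe's instance variables"
    box turn 10.    "sending to the class itself runs the same code, class-side"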

A major difference I notice between ST-72 and ST-80 is that in ST-80 you generally set up a method with a signature that lays out what message it will receive, and its parameter values, and this is distinct from the functionality that the method executes. In ST-72 you create a method by setting up a parsing expression (using the “eyeball” character) within the body of the class code, followed by an “implies” (conditional) expression, which then may contain additional parsing/implies expressions within it to get the message’s parameters, and somewhere in the chain of these expressions you execute an action once the message is fully parsed. I imagine this could get messy with sophisticated interactions, but at the same time I appreciated it, because it can allow a very straightforward programming style when using the class.

Creating a class is easy. You use the keyword “to” (“to” is itself a class, though I won’t get into that). If anyone has used the Logo language, this will look familiar. One of Kay’s inspirations for Smalltalk was Logo. After the “to”, you give the class a name, and enter any temporary, instance, and/or class variables. After that, you start a vector (a list) which contains your code for the class, and then you close off the vector to complete the class specification.
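Here is a rough ASCII sketch of what a class specification looks like, modeled loosely on the box example in the ST-72 manual and TEHS. I write “eye” for the eyeball character, => for implies, and <- for the left-arrow; “:” in the bodies fetches the next value from the message stream, and ISNEW (true while a new instance is being created) and SELF are as I remember them from the manual, so treat the details as approximate:

    to box : x y (                       "x and y are the instance variables"
        ISNEW => ('x <- 50. 'y <- 50.)   "runs when an instance is created"
        eye move => ('x <- :. 'y <- :.)  "matched 'move'; fetch two arguments"
        eye draw => (...)                "matched 'draw'; drawing code elided"
    )

The eyeball/implies pairs play the role of methods: each one parses a token from the incoming message, and when it matches, the code after the implies arrow runs. That is the “Lisp function with a COND inside” feel mentioned above.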

All graphics in ST-72 are done using a “turtle” (though it is possible to use x-y coordinates to position it, and to direct it to draw lines). So it’s easy to do turtle graphics in this version of Smalltalk. If you go all the way through the documentation, you’ll see that Kay and Adele Goldberg, who co-wrote the documentation, do some sophisticated things with the graphics capabilities, such as creating windows and menus, though since mouse button presses are non-functional at this point, I found this part difficult, if not impossible, to deal with.
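For a small taste (again in ASCII: @ stands for the smiley/turtle character, typed as shift-2; “go” and “turn” are the turtle messages as I remember them from the manual, and the chained style follows its examples, so treat this as approximate):

    @ go 100 turn 90 go 100 turn 90 go 100 turn 90 go 100 turn 90.   "trace a square"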

I have at times found that I’ve wanted to just start the emulator over and erase what I’ve done. You can do this by hitting the “Show Nova” button, hitting the “Restart” button, and then hitting “Show Smalltalk.”

Here is the best I could do for an ST-72 key map. Some of this is documented. Some of it I found through trial and error. You’ll generally get the gist of what these keys do as you go through the ST-72 documentation.

function | keystroke | description
! (“do it”) | \ (backslash) | evaluates the expression entered on the command line, and executes it
hand | shift-‘ | quotes a literal (equivalent to “quote” in Lisp)
eyeball | shift-5 | “look for”; used for parsing tokens
open colon | ctrl-C | fetches the next token, but does not evaluate it (looks like two small circles, one on top of the other)
keyhole | ctrl-K | peeks at the input stream, but does not fetch a token from it
implies | shift-/ | used to create if…then…else… expressions
return | shift-1 | returns a value
smiley/turtle | shift-2 | for turtle graphics
open square | shift-7 | bitwise operation, followed by:
    a * (number) = a AND (number)
    a + (number) = a OR (number)
    a - (number) = a XOR (number)
    a / (number) = a left-shift by (number)
? | shift-~ (tilde) | question mark character
‘s | (unknown) |
done | ctrl-D | exits subwindow
unary minus | |
less-than-or-equal | ctrl-A |
greater-than-or-equal | ctrl-Z |
not equal | ctrl-N |
% | ctrl-V |
@ | ctrl-F |
! | ctrl-Q | exclamation point character
double quote | ctrl-O | double quote character
zero(?) | shift-4 |
up-arrow | shift-6 |
assignment | shift-_ | displays a left-arrow, equivalent to “=” in Algol-derived languages
temp/instance/class variable separator | : (colon) | looks like ‘/’ in the ST-72 documentation, but you should type “:” instead

When entering Smalltalk code, the Return key does not enter the code into the system. It just does a line-feed so you can enter more code. To execute code you must “do it,” by pressing ‘\’.
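So a first interaction might look something like the following (with \ shown where you press “do it”; if I remember right, the value of the expression is echoed back). Note that even “+” here is not syntax: the number 3 receives “+” as a message and then fetches the 4, which is the deep parsing behavior described earlier:

    3 + 4 \
    7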

You’ll notice some of the keystrokes are just for typing out a literal character, like ‘?’. This is due to the emulator re-mapping some of the keystrokes on your keyboard to do something else, so it was necessary to also re-map the substituted characters to something else so they could still be used.

Using ST-72 feels like playing around in a sandbox. It’s fun seeing what you can find out. Enjoy.

Read Full Post »

This presentation by Bret Victor could alternately be called “Back To The Future,” but he called it “The Future of Programming.” What’s striking is that this is still a possible future for programming, even though it was all conceivable back in 1973, the time period in which Bret set it.

I found this video through Mark Guzdial. It is the best presentation on programming and computing I’ve seen since watching what Alan Kay has had to say on the subject. Bret did a “time machine” performance set in 1973 on what was accomplished by some great engineers working on computers in the 1960s and ’70s, and generally what those accomplishments meant to computing. I’ve covered some of this history in my post “A history lesson in government R&D, Part 2,” though I de-emphasized the programming aspect.

Bret posed as someone presenting new information in 1973 (complete with faux overhead projector), trying to predict what the future of programming and computing would be like, given these accomplishments, and reasoning about where then-current thinking would lead if it progressed.

The main theme, which has been a bit of a revelation to me recently, is this idea of programming being an exercise in telling the computer what you want, and having the computer figure out how to deliver it, and even computers figuring out how to get what they need from each other, without the programmer having to spell out how to do these things step by step. What Bret’s presentation suggested to me is that one approach to doing this is having a library of “solvers,” operating under a set of principles (or perhaps a single principle), that the computing system can invoke at any time, in a number of combinations, in an attempt to accomplish that goal; that operational parameters are not fixed, but relatively fluid.

Alan Kay talked about Licklider’s idea of “communicating with aliens,” and how it relates to computing, in this presentation he gave at SRII a couple of years ago. A “feature” of many of Alan’s videos is that you don’t get to see the slides he’s showing…which can make it difficult to follow what he’s talking about. So I’ll provide a couple of reference points. At about 17 minutes in, the “this guy” he’s talking about is Licklider, and a paper Licklider wrote called “Man-Computer Symbiosis.” At about 44 minutes in, I believe Alan is talking about the Smalltalk programming environment.

http://vimeo.com/22463791

I also happened to find this passage in “The Dream Machine,” by M. Mitchell Waldrop. It’s sourced from an article written by J.C.R. Licklider and Robert Taylor called “The Computer as a Communication Device,” published in Science & Technology in 1968, and reprinted in “In Memoriam: J.C.R. Licklider, 1915-1990,” Digital Systems Research Center Reports, vol. 61, 1990:

“Modeling, we believe, is basic and central to communications.” Conversely, he and Taylor continued, “[a successful communication] we now define concisely as ‘cooperative modeling’–cooperation in the construction, maintenance, and use of a model. [Indeed], when people communicate face to face, they externalize their models so they can be sure they are talking about the same thing. Even such a simple externalized model as a flow diagram or an outline–because it can be seen by all the communicators–serves as a focus for discussion. It changes the nature of communication: When communicators have no such common framework, they merely make speeches at each other; but when they have a manipulable model before them, they utter a few words, point, sketch, nod, or object.”

I remember many years ago hearing Alan Kay say that what’s happened in computing since the 1970s “has not been that exciting.” Bret Victor ably justified that sentiment. What he got across (to me, anyway) was a sense of tragedy that the thinking of the time did not propagate and progress from there. Academic computer science, and the computer industry acted as if this knowledge barely existed. The greater potential tragedy Bret expressed was, “What if this knowledge was forgotten?”

The very end of Bret’s presentation made me think of Bob Barton, a man Alan has sometimes referred to when talking about this subject. Barton was someone Alan seemed very grateful to have met as a graduate student, because he disabused the computer science students at the University of Utah of their notions of what computing was. Alan said that Barton freed their minds on the subject, which opened up a world of possibilities that they could not previously see. In a way Bret tried to do the same thing by leaving the ending of his presentation open-ended. He did not say that meta-programming “is the future,” just that it is one really interesting idea that was developed decades ago. Many people in the field think they know what computing and programming are, but these are still unanswered questions. I’d add to that message that what’s needed is a spirit of adventure and exploration. Like our predecessors, we’ll find some really interesting answers along the way, which will be picked up by others and incorporated into the devices the public uses, as happened before.

I hope that students just entering computer science will see this and carry the ideas from it with them as they go through their academic program. What Bret presents is a perspective on computer science that is not shared by much of academic CS today, but if CS is to be revitalized one thing it needs to do is “get” what this perspective means, and why it has value. I believe it is the perspective of a real computer scientist.

Related post: Does computer science have a future?

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I’ve been following José A. Ortega-Ruiz (he goes by “jao” for short) for about 6 years now, on his blog Programming Musings. Recently I noticed he wasn’t posting much, if at all. Finally he said he was starting up a new blog here, using a static site tool written in Racket (a Scheme implementation) called “Frog,” for “frozen blog.” I found his 2nd post, called “Where my mouth is,” inspirational.

For many years, I’ve been convinced that programming needs to move forward and abandon the Algol family of languages that, still today, dampens the field. And that that forward direction has been signaled for decades by (mostly) functional, possibly dynamic languages with an immersive environment. But it wasn’t until recently that I was able to finally put my money where my mouth has been all these years.

A couple of years ago I finally became a co-founder and started working on a company I could call my own (I had been close in the past, but not really there), and was finally in a position to really influence our development decisions at every level. Even at the most sensitive of them all: language choice.

He says he’s working with a team of people who are curious, and are willing to take the risk of trying out less traditional, but as they perceive, more powerful programming environments, such as Clojure (a Lisp variant that runs on the JVM, and is pronounced “closure”). Read the rest of it.

I once aspired to work with a team such as this. I had been working in IT services for several years, and I thought I’d continue on in that vein with something like Smalltalk, starting my own little “web shop,” or just happening to find a company that was already creating IT systems with something like it. More recently I’ve found computing more interesting as a field of study, not in the typical sense that academic computer science teaches, but as a phenomenon unto itself: a machine capable of generating, and acting like, a machine of a different kind from itself, simply by being fed different patterns of a certain nature. As Alan Kay has demonstrated, looking at computing this way, you can still arrive at an environment that looks recognizable, but is vastly different under the covers from how the typical user-oriented system is structured today.

I’ve put two links in my links sidebar for jao’s blogs, “Programming Musings” (his old blog) and “Programming Musings 2,” since his old blog is rather like an archive of really interesting and valuable knowledge in computer science, and I encourage people to scan through it for anything they find of value.

I wish jao the best of luck in his new venture!

Read Full Post »

I came across this interview with Lesley Chilcott, the producer of “An Inconvenient Truth” and “Waiting For Superman.” Kind of extending her emphasis on improving education, she produced a short 9-minute video selling the idea of “You should learn to code,” both to adults and children. It addresses two points: 1) the anticipated shortage of programmers needed to write software in the future, and 2) the increasing ubiquity of programming in all sorts of fields where people would think it wouldn’t exist, such as manufacturing and agriculture.

The interview gets interesting at 3 minutes 45 seconds in.

Michelle Fields, the interviewer, asked what I thought were some insightful questions. She started things off with:

It seems as though the next generation is so fluent in technology. How is it that they don’t know what computer programming is?

Chilcott said:

I think the reason is, you know, we all use technology every day. It’s surrounding us. Like, we can debate the pro’s and con’s of technology/social media, but the bottom line is it’s everywhere, right? So I think a lot of people know how to read it. They grow up playing with an iPhone or something like that, but they don’t know how to write it. And so when you say, “Do you know what this is,” specifically, or what this job is–and you know, those kids are in first, second, fifth grade–they know all about it, but they don’t know what the job is.

I found this answer confusing. She’s kind of on the right track, thinking of programming as “writing.” I cut her some slack, because as she admits in the interview, she’s just started programming herself. However, as I’ve said before, running software is not “reading.” It’s really more like being read to by a machine, like listening to an audio book, or someone else reading to you. You don’t have to worry about the mental tasks of pronunciation, sentence construction, or punctuation. You can just listen to the story. Running software doesn’t communicate the process that the code is generating, because there’s a lot that the person using it is not shown. This is on purpose, because most people use software to accomplish some utilitarian task unrelated to how a computer works. They’re not using it to understand a process.

The last sentence came across as muddled. I think what she meant was they know all about using technology, but they don’t know how to create it (“what the job is”).

Fields then asked,

There was this study which found that 56% of students would rather eat broccoli than learn math. Do you think that since computer programming is somewhat related to math, that that’s the reason children and students shy away from it?

Chilcott said:

It could be. That is one of the myths that exist. There is some, you know, math, but as Bill Gates and some other people said, you know, addition, subtraction–It’s much more about problem solving, and I think people like to problem-solve, they like mysteries, they like decoding things. It’s much more about that than complicated algorithms.

She’s right that there is problem solving involved with programming, but she’s either mistaken or confusing math with arithmetic when she says that the relationship between math and programming is a “myth.” I can understand why she tries to wave it off, because as Fields pointed out, most students don’t like math. I contend, as do some mathematicians, that this is due to the way it’s taught in our schools. The essence of math gets lost. Instead it’s presented as a tool for calculation, and possibly a cognitive development discipline for problem solving, neither of which communicates what it really is, and both of which remove a lot of its beauty.

In reality math is pervasive in programming, but to understand why I say this you have to understand that math is not arithmetic (addition, subtraction), as she suggests. This confusion is common in our society. I talk more about this here. Having said this, it does not mean that programming is hard right off the bat. The math involved has more to do with logic and reasoning. I like the message in the video below from a couple of the programmers interviewed: “You don’t have to be a genius to know how to code. … Do you have to be a genius to do math? No.” I think that’s the right way to approach this. Math is important to programming, but it’s not just about calculating a result. While there’s some memorization involved in understanding a programming language’s rules and knowing what different things are called, that’s not a big part of it.

The cool thing is you can accomplish some simple things in programming, to get started, without worrying about math at all. It becomes more important if you want to write complex programs, but that’s something that can wait.

My current understanding is the math in programming is about understanding the rules of a system and what statements used in that system imply, and then understanding the effects of those implications. That sounds complicated, but it’s just something that has to be learned to do anything significant with programming, and once learned will become more and more natural. I liken it to understanding how to drive a car on the road. You don’t have to learn this concept right away, though. When first starting out, you can just look at and enjoy the effects of trying out different things, exploring what a programming environment offers you.

Where Chilcott shines in the interview above is when she becomes the “organizer.” She said that even though 95% of the schools have computers and internet access, only 10% have what she calls a “computer science” course. (I wish they’d go back to calling it a “programming course.” Computer science is more than what most of these schools teach, but I’m being nit-picky.) The cool thing about Code.org, a web site she promotes, is that it tries to locate a school near you that offers programming courses. If there aren’t any, no problem. You can learn some basics of programming right inside your browser using the online tools that it offers on the site.

The video Chilcott produced is called “Code Stars” in the above interview, but when I went looking for it I found it under the name “the Code.org film,” or, “What Most Schools Don’t Teach.”

Here is the full 9-minute video:

If you want the shorter videos, you can find them here.

The programming environment you see kids using in these videos is called “Scratch.”

Gabe Newell said of programming:

When you’re programming, you’re teaching possibly the stupidest thing in the entire universe–a computer–how to do something.

I see where Newell is going with this, but from my perspective it depends on what programming environment you’re using. Some programming languages have the feel of you “teaching” the system when you’re programming. Others have the feel of creating relationships between simple behaviors. Others, still, have the feel of using relationships to set up rules for a new system. Programming comes in a variety of approaches. However, the basic idea that Newell gets across is true: computers only come with a set of simple operations, and that’s it. They don’t do very much by themselves, or even in combination. It’s important for those new to programming to learn this early on. Some of my early experiences in programming match those of new programmers even today. One of them is that, when using a programming language, one is tempted to assume that the computer will infer the meaning of some programming expression from context. There is some context used in programming, but not much, and it’s highly formalized. It’s not intuitive. I remember that when I first learned this, it was like the joke where someone introduces his/her friend to a dumb, witless character in a skit. He/she says, “Say hi to my friend, Frank,” and the dummy says, “Hi to my friend Frank.” And the guy/gal says, “NO! I mean…say hello,” making a hand gesture trying to get the two to connect, and the dummy might look at the friend and say, “Hello,” but that’s it. That’s kind of a realization to new programmers. Yeah, the computer has to have almost everything explained to it (or modeled), even things we do without thinking about them. It’s up to the programmer to make the connections between the few things the computer knows how to do, to make something larger happen.

Jack Dorsey talked about programming in a way that I think is important. His ultimate goal when he started out was to model something, and make the model malleable enough that he could manipulate it, because he wanted to use it for understanding how cities work.

Bill Gates emphasized control. This is a common early motivation for programmers. Not necessarily controlling people, but controlling the computer. What Gates was talking about was what I’d call “making your own world,” like Dorsey was saying, but he wanted to make it real. When I was in high school (late 1980s) it was a rather common project for aspiring programming students to create “matchmaking” programs, where boys and girls in the whole school would answer a simple questionnaire, and a computer program that a student had written would try to match them up by interests, not unlike some of the online dating sites that are out there now. I never heard of any students finding their true love through one of these projects, but it was fun for some people.

Vanessa Hurst said, “You don’t have to be a genius to know how to code. You need to be determined.” That’s pretty much it in a nutshell. In my experience everything else flowed from determination when I was learning how to do this. It will drive you to learn what you need to learn to get it, even if sometimes it’s subject matter you find tedious and icky. You learn to just push through it to get to the glorious feeling at the end of having accomplished what you set out to do.

Newell said at the end of the video,

The programmers of tomorrow are the wizards of the future. You’re going to look like you have magic powers compared to everybody else.

That’s true, but this has been true for a long time. In my professional work developing custom database solutions for business customers, I had the experience of being viewed like a magician, because customers didn’t know how I did what I did. They just appreciated the fact that I could do it. I really don’t mean to discourage anyone, because I still enjoy programming today, and I want to encourage people to learn programming, but I feel the need to say something, because I don’t want people to get disillusioned over this. This status of “wizard,” or “magician,” is not always what it’s cracked up to be. It can feel great, but there is a flip side to it that can be downright frustrating. This is because people who don’t know a whit of what you know how to do can get confused about what your true abilities are, and they can develop unrealistic expectations of you. I’ve found that wherever possible, the most pleasurable work environment is working among those who also know how to code, because we’re able to size each other up, and assign tasks appropriately. I encourage those who are pursuing software development as a career to shoot for that.

A couple things I can say for being able to code are:

  • It makes you less of a “victim” in our technology world. Once you know how to do it, you have an idea about how other programs work, and the pitfalls they can fall into that might compromise your private information, allow a computer cracker to access it, or take control of your system. You don’t have to feel scared by the alarming “hacking” or phishing reports you hear on the news, because you can be choosy about what software you use based on how it was constructed, what it’s capable of, how much power it gives you (not someone else), and not just base a decision on the features it has, or cool graphics and promotion. You can become a discriminating user of software.
  • You gain the power to create the things that suit you. You don’t have to use software that you don’t like, or that you think is being offered on unreasonable terms. You can create your own, and it can be whatever you want. It’s just a matter of the knowledge you’re willing to gather and the amount of energy you’re willing to put into developing the software.

Edit 5-20-2013: While I’m on this subject, I thought I should include this video by Mitch Resnick, who has been involved in creating Scratch at MIT. Similar to what Lesley Chilcott said above, he said, “It’s almost as if [users of new technologies] can read, but not write,” referring to how people use technology to interact. I disagreed with the notion, above, that using technology is the same as reading. Resnick hedged a bit on that. I can kind of understand why he might say this, because by running a Scratch program, it is like reading it, because you can see how code creates its results in the environment. This is not true, however, of much of the technology people use today.

Mark Guzdial asked a question a while back that I thought was important, because it brings this issue down to where a lot of people live. If the kind of literacy I’m going to talk about below is going to happen, the concept needs to be able to come down “out of the clouds” and become more pedestrian. Not to say that literacy needs to be watered down in toto (far from it), but that it should be possible to read and write to communicate everyday ideas and experiences without being super sophisticated about it. What Mark asked was, in the context of a computing medium, what would be the equivalent of a “note to grandma”? I remember suggesting Dan Ingalls’s prop-piston concept from his Lively Kernel demos as one candidate. Resnick provided what I thought were some other good ones, but in the context of Mother’s Day.

Context reversal

The challenge that faces new programmers today is different from when I learned programming as a child in one fundamental way. Today, kids are introduced to computers before they enter school. They’re just “around.” If you’ve got a cell phone, you’ve got a computer in your pocket. The technology kids use presents them with an easy-to-use interface, but the emphasis is on use, not authoring. There is so much software around it seems you can just wish for it, and it’s there. The motivation to get into programming has to be different than what motivated me.

When I was young the computer industry was still something new. It was not widespread. Most computers that were around were big mainframes that only corporations and universities could afford and manage. When the first microcomputers came out, there wasn’t much software for them. It was a lot easier to be motivated to learn programming, because if you didn’t write it, it probably didn’t exist, or it was too expensive to get (depending on your financial circumstances). The way computers operated was more technical than they are today. We didn’t have graphical user interfaces (at first). Everything was done from some kind of text command line interface that filled the entire screen. Every computer came with a programming language as well, along with a small manual giving you an introduction on how to use it.

[Image: PC-DOS, from Wikipedia]

It was expected that if you bought a computer you’d learn something about programming it, even if it was just a little scripting. Sometimes the line between what was the operating system’s command line interface, and what was the programming language was blurred. So even if all you wanted to do was manipulate files and run programs, you were learning a little about programming just by learning how to use the computer. Some of today’s software developers came out of that era (including yours truly).

Computer and operating system manufacturers had stopped including programming languages with their systems by the mid-1990s. Programming languages had also been taken over by professionals. The typical languages used by developers were much harder to learn for beginners. There were educational languages around, but they had fallen behind the times. They were designed for older personal computer systems, and when the systems got more sophisticated no one had come around to update them. That began to be remedied only in the last 10 years.

Computer science was still a popular major at universities in the 1990s, due to the dot-com craze. When that bubble burst in 2000, that went away, too. So in the last 18 years we’ve had what I’d call an “educational programming winter.” Maybe we’ll see a revival. I hope so.

Literacy reconsidered

I’m directing the rest of this post to educators, because there are some issues around a programming revival I’d like to address. I’m going to share some more detailed history, and other perspectives on computer programming.

What many may not know is that we as a society have already gone through this once. From the late 1970s to the mid-1980s there was a major push to teach programming in schools as “computer literacy.” This was the regime that I went through. The problem was that some mistakes were made, and they caused the educational movement behind it to collapse. I think this happened due to a misunderstanding of what’s powerful about programming, and I’d like educators to evaluate their current thinking in light of this, so that hopefully they do not repeat the mistakes of the past.

As I go through this part, I’ll mostly be quoting from a Ph.D. thesis written by John Maxwell in 2006 called “Tracing the Dynabook: A Study of Technocultural Transformations.” (h/t Bill Kerr)

Back in the late 1970s microcomputers/personal computers were taking off like wildfire with Apple II’s, and Commodore VIC-20’s, and later, Commodore 64’s, and IBM PCs. They were seen as “the future.” Parents didn’t want their children to be “left behind” as “technological illiterates.” This was the first time computers were being brought into the home. It was also the first time many schools were able to grant students access to computers.

Educators thought about the “benefits” of using a computer for certain cognitive and social skills. Programming spread in public school systems as something to teach students. Fred D’Ignazio wrote in an article called “Beyond Computer Literacy,” from 1983:

A recent national “computers in the schools” survey conducted by the Center for the Social Organization of Schools at Johns Hopkins University found that most secondary schools are using computers to teach programming. … According to the survey, the second most popular use of computers was for drill and practice, primarily for math and language arts. In addition, the majority of the teachers who responded to the survey said that they looked at the computer as a “resource” rather than as a “tool.”

…Another recent survey (conducted by the University of Maryland) echoes the Johns Hopkins survey. It found that most schools introduce computers into the curriculum to help students become literate in computer technology. But what does this literacy entail?

Because of the pervasive spread of computers throughout our society, we have all become convinced that computers are important. From what we read and hear, when our kids grow up almost everyone will have to use computers in some aspect of their lives. This makes computers, as a subject, not only important, but also relevant.

An important, relevant subject like computers should be part of a school’s curriculum. The question is how “Computers” ought to be taught.

Special computer classes are being set up so that students can play with computers, tinker with them, and learn some basic programming. Thus, on a practical level, computer literacy turns out to be mere computer exposure.

But exposure to what? Kids who are now enrolled in elementary and secondary schools are exposed to four aspects of computers. They learn that computers are programmable machines. They learn that computers are being used in all areas of society. They learn that computers make good electronic textbooks. And (something they already knew), they learn that computers are terrific game machines.

… According to the surveys, real educational results have been realized at schools which concentrate on exposing kids to computers. … Kids get to touch computers, play with them, push their buttons, order them about, and cope with computers’ incredible dumbness, their awful pickiness, their exasperating bugs, and their ridiculous quirks.

The main benefits D’Ignazio noted were ancillary. Students stayed at school longer, came in earlier, and stayed late. They were more attentive to their studies, and the computers fostered a sense of community, rather than competition and rivalry. If you read his article, you get a sense that there was almost a “worship” of computers on the part of educators. They didn’t understand what they were, or what they represented, but they were so interesting! There’s a problem there… When people are fascinated by something they don’t understand, they tend to impose meanings on it that are not backed by evidence, and so miss the point. The mistaken perceptions can be strengthened by anecdotal evidence (one of the weakest kinds). This is what happened to programming in schools.

The success of the strategy of using computers to try to improve higher-order thinking was illusory. John Maxwell’s telling of the “life and death of Logo” (my phrasing) serves as a useful analog to what happened to programming in schools generally. For those unfamiliar with it, the basic concept of Logo was a programming environment in which the student manipulates an object called a “turtle” via commands. The student can ask the turtle to rotate and move. As it moves it drags a pen behind it, tracing its trail. Other versions of the language were created that allowed more capabilities, enabling further exploration of the concepts for which it was created. The original idea of Seymour Papert, who taught children using Logo, was to use it to teach young children sophisticated math concepts, but our educational system imposed a very different definition and purpose on it. Just because something is created on a computer with the intent of it being used for a specific purpose doesn’t mean that others can’t use it for completely different, and possibly less valuable, purposes. We’ve seen this a lot with computers over the years; people “misusing” them for both constructive and destructive ends.

As I go forward with this, I just want to put out a disclaimer that I don’t have answers to the problems I point out here. I point them out to make people aware of them, to get people to pause in the pursuit of putting people through this again, and to point to some people who are working on trying to find some answers. I present some of their learned opinions. I encourage interested readers to read up on what these people have had to say about the use of computers in education, and perhaps contact them with the idea of learning more about what they’ve found out.

I ask the reader to pay particular attention to the “benefits” that educators imposed on the idea of programming during this period that Maxwell talks about, via what Papert called “technocentrism.” You hear this being echoed in the videos above. As you go through this, I also want you to notice that Papert, and another educator by the name of Alan Kay, who have thought a lot about what computers represent, have a very different idea about the importance of computers and programming than is typical in our school system, and in the computer industry.

The spark that started Logo’s rise in the educational establishment was the publication of Papert’s book, “Mindstorms: Children, Computers, and Powerful Ideas” in 1980. Through the process of Logo’s promotion…

Logo became in the marketplace (in the broad sense of the word) [a] particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes “computer literacy.” These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the “thinking skills” strategy was never part of Papert’s vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon.

It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm, there would come a backlash. It was also perhaps inevitable given the weight that was put on it: Logo had come, within educational circles, to represent computer programming in the large, despite Papert’s frequent and eloquent statements about Logo’s role as an epistemological resource for thinking about mathematics. [my emphasis — Mark] In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo “didn’t work as promised,” as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history. Papert indeed found himself searching for higher ground, as he accused Logo’s growing numbers of critics of technocentrism:

“Egocentrism for Piaget does not mean ‘selfishness’—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like ‘What is THE effect of THE computer on cognitive development?’ or ‘Does Logo work?’ … such turns of phrase often betray a tendency to think of ‘computers’ and ‘Logo’ as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology.”

But by 1990, the damage was done: Logo’s image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named “Logo-as-Latin” a past paradigm of educational computing. The blunt idea that “programming” was an activity which could lead to “higher order thinking skills” (or not, as it were) had obviated Papert’s rich and subtle vision of an ego-syntonic mathematics.

By the early 1990s … Logo—and with it, programming—had faded.

The message–or black box–resulting from the rise and fall of Logo seems to have been the notion that “programming” is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. (pp. 183-185)

To be clear, the last part of the quote refers only to the educational value placed on programming by our school system. When educators attempted to formally study and evaluate programming’s benefits on higher-order thinking and the like, they found it wanting, and so most schools gradually dropped teaching programming in the 1990s.

Maxwell addresses the conundrum of computing and programming in schools, and I think what he says is important to consider as people try to “reboot” programming in education:

[The] critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage with the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary ‘humanist’ stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of “the technology must be in the service of the educational objectives and not the other way around.” The determinist critique, in turn, says, ‘the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life’ (for variations, see, Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

“Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become ‘comfortable’ using it. […] We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things–writing and speech, images and sound–every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity?”

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between ‘expert’ and ‘end-user;’ for the ‘end-user’ is profoundly she who is unchanged, uninitiated, unempowered.

A seemingly endless literature describes study after study, project after project, trying to identify what really ‘works’ or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time.

But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don’t amount to much. What is really wrong with computers in education is that for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had are empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things.

But clearly, it would be inane to try to claim that computing technology–one of the most influential defining forces in Western culture of our day, and which shows no signs of slowing down–has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don’t really understand how. And so what shall we do, right now?

It is not that there haven’t been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology “paradigm” as Timothy Koschmann put it. (pp. 18-19)

The answer is not a “reboot” of programming, but rather a rethinking of it. Maxwell makes a humble suggestion: that educators stop being blinded by “the shiny new thing,” or some so-called “new” idea, to the point that they lose their ability to think clearly about what’s being done with regard to computers in education, and that they deal with history and historicism. He said that the technology field has had a problem with its own history, and this tends to bleed over into how educators regard it. The tendency is to forget the past, and to downplay it (“That was neat then, but it’s irrelevant now”).

In my experience, people have associated technology’s past with memories of using it. They’ve given little if any thought to what it represented. They take for granted what it enabled them to do, and do not consider what that meant. Maxwell said that this…

…makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics. We are faced with a tangle of hobbles–instrumentalism, ahistoricism, fear of transformation, Snow’s “two cultures,” and a consumerist subjectivity.

An examination of the history of educational technology–and educational computing in particular–reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert’s philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. (p. 20)

Maxwell went into what Alan Kay thought about the subject. Kay has spent almost as many years as Papert working on a meaningful context for computing and programming within education. Some of the quotes Maxwell uses are from “The Early History of Smalltalk,” (h/t Bill Kerr) which I’ll also refer to. The other sources for Kay’s quotes are included in Maxwell’s bibliography:

What is Literacy?

“The music is not in the piano.” — Alan Kay

The past three or four decades are littered with attempts to define “computer literacy” or something like it. I think that, in the best cases, at least, most of these have been attempts to establish some sort of conceptual clarity on what is good and worthwhile about computing. But none of them have won large numbers of supporters across the board.

Kay’s appeal to the historical evolution of what literacy has meant over the past few hundred years is, I think, a much more fruitful framing. His argument is thus not for computer literacy per se, but for systems literacy, of which computing is a key part.

That this is a massive undertaking is clear … and the size of the challenge is not lost on Kay. Reflecting on the difficulties they faced in trying to teach programming to children at PARC in the 1970s, he wrote that:

“The connection to literacy was painfully clear. It is not just enough to learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate the mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas.”

Because literature is about ideas, Kay connects the notion of literacy firmly to literature:

“What is literature about? Literature is a conversation in writing about important ideas. That’s why Euclid’s Elements and Newton’s Principia Mathematica are as much a part of the Western world’s tradition of great books as Plato’s Dialogues. But somehow we’ve come to think of science and mathematics as being apart from literature.”

There are echoes here of Papert’s lament about mathophobia, not fear of math, but the fear of learning that underlies C.P. Snow’s “two cultures,” and which surely underlies our society’s love-hate relationship with computing. Kay’s warning that too few of us are truly fluent with the ways of thinking that have shaped the modern world finds an anchor here. How is it that Euclid and Newton, to take Kay’s favourite examples, are not part of the canon, unless one’s very particular scholarly path leads there? We might argue that we all inherit Euclid’s and Newton’s ideas, but in distilled form. But this misses something important … Kay makes this point with respect to Papert’s experiences with Logo in classrooms:

“Despite many compelling presentations and demonstrations of Logo, elementary school teachers had little or no idea what calculus was or how to go about teaching real mathematics to children in a way that illuminates how we think about mathematics and how mathematics relates to the real world.” (Maxwell, pp. 135-137)

Just a note of clarification: I refer back to what Maxwell said re. Logo and mathematics. Papert did not use his language to teach programming as an end in itself. His goal was to use a computer to teach mathematics to children. Programming with Logo was the means for doing it. This is an important concept to keep in mind as one considers what role computer programming plays in education.

The problem, in Kay’s portrayal, isn’t “computer literacy,” it’s a larger one of familiarity and fluency with the deeper intellectual content; not just that which is specific to math and science curriculum. Kay’s diagnosis runs very close to Neil Postman’s critiques of television and mass media … that we as a society have become incapable of dealing with complex issues.

“Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in [Thomas] Paine’s Common Sense and being able (and happy) to fluently write a critique or defense of it.” (Maxwell, p. 137)

Extending this quote (from “The Early History of Smalltalk”), Kay went on to say:

Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

At the liberal arts level we would expect that connections between each of the fluencies would form truly powerful metaphors for considering ideas in the light of others.

Continuing with Maxwell (and Kay):

“Many adults, especially politicians, have no sense of exponential progressions such as population growth, epidemics like AIDS, or even compound interest on their credit cards. In contrast, a 12-year-old child in a few lines of Logo […] can easily describe and graphically simulate the interaction of any number of bodies, or create and experience first-hand the swift exponential progressions of an epidemic. Speculations about weighty matters that would ordinarily be consigned to common sense (the worst of all reasoning methods), can now be tried out with a modest amount of effort.”

Surely this is far-fetched; but why does this seem so beyond our reach? Is this not precisely the point of traditional science education? We have enough trouble coping with arguments presented in print, let alone simulations and modeling. Postman’s argument implicates television, but television is not a techno-deterministic anomaly within an otherwise sensible cultural milieu; rather it is a manifestation of a larger pattern. What is wrong here has as much to do with our relationship with print and other media as it does with television. Kay noted that “In America, printing has failed as a carrier of important ideas for most Americans.” To think of computers and new media as extensions of print media is a dangerous intellectual move to make; books, for all their obvious virtues (stability, economy, simplicity) make a real difference in the lives of only a small number of individuals, even in the Western world. Kay put it eloquently thus: “The computer really is the next great thing after the book. But as was also true with the book, most [people] are being left behind.” This is a sobering thought for those who advocate public access to digital resources and lament a “digital divide” along traditional socioeconomic lines. Kay notes,

“As my wife once remarked to Vice President Al Gore, the ‘haves and have-nots’ of the future will not be caused so much by being connected or not to the Internet, since most important content is already available in public libraries, free and open to all. The real haves and have-nots are those who have or have not acquired the discernment to search for and make use of high content wherever it may be found.” (Maxwell, pp. 138-139)

I’m still trying to understand myself what exactly Alan Kay means by “literature” in the realm of computing. He said that it is a means for discussing important ideas, but in the context of computing, what ideas? I suspect from what’s been said here he’s talking about what I’d call “model content,” thought forms, such as the idea of an exponential progression, or the concept of velocity and acceleration, which have been fashioned in science and mathematics to describe ideas and phenomena. “Literature,” as he defined it, is a means of discussing these thought forms–important ideas–in some meaningful context.
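To make the “exponential progression” example concrete, here is a minimal sketch in Scheme. This is my own illustration of the kind of “few lines” Kay describes, not his Logo code: each period, every infected person infects r new people, so the count multiplies by (1 + r) per period.

(define (epidemic infected r periods)
  (if (= periods 0)
      infected
      (epidemic (* infected (+ 1 r))  ; each period multiplies the count by (1 + r)
                r
                (- periods 1))))

(epidemic 1 1 10)  ; => 1024, one case, doubling every period, after 10 periods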

In prior years he had worked on this in his Squeak environment, together with some educators. They would show children a car moving across the screen, dropping dots as it went, illustrating velocity, and then, by modifying the model, acceleration. Then they would show them Galileo’s experiment, dropping heavy and light balls from the roof of a building (real balls from a real building), recording the ball dropping, and allowing the children to view the video of the ball and simultaneously model it via programming, discovering that the same principle of acceleration applied there as well. Thus, they could see in a couple of contexts how the principle worked, how they could recognize it, and how it related to the real world. The idea was that they could grasp the concepts that make up the idea of acceleration, and then integrate them into their thinking about other important matters they would encounter in the future.
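The gist of that model is simple enough to sketch in a few lines of Scheme. This is my own rough illustration of the idea, not the Squeak/Etoys code: each tick, the position changes by the velocity, and the velocity changes by the acceleration.

(define (fall position velocity acceleration ticks)
  (if (= ticks 0)
      '()
      (cons position                          ; record a "dot" at the current position
            (fall (+ position velocity)       ; position changes by the velocity
                  (+ velocity acceleration)   ; velocity changes by the acceleration
                  acceleration
                  (- ticks 1)))))

(fall 0 0 1 6)  ; => (0 0 1 3 6 10), the gaps between "dots" grow with each tick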

Maxwell quoted from an author named Andrea diSessa to get deeper into the concept of literacy, specifically what literacy in a type of media offers our understanding of issues:

The hidden metaphor behind transparency–that seeing is understanding–is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don’t present an unadulterated “picture” of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than “seeing and directly manipulating.”

What’s a good goal for computing?

The temptation in teaching and learning programming is to get students familiar enough with the concepts and a language that they can start creating things with it. But create what? The typical cases are to allow students to tinker, and/or to create applications which gradually become more complex and feature-rich, with the idea of building confidence and competence with increasing complexity. The latter is not a bad idea in itself, but listening to Alan Kay has led me to believe that starting off with this is the equivalent of jumping to a conclusion too quickly, and misses the point of what’s powerful about computers and programming.

I like what Kay said in “The Early History of Smalltalk” about this:

A twentieth century problem is that technology has become too “easy.” When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

Edit 4-5-2013: I thought I should point out that this quote has some nuance to it that people might miss. I don’t believe Kay is saying that “programming should be hard.” Quite the contrary; one can observe from his designs that he’s advocated the opposite. That isn’t to say that technology should mold itself to what is “natural” for humans. It might require some training and practice, but once mastered, it should magnify or enhance human capabilities, thereby making previously difficult or tedious tasks easier to accomplish and incorporate into a larger goal.

Kay was making an observation about the history of technology’s relationship to society: when useful technology was hard to build, enough time and care went into it that the people who created something useful generally made it well. What he’s pointing out is that people tend to treat the presence of technology as an excuse to use it as a crutch (making immediate use of it toward some other goal that has little to do with what the technology represents), rather than as an invitation to revisit it, criticize its design, and try to make it better. This is an easy sell, because everyone likes something that makes their lives easier (or seems to), but we rob ourselves of something important in the process if that becomes the only end goal. What I see him proposing is that people with some skill should impose a high standard for design on themselves, drawing inspiration for that standard from how the best art humanity has produced was developed and nurtured, but guard against the sense of feeling small, inadequate, and overwhelmed by the challenge.

Maxwell (and Kay) explain further why this idea of “literacy” as being able to understand and communicate important ideas, which includes ideas about complexity, is something worth pursuing:

“If we look back over the last 400 years to ponder what ideas have caused the greatest changes in human society and have ushered in our modern era of democracy, science, technology and health care, it may come as a bit of a shock to realize that none of these is in story form! Newton’s treatise on the laws of motion, the force of gravity, and the behavior of the planets is set up as a sequence of arguments that imitate Euclid’s books on geometry.”

The most important ideas in modern Western culture in the past few hundred years, Kay claims, are the ones driven by argumentation, by chains of logical assertions that have not been and cannot be straightforwardly represented in narrative. …

But more recent still are forms of argumentation that defy linear representation at all: ‘complex’ systems, dynamic models, ecological relationships of interacting parts. These can be hinted at with logical or mathematical representations, but in order to flesh them out effectively, they need to be dynamically modeled. This kind of modeling is in many cases only possible once we have computational systems at our disposal, and in fact with the advent of computational media, complex systems modeling has been an area of growing research, precisely because it allows for the representation (and thus conception) of knowledge beyond what was previously possible. In her discussion of the “regime of computation” inherent in the work of thinkers like Stephen Wolfram, Edward Fredkin, and Harold Morowitz, N. Katherine Hayles explains:

“Whatever their limitations, these researchers fully understand that linear causal explanations are limited in scope and that multicausal complex systems require other modes of modeling and explanation. This seems to me a seminal insight that, despite three decades of work in chaos theory, complex systems, and simulation modeling, remains underappreciated and undertheorized in the physical sciences, and even more so in the social sciences and humanities.”

Kay’s lament too is that though these non-narrative forms of communication and understanding–both in the linear and complex varieties–are key to our modern world, a tiny fraction of people in Western society are actually fluent in them.

“In order to be completely enfranchised in the 21st century, it will be very important for children to become fluent in all three of the central forms of thinking that are now in use. […] the question is: How can we get children to explore ways of thinking beyond the one they’re ‘wired for’ (storytelling) and venture out into intellectual territory that needs to be discovered anew by every thinking person: logic and systems ‘eco-logic?'” …

In this we get Kay’s argument for ‘what computers are good for’ … It does not contradict Papert’s vision of children’s access to mathematical thinking; rather, it generalizes the principle, by applying Kay’s vision of the computer as medium, and even metamedium, capable of “simulating the details of any descriptive model.” The computer was already revolutionizing how science is done, but not general ways of thinking. Kay saw this as the promise of personal computing, with millions of users and millions of machines.

“The thing that jumped into my head was that simulation would be the basis for this new argument. […] If you’re going to talk about something really complex, a simulation is a more effective way of making your claim than, say, just a mathematical equation. If, for example, you’re talking about an epidemic, you can make claims in an essay, and you can put mathematical equations in there. Still, it is really difficult for your reader to understand what you’re actually talking about and to work out the ramifications. But it is very different if you can supply a model of your claim in the form of a working simulation, something that can be examined, and also can be changed.”

The computer is thus to be seen as a modeling tool. The models might be relatively mundane–our familiar word processors and painting programs define one end of the scale–or they might be considerably more complex. [my emphasis — Mark] It is important to keep in mind that this conception of computing is in the first instance personal–“personal dynamic media”–so that the ideal isn’t simulation and modeling on some institutional or centralized basis, but rather the kind of thing that individuals would engage in, in the same way in which individuals read and write for their own edification and practical reasons. This is what defines Kay’s vision of a literacy that encompasses logic and systems thinking as well as narrative.

And, as with Papert’s enactive mathematics, this vision seeks to make the understanding of complex systems something to which young children could realistically aspire, or that school curricula could incorporate. Note how different this is from having a ‘computer-science’ or an ‘information technology’ curriculum; what Kay is describing is more like a systems-science curriculum that happens to use computers as core tools:

“So, I think giving children a way of attacking complexity, even though for them complexity may be having a hundred simultaneously executing objects–which I think is enough complexity for anybody–gets them into that space in thinking about things that I think is more interesting than just simple input/output mechanisms.” (Maxwell, pp. 132-135)

I wanted to highlight the part about “word processors” and “paint programs,” because the idea being discussed is not limited to simulating real-world phenomena. It can encompass simulating “artificial phenomena” as well. It’s a different way of looking at what you are doing and creating when you are programming. It takes you away from asking, “How do I get this thing to do what I want?,” and redirects you to asking, “What entities do we want to make up this desired system, what are they like, and how can they interact to create something that we can recognize, or that otherwise leverages human capabilities?”

Maxwell said that computer science is not the important thing. Rather, what’s important about computer science is what it makes possible: “the study and engagement with complex or dynamic systems–and it is this latter issue which is of key importance to education.” Think about this in relation to what we do with reading and writing. We don’t learn to read and write just to be able to write characters in some sequence, and then for others to read what we’ve written. We have events and ideas (and, perhaps more esoteric to this subject, emotions and poetry) that we write about. That’s why we learn to read and write. It’s the same thing with computer science. If we as a society value it for communicating ideas, it’s pretty worthless if it’s just about learning to read and write code. To make the practice something that’s truly valuable to society, we need to have content, ideas, to read and write about in code. There’s a lot that can be explored with that idea in mind.

Characterizing Alan Kay’s vision for personal computing, Maxwell talked about Kay’s concept of the Dynabook:

Alan Kay’s key insight in the late 1960s was that computing would become the practice of millions of people, and that they would engage with computing to perform myriad tasks; the role of software would be to provide a flexible medium with which people could approach those myriad tasks. … [The] Dynabook’s user is an engaged participant rather than a passive, spectatorial consumer—the Dynabook’s user was supposed to be the creator of her own tools, a smarter, more capable user than the market discourse of the personal computing industry seems capable of inscribing—or at least has so far, ever since the construction of the “end-user” as documented by Bardini & Horvath. (p. 218)

Kay’s contribution begins with the observation that digital computers provide the means for yet another, newer mode of expression: the simulation and modeling of complex systems. What discursive possibilities does this new modality open up, and for whom? Kay argues that this latter communications revolution should in the first place be in the hands of children. What we are left with is a sketch of a possible new literacy; not “computer literacy” as an alternative to book literacy, but systems literacy—the realm of powerful ideas in a world in which complex systems modelling is possible and indeed commonplace, even among children. Kay’s fundamental and sustained admonition is that this literacy is the task and responsibility of education in the 21st century. The Dynabook vision presents a particular conception of what such a literacy would look like—in a liberal, individualist, decentralized, and democratic key. (p. 262)

I would encourage interested readers to read Maxwell’s paper in full. He gives a rich description of the problem of computers in the educational context, giving a much more detailed history of it than I have here, and what the best minds on the subject have tried to do to improve the situation.

The main point I want to get across is this: if we as a society really want the greatest impact out of what computers can do for us, beyond being tools that do canned but useful things, I implore educators to see computers and programming environments more as apparatus, instruments, and media, rather than as agents and tools. By this I mean the computers and programming environments themselves, not what’s “played” on them; and the languages and metaphors, which are the media’s means of expression, not just a means to some non-expressive end. Sure, there will be room for them to function as agents and tools, but the main focus I see as important in this subject area is how the machine helps facilitate substantial pedagogies, and illuminates epistemological concepts that would otherwise be difficult or impossible to communicate.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I was going through exercises in Section 3.3 of SICP recently (Modeling with Mutable Data), and discovered that my version of PLTScheme (4.1.4) does not include the set-car! and set-cdr! operators. It turns out the team that maintains this development environment (now called “Racket”) changed this in Version 4. Originally Scheme had mutable pairs by default: cons created a mutable pair, car and cdr simply read its contents, and you used set-car! and set-cdr! to change them. The dev. team changed PLTScheme such that cons creates immutable pairs (with car and cdr operating on them as usual). To use mutable pairs, you need mcons, mcar, and mcdr for the same operations that cons, car, and cdr carry out on immutable pairs. Where the SICP text says to use set-car! and set-cdr! to manipulate mutable pairs, you need to use set-mcar! and set-mcdr!.

This change only applies to the issue of immutable vs. mutable pairs. The dev. team made this decision, because in their view it made Scheme more of a pure functional language. However, I noticed that the set! operator (which changes a binding) still exists and works as expected in my copy of PLTScheme.
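Here’s a minimal sketch of what the renamed operations look like in PLTScheme 4.x:

(define p (mcons 1 2))  ; a mutable pair, where the SICP text would use (cons 1 2)
(mcar p)                ; => 1, in place of (car p)
(mcdr p)                ; => 2, in place of (cdr p)
(set-mcar! p 10)        ; in place of (set-car! p 10) from the SICP text
(set-mcdr! p 20)        ; in place of (set-cdr! p 20)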

Edit: I goofed a bit when I posted this earlier today. I said that PLTScheme users should use mcar and mcdr to carry out the same operations as set-car! and set-cdr! in the SICP text. That is not the case. People should use set-mcar! and set-mcdr! for those operations.

Read Full Post »

I want to discuss what’s introduced in Section 2.1.3 of SICP, because I think it’s a really important concept.

When we think of data in typical CS and IT settings, we think of strings and numbers, tables, column names (or a reference from an ordinal value), XML, etc. We think of things like “Item #63A955F006-006”, “Customer name: Barbara Owens”, “Invoice total: $2,336.00”.

How do we access that data? We typically use SQL, either directly or through stored procedures, using relations to serialize/deserialize data to/from a persistence engine that uses indexing (an RDBMS). This returns an organized series of strings and numbers, which are then typically organized into objects for a time by the programmer, or a framework. Data is then input into them, and/or extracted from them for display. We send data over a network, or put it into the persistence engine. Data goes back and forth between containers, each of which uses a different system of access, but the data is always considered to be “dead” numbers and strings.

Section 2.1.2 introduces the idea of data abstraction, and I’m sure CS students have long been taught this concept, that data abstraction leads to more maintainable programs because you “program to the interface, not to the object”. This way you can change the underlying implementation without affecting too much how the rest of the program runs. SICP makes the same point.

In 2.1.2 it uses a simple abstraction, a rational number (a fraction), as an example. It defined 3 functions: make-rat, numer, and denom. Make-rat would create a rational number abstraction out of two numbers: a numerator and denominator. Numer and denom return the numerator and denominator from the abstraction, respectively. It calls make-rat a “constructor”, and numer and denom “selectors”.

In 2.1.3 it questions our notion of “data”, though it starts from a higher level than most people would. It asserts that data can be thought of as this constructor and set of selectors, so long as a governing principle is applied which says a relationship must exist between the selectors such that the abstraction operates the same way as the actual concept would. So if you use (in pseudo-code):

x = (make-rat n d)

then (numer x) / (denom x) must always produce the same result as n / d. The book doesn’t mention this at this point, but in order to be really useful, the abstraction would need to work the same way that a rational number would when arithmetic operators are applied to it.

So rather than creating a decimal out of a fraction, as would be the case in most languages:

x = 1 / 4 (which equals 0.25)

You can instead say:

x = (make-rat 1 4)

and this representation does not have to be a decimal, just the ratio 1 over 4, but it can be used in the same way arithmetically as the decimal value. In other words, it is data!
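To make this concrete, here’s a minimal sketch of the constructor and selectors from Section 2.1.2 (the book’s version also reduces fractions to lowest terms with gcd, which I omit here):

(define (make-rat n d) (cons n d))  ; constructor: glue numerator and denominator together
(define (numer x) (car x))          ; selector: retrieve the numerator
(define (denom x) (cdr x))          ; selector: retrieve the denominator

(define x (make-rat 1 4))
(numer x)  ; => 1
(denom x)  ; => 4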

In Section 2.1.2, the abstraction used for a rational number was a “pair”, a cons structure (created by executing (cons a b)). In 2.1.3 it shows that the same thing that was done to create a rational number data abstraction can be applied to the concept of a “pair”:

This point of view can serve to define not only “high-level” data objects, such as rational numbers, but lower-level objects as well. Consider the notion of a pair, which we used in order to define our rational numbers. We never actually said what a pair was, only that the language supplied procedures cons, car, and cdr for operating on pairs. But the only thing we need to know about these three operations is that if we glue two objects together using cons we can retrieve the objects using car and cdr. That is, the operations satisfy the condition that, for any objects x and y, if z is (cons x y) then (car z) is x and (cdr z) is y. Indeed, we mentioned that these three procedures are included as primitives in our language. However, any triple of procedures that satisfies the above condition can be used as the basis for implementing pairs. This point is illustrated strikingly by the fact that we could implement cons, car, and cdr without using any data structures at all but only using procedures. Here are the definitions:

(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m))))
  dispatch) 

(define (car z) (z 0)) 

(define (cdr z) (z 1))

This use of procedures corresponds to nothing like our intuitive notion of what data should be. Nevertheless, all we need to do to show that this is a valid way to represent pairs is to verify that these procedures satisfy the condition given above.

This example also demonstrates that the ability to manipulate procedures as objects automatically provides the ability to represent compound data. This may seem a curiosity now, but procedural representations of data will play a central role in our programming repertoire. This style of programming is often called message passing.

So, just to make it clear, what “cons” does is construct a procedure that “cons” calls “dispatch” (I say it this way, because “dispatch” is defined in cons’s scope), and return that procedure to the caller of “cons”. The procedure that is returned can then be used as data since the above “car” and “cdr” procedures produce the same result as the “car” and “cdr” procedures we’ve been using in the Scheme language.
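To see the mechanics in action, here’s a short walk-through using the procedural definitions above:

(define z (cons 3 4))  ; z is now an instance of the "dispatch" procedure
(car z)                ; => (z 0) => 3
(cdr z)                ; => (z 1) => 4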

SICP mentions in this section that message passing will be used further as a basic tool in Chapter 3 for more advanced concepts. Note that I have talked about message passing before. It is central to the power of object-oriented programming.

Note as well that the text implies an analog between procedures, in a Lisp language, and objects. The above example is perhaps a bad way to write code to get that across. In the corresponding lecture for this section (Lecture 2b, “Compound Data”) Abelson used a lambda ( (lambda (m) (cond …)) ) instead of “(define (dispatch m) …)”. What they meant to get across is that just as you can use “cons” multiple times in Scheme to create as many pairs as you want, you can use this rendition of “cons” to do the same. Instead of creating multiple blobs of memory which just contain symbols and pointers (as pairs are traditionally understood), you’re creating multiple instances of the same procedure, each one with the pair values that were assigned to it embedded inside. These procedure instances exist as their own entities. There is not just one procedure established in memory called “dispatch”, which is called with pieces of memory to process, or anything like that. These procedure instances are entities “floating” in memory, and are activated when parameters are passed to them (as the above “car” and “cdr” do). These procedures are likewise garbage collected when nothing refers to them anymore (they pass out of existence).
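From what I remember of the lecture, the lambda version looked roughly like this (I’m reconstructing the form, so treat it as illustrative; the behavior is the same as the “dispatch” version above):

(define (cons x y)
  (lambda (m)                 ; each call to cons returns a fresh procedure instance
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m)))))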

In the book’s example, “cons” is the constructor (of a procedure/object called “dispatch”), and “car” and “cdr” are the selectors. What we have here is a primitive form of object-orientation, similar to what is known in the Smalltalk world, with the mechanics of method dispatch exposed. The inference from this is that objects can also be data.

What SICP begins to introduce here is a blurring of the lines between code and data, though at this point it only goes one way, “Code is data.” Perhaps at some point down the road it will get to sophisticated notions of the inverse: “Data is code”. The book got into this concept in the first chapter just a little, but it was really simple stuff.

Getting back to our traditional notions of data, I hope this will begin to show how limited those notions are. If procedures and objects can be data, why can’t we transmit, receive, persist, and search for them just as we can with “dead” data (strings and numbers)? In our traditional notions of CS and IT (assuming they’re focused on using OOP) we use classes to define methods for dealing with data, and elemental interactions with other computing functions, such as data storage, display, interaction between users and other computers, etc. We have a clear notion of separating code (functionality) from data, with the exception of objects that get created while a program is running. In that case we put a temporary “persona” over the real data to make it easier to deal with, but the “persona” is removed when data is stored and/or we exit the program. What we in the whole field neglect is that objects can be data, but they can only really be such if we can do everything with them that we now do with the basic “dead” forms of data.

There is at least one solution out there that is designed for enterprises, and which provides inherent object persistence and search capabilities, called Gemstone. It’s been around for 20+ years. The thing is most people in IT have never heard of it, and would probably find the concept too scary and/or strange to accept. From what I’ve read, it sounds like a good concept. I think such a system would make application design, and even IT system design if it was done right, a lot more straightforward, at least for systems where OO architecture is a good fit. Disclaimer: I do not work for Gemstone, and I’m not attempting to advertise for them.

As I think about this more advanced concept, I can see some of where the language designers of Smalltalk were going with it. In it, objects and their classes, containing information, can be persisted and retrieved in the system very easily. A weakness I notice now in the language is there’s no representational way to create instance-specific functionality, as exists in the above Scheme example. Note that the values that are “consed” are inlined in “dispatch”. There’s a way to put instance-specific code into an object in Smalltalk, but it involves creating a closure–an anonymous object–and you would essentially have to copy the code for “dispatch”. You would have to treat Smalltalk like a functional language to get this ability. The reason this interests me is I remember Alan Kay saying once that he doesn’t like the idea of state being stored inside of an object. Well, the above Scheme example is a good one for demonstrating “stateless OOP”.

Edit 7-9-2010: After I posted this article I got the idea to kind of answer my own question, because it felt insensitive of me to just leave it hanging out there when there are some answers that people already know. The question was, “Why don’t we have systems of data that allow us to store, retrieve, search, and transmit procedures, objects, etc. as data?” I’ve addressed this issue before from a different perspective. Alan Kay briefly got into it in one of his speeches that’s been on the internet. Now I’m expanding on it.

The question was somewhat rhetorical to begin with. It’s become apparent to me, as I’ve looked at my programming experience from a different perspective, that a few major things have prevented it from being resolved. One is that we view computers as devices for automation, not as a new medium. So the idea of computers adding meta-knowledge to information, and that this meta-knowledge would be valuable to the user, is foreign, though slowly it’s becoming less so with time, particularly in the consumer space. What’s missing is a confidence that end users will be capable of, and want to, manipulate and modify that meta-knowledge. A second reason is that each language has had its own internal modeling system, and for quite a while language designers have gotten away with this, because most people weren’t paying attention to it. This is beginning to change, as we can see in the .Net and Java worlds, where a common modeling system for languages is preferred. A third reason is that with the introduction of the internet, openness to programming is seen as a security risk. This is because of a) bad system design, and b) compromises made in the name of being “all things to all people”: systems accept some risk in order to stay backward-compatible with older models of interaction that became less secure once wide-scale networking arrived, and to cater to programmers’ desires for latitude and features so that the platform would be more widely accepted, or so the thinking goes.

No one’s figured out a way to robustly allow any one modeling system to interact or integrate with other modeling systems. There have been solutions developed to allow different modeling systems to interact, either via XML using networking protocols, or by developing other languages on top of a runtime. I’ve seen efforts to transmit objects between two specific runtimes, .Net and the JVM, but this area of research is not well developed. Each modeling system creates its own “walled garden”, usually with some sort of “escape hatch” to allow a program written in one system to interact with another (usually a C interface or some FFI (foreign function interface)), which isn’t that satisfactory. It’s become a bit like what used to be a plethora of word processing or spreadsheet document file formats, each incompatible with the other.

Programmers find file format standards easier to deal with, because it involves loading binary data into a data structure, and/or parsing a text file into such a structure, but this limits what can be done with the information that’s in a document, because it takes away control from the recipient in how the information can be used and presented. As I’ve discussed earlier, even our notions of “document” are limiting. Even though it could serve a greater purpose, it seems not much time has been invested in trying to make different modeling systems seamless between each other. I’m not saying it’s easy, but it would help unleash the power of computing.

Code is encoded knowledge. Knowledge should be fungible such that if you want to take advantage of it in a modeling system that is different from the one it was originally encoded in, it should still be accessible through some structure that is compatible with the target system. I understand that if the modeling system is low level, or more primitive, the result is not going to look pretty, but it could still be accessed through some interface. What I think should be a goal is a system that makes such intermingling easy to create, rather than trying to create “runtime adapters”. This would involve understanding what makes a modeling system possible, and focusing on creating a “system for modeling systems”. You can see people striving for this in a way with the creation of alternative languages that run on the JVM, but which have features that are difficult to include with the Java language itself. Java is just used as a base layer upon which more advanced modeling systems can be built. This has happened with C as well, but Java at least allows (though the support is minimal) a relatively more dynamic environment. What I’m talking about would be in a similar vein, except it would allow for more “machine models” to be created, as different modeling systems expect to run on different machine architectures, regardless of whether they’re real or virtual.

I think the reason for this complexity is, as Alan Kay has said, we haven’t figured out what computing is yet. So what language designers create reduces everything down to one or more aspects of computing, which is embedded in the design of languages.

So my question is really a challenge for future reference.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

This problem challenged me to really think about the efficiency of my algorithm. I have trouble relating arithmetic notions to computing, so understanding the information that the authors had laid out was a problem for me at first.

Whatever solution you come up with for this, be sure to check its Big-O function. I thought I had written a novel solution, based on the ideas in the book. What I found after checking the number of steps it took was that its performance was better than linear, but it was close to linear. I wasn’t satisfied with that. So I looked at ways of making it more efficient, and found that not only was I inefficient in my process, but I was also inefficient with my code. You need to put the focus on successive squaring. A good working knowledge of what exponents represent will work in your favor.

I thought at first that I needed two functions: one for even powers, and one for odd, because I figured out different implementations for each. Then I found that I needed only one function for computing an exponent. There’s a hint about that in the text: b^n = b ⋅ b^(n-1).
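For reference, here’s the successive squaring idea as the book presents it in Section 1.2.4. This is the recursive version; the exercise asks you to evolve an iterative process from the same idea:

(define (square x) (* x x))

(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))  ; b^n = (b^(n/2))^2 for even n
        (else (* b (fast-expt b (- n 1))))))        ; b^n = b * b^(n-1) for odd n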

I found the hints written into the exercise a bit confusing, but now that I’ve solved it I see what they were getting at. To me, they turned out to be somewhat helpful, but also somewhat misleading. They tried to be helpful, but they didn’t want to reveal too much. In the end I think they were “too smart” for their own good.

One of the hints is an abstraction. One of the hints relates to a concept that was discussed earlier, and it’s just supposed to help you see it more clearly so you can use the idea in your code. Don’t think that you need to translate all of the hints literally into equivalent code. Use them as inspiration–use what you find useful at the moment. If it doesn’t make sense after thinking about it for a bit, just put it out of your mind. Focus on developing what you think will solve the problem. Once you get something going, see how it does performance-wise, and then perhaps reconsider the hints. Don’t be thinking about, “Did I put all of the hints in my code correctly?” Once you get the problem solved, you’ll see as I did what the hints were all about.

In one of my early versions of a solution, I noticed it was making “leaps and bounds” using successive squaring in computing the answer, and then I had it slow down quite a bit because I was trying to get to an intermediate step, which the successive squaring technique would overshoot. I’m being a bit misleading by saying this, but I don’t want to give too much away. Look at the gap between where successive squaring gets you, and where you need to go, and think of how to get through that gap more quickly. The hint I’ll give you is: once you’ve gotten to the point of considering this, the answer is already within your grasp.

Read Full Post »

I was going through my list of links for this blog (some call it a “blogroll”) and I came upon a couple items that might be of interest.

The first is, it appears that Dolphin Smalltalk is getting back on its feet again. You can check it out at Object Arts. I had reported 3 years ago that it was being discontinued. So I’m updating the record about that. Object Arts says it’s working with Lesser Software to produce an updated commercial version of Dolphin that will run on top of Lesser’s Smalltalk VM. According to the Object Arts website this product is still in development, and there’s no release date yet.

Another item is since last year I’ve been hearing about a branch-off of Squeak called Pharo. According to its description it’s a version of Squeak designed specifically for professional developers. From what I’ve read, even though people have had the impression that the squeak.org release was also for professional developers, there were some things that the Pharo dev. team felt were getting in the way of making Squeak a better professional dev. tool, mainly the EToys package, which has caused consternation. EToys was stripped out of Pharo.

There’s a book out now called “Pharo by Example”, written by the same people who wrote “Squeak by Example”. Just from perusing the two books, they look similar. There were a couple differences I picked out.

The PbE book says that Pharo, unlike Squeak, is 100% open source. There’s been talk for some time now that while Squeak is mostly open source, there has been some code in it that was written under non-open source licenses. In the 2007-2008 time frame I had been hearing that efforts were under way to make it open source. I stopped keeping track of Squeak about a year ago, but last I checked this issue hadn’t been resolved. The Pharo team rewrote the non-open source code after they forked from the Squeak project, and I think they said that all code in the Pharo release is under a uniform license.

The second difference was that they had changed some of the fundamental architecture of how objects operate. If you’re an application developer I imagine you won’t notice a difference. Where you would notice it is at the meta-class/meta-object level.

Other than that, it’s the same Squeak, as best I can tell. According to what I’ve read Pharo is compatible with the Seaside web framework.

An introduction to the power of Smalltalk

I’m changing the subject some, but I couldn’t resist talking about this, because I read a neat thing in the PbE book. I imagine it’s in SbE as well. Coming from the .Net world, I had gotten used to the idea of “setters” and “getters” for class properties. When I first started looking at Squeak, I downloaded Ramon Leon‘s Squeak image. I may have seen this in a screencast he produced. I found out there was a modification to the browser in his image that I could use to have it set up default “setters” and “getters” for my class’s variables automatically. I thought this was neat, and I imagine other IDEs already had such a thing (like Eclipse). I used that feature for a bit, and it was a good time-saver.

PbE revealed that there’s a way to have your class set up its own “setters” and “getters”. You don’t even need a browser tool to do it for you. You just use the #doesNotUnderstand message handler (also known as “DNU”), and Smalltalk’s ability to “compile on the fly” with a little code generation. Keep in mind that this happens at run time. Once you get the idea, it’s not that hard, it turns out.

Assume you have a class called DynamicAccessors (though it can be any class). You add a message handler called “doesNotUnderstand” to it:

DynamicAccessors>>doesNotUnderstand: aMessage
    | messageName |
    messageName := aMessage selector asString.
    (self class instVarNames includes: messageName)
        ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
                 ^aMessage sendTo: self].
    ^super doesNotUnderstand: aMessage

This code traps a message being sent to a DynamicAccessors instance when there is no method matching what’s being called. It extracts the method name that’s being called, looks to see if the class (DynamicAccessors) has a variable by the same name, and if so, compiles a method by that name, with a little boilerplate code that just returns the variable’s value. Once it’s created, it resends the original message to itself, so that the now-compiled accessor can return the value. However, if no variable exists that matches the message name, it triggers the superclass’s “doesNotUnderstand” method, which will typically activate the debugger, halting the program, and notifying the programmer that the class “doesn’t understand this message.”

Assuming that DynamicAccessors has a member variable “x”, but no “getter”, it can be accessed by:

myDA := DynamicAccessors new.
someValue := myDA x

If you want to set up “setters” as well, you could add a little code to the doesNotUnderstand method that looks for a parameter value being passed along with the message, and then compiles a default method for that.

Of course, one might desire to have some member variables protected from external access and/or modification. I think that could be accomplished with a variable naming convention, or some other convention, such as a collection that contains member variable names along with a notation specifying to the class how certain variables should be accessed. The above code could follow those rules, allowing access to some internal values and not others. A thought I had is that you could put this behavior in a subclass of Object, and then derive your own classes from that. That way it will apply to any classes you create that you choose to have it apply to (for the rest, just have them derive from Object directly).

Once an accessor is compiled, the above code will not be executed for it again, because Smalltalk will know that the accessor exists, and will just forward the message to it. You can go in and modify the method’s code however you want in a browser as well. It’s as good as if you created the accessor yourself.

Edit 5-6-2010: Heard about this recently. Squeak 4.1 has been released. From what I’ve read on The Weekly Squeak, Squeak has been 100% open source since Version 4.0. I was talking about this earlier in this article in relation to Pharo. 4.1 features some “touch up” stuff. It sounds like this makes it nicer to use. The description says it includes some first-time user features and a nicer, cleaner visual interface.

Read Full Post »

This is the first in what I hope will be many posts talking about Structure and Interpretation of Computer Programs, by Abelson and Sussman. This and future posts on this subject will be based on the online (second) edition that’s available for free. I started in on this book last year, but I haven’t been posting about it because I hadn’t gotten to any interesting parts. Now I finally have.

After I solved this problem, I really felt that this scene from The Matrix, “There is no spoon” (video), captured the experience of what it was like to finally realize the solution. First realize the truth, that “there is no spoon”–separate form from appearance (referring to Plato’s notion of “forms”). Once you do that, you can come to see, “It is not the spoon that bends. It is only yourself.” I know, I’m gettin’ “totally trippin’ deep, man,” but it was a real trip to do this problem!

This exercise is an interesting look at how programmers such as myself have viewed how we should practice our craft. It starts off innocuously:

A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n – 1) + 2f(n – 2) + 3f(n – 3) if n >= 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by an iterative process.

(Update 5-22-2010: Don’t read too much into the “cross-outs” I’m using. I’m just trying to be more precise in my description.) Writing it recursively is pretty easy. You just do a translation from the ~~mathematical~~ classical algebraic notation to Scheme code. It speaks for itself. The challenging part is writing ~~the same thing~~ a solution that gives you the same result using an iterative process. Before you say it can’t be done, it can!

I won’t say never, but you probably won’t see me describing how I solve any of the exercises, because that makes it too easy for CS students to just read this and copy it. I want people to learn this stuff for themselves. I will, however, try to give some “nudges” in the right direction.

  • I’ll say this right off the bat: This is not ~~a (classical) mathematical~~ an algebra problem you’re dealing with. It’s easy to fall into thinking this, especially since you’re presented with some algebra as something to implement. This is a computing problem. I’d venture to say it’s more about the mathematics of computing than it is about algebra. As developers we often think of the ideal as “expressing the code in a way that means something to us.” While this is ideal, it can get in the way of writing something optimally. This is one of those cases. I spent several hours trying to optimize the ~~math~~ algebraic operations, and looking for mathematical patterns that might optimize the recursive algorithm into an iterative one. Most of ~~my mathematical~~ the patterns I thought I had fell to pieces. It was a waste of time anyway. I did a fair amount of fooling myself into thinking that I had created an iterative algorithm when I hadn’t. It turned out to be the same recursive algorithm done differently.
  • Pay attention to the examples in Section 1.2.1 (Linear Recursion and Iteration), and particularly Section 1.2.2 (Tree Recursion). Notice the difference, in the broadest sense, in the design between the recursive and iterative algorithms in the sample code. (For a reference point, the book’s iterative factorial is reproduced after this list.)
  • As developers we’re used to thinking about dividing code into component parts, and that this is the right way to do it. The recursive algorithm lends itself to that kind of thinking. Think about information flow instead for the iterative algorithm. Think about what computers do beyond calculation, and get beyond the idea of calling a function to get a desired result.
  • It’s good to take a look at how the recursive algorithm works to get some ideas about how to implement the iterative version. Try some example walk-throughs with the recursive code and watch what develops.
  • Here’s a “you missed your turn” signpost (using a driving analogy): If you’re repeating calculation steps in your iterative algorithm, you’re probably not writing an iterative procedure. The authors described the characteristics of a recursive and an iterative procedure earlier in the chapter. See how well your procedure fits into either description. There’s a fine line between what’s “iterative” and what’s “recursive” in Scheme, because in both cases you have a function calling itself. The difference is that in a recursive procedure you tend to have the function calling itself more than once in the same expression, whereas in an iterative procedure you tend to have a simpler calling scheme, where the function only calls itself once in an expression (though it may call itself once in more than one expression inside the function), and all you’re doing is computing an intermediate result, and incrementing a counter, to feed into the next step. As a rule, you should not have operators “lingering,” waiting for a function call to return, though the authors did show an example earlier of a function that was mostly iterative, with a little recursion thrown in now and then. With this exercise I found a solution that is strictly iterative.
  • As I believe was mentioned earlier in this chapter (in the SICP book), remember to use the parameter list of your function as a means for changing state.
  • I will say that both the recursive and the iterative functions are pretty simple, though the iterative function uses a couple concepts that most developers are not used to. That’s why it’s tricky. When I finally got it, I thought, “Oh! There it is! That’s cool.” When you get it, it will just seem to flow very smoothly.
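As a reference point for the “shape” of an iterative procedure, here is the book’s iterative factorial from Section 1.2.1, in its block-structured form. Notice that nothing is left waiting on a recursive call to return; all of the state is carried forward in the parameters.

(define (factorial n)
  (define (iter product counter)
    (if (> counter n)
        product
        (iter (* counter product)   ; intermediate result passed forward
              (+ counter 1))))      ; counter incremented for the next step
  (iter 1 1))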

Happy coding!

Read Full Post »

Older Posts »