A blogger finally getting their wings

I’ve been following José A. Ortega-Ruiz (he goes by “jao” for short) for about 6 years now, on his blog Programming Musings. Recently I noticed he wasn’t posting much, if at all. Finally he said he was starting up a new blog here, using a static blog tool written in Racket (a Scheme implementation) called “Frog,” for “frozen blog.” I found his 2nd post, called “Where my mouth is,” inspirational.

For many years, I’ve been convinced that programming needs to move forward and abandon the Algol family of languages that, still today, dampens the field. And that that forward direction has been signaled for decades by (mostly) functional, possibly dynamic languages with an immersive environment. But it wasn’t until recently that I was able to finally put my money where my mouth has been all these years.

A couple of years ago I finally became a co-founder and started working on a company I could call my own (I had been close in the past, but not really there), and was finally in a position to really influence our development decisions at every level. Even at the most sensitive of them all: language choice.

He says he’s working with a team of people who are curious, and who are willing to take the risk of trying out less traditional, but, as they perceive, more powerful programming environments, such as Clojure (a Lisp variant that runs on the JVM, and is pronounced “closure”). Read the rest of it.

I once aspired to work with a team such as this. I had been working in IT services for several years, and I thought I’d continue on in that vein with something like Smalltalk, starting my own little “web shop,” or just happening to find a company that was already creating IT systems with something like it. More recently I’ve been finding computing as a field of study more interesting, not in the typical sense that academic computer science teaches, but as a phenomenon unto itself: an interesting effect of a machine that’s capable of generating, and acting like, a machine of a different kind from itself, simply by being fed different patterns of a certain nature. As Alan Kay has demonstrated, looking at computing this way, you can still arrive at an environment that looks recognizable, but is structured vastly differently under the covers from the typical user-oriented system of today.

I’ve put two links in my links sidebar for jao’s blogs, “Programming Musings” (his old blog) and “Programming Musings 2,” since his old blog is rather like an archive of really interesting and valuable knowledge in computer science, and I encourage people to scan through it for anything they find of value.

I wish jao the best of luck in his new venture!

Reviving programming as literacy

I came across this interview with Lesley Chilcott, the producer of “An Inconvenient Truth” and “Waiting For Superman.” Extending her emphasis on improving education, she produced a short 9-minute video selling the idea of “You should learn to code,” both to adults and children. It addresses two points: 1) the anticipated shortage of programmers needed to write software in the future, and 2) the increasing ubiquity of programming in all sorts of fields where people wouldn’t expect it, such as manufacturing and agriculture.

The interview gets interesting at 3 minutes 45 seconds in.

Michelle Fields, the interviewer, asked what I thought were some insightful questions. She started things off with:

It seems as though the next generation is so fluent in technology. How is it that they don’t know what computer programming is?

Chilcott said:

I think the reason is, you know, we all use technology every day. It’s surrounding us. Like, we can debate the pros and cons of technology/social media, but the bottom line is it’s everywhere, right? So I think a lot of people know how to read it. They grow up playing with an iPhone or something like that, but they don’t know how to write it. And so when you say, “Do you know what this is,” specifically, or what this job is–and you know, those kids are in first, second, fifth grade–they know all about it, but they don’t know what the job is.

I found this answer confusing. She’s kind of on the right track, thinking of programming as “writing.” I cut her some slack, because as she admits in the interview, she’s just started programming herself. However, as I’ve said before, running software is not “reading.” It’s really more like being read to by a machine, like listening to an audio book, or someone else reading to you. You don’t have to worry about the mental tasks of pronunciation, sentence construction, or punctuation. You can just listen to the story. Running software doesn’t communicate the process that the code is generating, because there’s a lot that the person using it is not shown. This is on purpose, because most people use software to accomplish some utilitarian task unrelated to how a computer works. They’re not using it to understand a process.

The last sentence of her answer came across as muddled. I think what she meant was that kids know all about using technology, but they don’t know how to create it (“what the job is”).

Fields then asked,

There was this study which found that 56% of students would rather eat broccoli than learn math. Do you think that since computer programming is somewhat related to math, that that’s the reason children and students shy away from it?

Chilcott said:

It could be. That is one of the myths that exist. There is some, you know, math, but as Bill Gates and some other people said, you know, addition, subtraction–it’s much more about problem solving, and I think people like to problem-solve; they like mysteries, they like decoding things. It’s much more about that than complicated algorithms.

She’s right that there is problem solving involved in programming, but she’s either mistaken or confusing math with arithmetic when she says that the relationship between math and programming is a “myth.” I can understand why she tries to wave it off, because as Fields pointed out, most students don’t like math. I contend, as do some mathematicians, that this is due to the way it’s taught in our schools. The essence of math gets lost. Instead it’s presented as a tool for calculation, and possibly a cognitive development discipline for problem solving, neither of which communicates what it really is, and both of which strip away a lot of its beauty.

In reality math is pervasive in programming, but to understand why I say this you have to understand that math is not arithmetic (addition and subtraction), as she suggests. This confusion is common in our society. I talk more about this here. Having said this, it does not mean that programming is hard right off the bat. The math involved has more to do with logic and reasoning. I like the message in the video below from a couple of the programmers interviewed: “You don’t have to be a genius to know how to code. … Do you have to be a genius to do math? No.” I think that’s the right way to approach this. Math is important to programming, but it’s not just about calculating a result. While there’s some memorization involved in learning a programming language’s rules and knowing what different things are called, that’s not a big part of it.

The cool thing is you can accomplish some simple things in programming, to get started, without worrying about math at all. It becomes more important if you want to write complex programs, but that’s something that can wait.

My current understanding is that the math in programming is about understanding the rules of a system, what the statements used in that system imply, and then the effects of those implications. That sounds complicated, but it’s just something that has to be learned to do anything significant with programming, and once learned it becomes more and more natural. I liken it to understanding how to drive a car on the road. You don’t have to learn this concept right away, though. When first starting out, you can just look at and enjoy the effects of trying out different things, exploring what a programming environment offers you.
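To make that concrete, here’s a minimal sketch of what I mean by reasoning from a system’s rules to their implications. I’m using Python purely for illustration, and the variable n and the numbers are assumptions for the example:

    # Rule of the system: range(n) produces the integers 0, 1, ..., n-1.
    # Implication: their sum must be n * (n - 1) / 2 -- something we can
    # reason out on paper, then check against the machine's behavior.
    n = 10
    print(sum(range(n)))     # prints 45
    print(n * (n - 1) // 2)  # prints 45 -- the effect matches the implication

Notice there’s no heavy arithmetic here; the “math” is the reasoning from rule to consequence.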

Where Chilcott shines in the interview above is when she becomes the “organizer.” She said that even though 95% of the schools have computers and internet access, only 10% have what she calls a “computer science” course. (I wish they’d go back to calling it a “programming course.” Computer science is more than what most of these schools teach, but I’m being nit-picky.) The cool thing about Code.org, a web site she promotes, is that it tries to locate a school near you that offers programming courses. If there aren’t any, no problem. You can learn some basics of programming right inside your browser using the online tools that it offers on the site.

The video Chilcott produced is called “Code Stars” in the above interview, but when I went looking for it I found it under the name “the Code.org film,” or, “What Most Schools Don’t Teach.”

Here is the full 9-minute video:

If you want the shorter videos, you can find them here.

The programming environment you see kids using in these videos is called “Scratch.”

Gabe Newell said of programming:

When you’re programming, you’re teaching possibly the stupidest thing in the entire universe–a computer–how to do something.

I see where Newell is going with this, but from my perspective it depends on what programming environment you’re using. Some programming languages have the feel of you “teaching” the system when you’re programming. Others have the feel of creating relationships between simple behaviors. Others, still, have the feel of using relationships to set up rules for a new system. Programming comes in a variety of approaches. However, the basic idea that Newell gets across is true: computers only come with a set of simple operations, and that’s it. They don’t do very much by themselves, or even in combination. It’s important for those new to programming to learn this early on.

Some of my early experiences in programming match those of new programmers even today. One of them is that, when using a programming language, one is tempted to assume that the computer will infer the meaning of a programming expression from context. There is some context used in programming, but not much, and it’s highly formalized. It’s not intuitive. I remember that when I first learned this, it felt like the joke where someone introduces a friend to a dumb, witless character in a skit. They say, “Say hi to my friend, Frank,” and the dummy says, “Hi to my friend Frank.” “NO! I mean…say hello,” they reply, making a hand gesture trying to get the two to connect, and the dummy might look at the friend and say, “Hello,” but that’s it. That’s the kind of realization new programmers come to. Yeah, the computer has to have almost everything explained to it (or modeled), even things we do without thinking about them. It’s up to the programmer to make the connections between the few things the computer knows how to do, to make something larger happen.

Jack Dorsey talked about programming in a way that I think is important. His ultimate goal when he started out was to model something, and make the model malleable enough that he could manipulate it, because he wanted to use it for understanding how cities work.

Bill Gates emphasized control. This is a common early motivation for programmers. Not necessarily controlling people, but controlling the computer. What Gates was talking about was what I’d call “making your own world,” like Dorsey was saying, but he wanted to make it real. When I was in high school (late 1980s) it was a rather common project for aspiring programming students to create “matchmaking” programs, where boys and girls in the whole school would answer a simple questionnaire, and a computer program that a student had written would try to match them up by interests, not unlike some of the online dating sites that are out there now. I never heard of any students finding their true love through one of these projects, but it was fun for some people.

Vanessa Hurst said, “You don’t have to be a genius to know how to code. You need to be determined.” That’s pretty much it in a nutshell. In my experience everything else flowed from determination when I was learning how to do this. It will drive you to learn what you need to learn to get it, even if sometimes it’s subject matter you find tedious and icky. You learn to just push through it to get to the glorious feeling at the end of having accomplished what you set out to do.

Newell said at the end of the video,

The programmers of tomorrow are the wizards of the future. You’re going to look like you have magic powers compared to everybody else.

That’s true, but this has been true for a long time. In my professional work developing custom database solutions for business customers, I had the experience of being viewed like a magician, because customers didn’t know how I did what I did. They just appreciated the fact that I could do it. I really don’t mean to discourage anyone, because I still enjoy programming today, and I want to encourage people to learn programming, but I feel the need to say something, because I don’t want people to get disillusioned over this. This status of “wizard,” or “magician,” is not always what it’s cracked up to be. It can feel great, but there is a flip side to it that can be downright frustrating. This is because people who don’t know a whit of what you know how to do can get confused about what your true abilities are, and they can develop unrealistic expectations of you. I’ve found that wherever possible, the most pleasurable work environment is working among those who also know how to code, because we’re able to size each other up and assign tasks appropriately. I encourage those who are pursuing software development as a career to shoot for that.

A couple things I can say for being able to code are:

  • It makes you less of a “victim” in our technology world. Once you know how to do it, you have an idea of how other programs work, and the pitfalls they can fall into that might compromise your private information, allow a computer cracker to access it, or take control of your system. You don’t have to feel scared by the alarming “hacking” or phishing reports you hear on the news, because you can be choosy about what software you use based on how it was constructed, what it’s capable of, and how much power it gives you (not someone else), rather than basing a decision just on the features it has, or cool graphics and promotion. You can become a discriminating user of software.
  • You gain the power to create the things that suit you. You don’t have to use software that you don’t like, or that you think is being offered on unreasonable terms. You can create your own, and it can be whatever you want. It’s just a matter of the knowledge you’re willing to gather and the amount of energy you’re willing to put into developing the software.

Edit 5-20-2013: While I’m on this subject, I thought I should include this video by Mitch Resnick, who has been involved in creating Scratch at MIT. Similar to what Lesley Chilcott said above, he said, “It’s almost as if [users of new technologies] can read, but not write,” referring to how people use technology to interact. I disagreed with the notion, above, that using technology is the same as reading. Resnick hedged a bit on that. I can kind of understand why he might say this, because running a Scratch program is like reading it: you can see how the code creates its results in the environment. This is not true, however, of much of the technology people use today.

Mark Guzdial asked a question a while back that I thought was important, because it brings this issue down to where a lot of people live. If the kind of literacy I’m going to talk about below is going to happen, the concept needs to be able to come down “out of the clouds” and become more pedestrian. Not to say that literacy needs to be watered down in toto (far from it), but that it should be possible to read and write to communicate everyday ideas and experiences without being super sophisticated about it. What Mark asked was, in the context of a computing medium, what would be the equivalent of a “note to grandma”? I remember suggesting Dan Ingalls’s prop-piston concept from his Lively Kernel demos as one candidate. Resnick provided what I thought were some other good ones, but in the context of Mother’s Day.

Context reversal

The challenge that faces new programmers today differs in one fundamental way from when I learned programming as a child. Today, kids are introduced to computers before they enter school. They’re just “around.” If you’ve got a cell phone, you’ve got a computer in your pocket. The technology kids use presents them with an easy-to-use interface, but the emphasis is on use, not authoring. There is so much software around, it seems you can just wish for it and it’s there. The motivation to get into programming has to be different from what motivated me.

When I was young the computer industry was still something new. It was not widespread. Most computers that were around were big mainframes that only corporations and universities could afford and manage. When the first microcomputers came out, there wasn’t much software for them. It was a lot easier to be motivated to learn programming, because if you didn’t write it, it probably didn’t exist, or it was too expensive to get (depending on your financial circumstances). The way computers operated was more technical than it is today. We didn’t have graphical user interfaces (at first). Everything was done from some kind of text command-line interface that filled the entire screen. Every computer came with a programming language as well, along with a small manual giving you an introduction to how to use it.

PC-DOS, from Wikipedia

It was expected that if you bought a computer you’d learn something about programming it, even if it was just a little scripting. Sometimes the line between the operating system’s command-line interface and the programming language was blurred. So even if all you wanted to do was manipulate files and run programs, you were learning a little about programming just by learning how to use the computer. Some of today’s software developers came out of that era (including yours truly).

By the mid-1990s, computer and operating system manufacturers had stopped including programming languages with their systems. Programming languages had also been taken over by professionals; the typical languages used by developers were much harder for beginners to learn. There were educational languages around, but they had fallen behind the times. They were designed for older personal computer systems, and when the systems got more sophisticated no one came along to update them. That began to be remedied only in the last 10 years.

Computer science was still a popular major at universities in the 1990s, due to the dot-com craze. When that bubble burst in 2000, that popularity went away, too. So in the last 18 years we’ve had what I’d call an “educational programming winter.” Maybe we’ll see a revival. I hope so.

Literacy reconsidered

I’m directing the rest of this post to educators, because there are some issues around a programming revival I’d like to address. I’m going to share some more detailed history, and other perspectives on computer programming.

What many may not know is that we as a society have already gone through this once. From the late 1970s to the mid-1980s there was a major push to teach programming in schools as “computer literacy.” This was the regime that I went through. The problem was that some mistakes were made, and this caused the educational movement behind it to collapse. I think this happened because of a misunderstanding of what’s powerful about programming, and I’d like educators to evaluate their current thinking in light of this, so that hopefully they do not repeat the mistakes of the past.

As I go through this part, I’ll mostly be quoting from a Ph.D. thesis written by John Maxwell in 2006 called “Tracing the Dynabook: A Study of Technocultural Transformations” (h/t Bill Kerr).

Back in the late 1970s microcomputers/personal computers were taking off like wildfire with Apple IIs and Commodore VIC-20s, and later, Commodore 64s and IBM PCs. They were seen as “the future.” Parents didn’t want their children to be “left behind” as “technological illiterates.” This was the first time computers were being brought into the home. It was also the first time many schools were able to grant students access to computers.

Educators thought about the “benefits” of using a computer for certain cognitive and social skills.  Programming spread in public school systems as something to teach students. Fred D’Ignazio wrote in an article called “Beyond Computer Literacy,” from 1983:

A recent national “computers in the schools” survey conducted by the Center for the Social Organization of Schools at Johns Hopkins University found that most secondary schools are using computers to teach programming. … According to the survey, the second most popular use of computers was for drill and practice, primarily for math and language arts. In addition, the majority of the teachers who responded to the survey said that they looked at the computer as a “resource” rather than as a “tool.”

…Another recent survey (conducted by the University of Maryland) echoes the Johns Hopkins survey. It found that most schools introduce computers into the curriculum to help students become literate in computer technology. But what does this literacy entail?

Because of the pervasive spread of computers throughout our society, we have all become convinced that computers are important. From what we read and hear, when our kids grow up almost everyone will have to use computers in some aspect of their lives. This makes computers, as a subject, not only important, but also relevant.

An important, relevant subject like computers should be part of a school’s curriculum. The question is how “Computers” ought to be taught.

Special computer classes are being set up so that students can play with computers, tinker with them, and learn some basic programming. Thus, on a practical level, computer literacy turns out to be mere computer exposure.

But exposure to what? Kids who are now enrolled in elementary and secondary schools are exposed to four aspects of computers. They learn that computers are programmable machines. They learn that computers are being used in all areas of society. They learn that computers make good electronic textbooks. And (something they already knew), they learn that computers are terrific game machines.

… According to the surveys, real educational results have been realized at schools which concentrate on exposing kids to computers. … Kids get to touch computers, play with them, push their buttons, order them about, and cope with computers’ incredible dumbness, their awful pickiness, their exasperating bugs, and their ridiculous quirks.

The main benefits D’Ignazio noted were ancillary. Students stayed at school longer, came in earlier, and stayed late. They were more attentive to their studies, and the computers fostered a sense of community rather than competition and rivalry. If you read his article, you get a sense that there was almost a “worship” of computers on the part of educators. They didn’t understand what computers were, or what they represented, but they were so interesting! There’s a problem there… When people are fascinated by something they don’t understand, they tend to impose meanings on it that are not backed by evidence, and so miss the point. The mistaken perceptions can be strengthened by anecdotal evidence (one of the weakest kinds). This is what happened to programming in schools.

The success of the strategy of using computers to try to improve higher-order thinking was illusory. John Maxwell’s telling of the “life and death of Logo” (my phrasing) serves as a useful analog to what happened to programming in schools generally. For those unfamiliar with it, the basic concept of Logo was a programming environment in which the student manipulates an object called a “turtle” via commands. The student can ask the turtle to rotate and move. As it moves it drags a pen behind it, tracing its trail. Other versions of this language were created with more capabilities, allowing further exploration of the concepts for which it was created. The original idea of Seymour Papert, who taught children using Logo, was to teach young children about sophisticated math concepts, but our educational system imposed a very different definition and purpose on it. Just because something is created on a computer with the intent of it being used for a specific purpose doesn’t mean that others can’t use it for completely different, and possibly less valuable, purposes. We’ve seen this a lot with computers over the years; people “misusing” them for both constructive and destructive ends.
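For readers who have never seen turtle graphics, here’s a minimal sketch of the idea using Python’s standard turtle module, which borrows Logo’s turtle model (the command names differ from classic Logo’s FORWARD and RIGHT, but the concept is the same; this is my own illustration, not Logo code):

    import turtle

    t = turtle.Turtle()   # the "turtle" drags a pen behind it as it moves
    for _ in range(4):    # four sides make a square
        t.forward(100)    # move 100 units, tracing a line
        t.right(90)       # rotate 90 degrees clockwise
    turtle.done()         # keep the drawing window open

Papert’s deeper point was that a child reasoning about why four “forward, right 90” steps close a square is doing geometry, not just issuing commands.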

As I go forward with this, I just want to put out a disclaimer that I don’t have answers to the problems I point out here. I point them out to make people aware of them, to get people to pause in the pursuit of putting students through this again, and to point to some people who are working on trying to find some answers. I present some of their learned opinions. I encourage interested readers to read up on what these people have had to say about the use of computers in education, and perhaps contact them with the idea of learning more about what they’ve found out.

I ask the reader to pay particular attention to the “benefits” that educators imposed on the idea of programming during this period that Maxwell talks about, via what Papert called “technocentrism.” You hear this being echoed in the videos above. As you go through this, I also want you to notice that Papert, and another educator by the name of Alan Kay, who have both thought a lot about what computers represent, have a very different idea about the importance of computers and programming than is typical in our school system and in the computer industry.

The spark that started Logo’s rise in the educational establishment was the publication of Papert’s book, “Mindstorms: Children, Computers, and Powerful Ideas” in 1980. Through the process of Logo’s promotion…

Logo became in the marketplace (in the broad sense of the word) [a] particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes “computer literacy.” These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the “thinking skills” strategy was never part of Papert’s vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon.

It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm, there would come a backlash. It was also perhaps inevitable given the weight that was put on it: Logo had come, within educational circles, to represent computer programming in the large, despite Papert’s frequent and eloquent statements about Logo’s role as an epistemological resource for thinking about mathematics. [my emphasis — Mark] In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo “didn’t work as promised,” as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history. Papert indeed found himself searching for higher ground, as he accused Logo’s growing numbers of critics of technocentrism:

“Egocentrism for Piaget does not mean ‘selfishness’—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like ‘What is THE effect of THE computer on cognitive development?’ or ‘Does Logo work?’ … such turns of phrase often betray a tendency to think of ‘computers’ and ‘Logo’ as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology.”

But by 1990, the damage was done: Logo’s image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named “Logo-as-Latin” a past paradigm of educational computing. The blunt idea that “programming” was an activity which could lead to “higher order thinking skills” (or not, as it were) had obviated Papert’s rich and subtle vision of an ego-syntonic mathematics.

By the early 1990s … Logo—and with it, programming—had faded.

The message–or black box–resulting from the rise and fall of Logo seems to have been the notion that “programming” is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. (pp. 183-185)

To be clear, the last part of the quote refers only to the educational value placed on programming by our school system. When educators attempted to formally study and evaluate programming’s benefits for higher-order thinking and the like, they found it wanting, and so most schools gradually dropped the teaching of programming in the 1990s.

Maxwell addresses the conundrum of computing and programming in schools, and I think what he says is important to consider as people try to “reboot” programming in education:

[The] critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage with the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary ‘humanist’ stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of “the technology must be in the service of the educational objectives and not the other way around.” The determinist critique, in turn, says, ‘the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life’ (for variations, see, Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

“Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become ‘comfortable’ using it. […] We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things–writing and speech, images and sound–every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity?”

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between ‘expert’ and ‘end-user;’ for the ‘end-user’ is profoundly she who is unchanged, uninitiated, unempowered.

A seemingly endless literature describes study after study, project after project, trying to identify what really ‘works’ or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time.

But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don’t amount to much. What is really wrong with computers in education is that for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had are empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things.

But clearly, it would be inane to try to claim that computing technology–one of the most influential defining forces in Western culture of our day, and which shows no signs of slowing down–has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don’t really understand how. And so what shall we do, right now?

It is not that there haven’t been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology “paradigm” as Timothy Koschmann put it. (pp. 18-19)

The answer is not a “reboot” of programming, but rather a rethinking of it. Maxwell makes a humble suggestion: that educators stop being so blinded by “the shiny new thing,” or some so-called “new” idea, that they lose their ability to think clearly about what’s being done with regard to computers in education, and that they deal with history and historicism. He said that the technology field has had a problem with its own history, and this tends to bleed over into how educators regard it. The tendency is to forget the past, and to downplay it (“That was neat then, but it’s irrelevant now”).

In my experience, people have associated technology’s past with memories of using it. They’ve given little if any thought to what it represented. They take for granted what it enabled them to do, and do not consider what that meant. Maxwell said that this…

…makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics. We are faced with a tangle of hobbles–instrumentalism, ahistoricism, fear of transformation, Snow’s “two cultures,” and a consumerist subjectivity.

An examination of the history of educational technology–and educational computing in particular–reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert’s philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. (p. 20)

Maxwell went into what Alan Kay thought about the subject. Kay has spent almost as many years as Papert working on a meaningful context for computing and programming within education. Some of the quotes Maxwell uses are from “The Early History of Smalltalk” (h/t Bill Kerr), which I’ll also refer to. The other sources for Kay’s quotes are included in Maxwell’s bibliography:

What is Literacy?

“The music is not in the piano.” — Alan Kay

The past three or four decades are littered with attempts to define “computer literacy” or something like it. I think that, in the best cases, at least, most of these have been attempts to establish some sort of conceptual clarity on what is good and worthwhile about computing. But none of them have won large numbers of supporters across the board.

Kay’s appeal to the historical evolution of what literacy has meant over the past few hundred years is, I think, a much more fruitful framing. His argument is thus not for computer literacy per se, but for systems literacy, of which computing is a key part.

That this is a massive undertaking is clear … and the size of the challenge is not lost on Kay. Reflecting on the difficulties they faced in trying to teach programming to children at PARC in the 1970s, he wrote that:

“The connection to literacy was painfully clear. It is not just enough to learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate the mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas.”

Because literature is about ideas, Kay connects the notion of literacy firmly to literature:

“What is literature about? Literature is a conversation in writing about important ideas. That’s why Euclid’s Elements and Newton’s Principia Mathematica are as much a part of the Western world’s tradition of great books as Plato’s Dialogues. But somehow we’ve come to think of science and mathematics as being apart from literature.”

There are echoes here of Papert’s lament about mathophobia, not fear of math, but the fear of learning that underlies C.P. Snow’s “two cultures,” and which surely underlies our society’s love-hate relationship with computing. Kay’s warning that too few of us are truly fluent with the ways of thinking that have shaped the modern world finds an anchor here. How is it that Euclid and Newton, to take Kay’s favourite examples, are not part of the canon, unless one’s very particular scholarly path leads there? We might argue that we all inherit Euclid’s and Newton’s ideas, but in distilled form. But this misses something important … Kay makes this point with respect to Papert’s experiences with Logo in classrooms:

“Despite many compelling presentations and demonstrations of Logo, elementary school teachers had little or no idea what calculus was or how to go about teaching real mathematics to children in a way that illuminates how we think about mathematics and how mathematics relates to the real world.” (Maxwell, pp. 135-137)

Just a note of clarification: I refer back to what Maxwell said re. Logo and mathematics. Papert did not use his language to teach programming as an end in itself. His goal was to use a computer to teach mathematics to children. Programming with Logo was the means for doing it. This is an important concept to keep in mind as one considers what role computer programming plays in education.

The problem, in Kay’s portrayal, isn’t “computer literacy,” it’s a larger one of familiarity and fluency with the deeper intellectual content; not just that which is specific to math and science curriculum. Kay’s diagnosis runs very close to Neil Postman’s critiques of television and mass media … that we as a society have become incapable of dealing with complex issues.

“Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in [Thomas] Paine’s Common Sense and being able (and happy) to fluently write a critique or defense of it.” (Maxwell, p. 137)

Extending this quote (from “The Early History of Smalltalk”), Kay went on to say:

Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

At the liberal arts level we would expect that connections between each of the fluencies would form truly powerful metaphors for considering ideas in the light of others.

Continuing with Maxwell (and Kay):

“Many adults, especially politicians, have no sense of exponential progressions such as population growth, epidemics like AIDS, or even compound interest on their credit cards. In contrast, a 12-year-old child in a few lines of Logo […] can easily describe and graphically simulate the interaction of any number of bodies, or create and experience first-hand the swift exponential progressions of an epidemic. Speculations about weighty matters that would ordinarily be consigned to common sense (the worst of all reasoning methods), can now be tried out with a modest amount of effort.”

Surely this is far-fetched; but why does this seem so beyond our reach? Is this not precisely the point of traditional science education? We have enough trouble coping with arguments presented in print, let alone simulations and modeling. Postman’s argument implicates television, but television is not a techno-deterministic anomaly within an otherwise sensible cultural milieu; rather it is a manifestation of a larger pattern. What is wrong here has as much to do with our relationship with print and other media as it does with television. Kay noted that “In America, printing has failed as a carrier of important ideas for most Americans.” To think of computers and new media as extensions of print media is a dangerous intellectual move to make; books, for all their obvious virtues (stability, economy, simplicity) make a real difference in the lives of only a small number of individuals, even in the Western world. Kay put it eloquently thus: “The computer really is the next great thing after the book. But as was also true with the book, most [people] are being left behind.” This is a sobering thought for those who advocate public access to digital resources and lament a “digital divide” along traditional socioeconomic lines. Kay notes,

“As my wife once remarked to Vice President Al Gore, the ‘haves and have-nots’ of the future will not be caused so much by being connected or not to the Internet, since most important content is already available in public libraries, free and open to all. The real haves and have-nots are those who have or have not acquired the discernment to search for and make use of high content wherever it may be found.” (Maxwell, pp. 138-139)

I’m still trying to understand myself what exactly Alan Kay means by “literature” in the realm of computing. He said that it is a means for discussing important ideas, but in the context of computing, what ideas? I suspect from what’s been said here he’s talking about what I’d call “model content,” thought forms, such as the idea of an exponential progression, or the concept of velocity and acceleration, which have been fashioned in science and mathematics to describe ideas and phenomena. “Literature,” as he defined it, is a means of discussing these thought forms–important ideas–in some meaningful context.
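To illustrate the kind of thought form I mean, here’s a minimal sketch of an exponential progression, using Kay’s epidemic example. I’m writing it in Python, and the growth rate and starting count are made-up assumptions, not anything from Kay:

    # Toy model: each infected person passes the disease to half a
    # person per day on average, so cases multiply by 1.5 daily --
    # the "swift exponential progression" Kay describes.
    cases = 1.0
    for day in range(1, 15):
        cases *= 1.5
        print(f"day {day:2d}: about {cases:,.0f} cases")

The value of this kind of “literacy” isn’t the few lines of code; it’s that a reader who can write and modify the model can feel why early action against such a progression is, as Kay put it, of the highest priority.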

In prior years, Kay had worked on these ideas in his Squeak environment with some educators. They would show children a car moving across the screen, dropping dots as it went, illustrating velocity, and then, modifying the model, acceleration. Then they would show them Galileo’s experiment, dropping heavy and light balls from the roof of a building (real balls from a real building), recording the ball dropping, and allowing the children to view the video of the ball and simultaneously model it via programming, discovering that the same principle of acceleration applied there as well. Thus, they could see in a couple of contexts how the principle worked, how they could recognize it, and see its relationship to the real world. The idea was that they could grasp the concepts that make up the idea of acceleration, and then integrate them into their thinking about other important matters they would encounter in the future.
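Here is a rough sketch of the arithmetic behind those dots (my own illustration in Python, not the actual Squeak code the children used):

    # Dots dropped at equal time steps by two "cars": one moving at
    # constant velocity, one under constant acceleration.
    v = 10        # units per tick (assumed)
    a = 2         # units per tick^2 (assumed)
    ticks = range(6)
    print([v * t for t in ticks])           # [0, 10, 20, 30, 40, 50]
    print([a * t * t // 2 for t in ticks])  # [0, 1, 4, 9, 16, 25]

Equally spaced dots signal constant velocity; gaps that widen with each tick are the visual signature of acceleration, the same signature the falling ball leaves on video.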

Maxwell quoted from an author named Andrea diSessa to get deeper into the concept of literacy, specifically what literacy in a type of media offers our understanding of issues:

The hidden metaphor behind transparency–that seeing is understanding–is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don’t present an unadulterated “picture” of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than “seeing and directly manipulating.”

What’s a good goal for computing?

The temptation in teaching and learning programming is to get students familiar enough with the concepts and a language that they can start creating things with it. But create what? The typical cases are to allow students to tinker, and/or to have them create applications which gradually become more complex and feature-rich, with the idea of building confidence and competence with increasing complexity. The latter is not a bad idea in itself, but listening to Alan Kay has led me to believe that starting off with this is the equivalent of jumping to a conclusion too quickly, and misses the point of what’s powerful about computers and programming.

I like what Kay said in “The Early History of Smalltalk” about this:

A twentieth century problem is that technology has become too “easy.” When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

Edit 4-5-2013: I thought I should point out that this quote has some nuance to it that people might miss. I don’t believe Kay is saying that “programming should be hard.” Quite the contrary; one can observe from his designs that he’s advocated the opposite. That’s not to say technology should mold itself to what is “natural” for humans. It might require some training and practice, but once mastered, it should magnify or enhance human capabilities, thereby making previously difficult or tedious tasks easier to accomplish and incorporate into a larger goal.

Kay was making an observation about the history of technology’s relationship to society: when useful technology was hard to build, enough care went into building it that the result was usually made well. What he’s pointing out is that people generally treat the presence of technology as a crutch, making immediate use of it toward some other goal that has little to do with what the technology represents, rather than as an invitation to revisit it, criticize its design, and try to make it better. This is an easy sell, because everyone likes something that makes their lives easier (or seems to), but we rob ourselves of something important in the process if that becomes the only end goal. What I see him proposing is that people with some skill should impose a high standard for design on themselves, drawing inspiration for that standard from how the best art humanity has produced was developed and nurtured, but guard against the sense of feeling small, inadequate, and overwhelmed by the challenge.

Maxwell (and Kay) explain further why this idea of “literacy” as being able to understand and communicate important ideas, which includes ideas about complexity, is something worth pursuing:

“If we look back over the last 400 years to ponder what ideas have caused the greatest changes in human society and have ushered in our modern era of democracy, science, technology and health care, it may come as a bit of a shock to realize that none of these is in story form! Newton’s treatise on the laws of motion, the force of gravity, and the behavior of the planets is set up as a sequence of arguments that imitate Euclid’s books on geometry.”

The most important ideas in modern Western culture in the past few hundred years, Kay claims, are the ones driven by argumentation, by chains of logical assertions that have not been and cannot be straightforwardly represented in narrative. …

But more recent still are forms of argumentation that defy linear representation at all: ‘complex’ systems, dynamic models, ecological relationships of interacting parts. These can be hinted at with logical or mathematical representations, but in order to flesh them out effectively, they need to be dynamically modeled. This kind of modeling is in many cases only possible once we have computational systems at our disposal, and in fact with the advent of computational media, complex systems modeling has been an area of growing research, precisely because it allows for the representation (and thus conception) of knowledge beyond what was previously possible. In her discussion of the “regime of computation” inherent in the work of thinkers like Stephen Wolfram, Edward Fredkin, and Harold Morowitz, N. Katherine Hayles explains:

“Whatever their limitations, these researchers fully understand that linear causal explanations are limited in scope and that multicausal complex systems require other modes of modeling and explanation. This seems to me a seminal insight that, despite three decades of work in chaos theory, complex systems, and simulation modeling, remains underappreciated and undertheorized in the physical sciences, and even more so in the social sciences and humanities.”

Kay’s lament too is that though these non-narrative forms of communication and understanding–both in the linear and complex varieties–are key to our modern world, a tiny fraction of people in Western society are actually fluent in them.

“In order to be completely enfranchised in the 21st century, it will be very important for children to become fluent in all three of the central forms of thinking that are now in use. […] the question is: How can we get children to explore ways of thinking beyond the one they’re ‘wired for’ (storytelling) and venture out into intellectual territory that needs to be discovered anew by every thinking person: logic and systems ‘eco-logic?'” …

In this we get Kay’s argument for ‘what computers are good for’ … It does not contradict Papert’s vision of children’s access to mathematical thinking; rather, it generalizes the principle, by applying Kay’s vision of the computer as medium, and even metamedium, capable of “simulating the details of any descriptive model.” The computer was already revolutionizing how science is done, but not general ways of thinking. Kay saw this as the promise of personal computing, with millions of users and millions of machines.

“The thing that jumped into my head was that simulation would be the basis for this new argument. […] If you’re going to talk about something really complex, a simulation is a more effective way of making your claim than, say, just a mathematical equation. If, for example, you’re talking about an epidemic, you can make claims in an essay, and you can put mathematical equations in there. Still, it is really difficult for your reader to understand what you’re actually talking about and to work out the ramifications. But it is very different if you can supply a model of your claim in the form of a working simulation, something that can be examined, and also can be changed.”

The computer is thus to be seen as a modeling tool. The models might be relatively mundane–our familiar word processors and painting programs define one end of the scale–or they might be considerably more complex. [my emphasis — Mark] It is important to keep in mind that this conception of computing is in the first instance personal–“personal dynamic media”–so that the ideal isn’t simulation and modeling on some institutional or centralized basis, but rather the kind of thing that individuals would engage in, in the same way in which individuals read and write for their own edification and practical reasons. This is what defines Kay’s vision of a literacy that encompasses logic and systems thinking as well as narrative.

And, as with Papert’s enactive mathematics, this vision seeks to make the understanding of complex systems something to which young children could realistically aspire, or that school curricula could incorporate. Note how different this is from having a ‘computer-science’ or an ‘information technology’ curriculum; what Kay is describing is more like a systems-science curriculum that happens to use computers as core tools:

“So, I think giving children a way of attacking complexity, even though for them complexity may be having a hundred simultaneously executing objects–which I think is enough complexity for anybody–gets them into that space in thinking about things that I think is more interesting than just simple input/output mechanisms.” (Maxwell, pp. 132-135)

I wanted to highlight the part about “word processors” and “painting programs,” because the idea being discussed is not limited to simulating real-world phenomena. It could be incorporated into simulating “artificial phenomena” as well. It’s a different way of looking at what you are doing and creating when you are programming. It takes you away from asking, “How do I get this thing to do what I want,” and redirects you to asking, “What entities do we want to make up this desired system, what are they like, and how can they interact to create something that we can recognize, or that otherwise leverages human capabilities?”
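As a small illustration of that reframing, here’s a hedged sketch in Python (all the names are invented for the example) where the program is conceived as a few entities and their interactions, rather than as a sequence of commands aimed at a result:

    # The "system" is a couple of entities sending messages to each other.
    class Clock:
        def __init__(self):
            self.listeners = []
        def tick(self):                 # broadcast a tick to every listener
            for listener in self.listeners:
                listener.on_tick()

    class Blinker:
        def __init__(self):
            self.lit = False
        def on_tick(self):              # respond to the clock by toggling
            self.lit = not self.lit
            print("light on" if self.lit else "light off")

    clock = Clock()
    clock.listeners.append(Blinker())
    for _ in range(4):
        clock.tick()

The recognizable behavior (a blinking light) isn’t commanded anywhere; it emerges from what the entities are like and how they interact.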

Maxwell said that computer science is not the important thing. Rather, what’s important about computer science is what it makes possible: “the study and engagement with complex or dynamic systems–and it is this latter issue which is of key importance to education.” Think about this in relation to what we do with reading and writing. We don’t learn to read and write just to be able to write characters in some sequence, and then for others to read what we’ve written. We have events and ideas, and, perhaps more esoteric to this subject, emotions and poetry, that we write about. That’s why we learn to read and write. It’s the same with computer science. If we as a society value it for communicating ideas, it’s pretty worthless if it’s just about learning to read and write code. To make the practice truly valuable to society, we need to have content, ideas, to read and write about in code. There’s a lot that can be explored with that idea in mind.

Characterizing Alan Kay’s vision for personal computing, Maxwell talked about Kay’s concept of the Dynabook:

Alan Kay’s key insight in the late 1960s was that computing would become the practice of millions of people, and that they would engage with computing to perform myriad tasks; the role of software would be to provide a flexible medium with which people could approach those myriad tasks. … [The] Dynabook’s user is an engaged participant rather than a passive, spectatorial consumer—the Dynabook’s user was supposed to be the creator of her own tools, a smarter, more capable user than the market discourse of the personal computing industry seems capable of inscribing—or at least has so far, ever since the construction of the “end-user” as documented by Bardini & Horvath. (p. 218)

Kay’s contribution begins with the observation that digital computers provide the means for yet another, newer mode of expression: the simulation and modeling of complex systems. What discursive possibilities does this new modality open up, and for whom? Kay argues that this latter communications revolution should in the first place be in the hands of children. What we are left with is a sketch of a possible new literacy; not “computer literacy” as an alternative to book literacy, but systems literacy—the realm of powerful ideas in a world in which complex systems modelling is possible and indeed commonplace, even among children. Kay’s fundamental and sustained admonition is that this literacy is the task and responsibility of education in the 21st century. The Dynabook vision presents a particular conception of what such a literacy would look like—in a liberal, individualist, decentralized, and democratic key. (p. 262)

I would encourage interested readers to read Maxwell’s paper in full. He gives a rich description of the problem of computers in the educational context, giving a much more detailed history of it than I have here, and what the best minds on the subject have tried to do to improve the situation.

The main point I want to get across is this: if we as a society really want the greatest impact from what computers can do for us, beyond their being tools that do canned but useful things, I implore educators to see computers and programming environments as apparatus, instruments, and media in themselves (the computers and programming environments, not just what’s “played” on them), with languages and metaphors as those media’s means of expression rather than a means to some non-expressive end, and not merely as agents and tools. Sure, there will be room for them to function as agents and tools, but what I see as most important in this subject area is how the machine can facilitate substantial pedagogies and illuminate epistemological concepts that would otherwise be difficult or impossible to communicate.

—Mark Miller, https://tekkie.wordpress.com

My journey, Part 2

See Part 1

A sense of history

I had never seen a machine before that I could change so easily (relative to other machines). I had this sense that these computers represented something big. I didn’t know what it was. It was just a general feeling I got from reading computer magazines. They reported on what was going on in the industry. All I knew was that I really liked dinking around with them, and I knew there were some others who did, too, but they were at most ten years older than me.

The culture of computing and programming was all over the place as well, though the computer was always portrayed as this “wonder machine” that had magical powers, and programming it was a mysterious dark art which made neat things happen after typing furiously tappy-tappy on the keyboard for ten seconds. Still, it was all encouragement to get involved.

I got a sense early on that it was something that divided the generations. Most adults I ran into knew hardly anything about computers, much less how to program them, like I did. They had no interest in learning about them either. They felt computers were too complicated, threatening, mysterious. They didn’t have much of an idea about their potential, just that “it’s the future,” and they encouraged me to learn more about them, because I was going to need that knowledge, though their idea of “learn more about them” meant learning how to use one, not how to program it. If I told them I knew how to program them they’d say, “Wow! You’re really on top of it then.” They were kind of amazed that a child could have such advanced knowledge that seemed so beyond their reach. They couldn’t imagine it.

A few adults, including my mom, asked me why I was so fascinated by computers. Why were they important? I would tell them about the creative process I went through. I’d get a vision in my mind of something I thought would be interesting, or useful to myself and others. I’d get excited enough about it to try to create it. The hard part was translating what I saw in my mind, which was already finished, into the computer’s language, to get it to reproduce what I wanted. When I was successful it was the biggest high for me. Seeing my vision play out in front of me on a computer screen was electrifying. It made my day. I characterized computers as “creation machines”. What I also liked about them is they weren’t messy. Creating with a computer wasn’t like writing or typing on paper, or painting, where if you made a mistake you had to live with it, work around it, or start over somewhere to get rid of the mistake. I could always fix my mistakes on a computer, leaving no trace behind. The difference was with paper it was always easy to find my mistakes. On the computer I had to figure out where the mistakes were!

The few adults I knew at the time who knew how to use a computer tended to not be so impressed with programming ability, not because they knew better, but because they were satisfied being users. They couldn’t imagine needing to program the computer to do anything. Anything they needed to do could be satisfied by a commercial package they could buy at a computer store. They regarded programming as an interesting hobby of kids like myself, but irrelevant.

As I became familiar with the wider world of what represented computing at the time (primarily companies and the computers they sold), I got a sense that this creation was historic. Sometimes in school I would get an open-ended research assignment. Each chance I got I’d do a paper on the history of the computer, each time trying to deepen my knowledge of it.

The earliest computer I’d found in my research materials was a mechanical adding machine that Blaise Pascal created in 1645, called the Pascaline. There used to be a modern equivalent of it as late as 25–30 years ago that you could buy cheap at the store. It was shaped like a ruler, and it contained a set of dials you could stick a pencil or pen point into. All dials started out displaying zeros. Each dial represented a power of ten (starting at 10⁰, the ones place). If you wanted to add 5 + 5, you would “dial” 5 in the ones place, and then “dial” 5 in the same place again. Each “dial” action added to the quantity in each place. The dials had some hidden pegs in them to handle carries. So when you did this, the ones place dial would return to “0”, and the tens place dial would automatically shift to “1”, producing the result “10”.
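
Just to make the mechanism concrete, here’s a toy simulation of that kind of dial-and-carry addition (a Python sketch of my own; the names are mine, and the real machine’s gearing was of course more involved):

# A toy simulation of Pascaline-style dials. Each dial holds one decimal
# digit; a hidden peg "carries" into the next dial when it passes 9.
def dial(places, position, amount):
    """Add `amount` at `position` (0 = ones place) with mechanical carry."""
    places[position] += amount
    while position < len(places) and places[position] > 9:
        places[position] -= 10
        if position + 1 < len(places):
            places[position + 1] += 1   # the carry peg advances the next dial
        position += 1
    return places

dials = [0, 0, 0, 0]          # ones, tens, hundreds, thousands
dial(dials, 0, 5)             # "dial" 5 in the ones place
dial(dials, 0, 5)             # "dial" 5 there again
print("".join(str(d) for d in reversed(dials)))   # prints 0010, i.e. 10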

The research material I was able to find at the time only went up to the late 1960s. It gave me the impression that there was a dividing line. Up to about 1950, all it talked about were the research efforts to create mechanical, electric, and finally electronic computers. The exception was Herman Hollerith, who I think was the first to find a commercial application for electric tabulating machines, using them to calculate the 1890 census, and who built the company that would ultimately come to be known as IBM. The last computers it talked about being created by research institutions were the Harvard Mark I, ENIAC, and EDSAC. By the 1950s (in the timeline) the research material veered off from scientific/research efforts, for the most part, and instead talked about commercial machines produced by Remington Rand (the UNIVAC) and IBM. Probably the last things it talked about were minicomputer models from the 1960s. The research I did matched well with the ethos of computing at the time: it’s all about the hardware.

High school

When I got into high school I joined a computer club there. Every year club members participated in the American Computer Science League (ACSL). We had study materials that focused on computer science topics. From time to time we had programming problems to solve, and written quizzes. We earned individual scores for these things, as well as a combined score for the team.

We would have a week to come up with our programming solutions on paper (computer use wasn’t allowed). On the testing day we would have an hour to type in our programs, test and debug them, at the end of which the computer teacher would come around and administer the official test for a score.

What was neat was we could use whatever programming language we wanted. A couple of the students had parents who worked at the University of Colorado, and had access to Unix systems. They wrote their programs in C on the university’s computers. Most of us wrote ours in Basic on the Apples we had at school.

Just as an aside, I was alarmed to read Imran’s article in 2007 about using FizzBuzz to interview programmers, because the problem he posed was so simple, yet he said “most computer science graduates can’t [solve it in a couple minutes]”. It reminded me of an ACSL programming problem we had to solve called “Buzz”. I wrote my solution in Basic, just using iteration. Here is the algorithm we had to implement (a rough modern rendering of it follows the sample output below):

input 5 numbers
if any input number contains the digit "9", print "Buzz"
if any input number is evenly divisible by 8, print "Buzz"
if the sum of the digits of any input number is divisible by 4,
   print "Buzz"

for every input number
A: sum its digits (we'll use "sum" as a variable in this
   loop for a sum of digits)
   if sum >= 10
      get the digits for sum and go back to Step A
   if sum equals 7
      print "Buzz"
   if sum equals any digit in the original input number
      print "Buzz"
end loop

test input   output
198          Buzz Buzz
36
144          Buzz
88           Buzz Buzz Buzz
10           Buzz
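
Translating from memory, here’s a minimal sketch of that logic in Python (my original was in Basic; the function name and structure here are my own):

def buzz(n):
    # Return the list of "Buzz"es the rules produce for one input number.
    digits = [int(d) for d in str(n)]
    out = []
    if 9 in digits:                      # contains the digit 9
        out.append("Buzz")
    if n % 8 == 0:                       # evenly divisible by 8
        out.append("Buzz")
    if sum(digits) % 4 == 0:             # digit sum divisible by 4
        out.append("Buzz")
    s = sum(digits)
    while s >= 10:                       # Step A: keep summing digits
        s = sum(int(d) for d in str(s))
    if s == 7:                           # repeated digit sum is 7...
        out.append("Buzz")
    if s in digits:                      # ...or matches a digit of the input
        out.append("Buzz")
    return out

for n in [198, 36, 144, 88, 10]:
    print(n, " ".join(buzz(n)))

Running it on the test input above prints the expected “Buzz” lines.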

In my junior and senior year, based on our regional scores, we qualified to go to the national finals. The first year we went we bombed, scoring in last place. Ironically we got kudos for this. A complimentary blurb in the local paper was written about us. We got a congratulatory letter from the superintendent. We got awards at a general awards assembly (others got awards for other accomplishments, too). A bit much. I think this was all because it was the first time our club had been invited to the nationals. The next year we went we scored in the middle of the pack, and heard not a peep from anybody!

(Update 1-18-2010: I’ve had some other recollections about this time period, and I’ve included them in the following seven paragraphs.)

By this point I was starting to feel like an “experienced programmer”. I felt very comfortable doing it, though the ACSL challenges were at times too much for me.

I talk about a couple of software programs I wrote below. You can see video of them in operation here.

I started to have the desire to write programs in a more sophisticated way. I began to see that there were routines that I would write over and over again for different projects, and I wished there was a way for me to write code in what would now be called “components” or DLLs, so that it could be generalized and reused. I also wanted to be able to write programs where the parts were isolated from each other, so that I could make major revisions without having to change parts of the program which should not be concerned with how the part I changed was implemented.

I even started thinking of things in terms of systems a little. A project I had been working on since Jr. high was a set of programs I used to write, edit, and play Mad Libs on a computer. I noticed that it had a major emphasis on text, and I wondered if there was a way to generalize some of the code I had written for it into a “text system”, so that people could not only play this specific game, but they could also do other things with text. I didn’t have the slightest idea how to do that, but the idea was there.

By my senior year of high school I had started work on my biggest project yet, an app I had been wanting to write for a while, called “Week-In-Advance”. It was a weekly scheduler, but I wanted it to be user friendly, with a windowing interface. I was inspired by an Apple Lisa demo I had seen a couple years earlier. I spent months working on it. The code base was getting so big, I realized I had to break it up into subroutines, rather than one big long program, to make it manageable. I wrote it in Basic, back when all it had for branching was the Goto, Gosub, and Return commands. I used Gosub and Return to implement the subroutines.

I learned some advanced techniques in this project. One feature I spent a lot of time on was how to create expanding and closing windows on the screen. I tried a bunch of different animation techniques. Most of them were too slow to be usable. I finally figured out one day that a box on the screen was really just a set of four lines, two vertical and two horizontal, and that I could make it expand efficiently by just taking the usable area of the screen and dividing it horizontally and vertically by the number of times I would allow the window to expand, until it reached its full size. The horizontal lines were bounded by the vertical lines, and the vertical lines were bounded by the horizontals. It worked beautifully. I had generalized it enough that I could close a window the same way, with some animated shrinking boxes; I just ran the “window expanding” routine in reverse by negating the parameters. This showed me the power of “systematizing” something, rather than writing code in a narrow-minded fashion to make one thing happen, without considering how else it might be used. (A sketch of the coordinate math follows.)
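
For the curious, here’s roughly what that math looks like in Python (the names, the starting point, and the step count are my own; the original was Basic on an 8-bit machine):

# Sketch of the expanding-window animation: divide the distance from the
# starting point to each final edge by the number of steps, and draw a
# box (four lines) at each intermediate position.
def window_frames(left, top, right, bottom, cx, cy, steps):
    """Yield (x1, y1, x2, y2) for each frame of an expanding window.

    (cx, cy) is where the window starts expanding from.  Iterate the
    frames in reverse to get the shrinking "close window" animation.
    """
    for i in range(1, steps + 1):
        t = i / steps
        yield (round(cx + (left - cx) * t), round(cy + (top - cy) * t),
               round(cx + (right - cx) * t), round(cy + (bottom - cy) * t))

for box in window_frames(40, 30, 240, 160, 140, 95, 8):
    print(box)   # stand-in for drawing the four lines on screen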

The experience I had with using subroutines felt good. It kept my code under control. Soon after I wanted to learn Pascal, so I devoted time to doing that. It had procedures and functions as built-in constructs. It felt great to use them compared to my experience with Basic. The Basic I used had no scoping rules whatsoever. Pascal had them, and programming felt so much more manageable.

Getting a sense of the future

As I worked with computers more and talked to people about them in my teen years, I got a wider sense that they were going to change our society, in ways I thought would be beneficial. I’ve tried to remember why I thought this. I remember my mom told me this in one of the conversations I had with her about them. Besides this, I think I was influenced by futurist literature, artwork, and science fiction. I had this vague idea that somehow in the future computers would help us understand our world better, and become better thinkers. We would be smarter, better educated. We would be a better society for it. I had no idea how this would happen. I had a magical image of the whole thing that was deterministic and technology-centered.

It was a foregone conclusion that I would go to college. Both my mom and my maternal grandparents (the only ones I knew) wanted me to do it. My grandparents even offered to pay full expenses so long as I went to a liberal arts college.

In my senior year I was trying to decide what my major would be. I knew I wanted to get into a career that involved computer programming. I tried looking at my past experience, what kinds of projects I liked to work on. My interest seemed to be in application programming. I figured I was more business-oriented, not a hacker. The first major I researched was CIS (Computer Information Science/Systems). I looked at their curriculum and was uninspired. I felt uncertain about what to do. I had heard a little about the computer science curriculum at a few in-state universities, but it felt too theoretical for my taste. Finally, I consulted with my high school’s computer teacher, and we had a heart to heart talk. She (yes, the computer teacher was a woman) had known me all through my time there, because she ran the computer club. She advised me to go into computer science, because I would learn about how computers worked, and this would be valuable in my chosen career.

She had a computer science degree that she earned in the 1960s. She had an interesting life story. I remember she said she raised her family on a farm. At some point, I don’t know if it was before or after she had kids, she went to college, and was the only woman in the whole CS program. She told me stories about that time. She experienced blatant sexism. Male students would come up to her and say, “What are you doing here? Women don’t take computer science,” and, “You don’t belong here.” She told me about writing programs on punch cards (rubber bands were her friends 🙂 ), taking a program in to be run on the school’s mainframe, and waiting until 3am for it to run and to get her printout. I couldn’t imagine it.

I took her advice that I should take CS, kind of on faith that she was right.

Part 3.

My journey, Part 1

This series of posts has been brewing within me for more than a year, though for some reason it just never felt like the right time to talk about it. Two months ago I found The Machine That Changed The World online, a mini-series I had seen about 15 years ago. I was going to just write about it here, but then all this other stuff came out of me, and it became about that, though I’ve included it in this series.

I go through things chronologically, though I only reference a few dates, so one subject will tend to abruptly transition into another, just because of what I was focused on at a point in time. I call this “my journey”.

Getting introduced

From when I was 6 or so, in the mid-1970s, I had had a couple experiences using computing devices, and a couple brief chances to use general purpose computers. My interest in computers began in earnest in 1981. I was 11 going on 12. My mother and I had moved to a new town in Colorado a year earlier. One day we both went by the local public library and I saw a man sitting in front of an Atari 400 computer that was set up with a Sony Trinitron TV set. I saw him switch between a blue screen with cryptic code on it, and a low-rez graphics screen that had what looked like a fence and some blocky “horses” set up on a starting line. And then I saw the horses multiply across the screen, fast. The colorful graphics really caught my eye. I was already used to the blocky graphics of the Atari 2600 VCS. I had seen those around in TV ads and stores. I sat and watched the man work on his project. Each time he ran the program the same thing happened. After watching this for a while I got the impression that he was trying to write a horse racing game, but that something was wrong with it.

My mother noticed my interest and tried to get me to ask the librarian about the computer, to see if I could use it. I was reticent at first. I assumed that only “special” people could use it. I figured the man worked at the library or something, or that only adults could use it. I asked the librarian. She asked my age. They had a minimum age requirement of ten years old. She said all I needed to do was sign up for an orientation that lasted for 15 minutes, and then sign up for computer time when it was available. So at my mother’s urging I signed up for the orientation. Several days later I went to orientation with about five other people of different ages (children and adults), and got a brief once-over about how to operate the Atari, how to sign up for time, and what software they had available for it behind the desk.

I was interested right off in learning to do what I saw that man do: program the computer. My first stab at this was running a tutorial called An Invitation to Programming that came on 3 cassette tapes, double-sided. The tape drive I put the tapes into looked like an ordinary tape recorder, except that it had a cable running right to the computer. I thought the tutorial was the neatest thing. At first I think I was distracted by how it worked and paid hardly any attention to what it was attempting to teach. The fascinating thing about it was the first part of each tape had a voice track on it that would play over the TV’s speaker while the tutorial loaded. This was really clever. Rather than sitting there waiting for the program to load, I could listen to a male narrator talk about what I was going to learn with some flashy music playing in the background. By the time he was done, the tutorial ran. Once it started running the voice track started up again, this time with a female narrator. What appeared on the screen was coordinated with the audio track. When the tutorial would stop to give me a quiz, the tape drive would stop. When I was ready to continue, the tape drive started up again. “How is it doing this,” I wondered with awe.

Introduction to “An Invitation to Programming” by Atari

Part 4 of “An Invitation to Programming”

By the way, this is not me using the tutorial. I found these videos on YouTube.

The tutorial was about the Basic programming language. I went through the whole thing two or three times, in multiple sessions, because I could tell as I got towards the more advanced stuff that I wasn’t getting it. I only understood the simple things, like how to print on the screen, and how to use the Goto command. Other stuff like DIMming variables, getting input, and forming loops gave me a lot of trouble. They had the manual for Atari BASIC behind the desk and I tried reading that. It really made my head hurt, but I came to understand more.

Eventually I found out there was an Atari 800 in another part of the library that I could sign up for, so I’d sometimes use that one. There seemed to be more people who would hang around it. I found a couple people who were more knowledgeable about Basic and I’d ask them to help me, which they did.

I fell into a LOT of pitfalls. I couldn’t write a complete program that worked, though I kept trying. Every time I’d encounter an error I’d guess at what the problem was, and I would try changing something at random. I had no context. I felt lost. I needed a lot of help from others in understanding different contexts in my program.

It took me a month or two before I had written my first program that went beyond “hello world” stuff. It was a math quiz. Over several more months I stumbled a lot on Basic but kept learning. I remember I used to get SO frustrated! Time and again I would think I knew what I was doing, but something would go wrong. Debugging felt so hard. I’d go home fuming. I had to practice relaxing.

As time passed a strange thing started happening. I’d go through my “frustration cycle,” focus on something else for the day, have dinner with my mom and talk about it, watch TV, do my homework, or something. I’d go to bed thinking that this problem I was obsessed with was insurmountable. I’d wake up the next morning, and the answer would just come to me, like it was the most obvious thing in the world. I couldn’t try it out right away, because I had to get to school, but all day I’d feel anxious to try out my solution. Finally, I’d get my chance, and it would work! Wow! What a feeling! This process totally mystified me. How was it that at one point I could be dealing with a problem that felt intractable, and then at another time, when I had the opportunity to really relax, the problem was as easy to solve as brushing my teeth? I have heard recently that the brain uses sleep time to “organize” stuff that’s been absorbed while awake. Maybe that’s it.

When I entered Jr. high school I eventually discovered that they had some computer magazines in their library. A few of them had program listings. One of them was a magazine called Compute!. I fell in love with it almost immediately. The first thing that drew me in was the cover art. It looked fun! Secondly, the content was approachable. It had complete listings for games. I went through their stack of Compute! issues with a passion. I was checking them out often, taking them to the public library to type into the Atari, debugging typing mistakes, and having fun seeing the programs come to life.

We had two Apple II’s at my school, one in the library that I could sign up for, and one that a math teacher had requested for his office. In my first year there he started a computer club. I signed up for the club immediately. Each Friday after school we would get together. There were about four of us, plus the math teacher, huddled around his computer. He would teach us about programming in Basic, or have us try to solve some programming problem.

Along the way I got my own ideas for projects to do. I wrote a few of my own programs on the Atari at the library. By my eighth grade year my school installed a computer lab filled with Apple II’s, and they offered computer use and programming classes. I signed up for both. In the programming course we covered Basic and Logo.

Logo was the first language I worked with where programming felt easy. Quite frankly the language felt well designed, too, though my memory is that it didn’t get much respect from the other programmers my age. They saw it as a child’s language–too immature for us teenagers. Then again, they thought “real programming” was making the computer do cool stuff in hexadecimal. Every construct we used in Logo was relevant to the platform we were using. Unfortunately all we learned about with Logo was procedural programming. We didn’t learn what it was intended for: to teach children math. After this course I got some more project ideas which I finished successfully after a lot of work.

In the computer use course we learned about a couple word processors (Apple Writer and Bank Street Writer), VisiCalc, and a simple database program whose name I can’t remember. I took it because I thought it would teach useful skills. I had no idea how to use any of these tools before then.

Part 2

The computer as medium

In my guest post on Paul Murphy’s blog, called “The PC vision was lost from the get go,” I spoke to the concept, which Alan Kay had going back to the 1970s, that the personal computer is a new medium, like the book at the time the technology for the printing press was brought to Europe, around 1439 (I also spoke some about this in “Reminiscing, Part 6”). Kay came to this realization upon witnessing Seymour Papert’s Logo system being used with children. More recently, Kay has spoken with 20/20 hindsight about how, as with the book historically, people have been missing what’s powerful about computing: like the early users of the printing press, we’ve been automating and reproducing old media onto the new medium. We’re even automating old processes with it that were meant for an era that’s gone.

Kay spoke about the evolution of thought about the power of the printing press in one or two of his speeches entitled The Computer Revolution Hasn’t Happened Yet. In them he said that after Gutenberg brought the technology of the printing press to Europe, the first use found for it was to automate the process of copying books. Before the printing press books were copied by hand. It was a laborious process, and it made books expensive. Only the wealthy could afford them. In a documentary mini-series that came out around 1992 called “The Machine That Changed The World,” I remember an episode called “The Paperback Computer.” It said that there were such things as libraries, going back hundreds of years, but that all of the books were chained to their shelves. Books were made available to the public, but people had to read the books at the library. They could not check them out as we do now, because they were too valuable. Likewise today, with some exceptions to promote mobility, we “chain” computers to desks or some other anchored surface to secure them, because they’re too valuable.

Kay has said in his recent speeches that there were a few rare people during the early years of the printing press who saw its potential as a new, emerging medium. Most of the people who knew about it at the time did not see this. They only saw it as, “Oh good! Remember how we used to have to copy the Bible by hand? Now we can print hundreds of them for a fraction of the cost.” They didn’t see it as an avenue for thinking new ideas. They saw it as a labor-saving device for doing what they had been doing for hundreds of years. This view of the printing press predominated for more than 100 years. Eventually a generation grew up not knowing the old toils of copying books by hand. They saw that with the printing press’s ability to disseminate information and narratives widely, it could be a powerful new tool for sharing ideas and arguments. Once literacy began to spread, what flowed from that was the revolution of democracy. People literally changed how they thought. Kay said that before this time people appealed to authority figures to find out what was true and what they should do, whether the king, the pope, etc. When the power of the printing press was realized, people began appealing instead to rational argument as the authority. It was this crucial step that made democracy possible. This alone did not do the trick; there were other factors at play as well, but this, I think, was a fundamental first step.

Kay has believed for years that the computer is a powerful new medium, but that in order for its power to be realized we have to perceive it in a way that enables it to be powerful to us. If we see it only as a way to automate old media (text, graphics, animation, audio, video) and old processes (data processing, filing, etc.), then we aren’t getting it. Yes, automating old media and processes enables powerful things to happen in our society via efficiency. It further democratizes old media and modes of thought, but that only addresses the tip of the iceberg. This brings the title of Alan Kay’s speeches into clear focus: The computer revolution hasn’t happened yet.

Below is a talk Alan Kay gave at TED (Technology, Entertainment, Design) in 2007, which I think gives some good background on what he would like to see this new medium address:

“A man must learn on this principle, that he is far removed from the truth” – Democritus

Squeak in and of itself will not automatically get you smarter students. Technology does not really change minds. The power of EToys comes from an educational approach that promotes exploration, called constructivism. Squeak/EToys creates a “medium to think with.” What the documentary “Squeakers” makes clear is that EToys is a tool, like a lever, that makes this approach more powerful, because it enables math and science to be taught better using this technique. (Update 10/12/08: I should add that whenever the nature of Squeak is brought up in discussion, Alan Kay says that it’s more like an instrument, one with which you can “mess around” and “play,” or produce serious art. I wrote about this discussion that took place a couple years ago, and said that we often don’t associate “power” with instruments, because we think of them as elegant but fragile. Perhaps I just don’t understand at this point. I see Squeak as powerful, but I still don’t think of an instrument as “powerful”. Hence the reason I used the term “tool” in this context.)

From what I’ve read in the past, constructivism has gotten a bad reputation, I think primarily because it’s fallen prey to ideologies. The goal of constructivism as Kay has used it is not total discovery-based learning, where you just tell the kids, with no guidance, “Okay, go do something and see what you find out.” What this video shows is that teachers who use this method lead students to certain subjects, give them some things to work with within the subject domain, things they can explore, and then set them loose to discover something about them. The idea is that through the act of discovery by experimentation (i.e. play) the child learns concepts better than if they are spoon-fed the information. There is guidance from the teacher, but the teacher does not lead them down the garden path to the answer. The children do some of the work to discover the answers themselves, once a focus has been established. And the answer is not just “the right answer,” as is often called for in traditional education, but what the student learned and how the student thought in order to get it.

Learning to learn; learning to think; learning the critical concepts that have gotten us to this point in our civilization is what education should be about. Understanding is just as important as the result that flows from it. I know this is all easier said than done with the current state of affairs, but it helps to have ideals that are held up as goals. Otherwise what will motivate us to improve?

What Kay thinks, and is convinced by the results he’s seen, is that the computer can enable children of young ages to grasp concepts that would be impossible for them to get otherwise. This keys right into a philosophy of computing that J.C.R. Licklider pioneered in the 1960s: human-computer symbiosis (“man-computer symbiosis,” as he called it). Through a “coupling” of humans and computers, the human mind can think about ideas it had heretofore not been able to think. The philosophers of symbiosis see our world becoming ever more complex, so much so that we are at risk of it becoming incomprehensible and getting away from us. I personally have seen evidence of that in the last several years, particularly because of the spread of computers in our society and around the world. The linchpin of this philosophy is, as Kay has said recently, “The human mind does not scale.” Computers have the power to make this complexity comprehensible. Kay has said that the reason the computer has this power is it’s the first technology humans have developed that is like the human mind.

Expanding the idea

Kay has been focused on using this idea to “amp up” education, to help children understand math and science concepts sooner than they would in the traditional education system. But this concept is not limited to children and education. This is a concept that I think needs to spread to computing for teenagers and adults. I believe it should expand beyond the borders of education, to business computing, and the wider society. Kay is doing the work of trying to “incubate” this kind of culture in young students, which is the right place to start.

In the business computing realm, if this is going to happen, we are going to have to view business in the presence of computers differently. I believe for this to happen we are going to have to literally think of our computers as simulators of “business models.” I don’t think the current definition of “business model” (a business plan) really fits what I’m talking about, and I don’t want to confuse people. I’m thinking along the lines of schema and entities, forming relationships which are dynamic and therefore late-bound, but with an allowance for policy to govern what can change and how, with the end goal of helping business be more fluid and adaptive. Tying it all together, I would like to see a computing system that enables the business to form its own computing language and terminology for specifying these structures, so that as the business grows it can develop “literature” about itself, which can be used both by people who are steeped in the company’s history and current practices, and by those who are new to the company and trying to learn about it. (A toy sketch of what “late-bound, policy-governed” might mean follows.)
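
I can only gesture at what that might look like in code. Here’s a toy sketch in Python (every name in it is hypothetical, and it’s nowhere near a real system) of entities whose relationships are created late, at runtime, with a policy function governing what’s allowed to change:

# A toy sketch of late-bound "business model" entities with a policy hook.
class Entity:
    def __init__(self, kind, **attrs):
        self.kind = kind
        self.attrs = dict(attrs)
        self.relations = {}               # relation name -> list of entities

    def relate(self, name, other, policy=lambda rel, a, b: True):
        # The relationship is formed at runtime (late-bound); the policy
        # decides whether this particular change is allowed.
        if not policy(name, self, other):
            raise ValueError(f"policy forbids {self.kind} -{name}-> {other.kind}")
        self.relations.setdefault(name, []).append(other)

# Example policy: a supplier may only "supply" parts it is approved for.
def approved_supplier(rel, a, b):
    return rel != "supplies" or a.attrs.get("approved_for") == b.kind

widget = Entity("widget")
acme = Entity("supplier", approved_for="widget")
acme.relate("supplies", widget, policy=approved_supplier)
print(acme.relations["supplies"][0].kind)   # prints: widget

The point of the sketch is only that the schema isn’t frozen ahead of time: new kinds of entities and relationships can be introduced while the system runs, and policy, rather than code structure, says what’s permissible.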

What this requires is computing (some would say “informatics”) literacy on the part of the participants. We are a far cry from that today. There are millions of people who know how to program at some level, but the vast majority of people still do not. We are in the “Middle Ages” of IT. Alan Kay said that Smalltalk, when it was invented in the 1970s, was akin to Gothic architecture. As old as that sounds, it’s more advanced than what a lot of us are using today. We programmers, in some cases, are like the ancient pyramid builders. In others, we’re like the scribes of old.

This powerful idea of computing, that it is a medium, should come to be the norm for the majority of our society. I don’t know how yet, but if Kay is right that the computer is truly a new medium, then it should one day become as universal and influential as books, magazines, and newspapers historically have been.

In my “Reminiscing” post I referred to above, I talked about the fact that even though we appeal more now to rational argument than we did hundreds of years ago, we still get information we trust from authorities (called experts). I said that what I think Kay would like to see happen is that people will use this powerful medium to take information about some phenomenon that’s happening, form a model of it, and by watching it play out, inform themselves about it. Rather than appealing to experts, they can understand what the experts see, but see it for themselves. By this I mean that they can manipulate the model to play out other scenarios that they see as relevant. This could be done in a collaborative environment so that models could be checked against each other, and most importantly, the models can be checked against the real world. What I said, though, is that this would require a different concept of what it means to be literate; a different model of education, and research.

This is all years down the road, probably decades. The evolution of computing moves slowly in our society. Our methods of education haven’t changed much in 100 years. The truth is the future up to a certain point has already been invented, and continues to be invented, but most are not perceptive enough to understand that, and “old ways die hard,” as the saying goes. Alan Kay once told me that “the greatest ideas can be written in the sky” and people still won’t understand, nor adopt them. It’s only the poor ideas that get copied readily.

I recently read that the Squeakland site has been updated (it looks beautiful!), and that a new version of the Squeakland version of Squeak has been released on it. They are now just calling it “EToys,” and they’ve dropped the Squeak name. Squeak.org is still up and running, and they are still making their own releases of Squeak. As I’ve said earlier, the Squeakland version is configured for educational purposes. The squeak.org version is primarily used by professional Smalltalk developers. Last I checked it still has a version of EToys on it, too.

Edit: As I was writing this post I went searching for material for my “programmers” and “scribes” reference. I came upon one of Chris Crawford’s essays. I skimmed it when I wrote this post, but I reread it later, and it’s amazing! (Update 11/15/2012: I had a link to it, but it’s broken, and I can’t find the essay anymore.) It caused me to reconsider my statement that we are in the “Middle Ages” of IT. Perhaps we’re at a more primitive point than that. It adds another dimension to what I say here about the computer as medium, but it also expounds on what programming brings to the table culturally.

Here is an excerpt from Crawford’s essay. It’s powerful because it surveys the whole scene:

So here we have in programming a new language, a new form of writing, that supports a new way of thinking. We should therefore expect it to enable a dramatic new view of the universe. But before we get carried away with wild notions of a new Western civilization, a latter-day Athens with modern Platos and Aristotles, we need to recognize that we lack one of the crucial factors in the original Greek efflorescence: an alphabet. Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain — after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don’t have a damn thing to say to the world — they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglyphics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharaohs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones — and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

You can see the same thing in the output of the current generation of programmers, especially in the field of computer games. It’s lifeless. Sure, their stuff is technically very good, but it’s like the Egyptian statuary: technically very impressive, but the faces stare blankly, whereas Greek statuary ripples with the power of life.

What we need is a means of democratizing programming, of taking it out of the soulless hands of the programmers and putting it into the hands of a wider range of talents.

Related post: The necessary ingredients for computer science

—Mark Miller, https://tekkie.wordpress.com

The culture of “air guitar”

Since I started listening to Alan Kay’s ideas I’ve kept hearing him use the phrase “air guitar” to describe what he sees as shallow ideas, in both educational and industry practice, which are promoted by a pop culture. Kay is a musician, among other things, so I can see where he’d come up with this term. My impression is he’s referring to an almost exclusive focus on technique, perhaps even on the use of a tool, and on looking confident and stylish while doing it, with an almost total lack of focus on what is being worked with.

I watched video recently of another one of his presentations on Squeak, this time in front of an audience of educators. He brought up the issues of math and science education, and said that in many environments they teach kids to calculate, to do “math,” not mathematics. Students are essentially trained to “appreciate” math, but not how to be real mathematicians.

He’s also used the term “gestures” to characterize this shallowness. In the field of software development he alluded to this idea in his 1997 keynote speech at OOPSLA, titled, “The Computer Revolution Hasn’t Happened Yet”:

I think the main thing about doing OOP work, or any kind of programming work, is that there has to be some exquisite blend between beauty and practicality. There’s no reason to sacrifice either one of those, and people who are willing to sacrifice either one of those I don’t think really get what computing is all about. It’s like saying I have really great ideas for paintings, but I’m just going to use a brush, but no paint. You know, so my ideas will be represented by the gestures I make over the paper; and don’t tell any 20th century artists that, or they might decide to make a videotape of them doing that and put it in a museum.

Edit 4-5-2012: Case in point. I found this news feature segment about some air guitar enthusiasts who actually hold a national competition to see who’s the “best” at it. The winners go on to an international competition in Finland! Okay, it’s performance art, but this is like asking, “How great can you fake it?” It’s a glorified karaoke competition, except it’s worse, because there’s no attempt to express anything but gestures. When I first heard about this, I thought it was satire like the movie Spinal Tap, but no, it’s real.

I’ve been surprised when I’ve seen some piece of media come along that comes pretty darn close to illustrating one of Kay’s ideas. A while back I found a video clip of a Norwegian comedy show that took the learning curve of today’s novice computer users and made a media analogy to the “introduction of the book” after Gutenberg brought the printing press to Europe in the Middle Ages. Kay had always said that personal computers are a new form of media, and I thought this skit got the message across in a way that most people could understand, at least from having experienced the version of “personal computer media” that you can buy at a retail outlet or through mail order.

South Park is a show that’s been a favorite of mine for many years. It’s an odd mix of “pop culture with a message.” Despite the fact that it’s low brow and often offensive, on a few occasions it has been surprisingly poetic about real issues in our society. The show is about a group of kids going through life, misunderstanding things, playing pranks on each other, and getting in trouble. It also shows them trying to be powerful, trying to help, and trying to learn. Maybe that’s what interests me about it. It’s unclear what “grade level” the kids are at. There was one season where they were in “4th grade.” So I guess that gives you an idea.

I’ve included links to some clips of an episode I’ll talk about. The clips are from Comedy Central’s site. Fair warning: If you are easily offended, I would not encourage you to watch them. There is some language in the video that could offend.

Season 11, episode 13 is one where the kids buy a console video game system and play a game on it called “Guitar Hero”. The game is played by taking game controllers that look like small electric guitars, and manipulating switches and buttons on them in the right sequence and timing, to real music played by the game console.

What I’m going to say about it is my own interpretation, based on my own life experience.

What’s interesting to me is a parent of one of the kids tries to engage the group in learning how to play real music on a real instrument, but the kids are not interested.

http://www.southparkstudios.com/clips/155857/guitar-hero

The dad wonders what’s so special about the game, and that night sneaks down and tries it himself, showing what a bad fit a real musician is in this “air guitar” culture (or showing what a piece of crap it is).

http://www.southparkstudios.com/clips/155858/randy-sucks

I have the feeling this episode is based on a movie, though I don’t know which one. There are other parts not shown in these clips that dramatize betrayal between two friends, and reconciliation. Kind of your typical “buddy movie” plot line. These next clips show the “wider world” discovering the talent of these kids playing the game.

http://www.southparkstudios.com/clips/155861/rock-stars

This gets to what I think the pop culture promotes. Even though it’s pretty empty, it makes you feel like you are accomplishing something, and getting something out of it. You are rewarded for “going through the motions,” “making the right gestures.”

If this next clip doesn’t scream “air guitar”, I don’t know what does.

http://www.southparkstudios.com/clips/163728/acoustic-guitar-hero

I won’t show the ending, but it shows how utterly empty and worthless the whole “air guitar” exercise is. It’s not real!

I think the reason this episode had some meaning for me is what plays out feels kind of like my past experience as a software developer. Not that software development is an “air guitar” exercise in and of itself. Far from it. What I’m getting at is the wider computing culture with respect to software development is like this. Those of us who care about our craft are trying to play “good music,” often with bad instruments. In my case, I’m still learning what “good music” is. We may be with a good “band,” but most “bands” have “band managers” who don’t know a thing about “good music.” That’s why it seems like such a struggle.

“Real music” with good instruments in computing is available for those who seek it. You won’t find it in most programming languages, programming web sites, symposia, or tools. The idea of “good music” with “good instruments” doesn’t get much support, so it’s hard to find. Unfortunately, the reality is that in order to really be educated you have to seek out a real education. Just “going along for the ride” in the school system will usually leave you thinking the pop culture is the real thing. Seeking a real education, being your own learner, is much more rewarding.