
This is the clearest explanation I’ve seen Alan Kay give for what the ARPA community’s conception of computer science was. His answer on Quora was particularly directed at someone who was considering entering a CS major. Rather than summarize what he said, I’m just going to let him speak for himself. I’d rather preserve this if anything happens to Quora down the road. There is a lot of gold here.

As a beginner, what are the best ways to approach Computer Science?

If you are just looking to get a job in computing, don’t bother to read further.

First, there are several things to get clear with regard to any field.

  • What is the best conception of what the field is about?
  • What is the best above threshold knowledge to date?
  • How incomplete is the field; how much will it need to change?

When I’ve personally asked most people for a definition of “Computer Science” I’ve gotten back an engineering definition, not one of a science. Part of what is wrong with “CS” these days is both a confusion about what it is, and that the current notion is a weak idea.

The good news is that there is some above threshold knowledge. The sobering news is that it is hard to find in any undergrad curriculum. So it must be ferreted out these days.

Finally, most of the field is yet to be invented — or even discovered. So the strategies for becoming a Computer Scientist have to include learning how to invent, learning how to change, learning how to criticize, learning how to convince.

Most people in the liberal arts would not confuse learning a language like English and learning to read and write well in it, with the main contents of the liberal arts — which, in a nutshell, are ideas. The liberal arts spans many fields, including mathematics, science, philosophy, history, literature, etc. and we want to be fluent about reading and writing and understanding these ideas.

So programming is a good thing to learn, but it is most certainly not at the center of the field of computer science! When the first ever Turing Award winner says something, we should pay attention, and Al Perlis — who was one of, if not the, definers of the term — said: “Computer Science is the Science of Processes”, and he meant all processes, not just those that run on digital computers. Most of the processes in the world to study are more complex than most of the ones we’ve been able to build on computers, so just looking at computers is looking at the least stuff you can look at.

Another way to figure out what you should be doing, is to realize that CS is also a “blank canvas” to “something” kind of field — it produces artifacts that can be studied scientifically, just as the building of bridges has led to “bridge science”. Gravitational and weather forces keep bridge designers honest, but analogous forces are extremely weak in computing, and this allows people who don’t know much to get away with murder (rather like the fashion field, where dumb designs can even become fads, and failures are not fatal). Getting into a “learned science of designs that happen to be dumb” is not the best path!

We (my research community) found that having an undergraduate degree in something really difficult and developed helped a lot (a) as a bullshit detector for BS in computing (of which there is a lot), (b) as a guide to what a real “Computer Science” field should be and could be like, and (c) to provide a lot of skills on the one hand and heuristic lore on the other for how to make real progress. Having a parallel interest in the arts, especially theater, provides considerable additional perspective on what UI design is really about, and also in the large, what computing should be about.

So I always advise young people -not- to major in computing as an undergraduate (there’s not enough “there there”) but instead to actually learn as much about the world and how humans work as possible. In grad school you are supposed to advance the state of the art (or at least this used to be the case), so you are in a better position with regard to an unformed field.

Meanwhile, since CS is about systems, you need to start learning about systems, and not to restrict yourself to just those on computers. Take a look at biology, cities, economics, etc. just to get started.

Finally, at some age you need to take responsibility for your own education. This should happen in high school or earlier (but is rare). However, you should not try to get through college via some degree factory’s notion of “certification” without having formed a personal basis for criticizing and choosing. Find out what real education is, and start working your way through it.

When I’ve looked over other material written by Kay, and what has been written about his work, I’ve seen him talk about a “literate” computing science. This ties into his long-held view that computing is a new medium, and that the purpose of a medium is to allow discussion of ideas. I’ve had a bit of a notion of what he’s meant by this for a while, but the above made it very clear in my mind that a powerful purpose of the medium is to discuss systems, and the nature of them, through the fashioning of computing models that incorporate developed notions of one or more specific kinds of systems. The same applies to the fashioning of simulations. In fact, simulations are used for the design of said computer systems, and their operating systems and programming languages, before they’re manufactured. That’s been the case for decades. This regime can include the study of computer systems, but it must not be restricted to them. This is where CS currently falls flat. What this requires, though, is some content about systems, and Kay points in some directions for where to look for it.

Computing is a means for us to produce system models, and to represent them, and thereby to develop notions of what systems are, and how they work, or don’t work. This includes regarding human beings as systems. When considering how people will interact with computers, the same perspective can apply, looking at it as how two systems will interact. I’m sure this sounds dehumanizing, but from what I’ve read of how the researchers at Xerox PARC approached developing Smalltalk and the graphical user interface as a learning environment, this is how they looked at it, though they talked about getting two “participants” (the human and the computer) to interact/communicate well with each other (See discussion of “Figure 1” in “Design Principles Behind Smalltalk”). It involved scientific knowledge of people and of computer system design. It seems this is how they achieved what they did.

— Mark Miller, https://tekkie.wordpress.com



I received a request 2-1/2 weeks ago to write a post based on video of a speech that Alan Kay gave at Kyoto University in February, titled “Systems Thinking For Children And Adults.” Here it is. The volume in the first 10 minutes of the video is really low, so you’ll probably need to turn up your volume. The volume in the video gets readjusted louder after that.

Edit 1-2-2014: See also Kay’s 2012 speech, “Normal Considered Harmful,” which I’ve included at the bottom of this post.

On the science of computer science

Kay said what he means by a science of computing is the forward-looking study, understanding, and invention of computing. Will the “science” come to mean something like the other real sciences? Or will it be like library and social science, which means a gathering of knowledge? He said this is not the principled way that physics, chemistry, and biology have been able to revolutionize our understanding of phenomena. Likewise, will we develop a software engineering that is like the other engineering disciplines?

I’ve looked back at my CS education with more scrutiny, and given what I’ve found, I’m surprised that Kay is asking this question. Maybe I’m misunderstanding what he said, but for me CS was a gathering of knowledge a long time ago. The question for me is can it change from that to a real science? Perhaps he’s asking about the top universities.

When I took CS as an undergraduate in the late 80s/early 90s it was clear that some research had gone into what I was studying. All the research was in the realm of math. There was no sense of tinkering with architectures that existed, and very little practice in analyzing them. We were taught to apply a little analysis to algorithms. There was no sense of trying to create new architectures. What we were given was pre-digested analysis of what existed. So it had gotten as far as exposing us to the “TEM” parts (of “TEMS”–Technology, Engineering, Mathematics, Science), but only in a narrow band. The (S)cience was non-existent.

What we got instead is what I’d call “small science” in the sense that we had lots of programming labs where we experimented with our own knowledge of how to write programs that worked; how to use, organize, and address memory; and how to manage complexity in our software. We were given some strategies for doing this, which were taught as catechisms. The labs gave us an opportunity to see where those strategies were most effective. We were sometimes graded on how well we applied them.

We got to experience a little bit of how computers could manipulate symbols, which I thought was real interesting. I wished that there would’ve been more of that.

One of the tracks I took while in college was focused on software engineering, which really focused on project management techniques and rules of thumb. It was not a strong engineering discipline backed by scientific findings and methods.

When I got out into the work world I felt like I had to “spread the gospel,” because what IT shops were doing was ad hoc, worse than the methodologies I was taught. I was bringing them “enlightenment” compared to what they were doing. The nature and constraints of the workplace broke me out of this narrow-mindedness, and not always in good ways.

It’s only been by doing a lot of thinking about what I’ve learned, and my POV of computers, that I’ve been able to see that what I got out of CS was a gathering of knowledge with some best practices. At the time I had no concept that I was only getting part of the picture even though our CS professors openly volunteered with a wry humor that “computer science is not a science.” They compared the term “computer science” to “social science” in the sense that it was an ill-defined field. There was the expectation that it would develop into something more cohesive, hopefully a real science, later on. Given the way we were taught though, I guess they expected it to develop with no help from us.

Kay has complained previously that the commercial personal computing culture has contributed greatly to the deterioration of CS in academia. I have to admit I was a case in point. A big reason why I thought of CS the way I did was this culture I grew up in. Like I’ve said before, I have mixed feelings about this, because I don’t know if I would be a part of this field at all if the commercial culture he complains about never existed.

What I saw was that people were discouraged from tinkering with the hardware, seeing how it worked, much less trying to create their own computers. Not to say this was impossible, because there were people who tinkered with 8- and 16-bit computers. Of course, as I think most people in our field still know, Steve Wozniak was able to build his own computer. That’s how he and Steve Jobs created Apple. Computer kits were kind of popular in the late 1970s, but that faded by the time I really got into it.

When I used to read articles about modifying hardware there was always the caution about, “Be careful or you could hose your entire machine.” These machines were expensive at the time. There were the horror stories about people who tried some machine language programming and corrupted the floppy disk that had the only copy of their program on it (i.e., hours and hours of work). So people like me didn’t venture into the “danger zone.” Companies (except for Apple with the Apple II) wouldn’t tell you about the internals of their computers without NDAs and licensing agreements, which I imagine one had to pay for handsomely. Instead we were given open access to a layer we could experiment on, which was the realm of programming either in assembly or an HLL. There were books one could get that would tell you about memory locations for system functions, and how to manipulate features of the system in software. I never saw discussion of how to create a software computer, for example, that one could tinker with, but then the hardware probably wasn’t powerful enough for that.

By and large, CS fit the fashion of the time. The one exception I remember is that in the CS department’s orientation/introductory materials they encouraged students to build their own computers from kits (this was in the late 1980s), and try writing a few programs, before entering the CS program. I had already written plenty of my own programs, but as I said, I was intimidated by the hardware realm.

My education wasn’t the vocational school setting that it’s turning into today, but it was not as rigorous as it could have been. It met my expectations at the time. What gave me a hint that my education wasn’t as complete as I thought was that opportunities which I thought would be open to me were not available when I looked for employment after graduation. The hint was there, but I don’t think I really got it until a year or two ago.

What would computing as a real science be like?

I attended the 2009 Rebooting Computing summit on CS education in January, and one of the topics discussed was what is the science of computer science? In my opinion it was the only topic brought up there that was worth discussing at that time, but that’s just me. The consensus among the luminaries that participated was that historically science has always followed technology and engineering. The science explains why some engineering works and some doesn’t, and it provides boundaries for a type of engineering.

We asked the question, “What would a science of computing look like?” Some CS luminaries used an acronym “TEMS” (Technology, Engineering, Mathematics, Science), and there seemed to be a deliberate reason why they had those terms in that order. In other circles it’s often expressed as “STEM.” Technology is developed first. Some engineering gets developed from patterns that are seen (best practices). Some math can be derived from it. Then you have something you can work with, experiment with, and reason about–science. The part that’s been missing from CS education is the science itself: experimentation, an interest and proclivity to get into the guts of something and try out new things with whatever–the hardware, the operating system, a programming language, what have you–just because we’re curious. Or, we see that what we have is inadequate and there’s a need for something that addresses the problem better.

Alan Kay participated in the summit and gave a description of a computing science that he had experienced at Xerox PARC. They studied existing computing artifacts, tried to come up with better architectures that did the same things as the old artifacts, and then applied the new architectures to “everything else.” I imagine that this would test the limits of the architecture, and provide more avenues for other scientists to repeat the process (take an artifact, create a new architecture for it, “spread it everywhere”) and gain more improvement.

(Update 12-14-2010: I added the following 3 paragraphs after finding Dan Ingalls’s “Design Principles Behind Smalltalk” article. It clarifies the idea that a science of computing was once attempted.)

Dan Ingalls gave a brief description of a process they used at Xerox to drive their innovative research, in “Design Principles Behind Smalltalk,” published in Byte Magazine in 1981:

Our work has followed a two- to four-year cycle that can be seen to parallel the scientific method:

  • Build an application program within the current system (make an observation)
  • Based on that experience, redesign the language (formulate a theory)
  • Build a new system based on the new design (make a prediction that can be tested)

The Smalltalk-80 system marks our fifth time through this cycle.

This parallels a process that was once described to me in computer science, called “bootstrapping”: Building a language and system “B” from a “lower form” language and system “A”. What was unique here was they had a set of overarching philosophies they followed, which drove the bootstrapping process. The inventors of the C language and Unix went through a similar process to create those artifacts. Each system had different goals and design philosophies, which is reflected in their design.
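To make the bootstrapping idea a bit more concrete, here is a minimal sketch; everything in it is my own invention for illustration, not code from any of the systems mentioned. Python stands in for a "lower form" language "A", and a tiny prefix-expression evaluator stands in for the first rung of a new language "B". A real bootstrap would continue by rewriting this evaluator in "B" itself, until "B" is self-hosting.

```python
# Toy first rung of a bootstrap: language "A" (Python) implements a
# tiny prefix-expression language "B". Expressions are nested lists,
# e.g. ['+', 1, ['*', 2, 3]]. All names here are hypothetical.

def evaluate(expr, env=None):
    """Evaluate a "B" expression; env maps variable names to values."""
    env = env or {}
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):            # variable lookup
        return env[expr]
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    ops = {'+': lambda x, y: x + y,
           '-': lambda x, y: x - y,
           '*': lambda x, y: x * y}
    return ops[op](*vals)

print(evaluate(['+', 1, ['*', 2, 3]]))   # → 7
print(evaluate(['-', 'x', 1], {'x': 4})) # → 3
```

The interesting part of a real bootstrap is not the evaluator itself but the overarching design philosophy that guides each rewrite, which is what distinguished the Smalltalk and Unix efforts from mere reimplementation.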

To give you a “starter” idea of what this process is like, read the introduction to Design Patterns, by Gamma, Helm, Johnson, and Vlissides, and take note of how they describe coming up with their patterns. Then notice how widely those patterns have been applied to projects that have nothing to do with what the Gang of Four originally created with the patterns they came up with. The exception here is the Gang of Four didn’t redesign a language as part of their process. They invented a terminology set, and a concept of coding patterns that are repeatable. They established architectural patterns to use within an existing language. The thing is, if they had explored what was going on with the patterns mathematically, they might very well have been able to formulate a new architecture that encompassed the patterns they saw, which if successful, would’ve allowed them to create a new language.
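To make "architectural patterns within an existing language" concrete, here is a minimal sketch of one Gang of Four pattern, Observer, in Python. The class and method names are illustrative, not taken from the book's examples; the point is only the repeatable structure: a subject that knows nothing about its observers except that they respond to a notification.

```python
# Minimal Observer pattern sketch: the Subject broadcasts events to any
# attached observer without knowing anything about them beyond the
# update() protocol. Names are illustrative.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for obs in self._observers:
            obs.update(event)

class Logger:
    """One possible observer: records every event it sees."""
    def __init__(self):
        self.seen = []

    def update(self, event):
        self.seen.append(event)

subject = Subject()
log = Logger()
subject.attach(log)
subject.notify("temperature=21C")
print(log.seen)  # → ['temperature=21C']
```

Note that the pattern lives entirely at the level of convention within the language, which is exactly the limitation described above: nothing here feeds back into the design of the language itself.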

The problem in our field is illustrated by the fact that more often than not, there’s been no study of the other technologies where these patterns have been applied. When the Xerox Learning Research Group came up with Smalltalk (object-orientation, late-binding, GUI), Alan Kay expected that others would use the same process they did to improve on it, but instead people either copied OOP into less advanced environments and then used them to build practical software applications, or they built applications on top of Smalltalk. It’s just as I described with my CS education: The strategies get turned into a catechism by most practitioners. Rather than studying our creations, we’ve just kept building upon and using the same frameworks, and treating them with religious reverence–we dare not change them lest we lose community support. Instead of looking at how to improve upon the architecture of Smalltalk, people adopted OOP as a religion. This happened and continues to happen because almost nobody in the field is being taught to apply mathematical and scientific principles to computing (except in the sense of pursuing proofs of computability), and there’s little encouragement from funding sources to carry out this kind of research.

There are degrees of scientific thinking that software developers use. One IT software house where I worked for a year used patterns on a regular basis that we created ourselves. We understood the essential idea of patterns, though we did not understand the scientific principles Kay described. Everywhere else I worked didn’t use design patterns at all.

There has been more movement in the last few years to break out of the confines developers have been “living” in; to try and improve upon fundamental runtime/VM architecture, and build better languages on top of it. This is good, but in reality most of it has just recapitulated language features that were invented decades ago through scientific approaches to computing. There hasn’t been anything dramatically new developed yet.

The ignorance we ignore

“What is the definition of ignorance and apathy?”

“I don’t know, and I don’t care.”

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well.

— Alan Kay, The Early History of Smalltalk

A fundamental problem with our field is there’s very little appreciation for good architecture. We keep tinkering with old familiar structures, and produce technologies that are marginally better than what came before. We assume that brute force will get us by, because it always has in the past. This is ignoring a lot. One reason that brute force has been able to work productively in the past is because of the work of people who did not use brute force, creating: functional programming, interactive computing, word processing, hyperlinking, search, semiconductors, personal computing, the internet, object-oriented programming, IDEs, multimedia, spreadsheets, etc. This kind of research went into decline after the 1970s and has not recovered. It’s possible that the brute force mentality will run into a brick wall, because there will be no more innovative ideas to save it from itself. A symptom of this is the hand-wringing I’ve been hearing about for a few years now about how to leverage multiple CPU cores, though Kay thinks this is the wrong direction to look in for improvement.

There’s a temptation to say “more is better” when you run into a brick wall. If one CPU core isn’t providing enough speed, add another one. If the API is not to your satisfaction, just add another layer of abstraction to cover it over. If the language you’re using is weak, create a large API to give it lots of functionality and/or a large framework to make it easier to develop apps in it. What we’re ignoring is the software architecture (all of it, including the language(s), and OS), and indeed the hardware architecture. These are the two places where we put up the greatest resistance to change. I think it’s because we acknowledge to ourselves that the people who make up our field by and large lack some basic competencies that are necessary to reconsider these structures. Even if we had the competencies, nobody would be willing to fund us for the purpose of reconsidering said structures. We’re asked to build software quickly, and with that as the sole goal we’ll never get around to reconsidering what we use. We don’t like talking about it, but we know it’s true, and we don’t want to bother gaining those competencies, because they look hard and confusing. It’s just a suspicion I have, but I think an understanding of mathematics and the scientific outlook is essential to the process of invention in computing. We can’t get to better architectures without it. Nobody else seems to mind the way things are going. They accept it. So there’s no incentive to try to bust through some perceptual barriers to get to better answers, except the sense that some of us have that what’s being done is inadequate.

In his presentation in the video, Kay pointed out the folly of brute force thinking by showing how humungous software gets with this approach, and how messy the software is architecturally. He said that the “garbage dump” that is our software is tolerated because most people can’t actually see it. Our perceptual horizons are so limited that most people can only comprehend a piece of the huge mess, if they’re able to look at it. In that case it doesn’t look so bad, but if we could see the full expanse, we would be horrified.

This clarifies what had long frustrated me about IT software development, and I’m glad Kay talked about it. I hated the fact that my bosses often egged me on towards creating a mess. They were not aware they were doing this. Their only concern was getting the computer to do what the requirements said in the quickest way possible. They didn’t care about the structure of it, because they couldn’t see it. So to them it was irrelevant whether it was built well or not. They didn’t know the difference. I could see the mess, at least as far as my project was concerned. It eventually got to the point that I could anticipate it, just from the way the project was being managed. So often I wished that the people managing me or my team, and our customers, could see what we saw. I thought that if they did they would recognize the consequences of their decisions and priorities, and we could come to an agreement about how to avoid it. That was my “in my dreams” wish.

Kay said, “Much of the applications we use … actually take longer now to load than they did 20 years ago.” I’ve read about this (h/t to Paul Murphy). (Update 3-25-2010: I used to have video of this, but it was taken down by the people who made it. So I’ll just describe it.) A few people did a side-by-side test of a 2007 Vista laptop with a dual-core Intel processor (I’m guessing 2.4 GHz) and 1 GB of RAM vs. a Mac Classic II with a 16 MHz Motorola 68030 and 2 MB RAM. My guess is the Mac was running a SCSI hard drive (the only kind you could install on a Mac when they were made back then). I didn’t see them insert a floppy disk. They said in the video that the Mac is “1987 technology.” Even though the Mac Classic II was introduced in 1991, they’re probably referring to the 68030 CPU it uses, which came out in 1987.

The tasks the two computers were given were to boot up, load a document into a word processor, quit out of the word processor, and shut down the machine. The Mac completed the contest in 1 minute, 42 seconds, 25% faster than the Vista laptop, which took 2 minutes, 17 seconds. Someone else posted a demonstration video in 2009 of a Vista desktop PC which completed the same tasks in 1 minute, 18 seconds–23% faster than the old Mac. I think the difference was that the laptop vs. Mac demo likely used a slower processor for the laptop (vs. the 2009 demo), and probably a slower hard drive. The thing is though, it probably took a 4 GHz dual-processor (or quad core?) computer, faster memory, and a faster hard drive to beat the old Mac. To be fair, I’ve heard from others that the results would be much the same if you compared a modern Mac to the old Mac. The point is not which platform is better. It’s that we’ve made little progress in responsiveness.
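The percentages quoted above check out if "percent faster" is read as time saved relative to the slower machine. A quick arithmetic sketch, with the times transcribed from the demos as described:

```python
# Check the "percent faster" figures: time saved as a share of the
# slower machine's total time, truncated to a whole percent.

def percent_faster(slower_s, faster_s):
    return int(100 * (slower_s - faster_s) / slower_s)

mac_classic = 1 * 60 + 42         # 102 s
vista_laptop = 2 * 60 + 17        # 137 s
vista_desktop_2009 = 1 * 60 + 18  # 78 s

print(percent_faster(vista_laptop, mac_classic))        # → 25
print(percent_faster(mac_classic, vista_desktop_2009))  # → 23
```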

The wide gulf between the two pieces of hardware (old Mac vs. Vista PC) is dramatic. The hardware got about 15,000-25,000% faster via Moore’s Law (though Moore’s Law only applied to transistors, not speed), but we have not seen a commensurate speed-up in system responsiveness. In some cases the newer software technology is slower than what existed 22 years ago.

When Kay has elaborated on this in the past he’s said this is also partly due to the poor hardware architecture which was adopted by the microprocessor industry in the 1970s, and which has been marginally improved over the years. Quoting from an interview with Alan Kay in ACM Queue in 2004:

Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea [referring to the Burroughs B5000 computer].

Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.

Another factor is the configuration and speed of main and cache memory. Cache is built into the CPU, is directly accessed by it, and is very fast. Main memory has always been much slower than the CPU in microcomputers. The CPU spends the majority of its time waiting for memory, or data to stream from a hard drive or internet connection, if the data is not already in cache. This has always been true. It may be one of the main reasons why no matter how fast CPU speeds have gotten the user experience has not gotten commensurately more responsive.
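A back-of-envelope way to see the cost of waiting on memory is the standard average-memory-access-time (AMAT) formula. The numbers below are illustrative round figures of my own choosing, not measurements of any particular machine:

```python
# AMAT = hit_time + miss_rate * miss_penalty, all in CPU cycles.
# Illustrative numbers only: a 1-cycle cache hit, a 200-cycle trip to
# main memory, and a 5% miss rate.

def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    return hit_time_cycles + miss_rate * miss_penalty_cycles

print(amat(1, 0.05, 200))  # → 11.0
print(amat(1, 0.0, 200))   # → 1 (the cache-only ideal)
```

Even a 95% hit rate makes the average access roughly eleven times slower than the cache-only ideal, which is one way of seeing why raw CPU clock speed alone never translated into a commensurately more responsive user experience.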

The revenge of data processing

The section of Kay’s speech where he talks about “embarrassing questions” from his wife (the slide is subtitled “Two cultures in computing”) gets to a complaint I’ve had for a while now. There have been many times while I’m writing for this blog when I’ve tried to “absent-mindedly” put the cursor somewhere on a preview page and start editing, but then realize that the technology won’t let me do it. You can always tell when a user interaction issue needs to be addressed when your subconscious tries to do something with a computer and it doesn’t work.

Speaking about her GUI apps, his wife said, “In these apps I can see and do full WYSIWYG authoring.” She said about web apps, “But with this stuff in the web browser, I can’t–I have to use modes that delay seeing what I get, and I have to start guessing,” and, “I have to edit through a keyhole.” He said what’s even more embarrassing is that the technology she likes was invented in the 1970s (things like word processing and desktop publishing in a GUI with WYSIWYG–aspects of the personal computing model), whereas the stuff she doesn’t like was invented in the 1990s (the web/terminal model). Actually I have to quibble a bit with him about the timeline.

“The browser = next generation 3270 terminal”
3270 terminal image from Wikipedia.org

The technology she doesn’t like–the interaction model–was invented in the 1970s as well. Kay mentioned 3270 terminals earlier in his presentation. The web browser with its screens and forms is an extension of the old IBM mainframe batch terminal architecture from the 1970s. The difference is one of culture, not time. Quoting from the Wikipedia article on the 3270:

[T]he Web (and HTTP) is similar to 3270 interaction because the terminal (browser) is given more responsibility for managing presentation and user input, minimizing host interaction while still facilitating server-based information retrieval and processing.

Applications development has in many ways returned to the 3270 approach. In the 3270 era, all application functionality was provided centrally. [my emphasis]

It’s true that the appearance and user interaction with the browser itself has changed a lot from the 3270 days in terms of a nicer presentation (graphics and fonts vs. green-screen text), and the ability to use a mouse with the browser UI (vs. keyboard-only with the 3270). It features document composition, which comes from the GUI world. The 3270 did not. Early on, a client scripting language was added, Javascript, which enabled things to happen on the browser without requiring server interaction. The 3270 had no scripting language.

There’s more freedom than the 3270 allowed. Users can point their browsers anywhere they want. A 3270 was designed to be hooked up to one IBM mainframe, and the user was not allowed to “roam” on other systems with it, except when given permission via mainframe administration policies. What’s the same, though, is the basic interaction model of filling in a form on the client end, sending that information to the server in a batch, and then receiving a response form.
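That batch interaction model can be sketched in a few lines. The form fields and the handler below are hypothetical, invented just to illustrate the round trip: the host sees nothing until the whole form arrives, then sends back a whole response screen.

```python
# Sketch of the 3270/web "batch" interaction model: the client fills in
# an entire form locally, ships it in one round trip, and receives an
# entire response screen. Field names are made up for illustration.

def host_transaction(form):
    """Server-side handler: sees nothing until the full form arrives."""
    errors = {f: "required" for f in ("name", "account") if not form.get(f)}
    if errors:
        # Redisplay the form with errors, a full screen later.
        return {"screen": "form", "errors": errors}
    return {"screen": "confirmation",
            "message": f"Updated account {form['account']}"}

# The user may have mistyped 'account' minutes ago, but only learns
# about it after submitting; there is no feedback per keystroke.
print(host_transaction({"name": "Ada", "account": ""}))
print(host_transaction({"name": "Ada", "account": "42"}))
```

The contrast with the personal computing model is that in the latter, validation and response would happen continuously as the user types, not in one deferred exchange.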

The idea that Kay brought to personal computing was immediacy. You immediately see the effect of what you are doing in real time. He saw it as an authoring platform. The browser model, at least in its commercial incarnation, was designed as a form submission and publishing platform.

Richard Gabriel wrote extensively about the difference between using an authoring platform and a publishing platform for writing, and its implications from the perspective of a programmer, in The Art of Lisp and Writing. The vast majority of software developers work in what is essentially a “publishing” environment, not an authoring environment, and this has been the case for most of software development’s history. The only time when most developers got closer to an authoring environment was when Basic interpreters were installed on microcomputers, and it became the most popular language that programmers used. The old Visual Basic offered the same environment. The interpreter got us closer to an authoring environment, but it still had one barrier: you either had to be editing code, or running your program. You could not do both at the same time, but there was less of a wait to see your code run because there was no compile step. Unfortunately with the older Basics, the programmer’s power was limited.
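The immediacy those interpreters provided can be sketched as a read-eval-print loop. This toy version is my own, handling only simple assignments and arithmetic via Python's `eval`, but it shows the defining property: each line takes effect the moment it is entered, with no compile step between editing and running.

```python
# Toy REPL step: evaluate one line immediately, mutating a session
# environment. Handles only "name = expr" assignments and arithmetic
# expressions; a real interpreter would parse its own language.

def repl_step(line, env):
    """Evaluate one line and return its result (None for assignments)."""
    if "=" in line:
        name, expr = (s.strip() for s in line.split("=", 1))
        env[name] = eval(expr, {}, env)
        return None
    return eval(line, {}, env)

session = {}
print(repl_step("x = 3", session))   # → None (assignment, takes effect now)
print(repl_step("x * 7", session))   # → 21 (visible immediately, no compile)
```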

People's experience with the browser has been slowly coming back toward the idea of authoring via the AJAX kludge. Kay showed a screenshot of a more complete authoring environment inside the browser, Dan Ingalls's Lively Kernel. It looks and behaves a lot like Squeak/Smalltalk; the programming language within Lively is JavaScript.

We've been willing to sacrifice immediacy to get rid of having to install apps on PCs. App installation is not what Kay envisioned with the Dynabook, but that's the model that prevailed in the personal computer marketplace. You know you've got a problem with your thinking if you're coming up with a bad design to compensate for past design decisions that were also bad. The bigger problem is that our field is not even conscious that the previous bad design was avoidable.

Kay said, “A large percentage of the main software that a lot of people use today, because it’s connected to the web, is actually inferior to stuff that was done before, and for no good reason whatsoever!”

I agree with him, but there are people who would disagree on this point. They are not enlightened, but they’re a force to be reckoned with.

What's happened with the web is the same thing that happened to PCs: the GUI was seen as a "good idea" and grafted onto what is essentially a minicomputer mindset, and then a mainframe mindset. Blogger Paul Murphy used to write, from a business perspective, about how two cultures in computing/IT have existed for more than 100 years. The older culture, which has operated continuously throughout this time, is data processing. This is the culture that brought us punch cards, mainframes, and 3270 terminals. It brought us the idea that the most important function of a computer system is to encode data, retrieve it on demand, and produce reports from automated analysis. The other culture is what Murphy called "scientific computing," and it's closer to what Kay promotes.

The data processing culture has used web applications to recapitulate the style of IT management that existed 30 years ago. Several years ago, I heard from a few data processing believers who told me that PCs had been just a fad, a distraction. “Now we can get back to real computing,” they said. The meme from data processing is, “The data is more important than the software, and the people.” Why? Because software gets copied and devalued. People come and go. Data is what you’ve uniquely gathered and can keep proprietary.

I think this is one reason why CS enrollment has fallen. It's becoming a dead language, like Latin. What's the point of going into it if the knowledge you learn is irrelevant to the IT jobs (read: the most abundant technology jobs) that are available, and there's little to no money going into exciting computing research? You don't need a CS degree to create your own startup, either; all you need is an application server, a database, and a scripting/development platform like PHP. Some university CS departments have responded by following what industry does in an effort to seem relevant and increase enrollment (i.e., keep the department going). The main problem with this strategy is that they're being led by the nose. They're not leading our society to better solutions.

What’s natural and what’s better

Kay tried to use some simple terms for the next part of his talk to help the audience relate to two ideas:

He gave what I think is an elegant graphical representation of one of the reasons things have turned out the way they have. He talked about modes of thought, and said that 80% of people are outer-directed, instrumental reasoners. They look at ideas and tools only in the context of whether they meet their current goals. They're very conservative, seeking consensus before making a change. Their goals are the most important thing; tools and ideas are only valid if they help meet those goals. This mentality dominates IT. Some would say, "Well, duh! Of course that's the way they think. Technology's role in IT is to automate business processes." That's the kind of thinking that got us to where we are. As Paul Murphy has written, the vision of "scientific computing," as he calls it, is to extend human capabilities, not to replace or automate them.

Kay said that 1% of people look at tools and respond to them, reconsidering their goals in light of what they see as the tool’s potential. This isn’t to say that they accept the tool as it’s given to them, and adjust their goals to fit the limitations of that version of the tool. Rather, they see its potential and fashion the tool to meet it. Think about Lively Kernel, which I mentioned above. It’s a proof of this concept.

There’s a symbiosis that takes place between the tool and the tool user. As the tool is improved, new ideas and insights become more feasible, and so new avenues for improvement can be explored. As the tool develops, the user can rethink how they work with it, and so improve their thinking about processes. As I’ve thought about this, it reminds me of how Engelbart’s NLS developed. The NLS team called it “bootstrapping.” It led to a far more powerful leveraging of technology than the instrumental approach.

The "80%" dynamic Kay described happened to NLS. Most people didn't understand the technology, how it could empower them, or how it was developed. They still don't. In fact, Engelbart is barely known for his work today. Through the work done in the early days of Xerox PARC, a few of his ideas managed to get into technology we've used over the years. He's best known now for inventing the mouse, but he did much more than that.

In the instrumental approach the goals precede the tools, and the approach is actually much more conservative than that of the 1-percenters. In the instrumental scenario people's goals are much the same whether the tools exist or not, and so there's no potential for the human-tool interaction to provide insight into what might be better goals.

Kay talked about this subject at Rebooting Computing, and I think he said that a really big challenge in education is to get students to shift from the “80%” mindset to one of the other modes of thought that have to do with deep thinking, at least exposing students to this potential within themselves. I think he would say that we’re not predisposed to only think one way. It’s just that left to our own devices we tend to fall into one of these categories.

I base the following on the notes Kay had on his slides in his speech:

He said that in the commercial computing realm (which is based on the way mainframes were sold to the public years ago) it’s about “news”: The emphasis is on functionality, and people as components. This approach also says you get your hardware, OS, programming language, tools, and user interface from a vendor. You should not try to make your own. This sums up the attitude of IT in a nutshell. It’s not too far from the mentality of CS in academia, either.

The "new" in the 1970s was a focus on the end-user and on what they can learn and do: "We should design how user interactions and learning will be done and work our way down to the functions they need!" In other words, rather than thinking functionality-first, think user-first, in the context of the computer being a new medium that extends human capabilities. Aren't "functionality" and "user" equally high priorities? Users just want functionality from computers, don't they? Ask yourself this question: is a book's purpose only to provide functionality? The idea at Xerox PARC was to create a friendly, inviting, creative environment that could be explored, built on, and used to create a better version of itself: an authoring platform, in a very deep sense.

Likewise, with Engelbart’s NLS, the idea was to create an information sharing and collaboration environment that improved how groups work together. Again, the emphasis was on the interaction between the computer and the user. Everything else was built on that basis.

By thinking functionality-first you turn the computer into a device, or metaphorically a device server. You're not exposing the full power of computing to the user. I'm going to use some analogies that Chris Crawford has written about previously. The typical IT mentality says it's too dangerous to expose the full power of computing to end users: they'll mess things up, either on purpose or by mistake, or they'll steal or corrupt proprietary information. Such low expectations... It's as if users are children or illiterate peasants who can't be trusted with the knowledge of how to read or write. They must be read to aloud rather than reading for themselves. Most IT environments believe that users cannot be allowed to write, for they are not educated enough to write in the computer's complex language system (akin to hieroglyphics). Some allow their workers to do some writing, but in limited and not very expressive languages; this "limits their power to do damage." If users were highly educated, they wouldn't be end users; they'd be scribes, writing what others will hear. Further, these "illiterates" are never taught the value of books (again, metaphorically). They are only taught to believe that certain tomes, written by "experts" (scribes), are of value. It sounds Medieval, if you ask me. Not that this is entirely IT's fault: computer science has fallen down on the job by not creating better language systems, for one thing, and our whole educational and societal thought structure makes it difficult to break out of this dynamic.

The “news” way of thinking has fit people into specialist roles that require standardized training. The “new” emphasized learning by doing and general “literacy.”

We’re in danger of losing it

Everyone interested in seeing what the technology developed at ARPA and Xerox PARC (the internet, personal computing, object-oriented programming) was intended to represent should pay special attention to the part of Kay’s speech titled “No Gears, No Centers: ARPA/PARC Outlook.” Kay told me about most of this a couple years ago, and I incorporated it into a guest post I wrote for Paul Murphy’s blog, called “The tattered history of OOP” (see also “The PC vision was lost from the get-go”). Kay shows it better in his presentation.

I think the purpose of Kay’s criticism of what exists now is to point out what we’ve lost, and continue to lose. Since our field doesn’t understand the research that created what we have today, we’ve helplessly taken it for granted that what we have, and the method of conservative incremental improvement we’ve practiced, will always be able to handle future challenges.

Kay said that because of what he’s seen with the development of the field of computing, he doesn’t believe what we have in computer science is a real field. And we are in danger of losing altogether any remnants of the science that was developed decades ago. This is because it’s been ignored. The artifacts that are based on that research are all around us. We use them every day, but the vast majority of practitioners don’t understand the principles and outlook that made it possible to create them. This isn’t a call to reinvent the wheel, but rather a qualitative statement: We’re losing the ability to make the kind of leap that was made in the 1970s, to go beyond what that research has brought us. In fact, in Kay’s view, we’re regressing. The potential scenario he paints reminds me of the science fiction stories where people use a network of space warping portals for efficient space travel and say, “We don’t know who built it. They disappeared a long time ago.”

I think there are three forces that created this problem:

The first was the decline in funding for interesting and powerful computing research. What's become increasingly clear to me from listening to CS luminaries who were around when this research was being done is that the world of personal computing and the internet we have today is an outgrowth, you could really say an accident, of defense funding during the Cold War: ARPA, specifically research funded by the IPTO (the Information Processing Techniques Office). This isn't to say that ARPA was like a military operation; far from it. When it started, it was staffed by academics, and they were given wide latitude in their computing research. An important ingredient was the expectation that researchers would come up with big, high-risk ideas. We don't see this today.

This computing research got as much funding as it did because of the Sputnik launch, which woke Americans up to the importance of mathematics, science, and engineering, through the perception that they mattered for the defense of the country. The U.S. knew that the Soviets were using computer technology as part of their military operations, and it wanted to be able to compete with them in that realm.

In terms of popular perception, there's no longer a sense that we need technical supremacy over an enemy. However, computing is still important to national defense, even taking the war against Al Qaeda into account. A few defense leaders and prominent technology figures in the know have said that Al Qaeda is a very early adopter of new technologies, and an innovator in using them for its own ends.

In 1970 computing research split into government-funded and privately funded parts, with some ARPA researchers moving to Xerox's newly created PARC facility. This is where they created the modern form of personal computing, some aspects of which made it into the technologies we've been using since the mid-1980s. The groundbreaking work at PARC ended in the early 1980s.

The second force, which Kay identified, was the rise of the commercial PC market. He's said that computers were introduced to society before we were ready to understand what's really powerful about them. This market invited us in, and acquainted us with very limited ideas about computing and programming. Students went into academic CS with an eye toward making it big like Bill Gates (never mind that Gates is actually a college dropout), and in the process of accommodating our limited ideas about computing, the CS curriculum was dumbed down. I didn't go into CS dreaming of becoming a billionaire, but I had my eye on a career at first.

It’s been difficult for universities in general to resist this force, because it’s pretty universal. The majority of parents of college-bound youth have believed for decades that a college degree means a secure economic future–meaning a job, a career that pays well. It’s not always true, but it’s believed in our culture. This sets expectations for universities to be career “launching pads” and centers for social networking, rather than institutions that develop a student’s faculties, their ability to think, and expose them to high level perspectives to improve their perception.

The third force is, as Paul Murphy wrote, the imperative of IT since the earliest days of the 20th century: to use technology to extend and automate bureaucracy. I don't see bureaucracy as a totally bad thing. I think of Alexander Hamilton's quote about public debt, rephrased as, "A bureaucracy, if not excessive, will be to us a blessing." However, in its traditional role it creates and follows rules and procedures formulated in a hierarchy. There's accountability for meeting directives, not broad goals. Its thinking is deterministic and algorithmic. IT, being traditionally a bureaucratic function, models this (we should be asking ourselves, "Does it have to be limited to this?"). My sense is that CS responded to the economic imperatives: it felt the need to lean toward what industry was doing and how it thinks, and it tried to add value by emphasizing optimization strategies. The difference is that CS used to have a strong belief in staying true to some theoretical underpinnings, which somewhat counterbalanced the strong pull from industry. That resolve is slipping.

Kay and I have talked a little about math and science education. He said that our educational system has long viewed math and science as important, and so these subjects have always been taught, but our school system hasn’t understood their significance. So the essential ideas of both tend to get lost. What the computer brings to light as a new medium is that math (as mathematicians understand it, not how it’s typically taught) and science are essential for understanding computing’s potential. They are foundations for a new literacy. Without this perspective, and the general competencies in math and science to support it, both students and universities will likely continue the status quo.

Outlook: The key ingredient

Kay’s focus on outlook really struck me, because we have such an emphasis in our society on knowledge and IQ. My view of this has certainly evolved as I’ve gotten into my studies.

Whenever you interview for a job in the IT industry you get asked about your knowledge, and maybe some questions that probe your management skills. At companies that actually invent things, particularly those known as places "where the tech whizzes work," you are probed for your mental faculties. I had an interview five years ago with a software company where the employer gave me something that resembled an IQ test. But I have never seen nor heard of a technology company that asks questions about one's outlook.

Kay said,

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that. And in contrast to these two, IQ is just a big lump of lead. It’s one of the worst things in our field that we have clever people in it, because like Leonardo [da Vinci] none of us is clever enough to deal with the scaling problems that we’re dealing with. So we need to be less clever and be able to look at things from better points of view.

This is truly remarkable! I have never heard anyone say this, but I think he may be right. All of the technology that has been developed, that has been used by millions of people, and has been talked about lo these many years was created by very smart people. More often than not what’s been created, though, has reflected a limited vision.

Given society's perception of computing, I doubt Kay's vision of human-computer interaction could have flown in the commercial marketplace because, as he's said, we weren't ready for it. In my opinion we're even less ready for it now. But certainly the personal and networked computing vision that was developed at PARC could have been implemented in machines the consuming public could have purchased decades ago. That's where one could say, "You had your chance, but you blew it." Vendors didn't have the vision for it.

What’s needed in the future

One of Kay’s last slides referred to Marvin Minsky. I am not familiar with Minsky, and I guess I should be. He said that in terms of software we need to move from a “biology of systems” (architecture) to a “psychology of systems” (which I’m going to assume for the moment means “behavior”). I don’t really know about this, so I’m just going to leave it at that.

In a chart titled “Software Has Fallen Short” Kay made a clearer case than he has in the past for not idolizing the accomplishments that were made at ARPA/Xerox PARC. In the past he’s tried to discourage people from getting too excited about the PARC stuff, in some cases downplaying it as if it’s irrelevant. He’s always tried to get people to not fall in love with it too much, because he wanted people to improve upon it. He used to complain that the Lisp and Smalltalk communities thought that these languages were the greatest thing, and didn’t dream of creating anything better. Here he explains why it’s important to think beyond the accomplishments at PARC: It’s insufficient for what’s needed in the future. In fact the research is so far behind that the challenges are getting ahead of what the current best research can deal with.

He said that the PARC stuff is now “news,” and my guess is he means “it’s been turned into news.” He talked about this earlier in the speech. The essential ideas that were developed at PARC have not made it into the computing culture, with the exception of the internet and the GUI. Instead some of the “new” ideas developed there have been turned into superficial “news.”

Those who are interested in thinking about the future will find the slide titled “Simplest Idea” interesting. Kay threw out some concepts that are food for thought.

A statement that Kay closed with is one he’s mentioned before, and it depresses me. He said that when he’s traveled around to universities and met with CS professors, they think what they’ve got “is it.” They think they’re doing real science. All I can do is shake my head in disbelief at that. Even my CS professors understood that what they were teaching was not a science. For the most part what’s passing for CS now is no better than what I had–excepting the few top schools.

Likewise, I’ve heard accounts saying that there are some enterprise software leaders who (mistakenly) think they understand software engineering, and that it’s now a fully mature engineering discipline.

Coming full circle

Does computer science have a future? I think as before, computing is going to be at the mercy of events, and people’s emotional perceptions of computing’s importance. This is because the vast majority of people don’t understand what computing’s potential is. Today we see it as “current” not “future.” I think that people’s perception of computing will become more remote. Computing will be in devices we use every day, as we can see today, and we will see them as digital devices, where they used to be analog.

"Digital," not computing, has become the new medium. Computing has been overlaid with analog media (text, graphics, audio, video). Kay has said for a while now that this is how it is with new media: when one arises, the generation that's around doesn't see its potential. They see it as the "new old thing" and just use it to optimize what they did before. Computing for now is just the magical substrate that makes "digital" work. The essential element that makes computing powerful and dynamic has been pushed to the side, with a couple of exceptions, as it often has been.

Kay didn't talk about this in the video, but he and I have discussed it previously. There used to be a common concept in computing, which has come to be called "generative technology": the expectation that digital technology was open-ended, that you could do with it what you wanted. This idea has been severely damaged by the way the web has developed. First, commercial PCs, which were designed to be used alone and on LANs, were thrust onto the internet running native code and careless programming that didn't anticipate security risks. In addition, identities became more loosely managed, which became a problem as people figured out how to hide behind pseudonyms. Second, the web browser wasn't initially designed as a programmable medium. A scripting language was added to browsers, but it was not made accessible through the browser itself, and it was not designed very well, either.

Many security catastrophes ensued, each eroding people's confidence in the safety of the internet and in the image of programming. People who didn't know how to program, much less how the web worked, felt like victims of those who did (though this went all the way back to stand-alone PCs, when mischievous and destructive viruses circulated in infected executables on BBSes and floppy disks). Programming came to be seen as a suspicious activity rather than a creative one. And so as vendors put up defensive barriers to compensate for their own flawed designs, they only reinforced the design decision made when the commercial browser was conceived: that programming would be restricted to people who work for service providers. Ordinary users should neither expect to gain access to code to modify it, nor want to. It's gotten to the point we see today where even text formatting on message boards is restricted: HTML tags are restricted or banned altogether in favor of special codes, and you can't use CSS at all. Programming by the public on websites is usually forbidden because it's seen as a security risk, and most web operators don't see the value of letting people program on their site.
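As a concrete, hypothetical illustration of that restriction, here is roughly what a message board's post renderer does: raw HTML from the user is escaped into inert text, and only a small whitelist of "special codes" is translated into formatting.

```python
import html

# Only these BBCode-style codes are allowed; all raw HTML is neutralized.
ALLOWED = {"b": "b", "i": "i"}

def render_post(text):
    """Escape user-supplied HTML, then translate the few permitted codes."""
    escaped = html.escape(text)               # "<script>" becomes harmless text
    for code, tag in ALLOWED.items():
        escaped = escaped.replace(f"[{code}]", f"<{tag}>")
        escaped = escaped.replace(f"[/{code}]", f"</{tag}>")
    return escaped

safe = render_post("[b]hello[/b] <script>alert(1)</script>")
```

The security rationale is real, but notice what else it does: the user's "writing" is confined to a tiny, fixed vocabulary chosen by the site operator.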

Kay complained that despite the initial idealistic visions of how the internet would develop, it’s still difficult or impossible to send a simulation to somebody on a message board, or through e-mail, to show the framework for a concept. What I think he had envisioned for the internet is a multimedia environment in which people could communicate in all sorts of media: text, graphics, audio, video, and simulations–in real time. Specialized software has made this possible using services you can pay for, but it’s still not something that people can universally access on the internet.

To get to the future we need to look at the world differently

I agree with the sentiment that in order to make the next leaps we need to not accept the world as it appears. We have to get out of what we think is the current accepted reality, what we’ve put up with and gotten used to, and look at what we want the world to be like. We should use that as inspiration for what we do next. One strategy to get that started might be to look at what we find irritating about what currently exists, what we would like to see changed, and think about it from a fundamental, structural perspective. What are some things you’ve tried to do, but given up on, because when you tried to use the tools or resources available they just weren’t up to the task? What sort of technology (perhaps a kind that doesn’t exist yet) do you think would do the job?

For further reading/exploring:

A complete system (apps. and all) in 20KLOC – Viewpoints Research

Demonstration of Lively Kernel, by Dan Ingalls

Edit 8-21-2009: A reader left a comment to an old post I wrote on Lisp and Dijkstra, quoting a speech of Dijkstra’s from 10 years ago, titled “Computing Science: Achievements and Challenges.” Towards the end of his speech he bemoaned the fact that CS in academia, and industry in the U.S. had rejected the idea of proving program correctness. He attributed this to an increasing mathematical illiteracy here. I think his analysis of cause and effect is wrong. My own CS professors liked the discipline that proofs of program correctness provided, but they rejected the idea that all programs can and should be proved correct. Dijkstra held the view that CS should only focus on programs that could be proved correct.

I think Dijkstra’s critique of anti-intellectualism in the U.S. is accurate, however, including our aversion to mathematics, and I found that it answered some questions I had about what I wrote above. It also gets to the heart of one of the issues I harp on repeatedly in my blog. His third bullet point is most prescient. Quoting Dijkstra:

  • The ongoing process of becoming more and more an amathematical society is more an American specialty than anything else. (It is also a tragic accident of history.)
  • The idea of a formal design discipline is often rejected on account of vague cultural/philosophical condemnations such as “stifling creativity”; this is more pronounced in the Anglo-Saxon world where a romantic vision of “the humanities” in fact idealizes technical incompetence. Another aspect of that same trait is the cult of iterative design.
  • Industry suffers from the managerial dogma that for the sake of stability and continuity, the company should be independent of the competence of individual employees. Hence industry rejects any methodological proposal that can be viewed as making intellectual demands on its work force. Since in the US the influence of industry is more pervasive than elsewhere, the above dogma hurts American computing science most. The moral of this sad part of the story is that as long as computing science is not allowed to save the computer industry, we had better see to it that the computer industry does not kill computing science. [my emphasis]

Edit 1-2-2014: Alan Kay gave a presentation called “Normal Considered Harmful” in 2012, at the University of Illinois, where he repeated some of the same topics as his 2009 speech above, but he clarified some of the points he made. So I thought it would be valuable to refer people to it as well.

Related post: The necessary ingredients for computer science

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I’ve been coming across videos lately that get more into the creative ideas that inspired research projects which were out of this world for their time, and give me feelings of inadequacy even today.

Two anniversaries happened in November 2008. One was the 40th anniversary of the idea of the Dynabook. The other was the 40th anniversary of Douglas Engelbart’s NLS demo.

Alan Kay – the Dynabook concept, 1968

Alan Kay gave a historical background on the ideas that led to the Dynabook concept. It’s 1 hour, 44 minutes.

Douglas Engelbart – NLS demo, 1968

Here is a collection of video clips of the 40th anniversary event for the NLS demo, held by the Stanford Research Institute (SRI). The original members of the NLS development team were in attendance to talk about the experience of building this amazing system. They give more details about how it was constructed. One of Engelbart’s daughters, Christina, talks about the conceptual framework her father implemented through the process of building NLS–incremental improvement of the group and the system. NLS was intended to increase the working power of professional groups through a concept Engelbart called “augmentation”, augmenting the human intellect. His goals were similar to Licklider’s concept of human-computer symbiosis.

My thanks go to Rosemary Simpson of Brown University for providing these links. This is great stuff.

In an interview on Nerd TV a few years ago Douglas Engelbart talked about the struggle he went through to implement his vision. It’s a sad tale, with occasional triumphs. It’s good to see his efforts getting public recognition in the present day.

You can learn more about Engelbart's work at the Doug Engelbart Institute. Of particular interest, I think, is the library section, where you can view the complete 1968 demo along with the papers he wrote. It's interesting to note when the papers were written, because his concepts of what was possible with the system he envisioned will sound familiar to people accustomed to today's computer technology. Considering what the norm in computing was at the time, this is amazing.

Related post: Great moments in modern computer history

Read Full Post »

In my guest post on Paul Murphy's blog, "The PC vision was lost from the get go," I spoke to a concept Alan Kay has held since the 1970s: that the personal computer is a new medium, like the book at the time the technology of the printing press was brought to Europe, around 1439 (I also spoke some about this in "Reminiscing, Part 6"). Kay came to this realization upon witnessing Seymour Papert's Logo system being used with children. More recently, Kay has spoken with 20/20 hindsight about how, as with the book historically, people have been missing what's powerful about computing: like the early users of the printing press, we've been automating and reproducing old media onto the new medium. We're even automating old processes that were meant for an era that's gone.

Kay spoke about the evolution of thought about the power of the printing press in one or two of his speeches entitled The Computer Revolution Hasn’t Happened Yet. In them he said that after Gutenberg brought the technology of the printing press to Europe, the first use found for it was to automate the process of copying books. Before the printing press, books were copied by hand. It was a laborious process, and it made books expensive. Only the wealthy could afford them. I remember an episode called “The Paperback Computer” from a documentary mini-series that came out around 1992, “The Machine That Changed The World.” It said that there were such things as libraries going back hundreds of years, but that all of the books were chained to their shelves. Books were made available to the public, but people had to read them at the library. They could not check them out as we do now, because they were too valuable. Likewise today, with some exceptions to promote mobility, we “chain” computers to desks or some other anchored surface to secure them, because they’re too valuable.

Kay has said in his recent speeches that there were a few rare people during the early years of the printing press who saw its potential as a new, emerging medium. Most of the people who knew about it at the time did not see this. They only saw it as, “Oh good! Remember how we used to have to copy the Bible by hand? Now we can print hundreds of them for a fraction of the cost.” They didn’t see it as an avenue for thinking new ideas. They saw it as a labor-saving device for doing what they had been doing for hundreds of years. This view of the printing press predominated for more than 100 years. Eventually a generation grew up not knowing the old toils of copying books by hand. They saw that with the printing press’s ability to disseminate information and narratives widely, it could be a powerful new tool for sharing ideas and arguments. Once literacy began to spread, what flowed from that was the revolution of democracy. People literally changed how they thought. Kay said that before this time people appealed to authority figures to find out what was true and what they should do, whether they be the king, the pope, etc. When the power of the printing press was realized, people began appealing instead to rational argument as the authority. It was this crucial step that made democracy possible. This alone did not do the trick; there were other factors at play as well, but this, I think, was a fundamental first step.

Kay has believed for years that the computer is a powerful new medium, but in order for its power to be realized we have to perceive it in a way that enables it to be powerful to us. If we see it only as a way to automate old media: text, graphics, animation, audio, video; and old processes (data processing, filing, etc.), then we aren’t getting it. Yes, automating old media and processes enables powerful things to happen in our society via efficiency. It further democratizes old media and modes of thought, but it’s like addressing just the tip of the iceberg. This brings the title of Alan Kay’s speeches into clear focus: The computer revolution hasn’t happened yet.

Below is a talk Alan Kay gave at TED (Technology, Entertainment, Design) in 2007, which I think gives some good background on what he would like to see this new medium address:

“A man must learn on this principle, that he is far removed from the truth” – Democritus

Squeak in and of itself will not automatically get you smarter students. Technology does not really change minds. The power of EToys comes from an educational approach that promotes exploration, called constructivism. Squeak/EToys creates a “medium to think with.” What the documentary “Squeakers” makes clear is that EToys is a tool, like a lever, that makes this approach more powerful, because it enables math and science to be taught better using this technique. (Update 10/12/08: I should add that whenever the nature of Squeak is brought up in discussion, Alan Kay says that it’s more like an instrument, one with which you can “mess around” and “play,” or produce serious art. I wrote about this discussion that took place a couple years ago, and said that we often don’t associate “power” with instruments, because we think of them as elegant but fragile. Perhaps I just don’t understand at this point. I see Squeak as powerful, but I still don’t think of an instrument as “powerful.” Hence the reason I used the term “tool” in this context.)

From what I’ve read in the past, constructivism has gotten a bad reputation, I think primarily because it’s fallen prey to ideologies. The goal of constructivism as Kay has used it is not total discovery-based learning, where you just tell the kids, with no guidance, “Okay, go do something and see what you find out.” What this video shows is that teachers who use this method lead students to certain subjects, give them some things to work with within the subject domain, things they can explore, and then set them loose to discover something about them. The idea is that by the act of discovery through experimentation (i.e., play) the child learns concepts better than if they are spoon-fed the information. There is guidance from the teacher, but the teacher does not lead them down the garden path to the answer. The children do some of the work to discover the answers themselves, once a focus has been established. And the answer is not just “the right answer,” as is often called for in traditional education, but what the student learned and how the student thought in order to get it.

Learning to learn; learning to think; learning the critical concepts that have gotten us to this point in our civilization is what education should be about. Understanding is just as important as the result that flows from it. I know this is all easier said than done with the current state of affairs, but it helps to have ideals that are held up as goals. Otherwise what will motivate us to improve?

What Kay thinks, and is convinced of by the results he’s seen, is that the computer can enable children of young ages to grasp concepts that would be impossible for them to get otherwise. This keys right into a philosophy of computing that J.C.R. Licklider pioneered in the 1960s: human-computer symbiosis (“man-computer symbiosis,” as he called it). Through a “coupling” of humans and computers, the human mind can think about ideas it had heretofore not been able to think. The philosophers of symbiosis see our world becoming ever more complex, so much so that we are at risk of it becoming incomprehensible and getting away from us. I personally have seen evidence of that in the last several years, particularly because of the spread of computers in our society and around the world. The linchpin of this philosophy is, as Kay has said recently, “The human mind does not scale.” Computers have the power to make this complexity comprehensible. Kay has said that the reason the computer has this power is that it’s the first technology humans have developed that is like the human mind.

Expanding the idea

Kay has been focused on using this idea to “amp up” education, to help children understand math and science concepts sooner than they would in the traditional education system. But this concept is not limited to children and education. This is a concept that I think needs to spread to computing for teenagers and adults. I believe it should expand beyond the borders of education, to business computing, and the wider society. Kay is doing the work of trying to “incubate” this kind of culture in young students, which is the right place to start.

In the business computing realm, if this is going to happen we are going to have to view business in the presence of computers differently. I believe for this to happen we are going to have to literally think of our computers as simulators of “business models.” I don’t think the current definition of “business model” (a business plan) really fits what I’m talking about, and I don’t want to confuse people. I’m thinking along the lines of schemas and entities forming relationships which are dynamic and therefore late-bound, but with an allowance for policy to govern what can change and how, with the end goal of helping business be more fluid and adaptive. Tying it all together, I would like to see a computing system that enables the business to form its own computing language and terminology for specifying these structures, so that as the business grows it can develop “literature” about itself, which can be used both by people who are steeped in the company’s history and current practices, and by those who are new to the company and trying to learn about it.
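To make this a little more concrete, here is a toy sketch of what I mean by a late-bound model governed by policy. Everything in it is my own illustration (the Entity and Policy classes, the order example); it is not an existing system, just the shape of the idea:

```python
class Policy:
    """Governs what can change in the model and how: here, some
    fields are frozen once set, while everything else stays open."""
    def __init__(self, frozen_fields=()):
        self.frozen_fields = set(frozen_fields)

    def allows_change(self, field):
        return field not in self.frozen_fields


class Entity:
    """A business entity whose schema is late-bound: fields can be
    added or changed at runtime, subject to the governing policy."""
    def __init__(self, kind, policy):
        self.kind = kind
        self.policy = policy
        self.fields = {}

    def set(self, field, value):
        # A first assignment is always allowed; changing an existing
        # value is allowed only if the policy permits it.
        if field in self.fields and not self.policy.allows_change(field):
            raise ValueError(f"policy forbids changing {field!r}")
        self.fields[field] = value


# An "order" whose customer is fixed by policy, but whose status --
# and any field the business thinks of later -- can evolve freely.
policy = Policy(frozen_fields=["customer_id"])
order = Entity("order", policy)
order.set("customer_id", "C-42")
order.set("status", "received")
order.set("status", "shipped")    # fine: status is not frozen
order.set("carrier", "FastShip")  # a field added after the fact
```

The point of the sketch is that the “schema” lives in data rather than in frozen code, so the business can grow its own vocabulary over time, while policy keeps the changes governed rather than chaotic.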

What this requires is computing (some would say “informatics”) literacy on the part of the participants. We are a far cry from that today. There are millions of people who know how to program at some level, but the vast majority of people still do not. We are in the “Middle Ages” of IT. Alan Kay said that Smalltalk, when it was invented in the 1970s, was akin to Gothic architecture. As old as that sounds, it’s more advanced than what a lot of us are using today. We programmers, in some cases, are like the ancient pyramid builders. In others, we’re like the scribes of old.

This powerful idea of computing, that it is a medium, should come to be the norm for the majority of our society. I don’t know how yet, but if Kay is right that the computer is truly a new medium, then it should one day become as universal and influential as books, magazines, and newspapers have been historically.

In my “Reminiscing” post I referred to above, I talked about the fact that even though we appeal more now to rational argument than we did hundreds of years ago, we still get information we trust from authorities (called experts). I said that what I think Kay would like to see happen is for people to use this powerful medium to take information about some phenomenon that’s happening, form a model of it, and, by watching it play out, inform themselves about it. Rather than appealing to experts, they can understand what the experts see, but see it for themselves. By this I mean that they can manipulate the model to play out other scenarios that they see as relevant. This could be done in a collaborative environment so that models could be checked against each other, and most importantly, against the real world. What I said, though, is that this would require a different concept of what it means to be literate, and a different model of education and research.
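As a toy illustration of what “seeing it for yourself” might look like, here is a tiny model a reader could re-run with their own scenarios. The model and all the numbers are made up by me purely for illustration; a real medium for this would involve far richer simulations:

```python
def run_model(initial_cases, growth_rate, days):
    """Play out a simple growth scenario day by day.
    A stand-in for the richer models described above."""
    cases = [float(initial_cases)]
    for _ in range(days):
        cases.append(cases[-1] * (1 + growth_rate))
    return cases

# Instead of taking an expert's projection on authority, the reader
# replays the model under the scenarios they see as relevant:
baseline = run_model(initial_cases=100, growth_rate=0.10, days=7)
reduced  = run_model(initial_cases=100, growth_rate=0.05, days=7)
print(round(baseline[-1]), round(reduced[-1]))  # prints "195 141"
```

The sketch only shows the manipulate-and-replay loop; the hard parts, checking models against each other and against the real world, are exactly what a new kind of literacy would have to teach.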

This is all years down the road, probably decades. The evolution of computing moves slowly in our society. Our methods of education haven’t changed much in 100 years. The truth is the future up to a certain point has already been invented, and continues to be invented, but most are not perceptive enough to understand that, and “old ways die hard,” as the saying goes. Alan Kay once told me that “the greatest ideas can be written in the sky” and people still won’t understand, nor adopt them. It’s only the poor ideas that get copied readily.

I recently read that the Squeakland site has been updated (it looks beautiful!), and that a new version of the Squeakland version of Squeak has been released on it. They are now just calling it “EToys,” and they’ve dropped the Squeak name. Squeak.org is still up and running, and they are still making their own releases of Squeak. As I’ve said earlier, the Squeakland version is configured for educational purposes. The squeak.org version is primarily used by professional Smalltalk developers. Last I checked it still has a version of EToys on it, too.

Edit: As I was writing this post I went searching for material for my “programmers” and “scribes” reference. I came upon one of Chris Crawford’s essays. I skimmed it when I wrote this post, but I reread it later, and it’s amazing! (Update 11/15/2012: I had a link to it, but it’s broken, and I can’t find the essay anymore.) It caused me to reconsider my statement that we are in the “Middle Ages” of IT. Perhaps we’re at a more primitive point than that. It adds another dimension to what I say here about the computer as medium, but it also expounds on what programming brings to the table culturally.

Here is an excerpt from Crawford’s essay. It’s powerful because it surveys the whole scene:

So here we have in programming a new language, a new form of writing, that supports a new way of thinking. We should therefore expect it to enable a dramatic new view of the universe. But before we get carried away with wild notions of a new Western civilization, a latter-day Athens with modern Platos and Aristotles, we need to recognize that we lack one of the crucial factors in the original Greek efflorescence: an alphabet. Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain — after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don’t have a damn thing to say to the world — they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglyphics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharaohs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones — and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

You can see the same thing in the output of the current generation of programmers, especially in the field of computer games. It’s lifeless. Sure, their stuff is technically very good, but it’s like the Egyptian statuary: technically very impressive, but the faces stare blankly, whereas Greek statuary ripples with the power of life.

What we need is a means of democratizing programming, of taking it out of the soulless hands of the programmers and putting it into the hands of a wider range of talents.

Related post: The necessary ingredients for computer science

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »