Is OOP all it’s cracked up to be?: My guest post on ZDNet

Paul Murphy saw fit to give me another guest spot on his blog. The post is called “The tattered history of OOP”, and it talks about the history of OOP practice, where the idea came from, and how industry has implemented it. If you’ve been reading my blog, this will probably be review. I’m just spreading the message a little wider.

Paul has an interesting take on the subject. He thinks OOP is a failure in practice because, as it’s been implemented, it’s just another way to make procedure calls. I agree with him for the most part. He’s informed me that he’s going to put up another post soon that gets further into why he thinks OOP is a failure. I’ll update this post when that’s available.

In short, where I’m coming from is that OOP, in the original vision that was created at Xerox PARC, still has promise. The current implementation that most developers use has architectural problems that the PARC version did not, and it still promotes a mode of thinking that’s compatible with procedural programming.

Update 6/3/08: Paul Murphy’s response to my article is now up, called “Oddball thinking about OOP”. He argues that OOP is a failure because it’s an idea that was incompatible with digital computing to begin with, and is better suited to analog computing. I disagree that this is the reason for its failure, but to each their own.

Update 8/1/09: I realized later I may have misattributed a quote to Albert Einstein. Paul Murphy talked about this in a post sometime after “The tattered history of OOP” was published. I said that insanity is, “Doing the same thing over and over again and expecting a different result.” Murphy said he realized that this was misattributed to Einstein. I did a little research myself and it seems like there’s confusion about it. I’ve found sites of quotations that attribute this saying to Einstein. On Wikipedia though it’s attributed to Rita Mae Brown, a novelist, who wrote this in her 1983 book, Sudden Death. I don’t know. I had always heard it attributed to Einstein, though I agree with the naysayers that no one has produced a citation that would prove he said it.

Java: Let it be

Joshua Bloch, Chief Java Architect at Google, gave a talk entitled “The Closures Controversy” (PowerPoint slides) at Javapolis in December 2007. I found it online through reddit, and it intrigued me, because I think it illustrates a disconnect between what we as an industry are doing and the goals we have. Bloch also makes what I think is a reasonable argument for limiting Java.

Those of you who’ve read what I’ve written for a while might be surprised by that statement. I have been critical of the IT industry for not thinking in more powerful terms about programming/computing. I am not endorsing Java. After watching his talk I was prepared to lay into Bloch for not having higher ambitions for the language, but as I thought about it I realized that it’s unreasonable to expect a computing environment to be more than what it was designed for. It helps me see that Java has a shelf life. To expect more would mean trying to change it into something it’s not.

The appeal of Java

Bloch began his talk using Gosling’s ideas from his article The Feel of Java, published in IEEE Computer in 1997, as a touchstone for what Java ought to be like in the future:

It is a language for a job. Doing language research was an anti-goal [in the development of Java] . . . It is practical, not theoretical. It’s driven by what people need. Theory provides rigour, cleanliness, and cohesiveness. [Java had] no new ideas. It took ideas from existing languages and put them together. It maybe added a few new things, but that wasn’t the goal. And this is perhaps the most important slide in the whole talk. He said, ‘Don’t fix it until it chafes’, to keep it simple. Don’t put anything in the language just because it’s nice. Require several real instances before including any feature. In other words, ‘Just Say No, until threatened with bodily harm.’ He said Java feels hyped, playful, flexible, deterministic, non-threatening, and rich, and like I can just code.

Bloch continued from the article:

Java is a blue collar language. It’s not a PhD thesis. It’s a language for a job. Java feels familiar to many different programmers, because we preferred tried and tested things.

When Gosling said Java is “not theoretical”, he was comparing it to languages/VMs that are based on computer science theory, like Lisp, which is based on lambda calculus. By “practical” he meant the instrumentalist approach to design, which is that the feature set and the architecture the language/VM supports should handle current problems. Nothing more, nothing less. In my view this is short-sighted. It means that after acquiring and using the language to solve current problems, you will run into problems the language wasn’t designed to handle, because new challenges arise. You will have to improvise within a structure that is ill-suited to the task, until whoever is in charge of the language adds new structures that handle the problem effectively. Wash, rinse, repeat. I have a feeling that this approach is a defense against languages whose designers add features willy-nilly, without consideration for how they fit into the overall goal of the platform. Bloch points out that “theoretical” languages don’t have this problem, but he passes by it quickly.

Java was a swing in the other direction from languages like C/C++ that were said to “give you more than enough rope to hang yourself with”. I guess programmers “hung themselves” more often than not. C and C++ are powerful in the sense of allowing you to access the hardware directly inside of a language that gives you high-level structures. If you’re interested in doing anything more than building kernels, device drivers, or VMs/compilers, they’re pretty weak. Java rode a wave that was happening at the time it was introduced. C and C++ were the dominant languages. All sorts of projects were done in them, including applications. Java provided a productivity boost to project groups with this orientation by getting rid of many of the pitfalls of using C/C++ (buffer overruns, memory leaks, null or uninitialized pointer references, etc.), but in terms of building more abstract software like applications/environments with the power the language and the VM gave you, it wasn’t that different from C/C++. So in essence it played to the tastes of programmers of the time. It went for familiarity, but without having a particularly good technical reason for it. Java doesn’t allow direct access to the hardware. So why did the language designers feel it necessary to play “follow the leader” with C/C++ in terms of conventions and power? It was an easier sell to programmers. They like familiarity.

Java meets the programming culture where it is. As long as the culture is ignorant of the fact that the architecture it’s using is ill-suited to the task at hand, then technologists can convince themselves and others that problems which arise from using a limited architecture for large projects just go with the territory, and there’s nothing anyone can do about it. It’s about time we became more enlightened, but it’s going to take us admitting that despite our masterful technical skills we don’t know very much about why it is we do the things we do with computers, beyond the joy of solving puzzles. According to Alan Kay, and I think he’s right about this, it will take learning about and exposing ourselves to other things besides computers.

Quoting Kay, from The Early History of Smalltalk:

Should we even try to teach programming? I have met hundreds of programmers in the last 30 years and can see no discernible influence of programming on their general ability to think well or to take an enlightened stance on human knowledge. If anything, the opposite is true. Expert knowledge often remains rooted in the environments in which it was first learned–and most metaphorical extensions result in misleading analogies. A remarkable number of artists, scientists, philosophers are quite dull outside of their specialty (and one suspects within it as well). The first siren’s song we need to be wary of is the one that promises a connection between an interesting pursuit and interesting thoughts. The music is not in the piano, and it is possible to graduate Juilliard without finding or feeling it.

I have also met a few people for whom computing provides an important new metaphor for thinking about human knowledge and reach. But something else was needed besides computing for enlightenment to happen.

Tools provide a path, a context, and almost an excuse for developing enlightenment, but no tool ever contained it or can dispense it.

The state of Java today

Bloch’s introduction felt like cheerleading for Gosling’s vision for Java, but then he got into what’s painful about it now. He talked about the Java community’s involvement in the design of Java, and the wisdom of its actions. “How are we doing today?”, Bloch asked the audience. He showed several examples of messy Java generics; they didn’t look much different from C++. The audience laughed. He said, “Not so well, unfortunately. This is how Java looks now.”

The way he described things made it sound like Sun “jumped the shark” with generics. He was more polite than to say this. He said generics per se were not a bad addition to the language, but their implementation allowed so much freedom that Java code is becoming incomprehensible even to people who have been working with the language for years.

Generics: An escape hatch

In my view generics are just an attempt to get out from under the limitations of Java’s strict type system. The same goes for generics in .Net.

Generics are the “managed code” world’s version of C++ templates. They are also called “parameterized types”. They give you some of the programming flexibility of a dynamic language. In C++ and .Net, specialized, statically typed versions of the generic code get generated for whatever types the calling code asks for; Java instead checks the type parameters at compile time and then erases them. What generics don’t give you is the “get to the point” feel of programming in a powerful dynamic language, because you have to tell the compiler how many types the generic code will accept, in what form it will accept them, where type information needs to go in the generic code, and what form type information will take when data is returned from the routines you write. In other words, you have to supply metadata for your code so that the static type system can deal with it. I call it “overhead”, but that’s just me. What static type systems force you to do is describe the data flow of your code. This isn’t so that humans can understand it (though with some discipline it’s possible to create readable code this way), but so that the compiler and runtime don’t have to figure out what the data flow will be when your code is executed. It’s really a form of optimization.
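As a small, hypothetical sketch of that metadata (the names here are mine, not from Bloch’s talk): the `<T>` declarations up front are exactly the kind of data-flow annotation I’m describing. A dynamic language would let you skip all of it.

```java
// A minimal sketch of the "metadata" a static type system asks for: the
// type parameter <T> must be declared before the compiler will accept the
// generic code, but callers then get static checking with no casts.
public class GenericsDemo {
    // A generic pair: T is fixed per use site, checked at compile time.
    static class Pair<T> {
        final T first, second;
        Pair(T first, T second) { this.first = first; this.second = second; }
    }

    // A generic method: the <T> up front is the declared data-flow contract.
    static <T> Pair<T> swap(Pair<T> p) {
        return new Pair<T>(p.second, p.first);
    }

    public static void main(String[] args) {
        Pair<String> p = swap(new Pair<String>("a", "b"));
        System.out.println(p.first + p.second); // prints ba
    }
}
```

Note that the usefulness is real (the compiler rejects `swap` on mismatched types), but every angle bracket is information supplied for the compiler’s benefit, not the reader’s.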

Bloch explored where the complexity that now bedevils Java comes from. He got into talking about feature creep. It’s not the number of features, it’s the permutations of feature interactions. “When you add a feature to an already rich language, you vastly increase its complexity,” he said.

The “quagmire” of closures

First, let me get a definition out of the way. A closure can be thought of as an anonymous function; it is not officially a method of any class. In dynamic languages you can spontaneously create closures inline in your code and use them on the spot. You can pass them around from method to method, and they never lose their integrity. In the context of OOP a closure is more aptly described as an anonymous class containing a single anonymous function, because in OOP languages closures are objects with methods of their own.

Bloch gave a brief explanation, which I think is correct: closures “enclose their environment”. They hold references to whatever variables they use that are in scope at the time they are created. This is called “lexical scoping”. They also automatically return the value of the last expression evaluated within them. Bloch gives two reasons why closures are being considered for Java:
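To make “enclosing the environment” concrete, here’s my own illustrative sketch (not an example from the talk) using the closest thing Java has today: an anonymous class capturing a final local variable from its enclosing scope.

```java
// Sketch of lexical capture in pre-closures Java: the anonymous class
// "encloses" the variable n from makeAdder's scope, as a closure would.
public class CaptureDemo {
    interface IntFunction { int apply(int x); }

    static IntFunction makeAdder(final int n) {
        // n must be final, but it survives after makeAdder returns.
        return new IntFunction() {
            public int apply(int x) { return x + n; }
        };
    }

    public static void main(String[] args) {
        IntFunction add5 = makeAdder(5);
        System.out.println(add5.apply(2)); // prints 7
    }
}
```

A real closure would drop the interface declaration, the `new`, and the `final` restriction; the capture semantics are the part the two have in common.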

  • They facilitate fine-grained concurrency in fork-join frameworks. You can use anonymous classes with these frameworks, but they’re a pain right now. He said it’s important that programmers use fork-join frameworks because, “we have these multi-core machines, and in order to actually make use of these cores, we’re going to have to start using these parallel programming frameworks.”
  • Resource management – Always using try-finally blocks for resource management is painful and causes resource leaks, because programmers misapply the construct. Closures would help with this.
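Bloch’s resource-management point can be sketched with the “execute-around” idiom that closures (or, verbosely, anonymous classes) make possible: the library owns the try-finally, so callers can’t misapply it. The helper names below are hypothetical, not from the talk.

```java
// Sketch of the "execute-around" idiom: withReader owns the try/finally,
// so every caller gets the cleanup for free, whatever the block does.
import java.io.*;

public class WithResource {
    interface ReaderBlock { String run(BufferedReader r) throws IOException; }

    static String withReader(Reader src, ReaderBlock block) throws IOException {
        BufferedReader r = new BufferedReader(src);
        try {
            return block.run(r);
        } finally {
            r.close();   // guaranteed, even if the block throws
        }
    }

    public static void main(String[] args) throws IOException {
        String line = withReader(new StringReader("hello\nworld"),
            new ReaderBlock() {
                public String run(BufferedReader r) throws IOException {
                    return r.readLine();
                }
            });
        System.out.println(line); // prints hello
    }
}
```

With real closures the anonymous-class boilerplate would collapse to roughly one line, which is the whole argument.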

Even though he conceded these points, he clearly has disdain for closures. He used BGGA closures as his example. With the way they’ve been implemented, I’d have disdain for them, too. The hoops you have to jump through to make them work make me glad I’m not using this implementation.

He said closures encourage an “exotic form of programming”, which is the use of higher-order functions: functions that take functions as parameters, and/or return functions. He jabs, “If you give someone a new toy, then they’ll play with it.” Ah, so higher-order functions are a “toy”. Mm-hmm… I’m tempted to say something impolite, but I won’t.

Bloch talked about closure semantics next, and how they’re different from normal Java semantics. He expressed concern throughout the rest of his talk that this would confuse Java programmers, and thereby create bugs. He has a point. Still, I don’t mind the idea of mixing computing models in a single environment (VM). I think there are advantages to this approach.

Some history: Where modern computing comes from

Code blocks (closures) in Squeak/Smalltalk work differently than normal Smalltalk code as well.

I’d venture to say that the reason closures have been put in Ruby, and are now being considered for Java, is that Smalltalk was the first OO language to use them. Even though I’m sure it’s largely forgotten now, there are some significant artifacts of computing that have come into common use because of the Smalltalk-80 system. The GUI you’re using now, “hibernation” on your PC, bit-blitting, and the MVC design pattern are a few examples. Language and system designers have been mining Smalltalk features for years, and incorporating them into modern, though less powerful, technologies.

In retrospect, people within the Smalltalk community have pointed to some design flaws in it. They didn’t ruin it by any means, but it’s recognized that there’s room for improvement. In a conversation between Alan Kay and Bill Kerr, a blogger I’ve mentioned previously, which I listened in on, Kay expressed muted regret about the inclusion of closures in Smalltalk, saying they broke the semantic simplicity of the language. I can see where he’s coming from. Closures introduce an element of functional programming into OOP. He desired a system that was purely OO, mainly for consistency, so the language would be easier for beginners to learn. I haven’t gotten far enough along in my research to see what Kay would’ve suggested instead of closures. My understanding is he was heavily involved with the development of Smalltalk up until Smalltalk-76. Due to a schism that developed in the Smalltalk team, he gave up most of his involvement in it by the time Smalltalk-80, the version that was ultimately released to the public, was in development. “The professionals”, as Kay put it, took over, and so some of the vision was lost. They turned the Smalltalk language into something that was friendly to them, not beginners.

Closures are something that anyone new to a dynamic OO language has had to learn about. They’re pretty painless to use in Smalltalk, in my opinion. They’re easy to create. Contrast this with closures in Java. Bloch referenced a post written by blogger Mark Mahieu, where he got some of the examples. Here are a couple of them, with equivalents in Smalltalk for clarity.

Java example #1:                          

static <A1, A2, R> {A1 => {A2 => R}} curry({A1, A2 => R} fn) {
   return {A1 a1 => {A2 a2 => fn.invoke(a1, a2)}};
}

Smalltalk equivalent:                          

curry := [:fn | [:a1 | [:a2 | fn value: a1 value: a2]]]

Java example #2:                          

static <A1, A2, A3, R> {A2, A3 => R}
      partial({A1, A2, A3 => R} fn, A1 a1) {
   return {A2 a2, A3 a3 => fn.invoke(a1, a2, a3)};
}

Smalltalk equivalent:                          

partial := [:fn :a1 |
            [:a2 :a3 |
             fn value: a1 value: a2 value: a3]]

This is what I meant about “hoops”. The type information you have to supply for the Java closures detracts from the clarity of the code. It’s clunky. That’s for sure.

I’m not going to say anything about these structures (“curry” and “partial”) right now. If readers would like, I can write a post just on them, explaining how they work, and how they can be useful.

Bloch’s proposal

Bloch put forward a proposal: rather than adding closures to the language, at least in the near future, Java’s designers should instead focus on creating concise structures (i.e., add features) that implicitly create anonymous classes, to eliminate the verbosity (I guess this was his answer to the challenge of concurrency), handle resource allocation, and handle critical sections for threading. Basically it sounded like he favored the stylistic approaches that Microsoft has taken with .Net, with some differences.

He rejected out of hand the idea of adding library-defined control structures to the language, something made possible by closures, because it would encourage dialects. Squeak has a lot of library-defined control structures, and I think they work quite well. They don’t change the fundamental rules of the language; instead they expand its “vocabulary”. That is, they expand what you can do using the language’s existing structures. I understand this is hard to imagine in the context of Java, because most programmers are used to the idea that control structures are invariants in the language. Bloch would probably feel uncomfortable with this concept, but ask yourself which language is getting more cumbersome and complicated as time passes.
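For readers who find library-defined control structures hard to picture, here is a rough Java rendering of the idea behind Squeak’s `whileTrue:`, which in Smalltalk really is an ordinary message send on a block, not built-in syntax. The method and interface names here are mine.

```java
// Sketch of a "library-defined control structure": an ordinary static
// method that behaves like a loop, in the spirit of Squeak's whileTrue:.
public class LoopLib {
    interface Condition { boolean check(); }
    interface Body { void run(); }

    // The "control structure" is just a method; nothing in the language
    // had to change to add it.
    static void whileTrue(Condition c, Body b) {
        while (c.check()) { b.run(); }
    }

    public static void main(String[] args) {
        final int[] i = {0};  // one-element array to work around final capture
        whileTrue(
            new Condition() { public boolean check() { return i[0] < 3; } },
            new Body()      { public void run()      { i[0]++; } });
        System.out.println(i[0]); // prints 3
    }
}
```

Of course Java still needs a `while` underneath somewhere; the point is that with closures and a tiny kernel, everything above that kernel can live in the library rather than the language spec.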

I think there is a fundamental contradiction in goals between what Bloch says he prefers about Java and the growing problems of computing. He says that adding features to a language increases its complexity, and he makes a good case for that. Yet how is Java going to solve new problems in the future? His answer is to add more features to the language. Granted they’re focused, and not open-ended to allow further expansion, but they’re feature additions all the same. As is illustrated by Squeak, library-defined structures would actually help reduce the number of features in the language. Even though Squeak can do more things concisely than Java can, the feature set of the language is very small. As a point of comparison, Lisp’s language feature set is even smaller, yet it’s more capable of handling complex problems than Java. These languages accomplish this by taking what Java does with built-in features and doing it instead with message-passing (Squeak), closures (Squeak, Lisp, others), an extensive meta system (Squeak, Lisp, others), named functions (Lisp, Scheme), and macros (Lisp).

Where Java sits

I agree with Bloch that if the intention of Java was to be a language that is “practical” (i.e., designed from the standpoint of instrumental reasoning), easy enough to use that “you can just code”, and that doesn’t strain the ability of Java programmers to understand their code, then generics should’ve been scaled back, closures shouldn’t be added, and his suggestions for expansion should be considered. Unfortunately this also means that it’s going to get harder and harder to work with Java as time passes, because the demands placed on it will exceed its ability to deal with them. Maybe at that point people will pursue another “escape hatch”.

Struggling to transcend Java

I compliment the Java community on its efforts to build something better out of a limited platform. What some are trying out is the idea of a VM of VMs. That is, building a VM on top of the JVM. This creates a common computing environment that enables some interoperability between different computing models, while at the same time transcending some of the limitations of the JVM. From what I hear, it sounds like it’s been a struggle. Personally I think it would be wiser for the industry to start with something better–a “theoretical” language/VM–and then try to build on top of that. Our computing culture has a lot of resistance to this idea. Even the efforts of groups to build better language/VM architectures on top of the JVM seem to be regarded as curiosities by most practitioners. They get talked about, and get used by a relative few who recognize their power for real projects, but they’re ultimately dismissed by most developers as “impractical” for one reason or another.

It’s funny. Several years ago I talked with a friend about how slow Java was. His solution back then was to “throw more hardware at it”. Some months back I told him about Groovy. He read up on it and dismissed it as too slow. Why not just “throw more hardware at it”? I didn’t hear that recommendation from him. I didn’t press the case, because I didn’t have a dog in the fight; I’m not part of the Java community. I figured that either he had a good reason not to pursue it, or it was just his loss. My inclination is to believe the latter. I guess by and large there’s not enough pain yet to motivate people to reach for something better.

—Mark Miller,

IT: You’re doing it completely wrong

Hi guys. I’m still very busy with other stuff right now. I’ve yearned to get back to my research, and sharing my findings on here. I’ve had to be patient and persistent. It’s going to be slow-going for a while.

I found this article on reddit, called Rental Car IT, by Neal Ford. It encapsulates what I think is going on in IT, in the large: companies are unwittingly making their systems more complex, not less, and therefore less manageable as time passes, due to what he calls “accidental complexity”. The reason I’m covering this here is that the mindset it illustrates in our industry applies just as well to the issues of software development.

Ford talks about customers he’s done business with, and how he tries to talk them into simplifying their systems. They seem to agree with the idea in principle, but when it comes down to making a decision they consistently choose the “less risky” option of buying off-the-shelf software, which ends up making their systems more complex and unwieldy than they were before.

Ford also points out that customers frequently spend millions of dollars on pre-fabbed systems to solve simple IT problems.

He talks about something which has been a bone of contention for me. He describes an incident where he suggests to a senior IT guy that they talk to the users about what they want out of the system. The IT guy scoffs at the idea, saying the users should have no say in it, because, “We’ve tried talking to them–they don’t know what they want, so we have to define it for them.” Justin James talked about a related subject in a blog post called “Enterprise software is about data–not people”. Justin’s other readers and I had a good discussion there about this. Ford says this attitude has the effect of creating cumbersome, complex systems that hinder important things like customer service. I agree users tend not to know what they want, but that doesn’t mean they should be totally ignored. All this attitude does is hinder productivity. The users are a part of the business operation. If a system does not work in a fashion compatible with the way they think and work, it creates inefficiency, because users have to repeatedly take their minds off the real task they’re trying to accomplish so they can “deal with the machine”.

Some of the comments to Ford’s post were interesting, too. One said that it’s all about risk to the decision-makers, not to the company. If a project goes bad and they used a package from one of the major vendors, there are a lot of other people that can be blamed for the failure. Whereas if they try to develop the solution in-house and the project fails, then the IT executives themselves will be on the chopping block.

The sense I got from reading this is that enterprise IT development is increasingly being sequestered, moved out of the company, precisely because of past IT project failures. In other words, our ignorance about what we’re really doing is gradually discrediting IT development. People don’t want to take the risk anymore. I am reminded of a phrase which goes, “The biggest risk is not taking a risk at all.”

This attempt to remove risk and liability from internal IT is not necessarily serving companies well. What’s being missed is that a business’s computing systems should model the business. Companies such as the ones described in the article are falling back on an Industrial Age idea: solve the problem by acquiring standardized equipment (software systems) and making it interoperate with existing equipment (software systems). In other words, take something that was designed by a group of engineers who have no idea what your business is about, that was designed with a particular purpose in mind, and that allows only some customization (it isn’t always flexible or generic enough to allow the customization you want), and attempt to make it model your business. Another name for this is “the data processing mindset”. Users are mere “operators” of this equipment, just as their industrial forebears were. Their job is to learn how to operate it, and use it the way it was designed–without them in mind.

These businesses are trying to build their computing systems using Computer Engineering and Information Systems disciplines and infrastructure almost exclusively, avoiding what Computer Science and Software Engineering have to offer to the organization–sequestering it to the major software companies. They want to deal with pre-fabbed “boxes”, not code. The reason this doesn’t work well is they’re dealing with information, not widgets. They’re delivering a unique service, one where domain-specific information and semantics are key components. This is not a manufacturing process, and in the long run it cannot be dealt with effectively using this mindset.

—Mark Miller,

The death of the computer chain store

I heard on the radio a couple of days ago that CompUSA is going out of business. It’s been sold to a private equity company, and the plan is that its stores will be gradually liquidated. This is not quite the end of the computer retail chain as a fixture in our society: there are still the Apple stores, but they’re really the only ones left. The main reason these exist, though, is the iPod and iPhone. MacBooks have been selling well, but I think the sales and profits of the iPod and its successors make the computer sales pale by comparison.

This isn’t to say that you can’t buy a PC at a store after CompUSA goes away, or that you’ll only be able to buy a Mac. You can still get PCs at Circuit City and Best Buy. PCs today are becoming media devices–consumer electronics. CompUSA tried to compete with these retailers by introducing consumer electronics into their stores, but I guess they weren’t as good at it. They were good for buying some electronics though. I bought a decent Canon digital camera from CompUSA for a really good price last year.

If you still want a techie/power-user PC, I’d recommend a mom-and-pop store; I think they’re still around. Or get one mail-order from Dell or HP.

The PC business is definitely going through a transition. Somehow I think the future of the PC is your mobile phone…or maybe your TV. It will practically disappear. In some ways this makes sense, but until the I/O interface is figured out so that people can actually use it in a sophisticated way the prospect makes me cringe.

What was the PC revolution for, anyway? Was it just a way to digitize analog media?

This has been a question in the industry for a long time. In one of my previous posts I embedded a vintage episode of The Computer Chronicles on the Atari ST and Commodore Amiga. In it Gary Kildall made the comment that after buying 8-bit computers, “People have gone back to their VCRs”, and computer companies were needing something new to attract those customers back. Even then people saw computers as “media devices”, but it was more obvious they were something different from a phone or a TV. They were something you attached to a TV or monitor. What’s going on now is the maturation of the technology. Different media are being integrated together. It’ll be interactive, but centrally controlled. I suppose most people are looking forward to this future, but I’m not. Computing can be more than this if people have the vision to make it so.

—Mark Miller,

Work like an Egyptian…

…playing off the name of an old Bangles tune…

I ran across a case in point for this, thanks to James Robertson’s blog. Steve Jones is talking about the current state of the art in the organization of IT software development:

So why do I choose to have strict contracts, static languages, early validation of everything and extremely rigorous standards applied to code, build and test? The answer was nicely highlighted by Bill Roth on his post around JavaOne Day 1, there are SIX MILLION Java developers out there. This is a clearly staggering number and means a few things.

  1. There are lots of jobs out there in Java
  2. Lots of those people are going to be rubbish or at least below average

But I think this Java groups (sic) is a good cross section of IT, it ranges from the truly talented at the stratosphere of IT down to the muppets who can’t even read the APIs so write their own functions to do something like “isEmpty” on a string.

The quality of dynamic language people out there takes a similar profile (as a quick trawl of the groups will show) and here in lies the problem with IT as art. If you have Da Vinci, Monet or even someone half-way decent who trained at the Royal College of Art then the art is pretty damned fine. But would you put a paintbrush in the hand of someone who you have to tell which way round it goes and still expect the same results?

IT as art works for the talented, which means it works as long as the talented are involved. As soon as the average developer comes in it quickly degrades and turns into a hype cycle with no progress and a huge amount of bugs. The top talented people are extremely rare in application support, that is where the average people live and if you are lucky a couple of the “alright” ones.

This is why the engineering and “I don’t trust you” approach to IT works well when you consider the full lifecycle of a solution or service. I want enforced contracts because I expect people to do something dumb maybe not on my project but certainly when it goes into support. I want static languages with extra-enforcement because I want to catch the stupidity as quickly as possible.

He concludes with:

This is the reason I am against dynamic languages and all of the fan-boy parts of IT. IT is no-longer a hackers paradise populated only by people with a background in Computer Science, it is a discipline for the masses and as such the IT technologies and standards that we adopt should recognise that stopping the majority doing something stupid is the goal, because the smart guys can always cope.

IT as art can be beautiful, but IT as engineering will last.

I like that Jones stresses testing, and I realize that he is being pragmatic; he has to play the hand he’s been dealt. He is sadly mistaken, though, when he calls his approach “engineering” or a “discipline” in any modern sense of the word. Engineering today is done by skilled people, not “muppets”. The phrase “IT as art” is misleading, because it implies that software was made the way it was to satisfy a developer’s vanity. Good architecture is nothing to sneer at (it’s part of engineering). His use of the phrase in this context shows how useless he thinks good software architecture is, because most people don’t understand it. Well, that sure is a good rationale, isn’t it? How would we like it if that sort of consideration were given to all of our buildings?

And yes, we can “cope”, but it’s insulting to our intelligence. It’s challenging in the sense that trying to build a house (a real one, not a play one) out of children’s building blocks is challenging.

I guess what I’m saying is, I understand what Jones is dealing with. My problem with what he said is he seems to endorse it.

In a way this post is a sequel to a post I wrote a while ago, called “The future is not now”. Quoting Alan Kay from an interview he had with Stuart Feldman, published in ACM Queue in 2005 (this article keeps coming up for me):

If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.

Egyptologists have found that the ancient Egyptians were able to build their majestic pyramids through their mastery of bureaucratic organization. They had a few skilled engineers, foremen, and some artisans–skilled craftsmen. Most of the workforce was made up of laborers who could be trained to make mud bricks, to quarry and chisel stone into the shape of a rectangle, and who could be used to haul huge stones for miles. They could not be trusted to construct anything by themselves more sophisticated than a hut to live in, however. It took tens of thousands of laborers (though estimates vary wildly on the magnitude) 10-20 years to build the Great Pyramid at Giza. The process was inefficient and very expensive: the building materials were difficult to work with (it took careful planning just to maneuver them), and an enormous amount of material was consumed.

I don’t know about you, but I think this matches the description of Steve Jones’s organization, and many others like it. Not that it takes as many “laborers” or as long. What I mean is it’s bureaucratic and inefficient.

Alan Kay has pointed to what should be the ultimate goal of software development, in the story of the planning and building of the Empire State Building. I believe the reason he uses this example is that this structure is more than twice as tall as the ancient pyramids, uses a lot less building material, has very good structural integrity, used around 4,000 laborers, and took a little more than 1 year to build.

He sees that even our most advanced languages and software development skills today are not up to this level. When he rated where Smalltalk falls in this historical analogy, he said it’s at the level of the Gothic architecture of about 1,000 years ago. The problem is that ancient Egyptian pyramid architecture is about 4,500 years old, and that is still the state of the art in how software development is organized today.

I recently talked a bit with Alan Kay about his conception of programming in an e-mail conversation. I expected him to elaborate on his conception of OOP, but his response was “Well, I don’t think I have a real conception of programming”. His perspective is “most of programming is really architectural design”. Even though he invented OOP, he said he doesn’t think strictly in terms of objects, though this model is useful. What this clarified for me is that OOP, functional, imperative, types, etc. is not what’s important (though late-binding is, given our lack of engineering development). What’s important is architecture. Programming languages are just a means of using it.

As I mentioned in my last Reminiscing post, programming is a description of our models. It’s how we construct them. Buildings are built using appropriate architecture. We should try to do the same with our software.

What Kay was getting at is our languages should reflect a computing model that is well designed. The computing model should be about something coherent. The language should be easy to use by those who understand the computing model, and enable everything that the computing model is designed for. Frameworks created in the language should share the same characteristics. What this also implies is that since we have not figured out computing yet, as a whole, we still have to make choices. We can either choose an existing computing model if it fits the problem domain well, or create our own. That last part will be intimidating to a lot of people. I have to admit I have no experience in this yet, but I can tell this is the direction I’m heading in.

The difference between the pyramids, and the Empire State Building is architecture, with all of the complementary knowledge that goes with it, and building materials.

The problem is when it comes to training programmers, computer science has ignored this important aspect for a long time.

Discussing CS today is kind of irrelevant. As Jones indicates, most programmers these days have probably not taken computer science. The traditional CS curriculum is on the ropes. It abandoned the ideas of systems architecture a long time ago. Efforts are being made to revive some watered-down version of CS via specialization. So in a way it’s starting all over again.

So what do the books you can buy at the bookstore teach us? Languages and frameworks. We program inside of architecture, though as Kay said in his response to me, “most of what is wrong with programming today is that it is not at all about architectural design but just about tinkering a few effectors to add on to an already disastrous mess”. One could call it “architecture of a sort”…

In our situation ignorance is not bliss. It makes our jobs harder, it causes lots of frustration, it’s wasteful, and looked at objectively, it’s…well, “stupid is as stupid does”.

Did you ever wonder how Ruby on Rails achieves such elegant solutions, and how it saves developers a lot of time for a certain class of problems? How does Seaside achieve high levels of productivity by the almost miraculous ability to allow developers to build web applications without having to worry about session state? The answer is architectural design, both in the computing/language system that’s used, and the framework that’s implemented in it.
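To make the Seaside point a little more concrete, here is a minimal sketch in Python (not Seaside’s actual Smalltalk machinery–the flow, class, and method names here are my own invention) of the continuation idea it’s built on: a multi-step web “conversation” is written as one linear function, and the framework suspends and resumes it between requests, so the application developer never manually saves or restores session state.

```python
def checkout_flow():
    # Written as one linear conversation; each 'yield' is the point
    # where the framework suspends the flow to wait for the next request.
    name = yield "What is your name?"
    qty = yield "How many widgets?"
    yield f"Order for {name}: {int(qty)} widget(s)"

class Session:
    """Holds the suspended flow between simulated HTTP requests."""
    def __init__(self, flow):
        self.gen = flow()
        self.prompt = next(self.gen)  # run the flow to its first question

    def submit(self, answer):
        # Resume the suspended flow with the user's answer; it runs
        # until it asks the next question (or produces the result).
        self.prompt = self.gen.send(answer)
        return self.prompt

# Simulating three request/response round trips:
s = Session(checkout_flow)
print(s.prompt)         # "What is your name?"
s.submit("Ada")         # second request carries the answer
result = s.submit("3")  # third request
print(result)           # "Order for Ada: 3 widget(s)"
```

Seaside does this with real continuations in Smalltalk rather than generators, but the architectural point is the same: session state lives implicitly in the suspended computation, not in code the application developer has to write.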

The problem of mass ignorance

I had a conversation a while back with a long-time friend and former co-worker along the lines of what Jones was talking about. He told me that he thought it was a waste of time to learn advanced material like I’ve been doing, because in the long run it isn’t going to do anyone any good. You can build your “masterful creation” wherever you go, but when you leave–and you WILL leave–whoever they bring in next is not going to be as skilled as you, won’t understand how you did what you did, and they’re probably going to end up rewriting everything you created in some form that’s more understandable to themselves and most other programmers. I had to admit that in most circumstances he’s probably right, in terms of the labor dynamic. He does like to learn, but he’s pragmatic. Nevertheless this is a sad commentary on our times: Don’t try to be too smart. Don’t try to be educated. Your artifacts will be misunderstood and misused.

I’m not trying to be smarter than anyone else. I’m genuinely interested in this stuff. If that somehow makes me “smarter” than most programmers out there (which I wish it didn’t), so be it.

History shows that eventually this got worked out in Western civilization. Eventually the knowledge necessary to build efficient, elegant, strong structures was learned by more than a few isolated individuals. It got adopted and taught in schools, and architecture, design, and building techniques have improved as a result, improving life for more people. The same thing needs to happen for software.

I think what’s needed is a better CS curriculum, one that recognizes the importance of architectural design, thinking about computing systems as abstractions–models that are universal, and most importantly gets back to seeing CS as “a work in progress”. What distresses me among other things is hearing that some people believe CS is now an engineering discipline, as in “stick a fork in it, and call it done.” That is so far from the truth it’s not even funny.

Secondly, and this is equally important, the way business manages software development has got to change. Everything depends on this. Even if the CS curriculum were modified to what I describe, and if business management hasn’t changed, then what I’m talking about here is just academic. The current management mentality, which has existed for decades, puts the emphasis on code, not design. What they’re missing is design is where the real bang for the buck comes from.

—Mark Miller,

Reminiscing, Part 6

The pop culture

This is my last post in this series. In the other parts I’ve written positively about my experience growing up in the computing culture that our society has created. Now, bringing this full circle, I will be looking at it with a critical eye, because I’ve come to realize that some things have been missing. If you compare what I’ve been talking about with the work of Engelbart and Kay, from the 1960s and 70s, I think you’ll see that not much progress has been made since then. In fact, this business began as a retrogression from what had been developed. Some of what had been invented and advocated for has been misunderstood. Some of it was adopted for a time but pushed aside later. Some has remained in a watered-down form. I’ve spoken of analogies before about the amazing things that were invented more than 30 years ago. I’m going to make another one. They were like Hero’s steam engine. The culture was not ready to understand them, and so they didn’t go much of anywhere, despite their tremendous potential.

Technological progress has been made pretty much in a brute force, slog-it-out fashion. Eventually things get refined, and worked out, but mainly by “reinventing the wheel”.

Quoting from an interview with Alan Kay in ACM Queue published in 2005:

In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

It was a different culture in the ’60s and ’70s; the ARPA (Advanced Research Projects Agency) and PARC culture was basically a mathematical/scientific kind of culture and was interested in scaling, and of course, the Internet was an exercise in scaling. There are just two different worlds, and I don’t think it’s even that helpful for people from one world to complain about the other world—like people from a literary culture complaining about the majority of the world that doesn’t read for ideas. It’s futile.

I don’t spend time complaining about this stuff, because what happened in the last 20 years is quite normal, even though it was unfortunate. Once you have something that grows faster than education grows, you’re always going to get a pop culture.

The way he defined this pop culture is “accidents of history” and expedient gap-filling. He said, for example, that BASIC became popular because it was implemented on a GE time-sharing system, which GE decided to make accessible to many people. BASIC became very popular just by virtue of being widely available, not because it had any particular merit. Another, more modern example he gave was Perl, which has been used to fill short-term needs, but doesn’t scale. In a speech (video) he gave 10 years ago he called the web browser a terrible idea, because it locks everything down to a standardized protocol, rather than having data come with a translator, which would make the protocol more flexible. It got adopted anyway, because it was available and increased accessibility to information. Interestingly, contrary to the open source movement’s mantra of “standards”, I get the impression he doesn’t agree with this very much. Not to say he dislikes open source; I think he’s very much in favor of it. My sense is, with a few exceptions, he thinks we’re not ready for standards, because we don’t even have a good idea of what computing is good for yet. So things need to remain malleable.
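To give a rough illustration of the “data comes with a translator” idea, here is a toy Python sketch (my own invention, not anything Kay built): instead of every consumer hard-coding knowledge of a wire format, the message carries the code that knows how to read its payload, so only the convention for invoking a translator needs to be agreed upon.

```python
# Fixed-protocol approach: every consumer must already know the format,
# and the format can never change without breaking all of them.
def parse_v1(raw):
    name, value = raw.split(",")
    return {"name": name, "value": int(value)}

# Translator-carrying approach: the message bundles the code that knows
# how to read its payload, so consumers only agree on how to run a
# translator, not on every data format in advance.
class Message:
    def __init__(self, payload, translator):
        self.payload = payload
        self.translator = translator  # a callable shipped with the data

    def read(self):
        return self.translator(self.payload)

msg = Message("widgets,3",
              lambda raw: dict(zip(("name", "value"), raw.split(","))))
print(msg.read())  # {'name': 'widgets', 'value': '3'}
```

The sketch glosses over the hard parts (safely executing code that arrives with data, for one), but it shows why a translator-carrying design stays malleable where a fixed protocol ossifies.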

I think part of what he’s getting at with his comments on the “pop culture” is a focus on features like availability, licensing, and functionality and/or expressiveness. Features excite people because they solve immediate problems, but since by and large we don’t look at architectures critically, this leads to problems down the road.

We tend to not think in terms of structures that can expand and grow with time, as requirements change, though the web has been forcing some developers to think about scalability issues, in concert with Information Systems. Unfortunately all too often we’re using technologies that make these goals difficult to accomplish, because we’ve ignored ideas that would make it easier. Instead of looking at them we’ve tried to adapt architectures that were meant for other purposes to new purposes, because the old model is more familiar. Adding on to an old architecture is seen as cheaper than trying to invent something more appropriate. Up front costs are all that count.

I’m going to use a broader definition of “pop culture” here. I think it’s a combination of things:

  • As Kay implied, an emphasis on “features”. It gets people thinking in terms of parts of computing, not the whole experience.
  • An emphasis on using the machine to accomplish tasks through applications, rather than using it to run simulations on your ideas, though there have been some applications that have brought out this idea in domain-specific ways, like word processing and spreadsheets.
  • It’s about an image: “this technology will define you.” Of course, this is an integral part of advertising. It’s part of how companies get you interested in products.
  • A technology is imbued with qualities it really doesn’t have. Another name for this is “hype”.

The problem with the pop culture is it misdirects people from what’s really powerful about computing. I don’t see this as a deliberate attempt to hide these properties. I think most vendors believed the messages they were selling. They just didn’t see what’s truly powerful about the idea of computing. Customers related to these messages readily.

The pop culture has defined what computing is for years. It has infected the whole culture around it. According to Kay, there’s literally been a dumbing down of computer science as a result. This has continued on into the age of the internet. You hear some people complain about it with “Web 2.0” now. Personally I think LOLCode is a repetition of this cycle. Why do I point this out? Microsoft is porting it to .Net…Great. Just what we need. Another version of BASIC for people to glom onto.

The truth is it’s not about defining who you are. It’s not about this or that feature. That’s what I’m gradually waking up to myself. It’s about what computers and software really can be–an extension of the mind.

The “early years”

I found a bunch of vintage ads, and a few recent ones, which in some ways document, but also demonstrate, this pop culture as it has evolved. I’ve also tried to convey the multiplicity of computer platforms that used to exist. I couldn’t find ads for every brand of computer that was out there at the time. Out of these, only a couple give a glimpse of the power of computing. As I looked at them I was reminded that computer companies often hired celebrities to pitch their machines. A lot of the ads focused on how low-priced their computers were. The exceptions were usually IBM and Apple.

The pitch at that time was to sell people “a computer”, no matter what it was. You’ll notice that they had different target audiences: some for adults, some for businesspeople, and some for kids. There was no sense until the mid-1980s that software compatibility mattered.

I went kind of wild with the videos here. If a video doesn’t seem to play properly, refresh the page and try again. I’ve tried to put these in a semi-chronological order of when they came out, just so you can kind of see what was going on when.

Dick Cavett pitches the Apple II

Well, at least she’s using the machine to think thoughts she probably didn’t before. That’s a positive message.

William Shatner sells the Commodore Vic-20

Beam into the future…

The Tandy/Radio Shack TRS-80 Color Computer

The Tandy TRS-80 Model 4 computer

Tandy/Radio Shack (where the “TRS” moniker came from) had (I think) 4 “model” business computers. This was #4. The Model I looked more like the Color Computer in configuration, but all the other ones looked like the Model 4 shown here. I think they just had different configurations in the case, like number of disk drives, and the amount of memory they came with.


The IBM PC

The one that started it all, so to speak. Want to know who your Windows PC’s granddaddy was? This was it. Want to know how Microsoft made its big break? This was it.

These ads didn’t feature a celebrity, per se, but an actor playing Charlie Chaplin.

The Kaypro computer

As I recall, the Kaypro ran Digital Research’s CP/M operating system.

This reminds me of something that was very real at the time. A lot of places tried to lure you in by advertising the price of the computer–with nothing else to go with it…The monitor was extra, for example. So as the ad says, a lot of times you’d end up paying more than the list price for a complete system.

The Commodore 64

What’s funny is I actually wrote a BASIC program on the Apple II once that did just what this ad says. I wrote a computer comparison program. It would ask for specifications on each machine, weigh them based on importance (subjective on my part), and make a recommendation about which was the better one. I don’t remember if I was inspired by this ad or not.

Bill Cosby pitches the TI-99/4a with a $100 rebate

Kevin Costner appears for the Apple Lisa

This ad is from 1983.

“I’ll be home for breakfast.” Okay, is he coming back in later, or did he just come to his desk for a minute and call it a day?

The Apple Macintosh

This ad is from 1984. I remember seeing it a lot. I loved the way they dropped the IBM manuals on the table and you could literally see the IBM monitor bounce a little from the THUD. 😀


Of course, who could forget this ad. Ridley Scott’s now-famous Super Bowl ad for the Macintosh.

Even though it’s a historical piece, this pop culture push for the Macintosh was unfortunate, because it made the Mac into some sort of “liberating” social cause (in this case, against IBM). It’s a piece that’s more useful for historical commentary (like this, I guess…).

Ah yes, this is more like it:

This ad gave a glimpse of the true potential of computing to the audience of this era–create, simulate whatever you want. Unfortunately it’s a little hard to see the details.

Alan Alda sells the Atari XL 8-bit series

Alan Alda did a whole series of ads like this for Atari in 1984. I gotta say, this ad’s clever. I especially liked the ending. “What’s a typewriter?” the girl asks. Indeed!

The Coleco Adam

This is the only ad video I could find for it. It features 4 ads for the Colecovision game console and the Coleco Adam computer. Yep, the video game companies were getting in on the fun, too. Well, what am I saying? Atari got into this a long time before. I remember that the Adam came in two configurations. You could get it as a stand-alone system, or you could get it as an add-on to the Colecovision game console. The console had a front expansion slot you could plug the “add-on” computer into.

What I remember is the tape you see the kid loading into the machine was a special kind of cassette, called a “streaming tape”. Even though data access was sequential, supposedly it could access data randomly on the tape about as fast as a disk drive could access data on a floppy disk. Pretty fast.

According to a little research I did, the Adam could run Digital Research’s CP/M operating system as an option.

Microsoft Windows 1.0 with Steve Ballmer

This ad is from 1986. Steve Ballmer back when he had (some) hair. 🙂 This is the only software ad of any significance during this era. I don’t remember seeing this on TV. Maybe it was an early promo at Microsoft.

Atari Mega ST

I never saw TV ads for the Atari STs. I’ve only seen them online. In classic “Commodore 64” fashion, this one compares the Atari Mega ST to the Mac SE on features and price. Jack Tramiel’s influence, for sure.

The Commodore Amiga 500

A TON of celebrities for this one!

Pop culture: The next generation

As with the above set, there are more I wish I could show here, but I couldn’t find them. These are ads from the 1990s and beyond.

“Industrial Revelation”, by Apple

I like the speech. It’s inspiring, but…what does this have to do with computing?? It sounds more like a promo ad for a new management technique, and a little for globalization. This was a time when Apple started doing a lot of ads like this. It was getting ridiculous. I remember an ad they put on when the movie Independence Day came out, because the Mac Powerbook was featured in it: “You can save the world, if you have the right computer”. A total pop culture distraction.

IBM OS/2 Warp

Windows 95

The one that “started” it all–“Start me up!” What they didn’t tell us was “then reboot!”

“I Am” ad for Lotus Domino, from IBM

Wow. So Lotus Domino makes you important, huh? Neat ad, but I think people who actually used Domino wouldn’t have said it turned them into “superman”. What was nice about it was you could store pertinent information in a reasonably easy-to-access database that other people could get to, rather than using an e-mail system as your information “database”. Unfortunately the ad never communicated this. Instead it tried to pass Domino off as some “empowerment” tool. Domino was okay for what it did. I think it’s a concept that could be improved upon.

IBM Linux ad

So…Linux is artificially intelligent? I realize they’re talking about properties of open source software, but this is clearly hype-inducing.

The following isn’t an ad, but it’s pop culture. It’s touting features, and it claims that technology defines us. It doesn’t really talk about computing as a whole, but it claims to.

The Machine is Us/ing Us
by Michael Wesch, Assistant Professor of Cultural Anthropology,
Kansas State University

I think it’s less “the machine” than “Google is using us”. Tagging is a neat cataloging technique, but each link “teaches the machine a new idea”? Gimme a break!

So what are you saying?

In short, for one, I’m saying, “Don’t believe the hype!”

Tron is one of my favorite movies of all time. I listened to an interview with its creator, Steven Lisberger, that was done several years ago. He said now that the public had discovered the internet, there was a “stepping back” from it, to look at things from a broader view. He said people were saying, “Now that we have this great technology, what do we do with it?” He said he thought it was a good time for artists to revisit this subject (hinting at a Tron sequel, which so far hasn’t shown up). This is what I loved. He said of the internet, “It’s got to be something more than a glorified phone system.”

For those who have the time to ponder such things, like I’ve had lately, I think we need to be asking ourselves what we’re creating. Are we truly advancing the state of the art? Are we reinventing what’s already been invented, as Alan Kay says often? Are we regressing, even? Are we automating what used to exist in a different medium? Think about that one. Kay has pointed this out often, too.

What is a static web page, a word processing document, or a text file? It’s the automation of paper, an old medium. What’s new with it is hypertext, though as I mention below, Douglas Engelbart had already done something similar decades before. Form fields and buttons are very old. The IBM 3270 terminal had these features for decades (there was a button for the “submit” function on the terminal keyboard).

What is a web browser? It’s like an IBM 3270 terminal, a technology created in the 1970s, adapted to the GUI world, with the addition of componentized client-side plug-ins.

What is blogging? It’s a special case of a static web site. It’s a diary done in public, or at best an automation of an old-fashioned bulletin board (I’m talking about the cork kind), or newsletter.

What is an e-commerce site? It’s the automation of mail order, which has existed since the 19th century.

What about search? It’s a little new, because it’s distributed, but Engelbart already invented a searchable system that used hypertext in the late 1960s.

What is content management? It’s the automation of copy editing. Magazines and newspapers have been doing it for eons, even when you take internationalization into account. It takes it from an analog medium, and brings it to the internet where everything is updated dynamically. I imagine it adds to this the idea of content for certain groups of viewers (subscribers vs. non-subscribers, for example). A little something new.

What about collaborative computing over a network, with video? Nope. Douglas Engelbart invented that in the late 1960s, too.

What about online video? It’s the automation of the distribution of home movies. The fact that you can embed videos into other media is fairly new (a special case), but the idea that you could bring various media together into a new form existed in the Smalltalk system in the 1970s. It’s just been a matter of processing power that’s enabled digital video to be more widespread. Network streaming is new, though, to the best of my memory.

What about VOIP? It’s just another version of a phone system, or a PBX.

What about digital photography and video (like from a camcorder)? I don’t really have to explain this one, because you already know what I’ll say about it. The part that’s really new is you don’t have to wait to get your photos developed, and you can see pretty much how the video will look while you’re taking it. I’ll put these under the “something new” column, because it really facilitates experimentation, something that was difficult with film.

What about digital music? Same thing. The only things new about it are it’s easier to manipulate (if it’s not inhibited by DRM), and your player doesn’t have to have any moving parts. The main bonus with it is since it’s not analog its quality doesn’t degrade with time.

What about photo and video editing? Professionals have done this in analog media for years.

And there’s more.

Are you getting the impression that this is like that episode of South Park called “The Simpsons Already Did It”?

South Park: Blocking out the Sun

There’s not a whole lot new going on here. It just seems like it because all of the old stuff is being brought into a new medium, and/or the hardware has gotten powerful and cheap enough to have finally brought old inventions to a level that most people can access.

The pop culture’s mantra: accessibility

The impression I get is a lot of what people have bought into with computing is accessibility, but to what? When you talk to a lot of people about open source software they think it’s about this as well, along with community.

Here’s what’s been going on: Take what used to be exclusive to professionals and open it up to a lot more people through technology and economies of scale. It’s as if all people have cared about is porting the old thing to the new cheaper, more compact thing just so we can pat ourselves on the back for democratizing it and say more people can do the old thing. Accessibility and community are wonderful, but they’re not the whole enchilada! They are just slices.

What’s missing?

From what I’ve heard Alan Kay say, what’s not being invested in so much is creating systems where people can simulate their ideas, experiment with them, and see them play out; or look at a “thought model” from multiple perspectives. There are some isolated areas where this has been with us for a long time: word processing, spreadsheets, desktop publishing, and later photo and video editing. Some of these have been attempts to take something old and make it better, but the inspiration has always been some old medium.

One guess as to why Kay’s vision hasn’t become widespread is that there hasn’t been demand for it. Why? Because our educational system, and aspects of our society, have not encouraged us to think in terms of modeling our ideas, except for the arts, and perhaps the world of finance. Making this more broad-based will require innovation in thought as well as technology.

The following are my own impressions about what Kay is after. It seems to me what he wants to do is make modes of thought more accessible to the masses. This way we won’t have to depend on professional analysts to make our conclusions for us about some complex subject that affects us. We can create our own models, our own analyses, and make decisions based on them. This requires a different kind of education though, a different view of learning, research, and decision-making, because if we’re going to make our own models, we have to have enough of a basis in the facts so that our models have some grounding in reality. Otherwise we’re just deceiving ourselves.

What Kay has been after is a safe computing environment that works like we think. He sees computers as a medium, one that is interactive and dynamic, where you can see the effects of what you’re trying to do immediately; a medium where what it synthesizes is under our control.

He’s never liked applications, because they create stovepipes of data and functionality. In most cases they can’t be expanded by the user. He envisions a system where data can be adapted to multiple uses, and where needed functionality can be added as the user wants. Data maintains some control over how it is accessed, to free the user from having to interpret its format/structure, but other than that, the user is in control of how it’s used.

It seems like the lodestone he’s been after is the moment when people find a way to communicate that is only possible with a computer. He’s not talking about blogging, e-mail, the web, etc. As I said above, those are basically automation of old media. He’s thinking about using a computer as a truly new medium for communication.

He believes that all of these qualities of computing should be brought to programmers. In fact everyone should know how to program, in a way that’s understandable to us, because programming is a description of our models. A program is a simulation of a model. The environment it runs in should have the same adaptive qualities as described above.

What’s hindered progress in computing is how we think about it. We shouldn’t think of it as a “new old thing”. It’s truly a new thing. In Engelbart’s and Kay’s view it’s an extension of our minds. We should try to treat it as such. When we succeed at it, we will realize what is truly powerful about it.

A caveat that Kay and others have put forward is that we don’t understand computing yet. The work that Engelbart and Kay have done has been a series of steps toward realizing this, but we’re not at the endpoint. Not by a longshot. There’s lots of work to be done.

—Mark Miller,

The EU: First they wanted WMP out…

James Robertson is no fan of Microsoft, but yesterday he had to call out the machinations of the EU as “stupid”. I agree.

First, the EU demanded that Microsoft sell a version of Windows in the EU market without Windows Media Player. Microsoft complied with “Windows XP N” (the “N” standing for (n)o media player), a configuration that became an instant flop. Fortunately Microsoft was allowed to sell the regular configuration as well, which continued to sell decently. The reason the “N” edition flopped was that Microsoft was allowed to sell it for the same price as the regular configuration. What do you know? Customers noticed, and decided to get the one that was full-featured. Microsoft proponents also pointed out that Windows Media Player was one of the few (if not the only) free media players on the market that came with no ad-ware or spyware. Maybe people preferred that, too.

Now the EU is considering a proposal by its Globalization Initiative that all PCs in the EU be sold without an operating system. Maybe they mean only that the OS should not come bundled, and that the customer would instead tell the vendor which OS to install. That wouldn’t be too bad, though I imagine it would still confuse a lot of consumers who aren’t into studying which OS they need. I think a good compromise would be to let PC vendors offer a “default” option, so that if a customer just orders a PC, without specifying anything else, they get the vendor’s standard configuration, keeping things simple. If the customer specifies a preference, that’s what they get, without having to install it themselves.

Here’s hoping the EU doesn’t go off the deep end and take their computer industry back 25 years.

Corporate America wrings its hands about the future

On July 7 the Aspen Institute held a public panel discussion on American competitiveness with some corporate leaders and policymakers. It aired on C-SPAN on 7/21.

Update 5-24-2013: You can see the discussion here, from C-SPAN. The partial transcript I give below of this panel’s discussion is from the C-SPAN broadcast. This panel was part of the Aspen Ideas Festival.

The panelists were Stephen Friedman; Gene Sperling of the Center for American Progress; Thomas Wilson, CEO of Allstate Corporation; James Turley, Chairman & CEO of Ernst & Young; and Robert Steel, Treasury Undersecretary for Domestic Finance.

They talked about a few topics: the state of the U.S. economy today, ideas about competitiveness, concerns about having an educated work force for the future, and issues in the financial markets. I’m going to focus a little on the 2nd one, but mostly the 3rd one. I edited down the comments to focus on these two topics, and did a little editing of what they said, with some annotations in []’s. When people speak extemporaneously they sometimes say things that don’t read well; they stumble over their words, etc. So in a few cases I put what I think they meant to say more clearly. I put some commentary of my own in between what I’m quoting.

What was interesting to me is there was a lot of energy around what I’d call “Silicon Valley issues,” mainly concerns about having an adequately educated work force for the future. That’s why I’m talking about this.

Stephen Friedman functioned as a participant and moderator. He said that the current state of the economy is good, and the current state of our competitiveness is good. He said, however, there are some serious issues that our country faces in the future. The topic he set for discussion was on American competitiveness. Gene Sperling spoke first.

Gene Sperling: . . .When I’m framing and thinking about what the competitive challenge is I think we have to ask ourselves, “How are we competing for the new middle class jobs of the future?” Not simply, “How can our companies get the highest degree of productivity by getting the most efficient supply chain?” They need to do that, but how do we encourage them, not out of patriotism, but out of their economic logic, to want to locate more of the high value-added jobs here? I think that’s the discussion that we’re not having enough in the United States.

Let’s just kind of tick off the kind of things you’d be thinking about. Why are we not competing like crazy for the R&D jobs of the future? This is the high value-added job production, yet there’s a BusinessWeek survey that shows that in telecom, of the exciting research things being done by U.S. companies, 52 out of 57 are happening outside of here. National Institute of Health has actually had a real cut in research. We spend 300-400 billion more a year on tax cuts and prescription drugs. We don’t have any money to increase the research, the innovation, to help see if we can create more clusters of innovation that make it attractive for the innovation to be here.

The R&D jobs, the university competitiveness, that’s where we still have the lead. Why aren’t we doing everything to maximize that lead? Then, what are we doing to encourage scientists? Yes, we should have more liberalization of the highest skilled immigrants being able to stay here. The highest skilled, with Ph.D.’s, and masters in engineering. We should have more ability for them to stay here, but that shouldn’t be our long-term plan. Our long-term plan should be to go after the untapped pool of skills that are here. I think you had Shirley Ann Jackson here earlier. The 35% of women in the workforce that only make up 10-15% of the science and technology pool, the 24% of Americans who are African-American, disabled; or Hispanics who make up 5-7%. We need to be reaching down, encouraging our own pool. Cutting NIH funding is the most discouraging thing for new scientists possible. We talk about these things, that we want more scientists, and yet when you look at the path they have to go through, the pay scale, and then what they have to do just now to get cuts in research grants, we do virtually everything to discourage them. Lowering health care costs would make job creation more attractive here, and then there’s just things that just make no sense at all. Why we don’t invest in early 0-5 education, when we know the cognitive abilities that you need to be flexible, and to think, and adjust in the economy, come the earliest in life. Those are some of the issues that are not often enough on a competitiveness agenda. So I think we want to ask ourselves not just how do we maximize productivity. 
How do we do that while also focusing in non-protectionist, non-luddite ways on how we make it more attractive, often through public investment, to have more of the new middle class jobs of the future located here so that Americans still believe that an open global economy is a rising tide that does lift all boats and creates an inclusive middle class where everybody can work their way up, which is what our entire country was founded on?

Thomas Wilson: As an employer, of course, what we want are educated, motivated people, and we’re not getting them in America today, which is why we’re going other places. We all know that K-12 is a disaster in America, and we’re not really doing anything about it. I think part of the blame is on business. I think business needs to speak out. Business in general has sort of gotten to, “I’m only going to politicians so they’ll give me what I want for my specific interest, and I’m not going to demand other things of them.” I think businesses need to start demanding from our government a better education system. That’s what we pay them for and that’s what they should do. I’m struck by how the whole education thing seems to be stuck in the mud. Everybody’s got their own corner, whether it’s competitiveness or teachers’ salaries, or teachers’ jobs, and nobody is doing anything really that looks like it will change the effort, and if it was a problem in our company we’d put some real focus behind it, and we’d do something different.

Wilson said something to the effect of “I think we need a Marshall Plan for education.” He went on to propose a plan that in my opinion was naive: giving the federal government 75 billion dollars over the next 5 years to spend on education priorities. Great. If only Washington worked that way. You give the federal government money and it scatters to the four winds, going to all sorts of interests. There’s no guarantee all of it would go to education.

In that vein, Friedman asked a follow-up question of Wilson:

Friedman: The U.S. spends something like 3 times more than I think the next largest spender on K-12 education. Somehow it seems to be working in our colleges, which are considered to be the best in the world. In K-12 it isn’t working. We spend the money, and if we look at the standardized tests we trail Latvia and 25 other countries in terms of math performance. What are we doing wrong?

Wilson: Everyone’s got their own things. I would do a bunch of things different, but I think you need money to do it differently. We spend more money on R&D than everybody else, too. We spend more money on defense than everybody else.

Friedman: We like to think we get a payoff.

Wilson: We do, but we also have this standard that every dollar in education has to be well spent. As far as I know, not every bomb we drop on Iraq hits its mark. So I’m like, “Look, let’s look at it in total. Let’s do something different.” I recognize some of the money will be inefficient, but politically teachers would like this, construction unions would like this, parents would like it. Economically, in Chicago where I live, a bunch of the money spent in the school system is on fixing crappy old buildings. Well that money can’t go into teacher training. I just think we have to recognize it’s broken. Let’s put some money behind it and do something differently. Maybe it’s not 75 billion from the federal government, but somebody needs to do something, because right now everybody’s in their corners and nobody’s coming together.

As I indicated above, I wasn’t impressed with Wilson’s education plan, because there’s no real strategy. It just sounds like, “Okay people! Don’t just stand there. Do something!” Nevertheless, Wilson makes a good point about the way education spending works now. What it sounds to me like he’s saying here (and Gene Sperling gets into this later, too) is that we should not treat education like every other government program, but rather as a form of government-funded research. When you’re doing research (properly, anyway) you don’t fund it based on immediate results. You’re going to get a bunch of things that don’t pan out. What you count on is the few gems that pay off big, and make up for the flops, and then some. Interesting idea.

James Turley: I think the corporate community has got to step up as well, around K-12 education, and not leave it just to a Marshall Plan. I think the Marshall Plan idea is a good one, but I think that companies have a responsibility here as well. When I moved to New York in ’98 I found that we had been involved with the Adlai Stevenson High School in the Bronx, as one little micro example, and our people volunteer their time in mentoring students. Now, those of you from New York, Adlai Stevenson High School is a school that has a graduation rate of something in the mid to high teens. Our people mentor these students. They don’t pick the easiest ones, they pick the toughest ones. Over the last 15+ years we’ve mentored 600-700 students to a graduation rate of 95%. And so I think that when companies [applause from the audience]…thank you…When companies involve their employees, involve their efforts, I think it compounds the effectiveness of funds that come from governments, and I think it’s something that is past time for the corporate community to step up to.

Robert Steel: I think it’s clear that the linkage here is between this issue of income gap that’s unattractive, as Gene describes, for lots of reasons in our country, [and] this issue of education. Twenty-five years ago someone who graduated from high school vs. someone that went to college, the gap was about 20-25%. Today it’s 45-50%. So basically the return on education is rising, and as you were saying earlier this is based on productivity, and education drives productivity. People that don’t have the education can’t be more productive and this is exacerbated and highlighted by the use of technologies and tools. If you’re not trained to use the technology and the tools and to have the skills to think broadly about how to use and think about the tools of the future, and the skills, then you’re destined to have this gap continue to widen. So it really is an issue of education. It is the key secret weapon to attacking this income disparity. If you don’t attack education, you cannot solve it by trying to take from this group to give to that group. It’s basically got to be driven by the skills, and that’s got to be the focus. The economics and the science is really clear, that this is really the issue. We have to accept that as a society if we want to tackle this.

Wilson: We have claim adjusters go around, and fix cars and houses, and stuff like that, and we have computers that they use. One of our leaders was with one of them and the person was taking all the stuff down on a piece of paper. He [the leader] said, “How come you’re not using our new computer,” which was in her trunk. She said, “Because I don’t really know how to use it, and I’m embarrassed, and I’m really slow, and I’m faster with our customers this way.” An intelligent choice on her part. But from our standpoint as an employer, if I can have somebody in India do that over the phone who knows the computer, who does it faster, who gets an entire system, it’s cheaper. Why would I not do that? So finding a way to teach people to get contemporary when the world changes so fast–We all have the benefit of coming here [to the Aspen Institute event], and we learn, but I think we have to find a way to have lifelong learning, and we haven’t cracked that code.

Steel: This is all at the issue of outsourcing. Your customers, our clients, Goldman’s clients, all want the right talent in the right place at the right price. Sadly today the right place is probably the least important of those three. Technology is making that by far the least important. So I think the right talent is fundamental, and that’s where this whole issue of education comes in. And you’re right, it doesn’t stop. When you hire someone it really begins at that stage as well. It’s life-long.

Friedman: Lifelong education is crucial, but if you talk to the people who are enormous users of this talent, the technology industry, they will tell you over and over, “Where we need the help is in the early stages, in K-[12], because if you lose them there, you’ve lost them.” So it seems to me, though, that we still haven’t gotten at the root of the problem of why it isn’t working. We’ve talked about some fixes. Gene, you’ve studied this stuff. Let’s give you a chance for a “Nixon goes to China” moment. To what extent is this problem related to the elephant in the room that we’re not talking about: an encrusted bureaucracy, and unions that are resistant to new practices?

Sperling: I think trying to single out teachers unions is not productive. I think what the most innovative people are doing is trying to break those things down and work together. The smartest superintendents, I think, are trying to do that. Certainly there’s always resistance.

Just to make a point or two, I do agree that number one is education. I said this in our earlier panel, you have to be honest with people a little, too, which is to say that education has never been more important, but it’s never been less of a sure thing, too. A college education is your best bet. A graduate education is a better bet, but a college education is less protection against globalization. So it’s going to increase your chances. It’s never been more important, but you’re still going to have people facing that competition, and we have to be honest with them that it’s still the best thing to maximize.

Secondly, though, it’s not the only thing people base location on, and I think looking at the broader issues of competitiveness, whether it’s health care costs, how modern our infrastructure is, what the tax incentives are for our jobs, also should be part of it.

I’ll answer your question in this way. Tom has made one of the most important points, and I just want to put an exclamation mark on it. The dynamic that happens in Washington is the following. You have a proposal. You see this with, like, Headstart. People will come out, and they try to get–There’s a study that shows that it’s not working quite well. The people who are coming out with those studies, [the] solution they want [is] we shouldn’t spend a lot of money on early pre-K. So what happens is that everybody kind of rallies to defend that. So you get this problem where you get less innovation, less experimentation, and less admitting things are wrong. And what bothers me about that mentality is–and this was Tom’s point–we don’t say that about cancer. We don’t say, “Boy, you know, the first few experiments didn’t work.” If we took the approach that he is suggesting; if we said it is just inexcusable. The situation for young black males in our country is a national disgrace, it is an outrage, it is a disgrace. We should all be ashamed it exists every day. We should be ashamed at what a disadvantage a child has by the accident of their birth, by the time they’re 5 years old. Now let’s do something about it like it’s AIDS, or cancer. Let’s have that research. Now, if you then said we are committed as a country to spending what it needs to fix that, then we would encourage all the experimentation and innovation. People would be then willing to admit that they missed a few bombs, that it failed, because they knew it wasn’t going to be undercutting the whole enterprise. It was going to be suggesting that we try the slightly different [approach]. So, one, I think if we had that attitude that we are willing to spend the resources, and it’s okay to admit things aren’t working, because it doesn’t mean we’ll spend less–it might mean we spend more, but in a better, smarter way–I think we’d have a better attitude. 
And secondly, one thing we could at least do, Stephen, is fund the things we know that work. Teach For America is an amazing success. How can it be that they’re taking 1 out of 10 Ivy League school kids who want to go teach in underprivileged areas? One of the things that’s incredibly successful are these early intervention programs where colleges, businesses are reaching kids, 5th, 6th grade, and trying with them some of the expectations that a person like myself born into an upper-middle class family was just born with. A lot of these things are working. So one of the things we should ask ourselves is why are we not replicating and expanding the things that we at least know are working, that are affecting the motivation, and aspiration of young people?

I thought Sperling’s comments were good in some spots, especially when he said that a college education was not a ticket to a sure thing. It’s not insurance against facing foreign competition anymore. I’d say that’s very true. Your situation can be better if you have a college education, but you can’t rest on your laurels after college. You’re still going to be facing foreign competition.

The only way I can see education being treated like a crisis that needs extra attention is if there is a major push by the Department of Education, and perhaps the National Science Foundation and such, and industry, to put out a consistent message to the public that it’s a crisis that needs to be warded off, and what the consequences will be if it’s not. This is not unlike the campaigns we’ve seen around the whole global warming issue. People need to feel that their children’s lives will be negatively affected if “something isn’t done,” so to speak, to create that sense of urgency. The solution needs to be clearly presented as well. Otherwise, as with so many past education reform efforts, it can get watered down and warped into people’s pet projects, and the original goal gets lost. It can’t just be a top-down thing, which is what’s been tried in the past. There needs to be public awareness of the issue and what the solutions are, so parents can apply pressure from below as well. I think such a shift in thinking would have better results than what the global warming campaigners are trying to achieve, because the current situation is largely a result of careerism, bureaucracy, and politics, all things the government has some control over. Having said this, it’s not going to be easy. Bureaucrats know the game a lot better than politicians do. They’ve been at it for decades. Politicians come and go.

Then came questions from the audience. Most of them were pertinent to the topics I’m emphasizing here.

Kim Anderson: My name is Kim Anderson. I’m from Minneapolis. I make automobile parts for domestic and foreign vehicles, and do it profitably. I would like each individual’s brief opinion in terms of where we are competitively in America, and in the world.

Thomas Wilson: Kim, I think we’re very competitive, but I think we need to change our mindset in terms of where our public infrastructure is. Today there’s many more opportunities, many more places for companies and capital to go, whether that be Asia, or Mexico, or various other places. We’ve not yet come to grips with the fact in this country that we’re competing with these other countries. And so we need an education system. We need effective government to do that. I’m worried that it might be too late. There’s a line out of a–I think it’s an F. Scott Fitzgerald book [it’s actually from Hemingway’s The Sun Also Rises]–that says, “How did you go bankrupt?” And it was “Gradually, and then suddenly.” [laughter] I’m worried that we have not yet woken up to the fact that America is competing.

Robert Steel: 100% agree. Basically, the competition’s gotten better. They’ve taken our playbook. Everywhere around the world, people are using the market-based mechanism to allocate resources. They’ve discovered that this has the best chance of creating growth, higher employment, lower inflation. And basically we’ve got to keep getting better and not be complacent in order to stay the best.

Sperling made some interesting comments about reducing risk for everyday people, so they’d feel freer about taking some chances in an attempt to improve their lives.

Gene Sperling: . . . I think we also have to figure out ways that we can do more for workers without [having] the problems that you’ve seen where you’re discouraging job taking. So for example Universal Health Care, or things like wage insurance. These are things that help workers in between jobs, that they don’t discourage people from getting back and looking for new jobs. They’re not the type of European model that actually gives you perhaps an incentive to stay off the job. So I think we can be smart about getting the best of having more economic security, without, as Steve likes to say, killing the goose that laid the golden eggs. One thing I want to remember when it comes to individual people, Bob Scheuer [I may have misspelled the name] has a really interesting book on risk-taking for individuals. One of the reasons you can take risks in small business here is if you fail, if you try and fail, you don’t go to debtors prison. You go to bankruptcy. It’s not easy, but you can start again and try again. I think for a lot of people in the United States we want people to feel that way with their own lives, that they can take risks on new jobs and new companies, on starting their own businesses, and not feel, as I feel a lot of Americans feel, that the fall is going to be so deep and so devastating that they start taking less risks themselves. And that’s the way I think having a bit more economic security for more workers can actually make people less risk [averse], and more willing to accept an open and innovative economy, and to take chances on their own higher education, knowing that if they missed, there’s 2nd and 3rd chances in our economy.

What he’s talking about is generally called “insurance”, but it sounds like he’s expanding the concept into more areas, such as going back to school in an attempt to open up more job prospects. The thing is there’s got to be some pain for failure, otherwise you run into the problem of “moral hazard”, where people will take risks carelessly because they know somebody will always bail them out. I’m not saying we don’t sometimes run afoul of moral hazard. Anytime the government bails out a business, they’re risking encouraging more bad behavior from them in the future, but the government tends to do this for companies that are “too big to fail”. In other words, letting them collapse totally would make a noticeably negative dent in our economy.

Among other things, Stephen Friedman made this comment about where most enterprising college students are going:

Stephen Friedman: . . . We have too damn many of our best college kids going into financial services, businesses. They could be doing lots of other stuff. We have too many lawyers, too many investment bankers, too many kids whose deepest aspiration in life is to be a hedge fund guy, rather than being a scientist, or doing something else.

Funny. I remember my late grandfather used to complain about the same thing 20 years ago. Nothing has changed, apparently.

Next question:

Bill Coman: Bill Coman, serial entrepreneur, Silicon Valley. The important thing is that [the] last two years I’ve been the Chairman of the Silicon Valley Leadership Group, 210 companies, a trillion dollars in revenue, and we believe we’re losing our international competitiveness. It’s all about brains. It’s all about brains in our schools, and it’s all about being able to hire them. 53% of our engineers in Silicon Valley are foreign-born. Here’s an example. Biotech is the leading growth industry. $62 billion in revenue. The largest growth industry in high-tech in California today. The seedcorn is Ph.D.’s. Over 50% of the Ph.D.’s in the world are graduated from universities in the United States. This year before one could be hired, all H1-B visas were gone. We will hire them, and we’ll hire them in India and China, and we’ll grow the next level of research over there. So, a two-part proposal for Tom’s Marshall Plan. Very simple. First, STEM is what it’s called: Science Technology Engineering and Mathematics. Every graduate degree granted in the United States should get at least a 5-year visa, renewable; lift the caps on H1-B visas internationally for bringing those people in. This is where we’re going to build the next-generation companies. Second, K-12 education. The biggest problem is teachers. A disgrace. The average teacher makes between $40,000-50,000 a year. How are we going to attract and retain those people? I love Teach For America, but most of them will not stay teaching for 30-40 years. The problem is 1) we’ve got to give them better lifetime training, and 2) we have to give them a salary they can live on.

This is a very naive plan, but some part of it should be looked at. For half the cost of the Iraq war today we can double their salaries. Now let me tell you what I mean. There’s about 1 million teachers, K-12. That’s 40-50 billion dollars a year. We’re spending well over 100 billion dollars a year [I assume he means on the Iraq war]. Take a program, put [a] 10-year sunset on it to transfer to state and local. Divide into two parts for that 40-50 billion to double their salaries. One-third pays for their summer. A volunteer program in which they can go get continuing education for the rest of their lives. Two-thirds is a program that we have to figure out how to keep from being run by politicians in which we can give a bonus to teachers for [the] results of their students. That could double or triple the [salaries] for the best teachers. We’ve learned [that] we have one model here that really works. We have the best college education system in the world. It’s because we compete for students and we compete for teachers. And the third thing is, find some way to leverage–and this is really going to be the hardest part–through some sort of vouchers, the advantages we’re seeing in the KIPP, in the charter kinds of programs, so that students can be competitive. [Lots of applause from the audience]

Well at least Mr. Coman was being honest with himself when he said it’s a naive plan. It sounds like the outline of a policy idea that deserves a look in terms of setting up programs for teachers. As far as allocating the money for it the way he suggests, that’s pie in the sky, unless it can be specifically set aside in legislation. I wouldn’t believe it until I saw it passed with the President’s signature, though.

Friedman: Bill Coman founded one of the great technology successes, so he’s a man who speaks not only from a tremendous practical basis, but also, obviously, on a conceptual level. Is there a website you could point this group to, so people could look at your program, and study it, and get their minds around it?

Coman: I wish the whole program was together, but we have parts of it. It’s the Silicon Valley Leadership Group [] website. We have proposals in these areas. . . . It’s not all there, by the way.

Maybe there’s more at the site now.

Friedman: One of the points that Bill made, which I just would like to underscore: It’s not just getting those scientists, and those brains in to do the current work of the corporation. As Tom has mentioned, he’d love to hire Americans, but if he can’t get the ones he needs, he is going to get that work done overseas. That’s not good for the U.S., but in many ways the worst part of it is if we don’t have the people doing the work here, when the guy spins off from Allstate, and founds a technology company that revolutionizes some area of the business, he’s doing it in Mumbai or in China. Sergey Brin, one of the co-founders of Google, is a Russian immigrant. We really need those spin-off businesses. It’s this tremendous viral system that we have to keep going here. So I think there’s room for passion on this issue.

Sperling: I negotiated the last H1-B increase in the Clinton Administration, and it was an interesting experience. [laughter] Very interesting. If you want to get stuff done policy-wise, you have to understand the legitimate concerns of others. I think there’s a trust issue, and I think, Tom, this would be a great area to see Silicon Valley and others reach out. Here’s the trust issue: the typical worker needs to believe that they’re not just being passed over for no reason. A lot of workers look at this and think–I’ve lost count of the number of times somebody has said to me, “Here’s an article about engineers laid off in a city. And here’s an article about somebody complaining that there’s an engineering shortage in the same city.” The H1-B program is probably not a great program, because it includes both the semi-skilled and the very high-skilled. I think what would make more sense is to have unlimited immigration for the highest skilled, for the truly best and brightest, the Ph.D.’s, who will help you keep your jobs here. But I think the trade-off for doing that is really showing the typical person, who’s lost their job somewhere, that the reason someone’s going somewhere else is not just because it was going to take an extra 3 months to train them, or it’s just slightly more costly to do so. I think there were a lot of articles written about the so-called engineering shortage. I think this is an area where we could actually bring people together. I think if you could create greater trust that there was more happening to train every worker you could, you could get more acceptance of the immigration at the highest level. I think that’s a place to come together.

I just have to emphasize one of the other points you said. It’s just easy to sit there and say everybody should become a scientist, etc. I’ve got to tell you, my sister, [a] tenured scientist in immunology [at the] University of Chicago. I admire her like I admire people who work in refugee camps. I mean that! I mean, she could’ve gone to med school. She could’ve gotten into 3 of the top 5 law schools! It is a total sacrifice of income and comfort to do something that’s greatly in our national interest. And then we freeze NIH for 3 years, so they have to lay off their grad students, and the people in their labs. I mean, if we’re serious about all this stuff about encouraging science and teachers then we [have] got to actually make it easier for real live sons and daughters of ours to actually feel that those are attractive paths to go into. That’s a public policy issue we could do something about. [applause from the audience]

I thought this was Sperling’s best comment of the day. Over the last few years I’ve read articles about scientists finding that it’s not attractive to work in the U.S. and that research institutions here make it hard for them. I read through Philip Greenspun’s “career guide” on his site. He basically said that there’s a surplus of scientists in academia; that it’s actually hard for Ph.D. scientists to get teaching positions in universities, and even if they do, the pay isn’t that good. If they fail to get a teaching position, by the time they’ve realized this they’re “over the hill” in terms of their “age of eligibility” for working in industry. So some end up being “sent out to pasture”. Not very uplifting.

At this point in the discussion it got to be a bit difficult to tell who was saying what, so some of the attributions I give here might be incorrect. An anonymous questioner asked:

I was wondering if I could gather some data before I ask my question. Could I get the educational experience of your children, K-12, on the panel? Maybe where they went to school?

Thomas Wilson: K-12?

Questioner: K-12. Steve, where did your kids go to school? K-12.

Friedman: We started out our oldest in public school in New York. It was an idealistic and noble experiment, which lasted two weeks. [laughter] And then they all went to private schools in New York.

(The questioner called out each other person’s name one by one during the course of this interaction.)

Turley: We have one son, and he went to public schools throughout in St. Louis, in Minnesota, and in New York.

Questioner: In the cities?

Turley: No. In the Scarsdales. So it’s sort of semi-private. [laughter] And then there’s Northwestern.

Wilson: Three kids, all in private school.

Steel: Started school in England, and then regular local schools, and then to private schools, and universities.

Sperling: My 15-month-old is still…[laughter] I inherited a 12-year-old and I’d say he’s gone about 5 years to public school and 2 years to private school.

Questioner: The point is will we ever change K-12 when all the powerful people opt out for private schools, or quasi-private schools (Scarsdales), and until this happens are we really going to change K-12?

Sperling: I cannot tell you what a good question I think this is. And you’re right. I moved my son from public schools in L.A., and my wife making this whole move wanted to make sure that he was in private schools–the first time anybody in our entire family, brothers and sisters, anybody, had ever done it. I do it because I’ve got to do what’s best for my 12-year-old. But I’ve got to tell you, I feel like I am part of an apartheid system in Washington, D.C. I feel terrible about it. I’m not willing to make an experiment on my son, but I think public policy-wise you’re absolutely right. And to be honest, if I had sent him to a public school it wouldn’t have mattered anyways, because you have two private school systems: One, you pay high tuition, the other, you pay a huge premium on your house so you can say you’re going to public schools…

Questioner: That’s quasi-private.

Sperling: That’s quasi-private school. You’re absolutely right. And you know what? I don’t want to deny that I’m doing it, and I feel terrible about it, but I’m still going to do what’s best for my 12-year-old. The only way I can feel good is to make sure that I don’t let that affect what I do policy-wise. I spend all my life, you know, trying to focus to make sure kids in public schools are getting a chance again. That’s just the truth. [applause from the audience]

Friedman: Let me come back to that point. There’s no question that when you have active, aggressive parents who demand the services, and Scarsdales is certainly an epitome of that, you’re going to get different results. I don’t think that you have to be the head of a Fortune 100 company to understand that your kids’ futures depend on education. I think most of the people in this audience probably came from families where education was an important part of life, even if it was a Depression era and there was no money around. I think the real question is why are the average parents in America willing to live with a school–You know, when they do the studies they find that people tend to be satisfied with their own school system. They’re underdemanding consumers. Why are they satisfied with a system that has their kids’ comparative standings on the standardized tests so poor against the rest of the world? I don’t understand it. I don’t understand. There’s a market failure, but I don’t think that only rich people can drive very successful school systems.

Lorraine Harrison (not sure about spelling): My name’s Lorraine Harrison from Silicon Valley. I had two recent ‘ah-ha’ experiences I wanted to communicate. Recently my son graduated from Princeton. 50% of the students in the Economics Department were going to Wall Street. In fact, most of the engineering people were going to Wall Street. The second experience I had was in India. I went to a girls’ middle school. We asked them to raise their hands and say, “What do they want to do in their future?” This is all girls. 90% of them either wanted to be software engineers or doctors [my emphasis]. There’s no way in the U.S. that that would happen at a girls’ middle school. So what can we do to get the motivations of these kids focused on innovative jobs?

Friedman: On which jobs?

Harrison: On innovative jobs. Engineering, and science. Our kids are not thinking about that.

Sperling: Shirley Ann Jackson I think was here earlier. She talks about this. I mean, you mentioned girls. We have a workforce that is not going to grow as much as it did. So in the past we’ve produced more college educated, more science and tech people, just by kind of labor force growth. Now we’ve got to actually get more people in a kind of more stagnant pool to want to do those things. The people who are not seeking those jobs are girls, young women. 35% are in the workforce, but they make up only 10-15% of science and technology. A lot of them lose interest around 6th and 7th grade. They’re outperforming boys earlier.

I’m so against any kind of segregation, but I have to say the studies about letting girls segregate in the schools for 5th, 6th, 7th grade, the kind of stuff Sally Ride is doing–I think the question is you’ve got to reach people at early ages, high motivational. It’s not just about the standards. It’s about motivating people at a very early age. It is women, African-Americans, Hispanics and people with disabilities, who make up about 60% of the workforce and are going to be fulfilling just a very small fraction of the number of jobs. That’s the unrealized talent pool for science and tech innovation in our own country. I don’t think we could do enough to try to reach out. I think it’s got to be early. It’s got to be early. You can’t wait until 10 to 11…I don’t think you can wait until 11th Grade.

Harrison: The issue with H1-B visas is, I mean, I’m in Silicon Valley. Yes, half the engineering graduates or more come from India and China. That’s why we’ve got to hire them. I mean, if you look at the entrepreneurs in Silicon Valley, the number of entrepreneurs that have come from India is like 25% of the entrepreneurs in Silicon Valley. I mean, it’s incredible. I mean, there’s a motivation issue [that’s happening here]. We’re not instilling in–women, absolutely–but also in the general population.

Steel: I think you’re being a bit pessimistic. Let me make just one example. The competition for the graduates at the top colleges now–There’s a race to the top by two organizations I’m familiar with: Goldman Sachs and Teach For America. And a lot of [what] schools now teach is “beating Goldman Sachs,” which I have mixed emotions about. I think that the real issue is if we create the right inspirational activities, people will respond. The issue isn’t with the children. The issue is we haven’t created the vehicles by which people can be inspired. There’s a good example [here]. Let’s give it up for Wendy Kopp basically for creating something that’s been the vehicle by which this can be presented [I assume he’s talking about the Aspen Institute conference here].

Friedman: What percentage of great scientists when they’re asked why they got into it allude to the fact they were inspired by a great teacher early on?

Turley: One thing I’ll add to this is in India engineers are rock stars. They get a lot of publicity and a lot of celebrations. Here in this country, frankly, Wall Street, hedge funds, and private equity have been the rock stars. And I think we need to do a lot more as a society to celebrate entrepreneurship, and to celebrate the engineering and science disciplines.

The session ended here. I think Turley’s point was a good one to end on. Culturally, scientists and engineers are not celebrated in the U.S. Maybe they used to be, but as I’ve gone through my career and seen the general attitudes of people, I don’t get a sense that we’re looked upon with much admiration. It could be because people feel we’ve broken faith with them. I’m just throwing that one out there. I mean, you can even go back to when you were in school. Were people who studied science and engineering admired among their peers? Generally speaking, absolutely not. You could say that as kids they were just immature, that they didn’t see the value in it. Well, kids’ values are a reflection of their parents’ values, at least in part.

In fact, depending on the school you went to, there might’ve even been peer pressure against getting good grades to begin with. Being “cool” could mean not bothering to study, being a slacker, a kind of anti-education attitude.

The point that Turley was making is a significant part of the issue is cultural. Some cultures greatly value education, and parents instill that in their children. Some cultures have a tendency to steer their kids towards certain professions, which they believe will give them a decent or exceptional standard of living. Depending on what community you live in in the U.S., this varies.

When I was in high school there was actually peer pressure to get good grades. That was good for me. It tended to motivate me to try to do better. When I was in junior high a friend of mine went to a different junior high school where there was peer pressure to not get good grades. He was smart, and could get the grades, but to be socially accepted he did badly on tests on purpose, at least for a time. Eventually his mother found out what was going on and set him straight. This goes on all over the U.S. today, and there are plenty of parents who don’t seem to be that attentive to this issue, or if they do it’s countered by “self-esteem” measures that give students A’s for lackluster work.

One of the things that’s going to have to change if we want to compete globally is to somehow stamp out an educational culture that has low expectations of students. This has to come from the parents. That’s all there is to it. Schools can do their part, and government can have a role in that, but if the parents aren’t pushing in the same direction it’s not going to work.

—Mark Miller,

Straying into dangerous territory

This is the 2nd part to my previous post, Squeak is like an operating system. I also refer you to this post on Giles’s blog, which provides some context.

It’s easy to get lulled into the notion when programming in Smalltalk that you can use it like you would a programming language that is separated from the OS, that uses libraries, and is only linked in by a loader. Indeed Smalltalk is a programming language, but more in the sense that Bourne Shell is a programming language on Unix, and you’re using it logged in as “root”. That’s a bit of a bad analogy, because I don’t mean to imply that Smalltalk is a scripting language that uses terse symbols, but the power that you have over the system is the same (in this case, the Squeak image).

In Unix you manipulate processes and files. In Squeak you manipulate objects. Everything is an object.

I had an interesting “beginner root” experience with Squeak recently, and it highlights a difference between Ruby (file-based) and Smalltalk (image-based). Some might judge it in Ruby’s favor. I was doing some experimenting in a Workspace. To further the analogy, think of a Workspace as a command line shell instance in X-Windows. I had been using a variable called “duration” in my code. It was late. I was tired (really interested in what I was doing), and I got to a point where I didn’t need this variable. Cognizant of the fact that in certain workspaces variables hang around, I decided I wanted to get rid of the object it was holding, but I was a bit spacey and couldn’t remember the exact spelling of it. I had already changed the code that used it. So I evaluated:

duration := nil

and then

Duration := nil

just to be sure I got both spellings. At first nothing weird happened, but then I tried to save the image and quit. I had forgotten that 1) all variables in Smalltalk begin in lower-case, and 2) “Duration” (with the capital D) is the name of a class. Smalltalk is case-sensitive. Instead of saving, I got an exception dialog saying “Message Not Understood”. I found the source of the error, and for some mysterious reason a message being passed to the Duration class seemed to be causing the error. I inspected “Duration” in the code and Smalltalk said “Unknown object”. Alright. So it wasn’t the message. Still, I thought, “Huh. Wonder what that’s about?” I ended up just quitting without saving. I realized the next day the reason I got the error is I had deleted the Duration class from the image (in memory, not on disk)! I confirmed this by trying it again, bringing up a code browser and trying to search for the Duration class. Nothing. I thought, “Wow. That’s pretty dangerous,” just being able to make a spelling error and delete a critical class without warning.
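The case mix-up maps loosely onto Ruby, too, where the first letter of a name likewise decides whether it’s an ordinary variable or a constant, and classes are just constants bound to class objects. A toy sketch (the Duration class here is one I define myself for illustration, not a library class):

```ruby
# A rough Ruby analogy, not Smalltalk: capitalized names are constants,
# lowercase names are plain variables. Classes live in constants.

Duration = Class.new        # capitalized: a constant bound to a class
duration = Duration.new     # lowercase: an ordinary local variable

duration = nil              # harmless: just clears the local variable
Duration = nil              # Ruby at least warns "already initialized
                            # constant Duration" -- Squeak said nothing
```

The difference the anecdote turns on is the feedback: Ruby grumbles when you clobber a constant, while Squeak’s := assignment clobbered the class silently.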

What happened?

Say you’re using Unix and you’re at the command line, and there’s a critical executable that Unix uses to calculate the passage of time, called Duration (I know this would normally involve using a time structure in C and some API calls, but bear with me). What I did would be the equivalent of saying “mv Duration /dev/null”, and when the system tried to shut down, it couldn’t find it, and aborted the shutdown. I know this sounds weird, because I just said the image on disk was unaltered. Anything that is actively available as an object in Squeak is in memory, not on disk. The disk image is roughly equivalent to the file that’s created when you put your computer into hibernation. It’s just a byte-for-byte copy of what was in memory.

Unix treats directories/files, processes, and shell variables differently. It treats them each as separate “species”. One does not work like the other, and has no relationship with the other. I know it’s often said of Unix that “everything is a file”, but there are these distinctions.

Another fundamental difference between Unix and Squeak is that Unix is process-oriented. In Unix, to start up a new process, you run an executable with command line arguments. This can be done via GUI menus, but still this is what’s going on. In Squeak you run something by passing a message to a class that instantiates one or more new objects. This is typically done via icons and menus, but at heart this is what’s going on.

Anyway, back to my dilemma. I was looking to see if there was a way to easily fix this situation, since I considered this behavior to be a bug. I didn’t like that it did this without warning. It turned out to be more complicated than I thought. I realized that it would take a heuristic analyzer to determine that what I was doing was dangerous.
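A crude version of that heuristic is easy to sketch, though. In Ruby here, purely as an illustration (the method name is my own invention): flag source code that assigns nil to a capitalized, class-like name before evaluating it.

```ruby
# Illustrative sketch of the heuristic: before evaluating a line of code,
# flag "Name := nil" when Name is capitalized (i.e. looks like a class).
# This is a toy; a real tool would consult the compiler's bindings.

def dangerous_nil_assignment?(source)
  # "Duration := nil" matches; "duration := nil" does not.
  !!(source =~ /\A\s*[A-Z]\w*\s*:=\s*nil\s*\z/)
end

dangerous_nil_assignment?("Duration := nil")  # => true
dangerous_nil_assignment?("duration := nil")  # => false
```

Squeak’s real tools would need more than text matching, since the compiler, not a regex, knows what a name actually resolves to, but the shape of the check is the same: look at what the target is bound to before letting the assignment through.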

I did a little research, using Google to search the Squeak developers list for the keywords Squeak “idiot proof”, and interestingly the first entry that came up was from 1998, where Paul Fernhout had done something very similar. He asked, “Shouldn’t we prevent this?” Here’s what he did:

Dictionary become: nil

Dictionary is, again, a class. become: is a message handled by ProtoObject, the root class of every class in the system. In this case it makes every variable in the entire system that used to point to Dictionary point to nil. The method description also says, “and vice versa”, which I take to mean that every variable in the system that pointed to the argument in become: would point to Dictionary (to use the example). I’m guessing that nil is a special case, but I don’t know.
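To make the semantics concrete, here’s a toy model of the two-way swap in Ruby (ObjectMemory is a hypothetical class of my own; the real swap happens inside Squeak’s object memory, not in a hash of named slots):

```ruby
# A toy model, in Ruby rather than Smalltalk, of what two-way become: does:
# every slot that referenced a now references b, and vice versa.

class ObjectMemory
  def initialize
    @slots = {}   # name -> object, standing in for every reference in the image
  end

  def []=(name, obj)
    @slots[name] = obj
  end

  def [](name)
    @slots[name]
  end

  # Two-way swap: references to a become references to b, and vice versa
  def become(a, b)
    @slots.each_key do |k|
      v = @slots[k]
      @slots[k] = b if v.equal?(a)
      @slots[k] = a if v.equal?(b)
    end
  end
end

memory = ObjectMemory.new
memory[:x] = dict = Object.new   # stand-in for the Dictionary class
memory[:y] = nil
memory.become(dict, nil)
memory[:x]   # => nil -- :x can no longer reach the "class"
memory[:y]   # => dict, the "vice versa" direction
```

With nil on one side of the swap, every reference to Dictionary in the image goes dead at once, which is why Squeak fell over.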

Anyway, this caused Squeak to crash. The consensus that ensued was that this was okay. Squeak was designed to be a fully modifiable system. In fact Alan Kay has often said that he wishes that people would take Squeak and use it to create a better system than itself. “Obsolete the darn thing,” he says. The argument on the developers list was that if it were made crash-proof, anticipating every possible way it could crash, it would become less of what it was intended to be. Someone feared that “if you make it idiot proof, then only idiots will use it.” The conclusion seemed to be that Squeak programmers should just follow “the doctor’s rule”: The patient says, “It hurts when I do this.” The doctor says, “Well, don’t do that.”

I can see that this argument has validity, but in this particular instance I don’t like this explanation. It seems to me that this is an area where some error checking could be easily implemented. Isn’t it erroneous to put nil in for become:? Maybe the answer is no. I at least have an excuse, since I was using the := operator, which is a primitive, and it’s much harder to check for this sort of thing in that context.

I also realized that the reason code browsers can warn you about doing such things easily is that they have menu options for actions like deleting a class, and they throw up a warning if you try to do that. The browser can clearly make out what you’re trying to do, and check whether there are dependencies on what you’re deleting. Checking for “Duration := nil” would be akin to a virus scanner looking for suspicious behavior. I don’t believe even a browser would pick up on this piece of code.

This example drove home the point for me that Squeak is like an operating system, and I as a developer am logged in as root–I can do anything, which means I can royally screw the system up (the image) if I don’t know what I’m doing.

I do have a way out of a mess, though, with Squeak. Fortunately, since it’s image-based, if I had managed to save a screwed-up image, I could just substitute a working image. It’s a good idea to keep a backup “clean” image around for such occasions, one you can just make a copy of. I wouldn’t need to reinstall the OS (to use the analogy). It’s like what people who use virtualization software do to manage different operating systems. They save disk images of their sessions. If one becomes screwed up or infected with a virus, they can just substitute a good one and delete the bad one. Problem solved.

For the most part, I wouldn’t lose any source code either, because that’s all saved in changesets, which I could recover (though Workspaces are only saved if you explicitly save them).

Another thing I could’ve done is to open up another instance of Squeak on a good image, file out the Duration class (have Squeak serialize the class definition to a text file), and then file it in (deserialize it) in the corrupted image. I imagine that would’ve solved the problem in my case.

Having said all this, I don’t mean to say that if you’re a Squeak user, working at a more abstract level, say with eToys, you’re likely to experience a crash in the system. That level of use is pretty stable. Individual apps may crash, and people have complained about apps getting messed up in the newer versions, but the system will stay up.

I can understand why some like Ruby. You get access to Smalltalk-like language features, without having “root” access, and thereby the ability to mess things up in the runtime environment. The core runtime is written in C, which conveys a “None but the brave enter here” message. It’s compatible with the file-based paradigm that most developers are familiar with. From what I hear, with the way programming teams are staffed these days at large firms, a technology like Squeak in unskilled hands would probably lead to problems. It’s like the saying in Spider-Man: “With great power comes great responsibility.” Squeak gives the programmer a lot of power, and some power to mess things up. Interestingly, in the 1998 discussion I link to, there was a distinction made between “shooting yourself in the foot” in Smalltalk vs. doing the same in a language like C. The commenter said that a C programmer has to be much more careful than a Squeak programmer does, because a) pointers in C are dangerous, and b) you can’t avoid using them. Whereas in Squeak you don’t have to use facilities that are dangerous to get something useful done. In fact, if you do screw up, Squeak offers several ways you can recover.

In closing I’ll say that just as a seasoned Unix administrator would never say “rm -r *” from the root directory (though Unix would allow such a command to be executed, destructive as it is), people learning Squeak come to the realization that they have the full power of the system in their hands, and it behooves them to use it with discipline. I know that in a Unix shell you can set up an alias for rm as “rm -i”, which forces a confirmation prompt. I suppose such an option could be built into Squeak if there was the desire for that. It would depend a lot on context, and Squeak doesn’t really embody that concept at this point.

Squeak is like an operating system

Squeak desktop

The Squeak 3.9 desktop

In the foreground is a System Browser (what I call a “code browser”–the default developer IDE), and on the right one of the desktop “flaps” (like a “quick launch” bar–it retracts). Behind the System Browser is the Monticello Browser, a version control system for Squeak. The bar at the top contains some system menus and desktop controls. The wallpaper is called “Altitude”.

I was inspired to write this by this discussion on Giles’s blog.

Smalltalk is misunderstood. It’s often criticized as one big lock-in platform (like Microsoft Windows, I guess), or as I saw someone say in the comments section of Giles’s post, a “walled garden”, or more often a language that’s “in its own little world”. I think all of these interpretations come about because of this misunderstanding: people think Smalltalk is just a language running on a VM. Period. They don’t understand why it has its own development tools and its own way of interacting. That doesn’t seem like a typical language to them, but they can’t get past this notion. The truth is Smalltalk is unlike anything most programmers have ever seen before. It doesn’t seem to fit most of the conventions that programmers are trained to see in a language, or a system.

We’re trained that all an operating system represents to a user or a programmer is a manager of resources, and an interaction environment. Programs are stored inactive on disk, along with our data files. We learn that we initiate our programs by starting them from a command line, or clicking on an icon or menu item, through the interaction environment of the OS. When a program is run, it is linked into the system via a loader, assigned an area of memory, and given access to certain OS resources. Programs are only loaded into memory when they’re running, or when they’re put into a state of stasis for later activation.

We’re trained that a “virtual machine”, in the .Net or JVM sense, is a runtime that executes bytecodes and manages its own garbage collected memory environment, and is strictly there to run programs. It may manage its own threads, or not. It becomes active when it’s running programs, and becomes inactive when we shut those programs down. It’s a bit like a primitive OS without its own user interaction model. It remains hidden to most people. We’re trained that programming languages are compilers or interpreters that run on top of the OS. Each is a separate piece: the OS, the development tools, and the software we run. They are linked together only occasionally, and only so long as a tool or a piece of software is in use. Smalltalk runs against the grain of most of this conventional thinking.

Like its predecessor, Smalltalk-80, Squeak functions like an operating system (in Smalltalk-80’s case, it really was an OS, running on the Xerox Alto). The reason I say “like” an OS is that while it implements many of the basic features we expect from an OS, it doesn’t do everything. In its current implementation it does not become the OS of your computer, nor do any of the other Smalltalk implementations. It doesn’t have its own file system, nor does it natively handle devices. It needs an underlying OS. What distinguishes it from runtimes like .Net and the JVM is that it has its own GUI–its own user interaction environment (which can be turned off). It has its own development tools, which run inside this interaction environment. It multitasks its own processes. It allows the user to interactively monitor and control background threads through the Process Monitor, which also runs inside the environment. Like Lisp’s runtime environment, everything is late-bound, unlike in .Net and the JVM. Everything is more integrated: the kernel, the Smalltalk compiler, the UI, the development tools, and applications. All are implemented as objects running in one environment.

Creating a unified language/OS system was intentional. Alan Kay, the designer of the Smalltalk language, was in a discussion recently on operating systems on the Squeak developers list. He said that an operating system is just the part that’s not included in a programming language. In his opinion a language should be an integral part of what we call an OS. His rationale was that in order for a program to run, it needs the facilities of the OS. The OS governs I/O, and it governs memory management. What’s the point in keeping a language separate from these facilities if programs are always going to need them, and would be utterly useless without them? (Update 12-14-2010: Just a note of clarification about this paragraph. Though Alan Kay’s utterance of this idea was the first I’d heard of it, I’ve since seen quotes from Dan Ingalls (the implementor of Smalltalk) from material written a few decades ago which suggest that the idea should be attributed to him. To be honest I’m not really sure who came up with it first.)

Another reason that there’s misunderstanding about Smalltalk is people think that the Smalltalk language is trying to be the “one language to rule them all” in the Squeak/Smalltalk environment. This is not true.  If you take a look at other systems, like Unix/Linux, Windows, or Mac OS, most of the code base for them was written in C. In some OSes, some of the code is probably even written in assembly language. What are the C/C++ compilers, the Java and .Net runtimes, and Python and Ruby written in? They’re all written in C. Why? Is C trying to be the “one language to rule them all” on those systems? No, but why are they all written in the same language? Why aren’t they written in a variety of languages like Pascal, Lisp, or Scheme? I think any of them would be capable of handling the task of translating source code into machine code, or creating an interpreter. Sure, performance is part of the reason. The main one though is that all of the operating systems are fundamentally oriented around the way C does things. The libraries that come with the OS, or with the development systems, are written in it. Most of the kernel is probably written in it. Pascal, assuming it was not written in C, could probably handle dealing with the OS, but it would have a rough time, due to the different calling conventions that are used between it and C. Could Lisp, assuming its runtime wasn’t written in C either, interact with the OS with no help from C? I doubt it, not unless the OS libraries allowed some kind of universal message passing scheme that it could adapt to.

A major reason Smalltalk seems to be its “own little world” is that, like any OS, it needs a large variety of tools, applications, and supporting technologies to feel useful to most users, and Smalltalk doesn’t have that right now. It does have its own way of doing things, but nothing says it can’t be changed. As an example, before Monticello came along, versioning used to take place only in the changesets, or via filing out classes to text files and then saving them in a version control system.

Saying Smalltalk is isolating is equivalent to saying that BeOS is a “walled garden”, and I’m sure it would feel that way to most. Is any OS any less a “walled garden”? It may not seem like it, but try taking a Windows application and running it on a Mac or Linux without an emulator, or vice versa. You’ll see what I mean. It’s just a question of whether the walled garden is so big it doesn’t feel confining.

Having said this, Squeak does offer some opportunities for interoperating with the “outside world”. Squeak can open sockets, so it can communicate via network protocols with other servers and databases. There’s a web service library, so it can communicate in SOAP. Other commercial implementations have these features as well. Squeak also has FFI (Foreign Function Interface) and OSProcess libraries, which enable Squeak programmers to initiate processes that run on the underlying OS. As I mentioned a while ago, someone ported the wxWidgets library to it, called wxSqueak, so people can write GUI apps that run on the underlying OS. There’s also Squeak/.Net Bridge, which allows interaction with the .Net runtime in a late-bound way (though .Net makes you work harder to get what you want in this mode). So Squeak at least is not an island in principle. Its system support could be better. Other languages like Ruby have more developed OS interoperability libraries at this point.

The “C/C++ effect” you see on most other systems is the equivalent of what you see in Squeak. The Smalltalk language is the “C” of the system. The reason most software out there, even open source, can’t run on it is that at a fundamental level it’s all written in C or C++. The system languages are just that different. Smalltalk also represents a very different system model.

What Smalltalk, and especially Squeak, has achieved that no other system has achieved to date, with the exception of the Lisp machines of old, is a symbolic virtual machine environment that is more relevant to its own use than any other runtime environment out there. In .Net and the JVM you just use their runtimes to run programs. They’re rather like the old operating systems at the beginning of computing, which were glorified program loaders, though they have some sophisticated technical features the old OSes didn’t have. With Squeak, you’re not just running software on it, you’re using the VM as well, interacting with it. Squeak can be stripped down to the point that it is just a runtime for running programs. There are a few Squeak developers who have done that.

In Smalltalk, rather than machine code and C being software’s primary method of interacting with the machine, bytecode and the Smalltalk language are the basic technologies you work with. This doesn’t mean at all that more languages couldn’t be written to run in the Smalltalk environment. I have no doubt that Ruby or Python could be written in Smalltalk, and used inside the environment, with their syntax and libraries intact. The way in which they’d be run (initiated) would be different, but at heart they would be the same as they are now.

Both Apple and Microsoft are moving toward the Smalltalk model, with the Cocoa development environment on the Mac, and .Net/WPF on Windows. Both have you write software that can run in a bytecode-executing runtime, though the OS is still written in large part in compiled-to-binary code. If desktop OSes have a future, I think we’ll see them move more and more to this model. Even so, there remain a couple of major differences between these approaches: Smalltalk uses a late-binding model for everything it does, so it doesn’t have to be shut off every time you want to change what’s running inside it. Even programs that are executing in it don’t have to be stopped. .Net uses an early-binding model, though it emulates late binding pretty well in ASP.Net now. I can’t comment on Cocoa and Objective-C, since I haven’t used them. Another difference is that, like Linux, Squeak is completely open source. So everything is modifiable, right down to the kernel. As I’ll get into in my next post, this is a double-edged sword.

Update 7/20/07: I found this in the reddit comments to this blog post. Apparently there is a version of Squeak that IS an operating system, called SqueakNOS. The name stands for “Squeak No Operating System” (underneath it). It comes as an ISO image that you can burn to CD. The screenshot shows that it has a concept of devices, displaying tabs for the CD-ROM drive, serial port, and NIC.

I think I had heard the name of SqueakNOS before this, but I didn’t know what it was. The version of Squeak that most people have is the kind that rides on top of an existing OS, not this, but it’s nice to know that this is out there. Personally I like the kind I have, since it allows me to do what I like doing in Squeak, and if I want resources it doesn’t have available then I can go to the OS I’m running. I used to hear about a version of Linux that ran this way so people didn’t have to dual boot or use virtualization software. There’s no “dishonor” in it.

One person in the reddit comments said they wished it had a “good web browser that supported AJAX.” Me too. There is an HTML browser called “Scamper”, available from SqueakMap. It used to be included in the image, but in the latest version it looks like it’s been taken out. It seems that lately Squeak developers have preferred the idea of having Squeak control the default browser on your system (IE, Safari, Firefox, etc.) to using a browser internal to Squeak. The last time I saw Scamper it was a bare-bones HTML browser that just displayed text in a standard system font. It could be better.