
Archive for the ‘Analysis’ Category

This is going to strike people as a rant, but it’s not. It’s a recommendation to the fields of software engineering, and by extension, computer science, at large, on how to elevate the discussion in software engineering. This post was inspired by a question I was asked to answer on Quora about whether object-oriented programming or functional programming offers better opportunities for modularizing software design into simpler units.

So, I’ll start off by saying that what software engineering typically calls “OOP” is not what I consider OOP. The discussion that this question was about is likely comparing abstract data types, in concept, with functional programming (FP). What they’re also probably talking about is how well each programming model handles data structures, how the different models implement modular functionality, and which is preferable for dealing with said data structures.

Debating questions like this falls far short of what the discussion could be about. The fact is that if an OOP model is what is desired, one can create a better one in FP than what these so-called “OOP” proponents are likely talking about. It would take some work, but it could be done. It wouldn’t surprise me if libraries for FP languages that do this already exist.
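
To make that claim concrete, here is a minimal sketch, in Python, of the well-known functional encoding of objects: a closure captures private state and dispatches on message names. This is my own illustration, not something from the discussion I'm referring to, and the names (make_counter, "increment", "value") are made up for the example.

```python
# A minimal sketch of an "object" built in a functional style: a closure over
# private state that dispatches on messages. All names are hypothetical,
# for illustration only.

def make_counter(start=0):
    state = {"count": start}  # state captured by the closures below

    def increment():
        state["count"] += 1
        return state["count"]

    def value():
        return state["count"]

    # The "object" is just a function mapping message names to behavior.
    def dispatch(message, *args):
        handlers = {"increment": increment, "value": value}
        if message not in handlers:
            raise ValueError(f"unknown message: {message}")
        return handlers[message](*args)

    return dispatch

counter = make_counter()
counter("increment")
counter("increment")
print(counter("value"))  # prints 2
```

Inheritance, interfaces, and the rest can be layered on in a similar way, which is all the point requires: an object model is something you can build within FP, not something you have to start from.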

The point is that what’s actually hampering the discussion about which programming model is better is the set of ideas being debated, and more broadly, the goals behind them. A modern, developed engineering discipline would understand this. Yes, both programming models are capable of decomposing tasks, but the whole discussion about which does it “better” is off the mark. It has a weak goal in mind.

I remember recently answering a question on Quora regarding a dispute between two people over which of two singers, who had passed away in recent years, was “better.” I said that you can’t begin to discuss which one was better on a reasonable basis until you look at what genres they were in. In this case, I said each singer used a different style. They weren’t trying to communicate the same things. They weren’t using the same techniques. They were in different genres, so there’s little point in comparing them, unless you’re talking about what style of music you like better. By analogy, each programming model has strengths and weaknesses relative to what you’re trying to accomplish, but in order to use them to best effect, you have to consider whether each is even a good fit architecturally for the system you’re trying to build. There may be different FP languages that are a better fit than others, or maybe none of them fit well. Likewise, for the procedural languages that these proponents are probably calling “OOP.” Calling one “better” than another is missing the point. It depends on what you’re trying to do.

Comparisons of programming models in software engineering tend to wind down, at some point, to familiarity: which languages can function as a platform that a large pool of developers knows how to use, because no one wants to be left holding the bag if developers decide to up and leave. I think this argument misses a larger problem: how do you replace the domain knowledge those developers had? The most valuable part of a developer’s knowledge base is their understanding of the intent of the software design; for example, how the target business for the system operates, and how that intersects with the technical decisions that were made in the software they worked on. Usually, that knowledge is all in their heads. It’s hardly documented. It doesn’t matter that you can replace those developers with people who have the same programming skills. Sure, they can read the code, but they don’t understand why it was designed the way it was, and without that, it’s going to be difficult for them to do much that’s meaningful with it. That knowledge resides with other people who are still working at the same business, and/or the ones who left. Either way, the new people are going to have to be brought up to speed to be productive, which likely means the people who are still there will have to take time away from their critical duties to train them. Productivity suffers even if the new people are experts in the required programming skills.

The problem with this approach, from a modern engineering perspective, goes back to the saying that if all someone knows how to use is a hammer, every problem looks like a nail to them. The problem is with the discipline itself; that this mentality dominates the thinking in it. And I must say, computer science has not been helping with this, either. Rather than exploring how to make the process of building a better-fit programming model easier, they’ve been teaching a strict set of programming models for the purpose of employing students in industry. I could go on about other nagging problems that are quite evident in the industry that they are ignoring.

I could be oversimplifying this, but my understanding is modern engineering has much more of a form-follows-function orientation, and it focuses on technological properties. It does not take a pre-engineered product as a given. It looks at its parts. It takes all of the requirements of a project into consideration, and then applies analysis technique and this knowledge of technological properties to finding a solution. It focuses a lot of attention on architecture, trying to make efficient use of materials and labor, to control costs. This focus tends to make the scheduling and budgeting for the project more predictable, since they are basing estimates on known constraints. They also operate on a principle that simpler models (but not too simple) fail less often, and are easier to maintain than overcomplicated ones. They use analysis tools that help them model the constraints of the design, again, using technological properties as the basis, taking cognitive load off of the engineers.

Another thing is they don’t think about “what’s easier” for the engineers. They think about “what’s easier” for the people who will be using what’s ultimately created, including engineers who will be maintaining it, at least within cost constraints, and reliability requirements. The engineers tasked with creating the end product are supposed to be doing the hard part of trying to find the architecture and materials that fit the requirements for the people who will be using it, and paying for it.

Clarifying this analogy, what I’m talking about when I say “architecture” is not, “What data structure/container should we use?” It’s more akin to the question, “Should we use OOP or FP,” but it’s more than that. It involves thinking about what relationships between information and semantics best suit the domain for which the system is being developed, so that software engineers can better express the design (hopefully as close to a “spec” as possible), and what computing engine design to use in processing that programming model, so it runs most efficiently. When I talk about “constraints of materials,” I’m analogizing that to hardware and software runtimes, and what their speed and load capacities are, in terms of things like frequency of requests and memory. In short, what I’m saying is that some language and VM/runtime design might be necessary for this analogy to hold.

What this could accomplish is ultimately documenting the process that’s needed—still in a formal language—and using that documentation to run the process. So, rather than requiring software engineers to understand the business process, they can instead focus on the description and semantics of the engine that allows the description of the process to be run.
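
As a toy illustration of what “running the documentation” could look like, here is a sketch in Python. It is my own example with hypothetical step and action names, not anything from an actual system: the process is written as plain, declarative data by people who understand it, and a small, generic engine, the part the software engineers would own, executes it.

```python
# A toy sketch: the business process is declared as data, and a generic engine
# runs it. Step and action names (check_stock, charge_card, etc.) are
# hypothetical, made up for illustration.

# The "documentation" of the process: an ordered list of named steps.
ORDER_PROCESS = [
    {"step": "check_stock", "on_fail": "notify_customer"},
    {"step": "charge_card", "on_fail": "notify_customer"},
    {"step": "ship_order",  "on_fail": "refund_card"},
]

def run_process(process, actions, context):
    """Generic engine: walk the declared steps, invoking the named actions."""
    for entry in process:
        succeeded = actions[entry["step"]](context)
        if not succeeded:
            actions[entry["on_fail"]](context)
            return False
    return True

# Domain actions are supplied separately from the process description.
actions = {
    "check_stock":     lambda ctx: ctx.get("in_stock", False),
    "charge_card":     lambda ctx: ctx.get("card_valid", False),
    "ship_order":      lambda ctx: True,
    "notify_customer": lambda ctx: print("notifying customer"),
    "refund_card":     lambda ctx: print("refunding card"),
}

print(run_process(ORDER_PROCESS, actions, {"in_stock": True, "card_valid": True}))
```

The engine knows nothing about orders; swapping in a different process description, or a different set of domain actions, requires no change to it, which is the separation of roles described above.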

What’s needed is thinking like, “What kind of system model do we need,” and an industry that supports that kind of thinking. It needs to be much more technically competent to do this. First, I know this sounds like wishful thinking, since people in the field are always trying to address problems that are right in front of them, and there’s no time to think about this. Secondly, I’m sure it sounds like I’m saying that software engineers should be engaged in something that’s impossibly hard, since many are probably thinking that bugs are bad enough in existing software systems, and now I’m talking about getting software engineers involved in developing the very languages that will be used to describe the processes. That sounds like I’m asking for more trouble.

I’m talking about taking software engineers out of the task of describing customer processes, and putting them to work on the syntactic and semantic engines that enable people familiar with the processes to describe them. Perhaps I’m wrong, but I think this reorientation would make hiring programming staff based on technical skills easier, in principle, since not as much business knowledge would be necessary to make them productive.

Thirdly, “What about development communities?” Useful technology cannot just stand on its own. It needs an industry, a market around it. I agree, but I think, as I already said, it needs a more technically competent industry around it, one that can think in terms of the engineering processes I’ve described.

It seems to me one reason the industry doesn’t focus on this is that we’ve gotten so used to the idea that our languages and computing systems need to be complex, that they need to be like Swiss Army knives that can handle every conceivable need, because we seem to need those features in the systems that have already been implemented. The reality is they’re complex because a) they have been built using semantic systems that are not well suited to the problem they’re trying to solve, and b) they’re really designed to be “catch-all” systems that anticipate a wide variety of customer needs. The problem you’re trying to solve is but a subset of that. We’ve been coping with the “Swiss Army knife” designs of others for decades. What’s actually needed is a different knowledge set that eschews the features we don’t need for the projects we want to complete, one that focuses on just the elements that are needed, with a practice centered on design and its improvement.

Very few software engineers and computer scientists have had the experience of using a language that was tailored to the task they were working on. We’ve come to think that we need feature-rich languages and/or feature-rich libraries to finish projects. I say no. That is a habit, born of treating programming languages as fixed communication protocols, not just with computers, but between software engineers. What would be better is a semantic design scheme for semantic engines, with languages on top of them in which the project can be spec’d out and executed.

As things stand, what I’m talking about is impractical. It’s likely there are not enough software engineers around with the skills necessary to do what I’m describing to handle the demand for computational services. However, what’s been going on for ages in software engineering has mostly been a record of failure, with relatively few successes (the vast majority of software projects fail). Discussions like the one described in the question that inspired this post are not helping the problem. What’s needed is a different kind of discussion; I suggest using the topic scheme I’ve outlined here.

I’m saying that software engineering (SE) needs to take a look at what modern engineering disciplines do, and do its best to model that. CS needs to ask what scientific discipline it can most emulate, which is what’s going to be needed if SE is going to improve. Both disciplines are stagnating, and are being surpassed by information technology management as a viable solution scheme for solving computational problems. However, that leads into a different problem, which I talked about 9 years ago in “IT: You’re doing it completely wrong.”

Related posts:

Alan Kay: Rethinking CS education

The future is not now

— Mark Miller, https://tekkie.wordpress.com


Read Full Post »

I was taken with this interview on Reason.tv with Mr. O’Neill, a writer for Spiked, because he touched on so many topics that are pressing in the West. His critique of what’s motivating current anti-modern attitudes, and what they should remind us of, is so prescient that I thought it deserved a mention. He is a Brit, so his terminology will be rather confusing to American ears.

He called what’s termed “political correctness” “conservative.” I’ve heard this critique before, and it’s interesting, because it looks at group behavior from a principled standpoint, not just what’s used in common parlance. A lot of people won’t understand this, because what we call “conservative” now is in opposition to political correctness, and would be principally called something approaching “liberal” (as in “classical liberal”). I’ve talked about this with people from the UK before, and it goes back to that old saying that the United States and England are two countries separated by a common language. What we call “liberal” now, in common parlance, would be called “conservative” in their country. It’s the idea of maintaining the status quo, or even the status quo ante; of shutting out, even shutting down, any new ideas, especially anything controversial. It’s a behavior that goes along with “consolidating gains,” which is adverse to anything that would upset the applecart.

O’Neill’s most powerful argument is in regard to environmentalism. He doesn’t like it, calling it an “apology for poverty,” a justification for preventing the rest of the world from developing as the West did. He notes that it conveniently avoids the charge of racism, because it’s able to point to an amorphous threat, justified by “science,” that inoculates the campaign against such charges.

The plot thickens when O’Neill talks about himself, because he calls himself a “Marxist/libertarian.” He “unpacks” that, explaining that what he means is “the early Marx and Engels,” who he says talked about freeing people from poverty, and from state diktat. He summed it up quoting Trotsky: “We have to increase the power of man over Nature, and decrease the power of man over man.” He also used the term “progressive,” but Nick Gillespie explained that what O’Neill called “progressive” is often what we would call “libertarian” in America. I don’t know what to make of him, but I found myself agreeing a lot with what he said in this interview, at least. He and I see much the same things going on, and I think he accurately voices why I oppose what I see as anti-modern sentiment in the West.

Edit 1/11/2016: Here’s a talk O’Neill gave with Nick Cater of the Centre for Independent Studies, called “Age of Endarkenment,” where they contrast Enlightenment thought with the concerns of “the elect” today. What he points out is that the conflict between those who want ideas of progress to flourish and those who want to suppress societal progress has happened before. It happened pre-Enlightenment, and during the Enlightenment, and it will sound a bit familiar.

I’m going to quote a part of what he said, because I think it cuts to the chase of what this is really about. He echoes what I’ve learned as I’ve gotten older:

Now what we have is the ever-increasing encroachment of the state onto every aspect of our lives: How well we are, what our physical bodies are like, what we eat, what we drink, whether we smoke, where we can smoke, and even what we think, and what we can say. The Enlightenment was really, as Kant and others said, about encouraging people to take responsibility for their lives, and to grow up. Kant says all these “guardians” have made it seem extremely dangerous to be mature, and to be in control of your life. They’ve constantly told you that it’s extremely dangerous to run your own life. And he says you’ve got to ignore them, and you’ve got to dare to know. You’ve got to break free. That’s exactly what we’ve got to say now, because we have the return of these “guardians,” although they’re no longer kind of religious pointy-hatted people, but instead a kind of chattering class, and Greens, and nanny-staters, but they are the return of these “guardians” who are convincing us that it is extremely dangerous to live your life without expert guidance, without super-nannies telling you how to raise your children, without food experts telling you what to eat, without anti-smoking campaigners telling you what’s happening to your lungs. I think we need to follow Kant’s advice, and tell these guardians to go away, and to break free of that kind of state interference.

And one important point that [John Stuart] Mill makes in relation to all this is that even if people are a bit stupid, and make the wrong decisions when they’re running their life, he said even that is preferable to them being told what to do by the state or by experts. And the reason he says that’s preferable is because through doing that they use their moral muscles. They make a decision, they make a choice, and they learn from it. And in fact Mill says very explicitly that the only way you can become a properly responsible citizen, a morally responsible citizen, is by having freedom of choice, because it’s through that process, through the process of making a choice about your life that you can take responsibility for your life. He says if someone else is telling you how to live and how to think, and what to do, then you’re no better than an ape who’s following instructions. Spinoza makes the same point. He says you’re no better than a beast if you’re told what to think, and told what to say. And the only way you can become a man, or a woman these days as well–they have to be included, is if you are allowed to think for yourself to determine what your thought process should be, how you should live, and so on. So I think the irony of today, really Nick, is that we have these states who think they are making us more responsible by telling us not to do this, and not to do that, but in fact they’re robbing us of the ability to become responsible citizens. Because the only way you can become a responsible citizen is by being free, and by making a choice, and by using your moral muscles to decide what your life’s path should be.

Read Full Post »

Peter Foster, a columnist for the National Post in Canada, wrote what I think is a very insightful piece on a question that’s bedeviled me for many years, in “Why Climate Change is a Moral Crusade in Search of a Scientific Theory.” I have never seen a piece of such quality published on Breitbart.com. My compliments to them. It has its problems, but there is some excellent stuff to chew on, if you can ignore the gristle. The only ways I would have tried to improve on what he wrote are, one, to go deeper into the identification of the philosophies that are used to “justify the elephantine motivations.” As it is, Foster uses readily identifiable political labels, “liberal,” “left,” etc., as identifiers. This will please some, and piss off others. I really admired Allan Bloom’s efforts to get beyond these labels to understanding and identifying the philosophies at the root of his complaints, the mistakes he perceives were made in “translating” them into an American belief system, the philosophers who came up with them, and the consequences that have been realized through their adherents. There’s much to explore there.

Two, Foster “trips” over an analogy that doesn’t really apply to his argument (though he thinks it does): his reference to supposed motivations for thinking Earth was the center of the Universe. Aspects of the stubbornness of pre-Copernican thinking, to which he also refers, do apply.

He says a lot in this piece, and it’s difficult to fully grasp it without taking some time to think about what he’s saying. I will try to make your thinking a little easier.

He begins by trying to understand the reasons for, and the motivational consequences of, economic illiteracy in our society. He uses a notion from evolutionary psychology (perhaps from David Henderson, to whom he refers): that our brains have been shaped in part by hundreds of thousands of years (perhaps millions; who knows how far back it goes) of tribal society, and that our natural perceptions of human relations, regarding power and wealth, and what is owed as a consequence of social status, are influenced by our evolutionary past.

Here is a brief video from Reason TV on the field of evolutionary psychology, just to get some background.

Edit 2/16/2016: I’ve added 3 more paragraphs relating to another video I’m adding, since it relates more specifically to this topic.

The video, below, is intriguing, but I can’t help but wonder if the two researchers, Cosmides and Tooby, are reading current issues they’re hearing about in political discourse into “stone age thinking” unjustifiably, because how do we know what stone age thinking was? I have to admit, I have no background in anthropology or archaeology at this point. I might need that to give more weight to this inference. The topic they discuss here, about a common misunderstanding of market economics, relates back to something they discussed in the above video, about humans trying to detect and form coalitions, and how market mechanisms have the effect of appearing to interfere with coalition-building strategies. They say this leads to resentment against the market system.

What this would seem to suggest is that the idea that humans are drastically changing our planet’s climate system for the worse is a nice salve for that desire for coalition building, because it leads one to a much larger inference that market economics (the perceived enemy of coalition strategies) is a problem that transcends national boundaries. The warmists’ constant mantra that “we must act now to solve it” appears to demand a coalition, which, to those who feel disconnected by markets, feels very desirable.

One of the most frequent desires I’ve heard from those who believe that we are changing our climate for the worse is that they only want to deal with market participants “who care about me and my community.” What Cosmides and Tooby say is that this relates back to our innate desire to build coalitions, and is evidence that these people feel the market system is interfering, or not cooperating, in that process. What they say, as Foster says, is that this reflects a lack of understanding of market economics, and a total ignorance of the benefits its effects bring to humanity.

Foster says that our modern political and economic system, which frustrates tribalism, has been only a brief blink of an eye in our evolutionary experience, by comparison. So we still carry an evolutionary heritage that forms our perceptions of fairness and social survival, and it emerges naturally in our perceptions of our modern systems. He says this argument is controversial, but he justifies using it by saying that there is an apparent pattern to the consequences of economic illiteracy. He notices a certain consistency in the arguments that are used to morally challenge our modern systems, which does not seem to depend on how skilled or unskilled people are in their own niche areas of knowledge. It’s important to keep what he says about this in mind throughout the article, even though he goes on at length into other areas of research, because it ties in later in an important way, though he does not refer back to it.

He doesn’t say this, but I will. At this point, I think that the only counter to the natural tendencies we have (regardless of whether Foster describes them accurately or not) is an education in the full panoply of the outlooks that formed our modern society, and an understanding of how they have advanced since their formation. In our recent economy, there’s a tendency to think about narrowing the scope of education towards specialties, since each field is so complex, and takes a long time to master, but that will not do. If people don’t get the opportunity to explore, or at least experience, these powerful ways of understanding the world, then the natural tendency towards a “low-pass filter” will dominate what they see, and our society will have difficulty advancing, and may regress. A key phrase Foster uses is, “Believing is seeing.” We think we see with our eyes, but we actually see with our beliefs (or, in scientific terms, our mental models). So it is crucial to be conscious of our beliefs, and be capable of examining and questioning them. Philosophy plays a part in “exercising” and developing this ability, but I think this really gets into the motivation to understand the scientific outlook, because this is truly what it’s about.

A second significant area Foster explores is a notion of moral psychology from Jonathan Haidt, who talks about “subconscious elephants,” which are “driving us,” unseen. We justify our actions using moral language, giving the illusion that we understand our motivations, but we really don’t. Our pronouncements are more like PR statements, convincing others that our motivations are good, and should be allowed to move forward, unrestricted. However, without examining and understanding our motivations, and their consequences, we can’t really know whether they are good or not. Understanding this, we should be cautious about giving anyone too much power–power to use money, and power to use force–especially when they appeal to our moral sensibility to give it to them.

Central to Foster’s argument is that “climate change” is a moral crusade, a moral argument–not a scientific one, that uses the authority that our society gives to science to push aside skepticism and caution regarding the actions that are taken in its name, and the technical premises that motivate them. Foster excuses the people who promote the hypothesis of anthropogenic global warming (AGW) as fact, saying they are not frauds who are consciously deceiving the public. They are riding pernicious, very large, “elephants,” and they are not conscious of what is driving them. They are convinced of their own moral rightness, and they are honest, at least, in that belief. That should not, however, excuse their demands for more and more power.

I do not mean what I say here to be a summary of Foster’s article. I encourage you to read it. I only mean to make the complexity of what he said a bit more understandable.

Related posts: Foster’s argument has made me re-examine a bit what I said in the section on Carl Sagan in “The dangerous brew of politics, religion, technology, and the good name of science.”

I’m taking a flying leap with this, but I have a suspicion my post called “What a waste it is to lose one’s mind,” exploring what Ayn Rand said in her novel, “Atlas Shrugged,” perhaps gets into describing Haidt’s “motivational elephants.”

Read Full Post »

Luigi Zingales wrote an excellent article for City Journal, called “Who Killed Horatio Alger?” He lays out very clearly how a meritocratic society works, and the diversions that can take us off that track. He says this is what’s happened with the bubbles and bailouts, and that they risk ending our belief in a meritocracy. He identifies a crucial agenda for reasserting it, which he calls, I think aptly, “pro-market.” It is neither anti-business nor pro-business. Rather, it asserts that the rules of the market should govern the economy: When you succeed, the reward you receive from the market should not be pillaged. You get to keep the lion’s share of it. When you fail, it’s all on you. Do not expect the government to back you up. He points out that a “pro-market” agenda does not have a lot of political allies right now. Pro-business interests, particularly big business, and anti-business interests are on the same page: Neither of them likes the market approach.

Zingales does a great job of describing what’s different about the political realm vs. the market, and how culturally similar business monopolies are to political institutions.

I think he puts forward a legitimate warning that if we don’t stop doing the bailouts when future market crashes happen (and they will happen), the free market principles this country has used since its founding will go away, and may never come back, because people will see that using free market assumptions in their own lives doesn’t pay off when society doesn’t believe in them.

The alternative in such a scenario is to see success as purely a matter of luck, not merit, and that there is nothing fair about it. This provides a rationale to redistribute income to the less lucky, and for government to pick winners and losers in the economy, favoring some businesses or industries over others. The position of industries in the market would be seen as purely arbitrary, a matter of luck. He points out that redistribution leads to economic stagnation, because there is no incentive to create new wealth in that scenario.

Here are some salient quotes from the article:

[T]his rosy picture obscures a hard fact: meritocracy is a difficult principle to sustain in a democracy. Any system that allocates rewards on the basis of merit inevitably gives higher compensation to the few, leaving the majority potentially envious. In a democracy, the majority generally rules. Why should that majority agree to grant a minority disproportionate power and rewards?

America … encouraged meritocracy from its inception. In the eighteenth century, the social order throughout the world was based on birthrights: nobles ruled Europe and Japan, the caste system prevailed in India, and even in England, where merchants were gaining economic and political strength, the aristocracy wielded most of the political power. The American Revolution was a revolt against aristocracy and the immobility of European society, but unlike the French Revolution, which emphasized the principle of equality, it championed the freedom to pursue happiness. In other words, America was founded on equality of opportunities, not of outcomes.

When the difference between a comfortable retirement and an indigent one is determined not by hard work or by a frugal lifestyle but by lucky timing in buying or selling your house, people start questioning the fairness of the market system. The fact that the real-estate bubble was the second large bubble to pop in less than a decade further undermined trust in markets as a good indicator of where to invest resources.

Though he doesn’t get into this, in my mind the third quote relates to monetary policy. The Fed kept interest rates low in the 1990s for an extended period of time. Fed Chairman Alan Greenspan said that there was no fear of inflation, due to increases in productivity from the computer/internet revolutions. The Fed’s easy money policy likely was a major contributor to the dot-com and telecom bubbles that burst in 2000. The Fed returned to an easy money policy in 2001, which fueled the housing bubble, and led to its subsequent collapse. It continues to implement an easy money policy to this day. That needs to change if Zingales’s pro-market agenda is going to work. All this policy has done is lead us to ruin.

Read Full Post »

I saw this from Star Parker today, and I think she makes an excellent point about the role of government:

There is no way around the fact that freedom and prosperity only exist when government protects property, and this includes our money.

I would add to this, “protects the lives of individuals, and respects contracts,” as well as property, but she is on to something. Since the financial crash of 2008, our government has taken on the role of protector of our economy, even if the majority of Americans have not wanted it to do this. It has propped up companies, at best providing a cushion to tide them through their restructuring, and at worst giving them a false sense of value. It has been focused for too long on the conceit that it can produce favorable outcomes, and when I say this, I mean it far beyond this particular discussion. It goes above and beyond protecting property. Protecting property means protecting it from harm inflicted by fraud, thieves, saboteurs, and vandals–external threats, not harm that is self-inflicted. What the government has been presuming is that it can save us from ourselves.

Even though the Federal Reserve has wanted to deny that it is devaluing the Dollar, it has been doing so by creating money and then exchanging it for Treasuries, to finance our dramatically expanding public debt, and/or toxic mortgage securities. We haven’t been feeling the effects of this so much, because banks have not been lending that much into the private economy, but someday this will change. This policy has had effects abroad, which we have been feeling indirectly. Nevertheless, this is not protecting our property–specifically, our money!

What I’ve been increasingly realizing is that if we are to get back to prosperity, our government should stop trying to save us from ourselves, stop trying to manage the economy, and get back to its raison d’etre, and the Fed should adopt a new policy of protecting the value of the Dollar. If both were to do this, it would likely cause some scary and unpleasant results in the short term, but if people can see where the real problems are in the economy, then they are more likely to be resolved.

Read Full Post »

A few months ago Apple made some controversial changes to its App Store developer terms, which were seen as overly restrictive. The scuttlebutt was that the changes were aimed at banning Adobe Flash from the iPhone, since Apple had already said that Flash was not going to be included, nor allowed on the iPad. The developer terms said that all apps. which could be downloaded through the App Store had to be originally written only in C, C++, Objective-C, or Javascript compatible with Apple’s WebKit engine. No cross-compiled code, private libraries, translation or compatibility layers were allowed. Apple claimed they made these changes to increase the stability and security of the iOS environment for users. I considered this a somewhat dubious claim given that they were allowing C and C++. We all know the myriad security issues that Microsoft has had to deal with as a result of using C and C++ in Windows, and in the applications that run on it. A side-effect I noticed was that the terms also banned Squeak apps., since Squeak’s VM source code is written in Smalltalk and is translated to C for cross-compilation. In addition any apps. written in it are originally written in Smalltalk (typically), and are executed by the Squeak VM, which would be considered a translation layer. The reason this was relevant was that someone had ported Squeak to the iPhone a couple years ago, and had developed several apps. in it.

The Weekly Squeak revealed today that Apple has made changes to its App Store terms, and they just so happen to allow Squeak apps. As Daring Fireball has reported, Apple has removed all programming language restrictions. They have even removed the ban on “intermediary translation or compatibility layers”. The one caveat is they do not allow App Store apps. to download code. So if you’re using an interpreter in your app., the interpreter and all of the code that will execute on it must be included in the package. (Update 9-12-10: Justin James pointed out that the one exception to this rule is Javascript code which is downloaded and run by the WebKit engine). This still restricts Squeak some, because it would disallow users or the app. from using something like SqueakMap or SqueakSource as a source for downloading code into a Squeak image, but it allows the typical stand-alone application case to work.

John Gruber, the author of Daring Fireball, speculates that these new rules could allow developers to use Adobe’s Flash cross-compiler, which Adobe had scuttled when Apple imposed the previous restrictions. John said, “If you can produce a binary that complies with the guidelines, how you produced it doesn’t matter.” Sounds right to me.

However, looking over the other terms that John excerpts from the license agreement gives me the impression that Apple still hasn’t figured everything out yet about what it will allow, and what it won’t allow, in the future. It has this capricious attitude of, “Just be cool, bro.” So things could still change. That’s the thing that would be disappointing to me about this if I were an iPhone developer right now. I got the impression when the last license terms came out that Apple hadn’t really thought through what they were doing. While I get a better impression about the recent changes, I still have a sense that they haven’t thought everything through. To me the question is why? I guess it’s like what Tom R. Halfhill once told me, that Steve Jobs never understood developers, even back in the days when the company was young. Steve Wozniak was the resident “master developer” in those days, and he had Jobs’s ear. Once Wozniak left Apple in the 1980s, that influence was gone.

Read Full Post »

This is kind of a follow-up post to one I wrote in March, 2009, talking about what I anticipated in our economic future. This takes a look back at the past, and brings us up to the present, showing us a bit of how we got here.

I saw an interview with economist and Wall Street Journal columnist Judy Shelton a couple months ago on C-SPAN. It was meant to be a nice sit-down chat, but she gives an interesting history of what’s happened to world monetary systems since the 1930s. She said that the global economy went from instability in the 1930s, where every nation was trying to undercut every other country’s currency, to stability with the creation of the International Monetary Fund (IMF). Then the IMF betrayed its own original mission, President Nixon took the U.S. off the Bretton Woods system in the early 1970s, and now we’re back to where we were in the 1930s in terms of international finance–each country trying to undercut everyone else’s currencies.

Quite a bit of this interview is a profile of Shelton and her interests, particularly her love of Russian culture. The interview is an hour long.

The most intriguing thing to me is she likes the idea of a Bretton Woods-style gold standard for the U.S. dollar, an idea that most economists would consider anachronistic. This is an idea that I got into shortly after I left college in the early 1990s, but later abandoned, because I confused it with the gold standard that existed prior to Bretton Woods. The older one was partly blamed for the Great Depression.

The most interesting parts of the video are at 42 minutes and 55 minutes in. At minute 42 she explains her rationale for an international gold standard, saying she’s not a “gold bug,” I guess meaning that she doesn’t like it for its own sake. Rather she sees it as a relatively stable guarantor of value, and she dislikes the fact that world currencies are now not comparable to anything that seems real to most people. Instead, what people get is a sliding measure of value in the international currency market that varies from month to month and year to year. She said that now world currency trading is just a gamble, not unlike derivatives, and this has a debilitating effect on international trade. She also advocates it so that people can count on a dollar still being worth approximately what it was when they earned it, rather than its value wasting away because of inflation.

At minute 55 she gets very frank about the fiscal and economic situation that we are now in, and how we got into it. She utterly rejects the notion that supply side economics and the free market created the mess. She puts the blame squarely on government interference in the real estate market, and the dysfunctional incentives this created, with the government on the one hand tacitly guaranteeing mortgage-backed securities, while on the other allowing speculators to use them in free wheeling bets, which ultimately caused the crash to be as bad as it was. This put the government (in other words, the tax paying public) in the position of having to back the bets to try to contain an economic collapse.

On the lighter side, I noted that Mrs. Shelton said she grew up in “The Valley” (San Fernando Valley in CA). “I’m really kind of the classic Valley girl,” she said. Wow! You wouldn’t know it by listening to her talk. Note this is not “The Valley” I sometimes refer to. To those of us with a computer/tech background, “The Valley” means Silicon Valley, centered in San Jose, CA. San Fernando is a northern suburb of Los Angeles. Anyway, this took me back to the early 1980s when I used to hear “Valley Girl” by Frank Zappa on the radio.

Valley girl
she’s a Valley girl
Valley girl
she’s a Valley girl
Okay fine
fer sure, fer sure
She’s a Valley girl
in a clothing store …

Like, totally. Good memories. 🙂

Read Full Post »

Once again, I return to the technique of using a metaphor created by a recent South Park episode (Season 13, Episode 10) to illustrate the way things are (in an entirely different subject). Warning: There is adult, offensive language in the South Park video clips I link to below. My usual admonition applies. If you are easily offended, please skip them.

The episode is about the popular perception of wrestling, promoted by the WWE, vs. the real sport. It uses the same theme as an earlier episode I used, which satirized Guitar Hero: the pop culture takes something which involves discipline and hard work, with some redeeming quality, and turns it into something else entirely with no redeemable value. Rather than challenging us to be something more, it revels in what we are.

This is a follow-up post to “Does computer science have a future?” It fleshes out the challenge of trying to promote a real science in computing…with some satire.

I’ve been getting more exposure to the evolution of CS in universities, which extends to the issue of propping up CS in our high schools (which often just means AP CS), though there’s been some talk of trying to get it into a K-12 framework.

University CS programs have been hurting badly since the dot-com crash of 2001. Enrollments in CS dropped off dramatically to levels not seen since the 1970s. We’re talking about the days when mainframes and minicomputers seemed to be the only options. It’s not too much of a stretch to say that we’ve returned to that configuration of mainframes and terminals, and the thinking that “the world needs only 5 computers.”

After looking at the arc of the history of computing over the last 30 years, one could be excused for thinking that an open network like the internet and an open platform like the personal computer were all just a “bad trip,” and that things have been brought back to a sense of reasonable sobriety. This hides a deeper issue: Who is going to understand computing enough to deal with it competently? Can our universities orient themselves to bring students who have been raised in an environment of the web, video game consoles, iPods, and smart phones–who have their own ideas about what computers are–to an understanding of what computing is, so they can carry technological development forward?

Going even deeper, can our universities bring themselves to understand the implications of computing for their own outlook on the world, and convey that to their students? I ask this last question partly because of my own experience with computing and how it has allowed me to think and perceive. I also ask it because Alan Kay has brought it up several times as an epistemological subject that he’d like to see schools ponder, and in some way impart to students: thinking about systems in general, not just in computers.

The point of the internet was to create a platform upon which networks could be built, so they could be used and experimented with. It wasn’t just built so that people could publish porn, opinion, news, music, and video, and so that e-commerce, information transfer, chat, and internet gaming could take place.

The personal computer concept was not just created so that we could run apps. on it, and use it like a terminal. It was created with the intent of making a new medium, one where ideas could be exchanged and transactions could take place, to be sure, but also so that people could mold it into their own creation if they wanted. People could become authors with it, just as with pen, paper, and the typewriter, but in a much different sense.

But, back to where we are now…

Most high schools across the country have dropped programming courses completely, or they’re just hanging on by a thread (and in those cases an AP course is usually all that’s left). The only courses in computer literacy most schools have offered in the last several years are in how to use Microsoft Word and Excel (and maybe how to write scripts in Office using VBA), and how to use the web, Google, and e-mail.

CS departments in universities are desperate to find something that gives them relevance again, and there’s a sense of “adapt or die.” I’ve had some conversations with several computing educators at the university and high school level about my own emerging perspective on CS, which is largely based on the material that Alan Kay and Chris Crawford have put forward. I’ve gotten the ear of some of them. I’ve participated in discussions in online forums where there are many CS professors, but only a rare few seem to have the background and open mindedness to understand and consider what I have to say. These few like entertaining the ideas for a bit, but then forget about it, because they don’t translate into their notions of productive classroom activities (implementations). I must admit I don’t have much of a clue about how to translate these ideas into classroom activities yet, either, much less how to sell them to curriculum committees. I’m just discovering this stuff as I go. I guess I thought that by putting the ideas out there, along with a few other like-minded people, that at least a few teachers would catch on, become inspired, and start really thinking about translating them into curriculum and activities.

I’ve been getting a fair sense from afar about how hard it is for computing educators to even think about doing this. For one, academics need support from each other. Since these ideas are alien to most CS educators, it’s difficult to get support for them. I’ve been surprised, because I’ve always thought that it’s the teachers who come up with the curriculum, and in this “time of crisis” for CS they would be interested in a new perspective. It’s actually an old perspective, but it hasn’t been implemented successfully on a wide scale yet, and it’s a more advanced perspective than what has been taught in CS in most places for decades.

The challenge is understandable to me now at one level, because any alternative point of view on CS, even the ones currently being taught in some innovative university programs, runs into the buzz saw of the pre-existing student perspective on computing (prepare, there’s an analogy coming):

WWE: “You took my girl, and my job”

They come in with a view of computing driven by the way the pop culture defines it. In the story (follow the above link), the WWE defines wrestling for the kids. Trying to teach anything akin to “the real thing” (using computing in deep and creative ways to create representations of our ideas and test them, and possibly learn something new in the process) tends to get weird looks from students (though not from all):

The fine art of “wrastling”

There’s a temptation on the part of some to take away from this experience the idea that the material they’re presenting is no longer relevant, and that the students’ perspectives should be folded into what’s taught. I agree with this up to a point. I think that the presentation of the material can be altered to help students better relate to the subject, but I draw the line at watering down the end goal. I think that subjects have their own intrinsic worth. Rather than trying to find a way to help more students pass by throwing out subjects that seem too difficult, there should instead be remedial support that helps willing students rise up to a level where they can understand the material. To me, the whole idea of a university education is to get students to think and perceive at a more sophisticated level than when they came in. CS should be no different. You’d be surprised, though, how many CS professors think the more important goal is to teach only about existing technology and methodologies, the pop culture.

Talking to most other CS educators at the high school and university level has been an eye-opening and disheartening experience, because we get our terminology confused. I mean to say one thing, and to my amazement I get people who honestly think I’m saying the complete opposite. In other cases we have a different understanding of the same subject, and so the terms we use get assigned different meanings, and I come to a point where I realize we can’t understand each other. The background material for where we’re coming from is just too different.

Many CS professors (whom I have not met, though I’ve been hearing about them) apparently think all of this “real thing” stuff is quaint, but they have real work to do, like “training IT professionals for the 21st century.” Phah! Guess what, people. Java is the new Cobol (in lieu of Fortran, no less…). Web apps. are the new IBM 3270 apps. Remember those days? Welcome to the 1970s! And by the way, isn’t CS starting to look like CIS? Just an observation…

(A little background on the clip below: In the story, after the failed attempt at learning real wrestling (in the clip above), the South Park kids form their own “wrestling show,” which is just play acting with violence, complete with the same scandal-ridden themes of the “WWE wrestling” they admire so much. A bunch of “redneck” adults in South Park gather around to watch in rapt attention.)

“What you mean it ain’t real?”

Note I am not trying to call anyone “stupid” (the wrestling teacher in South Park has a penchant for calling people that), though I share in the character’s frustration with people who are unwilling to challenge their own assumptions.

I get a sense that with many CS professors the bottom line is it’s all about supporting the standard curriculum in order to turn out what they believe are trained IT professionals who are capable of working with existing technology to create solutions employers need. I wonder if they’ve talked to any software developers who work out in the field. I’m sure they’d get an earful about how CS graduates tend to not be of high quality anymore.

The faculty support their departmental goals and the existing educational field. The belief is, “Dammit! We’ve gotta have something called ‘CS’, because something is better than nothing.” This ends up meaning that they accommodate the dominant perspective that students have on computing. This ultimately goes back to the dominant industry vision, which now has nothing to do with CS.

When I’ve found a receptive CS professor, what I’ve suggested as a remedy, along with “moving the needle” back to something resembling a real science, is that the CS community needs to invent educational tools: Their own educational artifacts, and their own programming languages. This still goes on, though it’s not happening nearly as much as it did 40 years ago. I think that the perspectives of students need to be taken into consideration. The models used for teaching need to create a bridge between how they think of computing coming in, and where the CS discipline can take them.

The standard languages that most CS departments have been using for the past 14 years or so are C++, and now Java. C++ came from Bell Labs. Java came from Sun Microsystems. Both were created for industrial use by professionals, not people who’ve never programmed a computer before in their lives! In addition, they’re not that great for doing “real CS”. I’ve only heard of Scheme being recommended for that (along with some select course material, of course). I haven’t worked with it, but Python seems to get recommended from time to time by innovative people in CS. It’s now used in the introductory CS course at MIT. My point is, though, there needs to be more of an effort put into things like this in the CS academic community.

What’s at stake

In my previous post I link to above I asked the question, which Alan Kay asked in his presentation, “Does CS have a future?” I kind of punted, giving a general answer that’s characterized the field through the last several decades. I figured it would continue to be the case. This is going to sound real silly, but this next scene in South Park actually helped me see what Kay was talking about: We could lose it all! The strand that spans the gulf between ignorance and “the real thing” (a connection which is already extremely tenuous) could snap, and CS would be all but dead:

This isn’t “wrastling”!

What came to mind is that it wouldn’t necessarily mean that the programs called “CS” would be dead, but what they teach would end up being something so different and shallow as to not even resemble the traditional CS curriculum that’s existed for 30+ years.

What I’ve learned is there’s a lot of resistance to bringing a more powerful form of CS into universities and our school system. This is not only coming from the academics. It’s also coming from the students. This is because, ironically, it’s seen as weak and irrelevant. I’ve already given you a feel for where the student perspective is at. Alan Kay identified a reason why most CS academics don’t go for it: They think what they’re teaching is it! They think they’re teaching “real CS.” They have little to no knowledge of the innovative CS that was done in the 1960s and 70s. They couldn’t even recall it if they tried, because they never studied it. They’re historically illiterate, just like our industry.

High school “CS” teachers (why not just call their courses “programming”? That’s what they used to be called) are hapless followers of the university system. They’re less educated than the CS professors about what they do. From what I’ve heard about the standards committees that govern curriculum for the public schools, they’d have no clue about what “real CS” is if it were shown to them. This isn’t so much a gripe of mine as an acknowledgment that the public schools are not designed for teaching the different modes of thought that have been invented through the ages. There’s no point in complaining about a design, except to identify that it’s flawed, and why. The answer is to redesign the system and then criticize it if the intent of the design is not being met.

My sense of the historical ignorance in our field, having been a part of it since I was a teenager, is people take it as an article of faith that there’s been a linear progression of technological improvement. So why look at the old artifacts? They’re old and they’re obviously worse than what we have now. That’s the thinking. They don’t have to verify this notion. They know it in their bones. If they’re exposed to technical details of some of the amazing artifacts of the past, they’ll find fault with the design (from their context). How the design came about is not seen as valuable in and of itself. Only the usefulness that can be derived from it is seen as worthy. Most CS academics are not too distant from this level of ignorance. I get a sense that this comes from their own sense of inadequacy. They know they couldn’t design something like it. They wouldn’t expect themselves to. As far as they’re concerned electrical engineers and programming “gods” created that stuff. They also tend to not have the first clue about the context in which it was designed. They’ll pick out some subsystem that’s antiquated by what’s used now and throw the baby out with the bathwater, calling the whole thing worthless, or at best an antique that should be displayed dead in a museum. You may get the concession out of them that it was useful as a prototype for its time, but what’s the point in studying it now? The very idea of understanding the context for the design is an alien thought to them.

The designs of these old systems weren’t perfect. They were always expected to be works in progress. What gets missed is the sheer magnitude of the improvement that occurred, the leaps in thinking about computing’s potential, and what these new perspectives allowed the researchers to think about in terms of improving their designs, the quality of which was orders of magnitude greater than what came before (that point is often missed in modern analysis). Technologists often have the mindset of, “What can this technology do for me now?” They take for granted what they have, as in, “Now that I know how to use and do this, that old thing over there is ancient! My god. Why do you think that thing is so amazing?”

Another thing I’ve seen people dismiss is the idea that the people who created those amazing artifacts had an outlook that’s any different from their own, except for the fact that they were probably “artists”. That’s a polite way of dismissing what they did. Since they were so unique and “out there,” many of their ideas can be safely ignored, rather like how many in the open source community politely dismiss the ideas of the Free Software Foundation, and Richard Stallman. The stance is basically, “Thanks, but no thanks.” Yes, they came up with some significant new ideas, but it’s only useful to study them insofar as some of their ideas helped expand the marketability of technology. Their other ideas are too esoteric to matter. In fact we secretly think they’re a little nuts.

As I have written about on this blog, I share Kay’s view that computing has great historical and societal significance. What I’ve come to realize is that most CS academics don’t share that view. Sure, they think computing is significant, but only in terms of doing things faster than we once did, and in allowing people to communicate and work more efficiently at vast distances. The reasons are not as simple as closed-mindedness. Kay probably nailed this: Their outlook is limited.

What it comes down to is most computer science academics are not scientists in any way, shape, or form. They’re not curious about their field of work. They’re not interested in examining their own assumptions, the way scientists do. Yet they dominate the academic field.

I’ve given some thought to some analogies for the CS major, as it exists today. One that came to mind is that it’s like an English major, because the main focus seems to be on teaching students to program, and to do it well. As I’ve said before, programming in the realm of computing is just reading and writing. Where the English major goes beyond what CS teaches is in the study of some classics. This doesn’t happen in most CS curricula.

In discussions I’ve heard among CS academics, they now regard the field as an engineering discipline. That sounds correct. It fits how they perceive their mission. What generally goes unrecognized is that good engineering is backed by good science. Right now we have engineering backed by no science. All the field has are rules of thumb created from anecdotal evidence. The evidence is not closely scrutinized to see where the technique fits in other applications, or if it could be improved upon.

I am not arguing for a field that is bereft of an engineering focus. Clearly engineering is needed in computing, but without a science behind it, the engineering is poor, and will remain poor until a science is developed to support it.

My own vision of a science of computing could be described as a “science of design,” whose fundamental area of study would be architectures of computing. Lately Alan Kay has been talking about a “systems science,” which, as he discussed in the presentation I link to above, includes not just architecture but a “psychology of systems,” which I translated to mean incorporating a concept of collective “behavior” in a system of interactive systems. Perhaps that’s a shallow interpretation.

As you can tell, I now hold a dim view of traditional CS. However, I think it would be disruptive to our society to just say “let it die”. There are a lot of legacy systems around that need maintaining, and some of them are damn important to our society’s ability to function. There’s a need for knowledgeable people to do that. At the same time, Kay points ahead to the problem of increasing system complexity with poor architectural designs. It’s unsustainable. So it’s not enough to just train maintainers. Our technological society needs people who know how to design complex systems that can scale, and if nothing else, provide efficient, reliable service. That’s just a minimum. Hopefully one day our ambitions will go beyond that.

The discussion we’re not having

I happened to catch a presentation on C-SPAN last month by the National Academy of Engineering and the National Research Council on bringing engineering education into the K-12 curriculum (the meeting actually took place in September). I’m including links to the two-part presentation here, because the people involved were having the kind of discussion that I think CS academics should be having, but they’re not. The quality of the discussion was top notch, in my opinion.

Engineering in K-12 Education, Part 1, 3 hours

Engineering in K-12 Education, Part 2, 2 hours

What was clear in this presentation, particularly in Part 2, was that the people promoting engineering in K-12 have a LOT of heavy lifting to do to achieve their goals. They are just at the infant stage right now.

Readers might be saying, “How can they possibly think of teaching engineering in primary school? What about all the other subjects they have to learn?” First of all, the organizations that held this event said that they weren’t pressing for adding on classes to the existing curriculum, because they understand that the existing school structure is already full of subjects. Rather, they wanted to try integrating engineering principles into the existing course structure, particularly in math and science. Secondly, they emphasized the engineering principles of design and systems thinking. So it didn’t sound like it would involve shoving advanced math down the throats of first graders, though teaching advanced math principles (again, principles, not drill and practice) in a fashion that young students can understand wouldn’t be such a bad idea. Perhaps that could be accomplished through such a curriculum.

I had a discussion recently with Prof. Mark Guzdial, some other CS professors, and high school teachers on his blog, related to this subject. Someone asked a similar question to what I posed above: “I agree [with what you say about a real science of computing]. But how in the world do you sell that to a K-12 education board?” I came to a similar conclusion as the NAE/NRC: Integrate CS principles wherever you can. For example, I suggested that by introducing linguistics into the English curriculum, it would then be easier to introduce concepts such as parse trees and natural language processing into it, and it would be germane. Linguistics is an epistemological subject. Computing and epistemology seem to be cousins. That was just one example I could think of.

If CS academics would actually think about what computing is for a while, and its relevance to learning, they wouldn’t have to ask someone like me these questions. I’m just a guy who got a BSCS years ago, went into IT for several years, and is now engaged in independent study to expose himself to a better form of CS in hopes of fulfilling a dream.

Universities need to be places where students actually get educated, not just “credentialed.” Even if we’re thinking about the economic prospects of the U.S., we’re fooling ourselves if we think that just by handing a student a diploma that they’re being given a bright future. Our civil and economic future is going to be built on our ability to think, perceive, and create, even if most of us don’t realize it. Our educational systems had best spend time understanding what it will take to create educated citizens.


Update 8-17-09: I’ve revised this post a bit to clarify some points I made.

I received a request 2-1/2 weeks ago to write a post based on video of a speech that Alan Kay gave at Kyoto University in February, titled “Systems Thinking For Children And Adults.” Here it is. The volume in the first 10 minutes of the video is really low, so you’ll probably need to turn yours up; the audio gets louder after that.

Edit 1-2-2014: See also Kay’s 2012 speech, “Normal Considered Harmful,” which I’ve included at the bottom of this post.

On the science of computer science

Kay said what he means by a science of computing is the forward-looking study, understanding, and invention of computing. Will the “science” come to mean something like the other real sciences? Or will it be like library science and social science, which amount to a gathering of knowledge? A gathering of knowledge, he said, is not the principled approach by which physics, chemistry, and biology have been able to revolutionize our understanding of phenomena. Likewise, will we develop a software engineering that is like the other engineering disciplines?

I’ve looked back at my CS education with more scrutiny, and given what I’ve found, I’m surprised that Kay is asking this question. Maybe I’m misunderstanding what he said, but for me CS was a gathering of knowledge a long time ago. The question for me is can it change from that to a real science? Perhaps he’s asking about the top universities.

When I took CS as an undergraduate in the late 80s/early 90s it was clear that some research had gone into what I was studying. All the research was in the realm of math. There was no sense of tinkering with architectures that existed, and very little practice in analyzing them. We were taught to apply a little analysis to algorithms. There was no sense of trying to create new architectures. What we were given was pre-digested analysis of what existed. So it had gotten as far as exposing us to the “TEM” parts (of “TEMS”–Technology, Engineering, Mathematics, Science), but only in a narrow band. The (S)cience was non-existent.

What we got instead is what I’d call “small science” in the sense that we had lots of programming labs where we experimented with our own knowledge of how to write programs that worked; how to use, organize, and address memory; and how to manage complexity in our software. We were given some strategies for doing this, which were taught as catechisms. The labs gave us an opportunity to see where those strategies were most effective. We were sometimes graded on how well we applied them.

We got to experience a little bit of how computers could manipulate symbols, which I thought was real interesting. I wished that there would’ve been more of that.

One of the tracks I took while in college was focused on software engineering, which really amounted to project management techniques and rules of thumb. It was not a strong engineering discipline backed by scientific findings and methods.

When I got out into the work world I felt like I had to “spread the gospel,” because what IT shops were doing was ad hoc, worse than the methodologies I was taught. I was bringing them “enlightenment” compared to what they were doing. The nature and constraints of the workplace broke me out of this narrow-mindedness, and not always in good ways.

It’s only been by doing a lot of thinking about what I’ve learned, and about my point of view on computers, that I’ve been able to see that what I got out of CS was a gathering of knowledge with some best practices. At the time I had no concept that I was only getting part of the picture, even though our CS professors openly volunteered, with a wry humor, that “computer science is not a science.” They compared the term “computer science” to “social science” in the sense that it was an ill-defined field. There was the expectation that it would develop into something more cohesive, hopefully a real science, later on. Given the way we were taught, though, I guess they expected it to develop with no help from us.

Kay has complained previously that the commercial personal computing culture has contributed greatly to the deterioration of CS in academia. I have to admit I was a case in point. A big reason why I thought of CS the way I did was this culture I grew up in. Like I’ve said before, I have mixed feelings about this, because I don’t know if I would be a part of this field at all if the commercial culture he complains about never existed.

What I saw was that people were discouraged from tinkering with the hardware, seeing how it worked, much less trying to create their own computers. Not to say this was impossible, because there were people who tinkered with 8- and 16-bit computers. Of course, as I think most people in our field still know, Steve Wozniak was able to build his own computer. That’s how he and Steve Jobs created Apple. Computer kits were kind of popular in the late 1970s, but that faded by the time I really got into it.

When I used to read articles about modifying hardware there was always the caution about, “Be careful or you could hose your entire machine.” These machines were expensive at the time. There were the horror stories about people who tried some machine language programming and corrupted the floppy disk that had the only copy of their program on it (ie. hours and hours of work). So people like me didn’t venture into the “danger zone.” Companies (except for Apple with the Apple II) wouldn’t tell you about the internals of their computers without NDAs and licensing agreements, which I imagine one had to pay for handsomely. Instead we were given open access to a layer we could experiment on, which was the realm of programming either in assembly or a HLL. There were books one could get that would tell you about memory locations for system functions, and how to manipulate features of the system in software. I never saw discussion of how to create a software computer, for example, that one could tinker with, but then the hardware probably wasn’t powerful enough for that.

By and large, CS fit the fashion of the time. The one exception I remember is that in the CS department’s orientation/introductory materials they encouraged students to build their own computers from kits (this was in the late 1980s), and try writing a few programs, before entering the CS program. I had already written plenty of my own programs, but as I said, I was intimidated by the hardware realm.

My education wasn’t the vocational school setting that it’s turning into today, but it was not as rigorous as it could have been. It met my expectations at the time. What gave me a hint that my education wasn’t as complete as I thought was that opportunities which I thought would be open to me were not available when I looked for employment after graduation. The hint was there, but I don’t think I really got it until a year or two ago.

What would computing as a real science be like?

I attended the 2009 Rebooting Computing summit on CS education in January, and one of the topics discussed was what is the science of computer science? In my opinion it was the only topic brought up there that was worth discussing at that time, but that’s just me. The consensus among the luminaries that participated was that historically science has always followed technology and engineering. The science explains why some engineering works and some doesn’t, and it provides boundaries for a type of engineering.

We asked the question, “What would a science of computing look like?” Some CS luminaries used an acronym “TEMS” (Technology, Engineering, Mathematics, Science), and there seemed to be a deliberate reason why they had those terms in that order. In other circles it’s often expressed as “STEM.” Technology is developed first. Some engineering gets developed from patterns that are seen (best practices). Some math can be derived from it. Then you have something you can work with, experiment with, and reason about–science. The part that’s been missing from CS education is the science itself: experimentation, an interest and proclivity to get into the guts of something and try out new things with whatever–the hardware, the operating system, a programming language, what have you–just because we’re curious. Or, we see that what we have is inadequate and there’s a need for something that addresses the problem better.

Alan Kay participated in the summit and gave a description of a computing science that he had experienced at Xerox PARC. They studied existing computing artifacts, tried to come up with better architectures that did the same things as the old artifacts, and then applied the new architectures to “everything else.” I imagine that this would test the limits of the architecture, and provide more avenues for other scientists to repeat the process (take an artifact, create a new architecture for it, “spread it everywhere”) and gain more improvement.

(Update 12-14-2010: I added the following 3 paragraphs after finding Dan Ingalls’s “Design Principles Behind Smalltalk” article. It clarifies the idea that a science of computing was once attempted.)

Dan Ingalls gave a brief description of a process they used at Xerox to drive their innovative research, in “Design Principles Behind Smalltalk,” published in Byte Magazine in 1981:

Our work has followed a two- to four-year cycle that can be seen to parallel the scientific method:

  • Build an application program within the current system (make an observation)
  • Based on that experience, redesign the language (formulate a theory)
  • Build a new system based on the new design (make a prediction that can be tested)

The Smalltalk-80 system marks our fifth time through this cycle.

This parallels a process that was once described to me in computer science, called “bootstrapping”: Building a language and system “B” from a “lower form” language and system “A”. What was unique here was they had a set of overarching philosophies they followed, which drove the bootstrapping process. The inventors of the C language and Unix went through a similar process to create those artifacts. Each system had different goals and design philosophies, which is reflected in their design.

To give you a “starter” idea of what this process is like, read the introduction to Design Patterns, by Gamma, Helm, Johnson, and Vlissides, and take note of how they describe coming up with their patterns. Then notice how widely those patterns have been applied to projects that have nothing to do with the software the Gang of Four originally drew their patterns from. The exception here is that the Gang of Four didn’t redesign a language as part of their process. They invented a terminology, and a concept of coding patterns that are repeatable. They established architectural patterns to use within an existing language. The thing is, if they had explored what was going on with the patterns mathematically, they might very well have been able to formulate a new architecture that encompassed the patterns they saw, which, if successful, would’ve allowed them to create a new language.
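
To make the “patterns within an existing language” idea concrete, here’s a minimal sketch of the Gang of Four’s Observer pattern, written in Javascript since that’s the language this post returns to later (the code and the names in it are my own illustration, not theirs):

  // A "subject" keeps a list of observers and notifies each one when its state changes.
  class Subject {
    constructor() {
      this.observers = [];
    }
    subscribe(fn) {
      this.observers.push(fn);
    }
    setState(state) {
      this.state = state;
      this.observers.forEach(fn => fn(state)); // push the change to everyone listening
    }
  }

  // Hypothetical usage: two independent "views" react to changes in one model.
  const model = new Subject();
  model.subscribe(s => console.log("view A sees:", s));
  model.subscribe(s => console.log("view B sees:", s));
  model.setState(42); // both observers are notified

Nothing in the language names or enforces this structure; it lives entirely in convention, which is why treating the pattern itself as the end point, rather than as raw material for a better architecture or language, is the missed opportunity described above.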

The problem in our field is illustrated by the fact that more often than not, there’s been no study of the other technologies where these patterns have been applied. When the Xerox Learning Research Group came up with Smalltalk (object-orientation, late-binding, GUI), Alan Kay expected that others would use the same process they did to improve on it, but instead people either copied OOP into less advanced environments and then used them to build practical software applications, or they built applications on top of Smalltalk. It’s just as I described with my CS education: The strategies get turned into a catechism by most practitioners. Rather than studying our creations, we’ve just kept building upon and using the same frameworks, and treating them with religious reverence–we dare not change them lest we lose community support. Instead of looking at how to improve upon the architecture of Smalltalk, people adopted OOP as a religion. This happened and continues to happen because almost nobody in the field is being taught to apply mathematical and scientific principles to computing (except in the sense of pursuing proofs of computability), and there’s little encouragement from funding sources to carry out this kind of research.

There are degrees of scientific thinking that software developers use. One IT software house where I worked for a year used patterns on a regular basis that we created ourselves. We understood the essential idea of patterns, though we did not understand the scientific principles Kay described. Everywhere else I worked didn’t use design patterns at all.

There has been more movement in the last few years to break out of the confines developers have been “living” in; to try and improve upon fundamental runtime/VM architecture, and build better languages on top of it. This is good, but in reality most of it has just recapitulated language features that were invented decades ago through scientific approaches to computing. There hasn’t been anything dramatically new developed yet.

The ignorance we ignore

“What is the definition of ignorance and apathy?”

“I don’t know, and I don’t care.”

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well.

— Alan Kay, The Early History of Smalltalk

A fundamental problem with our field is that there’s very little appreciation for good architecture. We keep tinkering with old familiar structures, and produce technologies that are marginally better than what came before. We assume that brute force will get us by, because it always has in the past. This ignores a lot. One reason that brute force has been able to work productively in the past is the work of people who did not use brute force, the people who created functional programming, interactive computing, word processing, hyperlinking, search, semiconductors, personal computing, the internet, object-oriented programming, IDEs, multimedia, spreadsheets, etc. This kind of research went into decline after the 1970s and has not recovered. It’s possible that the brute force mentality will run into a brick wall, because there will be no more innovative ideas to save it from itself. A symptom of this is the hand-wringing I’ve been hearing for a few years now over how to leverage multiple CPU cores, though Kay thinks this is the wrong direction to look in for improvement.

There’s a temptation to say “more is better” when you run into a brick wall. If one CPU core isn’t providing enough speed, add another one. If the API is not to your satisfaction, just add another layer of abstraction to cover it over. If the language you’re using is weak, create a large API to give it lots of functionality, and/or a large framework to make it easier to develop apps in it. What we’re ignoring is the software architecture (all of it, including the language(s) and OS), and indeed the hardware architecture. These are the two places where we put up the greatest resistance to change. I think it’s because we acknowledge to ourselves that the people who make up our field by and large lack some basic competencies that are necessary to reconsider these structures. Even if we had the competencies, nobody would be willing to fund us for the purpose of reconsidering said structures. We’re asked to build software quickly, and with that as the sole goal we’ll never get around to reconsidering what we use. We don’t like talking about it, but we know it’s true, and we don’t want to bother gaining those competencies, because they look hard and confusing. It’s just a suspicion I have, but I think an understanding of mathematics and the scientific outlook is essential to the process of invention in computing. We can’t get to better architectures without it. Nobody else seems to mind the way things are going. They accept it. So there’s no incentive to try to bust through some perceptual barriers to get to better answers, except the sense that some of us have that what’s being done is inadequate.

In his presentation in the video, Kay pointed out the folly of brute force thinking by showing how humungous software gets with this approach, and how messy the software is architecturally. He said that the “garbage dump” that is our software is tolerated because most people can’t actually see it. Our perceptual horizons are so limited that most people can only comprehend a piece of the huge mess, if they’re able to look at it. In that case it doesn’t look so bad, but if we could see the full expanse, we would be horrified.

This clarifies what had long frustrated me about IT software development, and I’m glad Kay talked about it. I hated the fact that my bosses often egged me on towards creating a mess. They were not aware they were doing this. Their only concern was getting the computer to do what the requirements said in the quickest way possible. They didn’t care about the structure of it, because they couldn’t see it. So to them it was irrelevant whether it was built well or not. They didn’t know the difference. I could see the mess, at least as far as my project was concerned. It eventually got to the point that I could anticipate it, just from the way the project was being managed. So often I wished that the people managing me or my team, and our customers, could see what we saw. I thought that if they did they would recognize the consequences of their decisions and priorities, and we could come to an agreement about how to avoid it. That was my “in my dreams” wish.

Kay said, “Much of the applications we use … actually take longer now to load than they did 20 years ago.” I’ve read about this (h/t to Paul Murphy). (Update 3-25-2010: I used to have video of this, but it was taken down by the people who made it. So I’ll just describe it.) A few people did a side-by-side test of a 2007 Vista laptop with a dual-core Intel processor (I’m guessing 2.4 GHz) and 1 GB of RAM vs. a Mac Classic II with a 16 MHz Motorola 68030 and 2 MB of RAM. My guess is the Mac was running a SCSI hard drive (the only kind you could install on a Mac when they were made back then). I didn’t see them insert a floppy disk. They said in the video that the Mac is “1987 technology.” Even though the Mac Classic II was introduced in 1991, they’re probably referring to the 68030 CPU it uses, which came out in 1987.

The tasks the two computers were given were to boot up, load a document into a word processor, quit out of the word processor, and shut down the machine. The Mac completed the contest in 1 minute, 42 seconds, 25% faster than the Vista laptop, which took 2 minutes, 17 seconds. Someone else posted a demonstration video in 2009 of a Vista desktop PC which completed the same tasks in 1 minute, 18 seconds–23% faster than the old Mac. I think the difference was that the laptop vs. Mac demo likely used a slower processor for the laptop (vs. the 2009 demo), and probably a slower hard drive. The thing is, though, it probably took a 4 GHz dual-processor (or quad-core?) computer, faster memory, and a faster hard drive to beat the old Mac. To be fair, I’ve heard from others that the results would be much the same if you compared a modern Mac to the old Mac. The point is not which platform is better. It’s that we’ve made little progress in responsiveness.

The wide gulf between the two pieces of hardware (old Mac vs. Vista PC) is dramatic. The hardware got about 15,000-25,000% faster via Moore’s Law (though Moore’s Law only applies to transistor counts, not speed), but we have not seen a commensurate speed-up in system responsiveness. In some cases the newer software technology is slower than what existed 22 years ago.
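
For readers who want to check the arithmetic, here is a tiny back-of-the-envelope calculation in Javascript (my own sketch; the timings come from the demos described above, and the clock-speed ratio is only a crude proxy for how much faster the hardware got):

  // Timings from the demos described above, in seconds.
  const macClassic = 102;   // Mac Classic II: 1 minute, 42 seconds
  const vistaLaptop = 137;  // 2007 Vista laptop: 2 minutes, 17 seconds
  const vistaDesktop = 78;  // 2009 Vista desktop: 1 minute, 18 seconds

  // "X% faster" here means X% less elapsed time.
  console.log((1 - macClassic / vistaLaptop) * 100);  // ~25.5: the Mac vs. the 2007 laptop
  console.log((1 - vistaDesktop / macClassic) * 100); // ~23.5: the 2009 desktop vs. the Mac

  // Rough hardware ratio, using clock speed as a crude proxy: 2.4 GHz vs. 16 MHz.
  console.log((2400 / 16) * 100); // 15,000%, the low end of the range quoted above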

When Kay has elaborated on this in the past he’s said this is also partly due to the poor hardware architecture which was adopted by the microprocessor industry in the 1970s, and which has been marginally improved over the years. Quoting from an interview with Alan Kay in ACM Queue in 2004:

Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea [referring to the Burroughs B5000 computer].

Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.

Another factor is the configuration and speed of main and cache memory. Cache is built into the CPU, is directly accessed by it, and is very fast. Main memory has always been much slower than the CPU in microcomputers. The CPU spends the majority of its time waiting for memory, or data to stream from a hard drive or internet connection, if the data is not already in cache. This has always been true. It may be one of the main reasons why no matter how fast CPU speeds have gotten the user experience has not gotten commensurately more responsive.

The revenge of data processing

The section of Kay’s speech where he talks about “embarrassing questions” from his wife (the slide is subtitled “Two cultures in computing”) gets to a complaint I’ve had for a while now. There have been many times while I’m writing for this blog when I’ve tried to “absent-mindedly” put the cursor somewhere on a preview page and start editing, but then realize that the technology won’t let me do it. You can always tell when a user interaction issue needs to be addressed when your subconscious tries to do something with a computer and it doesn’t work.

Speaking about her GUI apps, his wife said, “In these apps I can see and do full WYSIWYG authoring.” About web apps she said, “But with this stuff in the web browser, I can’t–I have to use modes that delay seeing what I get, and I have to start guessing,” and, “I have to edit through a keyhole.” He said what’s even more embarrassing is that the technology she likes was invented in the 1970s (things like word processing and desktop publishing in a GUI with WYSIWYG–aspects of the personal computing model), whereas the stuff she doesn’t like was invented in the 1990s (the web/terminal model). Actually, I have to quibble a bit with him about the timeline.

“The browser = next generation 3270 terminal”
3270 terminal image from Wikipedia.org

The technology she doesn’t like–the interaction model–was invented in the 1970s as well. Kay mentioned 3270 terminals earlier in his presentation. The web browser with its screens and forms is an extension of the old IBM mainframe batch terminal architecture from the 1970s. The difference is one of culture, not time. Quoting from the Wikipedia article on the 3270:

[T]he Web (and HTTP) is similar to 3270 interaction because the terminal (browser) is given more responsibility for managing presentation and user input, minimizing host interaction while still facilitating server-based information retrieval and processing.

Applications development has in many ways returned to the 3270 approach. In the 3270 era, all application functionality was provided centrally. [my emphasis]

It’s true that the appearance and user interaction with the browser itself has changed a lot from the 3270 days in terms of a nicer presentation (graphics and fonts vs. green-screen text), and the ability to use a mouse with the browser UI (vs. keyboard-only with the 3270). It features document composition, which comes from the GUI world. The 3270 did not. Early on, a client scripting language was added, Javascript, which enabled things to happen on the browser without requiring server interaction. The 3270 had no scripting language.

There’s more freedom than the 3270 allowed. You can point your browser anywhere you want. A 3270 was designed to be hooked up to one IBM mainframe, and the user was not allowed to “roam” on other systems with it, except if given permission via mainframe administration policies. What’s the same, though, is the basic interaction model of filling in a form on the client end, sending that information to the server in a batch, and then receiving a response form.
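
To make the comparison concrete, here is a minimal sketch in Javascript of the two interaction styles (the element ids and the /preview URL are hypothetical, just for illustration):

  // 3270-style "batch" interaction: fill in a form, submit it, wait for the server,
  // then re-render whatever comes back.
  document.querySelector("#post-form").addEventListener("submit", async (event) => {
    event.preventDefault();
    const response = await fetch("/preview", {
      method: "POST",
      body: new FormData(event.target), // send the whole form in one batch
    });
    document.querySelector("#preview").innerHTML = await response.text();
  });

  // The "immediacy" model: every keystroke updates what you see, with no round trip at all.
  document.querySelector("#editor").addEventListener("input", (event) => {
    document.querySelector("#preview").textContent = event.target.value;
  });

Both handlers run in the same browser; the difference is purely in the interaction model, which is the point of the comparison.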

The idea that Kay brought to personal computing was immediacy. You immediately see the effect of what you are doing in real time. He saw it as an authoring platform. The browser model, at least in its commercial incarnation, was designed as a form submission and publishing platform.

Richard Gabriel wrote extensively about the difference between using an authoring platform and a publishing platform for writing, and its implications from the perspective of a programmer, in The Art of Lisp and Writing. The vast majority of software developers work in what is essentially a “publishing” environment, not an authoring environment, and this has been the case for most of software development’s history. The only time when most developers got closer to an authoring environment was when Basic interpreters were installed on microcomputers, and Basic became the most popular language that programmers used. The old Visual Basic offered the same environment. The interpreter got us closer to an authoring environment, but it still had one barrier: you either had to be editing code or running your program. You could not do both at the same time, though there was less of a wait to see your code run because there was no compile step. Unfortunately, with the older Basics, the programmer’s power was limited.

People’s experience with the browser has been very slowly coming back towards the idea of authoring via the AJAX kludge. Kay showed a screenshot of a more complete authoring environment inside the browser using Dan Ingalls’s Lively Kernel. It looks and behaves a lot like Squeak/Smalltalk. The programming language within Lively is Javascript.

We’ve been willing to sacrifice immediacy to get rid of having to install apps on PCs. App installation is not what Kay envisioned with the Dynabook, but that’s the model that prevailed in the personal computer marketplace. You know you’ve got a problem with your thinking if you’re coming up with a bad design to compensate for past design decisions that were also bad. The bigger problem is that our field is not even conscious that the previous bad design was avoidable.

Kay said, “A large percentage of the main software that a lot of people use today, because it’s connected to the web, is actually inferior to stuff that was done before, and for no good reason whatsoever!”

I agree with him, but there are people who would disagree on this point. They are not enlightened, but they’re a force to be reckoned with.

What’s happened with the web is the same thing that happened to PCs: The GUI was seen as a “good idea” and was grafted on to what has been essentially a minicomputer, and then a mainframe mindset. Blogger Paul Murphy used to write, from a business perspective, about how there are two cultures in computing/IT that have existed for more than 100 years. The oldest culture that’s operated continuously throughout this time is data processing. This is the culture that brought us punch cards, mainframes, and 3270 terminals. It brought us the idea that the most important function of a computer system is to encode data, retrieve it on demand, and produce reports from automated analysis. The other culture is what Murphy called “scientific computing,” and it’s closer to what Kay promotes.

The data processing culture has used web applications to recapitulate the style of IT management that existed 30 years ago. Several years ago, I heard from a few data processing believers who told me that PCs had been just a fad, a distraction. “Now we can get back to real computing,” they said. The meme from data processing is, “The data is more important than the software, and the people.” Why? Because software gets copied and devalued. People come and go. Data is what you’ve uniquely gathered and can keep proprietary.

I think this is one reason why CS enrollment has fallen. It’s becoming dead like Latin. What’s the point of going into it if the knowledge you learn is irrelevant to the IT jobs (read “most abundant technology jobs”) that are available, and there’s little to no money going into exciting computing research? You don’t need a CS degree to create your own startup, either. All you need is an application server, a database, and a scripting/development platform, like PHP. Some university CS departments have responded by following what industry is doing in an effort to seem relevant and increase enrollment (ie. keep the department going). The main problem with this strategy is they’re being led by the nose. They’re not leading our society to better solutions.

What’s natural and what’s better

Kay tried to use some simple terms for the next part of his talk to help the audience relate to two ideas:

He gave what I think is an elegant graphical representation of one of the reasons things have turned out the way they have. He talked about modes of thought, and said that 80% of people are outer-directed and instrumental reasoners. They only look at ideas and tools in the context of whether it meets their current goals. They’re very conservative, seeking consensus before making a change. Their goals are the most important thing. Tools and ideas are only valid if they help meet these goals. This mentality dominates IT. Some would say, “Well, duh! Of course that’s the way they think. Technology’s role in IT is to automate business processes.” That’s the kind of thinking that got us to where we are. As Paul Murphy has written previously, the vision of “scientific computing,” as he calls it, is to extend human capabilities, not replace/automate them.

Kay said that 1% of people look at tools and respond to them, reconsidering their goals in light of what they see as the tool’s potential. This isn’t to say that they accept the tool as it’s given to them, and adjust their goals to fit the limitations of that version of the tool. Rather, they see its potential and fashion the tool to meet it. Think about Lively Kernel, which I mentioned above. It’s a proof of this concept.

There’s a symbiosis that takes place between the tool and the tool user. As the tool is improved, new ideas and insights become more feasible, and so new avenues for improvement can be explored. As the tool develops, the user can rethink how they work with it, and so improve their thinking about processes. As I’ve thought about this, it reminds me of how Engelbart’s NLS developed. The NLS team called it “bootstrapping.” It led to a far more powerful leveraging of technology than the instrumental approach.

The “80%” dynamic Kay described happened to NLS. Most people didn’t understand the technology, how it could empower them, and how it was developed. They still don’t. In fact Engelbart is barely known for his work today. Through the work that was done in the early days of Xerox PARC, a few of his ideas managed to get into technology that we’ve used over the years. He’s most known now for the invention of the mouse, but he did much more than that.

In the instrumental approach the goals precede the tools, and are actually much more conservative than the approach of the 1-percenters. In the instrumental scenario the goals people have are similar whether the tools exist or not, and so there’s no potential for this human-tool interaction to provide insight into what might be better goals.

Kay talked about this subject at Rebooting Computing. I think he said that a really big challenge in education is to get students to shift from the “80%” mindset to one of the other modes of thought that have to do with deep thinking–or at least to expose students to this potential within themselves. I think he would say that we’re not predisposed to think only one way. It’s just that, left to our own devices, we tend to fall into one of these categories.

I base the following on the notes Kay had on his slides in his speech:

He said that in the commercial computing realm (which is based on the way mainframes were sold to the public years ago) it’s about “news”: The emphasis is on functionality, and people as components. This approach also says you get your hardware, OS, programming language, tools, and user interface from a vendor. You should not try to make your own. This sums up the attitude of IT in a nutshell. It’s not too far from the mentality of CS in academia, either.

The “new” in the 1970s was a focus on the end-user and on what they can learn and do. “We should design how user interactions and learning will be done and work our way down to the functions they need!” In other words, rather than thinking functionality-first, think user-first–in the context of the computer being a new medium, extending human capabilities. Aren’t “functionality” and “user” equally high priorities? Users just want functionality from computers, don’t they? Ask yourself this question: Is a book’s purpose only to provide functionality? The idea at Xerox PARC was to create a friendly, inviting, creative environment that could be explored, built on, and used to create a better version of itself–an authoring platform, in a very deep sense.

Likewise, with Engelbart’s NLS, the idea was to create an information sharing and collaboration environment that improved how groups work together. Again, the emphasis was on the interaction between the computer and the user. Everything else was built on that basis.

By thinking functionality-first you turn the computer into a device, or a device server, speaking metaphorically. You’re not exposing the full power of computing to the user. I’m going to use some analogies which Chris Crawford has written about previously. In the typical IT mentality the idea is this: it’s too dangerous to expose the full power of computing to end users, so the thinking goes. They’ll mess things up, either on purpose or by mistake. Or they’ll steal or corrupt proprietary information. Such low expectations… It’s like they’re children or illiterate peasants who can’t be trusted with the knowledge of how to read or write. They must be read to aloud rather than reading for themselves. Most IT environments believe that end users cannot be allowed to write, for they are not educated enough to write in the computer’s complex language system (akin to hieroglyphics). Some allow their workers to do some writing, but in limited, not very expressive languages. This “limits their power to do damage.” If they were highly educated, they wouldn’t be end users. They’d be scribes, writing what others will hear. Further, these “illiterates” are never taught the value of books (again, using a metaphor). They are only taught to believe that certain tomes, written by “experts” (scribes), are of value. It sounds Medieval, if you ask me. Not that this is entirely IT’s fault. Computer science has fallen down on the job by not creating better language systems, for one thing. Our whole educational/societal thought structure also makes it difficult to break out of this dynamic.

The “news” way of thinking has fit people into specialist roles that require standardized training. The “new” emphasized learning by doing and general “literacy.”

We’re in danger of losing it

Everyone interested in seeing what the technology developed at ARPA and Xerox PARC (the internet, personal computing, object-oriented programming) was intended to represent should pay special attention to the part of Kay’s speech titled “No Gears, No Centers: ARPA/PARC Outlook.” Kay told me about most of this a couple years ago, and I incorporated it into a guest post I wrote for Paul Murphy’s blog, called “The tattered history of OOP” (see also “The PC vision was lost from the get-go”). Kay shows it better in his presentation.

I think the purpose of Kay’s criticism of what exists now is to point out what we’ve lost, and continue to lose. Since our field doesn’t understand the research that created what we have today, we’ve helplessly taken it for granted that what we have, and the method of conservative incremental improvement we’ve practiced, will always be able to handle future challenges.

Kay said that because of what he’s seen with the development of the field of computing, he doesn’t believe what we have in computer science is a real field. And we are in danger of losing altogether any remnants of the science that was developed decades ago. This is because it’s been ignored. The artifacts that are based on that research are all around us. We use them every day, but the vast majority of practitioners don’t understand the principles and outlook that made it possible to create them. This isn’t a call to reinvent the wheel, but rather a qualitative statement: We’re losing the ability to make the kind of leap that was made in the 1970s, to go beyond what that research has brought us. In fact, in Kay’s view, we’re regressing. The potential scenario he paints reminds me of the science fiction stories where people use a network of space warping portals for efficient space travel and say, “We don’t know who built it. They disappeared a long time ago.”

I think there are three forces that created this problem:

The first was the decline in funding for interesting and powerful computing research. What’s become increasingly clear to me from listening to CS luminaries, who were around when this research was being done, is that the world of personal computing and the internet we have today is an outgrowth–you could really say an accident–of defense funding that took place during the Cold War–ARPA, specifically research funded by the IPTO (the Information Processing Techniques Office). This isn’t to say that ARPA was like a military operation. Far from it. When it started, it was staffed by academics, and they were given wide latitude in how they did their research on computing. An important ingredient of this was the expectation that researchers would come up with big ideas, ones that were high risk. We don’t see this today.

The reason this computing research got as much funding as it did was the Sputnik launch. It woke Americans up to the fact that mathematics, science, and engineering mattered, because of the perception that they were vital to the defense of the country. The U.S. knew that the Soviets had computer technology they were using as part of their military operations, and the U.S. wanted to be able to compete with them in that realm.

In terms of popular perception, there’s no longer a sense that we need technical supremacy over an enemy. However, computing is still important to national defense, even taking the war against Al Qaeda into account. A few defense officials and prominent technology leaders in the know have said that Al Qaeda is a very early adopter of new technologies, and an innovator in their use for its own ends.

In 1970 computing research split into part government-funded and part private, with some ARPA researchers moving to the newly created PARC facility at Xerox. This is where they created the modern form of personal computing, some aspects of which made it into the technologies we’ve been using since the mid-1980s. The groundbreaking work at PARC ended in the early 1980s.

The second force, which Kay identified, was the rise of the commercial market in PCs. He’s said that computers were introduced to society before we were ready to understand what’s really powerful about them. This market invited us in, and got us acquainted with very limited ideas about computing and programming. We went into CS in academia with an eye towards making it big like Bill Gates (never mind that Gates is actually a college drop-out), and in the process of accommodating our limited ideas about computing, the CS curriculum was dumbed down. I didn’t go into CS with the dream of becoming a billionaire, but I had my eye on a career at first.

It’s been difficult for universities in general to resist this force, because it’s pretty universal. The majority of parents of college-bound youth have believed for decades that a college degree means a secure economic future–meaning a job, a career that pays well. It’s not always true, but it’s believed in our culture. This sets expectations for universities to be career “launching pads” and centers for social networking, rather than institutions that develop a student’s faculties, their ability to think, and expose them to high level perspectives to improve their perception.

The third force is, as Paul Murphy wrote, the imperative of IT since the earliest days of the 20th century, which is to use technology to extend and automate bureaucracy. I don’t see bureaucracy as a totally bad thing. I think of Alexander Hamilton’s quote about public debt, rephrased as, “A bureaucracy, if not excessive, will be to us a blessing.” However, in its traditional role it creates and follows rules and procedures formulated in a hierarchy. There’s accountability for meeting directives, not broad goals. Its thinking is deterministic and algorithmic. IT, being traditionally a bureaucratic function, models this (we should be asking ourselves, “Does it have to be limited to this?”). My sense of it is that CS responded to the economic imperatives. It felt the need to lean towards what industry was doing, and how it thinks. The way it tried to add value was by emphasizing optimization strategies. The difference was that it used to have a strong belief in staying true to some theoretical underpinnings, which somewhat counter-balanced the strong pull from industry. That resolve is slipping.

Kay and I have talked a little about math and science education. He said that our educational system has long viewed math and science as important, and so these subjects have always been taught, but our school system hasn’t understood their significance. So the essential ideas of both tend to get lost. What the computer brings to light as a new medium is that math (as mathematicians understand it, not how it’s typically taught) and science are essential for understanding computing’s potential. They are foundations for a new literacy. Without this perspective, and the general competencies in math and science to support it, both students and universities will likely continue the status quo.

Outlook: The key ingredient

Kay’s focus on outlook really struck me, because we have such an emphasis in our society on knowledge and IQ. My view of this has certainly evolved as I’ve gotten into my studies.

Whenever you interview for a job in the IT industry you get asked about your knowledge, and maybe some questions that probe your management skills. In technology companies that actually invent stuff you are probed for your mental faculties, particularly at tech companies that are known as “where the tech whizzes work.” I had an interview 5 years ago with a software company where the employer gave me something that resembled an IQ test. I have neither seen nor heard of a technology company that asks questions about one’s outlook.

Kay said,

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that. And in contrast to these two, IQ is just a big lump of lead. It’s one of the worst things in our field that we have clever people in it, because like Leonardo [da Vinci] none of us is clever enough to deal with the scaling problems that we’re dealing with. So we need to be less clever and be able to look at things from better points of view.

This is truly remarkable! I have never heard anyone say this, but I think he may be right. All of the technology that has been developed, that has been used by millions of people, and has been talked about lo these many years was created by very smart people. More often than not what’s been created, though, has reflected a limited vision.

Given how society has perceived computing, I doubt Kay’s vision of human-computer interaction would’ve been able to fly in the commercial marketplace, because as he’s said, we weren’t ready for it. In my opinion we’re even less ready for it now. But certainly the personal and networked computing vision that was developed at PARC could’ve been implemented in machines that the consuming public could’ve purchased decades ago. I think that’s where one could say, “You had your chance, but you blew it.” Vendors didn’t have the vision for it.

What’s needed in the future

One of Kay’s last slides referred to Marvin Minsky. I am not familiar with Minsky, and I guess I should be. He said that in terms of software we need to move from a “biology of systems” (architecture) to a “psychology of systems” (which I’m going to assume for the moment means “behavior”). I don’t really know about this, so I’m just going to leave it at that.

In a chart titled “Software Has Fallen Short” Kay made a clearer case than he has in the past for not idolizing the accomplishments that were made at ARPA/Xerox PARC. In the past he’s tried to discourage people from getting too excited about the PARC stuff, in some cases downplaying it as if it’s irrelevant. He’s always tried to get people to not fall in love with it too much, because he wanted people to improve upon it. He used to complain that the Lisp and Smalltalk communities thought that these languages were the greatest thing, and didn’t dream of creating anything better. Here he explains why it’s important to think beyond the accomplishments at PARC: It’s insufficient for what’s needed in the future. In fact the research is so far behind that the challenges are getting ahead of what the current best research can deal with.

He said that the PARC stuff is now “news,” and my guess is he means “it’s been turned into news.” He talked about this earlier in the speech. The essential ideas that were developed at PARC have not made it into the computing culture, with the exception of the internet and the GUI. Instead some of the “new” ideas developed there have been turned into superficial “news.”

Those who are interested in thinking about the future will find the slide titled “Simplest Idea” interesting. Kay threw out some concepts that are food for thought.

A statement that Kay closed with is one he’s mentioned before, and it depresses me. He said that when he’s traveled around to universities and met with CS professors, they think what they’ve got “is it.” They think they’re doing real science. All I can do is shake my head in disbelief at that. Even my CS professors understood that what they were teaching was not a science. For the most part what’s passing for CS now is no better than what I had–excepting the few top schools.

Likewise, I’ve heard accounts saying that there are some enterprise software leaders who (mistakenly) think they understand software engineering, and that it’s now a fully mature engineering discipline.

Coming full circle

Does computer science have a future? I think as before, computing is going to be at the mercy of events, and people’s emotional perceptions of computing’s importance. This is because the vast majority of people don’t understand what computing’s potential is. Today we see it as “current” not “future.” I think that people’s perception of computing will become more remote. Computing will be in devices we use every day, as we can see today, and we will see them as digital devices, where they used to be analog.

“Digital” has become the new medium, not computing. Computing has been overlaid with analog media (text, graphics, audio, video). Kay has said for a while now that this is how it is with new media. When it arises, the generation that’s around doesn’t see its potential. They see it as the “new old thing” and they just use it to optimize what they did before. Computing for now is just the magical substrate that makes “digital” work. The essential element that makes computing powerful and dynamic has been pushed to the side, with a couple of exceptions, as it often has been.

Kay didn’t talk about this in the video, but he and I have discussed this previously. There used to be a common concept in computing, which has come to be called “generative technology.” It used to be a common expectation that digital technology was open ended. You could do with it what you wanted. This idea has been severely damaged by the way that the web has developed. First, commercial PCs, which were designed to be used alone and in LANs, were thrust upon the internet using native code and careless programming, which didn’t anticipate security risks. In addition, identities became more loosely managed, and this became a problem as people figured out how to hide behind pseudonyms. Secondly, the web browser wasn’t designed initially as a programmable medium. A scripting language was added to browsers, but it was not made accessible through the browser. The scripting language was not designed too well, either.

Many security catastrophes ensued, each eroding people’s confidence in the safety of the internet and damaging the image of programming. People who didn’t know how to program, much less how the web worked, felt like victims of those who did (a feeling that goes back to stand-alone PCs, when mischievous and destructive viruses circulated in infected executables on BBSes and floppy disks). Programming came to be seen as a suspicious activity rather than a creative one. So as vendors put up defensive barriers to compensate for their own flawed designs, they only reinforced the design decision made when the commercial browser was conceived: that programming would be restricted to people who worked for service providers. Ordinary users should neither expect access to code to modify it, nor want it. It’s gotten to the point we see today where even text formatting on message boards is restricted: HTML tags are limited or banned altogether in favor of special codes, and you can’t use CSS at all. Programming by the public on websites is usually forbidden because it’s seen as a security risk, and most web operators don’t see the value of letting people program on their sites.

Kay complained that despite the initial idealistic visions of how the internet would develop, it’s still difficult or impossible to send a simulation to somebody on a message board, or through e-mail, to show the framework for a concept. What I think he had envisioned for the internet is a multimedia environment in which people could communicate in all sorts of media: text, graphics, audio, video, and simulations–in real time. Specialized software has made this possible using services you can pay for, but it’s still not something that people can universally access on the internet.

To get to the future we need to look at the world differently

I agree with the sentiment that in order to make the next leaps we must stop accepting the world as it appears. We have to step outside the current accepted reality, what we’ve put up with and gotten used to, and look at what we want the world to be like, and use that as inspiration for what we do next. One strategy for getting started might be to look at what we find irritating about what currently exists, what we would like to see changed, and think about it from a fundamental, structural perspective. What are some things you’ve tried to do, but given up on, because the tools or resources available just weren’t up to the task? What sort of technology (perhaps a kind that doesn’t exist yet) do you think would do the job?

For further reading/exploring:

A complete system (apps and all) in 20KLOC – Viewpoints Research

Demonstration of Lively Kernel, by Dan Ingalls

Edit 8-21-2009: A reader left a comment on an old post I wrote on Lisp and Dijkstra, quoting a speech of Dijkstra’s from 10 years ago, titled “Computing Science: Achievements and Challenges.” Towards the end of the speech he bemoaned the fact that CS in academia and in U.S. industry had rejected the idea of proving program correctness, and he attributed this to an increasing mathematical illiteracy here. I think his analysis of cause and effect is wrong. My own CS professors liked the discipline that proofs of program correctness provided, but they rejected the idea that all programs can and should be proved correct. Dijkstra held the view that CS should focus only on programs that could be proved correct.
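
For readers who haven’t seen what proving program correctness looks like in practice, here is a minimal sketch of my own (not from Dijkstra’s speech, and in Python rather than his notation): a loop annotated with the invariant a correctness proof would rest on. The asserts merely check the invariant at runtime; an actual proof would discharge those conditions once, for all possible inputs, but the annotations are the same ones a proof would use.

    # A loop annotated with the invariant a correctness argument rests on.
    # This is an illustrative sketch, not Dijkstra's own notation.
    # The asserts check the invariant at runtime; a proof would discharge
    # these conditions for all possible inputs.
    def sum_array(a):
        s, i, n = 0, 0, len(a)
        # Loop invariant: s == sum(a[:i]) and 0 <= i <= n
        while i < n:
            assert s == sum(a[:i]) and 0 <= i <= n  # invariant holds at the top of each iteration
            s += a[i]
            i += 1
        # The invariant plus the negated loop condition gives i == n,
        # and therefore s == sum(a): the postcondition.
        assert i == n and s == sum(a)
        return s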

I think Dijkstra’s critique of anti-intellectualism in the U.S. is accurate, however, including our aversion to mathematics, and I found that it answered some questions I had about what I wrote above. It also gets to the heart of one of the issues I harp on repeatedly in my blog. His third bullet point is most prescient. Quoting Dijkstra:

  • The ongoing process of becoming more and more an amathematical society is more an American specialty than anything else. (It is also a tragic accident of history.)
  • The idea of a formal design discipline is often rejected on account of vague cultural/philosophical condemnations such as “stifling creativity”; this is more pronounced in the Anglo-Saxon world where a romantic vision of “the humanities” in fact idealizes technical incompetence. Another aspect of that same trait is the cult of iterative design.
  • Industry suffers from the managerial dogma that for the sake of stability and continuity, the company should be independent of the competence of individual employees. Hence industry rejects any methodological proposal that can be viewed as making intellectual demands on its work force. Since in the US the influence of industry is more pervasive than elsewhere, the above dogma hurts American computing science most. The moral of this sad part of the story is that as long as computing science is not allowed to save the computer industry, we had better see to it that the computer industry does not kill computing science. [my emphasis]

Edit 1-2-2014: Alan Kay gave a presentation called “Normal Considered Harmful” in 2012, at the University of Illinois, where he covered some of the same topics as his 2009 speech above, but clarified some of the points he made. So I thought it would be valuable to refer people to it as well.

—Mark Miller, https://tekkie.wordpress.com


“We all believed in socialism because we were children of a different generation. Then I realized that if you want to eradicate poverty you don’t do it by redistribution of existing wealth. You have to create new wealth.”

— Narayana Murthy, Chairman and CEO of Infosys, circa 2002

I’m stepping into an area fraught with politics, which I usually try to avoid because I think it distracts from the main mission of my blog. But I feel I need to say something, because I get a strong feeling that we as a country are repeating some mistakes of the past, and we shouldn’t be. The above quote is from a series produced in 2002 by PBS called Commanding Heights: The Battle for the World Economy. It traces the evolution of economic theory in the 20th century, from globalized trade early in the century, to Keynesian economics, to a return of free markets, and then globalization.

Regarding free markets, I hear the refrain from those in power now that “we’ve tried that and it didn’t work.” To the extent that misplaced regulation caused our current mess, I agree, but a prevailing attitude I see developing among Democrats concerns me. I look at what they are proposing now and say, “We’ve tried this as well, and it didn’t work.” Just look at our history. Does anyone remember the 1970s (I’m referring to Presidents Nixon’s, Ford’s, and Carter’s economic policies)? That era was the product of Keynesian ideas applied to a changed world that was no longer compatible with them, and it still isn’t.

(Update 6/2/2009: I had to revise the paragraph below with updated information, since my AP newswire link disappeared.)

The sheer magnitude of the spending being proposed is mind-boggling: a budget of $3.55 trillion for next year, with plans for future spending, along with projected deficits totaling $7.1 trillion from 2010-2019. That’s in addition to the $1.8 trillion deficit we’ve accrued this year alone, for a total of $8.9 trillion added to the debt in 10 years. To provide some perspective, it took us 28 years, 1980-2008, to rack up $11 trillion in debt. So Obama is talking about increasing the debt by 81% in 36% of the time.
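
To make explicit where those last two percentages come from, here is the back-of-the-envelope arithmetic behind them, using only the figures quoted above (a small Python sketch just to show the computation):

    # Back-of-the-envelope check of the figures quoted above (trillions of dollars).
    projected_deficits_2010_2019 = 7.1
    deficit_2009 = 1.8
    added_debt = projected_deficits_2010_2019 + deficit_2009   # 8.9 trillion over 10 years
    debt_as_of_2008 = 11.0                                     # racked up over 28 years, 1980-2008
    print(added_debt / debt_as_of_2008)   # ~0.81 -> an 81% increase in the debt
    print(10.0 / 28.0)                    # ~0.36 -> in 36% of the time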

I could understand the necessity of setting up TARP (the Troubled Asset Relief Program) to rescue the financial system while it was collapsing quickly last year, but the news I’ve been hearing lately says that even the Bush Administration overdid it a little. Now the Obama Administration is proposing a new round of TARP financing. Maybe it’s necessary, but it may backfire on us. Banks overfilled their reserves with the last round of TARP money; they are keeping their powder dry and have been stingy about lending. The question we should ask is why: are they being greedy, or are they just getting more conservative, trying to avoid high-risk investments? The answer makes a big difference in how we approach this.

I’ve gotten the sense that the election of Obama was not a vote for the future in terms of economics, but an attempt to arrest the present and reach into the past for salvation; a sense that we’re saying, “Stop this train. I want to get off!” It was a vote for “Remember the good old days? Let’s bring them back!” Too bad that’s a dream that’s not possible. The world has changed. It’s time to craft a new future. I know this sounds anachronistic, given that the Obama campaign used up-to-date technology and continues to use it, but the economic and budgetary solutions put forward are from several decades in the past.

I’ve been paying close attention to financial analysis of our economy over the past several months. The sense I get is that we are preparing to repeat some old mistakes, and we will achieve similar results. The way I see it we can look forward to the following over the next few years:

  1. Inflation, caused by higher energy prices and the classic “too much money chasing too few goods” phenomenon: we’ve been printing money at an accelerated rate, and given the current proposals we’re going to accelerate it even more, because we’re not going to be able to borrow all the money we need. I’m sure a little inflation feels good now, since we’ve been in a deflationary cycle. But given the slowness of government to react to any situation, and a consistently prevailing attitude of “if it works, don’t fix it” (no matter who’s in power), we can predict with certainty that the government will overcorrect. Second, whenever the economy recovers, banks will draw from their excess reserves and increase the money supply even more. Saving will continue to be low, because it doesn’t pay to save money in an inflationary economy. Investors will put their money abroad in an attempt to increase their wealth rather than lose it letting it sit in a bank account (more on this below).
  2. Higher interest rates. From what I’m hearing, we’ve nearly tapped out our borrowing capacity, at least with what foreign bondholders are willing to buy into, so we’ll need to raise interest rates to make holding our bonds more attractive. This is going to have consequences for all of us. Initially we may not feel it, due to excess capacity built up in the banking system, but eventually we will.
  3. Slow domestic economic growth, caused by #1 and #2 and by government borrowing crowding out private borrowing. I think, however, we will see a “jobless recovery.” Even though foreign countries appear to be in worse shape now than we are, if they stay true to market principles I think they will recover faster than we will, and will grow faster. Their crashing stock markets mean things are getting cheaper over there. Our current economic policies are not encouraging investment in our economy, and money will flow where it is welcomed, despite many people’s notions of where it “should” go. So I think we can look forward to a stock market recovery down the road (maybe next year), but I don’t get a sense that we will have robust job growth. The key to any of this happening is for our government and financial system to somehow untangle the derivatives mess of mortgage-backed securities and CDOs.
  4. A trend towards slow, lumbering gigantism in the private sector. With economic conditions not conducive to domestic investment, and therefore few new competitors entering the marketplace, I think we’ll see a trend towards conglomeration, and perhaps more regulated monopolies. This will contribute to our slow economic growth, because these giants are not going to be very competitive internationally. If they survive, it’ll be because they participate in government contracts, and they can get pretty far on inertia alone.
  5. Your future is a government job. With all the government spending that’s going to take place, probably the best place to find a good-paying job will be the federal government, or a major corporation that does government contracting. Government is not as efficient at creating jobs as the private sector, but there will definitely be growth there, and it’ll likely be secure. Despite the efforts of past administrations, it’s been relatively rare for government jobs to get cut. The federal government has grown continuously no matter who’s been in charge; it’s just been a matter of where the growth has occurred.

Having said all that, I hope I’m wrong. I hope we will come out of this with a robust domestic economy and good private-sector job growth in the years ahead. I just don’t see that happening, given what I’ve learned about what happened the last time we did what we’re doing now.

Ending on a light-hearted note, The Onion discusses “Closing the money hole.” Though I haven’t seen pundit discussions that are nearly this silly (there are expressions of muted surprise, but it’s amazing to me there’s not more alarm), the public discussion among citizens has felt pretty close to this… I haven’t posted warnings about foul language in videos in the past, because just about every video I post now has a little of it and I got tired of warning people off, but there’s a bit of it here:


Edit 6/2/2009: I found this May 31 interview with Niall Ferguson and Joshua Ramo (video) on Fareed Zakaria’s CNN show “GPS.” Unfortunately I can’t embed the video, so follow the link. What Niall says agrees with what I said above. Both Niall and Joshua widen the picture of the consequences of what the federal government is doing now, and of what it still needs to do. I agree with Joshua’s premise that we live in a “revolutionary time,” but I disagree with his examples. It’s true that we have “upstarts” around the world filling the void created by discredited leadership, but my point here is that the solutions being offered, the ones I’ve heard, are also old, discredited solutions in new packaging. Unfortunately we’ve chosen something “new” that’s really not. I guess the thinking is that since one old idea apparently doesn’t work, let’s try a different old idea. Sorry, that’s not going to work either. Better to come up with something new, though let’s not throw out the baby with the bathwater. Some of the old ideas still hold true. Let’s not forget what we’ve learned about what works (and what doesn’t), but let’s acknowledge that we have more to learn about how a globalized economy works.

—Mark Miller, https://tekkie.wordpress.com

