I thought I’d share the video below, since it has some valuable insights on what computer science should be, and what education should be, generally. It’s all integrated in this presentation, and indeed, one of the projects of education should be integrating computer science into it, though not with the explicit purpose of creating more programmers for future jobs (students could always use it for that anyway). Alan Kay presents a different definition of CS than the one pursued in universities today. He refers to how Alan Perlis defined it (Perlis was the one who coined the term “computer science”), which I’ll get to below.

This thinking about CS and education provides, among other things, a pathway toward reimagining how knowledge, literature, and art can be presented, organized, dissected, annotated, and shared in a way that’s more meaningful than can be achieved with old media. (For more on that, see my post “Getting beyond paper and linear media.”) Like the late Carl Sagan, Kay advocates here for a general model for the education of citizens, not just professional careerists.

Another reason I thought of sharing this is that several years ago I remember hearing Kay was working on rethinking the computer science curriculum. What he presents here is more of a suggestion: “Start thinking about it this way.” (He takes a dim view of the concept of a curriculum, since it suggests setting students on a rigid path of study with the intent of creating minds with a cookie cutter, though he is in favor of a classical liberal arts education, with the prescriptions that entails.)

As he says in the presentation, there’s a lot to develop in the practice of education in order to bring this to fruition.

This is from 2015:

I wrote notes below for some of the segments he talks about, just because I think his presentation bears some interpretation for many readers. He uses metaphors a lot.

The bicycle

This is an analogy for how an apparatus, or a subject, is typically modified for education. We take the optimized, adult version of something and add compensators, which make it so that beginners can use it without falling all over themselves. It’s seen as easier to present it this way, and as a skill-building experience, since in order to learn how to do something, you supposedly need to use “the real thing.” Beginners can put on a good show of using this sort of apparatus, or modified subject, but the problem is that it doesn’t teach a beginner how to become good at really using the real thing at its full potential. The compensators become a crutch.

He said a better idea is to use an apparatus, or a component of the subject, that lets a beginner get a feel for how to use the thing in a way that gets across significant aspects of its potential, but without the optimizations, or the way adults use it, which make it too complicated for a beginner to use under their own power. In this case, a lowbike is better. This beginner apparatus, or component, is more like the first version of the thing you’re trying to teach. Bicycles were originally more like scooters, without pedals or a chain: you’d sit in the seat, push it along with your legs, kind of “running while sitting,” glide, and turn by shifting your weight and leaning into the turn. Once a beginner gets a feel for that, they can deal with the optimized version, or a scaled-down adult version, with pedals and a chain to drive the bike, because all that adds is the ability to get more power out of it. It doesn’t change any of the fundamentals of how you use it.

This gets to an issue of pedagogy, that learners need to learn something in components, rather than dealing with the whole thing at once. Once they learn one capacity, they can move on to the next challenge in learning the “whole thing.”

Radiation vs. nouns

He put forward a proposition for his talk, which is that he’s mixing a bunch of ideas together, because they overlap. This is a good metaphor, because most of his talk is about what we are as human beings, and how society should situate and view education. Only a little of it is actually on computer science, but all of it is germane to talking about computer science education.

He also gives a little advice to education reformers. He pointed out what’s wrong with education as it is right now, but rather than cursing it, he said one should make a deliberate effort to “build a tribe” or coalition with those who are causing the problem, or are in the midst of the problem, and suggest ways to bring them into a dignified position, perhaps by sharing in their honors, or, as his example illustrated, their shame. I draw some of this from Plato’s Cave metaphor.

Cooperation and competition in society

I once heard Kay talk about this many years ago. He said that, culturally, modern corporations are like the ancient hunter-gatherers. They exploit the resources of an area for a while, and once it’s exhausted, they move on, and that as a culture, they have yet to learn about democracy, which requires more of a “settlement” mentality toward what they’re doing. Here, he used an agricultural metaphor to talk about a cooperative society that creates the wealth that is then used by competitive forces within it. What he means by this is that the true wealth is the knowledge that’s ultimately used to develop new products and services. It’s not all developed inside the marketplace. He doesn’t talk about this, but I will. Even though a significant part of the wealth (as he said, you can think of it as “potential energy”) is generated inside research labs, what research labs tend to lack is knowledge of what the members of society can actually understand of this developed knowledge. That’s where the competitive forces in society come in, because they understand this a lot better. They can negotiate between how much of the new knowledge to put into a product, and how much it will cost, to reach as many people as possible. This is what happened in the computer industry of the past.

I think I understand what he’s getting at with the agricultural metaphor, because farmers don’t just want to reap a crop for one season. Their livelihood depends on maintaining the fertility of their land. That requires more than just exploiting what’s there season after season; do only that and you get a dust bowl. If instead practices are modified to allow the existing land to become fertile again, or, in the case of hunter-gathering, the environment is aggressively managed to create favorable grazing that attracts game, then you can create a cycle of exploitation and care such that a group can stay in one area for a long time without denying themselves any of the benefits of how they live.

I think what he suggests is that if corporations modified their behavior to a more settled, agricultural model, using some of their profits to contribute to educating the society in which they exist and to funding scientific research on a private basis, that would “regenerate the soil” for economic growth, which can then fuel more research, creating a cycle of renewal. No doubt the idea he’s presenting includes the businesses that would participate in doing this. They should be allowed to profit (“reap”) from what they “sow,” but the point is they’re not the only ones who can profit. Other players in the marketplace can also exploit the knowledge that’s generated, and profit as well. That’s what was done in the past with private research labs.

He attributes the lack of this to culture, of not realizing that the economic model that’s being used is not sustainable. Eventually, you use up the “soil,” and it becomes “infertile,” and “blows away,” and, in the case of hunter-gathering, the “good hunting grounds” are used up.

He makes a crucial point, though, that education is not just about jobs and competitiveness. It’s also about inculcating what citizenship really means. I’m sure if he was asked to drill down on this more, he would suggest a classical education for this, along with a modified math and science curriculum that would give students a sense of what those subjects really are like.

The sense I get is that he’s advocating something closer to an Andrew Carnegie model of corporate stewardship: Carnegie, after he made his money, directed his philanthropy to building schools and libraries. Kay would just add science labs to that mix. (He mentions this later in his talk.)

I feel it necessary to note that not all for-profit entities would be able to participate in funding these cooperative activities, because their profit margins are so slim. I don’t think that’s what he’s expecting out of this.

What we are, to the best of our knowledge

He gives three views into human mental capacity: the way we perceive (theatrical), how much we can perceive at any moment in time (1 ± 2), and how educators should view us psychologically and mentally (more primate and mammalian). This relates to neuroscience, and to some extent, evolutionary psychology.

The raison d’être of computer science

The primary purpose of computer science should be developing a science of systems in process, and that means all processes: mechanical processes, technological processes, social processes, biological processes, mental processes, etc. This relates to my earlier post, “Beginning the journey of becoming a computer scientist.” It’s partly about developing a new kind of mathematics for modeling processes. Alan Turing did it, with his Turing Machine concept, though he was trying to model the process of generating mathematical statements, and doing mathematical tests on them.
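
To make the idea of “a new kind of mathematics for modeling processes” a little more concrete, here’s a minimal sketch of a Turing machine simulator in Python. This is my own illustration, not anything from Kay’s talk or Turing’s notation; the state names and the rule table for adding 1 to a binary number are assumptions I made just for the example. The whole “machine” is nothing but a table of rules saying what to write, which way to move, and which state to enter next.

    # A minimal Turing machine simulator (an illustrative sketch, not from the talk).
    # A machine is just a table of rules: (state, symbol) -> (new_symbol, move, new_state).

    def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
        tape = dict(enumerate(tape))   # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            new_symbol, move, state = rules[(state, symbol)]
            tape[head] = new_symbol
            head += 1 if move == "R" else -1
        # Read the tape back in order, dropping leading/trailing blanks.
        return "".join(tape[p] for p in sorted(tape)).strip(blank)

    # Example rules (my own): add 1 to a binary number, head starting at the
    # leftmost digit. "start" walks right to the end; "carry" adds from the right.
    rules = {
        ("start", "0"): ("0", "R", "start"),
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),
        ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry = 0, carry continues
        ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry = 1, done
        ("carry", "_"): ("1", "L", "halt"),    # ran off the left edge: new digit
    }

    print(run_turing_machine(rules, "1011"))   # prints "1100" (11 + 1 = 12 in binary)

The point isn’t the particular rules; it’s that a process, any process, can be written down this way and then run as a model.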

Shipping the design

Kay talks about how programmers today don’t have access to anything like what designers in other fields have, where they’re able to model their design, simulate it, and then have a machine fabricate a prototype that you can actually show and use.

I had an epiphany about this about 8 or 9 years ago. I was at a friend’s party, and there were a few mechanical engineers among the guests. I overheard a couple of them talking about the computer-aided design (CAD) software they were using. One talked about a “terrible” piece of CAD software he used at work. He said he had a much better CAD system at home, but it was incompatible with the data files that were used at work. As much as he would’ve loved to use it, he couldn’t.

He said the system he used at work required him to group pieces of a design together as he was building the model, and once he did that, those pieces became inflexible. He couldn’t just redesign one piece of it, or separate one out individually from the model. He couldn’t move the pieces around on the model, and have them fit. Once they were grouped, that was it. It became this static thing. He said in order to redesign one piece of it, he had to take the entire model apart, piece by piece, redesign the part, and then redesign all the other pieces in the group to make the new part fit. He said he hated it, and as he talked about it, he acted like he was so disgusted with it, he wanted to throw it in the trash, like it was a piece of garbage.

He said on his CAD system at home, it was wonderful, because he could separate a part from a model any time he wanted, and the system would adjust the model automatically to “make sense” out of the part being missing. He could redesign the part, and move it to a different part of the model, “attach it” somewhere, and the system would automatically adjust the model so that the new part would fit. The way he described it gave it a sense of fluidity. Whereas the system he used at work sounded rigid.

It reminded me of the programming languages I had been using, where once relationships between entities were set up, it was really difficult to take pieces of it “out” and redesign them, because everything that depended on that piece would break once I redesigned it. I had to go around and redesign all the other entities that related to it to adjust to the redesign of the one piece.

I can’t remember how this worked, but another thing the engineer talked about was that the system at work had some sort of “binding” mechanism that seemed to associate parts by “type,” and that this was also rigid, which reminded me a lot of the strong typing systems in the languages I had been using. He said the system he had at home didn’t have this, and to him, it made more sense. Again, his description lent a sense of fluidity to the experience of using it. I thought, “My goodness! Why don’t programmers think like this? Why don’t they insist that the experience be like this guy’s CAD system at home?” For the first time in my career, I had a profound sense of just what Alan Kay talks about here: that the computing field is retrograde. It has not advanced anywhere close to the kind of engineering that exists in other fields, where they would insist on this sort of experience. We accept so much less, whereas modern engineers wouldn’t stand for it, because they know they have better options.
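
To illustrate the kind of rigidity I’m talking about on the programming side, here’s a small sketch in Python (the class and function names are invented for the example, and it isn’t meant to represent any particular language either of us had in mind). In the first version the report is welded to one concrete storage class, so redesigning the storage breaks everything that touches it. In the second, the parts only agree on a small, loose contract, so a part can be pulled out and swapped, a bit more like that CAD system he had at home.

    # Rigid version: Report is welded to one concrete storage class.
    class SqlStore:
        def fetch_rows(self, table):
            return [("widget", 3), ("gadget", 5)]   # stand-in for a real query

    class Report:
        def __init__(self):
            self.store = SqlStore()                 # hard-wired dependency
        def render(self):
            # Any change to SqlStore's interface forces changes here, too.
            return "\n".join(f"{name}: {qty}"
                             for name, qty in self.store.fetch_rows("inventory"))

    # Looser version: the report only needs "something with fetch_rows()".
    class CsvStore:
        def __init__(self, text):
            self.text = text
        def fetch_rows(self, table):
            return [tuple(line.split(",")) for line in self.text.splitlines()]

    class FlexibleReport:
        def __init__(self, store):                  # the part is attached, not welded
            self.store = store
        def render(self):
            return "\n".join(f"{name}: {qty}"
                             for name, qty in self.store.fetch_rows("inventory"))

    print(FlexibleReport(SqlStore()).render())
    print(FlexibleReport(CsvStore("widget,3\ngadget,5")).render())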

Don’t be fooled by large efforts below threshold

Before I begin this part, I want to share a crucial point that Kay makes, because it’s one of the big ones:

Think about what thinking is. Thinking is not being logical. Thinking is choosing the environment that you’re going to think in before you start rationalizing.

Kay had something very interesting, and startling, to say about the Apollo space program, using that as a metaphor for large reform initiatives in education generally. I recently happened upon a video of testimony he gave to a House committee on educational computing back in 1982, chaired by then-congressman Al Gore, and Kay talked about this same example back then. He said that the way the Apollo rockets were designed was a “hack.” They were not the best design for space travel, but it was the most expedient for the mission of getting to the Moon by the end of the 1960s. Here, in this presentation, he talks about how each complete rocket was the height of a 45-story building (1-1/2 football fields in length), most of it high explosives, with only a tiny capsule at the top that could fit 3 astronauts. This is not a model that could scale to what was needed for space travel.

It became this huge worldwide cultural event when NASA launched it, and landed men on the Moon, but Kay said it was really not a great accomplishment. I remember Rep. Gore saying in jest, “The walls in this room are shaking!” The camera panned around a bit, showing pictures on the wall from NASA. How could he say such a thing?! This was the biggest cultural event of the century, perhaps in all of human history. He explained the same thing here: that the Apollo program didn’t advance space travel beyond the mission to the Moon. It was not technology that would get us beyond that, though in hindsight we can say that technology like it enabled launching probes throughout the Solar System.

Now, what he means by “space travel,” I’m not sure. Is it manned missions to the outer planets, or to other star systems? Kay is someone who has always thought big. So, it’s possible he was thinking of interstellar travel. What he was talking about was the problem of propulsion, getting something powerful enough to make significant discoveries in space exploration possible. He said chemical propellant just doesn’t do it. It’s good enough for launching orbital vehicles around our planet, and launching probes, but that’s really it. The rest is just wasting time below threshold.

Another thing he explained is that large initiatives which don’t cross a meaningful threshold can be harmful to efforts to advance any endeavor, because large initiatives come with extended expectations that the investment will continue to be used, and those expectations must be satisfied, or else there will be no cooperation on the initial effort. The participants will want their return on investment. He said that’s what happened with NASA. The ROI had to play out, but that ruined the program, because as it did, people could see we weren’t advancing the state of the art in space travel that much, and the science being produced out of it was usually nothing to write home about. Eventually, we got what we now see: people are tired of it, and have no enthusiasm for it, because it set expectations so low.

What he was trying to do in his House committee testimony, and what he’s trying to do here, is provide some perspective that science offers, vs. our common sense notion of how “great” something is. You cannot get qualitative improvement in an endeavor without this perspective, because otherwise you have no idea what you’re dealing with, or what progress you’re building on, if any. Looking at it from a cultural perspective is not sufficient. Yes, the Moon landing was a cultural milestone, but not a scientific or engineering milestone, and that matters.

Modern science and engineering have a sense of thresholds, that there can come a point where some qualitative leap is made, a new perspective on what you’re doing is realized that is significantly better than what existed prior. He explains that once a threshold has been crossed, you can make small improvements which continue to build on that significant leap, and those improvements will stick. The effort won’t just crash back down into mediocrity, because you know something about what you have, and you value it. It’s a paradigm shift. It is so significant, you have little reason to go back to what you were doing before. From there, you can start to learn the limits of that new perspective, and at some point, make more qualitative leaps, crossing more thresholds.

“Problem-finding”/finding the goal vs. problem-solving

Problem solving begins with a current context: “We’re having a problem with X. How do we solve it?” Problem finding asks: do we even have a good idea of what the problem is? Maybe the reason for the problems we’ve been seeing is that we haven’t solved a different problem we don’t know about yet. “Let’s spend time trying to find that.”

Another way of expressing this is a concept I’ve heard about from economists, called “opportunity cost,” which, in one context, gets across the idea that by implementing a regulation, it’s possible that better outcomes will be produced in certain categories of economic interactions, but it will also prevent certain opportunities from arising which may also be positive. The rub is these opportunities will not be known ahead of time, and will not be realized, because the regulation creates a barrier to entry that entrepreneurs and investors will find too high to overcome. This concept is difficult to communicate to many laymen, because it sounds speculative. What this concept encourages people cognizant of it to do is to “consider the unseen,” to consider the possibilities that lie outside of what’s currently known.

One can view “problem finding” in a similar way, not as a way of considering the unseen, but of exploring it, finding new knowledge that was previously unknown, and therefore unseen, and then reconsidering one’s notion of what the problem really is. It’s a way of expanding your knowledge base in a domain, with the key practice being that you’re not just exploring what’s already known. You’re exploring the unknown.

The story he tells about MacCready illustrates working with a good modeling system. He needed to be able to fail with his ideas a lot, before he found something that worked. So he needed a simple enough modeling system that he could fail in, where when he crashed with a particular model, it didn’t take a lot to analyze why it didn’t work, and it didn’t take a lot to put it back together differently, so he could try again.

He made another point about Xerox PARC, that it took years to find the goal, and it involved finding many other goals, and solving them in the interim. I’ve written about this history at “A history lesson on government R&D” Part 2 and Part 3. There, you can see the continuum he talks about, where ARPA/IPTO work led into Xerox PARC.

This video with Vishal Sikka and Alan Kay gives a brief illustration of this process, and what was produced out of it.

Erosion gullies

There are a couple metaphors he uses to talk about the lack of flexibility that develops in our minds the more we focus our efforts on coping, problem solving, and optimizing how we live and work in our current circumstances. One is erosion gullies. The other is the “monkey trap.”

Erosion gullies channel water along a particular path. They develop naturally as water erodes the land it flows across. These “gullies” seem to fit with what works for us, and/or what we’re used to. They develop into habits about how we see the world–beliefs, ideas which we just accept, and don’t question. They allow some variation in the path that’s followed, but they provide boundaries that don’t allow the water to go outside the gully (leaving aside the exception of floods, for the sake of argument). He uses this to talk about how “channels” develop in our minds that direct our thinking. The more we focus our thoughts in that direction, the “deeper” the gully gets. Keep at it too long, and the “gully” won’t allow us to see anything different than what we’re used to. He says that it may become inconceivable to think that you could climb out of it. Most everything inside the “gully” will be considered “right” thinking (no reason why), and anything outside of it will be considered “wrong” (no reason why), and even threatening. This is why he mentions that wars are fought over this. “We’re all in different erosion gullies.” They don’t meet anywhere, and my “right” is your “wrong,” and vice-versa. The differences are irreconcilable, because the idea of seeing outside of them is inconceivable.

He makes two points with this. One is that we have erosion gullies regarding the stories we tell ourselves and the beliefs we hold onto. Another is that we have erosion gullies even in our sensory perceptions, which dictate what we see and don’t see. We can see things that don’t even exist, and we typically do. He uses eyewitness testimony to illustrate this.

I think what he’s saying with it is we need to watch out for these “gullies.” They develop naturally, but it would be good if we had the flexibility to be able to eventually get out of our “gully,” and form a new “channel,” which I take as a metaphor for seeing the world differently than we’re used to. We need a means for doing that, and what he proposes is science, since it questions what we believe, and tests our ideas. We can get around our beliefs, and thereby get out of our “gullies” to change our perspective. It doesn’t mean we abandon “gullies,” but just become aware that other “channels” (perspectives) are possible, and we can switch between them, to see better, and achieve better outcomes.

Regarding the “monkey trap,” he uses it as a metaphor for us getting on a single track, grasping for what we want, not realizing that the very act of being that focused, to the exclusion of all other possibilities, is not getting us anywhere. It’s a trap, and we’d benefit by not being so dogged in pursuing goals if they’re not getting us anywhere.

“Fast” vs. “slow”

He gets into some neuroscience that relates to how we perceive, what he called “fast” and “slow” response. You can train your mind through practice in how to use “fast” and “slow” for different activities, and they’re integral to our perception of what we’re doing, and our reactions to it, so that we don’t careen into a catastrophe, or miss important ideas in trying to deal with problems. He said that cognitive approaches to education deal with the “slow” systems, but not the “fast” ones, and that’s not enough to help students really understand a subject. As other forms of training inherently deal with the “fast” systems, educators need to think about how the “fast” systems respond to their subjects, and incorporate that into how they are taught. He anticipates this will require radically redesigning the pedagogy that’s typically used.

He says that the “fast” systems deal with the “atoms” of ideas that the “slow” system also deals with. By “atoms,” I take it he means fundamental, basic ideas or concepts for a subject. (I think of this as the “building blocks of molecules.”)

The way I take this is that the “slow” systems he’s talking about are what we use to work out hard problems. They’re what we use to sit down and ponder a problem for a while. The “fast” systems are what we use to recognize or spot possible patterns/answers quickly, a kind of quick, first-blush analysis that can make solving the problem easier. To use an example, you might be using “fast” systems now to read this text. You can do it without thinking about it. The “slow” systems are involved in interpreting what I’m saying, generating ideas that occur to you as you read it.

This is just me, but “fast” sounds like what we’d call “intuition,” because some of the thinking has already been done before we use the “slow” systems to solve the rest. It’s a thought process that has already worked some things out before we consciously engage with the problem.

Science

This is the clearest expression I’ve heard Kay make about what science actually is, not what most people think it is. He’s talked about it before in other ways, but he just comes right out and says it in this presentation, and I hope people watching it really take it in, because I see too often that people take what they’ve been taught about what science is in school and keep reiterating it for the rest of their lives. This goes on not only with people who love following what scientists say, but also in our societal institutions that we happen to associate with science.

…[Francis] Bacon wrote a book called “The Novum Organum” in 1620, where he said, “Hey, look. Our brains are messed up. We have bad brains.” He called the ways of messing up “idols.” He said we get serious errors because of our genetics. We get serious errors because of the culture we’re in. We get serious errors because of the languages we use. They don’t represent what’s actually out there. We get serious errors from the way that academia hangs on to bad ideas, and teaches them over again. These are his four “idols.” Anyone ever read Bacon? He said we need something to get around our bad brains! A set of heuristics, is the term we’d use today.

What he called for was … science, because that’s what “Novum Organum,” the rest of the title, was: “A new way of dealing with knowledge.”

Science is not the knowledge, because knowledge is in this context. What science is is a negotiation between what’s out there and what we can represent.

This is the big idea. This is the idea they don’t teach in school. This is the idea we should be teaching. It’s one of the biggest ideas of all time.

It isn’t the knowledge. It’s the relationship, because what’s out there is only knowable by a phenomena that is being filtered in every possible way. We don’t even know if our brain is capable of representing the stuff.

So, to think about science as the truth is completely wrong! It’s not the right way to look at it. But if you think about it as a negotiation between the best you can do right now and stuff that’s out there, where you’re not completely sure, you’re in a very strong position.

Science has been the most important, powerful thought system humans have ever invented, because it gave up the idea of truth, and it substituted for it a thousand variations of false, some of which are incredibly powerful. This is the big idea.

So, if we’re going to think about computing, this is one way … of thinking about, “Wow! Computers!” They are representers. We can learn about representations. We can simulate ideas. We can get a very good–much better sense of dealing with thinking about these complexities.

“Getting there”

The last part demonstrates what I’ve seen with exploration. You start out thinking you’re going to go from Point A to Point B, but you take diversions, pathways that are interesting, but related to your initial search, because you find that it’s not a straight path from Point A to Point B. It’s not as straightforward as you thought. So, you try other ways of getting there. It is a kind of problem solving, but it’s really what Kay called “problem finding,” or finding the goal. In the process, the goal becomes finding a better way to get to the goal, and along the way you find problems worth solving that you didn’t anticipate at all when you first got going. In that process, you’ll find things you didn’t expect to learn, but which are really valuable to your knowledge base. In your pursuit of trying to find a better way to get to your destination, you might even cross a threshold, and find that your initial goal is no longer worth pursuing, but that there are better goals to pursue in this new perception you’ve obtained.

Related post: Reviving programming as literacy

—Mark Miller, https://tekkie.wordpress.com


I came across this interview with Lesley Chilcott, the producer of “An Inconvenient Truth” and “Waiting For Superman.” Kind of extending her emphasis on improving education, she produced a short 9-minute video selling the idea of “You should learn to code,” both to adults and children. It addresses two points: 1) the anticipated shortage of programmers needed to write software in the future, and 2) the increasing ubiquity of programming in all sorts of fields where people would think it wouldn’t exist, such as manufacturing and agriculture.

The interview gets interesting at 3 minutes 45 seconds in.

Michelle Fields, the interviewer, asked what I thought were some insightful questions. She started things off with:

It seems as though the next generation is so fluent in technology. How is it that they don’t know what computer programming is?

Chilcott said:

I think the reason is, you know, we all use technology every day. It’s surrounding us. Like, we can debate the pro’s and con’s of technology/social media, but the bottom line is it’s everywhere, right? So I think a lot of people know how to read it. They grow up playing with an iPhone or something like that, but they don’t know how to write it. And so when you say, “Do you know what this is,” specifically, or what this job is–and you know, those kids are in first, second, fifth grade–they know all about it, but they don’t know what the job is.

I found this answer confusing. She’s kind of on the right track, thinking of programming as “writing.” I cut her some slack, because as she admits in the interview, she’s just started programming herself. However, as I’ve said before, running software is not “reading.” It’s really more like being read to by a machine, like listening to an audio book, or someone else reading to you. You don’t have to worry about the mental tasks of pronunciation, sentence construction, or punctuation. You can just listen to the story. Running software doesn’t communicate the process that the code is generating, because there’s a lot that the person using it is not shown. This is on purpose, because most people use software to accomplish some utilitarian task unrelated to how a computer works. They’re not using it to understand a process.

The last sentence came across as muddled. I think what she meant was they know all about using technology, but they don’t know how to create it (“what the job is”).

Fields then asked,

There was this study which found that 56% of students would rather eat broccoli than learn math. Do you think that since computer programming is somewhat related to math, that that’s the reason children and students shy away from it?

Chilcott said:

It could be. That is one of the myths that exist. There is some, you know, math, but as Bill Gates and some other people said, you know, addition, subtraction–It’s much more about problem solving, and I think people like to problem-solve, they like mysteries, they like decoding things. It’s much more about that than complicated algorithms.

She’s right that there is problem solving involved with programming, but she’s either mistaken or confusing math with arithmetic when she says that the relationship between math and programming is a “myth.” I can understand why she tries to wave it off, because as Fields pointed out, most students don’t like math. I contend, as do some mathematicians, that this is due to the way it’s taught in our schools. The essence of math gets lost. Instead it’s presented as a tool for calculation, and possibly as a cognitive development discipline for problem solving, neither of which communicates what it really is, and both of which remove a lot of its beauty.

In reality math is pervasive in programming, but to understand why I say this you have to understand that math is not arithmetic (addition, subtraction), as she suggests. This confusion is common in our society. I talk more about this here. Having said this, it does not mean that programming is hard right off the bat. The math involved has more to do with logic and reasoning. I like the message in the video below from a couple of the programmers interviewed: “You don’t have to be a genius to know how to code. … Do you have to be a genius to do math? No.” I think that’s the right way to approach this. Math is important to programming, but it’s not just about calculating a result. While there is some memorization involved (understanding a programming language’s rules, knowing what different things are called), that’s not a big part of it.

The cool thing is you can accomplish some simple things in programming, to get started, without worrying about math at all. It becomes more important if you want to write complex programs, but that’s something that can wait.

My current understanding is the math in programming is about understanding the rules of a system and what statements used in that system imply, and then understanding the effects of those implications. That sounds complicated, but it’s just something that has to be learned to do anything significant with programming, and once learned will become more and more natural. I liken it to understanding how to drive a car on the road. You don’t have to learn this concept right away, though. When first starting out, you can just look at and enjoy the effects of trying out different things, exploring what a programming environment offers you.
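
To make the “rules and implications” point a bit more concrete, here’s a small Python illustration of my own (nothing from the interview, and the compound-growth example is just an assumption I picked for simplicity): the rule of the system is simple, but seeing what that rule implies is where the reasoning, the “math,” comes in.

    # The "rule" of this tiny system: every step, the balance grows by 5%.
    def grow(balance, years):
        for _ in range(years):
            balance = balance * 1.05
        return balance

    # The reasoning step: that rule implies a closed form, balance * 1.05 ** years,
    # so we can predict the loop's effect without running it year by year.
    def grow_predicted(balance, years):
        return balance * 1.05 ** years

    print(grow(100, 10))            # ~162.89, by simulating the rule
    print(grow_predicted(100, 10))  # ~162.89, derived from what the rule implies
                                    # (the two agree up to floating-point rounding)

Nothing here required calculating anything by hand; the work was in seeing what the rule implies about the program’s behavior.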

Where Chilcott shines in the interview above is when she becomes the “organizer.” She said that even though 95% of the schools have computers and internet access, only 10% have what she calls a “computer science” course. (I wish they’d go back to calling it a “programming course.” Computer science is more than what most of these schools teach, but I’m being nit-picky.) The cool thing about Code.org, a web site she promotes, is that it tries to locate a school near you that offers programming courses. If there aren’t any, no problem. You can learn some basics of programming right inside your browser using the online tools that it offers on the site.

The video Chilcott produced is called “Code Stars” in the above interview, but when I went looking for it I found it under the name “the Code.org film,” or, “What Most Schools Don’t Teach.”

Here is the full 9-minute video:

If you want the shorter videos, you can find them here.

The programming environment you see kids using in these videos is called “Scratch.”

Gabe Newell said of programming:

When you’re programming, you’re teaching possibly the stupidest thing in the entire universe–a computer–how to do something.

I see where Newell is going with this, but from my perspective it depends on what programming environment you’re using. Some programming languages have the feel of you “teaching” the system when you’re programming. Others have the feel of creating relationships between simple behaviors. Others, still, have the feel of using relationships to set up rules for a new system. Programming comes in a variety of approaches. However, the basic idea that Newell gets across is true: computers only come with a set of simple operations, and that’s it. They don’t do very much by themselves, or even in combination. It’s important for those new to programming to learn this early on.

Some of my early experiences in programming match those of new programmers even today. One of them is that, when using a programming language, one is tempted to assume that the computer will infer the meaning of some programming expression from context. There is some context used in programming, but not much, and it’s highly formalized. It’s not intuitive. I can remember when I first learned this. It was like the joke where, say, someone introduces his/her friend to a dumb, witless character in a skit. He/she says, “Say hi to my friend, Frank,” and the dummy says, “Hi to my friend Frank.” And the guy/gal says, “NO! I mean…say hello,” making a hand gesture trying to get the two to connect, and the dummy might look at the friend and say, “Hello,” but that’s it. That’s kind of a realization to new programmers. Yeah, the computer has to have almost everything explained to it (or modeled), even things we do without thinking about it. It’s up to the programmer to make the connections between the few things the computer knows how to do, to make something larger happen.
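
Here’s a tiny, hypothetical Python example of that “Hi to my friend Frank” moment (the variable names are mine, picked only for illustration): the computer infers nothing you don’t spell out.

    # What a beginner often expects: "the computer knows what I mean by the total."
    scores = [88, 92, 75]
    # print(total)           # NameError: name 'total' is not defined -- nothing is inferred

    # What actually has to happen: spell out how "total" comes into being.
    total = 0
    for score in scores:
        total = total + score  # the connection between "scores" and "total" is ours to make
    print(total)               # 255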

Jack Dorsey talked about programming in a way that I think is important. His ultimate goal when he started out was to model something, and make the model malleable enough that he could manipulate it, because he wanted to use it for understanding how cities work.

Bill Gates emphasized control. This is a common early motivation for programmers. Not necessarily controlling people, but controlling the computer. What Gates was talking about was what I’d call “making your own world,” like Dorsey was saying, but he wanted to make it real. When I was in high school (late 1980s) it was a rather common project for aspiring programming students to create “matchmaking” programs, where boys and girls in the whole school would answer a simple questionnaire, and a computer program that a student had written would try to match them up by interests, not unlike some of the online dating sites that are out there now. I never heard of any students finding their true love through one of these projects, but it was fun for some people.
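
Just for fun, here’s roughly what those matchmaking projects boiled down to, as a sketch of my own in Python (the names and interests are made up, and the real student programs were written in very different languages and were cruder than this): score every pairing by how many interests the two people share, and list the best matches first.

    # A toy version of the old "matchmaking" class project: pair people by shared interests.
    from itertools import product

    boys = {"Al": {"music", "chess", "skiing"}, "Bob": {"movies", "skiing"}}
    girls = {"Cara": {"chess", "music"}, "Dee": {"movies", "hiking"}}

    def best_matches(group_a, group_b):
        # Score each pair by the number of interests they have in common.
        scored = [(len(a_likes & b_likes), a, b)
                  for (a, a_likes), (b, b_likes) in product(group_a.items(), group_b.items())]
        return sorted(scored, reverse=True)   # best scores first

    for score, boy, girl in best_matches(boys, girls):
        print(f"{boy} + {girl}: {score} shared interest(s)")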

Vanessa Hurst said, “You don’t have to be a genius to know how to code. You need to be determined.” That’s pretty much it in a nutshell. In my experience everything else flowed from determination when I was learning how to do this. It will drive you to learn what you need to learn to get it, even if sometimes it’s subject matter you find tedious and icky. You learn to just push through it to get to the glorious feeling at the end of having accomplished what you set out to do.

Newell said at the end of the video,

The programmers of tomorrow are the wizards of the future. You’re going to look like you have magic powers compared to everybody else.

That’s true, but this has been true for a long time. In my professional work developing custom database solutions for business customers I had the experience of being viewed like a magician, because customers didn’t know how I did what I did. They just appreciated the fact that I could do it.

I really don’t mean to discourage anyone, because I still enjoy programming today, and I want to encourage people to learn programming, but I feel the need to say something, because I don’t want people to get disillusioned over this. This status of “wizard,” or “magician,” is not always what it’s cracked up to be. It can feel great, but there is a flip side to it that can be downright frustrating. This is because people who don’t know a whit of what you know how to do can get confused about what your true abilities are, and they can develop unrealistic expectations of you. I’ve found that wherever possible, the most pleasurable work environment is working among those who also know how to code, because we’re able to size each other up, and assign tasks appropriately. I encourage those who are pursuing software development as a career to shoot for that.

A couple things I can say for being able to code are:

  • It makes you less of a “victim” in our technology world. Once you know how to do it, you have an idea about how other programs work, and the pitfalls they can fall into that might compromise your private information, allow a computer cracker to access it, or take control of your system. You don’t have to feel scared by the alarming “hacking” or phishing reports you hear on the news, because you can be choosy about what software you use based on how it was constructed, what it’s capable of, how much power it gives you (not someone else), and not just base a decision on the features it has, or cool graphics and promotion. You can become a discriminating user of software.
  • You gain the power to create the things that suit you. You don’t have to use software that you don’t like, or that you think is being offered on unreasonable terms. You can create your own, and it can be whatever you want. It’s just a matter of the knowledge you’re willing to gather and the amount of energy you’re willing to put into developing the software.

Edit 5-20-2013: While I’m on this subject, I thought I should include this video by Mitch Resnick, who has been involved in creating Scratch at MIT. Similar to what Lesley Chilcott said above, he said, “It’s almost as if [users of new technologies] can read, but not write,” referring to how people use technology to interact. I disagreed with the notion, above, that using technology is the same as reading. Resnick hedged a bit on that. I can kind of understand why he might say this, because by running a Scratch program, it is like reading it, because you can see how code creates its results in the environment. This is not true, however, of much of the technology people use today.

Mark Guzdial asked a question a while back that I thought was important, because it brings this issue down to where a lot of people live. If the kind of literacy I’m going to talk about below is going to happen, the concept needs to be able to come down “out of the clouds” and become more pedestrian. Not to say that literacy needs to be watered down in toto (far from it), but that it should be possible to read and write to communicate everyday ideas and experiences without being super sophisticated about it. What Mark asked was, in the context of a computing medium, what would be the equivalent of a “note to grandma”? I remember suggesting Dan Ingalls’s prop-piston concept from his Lively Kernel demos as one candidate. Resnick provided what I thought were some other good ones, but in the context of Mother’s Day.

Context reversal

The challenge that faces new programmers today is different from when I learned programming as a child in one fundamental way. Today, kids are introduced to computers before they enter school. They’re just “around.” If you’ve got a cell phone, you’ve got a computer in your pocket. The technology kids use presents them with an easy-to-use interface, but the emphasis is on use, not authoring. There is so much software around it seems you can just wish for it, and it’s there. The motivation to get into programming has to be different than what motivated me.

When I was young the computer industry was still something new. It was not widespread. Most computers that were around were big mainframes that only corporations and universities could afford and manage. When the first microcomputers came out, there wasn’t much software for them. It was a lot easier to be motivated to learn programming, because if you didn’t write it, it probably didn’t exist, or it was too expensive to get (depending on your financial circumstances). The way computers operated was more technical than they are today. We didn’t have graphical user interfaces (at first). Everything was done from some kind of text command line interface that filled the entire screen. Every computer came with a programming language as well, along with a small manual giving you an introduction on how to use it.

[Image: PC-DOS, from Wikipedia]

It was expected that if you bought a computer you’d learn something about programming it, even if it was just a little scripting. Sometimes the line between what was the operating system’s command line interface, and what was the programming language was blurred. So even if all you wanted to do was manipulate files and run programs, you were learning a little about programming just by learning how to use the computer. Some of today’s software developers came out of that era (including yours truly).

Computer and operating system manufacturers had stopped including programming languages with their systems by the mid-1990s. Programming languages had also been taken over by professionals. The typical languages used by developers were much harder to learn for beginners. There were educational languages around, but they had fallen behind the times. They were designed for older personal computer systems, and when the systems got more sophisticated no one had come around to update them. That began to be remedied only in the last 10 years.

Computer science was still a popular major at universities in the 1990s, due to the dot-com craze. When that bubble burst in 2000, that went away, too. So in the last 18 years we’ve had what I’d call an “educational programming winter.” Maybe we’ll see a revival. I hope so.

Literacy reconsidered

I’m directing the rest of this post to educators, because there are some issues around a programming revival I’d like to address. I’m going to share some more detailed history, and other perspectives on computer programming.

What many may not know is that we as a society have already gone through this once. From the late 1970s to the mid-1980s there was a major push to teach programming in schools as “computer literacy.” This was the regime that I went through. The problem was some mistakes were made, and this caused the educational movement behind it to collapse. I think the reason this happened was due to a misunderstanding of what’s powerful about programming, and I’d like educators to evaluate their current thinking in light of this, so that hopefully they do not repeat the mistakes of the past.

As I go through this part, I’ll mostly be quoting from a Ph.D. thesis written by John Maxwell in 2006 called “Tracing the Dynabook: A Study of Technocultural Transformations.” (h/t Bill Kerr)

Back in the late 1970s microcomputers/personal computers were taking off like wildfire, with Apple IIs and Commodore VIC-20s, and later, Commodore 64s and IBM PCs. They were seen as “the future.” Parents didn’t want their children to be “left behind” as “technological illiterates.” This was the first time computers were being brought into the home. It was also the first time many schools were able to grant students access to computers.

Educators thought about the “benefits” of using a computer for certain cognitive and social skills.  Programming spread in public school systems as something to teach students. Fred D’Ignazio wrote in an article called “Beyond Computer Literacy,” from 1983:

A recent national “computers in the schools” survey conducted by the Center for the Social Organization of Schools at Johns Hopkins University found that most secondary schools are using computers to teach programming. … According to the survey, the second most popular use of computers was for drill and practice, primarily for math and language arts. In addition, the majority of the teachers who responded to the survey said that they looked at the computer as a “resource” rather than as a “tool.”

…Another recent survey (conducted by the University of Maryland) echoes the Johns Hopkins survey. It found that most schools introduce computers into the curriculum to help students become literate in computer technology. But what does this literacy entail?

Because of the pervasive spread of computers throughout our society, we have all become convinced that computers are important. From what we read and hear, when our kids grow up almost everyone will have to use computers in some aspect of their lives. This makes computers, as a subject, not only important, but also relevant.

An important, relevant subject like computers should be part of a school’s curriculum. The question is how “Computers” ought to be taught.

Special computer classes are being set up so that students can play with computers, tinker with them, and learn some basic programming. Thus, on a practical level, computer literacy turns out to be mere computer exposure.

But exposure to what? Kids who are now enrolled in elementary and secondary schools are exposed to four aspects of computers. They learn that computers are programmable machines. They learn that computers are being used in all areas of society. They learn that computers make good electronic textbooks. And (something they already knew), they learn that computers are terrific game machines.

… According to the surveys, real educational results have been realized at schools which concentrate on exposing kids to computers. … Kids get to touch computers, play with them, push their buttons, order them about, and cope with computers’ incredible dumbness, their awful pickiness, their exasperating bugs, and their ridiculous quirks.

The main benefits D’Ignazio noted were ancillary. Students stayed at school longer, came in earlier, and stayed late. They were more attentive to their studies, and the computers fostered a sense of community, rather than competition and rivalry. If you read his article, you get a sense that there was almost a “worship” of computers on the part of educators. They didn’t understand what they were, or what they represented, but they were so interesting! There’s a problem there… When people are fascinated by something they don’t understand, they tend to impose meanings on it that are not backed by evidence, and so miss the point. The mistaken perceptions can be strengthened by anecdotal evidence (one of the weakest kinds). This is what happened to programming in schools.

The success of the strategy of using computers to try to improve higher-order thinking was illusory. John Maxwell’s telling of the “life and death of Logo” (my phrasing) serves as a useful analog to what happened to programming in schools generally. For those unfamiliar with it, the basic concept of Logo was a programming environment in which the student manipulates an object called a “turtle” via commands. The student can ask the turtle to rotate and move, and as it moves it drags a pen behind it, tracing its trail. Other versions of this language were created with more capabilities, allowing further exploration of the concepts for which it was created. Seymour Papert, who taught children using Logo, originally intended it as a way to teach young children sophisticated math concepts, but our educational system imposed a very different definition and purpose on it. Just because something is created on a computer with the intent of it being used for a specific purpose doesn’t mean that others can’t use it for completely different, and possibly less valuable, purposes. We’ve seen this a lot with computers over the years; people “misuse” them for both constructive and destructive ends.
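
For readers who have never seen Logo, Python happens to ship with a turtle module modeled directly on it, so a rough taste of the experience looks like this (running it opens a drawing window on a desktop Python install):

    # A taste of Logo-style turtle graphics, using Python's built-in turtle module
    # (which is directly modeled on Logo's turtle).
    import turtle

    t = turtle.Turtle()
    for _ in range(4):      # four sides of a square
        t.forward(100)      # move 100 units, dragging the pen and leaving a trail
        t.right(90)         # rotate 90 degrees clockwise

    turtle.done()           # keep the window open until the user closes it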

As I go forward with this, I just want to put out a disclaimer that I don’t have answers to the problems I point out here. I point them out to make people aware of them, to get people to pause in the pursuit of putting people through this again, and to point to some people who are working on trying to find answers. I present some of their learned opinions. I encourage interested readers to read up on what these people have had to say about the use of computers in education, and perhaps contact them to learn more about what they’ve found out.

I ask the reader to pay particular attention to the “benefits” that educators imposed on the idea of programming during this period that Maxwell talks about, via what Papert called “technocentrism.” You hear this being echoed in the videos above. As you go through this, I also want you to notice that Papert, and another educator by the name of Alan Kay, who have thought a lot about what computers represent, have a very different idea about the importance of computers and programming than is typical in our school system, and in the computer industry.

The spark that started Logo’s rise in the educational establishment was the publication of Papert’s book, “Mindstorms: Children, Computers, and Powerful Ideas” in 1980. Through the process of Logo’s promotion…

Logo became in the marketplace (in the broad sense of the word) [a] particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes “computer literacy.” These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the “thinking skills” strategy was never part of Papert’s vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon.

It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm, there would come a backlash. It was also perhaps inevitable given the weight that was put on it: Logo had come, within educational circles, to represent computer programming in the large, despite Papert’s frequent and eloquent statements about Logo’s role as an epistemological resource for thinking about mathematics. [my emphasis — Mark] In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo “didn’t work as promised,” as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history. Papert indeed found himself searching for higher ground, as he accused Logo’s growing numbers of critics of technocentrism:

“Egocentrism for Piaget does not mean ‘selfishness’—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like ‘What is THE effect of THE computer on cognitive development?’ or ‘Does Logo work?’ … such turns of phrase often betray a tendency to think of ‘computers’ and ‘Logo’ as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, faciltiating role. The context for human development is always a culture, never an isolated technology.”

But by 1990, the damage was done: Logo’s image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named “Logo-as-Latin” a past paradigm of educational computing. The blunt idea that “programming” was an activity which could lead to “higher order thinking skills” (or not, as it were) had obviated Papert’s rich and subtle vision of an ego-syntonic mathematics.

By the early 1990s … Logo—and with it, programming—had faded.

The message–or black box–resulting from the rise and fall of Logo seems to have been the notion that “programming” is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. (pp. 183-185)

To be clear, the last part of the quote refers only to the educational value placed on programming by our school system. When educators attempted to formally study and evaluate programming’s benefits on higher-order thinking and the like, they found it wanting, and so most schools gradually dropped teaching programming in the 1990s.

Maxwell addresses the conundrum of computing and programming in schools, and I think what he says is important to consider as people try to “reboot” programming in education:

[The] critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage with the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary ‘humanist’ stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of “the technology must be in the service of the educational objectives and not the other way around.” The determinist critique, in turn, says, ‘the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life’ (for variations, see, Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

“Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become ‘comfortable’ using it. […] We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things–writing and speech, images and sound–every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity?”

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between ‘expert’ and ‘end-user;’ for the ‘end-user’ is profoundly she who is unchanged, uninitiated, unempowered.

A seemingly endless literature describes study after study, project after project, trying to identify what really ‘works’ or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time.

But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don’t amount to much. What is really wrong with computers in education is that for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had are empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things.

But clearly, it would be inane to try to claim that computing technology–one of the most influential defining forces in Western culture of our day, and which shows no signs of slowing down–has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don’t really understand how. And so what shall we do, right now?

It is not that there haven’t been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology “paradigm” as Timothy Koschmann put it. (pp. 18-19)

The answer is not a “reboot” of programming, but rather a rethinking of it. Maxwell makes a humble suggestion: that educators stop being so blinded by “the shiny new thing,” or some so-called “new” idea, that they lose their ability to think clearly about what’s being done with computers in education, and that they deal with history and historicism. He said that the technology field has had a problem with its own history, and this tends to bleed over into how educators regard it. The tendency is to forget the past, and to downplay it (“That was neat then, but it’s irrelevant now”).

In my experience, people have associated technology’s past with memories of using it. They’ve given little if any thought to what it represented. They take for granted what it enabled them to do, and do not consider what that meant. Maxwell said that this…

…makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics. We are faced with a tangle of hobbles–instrumentalism, ahistoricism, fear of transformation, Snow’s “two cultures,” and a consumerist subjectivity.

An examination of the history of educational technology–and educational computing in particular–reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert’s philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. (p. 20)

Maxwell went into what Alan Kay thought about the subject. Kay has spent almost as many years as Papert working on a meaningful context for computing and programming within education. Some of the quotes Maxwell uses are from “The Early History of Smalltalk” (h/t Bill Kerr), which I’ll also refer to. The other sources for Kay’s quotes are included in Maxwell’s bibliography:

What is Literacy?

“The music is not in the piano.” — Alan Kay

The past three or four decades are littered with attempts to define “computer literacy” or something like it. I think that, in the best cases, at least, most of these have been attempts to establish some sort of conceptual clarity on what is good and worthwhile about computing. But none of them have won large numbers of supporters across the board.

Kay’s appeal to the historical evolution of what literacy has meant over the past few hundred years is, I think, a much more fruitful framing. His argument is thus not for computer literacy per se, but for systems literacy, of which computing is a key part.

That this is a massive undertaking is clear … and the size of the challenge is not lost on Kay. Reflecting on the difficulties they faced in trying to teach programming to children at PARC in the 1970s, he wrote that:

“The connection to literacy was painfully clear. It is not just enough to learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate the mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas.”

Because literature is about ideas, Kay connects the notion of literacy firmly to literature:

“What is literature about? Literature is a conversation in writing about important ideas. That’s why Euclid’s Elements and Newton’s Principia Mathematica are as much a part of the Western world’s tradition of great books as Plato’s Dialogues. But somehow we’ve come to think of science and mathematics as being apart from literature.”

There are echoes here of Papert’s lament about mathophobia, not fear of math, but the fear of learning that underlies C.P. Snow’s “two cultures,” and which surely underlies our society’s love-hate relationship with computing. Kay’s warning that too few of us are truly fluent with the ways of thinking that have shaped the modern world finds an anchor here. How is it that Euclid and Newton, to take Kay’s favourite examples, are not part of the canon, unless one’s very particular scholarly path leads there? We might argue that we all inherit Euclid’s and Newton’s ideas, but in distilled form. But this misses something important … Kay makes this point with respect to Papert’s experiences with Logo in classrooms:

“Despite many compelling presentations and demonstrations of Logo, elementary school teachers had little or no idea what calculus was or how to go about teaching real mathematics to children in a way that illuminates how we think about mathematics and how mathematics relates to the real world.” (Maxwell, pp. 135-137)

Just a note of clarification: I refer back to what Maxwell said re. Logo and mathematics. Papert did not use his language to teach programming as an end in itself. His goal was to use a computer to teach mathematics to children. Programming with Logo was the means for doing it. This is an important concept to keep in mind as one considers what role computer programming plays in education.

The problem, in Kay’s portrayal, isn’t “computer literacy,” it’s a larger one of familiarity and fluency with the deeper intellectual content; not just that which is specific to math and science curriculum. Kay’s diagnosis runs very close to Neil Postman’s critiques of television and mass media … that we as a society have become incapable of dealing with complex issues.

“Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in [Thomas] Paine’s Common Sense and being able (and happy) to fluently write a critique or defense of it.” (Maxwell, p. 137)

Extending this quote (from “The Early History of Smalltalk”), Kay went on to say:

Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

At the liberal arts level we would expect that connections between each of the fluencies would form truly powerful metaphors for considering ideas in the light of others.

Continuing with Maxwell (and Kay):

“Many adults, especially politicians, have no sense of exponential progressions such as population growth, epidemics like AIDS, or even compound interest on their credit cards. In contrast, a 12-year-old child in a few lines of Logo […] can easily describe and graphically simulate the interaction of any number of bodies, or create and experience first-hand the swift exponential progressions of an epidemic. Speculations about weighty matters that would ordinarily be consigned to common sense (the worst of all reasoning methods), can now be tried out with a modest amount of effort.”

Surely this is far-fetched; but why does this seem so beyond our reach? Is this not precisely the point of traditional science education? We have enough trouble coping with arguments presented in print, let alone simulations and modeling. Postman’s argument implicates television, but television is not a techno-deterministic anomaly within an otherwise sensible cultural milieu; rather it is a manifestation of a larger pattern. What is wrong here has as much to do with our relationship with print and other media as it does with television. Kay noted that “In America, printing has failed as a carrier of important ideas for most Americans.” To think of computers and new media as extensions of print media is a dangerous intellectual move to make; books, for all their obvious virtues (stability, economy, simplicity) make a real difference in the lives of only a small number of individuals, even in the Western world. Kay put it eloquently thus: “The computer really is the next great thing after the book. But as was also true with the book, most [people] are being left behind.” This is a sobering thought for those who advocate public access to digital resources and lament a “digital divide” along traditional socioeconomic lines. Kay notes,

“As my wife once remarked to Vice President Al Gore, the ‘haves and have-nots’ of the future will not be caused so much by being connected or not to the Internet, since most important content is already available in public libraries, free and open to all. The real haves and have-nots are those who have or have not acquired the discernment to search for and make use of high content wherever it may be found.” (Maxwell, pp. 138-139)

I’m still trying to understand for myself what exactly Alan Kay means by “literature” in the realm of computing. He said it is a means for discussing important ideas, but in the context of computing, which ideas? I suspect from what’s been said here that he’s talking about what I’d call “model content”: thought forms, such as the idea of an exponential progression, or the concepts of velocity and acceleration, which have been fashioned in science and mathematics to describe ideas and phenomena. “Literature,” as he defined it, is a means of discussing these thought forms, these important ideas, in some meaningful context.
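To make the exponential progression example concrete, here is a minimal sketch of the kind of “few lines” Kay describes. It’s written in Python rather than Logo, purely as an illustration of my own; the growth rate and the numbers are made up, and none of this code comes from Kay or Maxwell.

# A toy model of exponential spread: each day, the number of infected
# people grows by a fixed percentage. The values are hypothetical;
# the point is the shape of the curve, not the numbers.
infected = 1.0
growth_rate = 0.2   # 20% growth per day (an assumption for illustration)

for day in range(1, 31):
    infected = infected * (1 + growth_rate)
    print(f"day {day:2d}: about {infected:9.1f} infected")

Run it and the count multiplies by roughly 237 over 30 days; extend the loop to 60 days and the factor is over 56,000. That is the kind of runaway behavior a citizen fluent in this medium is supposed to recognize instantly, and because it is a model rather than an essay, you can change the growth rate or add a recovery term and watch the consequences for yourself.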

In prior years Kay had worked on this kind of thing in his Squeak environment, together with some educators. They would show children a car moving across the screen, dropping dots as it went, illustrating velocity, and then, by modifying the model, acceleration. Then they would reproduce Galileo’s experiment, dropping heavy and light balls from the roof of a building (real balls from a real building), recording the drop, letting the children view the video, and having them simultaneously model it in a program, discovering that the same principle of acceleration applied there as well. They could thus see in a couple of contexts how the principle worked, how to recognize it, and how it related to the real world. The idea was that they could grasp the concepts that make up the idea of acceleration, and then integrate them into their thinking about other important matters they would encounter in the future.
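As a rough sketch of what that kind of model looks like in code (again my own illustration in Python, not the actual Squeak scripts the children used), constant acceleration can be modeled by repeatedly adding acceleration to velocity, and velocity to position, which is essentially what the dot-dropping car makes visible:

# Accumulator model of motion: each tick, acceleration changes the
# velocity, and velocity changes the position. With acceleration set
# to 0 the positions are evenly spaced (constant velocity); with a
# nonzero acceleration the spacing between successive positions grows.
position = 0.0
velocity = 0.0
acceleration = 9.8   # m/s^2, roughly Earth's gravity

for tick in range(10):           # ten one-second steps
    velocity += acceleration     # velocity accumulates acceleration
    position += velocity         # position accumulates velocity
    print(f"t={tick + 1:2d}s  velocity={velocity:5.1f} m/s  position={position:6.1f} m")

The growing gaps between successive positions are what the children were meant to recognize in both the simulated car and the video of the real falling ball.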

Maxwell quoted the author Andrea diSessa to get deeper into the concept of literacy, specifically what literacy in a given medium offers our understanding of issues:

The hidden metaphor behind transparency–that seeing is understanding–is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don’t present an unadulterated “picture” of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than “seeing and directly manipulating.”

What’s a good goal for computing?

The temptation in teaching and learning programming is to get students familiar enough with the concepts and a language that they can start creating things with it. But create what? The typical cases are to allow students to tinker, and/or to create applications which gradually become more complex and feature-rich, with the idea of building confidence and competence with increasing complexity. The latter is not a bad idea in itself, but listening to Alan Kay has led me to believe that starting off with this is the equivalent of jumping to a conclusion too quickly, and that it misses the point of what’s powerful about computers and programming.

I like what Kay said in “The Early History of Smalltalk” about this:

A twentieth century problem is that technology has become too “easy.” When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

Edit 4-5-2013: I thought I should point out that this quote has some nuance to it that people might miss. I don’t believe Kay is saying that “programming should be hard.” Quite the contrary. One can observe from his designs that he’s advocated the opposite. That’s not to say technology should simply mold itself to whatever feels “natural” for humans. It might require some training and practice, but once mastered, it should magnify or enhance human capabilities, thereby making previously difficult or tedious tasks easier to accomplish and incorporate into a larger goal.

Kay was making an observation about technology’s historical relationship to society: when useful technology was hard to build, the people who built it generally took enough time and care to make it well. What he’s pointing out is that people generally take the presence of technology as an excuse to use it as a crutch, in this case to make immediate use of it towards some other goal that has little to do with what the technology represents, rather than as an invitation to revisit it, criticize its design, and try to make it better. This is an easy sell, because everyone likes something that makes their lives easier (or seems to), but we rob ourselves of something important in the process if that becomes the only end goal. What I see him proposing is that people with some skill should impose a high standard for design on themselves, drawing inspiration for that standard from how the best art humanity has produced was developed and nurtured, but guard against the sense of feeling small, inadequate, and overwhelmed by the challenge.

Maxwell (and Kay) explain further why this idea of “literacy” as being able to understand and communicate important ideas, which includes ideas about complexity, is something worth pursuing:

“If we look back over the last 400 years to ponder what ideas have caused the greatest changes in human society and have ushered in our modern era of democracy, science, technology and health care, it may come as a bit of a shock to realize that none of these is in story form! Newton’s treatise on the laws of motion, the force of gravity, and the behavior of the planets is set up as a sequence of arguments that imitate Euclid’s books on geometry.”

The most important ideas in modern Western culture in the past few hundred years, Kay claims, are the ones driven by argumentation, by chains of logical assertions that have not been and cannot be straightforwardly represented in narrative. …

But more recent still are forms of argumentation that defy linear representation at all: ‘complex’ systems, dynamic models, ecological relationships of interacting parts. These can be hinted at with logical or mathematical representations, but in order to flesh them out effectively, they need to be dynamically modeled. This kind of modeling is in many cases only possible once we have computational systems at our disposal, and in fact with the advent of computational media, complex systems modeling has been an area of growing research, precisely because it allows for the representation (and thus conception) of knowledge beyond what was previously possible. In her discussion of the “regime of computation” inherent in the work of thinkers like Stephen Wolfram, Edward Fredkin, and Harold Morowitz, N. Katherine Hayles explains:

“Whatever their limitations, these researchers fully understand that linear causal explanations are limited in scope and that multicausal complex systems require other modes of modeling and explanation. This seems to me a seminal insight that, despite three decades of work in chaos theory, complex systems, and simulation modeling, remains underappreciated and undertheorized in the physical sciences, and even more so in the social sciences and humanities.”

Kay’s lament too is that though these non-narrative forms of communication and understanding–both in the linear and complex varieties–are key to our modern world, a tiny fraction of people in Western society are actually fluent in them.

“In order to be completely enfranchised in the 21st century, it will be very important for children to become fluent in all three of the central forms of thinking that are now in use. […] the question is: How can we get children to explore ways of thinking beyond the one they’re ‘wired for’ (storytelling) and venture out into intellectual territory that needs to be discovered anew by every thinking person: logic and systems ‘eco-logic?'” …

In this we get Kay’s argument for ‘what computers are good for’ … It does not contradict Papert’s vision of children’s access to mathematical thinking; rather, it generalizes the principle, by applying Kay’s vision of the computer as medium, and even metamedium, capable of “simulating the details of any descriptive model.” The computer was already revolutionizing how science is done, but not general ways of thinking. Kay saw this as the promise of personal computing, with millions of users and millions of machines.

“The thing that jumped into my head was that simulation would be the basis for this new argument. […] If you’re going to talk about something really complex, a simulation is a more effective way of making your claim than, say, just a mathematical equation. If, for example, you’re talking about an epidemic, you can make claims in an essay, and you can put mathematical equations in there. Still, it is really difficult for your reader to understand what you’re actually talking about and to work out the ramifications. But it is very different if you can supply a model of your claim in the form of a working simulation, something that can be examined, and also can be changed.”

The computer is thus to be seen as a modeling tool. The models might be relatively mundane–our familiar word processors and painting programs define one end of the scale–or they might be considerably more complex. [my emphasis — Mark] It is important to keep in mind that this conception of computing is in the first instance personal–“personal dynamic media”–so that the ideal isn’t simulation and modeling on some institutional or centralized basis, but rather the kind of thing that individuals would engage in, in the same way in which individuals read and write for their own edification and practical reasons. This is what defines Kay’s vision of a literacy that encompasses logic and systems thinking as well as narrative.

And, as with Papert’s enactive mathematics, this vision seeks to make the understanding of complex systems something to which young children could realistically aspire, or that school curricula could incorporate. Note how different this is from having a ‘computer-science’ or an ‘information technology’ curriculum; what Kay is describing is more like a systems-science curriculum that happens to use computers as core tools:

“So, I think giving children a way of attacking complexity, even though for them complexity may be having a hundred simultaneously executing objects–which I think is enough complexity for anybody–gets them into that space in thinking about things that I think is more interesting than just simple input/output mechanisms.” (Maxwell, pp. 132-135)

I wanted to highlight the part about “word processors” and “paint programs,” because the idea being discussed is not limited to simulating real-world phenomena. It could be applied to simulating “artificial phenomena” as well. It’s a different way of looking at what you are doing and creating when you are programming. It takes you away from asking, “How do I get this thing to do what I want?” and redirects you to, “What entities do we want to make up this desired system, what are they like, and how can they interact to create something that we can recognize, or that otherwise leverages human capabilities?”

Maxwell said that computer science is not the important thing. Rather, what’s important is what computer science makes possible: “the study and engagement with complex or dynamic systems–and it is this latter issue which is of key importance to education.” Think about this in relation to what we do with reading and writing. We don’t learn to read and write just to be able to put characters in some sequence, and then have others read what we’ve written. We have events and ideas (and, further afield from this subject, emotions and poetry) that we write about. That’s why we learn to read and write. It’s the same with computer science. If we as a society value it for communicating ideas, it’s pretty worthless if it’s just about learning to read and write code. To make the practice truly valuable to society, we need to have content, ideas, to read and write about in code. There’s a lot that can be explored with that idea in mind.

Characterizing Alan Kay’s vision for personal computing, Maxwell talked about Kay’s concept of the Dynabook:

Alan Kay’s key insight in the late 1960s was that computing would become the practice of millions of people, and that they would engage with computing to perform myriad tasks; the role of software would be to provide a flexible medium with which people could approach those myriad tasks. … [The] Dynabook’s user is an engaged participant rather than a passive, spectatorial consumer—the Dynabook’s user was supposed to be the creator of her own tools, a smarter, more capable user than the market discourse of the personal computing industry seems capable of inscribing—or at least has so far, ever since the construction of the “end-user” as documented by Bardini & Horvath. (p. 218)

Kay’s contribution begins with the observation that digital computers provide the means for yet another, newer mode of expression: the simulation and modeling of complex systems. What discursive possibilities does this new modality open up, and for whom? Kay argues that this latter communications revolution should in the first place be in the hands of children. What we are left with is a sketch of a possible new literacy; not “computer literacy” as an alternative to book literacy, but systems literacy—the realm of powerful ideas in a world in which complex systems modelling is possible and indeed commonplace, even among children. Kay’s fundamental and sustained admonition is that this literacy is the task and responsibility of education in the 21st century. The Dynabook vision presents a particular conception of what such a literacy would look like—in a liberal, individualist, decentralized, and democratic key. (p. 262)

I would encourage interested readers to read Maxwell’s paper in full. He gives a rich description of the problem of computers in the educational context, giving a much more detailed history of it than I have here, and what the best minds on the subject have tried to do to improve the situation.

The main point I want to get across is this: if we as a society really want to get the greatest impact out of what computers can do for us, beyond just being tools that do canned but useful things, I implore educators to see computers and programming environments more as apparatus, instruments, and media, rather than as agents and tools. (By media I mean the computers and programming environments themselves, not what’s “played” on computers; and the languages and metaphors, which are the media’s means of expression, not just a means to some non-expressive end.) Sure, there will be room for them to function as agents and tools, but the main focus I see as important in this subject area is how the machine helps facilitate substantial pedagogies and illuminates epistemological concepts that would otherwise be difficult or impossible to communicate.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I was excited to see this article yesterday on a long-planned experiment conducted in space, called “Gravity Probe B”. According to the article, the results confirm predictions made by Einstein’s theory of General Relativity. The experiment made extremely precise measurements of the Earth’s gravity well. The results comport with the idea that space-time is warped by the mass of the Earth. They also match a prediction made by Einstein’s theory that the Earth’s spin produces a swirling gravity well, with a shape analogous to that of a whirlpool drawing water into itself, in this case drawing space into itself. The space-time distortion caused by the Earth appears to move as a result of Earth’s rotation, though the two rates of spin are not anywhere close to each other; the gravity well spins much more slowly. What has interested the scientists on this project is that this effect helps explain the jets of charged particles generated by massive black holes in space, since the whirling distortions around those black holes are probably much more extreme, spinning much faster than the one here on Earth.

If you’re interested in getting into the history and the details of the theory that inspired this experiment, click on the first graphic embedded in the linked article. This will take you to another page, which also includes some video clips that provide easy-to-understand illustrations of what was observed. I particularly liked Kip Thorne’s “missing inch” model demonstration. He provided an easy-to-understand 2D model, which he then transformed into a 3D model, showing how they were able to use Gravity Probe B’s instruments to demonstrate that the space around Earth is indeed warped.

Read Full Post »

This is from Bryan Magee’s 15-part series The Great Philosophers, broadcast on BBC2 in 1987. Here he talks with Hilary Putnam about the philosophy of science.

Alan Kay has talked about how most people don’t understand how science worked in the 20th century, much less the present century. Our schools still teach science in a 19th century fashion in terms of how people can approach knowledge. This episode of the show amply demonstrates this disconnect.

As I studied what was said here about scientific thinking up until the late 19th century, it reminded me a lot of how I was taught computer science. It was thought to be a process of adding on to existing knowledge, and sorting out any inconsistencies. With rare exceptions, alternatives to existing models were not discussed, or even made known to students.

I’m going to put the notes I took from this episode below each section of video, because I found it hard to follow the arguments without going through them slowly, and looking at what Magee and Putnam said explicitly.

Magee: The religious world view has been largely replaced by a world view purportedly derived from science, since the 17th century.

Science was seen as a process of “adding” and sorting for 300 years, up until the late 19th century. The view of knowledge was that it grew by accumulation. Science was seen as getting its success from an inductive method. For 300 years educated people thought of the Universe as just matter in motion. They thought if we went on long enough, we’d find out everything about the Universe there was to know. This whole idea has been abandoned by science, but non-scientists think that scientists still think this way.

Kant challenged the “correspondence” view of truth: that theories correspond directly to reality without nuances, that the world makes its truths apparent, and that scientists just find out what those truths are and write them down. Kant said there’s a contribution of the thinking mind. Truth depends on what exists, and on the mind of the observer. Scientists have come to a similar view, that theories are not merely dictated to us by the “facts.”

The categories and interpretations we use, the ideas within which we organize our observations, are our contributions. The world that’s perceived by us is partly contributed to by external effects, and is partly made up of categories and ways of seeing things that come from us. [Mark says: I think Plato’s notion of “forms” also applies here, in terms of our theories. We could equate a theory of a phenomenon to its form in our own minds, and we could recognize that if a phenomenon is unfamiliar enough, different people will create different forms of it in their own minds.]

In the late 19th century the views of scientists changed to realize that the old theories could be wrong. The more modern view is that not only is our view of reality partly mind-dependent, but there are alternatives, and the concepts we impose on the world may not be the right ones, we may have to change them, and there is an interaction between what we contribute and what we find out. It was realized that there were alternative descriptions of some things that were equally as valid.

The new conception of science is that theories are constantly being replaced by newer and better ones, which are richer, that explain phenomena more fully, and there’s a mystery to our universe which will never be completely discovered. The view about “truth” that Putnam said was coming into use more and more in science is the idea that there cannot be a separation between what’s considered true and what our standards of assertability are. So the way that mind-dependence comes in is the fact that what’s true and what’s false is partly a function of what our standards of truth and falsity are. That depends on our interests, which change over time. Putnam defined “interests” in a cultural context, such as, in our modern world we’d recognize that there’s a policeman on the corner. Someone from a tribal culture, which may have no formal social services, wouldn’t recognize a policeman, but would instead see “someone in blue” on the corner.

Even facts within theories can have alternatives, as was evidenced by the theory of relativity.

A scientific theory can be useful even if nobody really understands what it means (quantum mechanics). [Mark says: You could also say the same about Newton’s theory of gravity.]

Putnam thinks the way in which the old scientific view (pre-19th century) was destructive was that since it saw scientific findings as objective facts which were accumulated over time, then everything else was considered non-knowledge, that couldn’t even be considered true or false.

Real interesting! Putnam and Magee talk about how computer science, through an interaction with computer simulations, has created a growth of knowledge about the human mind. This relates to what I’ve read about Licklider’s use of computers at MIT in the 1950s, in Waldrop’s book, “The Dream Machine.”

Edit 5-4-2011: I had a brief conversation with Alan Kay about the main theme of this program, and I put particular focus on this part, where Magee and Putnam discussed the role that computer science has played in the advancement of knowledge in other areas of research. He zeroed in on this part, saying that it was a misperception of what really happened, at least in relation to Licklider (and Engelbart). He said the advances in knowledge from computer science have been paradigm shifts, not advancements by interaction. He pointed out that, contrary to what Magee asserted, the theory of evolution did not begin in the field of biology. He also said that computers did not begin as “a self-conscious analogy to the human mind,” but rather as machines to run calculations. I knew that. I thought Magee’s description of this was rather romantic, and other historical accounts have agreed with his assessment, so I let it slide, but I have since had second thoughts about that.

Magee: Most people with college degrees have no idea what Einstein’s theory of relativity is all about, more than 70 years after it was published. It’s done very little to influence their view of the world. Isn’t there a danger that science and mathematics are racing ahead, and the whole range of insight that that’s giving us into the Universe simply isn’t filtering through to the layman?

Putnam: There was a text on Special Relativity called “Space-Time Physics” that was designed for the first month of the first college physics course, and the authors hoped that someday it would be taught in high schools.

The question and answer above really struck a chord with me. First of all, I didn’t encounter Einstein’s theory of relativity in my first semester physics course in college. All we covered, that I remember, was Newtonian mechanics, though at a more detailed and advanced level than what we got in high school.

The discussion that Magee and Putnam had about General Relativity, and the risk we take with science “racing past” the rest of society, I think, makes a good case for Alan Kay’s efforts to teach more advanced math concepts to children, because without that, they’re never going to understand it. To put this in perspective, take a look at General Relativity, and think about the understanding of mathematics that would be required to understand it.

Strangely enough, I got more exposure to Einstein’s theories of relativity when I was in Jr. high school (1982-’85) than I got at any other time. We watched parts of Carl Sagan’s Cosmos series. In it Sagan talked about Einstein’s theories of relativity, and included Einstein’s notion of gravity as warped space-time. Farther back than that, I had been fascinated by the idea of black holes, since I was in 4th grade. I read all I could on them, and I learned some very strange things: that not even light could escape from them, and that matter was destroyed upon entering them, for all intents and purposes, though I think the evidence still supports the idea of conservation of matter. It’s just that the matter gets separated into its component parts, and some of the matter is converted into energy.

I got exposed a bit to a theory of gravity that was different from Einstein’s or Newton’s outside of school, by a toy inventor, of all people. So I knew there were other theories besides those that were commonly accepted. What became clear to me after exposure to these ideas was that gravity is a fascinating mystery. We didn’t know what caused it then, and we don’t know now, not really, except for mass. The question that would not go away for me was what about mass causes it? The idea that mass causes it just because it exists never satisfied me. The other forces were explained through phenomena I could relate to, but gravity was different.

(Update 12-15-2013: I’ve updated the paragraph below with what I think is a more accurate description of the theory, and I’ve added a couple paragraphs to further explain. I was mistaken in saying that astrophysicists think that the distortion in space-time causes us to “slide” in towards the gravity well.)

What Einstein’s theory explains is that gravity is not a force in the way that we think about other forces. His theory said that matter warps space-time, and it is this warping which creates the perception of a force acting on objects and energy. The way Sagan explained it is that we are all being pushed inward towards the center of the gravity well by this distortion, and what keeps us from going to the center of the well is the outward force exerted by the earth we stand on. Scientists have tried to explain it using an analogy of a large ball sitting on a piece of taut fabric, which creates a dip in it. If you toss a marble onto the fabric, it will fall towards the larger ball, due to the anomaly in the fabric. This is not a great analogy, because in it, gravity is pulling the larger ball down, creating the dip in the fabric. If you can disregard that fact, what they are trying to get across is that it’s the distortion of the fabric that alters the path of the marble, not any force. Another way this analogy is imperfect is that it doesn’t illustrate how space-time is being drawn inward by something that is not yet explained. What Einstein’s theory says is that there is something about matter that causes this distortion in space-time. That could be attributed to a force, but from everything I’ve studied about the theory so far, Einstein didn’t say that. That’s left as “a problem for the reader,” so to speak.

The thing about the theory that might seem confusing is that while we are “pushed” inward towards the well, it’s not a push in the Newtonian sense. This “push” is coming from what is called “space-time” itself. It’s a push on everything that constitutes matter and energy, down to our body’s subatomic particles, and photons of light.

Looking at how it is we stand on earth, rather than all matter being drawn into the well, it’s rather like standing on stretchy material that’s constantly being drawn into something, with a “wall” in our way (which does not “catch” the fabric). The closer the material gets to whatever is pulling on it, the more stretched it becomes. As it becomes stretched, so are we, and all the matter around us, because, as we understand, matter is “situated” in space-time. There may be some “friction,” if you will, between us and the material (space-time), that causes us to be drawn in with it, but it’s not total, allowing this “material” to slide past us, while we are repelled outward by the “wall” (forces in the matter we stand on). Hopefully my attempt to explain this isn’t too confusing. I’m groping at understanding it myself.

An example of the way school gets science wrong

High school physics was odd to me, because we talked about things like how electrons have the properties of both a particle and a wave (as Putnam discussed above), a very interesting idea, but when it came to gravity, all we focused on was Galileo’s and Newton’s notions of it, particularly Newton’s Universal Law of Gravitation. One of the things I remember being emphasized was that “this law is the same throughout the Universe.” For the sake of argument, from our perspective, being on Earth, I was willing to accept that claim, but I remember being a bit skeptical that it was really true. I knew that we hadn’t really explored the whole universe, and that we hadn’t even come close to testing this notion everywhere. I was open to the idea that someday we might find an exception to it if we were ever able to explore the whole Universe, which is something that may never happen (I didn’t even know about Mercury’s orbit, which doesn’t fit Newton’s theory as well as the orbits of the other planets do). So, for practical purposes, we could assume that it’s “universal.”

We talked about Einstein’s theory of Special Relativity, and what was really fascinating about that was E = mc², that there’s a relationship between matter and energy that’s only “separated” by the square of the speed of light. My memory, though, is we hardly talked about General Relativity at all, except perhaps in a historical context. Looking back on this, it’s rather obvious to see why. In high school science we were expected to get a little more into the details of scientific theories, and work with the math concepts more. The school system hadn’t prepared us to work with the notion of General Relativity at that level. So in effect we skipped it, but the conceit that was presented in class was that Newton’s law of gravity was “the truth.”

I remember being asked a question on a physics test that asked, “If aliens visited Earth, would we find that they have the same knowledge of gravity as we do?” I paused. This was an interesting question to me, because I thought it was asking me to consider what understanding another race of intelligent beings would have about this phenomenon. I asked myself, if space aliens existed that were intelligent enough to build craft for interstellar travel, would they have the same ideas about gravity as we do? I answered, “Maybe.” I added something about how since the aliens had managed to make the journey from whatever star system they came from, that their technology was probably more advanced than ours (I mean, we haven’t tried this yet, so that was a good guess), and maybe they had a better understanding of gravity than we did, particularly what caused it. I hedged a bit, but I guessed that there was probably a link between technological development and greater scientific understanding of our universe. Granted, this was a totally speculative answer, but it was a speculative question, as far as I was concerned.

My physics teacher marked this answer wrong. I was floored! I wondered, “What did she expect?” I asked her about it after class, and she said the answer she expected was something along the lines of, “Yes, because the Law of Gravity is universal.” I was so disappointed (in her). It immediately hit me that, “Oh, yeah. I remember we talked about that.” I could’ve almost kicked myself for thinking that she had asked a thought-provoking question, and falling for it! I was supposed to remember to recall what we had talked about in class. I wasn’t supposed to think on it! Duh! How could I have been so stupid? That’s really how perverse and offensive this was. It brings to mind the fictional short story of “Harrison Bergeron,” now that I think about it… However, trying not to see that she was telling me not to think, I tried to talk her through my reasoning, because I thought I gave a legitimate answer. I told her about the other notions of gravity I knew about, and the questions they raised for me. She wouldn’t hear of it. I think I said in a final protest, “Do you really think we’ve discovered everything there is to know about gravity?!” In any case, she didn’t answer me. I walked out of the classroom exasperated. It was one of the most disillusioning experiences of my life. It made me fume!

Looking back on things like this, she probably didn’t even understand what a good question she had asked. Beyond that, there were other instances where this happened in my schooling. Sometimes I wanted to think through things and come up with original answers, not merely regurgitate what I had been fed, and I got penalized for it. She and I had not been getting along for most of the time I was in her class, and I think it was over issues like this. So this was nothing new, but this incident revealed a disturbing fact to me in a way that was so obvious, I couldn’t just brush it off as a misunderstanding between us: her approach to science was that we were supposed to accept what she said as truth. We were not supposed to think about it, or question it. The only thinking we were supposed to do was in calculating results from experiments, but a lot of that was applying the “correct” formulas. More memorization. Nevertheless, I got an “A” in her class.

Looking at this from a “mountaintop” view, I think this example shows the split between 20th century scientific thinking and the 19th century thinking that’s been used to teach science in schools. I saw a discussion recently where Alan Kay talked about this in relation to the No Child Left Behind policy: students who were developing an understanding of scientific thinking had to, on the one hand, gain real understanding, and on the other, remember to answer “wrongly” on the test. That summed up the experience I describe above! I’ve used my example sometimes when I’ve heard people complain about this, because I can say to them that I got the same treatment in my high school science classes, more than 20 years ago. As far as I’m concerned, this policy is just taking that idea of instruction, which has been around for years, to its logical conclusion. It’s now metastasized throughout the public education system, at least in the areas that are tested for proficiency, whereas in my day there were exceptions.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

“Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. … The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
— from President Dwight Eisenhower’s farewell address, Jan. 17, 1961

Before I begin, I’m going to recommend right off a paper called “Climate science: Is it currently designed to answer questions?”, by Dr. Richard Lindzen, as an accompaniment to this post. It really lays out a history of what’s happened to climate science, and a bit of what’s happened to science generally, during the post-WW II period. I was surprised that some of what he said was relevant to explaining what happened to ARPA computer science research as it entered the decade of the 1970s, and thereafter, though he doesn’t specifically talk about it. The footnotes make interesting reading.

The issue of political influence in science has been around for a long time. Several presidential administrations in the past have been accused of distorting science to fit their predilections. I remember years ago, possibly during the Clinton Administration, hearing about a neuroscientist who was running into resistance for trying to study the differences between male and female brains. Feminists objected, because it’s their belief that there are no significant differences between men and women. President George W. Bush’s administration was accused of blocking stem cell research for religious reasons, and of altering the reports of government scientists, particularly on the issue of global warming. When funding for narrow research areas is blocked, it doesn’t bother me so much. There are private organizations that fund science. What irks me more is when the government, or any organization, alters the reports of its scientists. What’s bothered me much more is when scientists have chosen to distort the scientific process for an agenda.

One example of this was in 2001 when the public learned that a few activist scientists had planted lynx fur on rubbing sticks that were set out by surveyors of lynx habitat. The method was to set out these sticks, and lynx would come along and rub them, leaving behind a little fur, thereby revealing where their habitat was. The intent was to determine where it would be safe to allow development in or near wilderness areas so as to not intrude on this habitat. A few scientists who were either involved in the survey, or knew of it, decided to skew the results in order to try to prevent development in the area altogether. This was caught, but it shows that not all scientists want the evidence to lead them to conclusions.

The most egregious example of the confluence of politics and science that I’ve found to date, and I will be making it the “poster child” of my concern about this, is the issue of catastrophic human-caused global warming in the climate science community. I will use the term anthropogenic global warming, or “AGW” for short. I’m not going to give a complete exposition of the case for or against this theory. I leave it to the reader to do their own research on the science, though I will provide some guidance that I consider helpful. This post is going to assume that you’re already familiar with the science and some of the “atmospherics” that have been occurring around it. The purpose of this post is to illustrate corruption in the scientific process, its consequences, and how our own societal ignorance about science allows this to happen.

There is legitimate climate research going on. I don’t want to besmirch the entire field. There is, however, a significant issue in the field that is not being dealt with honestly, and it cannot be dealt with honestly until the influences of politics, and indeed of religion (a religious mindset), are acknowledged and dealt with, however unfortunate and distasteful that is. The issue I refer to is the corruption of science in order to promote non-scientific agendas.

I felt uncomfortable with the idea of writing this post, because I don’t like discussing science together with politics. The two should not mix to this degree. I’d much prefer it if everyone in the climate research field respected the scientific method, and were about exploring what the natural world is really doing, and let the chips fall where they may. What prompted me to write this is I understand enough about the issue now to be able to speak somewhat authoritatively about it, and my conclusions have been corroborated by the presentations I’ve seen a few climate scientists give on the subject. I hate seeing science corrupted, and so I’ve felt a need to speak up about it.

I will quote from Carl Sagan’s book, The Demon-Haunted World, from time to time, referring to it as “TDHW,” to provide relevant descriptions of science to contrast against what’s happening in the field of climate science.

Scientists are insistent on testing . . . theories to the breaking point. They do not trust the intuitively obvious. The truth may be puzzling or counter-intuitive. It may contradict deeply held beliefs.

— TDHW

I think it is important to give some background on the issue as I talk about it. Otherwise I fear my attempt at using this as an example will be too esoteric for readers. There are two camps battling out this issue of the science of AGW. For the sake of description I’ll use the labels “warmist” and “skeptic” for them. They may seem inaccurate, given the nuances of the issue, but they’re the least offensive labels I could find in the dialogue.

The warmists claim that increasing carbon dioxide from human activities (factories, energy plants, and vehicles) is causing our climate to warm up at an alarming rate. If this is not curtailed, they predict that the earth’s climate and other earth systems will become inhospitable to life. They point to the rising levels of CO2, and various periods in the temperature record to make their point, usually the last 30 years. The predictions of doom resulting from AGW that they have communicated to the public are more based on conjecture and scenarios produced by computer models than anything else. This is the perspective that we all most often hear on the news.

The skeptics claim that climate has always changed on earth, naturally. It has never been constant, and the most recent period is no exception. They also say that while CO2 is a greenhouse gas, it is not that important, since the amount of it in the atmosphere is so small (it’s probably around 390 parts per million now), and secondly its impact is not linear. It’s logarithmic, so the more CO2 is added to the atmosphere, the less impact that addition has over what existed previously. From what I hear, even warmists agree on this point. Skeptics say that water vapor (H2O) is the most influential greenhouse gas. It is the most voluminous, from measurements that have been taken. Some challenge the idea that increased CO2 has caused the warming we’ve seen at all, whether it be from human or natural sources. Some say it probably has had some small influence, but it’s not big enough to matter, and that there must be other reasons not yet discovered for the warming that’s occurred. Others don’t care either way. They say that warming is good. It’s definitely better compared to dramatic cooling, as was seen in the Little Ice Age. Most say that the human contribution of CO2 is tiny compared to its natural sources. I haven’t seen any scientific validation of this claim yet, so I don’t put a lot of weight in it. As you’ll see, I don’t consider it that relevant, either.

In any case, they don’t see what the big deal is. They often point to geologic CO2 records and temperature proxies going back thousands of years to make their point, but they have some recent evidence on their side as well. They also use the geologic record and historical records to show that past warming periods (the Medieval Warm Period being the most recent–1,000 years ago) were not catastrophic, but in fact beneficial to humanity.
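As a side note on the “logarithmic” claim a couple of paragraphs up: a commonly cited simplified expression for the radiative forcing from added CO2 is ΔF ≈ 5.35 × ln(C/C0) watts per square meter, where C0 is a reference concentration. The little sketch below is my own illustration of that expression (it is not taken from either camp’s materials); it just shows what “diminishing impact per added increment” looks like.

import math

# Simplified expression often used for CO2 radiative forcing:
#   delta_F = 5.35 * ln(C / C0), in watts per square meter.
# C0 is a reference concentration; 280 ppm is the usual pre-industrial figure.
C0 = 280.0

for c in (280, 390, 560, 840, 1120):
    forcing = 5.35 * math.log(c / C0)
    print(f"{c:5d} ppm  ->  {forcing:4.2f} W/m^2 of forcing relative to {C0:.0f} ppm")

Going from 280 ppm to 560 ppm adds about 3.7 W/m^2 of forcing, and going from 560 ppm to 1120 ppm adds only about the same 3.7 W/m^2 again, not double. That diminishing return is all the “logarithmic” point asserts; how much warming a given amount of forcing translates into is a separate question, and one this little calculation says nothing about.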

Some sober climate scientists say that there is a human influence on local climate, and I find that plausible, just from my own experience of traveling through different landscapes. They say that the skylines of our cities alter airflow over large areas, and the steel, stone, and asphalt/cement we use all absorb and radiate heat. This can have an effect on regional weather patterns.

Not everyone involved in distributing this information to the public is a scientist. There are many people who have other duties, such as journalists, reviewers of scientific papers, and climate modelers, who may have some scientific knowledge, but do not participate in obtaining observational data from Nature, or analyzing it.

So what is the agenda of the warmists? Well, that’s a little hard to pin down, because there are many interests involved. It seems like the common agenda is more government control of energy use, a desire to make a major move to alternative energy sources, such as wind and solar (and maybe natural gas), and a desire to set up a transfer of payments system from the First World to the Third World, a.k.a. carbon trading, which as best I can tell has more to do with international politics than climate. The issue of population control seems to be deeply entwined in their agenda as well, though it’s rarely discussed. Giving it a broader view, the people who hold this view are critics of our civilization as it’s been built. They would like to see it reoriented towards one that they see would be more environmentally friendly, and more “socially just.”

The sense I get from listening to them is they believe that our society is destroying the earth, and as our sins against the environment build up, the earth will one day make our lives a living hell (an “apocalypse,” if you will). Some will not admit to this description, but will instead prefer a more technical explanation that still amounts to a faith-based argument. Michael Crichton said in 2003 that this belief seemed religious. Lately there’s some evidence he was right. A lot of the AGW arguments I hear sound like George Michael’s “Praying for Time” from the early 1990s.

Crichton made a well-reasoned argument that environmentalism as religion does not serve us well.

Let me be clear. I am not anti-religion in all aspects of life. My concern here is not that people have religious beliefs, of whatever kind. What concerns me is the attempt to use religious beliefs as justification for government policy. I understand that environmentalism is not officially recognized as a religion in the U.S….yet. We can, however, recognize something that “walks like a duck, quacks like a duck,” etc., for ourselves. I agree with Crichton. The consciousness that environmentalism provides, that we have a role to play in the development of the natural world, a responsibility to be good stewards, is good. However, it should not be a religion. In contrast to the more alarmist environmentalists who try to scare people with phantoms, there are sober environmentalists who act on real scientific findings rather than on a religious notion of how nature behaves, or would behave if we weren’t around to influence it. In my view those people should be supported.

To be fair, the skeptics have some political views of their own. Often they seem to have a politically conservative bent, with a belief in greater freedom and capitalism, though I think to a person they are environmentally conscious. The difference I’ve seen with them is they’re not coy, and are more willing to show what they’ve found in the evidence, and discuss it openly. They seem to act like scientists rather than proselytizers.

My experience with warmists is they want to control the message. They don’t want to discuss the scientific evidence. They seem to care more about whether people agree with them or not. The most I get out of them for “evidence” of AGW is anecdotes, even if their findings have been scientifically derived. I’m sure their findings are useful for something, but not for proving AGW. I’d be more willing to consider their arguments if they’d act like scientists. My low opinion of these people is driven not by the positions they take, but by how they behave.

We need science to be driven by the search for truth, and for that to happen we need people seeking evidence, being willing to share it openly, as well as their analysis, and allow it to be criticized and defended on its merits. Some climate scientists have been trying to do this. Some have been successful, but from what I’ve seen they represent only gradations of the “skeptic” position. Warmists have forfeited the debate by disclosing only as much information as they say supports their argument, restricting as much information as they can on areas that might be useful for disproving their argument (this gets to the issue of falsifiability, which is essential to science), and basically refusing to debate the data and the analysis, with a few rare exceptions.

The influence that warmists have had on culture, politics, and climate science has been tremendous. Skeptics have faced an uphill battle to be heard on the issue within their discipline since about the mid-1990s. Whole institutions have been set up under the assumption that AGW is catastrophic. Their mission is to fund research projects into the actual and possible effects of AGW, not its cause. Nevertheless, the people who work for these institutions, or are funded by them, are frequently cited as the “thousands of scientists around the world who have proved catastrophic AGW is real.” The trouble is that, from what I’ve heard, not much effort goes into investigating what’s causing global climate change, because the thinking is “everybody knows we’re the ones causing it.” That is the consensus view, but it’s not based on strong evidence that validates the proposition.

“Consensus” might as well be code in the scientific community for “belief in the absence of evidence,” also known as “faith,” because that’s what “consensus” tends to be. Unfortunately this happens in the scientific community in general from time to time. It’s not unique to climate science.

Science is far from a perfect instrument of knowledge. It’s just the best we have. In this respect, as in many others, it’s like democracy. Science by itself cannot advocate courses of human action, but it can certainly illuminate the possible consequences of alternative courses of action.

The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which best fit the facts. It urges on us a delicate balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous skeptical scrutiny of everything–new ideas and established wisdom. This kind of thinking is also an essential tool for a democracy in an age of change.

One of the reasons for its success is that science has built-in, error correcting machinery at its very heart. Some may consider this an overbroad characterization, but to me every time we exercise self-criticism, every time we test our ideas against the outside world, we are doing science. When we are self-indulgent and uncritical, when we confuse hopes and facts, we slide into pseudoscience and superstition.

— TDHW

In light of the issue I’m discussing I would revise that last sentence to say, “when we confuse hopes and fears with facts, we slide into pseudoscience and superstition.” Continuing…

Every time a scientific paper presents a bit of data, it’s accompanied by an error bar–a quiet but insistent reminder that no knowledge is complete or perfect. It’s a calibration of how much we trust what we think we know. … Except in pure mathematics, nothing is known for certain (although much is certainly false).

I thought I should elucidate the distinction that Sagan makes here between science and mathematics. Mathematics is a pure abstraction. I’ve heard those more familiar with mathematics than I am say that it’s the only thing we can really know. However, things that are true in mathematics are not necessarily true in the real world. Sometimes people confuse mathematics with science, particularly when objects from the real world are symbolically brought into formulas and equations. Scientists make a point of trying to avoid this confusion. Any mathematical formula created in scientific study, however much sense it seems to make, must be tested by experimentation on the actual object being studied, to see whether the formula is a good representation of reality. Mathematics is used in science as a way of modeling reality, but that does not make it a substitute for reality, only a means for understanding it better. Tested mathematical formulas create a mental scaffolding around which we can organize and make sense of our thoughts about reality. Once a model is validated by a lot of testing, it’s often used for prediction, though it’s essential to keep in mind the limitations of the model, as far as they are known. Sometimes a new limitation is discovered even when a well-established prediction is tested.
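
To illustrate what “testing a formula against reality” looks like in practice, here is a minimal sketch with made-up measurements (all of the numbers are hypothetical, chosen only for illustration): a simple model’s predictions are compared against observations, and the residuals are judged against the stated measurement uncertainty.

# A simple model to be tested: distance fallen under gravity, d = (1/2) * g * t^2.
def model_distance(t_seconds, g=9.81):
    return 0.5 * g * t_seconds ** 2

# Made-up "measurements" of (time in s, distance in m), with an
# estimated uncertainty (error bar) of +/- 0.05 m on each distance.
measurements = [(0.5, 1.19), (1.0, 4.95), (1.5, 11.02), (2.0, 19.72)]
uncertainty = 0.05

for t, observed in measurements:
    predicted = model_distance(t)
    residual = observed - predicted
    verdict = "within the error bar" if abs(residual) <= uncertainty else "disagrees with the model"
    print(f"t={t}s  predicted={predicted:.2f}m  observed={observed:.2f}m  residual={residual:+.2f}m  ({verdict})")

The particular numbers don’t matter; the habit does. A formula earns trust only by repeatedly surviving this kind of comparison, and the error bars are what tell you how much disagreement is tolerable before the formula has to be revised.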

Continuing with TDHW…

Moreover, scientists are usually careful to characterize the veridical status of their attempts to understand the world–ranging from conjectures and hypotheses, which are highly tentative, all the way up to laws of Nature which are repeatedly and systematically confirmed through many interrogations of how the world works. But even laws of Nature are not absolutely certain.

Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science–by far the most successful claim to knowledge accessible to humans–teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us.

We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.

The following paragraphs are of particular interest to what I will discuss next:

One of the great commandments of science is, “Mistrust arguments from authority.” (Scientists, being primates, and thus given to dominance hierarchies, of course do not always follow this commandment.) Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else. This independence of science, its occasional unwillingness to accept conventional wisdom, makes it dangerous to doctrines less self-critical, or with pretensions of certitude.

Because science carries us toward an understanding of how the world is, rather than how we would wish it to be, its findings may not in all cases be immediately comprehensible or satisfying. It may take a little work to restructure our mindsets. Some of science is very simple. When it gets complicated, that’s usually because the world is complicated, or because we’re complicated. When we shy away from it because it seems too difficult (or because we’ve been taught so poorly), we surrender the ability to take charge of our future. We are disenfranchised. Our self-confidence erodes.

But when we pass beyond the barrier, when the findings and methods of science get through to us, when we understand and put this knowledge to use, many feel deep satisfaction.

— TDHW

Sagan believed that science is the province of everyone, given that we understand what it’s about. In our society we often think of science as a two-tiered thing. There are the scientists who are authorities we can trust, and then there’s the rest of us. Sagan argued against that.

In the case of the AGW issue, what I often see from warmists is the promotion of blind trust: “The science says this,” or, “The world’s scientists have spoken, therefore we must act.” That carries a note of certainty that science does not actually offer. Whether we should act or not is a value judgment, and I argue that a cost/benefit analysis should be applied to such decisions as well, taking the scientific evidence and analysis into account, along with other considerations.

Breaking it wide open

There are a few really meaty exposés that have come out this year on what’s been going on in the climate science community around the issue of AGW. One of them I’ll include here is a presentation by Dr. Richard Lindzen, a climate scientist at MIT, sponsored by the Competitive Enterprise Institute. He also addressed an issue related to what Sagan talked about: the lack of critical thinking on the part of leaders and decision makers, who rely instead on appeals to authority.

Up until a few hundred years ago, we in the West appealed to authority–monarchs and popes–for answers about how we should be governed, and how we should live. Thousands of years ago, the geometers (meaning “earth measurers”) of Egypt, who could measure and calculate angles so that great structures could be built, were worshipped. Temples were built for them. What created democracy was an appeal to rational argument among the people. A significant part of this came from habits formed in the discipline of science. Unfortunately with today’s social/political/intellectual environment, to discuss the climate issue rationally is to, in effect, commit heresy! What Lindzen showed in his presentation is the unscientific thinking that is passing for legitimate reasoning in climate science, along with a little of the science of climate.

I can vouch for most of the “trouble areas” that Dr. Lindzen talks about, with regard to the arguments warmists make, because I have seen them as I have studied this issue for myself, and discussed it with others. It’s as disconcerting as it looks.

The slides Lindzen used in his presentation are still available here. The notes below are from the video.

It’s ironic that we should be speaking of “ignorance” among the educated. Yet that seems to be the case. The leaders of universities should be scratching their heads and wondering why that is. Perhaps it has something to do with C. P. Snow’s “two cultures,” which I’ve brought up before. People in positions of administrative leadership seem to be more comfortable with narratives and notions of authorship than critically examining material that’s presented to them. If they are critical, they look at things only from a perspective of political priorities.

What’s interesting is this has been a persistent problem for ages. Dr. Sallie Baliunas talked about how some of the educated elite in Europe during the Little Ice Age persecuted, tortured, and executed people suspected of witchcraft after severe weather events, because it was thought that the climate could be “cooked” by sorcery. In other words, the weather was blamed on a group of people seen as evil. Since the weather events were “unnatural,” they had to be supernatural in origin; according to the beliefs of the day that could only happen by sorcery, and the people who caused it had to be eradicated. Skeptics who challenged the idea of weather “cooking” were marginalized and silenced.

Edit 3-14-2014: After being prompted to do some of my own research on this, I got the sense that Baliunas’s presentation was somewhat inaccurate. In my research I found there was a rivalry that went on between two prominent individuals around this issue, which Baliunas correctly identifies, one accusing witches of causing the aberrant weather, and another arguing that this was impossible, because the Bible said that only God controlled the weather. However, according to the sources I read, the accusations and prosecutions for witchcraft/sorcery only happened in rural areas, and were carried out by locals. If elites were involved, it was only the elites in those areas. The Church, and political leadership of Europe did not buy the idea that witches could alter the weather. Perhaps Baliunas had access to source material I didn’t, and that’s how she came to her conclusions. Some of what I found was behind a “pay wall,” and I wasn’t willing to put up money to research this topic.

The sense I get after looking at the global warming debate for a while is there’s disagreement between warmists and skeptics about where we are along the logarithmic curve for CO2 impact, and what coefficient should be applied to it. What Lindzen says, though, is that the idea of a “tipping point” with respect to CO2 is spurious, because you don’t get “tipping points” in situations with diminishing returns, which is what the logarithmic model tells us we will get. Some might ask, “Okay, but what about the positive feedbacks from water vapor and other greenhouse gases?” Well, I think Lindzen answered that with the data he gathered.

To clarify the graph that Lindzen showed towards the end, what he was saying is that as surface temperatures increased, so did the radiation that went back out into space. This contradicts the prediction made by computer models that as the earth warms, the greenhouse effect will be enhanced by a “piling on” effect, where warming will cause more water vapor to enter the atmosphere, and more ice to melt, causing more radiation to be trapped and absorbed–a positive feedback.
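
For readers who want to see why the feedbacks matter so much to the “tipping point” question, here is a minimal sketch of the textbook zero-dimensional feedback relation, ΔT = λ0 × ΔF / (1 − f), where λ0 is the no-feedback response (a commonly quoted figure is about 0.3 degrees C per W/m^2) and f is the net feedback factor. The feedback values below are purely illustrative, not measurements:

def warming(delta_forcing_wm2, feedback_factor, lambda0=0.3):
    # Zero-dimensional feedback relation: the no-feedback warming
    # lambda0 * delta_F is amplified (f > 0) or damped (f < 0) by feedbacks.
    return lambda0 * delta_forcing_wm2 / (1.0 - feedback_factor)

forcing_per_doubling = 3.7  # W/m^2, the commonly cited value for doubled CO2

for f in (-0.5, 0.0, 0.5, 0.9):
    print(f"net feedback factor {f:+.1f} -> about {warming(forcing_per_doubling, f):.1f} C per doubling")

Runaway, “tipping point” behavior requires f to approach 1. If, as Lindzen reads the outgoing-radiation data, the net feedback is small or negative, then the diminishing returns of the logarithm dominate and no tipping point follows. Which reading is right is exactly the kind of question that should be settled by observation, not by consensus.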

Lindzen’s study was just recently completed. Based on the scientific data that’s been published, and on this presentation by Lindzen, it seems to me that the computer models the IPCC was using were not based on actual observations, but instead represent untested theories, in other words, speculation.

The audio at the end of the Q & A section gets hard to hear, so I’ve quoted it. This is Lindzen:

The answer to this is unfortunately one that Aaron Wildavsky gave 15-20 years ago before he died, which is, the people who are interested in the policy (and we all are to some extent, but some people, like you–foremost) have to genuinely familiarize themselves with the science. I’ll help. Other people will help. But you’re going to have to break a certain impasse. That impasse begins with the word “skeptic.” Whenever I’m asked, am I a climate skeptic? I always answer, “No. To the extent possible I am a climate denier.” That’s because skepticism assumes there is a good a priori case, but you have doubts about it. There isn’t even a good a priori case! And so by allowing us to be called skeptics, they have forced us to agree that they have something.

Despite Dr. Lindzen’s attempt to clarify his position from “skeptic” to “denier,” I think that’s a bad use of rhetoric, because “denier” in climate science circles has the political connotation of “holocaust denier,” which indicates that “the other side has something, and you have nothing.” Personally, I think that people like Lindzen should recognize that “climate skeptic” is a loaded term, and answer instead, “I am a skeptic of most everything, because that’s what good scientists are.” One can be skeptical of spurious claims.

It’s difficult for climate scientists from one side to even debate the other, as they should, because politics is inevitably introduced. This is a symptom of the corruption of science. Scientists should not have to defend their political or industrial affiliations with respect to scientific issues. This is tantamount to guilt by association and “attack the messenger” tactics, which are irrelevant as far as Nature is concerned. I’ve heard more than one scientist say, “Nature doesn’t give a damn about our opinions,” and it’s true. Science depends on the validity of observed data, and skeptical, probing analysis of that data. When the subject of study is human beings themselves, or products that could affect humans and the environment, then ethics comes into play, but this only extends so far as how to design experiments, or whether to do them at all, not what is discovered from observation.

This is old news by now, but a ton of e-mails and source code were stolen from The University of East Anglia’s Climate Research Unit (CRU) on November 19, 2009, and made public. I hadn’t heard about it until the last week in November. WattsUpWithThat.com has been publishing a series of articles on what’s being discovered in the e-mails, which provides a good synopsis. I picked out some of them that I thought summed up their contents and implications: here, here, here, and here. The following two interviews with retired climatologist Dr. Tim Ball also summed it up pretty well:

There are no forbidden questions in science, no matters too sensitive, or delicate to be probed, no sacred truths. Diversity and debate are valued. Opinions are encouraged to contend–substantively and in depth.

We insist on independent and–to the extent possible–quantitative verification of proposed tenets of belief. We are constantly prodding, challenging, seeking contradictions or small, persistent, residual errors, proposing alternate explanations, encouraging heresy. We give our highest rewards to those who convincingly disprove established beliefs.

— TDHW

[my emphasis in bold italics — Mark]

Ball referred to the following sites as good sources of information on climate science:

ClimateAudit

WattsUpWithThat

Here are articles written by Ball for the Canada Free Press

In the above interview Ball gets to one of the crucial issues that has frustrated skeptics for years: the publishing of scientific findings and peer review. He said the disclosed e-mails reveal that a small group of warmists exerted a tremendous amount of control over the process. He said he was mystified about why some climate scientists were emphasizing “peer review” 20 years ago (the peer-reviewed literature). He realizes now, after having reviewed the e-mails, that they were in effect promoting their own group. If you weren’t in their club, it’s likely you wouldn’t get published (they’d threaten editors if you did), and you wouldn’t get the coveted “peer review” that they touted so much. Of course, if you didn’t toe their line, you weren’t allowed in their club. No wonder former Vice-President Al Gore could say, “The debate is over.”

It goes without saying that publishing is the lifeblood of academia. If you don’t get published, you don’t get tenure, or you might even lose it. You might as well find another career if you can’t find another sponsor for your research.

The video below is called “Climategate: The backstory.” It looks like this interview with Ball was done earlier, probably in August or September.

The “damage control” from the CRU e-mails incident was apparent in the media in December, around the time of the Copenhagen conference. There was an effort to distract people from the real issues by focusing their attention instead on the nasty personalities involved. What galls me is this effort betrays a contempt for the public, taking advantage of the notion that we have little knowledge of, or interest in, how science works, and so can be easily distracted by personality issues.

I have to say the media reporting on this incident was pretty disappointing. If they talked about it at all, they frequently had pundits on who were not familiar with the science. They simply applied their reading skills to the e-mails and jumped to conclusions about what they read. In other cases they invited on PR flacks to give some counterpoint to the controversy. Warmists had a field day playing with the ignorance of correspondents and pundits. Some of the pundits were “in the ballpark.” At least their conclusions on the issue were sometimes correct, even if the reasoning behind them was not. A couple shows actually invited on real scientists to talk about the issue. What a concept!

On a lighter note, check out this clip from the Daily Show…

Here’s an explanatory article about the significance of the “hide the decline” comment, along with background information which gives context for it. Here’s a Finnish TV documentary that touched on the major issues that were revealed in the CRU e-mails (The link is to part 1. Look for the other two parts on the right sidebar at the linked page).

Carl Sagan saw this pattern of thought before:

“A fire-breathing dragon lives in my garage.”

Suppose (I’m following a group therapy approach by the psychologist Richard Franklin) I seriously make such an assertion to you. Surely you’d want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

“Show me,” you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle–but no dragon.

“Where’s the dragon?” you ask.

“Oh, she’s right here,” I reply, waving vaguely. “I neglected to mention that she’s an invisible dragon.”

You propose spreading flour on the floor of the garage to capture the dragon’s footprints.

“Good idea,” I say, “but this dragon floats in the air.”

Then you’ll use an infrared sensor to detect the invisible fire.

“Good idea, but the invisible fire is also heatless.”

You’ll spray-paint the dragon and make her visible.

“Good idea, except she’s an incorporeal dragon, and the paint won’t stick.”

And so on. I counter every physical test you propose with a special explanation of why it won’t work.

Now, what’s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there’s no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I’m asking you to do comes down to believing, in the absence of evidence, on my say-so.

The only thing you’ve really learned from my insistence that there’s a dragon in my garage is that something funny is going on inside my head.

Now another scenario: Suppose it’s not just me. Suppose that several people of your acquaintance, including people who you’re pretty sure don’t know each other, all tell you they have dragons in their garages–but in every case the evidence is maddeningly elusive. All of us admit we’re disturbed at being gripped by so odd a conviction so ill-supported by the physical evidence. None of us is a lunatic. We speculate about what it would mean if invisible dragons were really hiding out in our garages all over the world, with us humans just catching on. I’d rather it not be true, I tell you. But maybe all those ancient European and Chinese myths about dragons weren’t myths at all…

Gratifyingly, some dragon-size footprints in the flour are now reported. But they’re never made when a skeptic is looking. An alternative explanation presents itself: On close examination it seems clear that the footprints could have been faked. Another dragon enthusiast shows up with a burnt finger and attributes it to a rare physical manifestation of the dragon’s fiery breath. But again, other possibilities exist. We understand that there are other ways to burn fingers besides the breath of fiery dragons. Such “evidence”–no matter how important the dragon advocates consider it–is far from compelling. Once again, the only sensible approach is tentatively to reject the dragon hypothesis, to be open to future physical data, and to wonder what the cause might be that so many apparently sane and sober people share the same strange delusion.

— TDHW

[my emphasis in bold italics — Mark]

In an earlier part of the book he said:

The hard but just rule is that if the ideas don’t work, you must throw them away. Don’t waste neurons on what doesn’t work. Devote those neurons to new ideas that better explain the data. The British physicist Michael Faraday warned of the powerful temptation

“to seek for such evidence and appearances as are in the favour of our desires, and to disregard those which oppose them . . . We receive as friendly that which agrees with [us], we resist with dislike that which opposes us; whereas the very reverse is required by every dictate of common sense.”

Meanwhile, the risks we are ignoring

The obsession with catastrophic human-caused global warming, driven by ideology and a kind of religious group think, and by the flow of money to the tune of tens of billions of dollars, represents a misplacement of priorities. It seems to me that if we should be focusing on any catastrophic threats from Nature, we should be putting more resources into a scientifically validated, catastrophic threat that hardly anyone is paying attention to: the possibility of the extinction of the human race, or an extreme culling, not to mention the extinction of most of life on Earth, from a large asteroid or comet impact. Science has revealed that large impacts have happened several times before in Earth’s history. A large impactor will come our way again someday, and we currently have no realistic method for averting such a disaster, even if we spotted a body heading for us months in advance. The scientists who monitor the bodies in space that cross Earth’s orbit could literally fit around a table at McDonald’s, yet there are thousands of these potential missiles. These scientists say it is very difficult to make a case in Congress for an increase in funding for their efforts, because the likelihood of an impact seems so remote to politicians.

The New Madrid fault zone represents a huge, known risk to the Midwestern part of the U.S. Scientists have tried to warn cities along the zone about updating their building codes to withstand the next quake that will inevitably occur. But so far they have gotten a cool reception.

Alan Kay commented on Bill Kerr’s blog that regardless of what’s caused global warming (he leaves that as an open question), what we should really be worried about is a “crash” of our climate system, where it suddenly changes state from even a small “nudge.” It could even come about as a result of natural forces. I hadn’t thought about the issue from that perspective, and I’m glad he brought it up. He cited an example for such a crash (though on a smaller scale), pointing to “dead zones” in coastal waters all over the world, resulting from agricultural effluents. The example distracts a bit from his main point, but I see what he’s getting at. He said that governments have not been focused on how to prepare for this scenario of a climate “system crash,” and are instead distracted by meaningless “counter measures.”

The implications for science and our democratic republic

The values of science and the values of democracy are concordant, in many cases indistinguishable. Science and democracy began–in their civilized incarnations–in the same time and place, Greece in the seventh and sixth centuries B.C. … Science thrives on, indeed requires, the free exchange of ideas; its values are antithetical to secrecy. Science holds to no special vantage points or privileged positions. Both science and democracy encourage unconventional opinions and vigorous debate. … Science is a way to call the bluff of those who pretend to knowledge. It is a bulwark against mysticism, against superstition, against religion misapplied to where it has no business being. If we’re true to our values, it can tell us when we’re being lied to. The more widespread its language, rules, and methods, the better chance we have of preserving what Thomas Jefferson and his colleagues had in mind. But democracy can also be subverted more thoroughly through the products of science than any pre-industrial demagogue ever dreamed.

— TDHW

I’m going to jump ahead a bit with this quote from an interview with Carl Sagan on Charlie Rose (shown further below):

If we are not able to ask skeptical questions, to interrogate those who tell us that something is true–to be skeptical of those in authority–then we’re up for grabs for the next charlatan, political or religious, who comes ambling along. It’s a thing that Jefferson laid great stress on. It wasn’t enough, he said, to enshrine some rights in a constitution or bill of rights. The people had to be educated, and they had to practice their skepticism in their education. Otherwise, we don’t run the government. The government runs us.

On December 7, 2009 the EPA came out with its endangerment finding, declaring that carbon dioxide is a pollutant that threatens public health. The agency will proceed to impose restrictions on CO2 emitters itself, since Congress has not acted to impose its own. What is all this based on now?

President Obama, you said, “We will restore science to its rightful place.” I’m still waiting for that to happen.

Science helped birth democracy. Its shadow is now being used to create conditions for a more authoritarian government. This isn’t the first time this has happened. The pseudo-science of eugenics, which was once regarded as scientific since it was ostensibly based on the theory of evolution, was used as justification for the slaughter of millions in Europe in the 1930s and 40s. It was also used as justification for shameful actions and experiments performed by our government on certain groups of people in the U.S.

Global warming has been blown up into a huge issue. There aren’t too many people who haven’t at least heard of it. We are seriously considering taking actions that could cost ordinary people, the poor in particular, and businesses a lot of money. When the stakes are this high we’d better have a good reason for it. This is like the craze over bran muffins and oat-based foods, driven by the belief, supported by scientific studies that were misreported to the public, that they prevented cancer, only far more serious. I worry about what this does to science: if people can get away with debauching it now, why wouldn’t they keep doing it in the future?

One worry I have about the debauching of science is that it will delegitimize science in the eyes of the public, and encourage the same superstition and magical thinking that marked the Middle Ages. Who could blame us for rejecting it after it’s been perceived as “crying wolf” too many times?

The public has valued science up to now, because of the information it can bring us. The problem is we don’t care to understand what it is or how it works. “Just give us the facts,” is our attitude. We have blindly given the name of science a legitimacy that, like other things I’ve talked about on this blog, doesn’t take into account the quality of the findings, or the way they were obtained. It reminds me of a reference Alan Kay made to Neil Postman:

Our [scientific] artifacts are everywhere, but most people, as Neil Postman said once, have to take more things on faith now in the 20th century than they did in the Middle Ages. There’s more knowledge that most people have to believe in dogmatically or be confused about.

As a result, we have set up scientists as authorities. Some purport to tell us what to believe, and how to behave, and we as a society expect this of them. The problem with this is when a “scientific fact” is later revealed to be wrong, people feel jilted. Science itself is thought of as a collection of facts, written by our scientific “priesthood.” We expect this “priesthood” to do right by the rest of us. Science was never meant to take on this role. I think a good part of the reason for this passive attitude towards science in the public sphere is the quality and the methodology of findings are not reported to the general public. Most journalists wouldn’t understand the criteria enough to explain it to the citizenry in a way they’d understand.

The other part of the problem is that science is presented in our educational system as something that’s not very interesting. In fact most students only experience a small sliver of science, if that. It’s rather like mathematics (or arithmetic and calculation that’s called “mathematics”) for them, something they’re required to take. They just want to “get through it,” and they’re thankful when it’s over.

An issue I’m not even addressing here, though it’s worth noting, is that science is often perceived as heartless and cold, a discipline that has allowed us as a society to act without a moral sense of responsibility. This I’m sure has also contributed to the public’s aversion to science. And I can see that because of this, people might prefer “the science of global warming alarm” to “the science of skepticism.” One seems to be promoting “good action,” while the other seems like a bunch of backward, out of touch folks, who don’t care about the earth. These are emotional images, a way of thought that a lot of people the world over are prone to. However, as Sagan said in the interview below, “Science is after how the Universe really is, and not what makes us feel good.” These images of one group and the other are stereotypes, not so much the truth.

Of course a moral sense is necessary for a self-governing society like ours, but morality can be misapplied. By trying to do good we could in fact be hurting people if the solution we implement is not thought through. We may act on incomplete information, all the while thinking that we have the complete picture, thereby ignoring important factors that may require a very different solution to resolve. Our understanding of complex systems and the effects of tampering with them may also be grossly incomplete. While attempting to shape and direct a system that is behaving in a way we don’t like, we may make matters worse. Intent matters, but results matter, too. What appear to be moral actions will not always result in moral outcomes, especially in systems that are huge in scale and complexity. This applies to the environment and our economy.

As we’ve seen in our past, people eventually do figure out that the science behind a spurious claim was flawed, but it tends to take a while. By that point the damage has already been done. Perhaps scientists need to take a more active public service role in informing the public about claims that are made through news outlets. What would be better is if people understood scientific thinking, but in the absence of that, scientists could do the public a service by explaining issues from a scientific perspective, and perhaps educating the audience about what science is along the way. This would need to be done carefully, though. A real effort at this would probably expose people to notions that they are uncomfortable with. Without a sufficient grounding in the importance of science, that is, the importance of listening and considering these uncomfortable ideas, most people will just change the channel when that happens. In order for this to work, people need to be willing to think, because the activity is interesting, and sometimes produces useful results. Science cannot just be regarded as a vocation in our society. It is an essential part of the health of our democratic republic.

The danger of our two-tiered knowledge society

In all uses of science, it is insufficient–indeed it is dangerous–to produce only a small, highly competent, well-rewarded priesthood of professionals. Instead, some fundamental understanding of the findings and methods of science must be available on the broadest scale.

— TDHW

I’m going to turn the subject now to the matter of science and technology, and our collective ignorance, because it also has bearing on this “dangerous brew.” I found this interview with Carl Sagan, which was done shortly before he died in 1996. He talked with Charlie Rose about his then-new book, The Demon-Haunted World. He had some very prescient things to say which add to the quotes I’ve been using from his book. I found myself agreeing with what Sagan said in this interview regarding science, scientific awareness, and science vs. faith and emotions, but context is everything. He may not have been arguing from the same point of view I am, as I reveal further below. I find it interesting, though, that his quotes seem to apply very nicely to my argument. I’ve been reading this book, and I don’t see how I might be quoting him out of context. You’ll see why I’m hedging as you read further.

This is a poignant interview, because they talk about death and what that means. It’s a bit sad and ironic to see his optimism about his good health. He died from pneumonia, a complication of the bone marrow transplant he received as treatment for his myelodysplasia. His final accomplishment was completing work on a movie version of his novel, Contact, which came out in 1997. Interestingly, the movie touched on some themes from The Demon-Haunted World.

My jaw dropped when I heard Charlie Rose read that less than half of American adults in 1996 thought that our planet orbits the Sun once a year! I did a quick check of science surveys on the internet and it doesn’t look like the situation has gotten any better since then.

Sagan’s point was not that magical thinking in human beings was growing. He said it’s always been with us, but in the technological society we have built, the prominence of this kind of thinking is dangerous. This is partly because we are wielding great power without knowing it, and partly because it makes us as a people impotent on issues of science and technology. We will feel it necessary to just leave decisions about these issues up to a scientific-technological elite. I’ve argued before that we have an elite that has been making technological decisions for us, but not at a public policy level. It’s been at the level of IT administrators and senior engineers within organizations. In the realm of science, however, we clearly have an elite which has gladly taken over decisions about science at the policy level.

The climate issue points to another aspect of this. As Dr. Lindzen pointed out in his presentation (above), we have people who are misusing climate models (and it’s anyone’s guess whether it’s on purpose, or due to ignorance) as a substitute for the natural phenomenon itself! I’ve talked to a few climate modelers who believe that human activities are causing catastrophic climate change, and this is how they view it: since we do not have another Earth to use as a “control,” or to use as a means for “repeatability,” we use computer models as a “control,” or in order to repeat an “experiment.” It’s absurd. Talking to these people is like entering the Twilight Zone. They argue as if they’re the professionals who know what they’re doing. The truth is they’re ignorant of the scientific method and its value, yet their theories of computer modeling and methodology carry a high level of legitimacy in the field of climate science. It’s what a lot of the prognosticating in the IPCC (Intergovernmental Panel on Climate Change) assessment reports is based on. This gives you an idea of the ignorance that at times passes for knowledge and wisdom in this field!

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality.

— Doron Swade, curator of the London Science Museum

As I’ve talked about before, a computer model is only a theory. That’s it. It’s a representation of reality created by imperfect human beings (programmers, though in principle it’s not that different from scientists creating theories and mathematical models of reality). It’s irrational to use a theory as a “control,” or as a proxy for the real thing in an “experiment.” It goes against what science is about, which is an acknowledgment that human beings are ignorant and flawed observers of Nature. Even if we have a theory that seems to work, there is always the possibility that in some circumstance that we cannot predict it will be wrong. This is because our knowledge of Nature will always be incomplete to some degree. What science offers, when applied rigorously, is very good approximations. Within the boundaries of those approximations we can find ideas that are useful and which work. There are no shortcuts to this, though.

Theories are of course welcome in science, but the only rational thing to do with them while using the scientific method is to test them against the real thing, and to pay attention to how well theory and reality match, in as many aspects as can be discerned.

Climate modelers who back the idea of catastrophe claim they do this when forming their models, but I’ve heard first-hand accounts from scientists about how modelers will “tweak” parameters to make a model do something “interesting.” This gets them attention, and I detect some techno-cultish behavior in it. I’ve also heard second-hand accounts from scientists about how modelers will input unrealistic parameters to make the models closely match the temperature record, which they term “validating the model.” As Dr. John Christy, a scientist who studies temperature in the atmosphere and at the surface at the University of Alabama in Huntsville, once remarked, “They already knew what the correct answer was.” This is an illegitimate methodology, because it’s no better than forming a conclusion from a data correlation. I’m sure if I worked hard enough at it, I could create a computer model that closely tracked the temperature record, just by drawing lines on the screen and/or producing numbers, coming from a standpoint of total ignorance of how the climate works, and I suppose by their criteria my model would be “validated.”
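
To illustrate why closely tracking the temperature record is, by itself, weak evidence, here is a minimal sketch. The “temperature record” below is random, made-up data with no climate in it at all, and the whole exercise is hypothetical; a sufficiently flexible curve will track almost any wiggly series closely while encoding no knowledge of how the climate works.

import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2010)

# A made-up, physics-free "temperature record": a slow drift plus noise.
record = 0.005 * (years - 1900) + rng.normal(0.0, 0.15, size=years.size)

# The "model": a 9th-degree polynomial fitted by least squares.
# The time axis is rescaled to [-1, 1] only to keep the fit numerically stable.
x = np.linspace(-1.0, 1.0, len(years))
coeffs = np.polyfit(x, record, deg=9)
fitted = np.polyval(coeffs, x)

rms_misfit = float(np.sqrt(np.mean((record - fitted) ** 2)))
print(f"RMS misfit of the physics-free 'model': {rms_misfit:.3f} C")

# A small misfit says nothing about whether the "model" captures any physics;
# it only says the curve was flexible enough to be bent onto the data.
# Hindcast agreement, by itself, is not validation.

This mirrors Christy’s complaint: if parameters are tuned until the output matches a record whose shape was known in advance, the match demonstrates flexibility, not understanding. The meaningful test is whether a model, built from physically measured quantities, can predict observations it was never tuned to.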

I’ve cited this quote from Alan Kay before (though he did not specifically address the issue of climate modeling, or anything having to do with climate science when he said it):

You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

To clarify, Kay was talking about the application of computing to non-computational sciences.
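
Kay’s “inverse cube law” remark is easy to demonstrate. A computer will happily integrate an orbit under any force law you type in, and nothing in the program can tell you which exponent Nature actually uses; only comparison with observed orbits can settle that. The toy integrator below is a sketch of the point (a crude Euler step, in arbitrary units, for illustration only):

def simulate_orbit(exponent, steps=20000, dt=0.001):
    # Toy 2-D orbit under a central force proportional to 1/r**exponent,
    # in arbitrary units with GM = 1. The computer accepts exponent 2,
    # exponent 3, or anything else without complaint.
    x, y = 1.0, 0.0        # initial position
    vx, vy = 0.0, 1.0      # initial velocity (circular for the inverse-square case)
    for _ in range(steps):
        r = (x * x + y * y) ** 0.5
        ax, ay = -x / r ** (exponent + 1), -y / r ** (exponent + 1)
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    return x, y

for n in (2, 3):
    print(f"force ~ 1/r^{n}: position after the run = {simulate_orbit(n)}")

Both runs execute just fine; the program has no way of knowing that one of these universes is not ours. That is what “negotiating with reality by doing experiments” buys you, and what no amount of computation can replace.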

Beware of those who come bearing predictions

I have praised Carl Sagan for what he talked about in the Charlie Rose interview above (I praise him for some of his other work as well), but I feel I would be remiss if I didn’t talk about a portion of his career where he fell into doing what I’m complaining about in this post. He promoted an untested prediction for a political agenda. I’m going to talk about this, because it illustrates a temptation that scientists (who are flawed human beings like the rest of us) can succumb to.

Sagan was one of the chief proponents of the theory of nuclear winter in the 1980s during the Cold War between the U.S. and the Soviet Union. As Michael Crichton pointed out (you may want to search on Sagan’s name in the linked article to reach the relevant part), like with catastrophic AGW, this was based on a prediction that was supported by flimsy evidence. In fact, computer climate modeling had a central role in the prediction’s supposed legitimacy.

Sagan exhibited a fallacy in thinking on a number of occasions that I’ll call “belief in the mathematical scenario.” Such scenarios are supported by a concept that can be conjured up as a technical, mathematical model. Here’s the thing. Is the scenario even plausible? Does the fact that we can imagine it in a plausible way justify believing that it’s real? Does a moral belief that something is right or wrong justify promoting an unwavering belief in an untested theory that supports the moral rule, because it will cause people to “do the right thing”? How is this different from aspects of organized religion? Do these questions matter? From where I sit, it’s the academic equivalent of people making up their own myths, using the technical tool of mathematics as a legitimizer, mistaking mathematical precision for objective truth in the real world. This is a behavior that science is supposed to help us avoid!

On one level, trying to apply the scientific method to the nuclear winter prediction sounds absurd: “You want evidence confirming that a nuclear war would result in nuclear winter?? Are you nuts?” First of all, we don’t have to resort to that extreme. Scientists have found ways to physically model a scenario by using materials from Nature, but at a small scale, in order to arrive at approximations that are quite good. We don’t have to experience the real thing at full scale to get an idea of what will really happen. It’s just a matter of arriving at a realistic model, and in the case of this prediction that might’ve been difficult.

The point is that a prediction, an assertion, must be tested before it can be considered scientifically valid. It’s not science to begin with unless it’s falsifiable. And what’s worse, Sagan knew this! Without falsifiability, the appropriate scientific answer is, “We don’t know,” but that’s not what he said about this scenario. He at least admitted it was a prediction, but he also called it “science.” It was disingenuous, and he should’ve known better.

Without testing our notions, our assumptions, and our models, we are left with superstition–irrational fears of the unknown, and irrational hopes for things that defy what is possible in Nature (whether we know what’s possible is beside the point), even though they are dispensed using ideas that sound modern, and comport with what we think intelligent, educated people should know.

It doesn’t matter if the untested prediction is made seemingly plausible by mathematics, or a computer model (which is another form of mathematics). That’s mere hand waving. Prediction, mathematical or otherwise, is not science, and therefore it’s not nearly as reliable as analysis derived from the scientific method. Our predictions are hopefully derived from science, but even so, an untested prediction really is only as reliable as the experience of the person giving it.

The same sort of political dynamic that has existed in climate science came into play when the nuclear winter theory was popular: if you were skeptical about the theory of nuclear winter, that meant you were in the “minority” (or so they had people believe)–not with the “consensus.” You were accused of supporting nuclear arms, and our government’s tough “cowboy” anti-Soviet policy, and were a bad person. Such smears were unjustified, but they were used to shame and silence dissent. I don’t mean to suggest that Sagan was a communist sympathizer, or anything of that sort. I think he wanted to prevent nuclear war, period. Not a bad motive in itself, but it seems to me he was willing to sacrifice the legitimacy of science for this.

A lot of scientists who didn’t know too much about the science at issue, but didn’t want to ruffle feathers, went along with it to be a part of the accepted group. The whole thing was desperate and cynical. It’s my understanding from history that the fears exhibited by the promoters of this theory were unfounded, and I think they came about because of a fundamental misunderstanding of realpolitik.

It’s not as if this scare tactic was really necessary. The consequences of nuclear war that we knew about were horrifying enough. It’s apparent from the interview with Ted Turner in the above video that there were worries about the escalation of the nuclear arms race, perhaps the Reagan Administration’s first strike nuclear capability against the Soviet Union in particular. You’ll notice that Sagan talks about (I’m paraphrasing), “One nation bombing another before the other can respond, the attacker thinking that they will remain untouched.” People like Sagan didn’t want the U.S., or perhaps the Soviets for that matter, to think it could carry out a first strike and wipe out the other side with impunity (because the climate would “get” the other side in return). Surely, a nuclear war with a large number of blasts would’ve caused some changes in climate, but how much was anyone’s guess.

The only evidence that could’ve realistically tested the theory, to a degree, would’ve been from above ground nuclear tests, or the bombs that were dropped on Hiroshima and Nagasaki, Japan. To my knowledge, none of them gave results that would’ve contributed to the prediction’s validity.

Before the very first nuclear bomb was tested there was at least one scientist in the Manhattan Project who thought that a single nuclear blast might ignite our atmosphere. That would be a fate worse than the predicted nuclear winter. Imagine everything charred to a crisp! Others thought that while it was possible, the probability was remote. Still, the scenario was terrifying. The bomb was tested. Many other above-ground atom bomb tests followed, and we’re all still here. Not to say that nuclear testing is good for us or the environment, but the prediction didn’t come true. The point is, yes, there are terrible scenarios that can be imagined. These scenarios are made plausible based on things that we know are real, and a knowledge of mathematics, but that does not mean any of these terrible scenarios will happen.

You’ll notice if you watch the interview with Turner that Sagan even talks about catastrophic AGW! Again, what he spoke of was a prediction, not a scientifically validated conclusion. It’s hard to know what his motivation was with that, but it sounded like he was uncomfortable with the idea that our civilization was not consciously thinking about the environment, and what consequences that might have down the line. Western governments began to understand environmental issues in the 1980s, and implemented regulations to clean up what was our highly polluted environment. From what I understand though, this did not happen in other parts of the world.

Not to say that all environmental problems have been solved in the U.S. There are real environmental issues that science can inform us about today, and will need to be acted upon. One example is “dead zones,” which I referred to earlier, where coastal waters are losing their oxygen due to an interaction between nitrogen-rich compounds that agricultural operations are releasing into streams, and algae. It’s killing off all marine life in the affected areas, and these “dead zones” exist all over the world. There’s a Frontline documentary called “Poisoned Waters” that talks about it. Another is an issue that does have to do with human-induced climate forcing, but not strictly in the sense of warming or cooling the planet. The PBS show Nova talked about it in an episode called “Dimming the Sun.” Huge quantities of sooty pollution have been found to affect relative humidity on Earth, which does have a significant effect on our weather. Aside from the application of this new find to the issue of AGW, which to me was rather irrelevant, this was a very interesting show. They gave what I thought was a very thorough and compelling exposition of the science behind the “dimming” effect.

The legacy of Carl Sagan

Based on what I’ve read in Sagan’s book, if he were still alive today, he would probably still be promoting the theory of catastrophic AGW. That is something I find hard to understand, given the understanding of science that he had, its implications for our society, and the seemingly innate need for humans to create their own myths, all of which he seemed to know about. Perhaps he was not one to look inward, as well as outward. Though it’s impossible to do this now, this is an issue I’d dearly like to ask him about.

I can’t help but think that Sagan and his cohorts created the template for the pseudo-science that bedevils climate science today. Richard Lindzen’s paper, which I referred to at the beginning of this post, paints the picture of what’s happened more fully, and points to some other motivations, besides politics. One of Sagan’s phrases that I still remember is, “Extraordinary claims require extraordinary proof.” Too bad he didn’t follow that maxim sometimes.

Despite this, I respect the fact that he really did try to bring scientific understanding to the masses. Sagan in my mind was a great man, but like all of us he was flawed, and even he was willing to set aside his scientific thinking and participate in the promotion of pseudo-science for non-scientific goals.

I could rant that Sagan was a hypocrite, because he cynically exploited the very ignorance he expressed concern about. However, my guess is that he saw what he thought was a dangerous situation developing in an ignorant world–a “demon-haunted” one at that. Perhaps the only way he knew how to deal with it in such “dire circumstances” was to “take what existed,” promote a scenario that was not based on much, which we ignoramuses would believe, and cynically exploit the good name of science (and his own good name) so that we would pull back from the brink. It is elitist, though I can understand the temptation.

If we really want to bring people out of ignorance it’s best to try to educate them, even though that can be hard. Sometimes people just don’t want to hear it. But if this approach is not taken, then it’s just a bunch of elites messing with people’s heads so we’ll give them a response they want. We won’t be any more enlightened. There’s too much of that already.

I guess another lesson is that even though we can see ignorance in people, when the human spirit is brought out it can manifest solutions to problems in ways that people like me would not anticipate, and things work out okay. That in essence is the genius of semi-autonomous systems like ours that have diffused power structures. It acknowledges that no one person, or group, has all the right answers. The same is true of science, when it’s done well, and relatively free markets. It’s best if we respect that, even though we may be tempted to subvert these systems for causes we ourselves deem noble.

Even so, I feel as though we put too much faith in our semi-autonomous, diffused systems. Some of us think they will solve all problems, and it’s not necessary to worry about being well educated. I think people push aside the idea too casually that more sophisticated ways of thinking and perceiving would help all of us (not just a few) make those systems more optimal.

So what are we to do?

The tenets of skepticism do not require an advanced degree to master, as most successful used car buyers demonstrate. The whole idea of a democratic application of skepticism is that everyone should have the essential tools to effectively and constructively evaluate claims to knowledge. All science asks is to employ the same levels of skepticism we use in buying a used car or in judging the quality of analgesics or beer from their television commercials.

But the tools of skepticism are generally unavailable to the citizens of our society. They’re hardly ever mentioned in the schools, even in the presentation of science, its most ardent practitioner, although skepticism repeatedly sprouts spontaneously out of the disappointments of everyday life. Our politics, economics, advertising, and religions (New Age and Old) are awash in credulity. Those who have something to sell, those who wish to influence public opinion, those in power, a skeptic might suggest, have a vested interest in discouraging skepticism.

— TDHW

I noted as I wrote this post that both Sallie Baliunas and Carl Sagan said that science needed special protection in our society. Richard Lindzen indicates that this protection is paper thin. He said it's unfortunately easy to co-opt science. The only way science can be protected in my view is if we value it for what it really is. Students need to be taught that, and shown its beauty. Sagan said a key thing in the Charlie Rose interview: "Science is more than a body of knowledge. It's a way of thinking." I must admit some ignorance on this point, but what little I've heard about science education in schools now indicates that it's taught almost strictly as a body of knowledge. I suspect this is because of No Child Left Behind and standardized testing. I remember a CS professor saying a while back that in his kid's science class there was a lot of workbook material, but very little experimentation, because the teachers were afraid to allow their students to do experiments. He didn't explain why. My suspicion is that they didn't want the students to come to their own conclusions about what they had seen, and possibly get "confused" about what they're supposed to know for their tests. In any case I remember exclaiming to the professor that he should get his child out of that class! I asked rhetorically, "What do they think they're teaching?"

Even when I look at my own science education I realize that I wasn't given a complete sense of what science was. The hypotheses were practically given to us. When we did an experiment, the steps for it were always given to us. We were always given the "correct" answer in the end, so we could compare it against the answer we came up with; that is how we calculated error. One thing that was valuable was that we were asked to think of any reasons why the error occurred. Some error always existed (it always does in real science), and we could speculate that maybe our instruments introduced some error, that we may have done the procedure a bit wrong, etc. This was teaching only one part of real science: observation, being skeptical of our observations, and recognizing human fallibility. That's valuable. On the other hand, what it also taught was the fallacy that there is a perfectly correct answer, achieved via mathematics formulated by "masters of science." There's so much more to it than that. In real science, scientists come up with their own hypotheses. They design their own experiments. When they get their results they have nothing to compare them against, unless they're reproducing someone else's experiment. Even then they can't just say "the results in the original experiment are the correct answer," because the other experiment may have had unrecognized flaws, too. The process by which those mathematical formulas that we used became so good was not a "one shot" deal. They came about from making a lot of mistakes, recognizing them, and correcting for them.
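
To spell out the arithmetic we were actually doing in those labs (my reconstruction of the standard formula, not anything from the original handouts):

\[ \text{percent error} = \frac{\lvert \text{measured} - \text{accepted} \rvert}{\lvert \text{accepted} \rvert} \times 100\% \]

That one number was the whole story of "error" for us, which is part of the narrowness I'm describing.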

I wonder how real scientists figure out what error figures/bars to put in their results. Maybe they come from instrument ratings, or from statistics gathered over repeated observations, depending on the scale of what's being measured.
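
Here's a rough sketch of one common approach, with made-up numbers (my illustration, not something from any of the sources in this post): repeat the measurement several times, then report the mean with the standard error of the mean as the error bar. Instrument ratings and error propagation are other pieces of the picture.

```python
# A minimal sketch, assuming the simplest case: error bars from the scatter
# of repeated measurements (the numbers below are invented for illustration).
import statistics

measurements = [9.79, 9.81, 9.83, 9.80, 9.82]  # hypothetical repeated trials

mean = statistics.mean(measurements)
# The sample standard deviation estimates the scatter of a single trial...
spread = statistics.stdev(measurements)
# ...and dividing by sqrt(N) gives the standard error of the mean,
# i.e. how uncertain the average itself is.
std_error = spread / (len(measurements) ** 0.5)

print(f"{mean:.3f} +/- {std_error:.3f}")
```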

Anyway, science is really about wondering, exploring, being curious, being skeptical of your own observations, as well as those of others. It also takes into account what’s been discovered previously. Alan Kay has talked about how there’s also a kind of critical argument that goes on in science, where the weak ideas are identified and set aside, and the strong ones are allowed to rise to the top.

So much stress is put on the need for math and science education for our country’s future economic health. It’s necessary for our society’s general health, too. I hope someday we will recognize that.

— Mark Miller, https://tekkie.wordpress.com


“Oddly enough, it’s the scientists who had the most to do with redefining beauty.”
— author Paul Schullery

I watched parts of Ken Burns’s new documentary series “The National Parks: America’s Best Idea” on PBS. One part talked about George Melendez Wright, a biologist who through his tireless efforts got the national parks to change their policy towards wild animals in the 1930s. Park policy had been to treat them like zoo animals. Some were caged for public viewing, and feeding sites were set up to attract animals to strategic locations so that people could see them. Wright made the case against it, saying that this policy detracted from the beauty of the natural experience for visitors. Paul Schullery said the above quote in that segment, and I felt a tinge of anger. “Oddly??”, I thought. I felt disgusted. There’s nothing odd about it. It’s just that most Americans don’t recognize that science can open up a new perception to beauty. I’m using this post as an illustration of the beauty of science in an area I’ve been studying as a hobby.

I’ve been looking at the issue of climate change from a scientific perspective for several years now. When I was a kid I was very interested in severe weather phenomena: tornadoes and hurricanes. I wanted to be a meteorologist. Had I gone through with that I probably would’ve been one of those storm chasers. But then I found computers… Anyway, my interest in atmospheric science has remained.

I’ve looked at this subject long enough to know that just the study of what affects our climate is highly controversial within the field. Unfortunately, it has a significant amount of dogma in it, and any new theories that break with it have a hard time even being published. This is the main reason my interest in this area of research has been reinvigorated. There is legitimate climate research going on in the field, but I’ve had this increasing sense as I’ve kept track of it that the field is being used for non-scientific purposes, and at the expense of science. I’ll get into that in a future post.

A new theory by Danish scientist Henrik Svensmark, published just recently, proposes a radical idea: our climate is under the influence of celestial phenomena. The theory is rather complex. Sometimes it's portrayed as saying that our climate is influenced by the Sun, but that's not the full story.

The theory contains three concepts:

  • The Sun’s magnetic field goes through active and dormant cycles.
  • The position of the solar system in the Milky Way changes with time. It has an orbit around the center of the galaxy (though our solar system is positioned near the edge of the galaxy). As it orbits, it sometimes goes through spiral arms, which contain many stars, some of which go nova. At other times it’s in between these arms, in relatively empty space.
  • Cosmic rays, which come from exploding stars, have a direct influence on the cloudiness of our atmosphere, and thereby heavily influence the cooling of our climate.

The third point is what’s most controversial about this theory. Svensmark claims that cosmic rays have a much greater influence on our climate than carbon dioxide (CO2).

The theory goes that there is an interaction between the three components. When the solar system is traveling through a spiral arm, there is more opportunity for cosmic rays to reach our atmosphere and create cloudiness. When it's traveling in between arms, there are fewer cosmic rays. At the same time, when the Sun is in an active period, the solar wind diverts cosmic rays away from the Earth, creating less cloudiness, and a warmer climate. When it's in a dormant period, cosmic rays reach the Earth more freely, creating cloudiness, and a cooler climate.

Svensmark claims that his theory can explain periods of climate that are evident in the geologic record, which go back hundreds of thousands of years, and the climate we experience in modern times.

This is a theory that still needs to go through the scientific process of rigorous review by other scientists. So at this point I’m going to consider it speculative, but worth considering.

I found the following video on "The Cloud Mystery." It features other scientists who are supporting Svensmark's work: Nir Shaviv, an astrophysicist at Hebrew University in Jerusalem; and Eugene Parker, a solar astrophysicist at the University of Chicago. Svensmark has also published a book, The Chilling Stars. Here is an article on The Resilient Earth blog about this subject.

There's a point of clarification I should make. Nir Shaviv talks about measuring Carbon-14 in sedimentary deposits. My understanding is that Carbon-14 can be used as a proxy for cosmic rays when looking at geologic records, since Carbon-14 is formed when nitrogen in the atmosphere is struck by cosmic rays. If you're a science maven like I am, you usually hear about Carbon-14 in relation to the radiocarbon dating of organic material found at archaeological sites.
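
For readers who want the mechanism behind the proxy spelled out, the textbook version (my summary, not something stated in the video) is that a cosmic-ray-produced neutron converts atmospheric nitrogen into Carbon-14, which then decays away with a half-life of about 5,730 years:

\[ n + {}^{14}\mathrm{N} \;\rightarrow\; {}^{14}\mathrm{C} + p, \qquad N(t) = N_0 \cdot 2^{-t/5730\ \text{yr}} \]

More cosmic rays mean more Carbon-14 being produced, which is why its concentration in datable deposits can stand in for the cosmic-ray flux; the decay law is the part that makes radiocarbon dating of archaeological finds work.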

What they're saying is that when the Carbon-14 record is matched up against the temperature record (reconstructed through other proxies, though I'm less familiar with those), there seems to be a good correspondence between the trends in Carbon-14 levels and the temperature trends over the same time periods.

The comment that Svensmark makes about the whiteness of the low clouds is significant. We experience full-spectrum light as white light. So the fact that the low clouds appear white means, as he says, that they are reflecting most of the Sun’s rays that hit them back out into space.


The one disappointing thing to me about this presentation is that the proponents of this theory sound as dogmatic about it as other scientists in this field are about CO2 affecting global warming. I much prefer the stance of, “Given our current level of ignorance, what we think we know is…”. Even so, I find this idea that there’s a unity between the universe and our Earth, and that it affects something we experience every day to be really beautiful. It makes sense to me, but I’m going to withhold belief, and let the scientific process vet it out.

Science can be beautiful. I think a lot of people miss that. They think that scientists live their lives in sterile labs in white coats, running experiments. Some do that sort of thing, but the exciting stuff in the natural sciences happens when people go out in nature and “get their hands in it.”

There's a wonderful site I found recently called Symphony of Science that takes lyrical quotes from some prominent scientists, modifies them so they sound like they're singing, and sets them to music. It sounds a bit alien, but beautiful at the same time. This article is my opinion alone, and I make no claim that the scientists portrayed in this video endorse this hypothesis. I thought this "song" would put a nice "exclamation mark" on the subject, though.

“We are all connected” from symphonyofscience.com


Update 8-17-09: I’ve revised this post a bit to clarify some points I made.

I received a request 2-1/2 weeks ago to write a post based on video of a speech that Alan Kay gave at Kyoto University in February, titled "Systems Thinking For Children And Adults." Here it is. The volume in the first 10 minutes of the video is really low, so you'll probably need to turn up your volume. The audio gets readjusted louder after that.

Edit 1-2-2014: See also Kay’s 2012 speech, “Normal Considered Harmful,” which I’ve included at the bottom of this post.

On the science of computer science

Kay said what he means by a science of computing is the forward-looking study, understanding, and invention of computing. Will the "science" come to mean something like the other real sciences? Or will it be like library science and social science, which amount to a gathering of knowledge? He said the latter is not the principled way in which physics, chemistry, and biology have been able to revolutionize our understanding of phenomena. Likewise, will we develop a software engineering that is like the other engineering disciplines?

I've looked back at my CS education with more scrutiny, and given what I've found, I'm surprised that Kay is asking this question. Maybe I'm misunderstanding what he said, but for me CS was a gathering of knowledge a long time ago. The question for me is, can it change from that into a real science? Perhaps he's asking about the top universities.

When I took CS as an undergraduate in the late 80s/early 90s it was clear that some research had gone into what I was studying. All the research was in the realm of math. There was no sense of tinkering with the architectures that existed, and very little practice in analyzing them. We were taught to apply a little analysis to algorithms. There was no sense of trying to create new architectures. What we were given was pre-digested analysis of what existed. So it had gotten as far as exposing us to the "TEM" parts (of "TEMS"–Technology, Engineering, Mathematics, Science), but only in a narrow band. The (S)cience was non-existent.

What we got instead is what I’d call “small science” in the sense that we had lots of programming labs where we experimented with our own knowledge of how to write programs that worked; how to use, organize, and address memory; and how to manage complexity in our software. We were given some strategies for doing this, which were taught as catechisms. The labs gave us an opportunity to see where those strategies were most effective. We were sometimes graded on how well we applied them.

We got to experience a little bit of how computers could manipulate symbols, which I thought was real interesting. I wished that there would’ve been more of that.

One of the tracks I took while in college was in software engineering, which really centered on project management techniques and rules of thumb. It was not a strong engineering discipline backed by scientific findings and methods.

When I got out into the work world I felt like I had to “spread the gospel,” because what IT shops were doing was ad hoc, worse than the methodologies I was taught. I was bringing them “enlightenment” compared to what they were doing. The nature and constraints of the workplace broke me out of this narrow-mindedness, and not always in good ways.

It's only been by doing a lot of thinking about what I've learned, and about my POV of computers, that I've been able to see that what I got out of CS was a gathering of knowledge with some best practices. At the time I had no concept that I was only getting part of the picture, even though our CS professors openly volunteered, with a wry humor, that "computer science is not a science." They compared the term "computer science" to "social science" in the sense that it was an ill-defined field. There was the expectation that it would develop into something more cohesive, hopefully a real science, later on. Given the way we were taught, though, I guess they expected it to develop with no help from us.

Kay has complained previously that the commercial personal computing culture has contributed greatly to the deterioration of CS in academia. I have to admit I was a case in point. A big reason why I thought of CS the way I did was this culture I grew up in. Like I’ve said before, I have mixed feelings about this, because I don’t know if I would be a part of this field at all if the commercial culture he complains about never existed.

What I saw was that people were discouraged from tinkering with the hardware, seeing how it worked, much less trying to create their own computers. Not to say this was impossible, because there were people who tinkered with 8- and 16-bit computers. Of course, as I think most people in our field still know, Steve Wozniak was able to build his own computer. That’s how he and Steve Jobs created Apple. Computer kits were kind of popular in the late 1970s, but that faded by the time I really got into it.

When I used to read articles about modifying hardware there was always the caution about, “Be careful or you could hose your entire machine.” These machines were expensive at the time. There were the horror stories about people who tried some machine language programming and corrupted the floppy disk that had the only copy of their program on it (ie. hours and hours of work). So people like me didn’t venture into the “danger zone.” Companies (except for Apple with the Apple II) wouldn’t tell you about the internals of their computers without NDAs and licensing agreements, which I imagine one had to pay for handsomely. Instead we were given open access to a layer we could experiment on, which was the realm of programming either in assembly or a HLL. There were books one could get that would tell you about memory locations for system functions, and how to manipulate features of the system in software. I never saw discussion of how to create a software computer, for example, that one could tinker with, but then the hardware probably wasn’t powerful enough for that.

By and large, CS fit the fashion of the time. The one exception I remember is that in the CS department’s orientation/introductory materials they encouraged students to build their own computers from kits (this was in the late 1980s), and try writing a few programs, before entering the CS program. I had already written plenty of my own programs, but as I said, I was intimidated by the hardware realm.

My education wasn’t the vocational school setting that it’s turning into today, but it was not as rigorous as it could have been. It met my expectations at the time. What gave me a hint that my education wasn’t as complete as I thought was that opportunities which I thought would be open to me were not available when I looked for employment after graduation. The hint was there, but I don’t think I really got it until a year or two ago.

What would computing as a real science be like?

I attended the 2009 Rebooting Computing summit on CS education in January, and one of the topics discussed was, "What is the science of computer science?" In my opinion it was the only topic brought up there that was worth discussing at that time, but that's just me. The consensus among the luminaries who participated was that historically science has always followed technology and engineering. The science explains why some engineering works and some doesn't, and it provides boundaries for a type of engineering.

We asked the question, “What would a science of computing look like?” Some CS luminaries used an acronym “TEMS” (Technology, Engineering, Mathematics, Science), and there seemed to be a deliberate reason why they had those terms in that order. In other circles it’s often expressed as “STEM.” Technology is developed first. Some engineering gets developed from patterns that are seen (best practices). Some math can be derived from it. Then you have something you can work with, experiment with, and reason about–science. The part that’s been missing from CS education is the science itself: experimentation, an interest and proclivity to get into the guts of something and try out new things with whatever–the hardware, the operating system, a programming language, what have you–just because we’re curious. Or, we see that what we have is inadequate and there’s a need for something that addresses the problem better.

Alan Kay participated in the summit and gave a description of a computing science that he had experienced at Xerox PARC. They studied existing computing artifacts, tried to come up with better architectures that did the same things as the old artifacts, and then applied the new architectures to “everything else.” I imagine that this would test the limits of the architecture, and provide more avenues for other scientists to repeat the process (take an artifact, create a new architecture for it, “spread it everywhere”) and gain more improvement.

(Update 12-14-2010: I added the following 3 paragraphs after finding Dan Ingalls’s “Design Principles Behind Smalltalk” article. It clarifies the idea that a science of computing was once attempted.)

Dan Ingalls gave a brief description of a process they used at Xerox to drive their innovative research, in “Design Principles Behind Smalltalk,” published in Byte Magazine in 1981:

Our work has followed a two- to four-year cycle that can be seen to parallel the scientific method:

  • Build an application program within the current system (make an observation)
  • Based on that experience, redesign the language (formulate a theory)
  • Build a new system based on the new design (make a prediction that can be tested)

The Smalltalk-80 system marks our fifth time through this cycle.

This parallels a process that was once described to me in computer science, called "bootstrapping": building a language and system "B" from a "lower form" language and system "A". What was unique here was that they had a set of overarching philosophies they followed, which drove the bootstrapping process. The inventors of the C language and Unix went through a similar process to create those artifacts. Each system had different goals and design philosophies, which are reflected in their designs.
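
To make the "language B hosted on language A" idea concrete, here's a toy sketch (entirely hypothetical, and it only shows the hosting step, not the philosophy-driven redesign cycles that made the Smalltalk and C/Unix bootstraps interesting): a tiny prefix-notation language "B" evaluated by a few lines of Python playing the role of "A".

```python
# A toy "language B" (prefix expressions with let-bindings) hosted on
# "language A" (Python). Purely illustrative; not a real bootstrap.
def evaluate(expr, env):
    if isinstance(expr, (int, float)):   # literal number
        return expr
    if isinstance(expr, str):            # variable reference
        return env[expr]
    op, *args = expr                     # operator application
    if op == "let":                      # ("let", name, value_expr, body_expr)
        name, value_expr, body = args
        return evaluate(body, {**env, name: evaluate(value_expr, env)})
    ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
    return ops[op](*[evaluate(a, env) for a in args])

# let x = 3 in x * (x + 1)  =>  12
print(evaluate(("let", "x", 3, ("*", "x", ("+", "x", 1))), {}))
```

In a real bootstrap, each pass would redesign the hosted language based on what was learned from using it, which is what the Ingalls quote above describes.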

To give you a "starter" idea of what this process is like, read the introduction to Design Patterns, by Gamma, Helm, Johnson, and Vlissides, and take note of how they describe coming up with their patterns. Then notice how widely those patterns have been applied to projects that have nothing to do with the software the Gang of Four originally drew the patterns from. The exception here is that the Gang of Four didn't redesign a language as part of their process. They invented a terminology set, and a concept of repeatable coding patterns. They established architectural patterns to use within an existing language. The thing is, if they had explored what was going on with the patterns mathematically, they might very well have been able to formulate a new architecture that encompassed the patterns they saw, which, if successful, would've allowed them to create a new language.
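
For readers who haven't seen the book, here's a minimal sketch of one of its patterns (Observer) in Python; the code and names are my own illustration, since the book's examples are in C++ and Smalltalk.

```python
# A bare-bones Observer pattern: a subject broadcasts state changes to
# whoever has registered interest. Hypothetical example code.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        for observer in self._observers:   # the pattern: notify every observer
            observer.update(state)

class LoggingObserver:
    def update(self, state):
        print(f"state changed to {state!r}")

subject = Subject()
subject.attach(LoggingObserver())
subject.set_state("ready")   # prints: state changed to 'ready'
```

Note that the pattern is a convention the programmers impose within an existing language, which is exactly the limitation pointed out above: the language itself never changes.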

The problem in our field is illustrated by the fact that more often than not, there’s been no study of the other technologies where these patterns have been applied. When the Xerox Learning Research Group came up with Smalltalk (object-orientation, late-binding, GUI), Alan Kay expected that others would use the same process they did to improve on it, but instead people either copied OOP into less advanced environments and then used them to build practical software applications, or they built applications on top of Smalltalk. It’s just as I described with my CS education: The strategies get turned into a catechism by most practitioners. Rather than studying our creations, we’ve just kept building upon and using the same frameworks, and treating them with religious reverence–we dare not change them lest we lose community support. Instead of looking at how to improve upon the architecture of Smalltalk, people adopted OOP as a religion. This happened and continues to happen because almost nobody in the field is being taught to apply mathematical and scientific principles to computing (except in the sense of pursuing proofs of computability), and there’s little encouragement from funding sources to carry out this kind of research.

There are degrees of scientific thinking that software developers use. One IT software house where I worked for a year regularly used patterns that we created ourselves. We understood the essential idea of patterns, though we did not understand the scientific principles Kay described. Everywhere else I worked, design patterns weren't used at all.

There has been more movement in the last few years to break out of the confines developers have been “living” in; to try and improve upon fundamental runtime/VM architecture, and build better languages on top of it. This is good, but in reality most of it has just recapitulated language features that were invented decades ago through scientific approaches to computing. There hasn’t been anything dramatically new developed yet.

The ignorance we ignore

“What is the definition of ignorance and apathy?”

“I don’t know, and I don’t care.”

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well.

— Alan Kay, The Early History of Smalltalk

A fundamental problem with our field is that there's very little appreciation for good architecture. We keep tinkering with old familiar structures, and produce technologies that are marginally better than what came before. We assume that brute force will get us by, because it always has in the past. This ignores a lot. One reason that brute force has been able to work productively in the past is the work of people who did not use brute force, and who created functional programming, interactive computing, word processing, hyperlinking, search, semiconductors, personal computing, the internet, object-oriented programming, IDEs, multimedia, spreadsheets, etc. This kind of research went into decline after the 1970s and has not recovered. It's possible that the brute force mentality will run into a brick wall, because there will be no more innovative ideas to save it from itself. A symptom of this is the hand-wringing I've been hearing for a few years now about how to leverage multiple CPU cores, though Kay thinks this is the wrong direction to look in for improvement.

There's a temptation to say "more is better" when you run into a brick wall. If one CPU core isn't providing enough speed, add another one. If the API is not to your satisfaction, just add another layer of abstraction to cover it over. If the language you're using is weak, create a large API to give it lots of functionality and/or a large framework to make it easier to develop apps. in it. What we're ignoring is the software architecture (all of it, including the language(s) and OS), and indeed the hardware architecture. These are the two places where we put up the greatest resistance to change. I think it's because we acknowledge to ourselves that the people who make up our field by and large lack some basic competencies that are necessary to reconsider these structures. Even if we had the competencies nobody would be willing to fund us for the purpose of reconsidering said structures. We're asked to build software quickly, and with that as the sole goal we'll never get around to reconsidering what we use. We don't like talking about it, but we know it's true, and we don't want to bother gaining those competencies, because they look hard and confusing. It's just a suspicion I have, but I think an understanding of mathematics and the scientific outlook is essential to the process of invention in computing. We can't get to better architectures without it. Nobody else seems to mind the way things are going. They accept it. So there's no incentive to try to bust through some perceptual barriers to get to better answers, except the sense that some of us have that what's being done is inadequate.

In his presentation in the video, Kay pointed out the folly of brute force thinking by showing how humungous software gets with this approach, and how messy the software is architecturally. He said that the “garbage dump” that is our software is tolerated because most people can’t actually see it. Our perceptual horizons are so limited that most people can only comprehend a piece of the huge mess, if they’re able to look at it. In that case it doesn’t look so bad, but if we could see the full expanse, we would be horrified.

This clarifies what had long frustrated me about IT software development, and I’m glad Kay talked about it. I hated the fact that my bosses often egged me on towards creating a mess. They were not aware they were doing this. Their only concern was getting the computer to do what the requirements said in the quickest way possible. They didn’t care about the structure of it, because they couldn’t see it. So to them it was irrelevant whether it was built well or not. They didn’t know the difference. I could see the mess, at least as far as my project was concerned. It eventually got to the point that I could anticipate it, just from the way the project was being managed. So often I wished that the people managing me or my team, and our customers, could see what we saw. I thought that if they did they would recognize the consequences of their decisions and priorities, and we could come to an agreement about how to avoid it. That was my “in my dreams” wish.

Kay said, "Much of the applications we use … actually take longer now to load than they did 20 years ago." I've read about this (h/t to Paul Murphy). (Update 3-25-2010: I used to have video of this, but it was taken down by the people who made it. So I'll just describe it.) A few people did a side-by-side test of a 2007 Vista laptop with a dual-core Intel processor (I'm guessing 2.4 GHz) and 1 GB of RAM vs. a Mac Classic II with a 16 MHz Motorola 68030 and 2 MB RAM. My guess is the Mac was running a SCSI hard drive (the only kind you could install on a Mac when they were made back then). I didn't see them insert a floppy disk. They said in the video that the Mac is "1987 technology." Even though the Mac Classic II was introduced in 1991, they're probably referring to the 68030 CPU it uses, which came out in 1987.

The tasks the two computers were given were to boot up, load a document into a word processor, quit out of the word processor, and shut down the machine. The Mac completed the contest in 1 minute, 42 seconds, 25% faster than the Vista laptop, which took 2 minutes, 17 seconds. Someone else posted a demonstration video in 2009 of a Vista desktop PC which completed the same tasks in 1 minute, 18 seconds–23% faster than the old Mac. I think the difference was that the laptop vs. Mac demo likely used a slower processor for the laptop (vs. the 2009 demo), and probably a slower hard drive. The thing is though, it probably took a 4 GHz dual-processor (or quad core?) computer, faster memory, and a faster hard drive to beat the old Mac. To be fair, I've heard from others that the results would be much the same if you compared a modern Mac to the old Mac. The point is not which platform is better. It's that we've made little progress in responsiveness.

The wide gulf between the two pieces of hardware (old Mac vs. Vista PC) is dramatic. The hardware got about 15,000-25,000% faster via Moore's Law (though Moore's Law is really about transistor counts, not clock speed), but we have not seen a commensurate speedup in system responsiveness. In some cases the newer software technology is slower than what existed 22 years ago.
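
A quick back-of-the-envelope check of those figures, using the times above and the guessed 2.4 GHz clock (my arithmetic, so treat the rounding loosely): the Mac's margin over the Vista laptop, the 2009 desktop's margin over the Mac, and the raw clock ratio work out to roughly

\[ \frac{137 - 102}{137} \approx 25.5\%, \qquad \frac{102 - 78}{102} \approx 23.5\%, \qquad \frac{2.4\ \text{GHz}}{16\ \text{MHz}} = 150 \ (\approx 15{,}000\%\ \text{per core}). \]

So the clock alone is two or more orders of magnitude faster, while the end-to-end task time barely moved.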

When Kay has elaborated on this in the past he’s said this is also partly due to the poor hardware architecture which was adopted by the microprocessor industry in the 1970s, and which has been marginally improved over the years. Quoting from an interview with Alan Kay in ACM Queue in 2004:

Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea [referring to the Burroughs B5000 computer].

Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.

Another factor is the configuration and speed of main and cache memory. Cache is built into the CPU, is directly accessed by it, and is very fast. Main memory has always been much slower than the CPU in microcomputers. The CPU spends the majority of its time waiting for memory, or for data to stream from a hard drive or internet connection, if the data is not already in cache. This has always been true. It may be one of the main reasons why, no matter how fast CPU speeds have gotten, the user experience has not gotten commensurately more responsive.
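
To put rough numbers on that (order-of-magnitude ballpark figures of mine, not from Kay's talk): a trip to main memory commonly costs on the order of 100 ns, while one cycle of a 2.4 GHz core is well under half a nanosecond, so a single cache miss can stall the processor for a couple hundred cycles:

\[ t_{\text{cycle}} = \frac{1}{2.4\ \text{GHz}} \approx 0.42\ \text{ns}, \qquad \frac{100\ \text{ns}}{0.42\ \text{ns}} \approx 240\ \text{cycles}. \]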

The revenge of data processing

The section of Kay’s speech where he talks about “embarrassing questions” from his wife (the slide is subtitled “Two cultures in computing”) gets to a complaint I’ve had for a while now. There have been many times while I’m writing for this blog when I’ve tried to “absent-mindedly” put the cursor somewhere on a preview page and start editing, but then realize that the technology won’t let me do it. You can always tell when a user interaction issue needs to be addressed when your subconscious tries to do something with a computer and it doesn’t work.

Speaking about her GUI apps., his wife said, "In these apps I can see and do full WYSIWYG authoring." She said about web apps., "But with this stuff in the web browser, I can't–I have to use modes that delay seeing what I get, and I have to start guessing," and, "I have to edit through a keyhole." He said what's even more embarrassing is that the technology she likes was invented in the 1970s (things like word processing and desktop publishing in a GUI with WYSIWYG–aspects of the personal computing model), whereas the stuff she doesn't like was invented in the 1990s (the web/terminal model). Actually I have to quibble a bit with him about the timeline.

“The browser = next generation 3270 terminal”
3270 terminal image from Wikipedia.org

The technology she doesn’t like–the interaction model–was invented in the 1970s as well. Kay mentioned 3270 terminals earlier in his presentation. The web browser with its screens and forms is an extension of the old IBM mainframe batch terminal architecture from the 1970s. The difference is one of culture, not time. Quoting from the Wikipedia article on the 3270:

[T]he Web (and HTTP) is similar to 3270 interaction because the terminal (browser) is given more responsibility for managing presentation and user input, minimizing host interaction while still facilitating server-based information retrieval and processing.

Applications development has in many ways returned to the 3270 approach. In the 3270 era, all application functionality was provided centrally. [my emphasis]

It’s true that the appearance and user interaction with the browser itself has changed a lot from the 3270 days in terms of a nicer presentation (graphics and fonts vs. green-screen text), and the ability to use a mouse with the browser UI (vs. keyboard-only with the 3270). It features document composition, which comes from the GUI world. The 3270 did not. Early on, a client scripting language was added, Javascript, which enabled things to happen on the browser without requiring server interaction. The 3270 had no scripting language.

There's more freedom than the 3270 allowed. People can point their browsers anywhere they want. A 3270 was designed to be hooked up to one IBM mainframe, and the user was not allowed to "roam" on other systems with it, except if given permission via mainframe administration policies. What's the same, though, is the basic interaction model of filling in a form on the client end, sending that information to the server in a batch, and then receiving a response form.
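
To make the "batch" interaction model concrete, here's a minimal sketch in Python (hypothetical code standing in for any form-based web app, not for an actual 3270 system): the server hands the client a whole form, waits, and then answers a complete submission with a whole new page.

```python
# A minimal form-in, page-out server: the 3270-style batch interaction model.
# Hypothetical illustration only.
from html import escape
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

FORM_PAGE = (b"<html><body><form method='POST' action='/'>"
             b"Name: <input name='name'><input type='submit' value='Send'>"
             b"</form></body></html>")

class FormHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server sends the entire form; nothing else happens until submit.
        self._send_page(FORM_PAGE)

    def do_POST(self):
        # The filled-in form arrives as one batch; the reply is a whole new page.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        name = escape(fields.get("name", ["(blank)"])[0])
        self._send_page(f"<html><body>Hello, {name}</body></html>".encode())

    def _send_page(self, body):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), FormHandler).serve_forever()
```

Contrast that round trip with the immediacy Kay describes next: in an authoring environment, every keystroke is reflected in the thing you're working on, with no submit step in between.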

The idea that Kay brought to personal computing was immediacy. You immediately see the effect of what you are doing in real time. He saw it as an authoring platform. The browser model, at least in its commercial incarnation, was designed as a form submission and publishing platform.

Richard Gabriel wrote extensively about the difference between using an authoring platform and a publishing platform for writing, and its implications from the perspective of a programmer, in The Art of Lisp and Writing. The vast majority of software developers work in what is essentially a “publishing” environment, not an authoring environment, and this has been the case for most of software development’s history. The only time when most developers got closer to an authoring environment was when Basic interpreters were installed on microcomputers, and it became the most popular language that programmers used. The old Visual Basic offered the same environment. The interpreter got us closer to an authoring environment, but it still had one barrier: you either had to be editing code, or running your program. You could not do both at the same time, but there was less of a wait to see your code run because there was no compile step. Unfortunately with the older Basics, the programmer’s power was limited.

People’s experience with the browser has been very slowly coming back towards the idea of authoring via. the AJAX kludge. Kay showed a screenshot of a more complete authoring environment inside the browser using Dan Ingalls’s Lively Kernel. It looks and behaves a lot like Squeak/Smalltalk. The programming language within Lively is Javascript.

We’ve been willing to sacrifice immediacy to get rid of having to install apps. on PCs. App. installation is not what Kay envisioned with the Dynabook, but that’s the model that prevailed in the personal computer marketplace. You know you’ve got a problem with your thinking if you’re coming up with a bad design to compensate for past design decisions that were also bad. The bigger problem is that our field is not even conscious that the previous bad design was avoidable.

Kay said, “A large percentage of the main software that a lot of people use today, because it’s connected to the web, is actually inferior to stuff that was done before, and for no good reason whatsoever!”

I agree with him, but there are people who would disagree on this point. They are not enlightened, but they’re a force to be reckoned with.

What's happened with the web is the same thing that happened to PCs: the GUI was seen as a "good idea" and was grafted onto what has been essentially a minicomputer mindset, and then a mainframe mindset. Blogger Paul Murphy used to write, from a business perspective, about how there are two cultures in computing/IT that have existed for more than 100 years. The oldest culture that's operated continuously throughout this time is data processing. This is the culture that brought us punch cards, mainframes, and 3270 terminals. It brought us the idea that the most important function of a computer system is to encode data, retrieve it on demand, and produce reports from automated analysis. The other culture is what Murphy called "scientific computing," and it's closer to what Kay promotes.

The data processing culture has used web applications to recapitulate the style of IT management that existed 30 years ago. Several years ago, I heard from a few data processing believers who told me that PCs had been just a fad, a distraction. “Now we can get back to real computing,” they said. The meme from data processing is, “The data is more important than the software, and the people.” Why? Because software gets copied and devalued. People come and go. Data is what you’ve uniquely gathered and can keep proprietary.

I think this is one reason why CS enrollment has fallen. It’s becoming dead like Latin. What’s the point of going into it if the knowledge you learn is irrelevant to the IT jobs (read “most abundant technology jobs”) that are available, and there’s little to no money going into exciting computing research? You don’t need a CS degree to create your own startup, either. All you need is an application server, a database, and a scripting/development platform, like PHP. Some university CS departments have responded by following what industry is doing in an effort to seem relevant and increase enrollment (ie. keep the department going). The main problem with this strategy is they’re being led by the nose. They’re not leading our society to better solutions.

What’s natural and what’s better

Kay tried to use some simple terms for the next part of his talk to help the audience relate to two ideas:

He gave what I think is an elegant graphical representation of one of the reasons things have turned out the way they have. He talked about modes of thought, and said that 80% of people are outer-directed and instrumental reasoners. They only look at ideas and tools in the context of whether it meets their current goals. They’re very conservative, seeking consensus before making a change. Their goals are the most important thing. Tools and ideas are only valid if they help meet these goals. This mentality dominates IT. Some would say, “Well, duh! Of course that’s the way they think. Technology’s role in IT is to automate business processes.” That’s the kind of thinking that got us to where we are. As Paul Murphy has written previously, the vision of “scientific computing,” as he calls it, is to extend human capabilities, not replace/automate them.

Kay said that 1% of people look at tools and respond to them, reconsidering their goals in light of what they see as the tool’s potential. This isn’t to say that they accept the tool as it’s given to them, and adjust their goals to fit the limitations of that version of the tool. Rather, they see its potential and fashion the tool to meet it. Think about Lively Kernel, which I mentioned above. It’s a proof of this concept.

There’s a symbiosis that takes place between the tool and the tool user. As the tool is improved, new ideas and insights become more feasible, and so new avenues for improvement can be explored. As the tool develops, the user can rethink how they work with it, and so improve their thinking about processes. As I’ve thought about this, it reminds me of how Engelbart’s NLS developed. The NLS team called it “bootstrapping.” It led to a far more powerful leveraging of technology than the instrumental approach.

The “80%” dynamic Kay described happened to NLS. Most people didn’t understand the technology, how it could empower them, and how it was developed. They still don’t. In fact Engelbart is barely known for his work today. Through the work that was done in the early days of Xerox PARC, a few of his ideas managed to get into technology that we’ve used over the years. He’s most known now for the invention of the mouse, but he did much more than that.

In the instrumental approach the goals precede the tools, and the whole approach is actually much more conservative than that of the 1-percenters. In the instrumental scenario the goals people have are similar whether the tools exist or not, and so there's no potential for this human-tool interaction to provide insight into what might be better goals.

Kay talked about this subject at Rebooting Computing, and I think he said that a really big challenge in education is to get students to shift from the “80%” mindset to one of the other modes of thought that have to do with deep thinking, at least exposing students to this potential within themselves. I think he would say that we’re not predisposed to only think one way. It’s just that left to our own devices we tend to fall into one of these categories.

I base the following on the notes Kay had on his slides in his speech:

He said that in the commercial computing realm (which is based on the way mainframes were sold to the public years ago) it’s about “news”: The emphasis is on functionality, and people as components. This approach also says you get your hardware, OS, programming language, tools, and user interface from a vendor. You should not try to make your own. This sums up the attitude of IT in a nutshell. It’s not too far from the mentality of CS in academia, either.

The "new" in the 1970s was a focus on the end-user and on what they can learn and do. "We should design how user interactions and learning will be done and work our way down to the functions they need!" In other words, rather than thinking functionality-first, think user-first–in the context of the computer being a new medium, extending human capabilities. Aren't "functionality" and "user" equally high priorities? Users just want functionality from computers, don't they? Ask yourself this question: Is a book's purpose only to provide functionality? The idea at Xerox PARC was to create a friendly, inviting, creative environment that could be explored, built on, and used to create a better version of itself–an authoring platform, in a very deep sense.

Likewise, with Engelbart’s NLS, the idea was to create an information sharing and collaboration environment that improved how groups work together. Again, the emphasis was on the interaction between the computer and the user. Everything else was built on that basis.

By thinking functionality-first you turn the computer into a device, or a device server, speaking metaphorically. You're not exposing the full power of computing to the user. I'm going to use some analogies which Chris Crawford has written about previously. In the typical IT mentality the idea is this: it's too dangerous to expose the full power of computing to end users. They'll mess things up either on purpose or by mistake. Or they'll steal or corrupt proprietary information. Such low expectations… It's like they're children or illiterate peasants who can't be trusted with the knowledge of how to read or write. They must be read to aloud rather than reading for themselves. Most IT environments believe that they cannot be allowed to write, for they are not educated enough to write in the computer's complex language system (akin to hieroglyphics). Some allow their workers to do some writing, but in limited, and not very expressive languages. This "limits their power to do damage." If they were highly educated, they wouldn't be end users. They'd be scribes, writing what others will hear. Further, these "illiterates" are never taught the value of books (again, using a metaphor). They are only taught to believe that certain tomes, written by "experts" (scribes), are of value. It sounds Medieval if you ask me. Not that this is entirely IT's fault. Computer science has fallen down on the job by not creating better language systems, for one thing. Our whole educational/societal thought structure also makes it difficult to break out of this dynamic.

The “news” way of thinking has fit people into specialist roles that require standardized training. The “new” emphasized learning by doing and general “literacy.”

We’re in danger of losing it

Everyone interested in seeing what the technology developed at ARPA and Xerox PARC (the internet, personal computing, object-oriented programming) was intended to represent should pay special attention to the part of Kay’s speech titled “No Gears, No Centers: ARPA/PARC Outlook.” Kay told me about most of this a couple years ago, and I incorporated it into a guest post I wrote for Paul Murphy’s blog, called “The tattered history of OOP” (see also “The PC vision was lost from the get-go”). Kay shows it better in his presentation.

I think the purpose of Kay’s criticism of what exists now is to point out what we’ve lost, and continue to lose. Since our field doesn’t understand the research that created what we have today, we’ve helplessly taken it for granted that what we have, and the method of conservative incremental improvement we’ve practiced, will always be able to handle future challenges.

Kay said that because of what he’s seen with the development of the field of computing, he doesn’t believe what we have in computer science is a real field. And we are in danger of losing altogether any remnants of the science that was developed decades ago. This is because it’s been ignored. The artifacts that are based on that research are all around us. We use them every day, but the vast majority of practitioners don’t understand the principles and outlook that made it possible to create them. This isn’t a call to reinvent the wheel, but rather a qualitative statement: We’re losing the ability to make the kind of leap that was made in the 1970s, to go beyond what that research has brought us. In fact, in Kay’s view, we’re regressing. The potential scenario he paints reminds me of the science fiction stories where people use a network of space warping portals for efficient space travel and say, “We don’t know who built it. They disappeared a long time ago.”

I think there are three forces that created this problem:

The first was the decline in funding for interesting and powerful computing research. What's become increasingly clear to me from listening to CS luminaries, who were around when this research was being done, is that the world of personal computing and the internet we have today is an outgrowth–you could really say an accident–of defense funding that took place during the Cold War–ARPA, specifically research funded by the IPTO (the Information Processing Techniques Office). This isn't to say that ARPA was like a military operation. Far from it. When it started, it was staffed by academics, and they were given wide latitude in how they did their research on computing. An important ingredient of this was the expectation that researchers would come up with big ideas, ones that were high risk. We don't see this today.

The reason this computing research got as much funding as it did was the Sputnik launch. It woke Americans up to the fact that mathematics, science, and engineering were important, because of the perception that they were important for the defense of the country. The U.S. knew that the Soviets had computer technology they were using as part of their military operations, and the U.S. wanted to be able to compete with them in that realm.

In terms of popular perception, there’s no longer a sense that we need technical supremacy over an enemy. However, computing is still important to national defense, even taking the war against Al Qaeda into account. A few defense and prominent technology leaders in the know have said that Al Qaeda is a very early adopter of new technologies, and an innovator in their use for their own ends.

In 1970 computing research split into part government-funded and part private, with some ARPA researchers moving to the newly created PARC facility at Xerox. This is where they created the modern form of personal computing, some aspects of which made it into the technologies we’ve been using since the mid-1980s. The groundbreaking work at PARC ended in the early 1980s.

The second force, which Kay identified, was the rise of the commercial market in PCs. He’s said that computers were introduced to society before we were ready to understand what’s really powerful about them. This market invited us in, and got us acquainted with very limited ideas about computing and programming. We went into CS in academia with an eye towards making it like Bill Gates (never mind that Gates is actually a college drop-out), and in the process of trying to accommodate our limited ideas about computing, the CS curriculum was dumbed down. I didn’t go into CS for the dream of becoming a billionaire, but I had my eye on a career at first.

It's been difficult for universities in general to resist this force, because it's pretty universal. The majority of parents of college-bound youth have believed for decades that a college degree means a secure economic future–meaning a job, a career that pays well. It's not always true, but it's believed in our culture. This sets expectations for universities to be career "launching pads" and centers for social networking, rather than institutions that develop a student's faculties and ability to think, and that expose them to high-level perspectives that improve their perception.

The third force is, as Paul Murphy wrote, the imperative of IT since the earliest days of the 20th century, which is to use technology to extend and automate bureaucracy. I don't see bureaucracy as a totally bad thing. I think of Alexander Hamilton's quote about public debt, rephrased as, "A bureaucracy, if not excessive, will be to us a blessing." However, in its traditional role it creates and follows rules and procedures formulated in a hierarchy. There's accountability for meeting directives, not broad goals. Its thinking is deterministic and algorithmic. IT, being traditionally a bureaucratic function, models this (we should be asking ourselves, "Does it have to be limited to this?"). My sense of it is that CS responded to the economic imperatives. It felt the need to lean towards what industry was doing, and how it thinks. The way it tried to add value was by emphasizing optimization strategies. The difference was that it used to have a strong belief in staying true to some theoretical underpinnings, which somewhat counter-balanced the strong pull from industry. That resolve is slipping.

Kay and I have talked a little about math and science education. He said that our educational system has long viewed math and science as important, and so these subjects have always been taught, but our school system hasn’t understood their significance. So the essential ideas of both tend to get lost. What the computer brings to light as a new medium is that math (as mathematicians understand it, not how it’s typically taught) and science are essential for understanding computing’s potential. They are foundations for a new literacy. Without this perspective, and the general competencies in math and science to support it, both students and universities will likely continue the status quo.

Outlook: The key ingredient

Kay’s focus on outlook really struck me, because we have such an emphasis in our society on knowledge and IQ. My view of this has certainly evolved as I’ve gotten into my studies.

Whenever you interview for a job in the IT industry you get asked about your knowledge, and maybe some questions that probe your management skills. In technology companies that actually invent stuff you are probed for your mental faculties, particularly at tech companies that are known as “where the tech whizzes work.” I had an interview 5 years ago with a software company where the employer gave me something that resembled an IQ test. I have not seen nor heard of a technology company that asks questions about one’s outlook.

Kay said,

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that. And in contrast to these two, IQ is just a big lump of lead. It’s one of the worst things in our field that we have clever people in it, because like Leonardo [da Vinci] none of us is clever enough to deal with the scaling problems that we’re dealing with. So we need to be less clever and be able to look at things from better points of view.

This is truly remarkable! I have never heard anyone say this, but I think he may be right. All of the technology that has been developed, that has been used by millions of people, and has been talked about lo these many years was created by very smart people. More often than not what’s been created, though, has reflected a limited vision.

Given the way society has perceived computing, I doubt Kay's vision of human-computer interaction would've been able to fly in the commercial marketplace, because as he's said, we weren't ready for it. In my opinion the marketplace is even less ready for it now. But certainly the personal and networked computing vision that was developed at PARC could've been implemented in machines that the consuming public could've purchased decades ago. I think that's where one could say, "You had your chance, but you blew it." Vendors didn't have the vision for it.

What’s needed in the future

One of Kay’s last slides referred to Marvin Minsky. I am not familiar with Minsky, and I guess I should be. He said that in terms of software we need to move from a “biology of systems” (architecture) to a “psychology of systems” (which I’m going to assume for the moment means “behavior”). I don’t really know about this, so I’m just going to leave it at that.

In a chart titled “Software Has Fallen Short” Kay made a clearer case than he has in the past for not idolizing the accomplishments that were made at ARPA/Xerox PARC. In the past he’s tried to discourage people from getting too excited about the PARC stuff, in some cases downplaying it as if it’s irrelevant. He’s always tried to get people to not fall in love with it too much, because he wanted people to improve upon it. He used to complain that the Lisp and Smalltalk communities thought that these languages were the greatest thing, and didn’t dream of creating anything better. Here he explains why it’s important to think beyond the accomplishments at PARC: It’s insufficient for what’s needed in the future. In fact the research is so far behind that the challenges are getting ahead of what the current best research can deal with.

He said that the PARC stuff is now “news,” and my guess is he means “it’s been turned into news.” He talked about this earlier in the speech. The essential ideas that were developed at PARC have not made it into the computing culture, with the exception of the internet and the GUI. Instead some of the “new” ideas developed there have been turned into superficial “news.”

Those who are interested in thinking about the future will find the slide titled “Simplest Idea” interesting. Kay threw out some concepts that are food for thought.

A statement that Kay closed with is one he’s mentioned before, and it depresses me. He said that when he’s traveled around to universities and met with CS professors, they think what they’ve got “is it.” They think they’re doing real science. All I can do is shake my head in disbelief at that. Even my CS professors understood that what they were teaching was not a science. For the most part what passes for CS now is no better than what I had, except at the few top schools.

Likewise, I’ve heard accounts of some enterprise software leaders who (mistakenly) think they understand software engineering, and that it’s now a fully mature engineering discipline.

Coming full circle

Does computer science have a future? I think, as before, computing is going to be at the mercy of events and of people’s emotional perceptions of its importance, because the vast majority of people don’t understand what computing’s potential is. Today we see it as “current,” not “future.” I think people’s perception of computing will become more remote. Computing will be in devices we use every day, as we can already see, and we will think of them as digital devices, where they used to be analog.

“Digital” has become the new medium, not computing. Computing has been overlaid with analog media (text, graphics, audio, video). Kay has said for a while now that this is how it is with new media: when one arises, the generation that’s around doesn’t see its potential. They see it as the “new old thing” and just use it to optimize what they did before. Computing for now is just the magical substrate that makes “digital” work. The essential element that makes computing powerful and dynamic has been pushed to the side, with a couple of exceptions, as it often has been.

Kay didn’t talk about this in the video, but he and I have discussed it previously. There used to be a common concept in computing, which has come to be called “generative technology.” It used to be a common expectation that digital technology was open ended: you could do with it what you wanted. This idea has been severely damaged by the way the web has developed. First, commercial PCs, which were designed to be used alone and on LANs, were thrust onto the internet running native code and careless programming that didn’t anticipate security risks. In addition, identities became more loosely managed, which became a problem as people figured out how to hide behind pseudonyms. Second, the web browser wasn’t initially designed as a programmable medium. A scripting language was eventually added to browsers, but it was not made accessible to users through the browser itself, and it wasn’t designed very well, either.

Many security catastrophes ensued, each one eroding people’s confidence in the safety of the internet and tarnishing the image of programming. People who didn’t know how to program, much less how the web worked, felt like they were the victims of those who did know how to program (though this went all the way back to stand-alone PCs, when mischievous and destructive viruses circulated in infected executables on BBSes and floppy disks). Programming became seen as a suspicious activity rather than a creative one. And so, as vendors put up defensive barriers to try to compensate for their own flawed designs, it only reinforced the design decision that was made when the commercial browser was conceived: that programming would be restricted to people who worked for service providers. Ordinary users should neither expect to gain access to code to modify it, nor want to. It’s gotten to the point we see today where even text formatting on message boards is restricted. HTML tags are restricted or banned altogether in favor of special codes, and you can’t use CSS at all. Programming by the public on websites is usually forbidden, because it’s seen as a security risk, and most web operators don’t see the value of letting people program on their sites.
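To make the “special codes” point concrete, here is a minimal sketch of the defensive approach most message boards take. This is my own illustration, not any particular forum’s implementation; the [b] and [i] codes and the function names are just assumptions for the example. Everything the user types is escaped first, so raw HTML (including any embedded script) is neutralized, and only a small whitelist of codes is translated back into markup.

```python
import html
import re

# Hypothetical whitelist of allowed formatting codes ([b] and [i] are just examples)
BBCODE_RULES = [
    (re.compile(r"\[b\](.*?)\[/b\]", re.DOTALL), r"<strong>\1</strong>"),
    (re.compile(r"\[i\](.*?)\[/i\]", re.DOTALL), r"<em>\1</em>"),
]

def render_post(user_text: str) -> str:
    """Escape everything, then re-enable only the whitelisted codes."""
    safe = html.escape(user_text)  # any raw <script>, <style>, etc. becomes inert text
    for pattern, replacement in BBCODE_RULES:
        safe = pattern.sub(replacement, safe)
    return safe

if __name__ == "__main__":
    print(render_post("[b]hello[/b] <script>alert('xss')</script>"))
    # -> <strong>hello</strong> &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```

The design choice is telling: rather than giving users a safe subset of the real medium, the site substitutes a tiny, non-extensible vocabulary of its own.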

Kay complained that despite the initial idealistic visions of how the internet would develop, it’s still difficult or impossible to send a simulation to somebody on a message board, or through e-mail, to show the framework for a concept. What I think he had envisioned for the internet is a multimedia environment in which people could communicate in all sorts of media: text, graphics, audio, video, and simulations–in real time. Specialized software has made this possible using services you can pay for, but it’s still not something that people can universally access on the internet.

To get to the future we need to look at the world differently

I agree with the sentiment that in order to make the next leaps we need to not accept the world as it appears. We have to get out of what we think is the current accepted reality, what we’ve put up with and gotten used to, and look at what we want the world to be like. We should use that as inspiration for what we do next. One strategy to get that started might be to look at what we find irritating about what currently exists, what we would like to see changed, and think about it from a fundamental, structural perspective. What are some things you’ve tried to do, but given up on, because when you tried to use the tools or resources available they just weren’t up to the task? What sort of technology (perhaps a kind that doesn’t exist yet) do you think would do the job?

For further reading/exploring:

A complete system (apps. and all) in 20KLOC – Viewpoints Research

Demonstration of Lively Kernel, by Dan Ingalls

Edit 8-21-2009: A reader left a comment on an old post I wrote on Lisp and Dijkstra, quoting a speech of Dijkstra’s from 10 years ago, titled “Computing Science: Achievements and Challenges.” Towards the end of his speech he bemoaned the fact that CS in academia and industry in the U.S. had rejected the idea of proving program correctness. He attributed this to an increasing mathematical illiteracy here. I think his analysis of cause and effect is wrong. My own CS professors liked the discipline that proofs of program correctness provided, but they rejected the idea that all programs can and should be proved correct. Dijkstra held the view that CS should only focus on programs that could be proved correct.
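For readers who haven’t encountered the idea, here is a toy illustration of what “proving a program correct” means in the Dijkstra/Hoare tradition. It’s my own example, not taken from Dijkstra’s speech or Kay’s talk; the asserts simply restate the precondition, loop invariant, and postcondition that a formal proof would discharge.

```python
# A toy correctness argument: state a precondition, a loop invariant, and a
# postcondition, then check that each step of the program preserves them.
def integer_sum(n: int) -> int:
    """Return 0 + 1 + ... + n.

    Precondition:  n >= 0
    Postcondition: result == n * (n + 1) // 2
    """
    assert n >= 0                           # precondition
    total, i = 0, 0
    # Loop invariant: total == i * (i - 1) // 2 and 0 <= i <= n + 1
    while i <= n:
        total += i
        i += 1
        assert total == i * (i - 1) // 2    # invariant preserved by each iteration
    # On exit i == n + 1, so the invariant yields the postcondition.
    assert total == n * (n + 1) // 2        # postcondition
    return total
```

Dijkstra argued that the proof and the program should be developed hand in hand, rather than checks like these being bolted on afterward.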

I think Dijkstra’s critique of anti-intellectualism in the U.S. is accurate, however, including our aversion to mathematics, and I found that it answered some questions I had about what I wrote above. It also gets to the heart of one of the issues I harp on repeatedly in my blog. His third bullet point is most prescient. Quoting Dijkstra:

  • The ongoing process of becoming more and more an amathematical society is more an American specialty than anything else. (It is also a tragic accident of history.)
  • The idea of a formal design discipline is often rejected on account of vague cultural/philosophical condemnations such as “stifling creativity”; this is more pronounced in the Anglo-Saxon world where a romantic vision of “the humanities” in fact idealizes technical incompetence. Another aspect of that same trait is the cult of iterative design.
  • Industry suffers from the managerial dogma that for the sake of stability and continuity, the company should be independent of the competence of individual employees. Hence industry rejects any methodological proposal that can be viewed as making intellectual demands on its work force. Since in the US the influence of industry is more pervasive than elsewhere, the above dogma hurts American computing science most. The moral of this sad part of the story is that as long as computing science is not allowed to save the computer industry, we had better see to it that the computer industry does not kill computing science. [my emphasis]

Edit 1-2-2014: Alan Kay gave a presentation called “Normal Considered Harmful” in 2012, at the University of Illinois, where he repeated some of the same topics as his 2009 speech above, but he clarified some of the points he made. So I thought it would be valuable to refer people to it as well.

—Mark Miller, https://tekkie.wordpress.com


“We will restore science to its rightful place”
— President Barack Obama at his inaugural address

I heard this past weekend that the EPA has classified carbon dioxide as a pollutant that is hazardous to public health, and therefore needs to be regulated. What I feel is being left out of the discussion is that this much-maligned gas is plant food. I assume we all learned about the process of photosynthesis in high school biology. In our society we apparently talk a lot about being “green,” but it appears to me that the EPA’s decision is actually anti-green.

Here’s a refresher on the process of photosynthesis.

Carbon dioxide is essential for plant growth

More carbon dioxide means greater biomass in plants (more plant growth)

There are a couple of interesting things to note. One is that CO2 is heavier than air, so it has a tendency to sink towards the Earth. The Greenhouse Effect takes place in the upper atmosphere, and I imagine there’s a bit of CO2 up there. The amount of it in the atmosphere as a whole is minute, about 380 parts per million (roughly 0.038%). Here’s a decent article on the chemical composition of the atmosphere, and what’s known about its evolution since the Earth was first formed. It’s a bit old; it listed CO2 at 360 PPM at the time it was written.

Secondly, the chemical equation for photosynthesis (see the 2nd link, called “process of photosynthesis”) shows that one molecule of oxygen is produced for each molecule of carbon dioxide consumed in the process. So more CO2 at the start will eventually produce a more oxygen-rich environment.
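For reference, the standard textbook summary equation is:

$$6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy} \;\rightarrow\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}$$

Six molecules of carbon dioxide and six of water go in; one glucose molecule and six molecules of oxygen come out.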

I understand there are concerns about global warming stemming from rising CO2 levels, but this side of the science is left out of the discussion, and it shouldn’t be. CO2 has a good side as well, and what these videos show in the small is that our biosphere has a natural response to higher carbon dioxide levels. It is absorbed into the bodies of plants and more oxygen is produced. We can think of plants as our natural carbon scrubbers. More than that, more plant growth leads to more food for us and animal life.
