Archive for the ‘flame’ Category

Back in 2009 I wrote a post called “Getting an education in America.” I went through a litany of facts which indicated we as a society were missing the point of education, and wasting a lot of time and money on useless activity. I made reference to a segment with John Stossel, back when he was still a host on ABC’s 20/20, talking about the obsession we have that “everyone must go to college.” One of the people Stossel interviewed was Marty Nemko, who made a few points:

  • The bachelor’s degree is the most overvalued product in America today.
  • The idea marketed by universities that you will earn a million dollars more over a lifetime with a bachelor’s than with a high school diploma is grossly misleading.
  • The “million dollar” figure is based on statistics of ambitious high achievers who just so happened to have gone to college, but who didn’t require the education to be successful. Their success is misattributed to their education, and it’s unethical for universities to keep using the figure in their sales pitch, saying, “It doesn’t matter what you major in.”

It turns out Nemko has his own channel on YouTube. I happened to find a video of his from 2011 that really fleshes out the points he made in the 20/20 segment. What he says sounds very practical to me, and I encourage high school students, and their parents, to watch it before deciding whether to go to college.

Nemko talks about what college really is these days: a business. He talks about how this idea that “everyone must go to college” has created a self-defeating proposition: Now that so many more people, in proportion to the general population, are getting bachelor’s degrees, getting one is not a distinction anymore. It doesn’t set you apart as someone who is uniquely skilled. He now advises that if you want the distinction that used to come from a bachelor’s, you should get a master’s degree. He talks about the economics of universities, and where undergraduates fit into their cost structure. This is valuable information to know, since students are going to have to deal with these realities if they go to college.

It’s not an issue of rejecting college, but of assessing whether it’s really worth it for you. He also outlines some other possibilities that could serve you a lot better, if what motivates you is not well suited to a 4-year program.

Nemko lays out his credentials. He’s gotten a few university degrees himself, and he’s worked at high levels within universities. He’s not just some gadfly who badmouths them. I think he knows of what he speaks. Take a listen.


I saw this from Star Parker today, and I think she makes an excellent point about the role of government:

There is no way around the fact that freedom and prosperity only exist when government protects property, and this includes our money.

I would add to this, “protects the lives of individuals, and respects contracts,” as well as property, but she is on to something. Since the financial crash of 2008 our government has taken on the role of protector of our economy, even though the majority of Americans have not wanted it to do this. It has propped up companies, at best providing a cushion to tide them through their restructuring, and at worst giving them a false sense of value. It has been focused for too long on the conceit that it can produce favorable outcomes, and when I say this in particular, I mean it far beyond this discussion. It goes above and beyond protecting property. Protecting property means protecting it from harm inflicted by fraud, thieves, saboteurs, and vandals–external threats, not harm that is self-inflicted. What the government has been presuming is that it can save us from ourselves.

Even though the Federal Reserve has wanted to deny that it is devaluing the Dollar, it has been doing so by creating money and then exchanging it for Treasuries, to finance our dramatically expanding public debt, and/or toxic mortgage securities. We haven’t been feeling the effects of this so much, because banks have not been lending that much into the private economy, but someday this will change. This policy has had effects abroad, which we have been feeling indirectly. Nevertheless, this is not protecting our property–specifically, our money!

What I’ve been increasingly realizing is that if we are to get back to prosperity, our government should stop trying to save us from ourselves, stop trying to manage the economy, and get back to its raison d’être, and the Fed should adopt a new policy of protecting the value of the Dollar. If both were to do this, it would likely cause some scary and unpleasant results in the short term, but if people can see where the real problems are in the economy, then those problems are more likely to be resolved.


The title of this post is from a verbal gaffe that Dan Quayle committed when he gave a speech at the United Negro College Fund (now called “UNCF,” their slogan being, “A mind is a terrible thing to waste”) when he was Vice-President. I use it as a symbolic way of introducing this subject.

I came upon the following videos on YouTube. They are a dramatization of Ayn Rand’s thoughtful rant (nay, “indictment” is more like it) against our society’s promotion and acceptance of irrationality, delivered through her character John Galt in her novel, “Atlas Shrugged.” It’s called “This is John Galt speaking…,” performed by Christopher Hurt, with video added by Richard Gleaves.

I am not wholeheartedly endorsing Rand’s Objectivist philosophy, but I agree strongly with her criticism of our society in the broadest sense. At times I have felt like screaming some of these criticisms, because I have seen the ignorance described, which seems impermeable, and I understand some things about the destructiveness it can produce. Screaming about it does little good, though. I am reminded of what Adlai Stevenson said of Eleanor Roosevelt, that she’d rather light a candle than curse the darkness.

John Galt’s speech is provocative, but it is provocation with a purpose: to get people to think about what has produced our modern world, and its problems, to think about the causes, not just the effects, and to dispel the notion that it all comes about by magic, or should be taken for granted. That’s always valuable, to get a reality check. The reason I feature this rant is not to sway people towards a particular point of view, but to say that even though in our private lives we may find it valuable to hold beliefs in the supernatural, whether they be based in religious or secular views, such beliefs have real consequences for the health of our society when they are brought into the realm of politics, because they influence policy in unhealthy directions.

I am not putting all parts of Galt’s monologue here (the original dramatization has 18 parts), but certain key parts that I found thought-provoking, and valuable to share. I have long been interested in what creates and sustains modern civilization, and I think the Objectivist philosophy, as portrayed here, is an important piece of that, but I found it too limiting to be all-encompassing. In my encounters with philosophy, I’ve always found that materialism of any sort is too limiting as a singular governing principle for society. I would classify Objectivism as a “libertarian materialism.” I see it as just something to think about and consider.

Rand goes after all purveyors of irrationality in her time, but she seems to reserve particular scorn for mystics of all stripes, and Catholicism. I find her criticism valuable from an anthropological perspective. If you take out the labels of different political systems and religions, and just look at their characteristics, it’s easier to see why those characteristics are probably destructive, as opposed to thinking that a particular instance of those characteristics, with a label, is destructive. That’s missing the forest for the trees.

Richard Gleaves used his own imagery and audio to illustrate what Galt was talking about. I do not agree with all of the imagery used, particularly regarding religion. It gives one the sense that all religion is like what is portrayed. I can say from experience that it’s not. Not all sects demand thoughtless obedience and sacrifice, though some popular forms of religion do promote this, and I agree with the specific criticism against that.

Rand seems to attack most forms of authority, a view I don’t agree with. I would just promote the idea of skepticism of authority.

The premise of this monologue is that the society in Rand’s fictional tale has collapsed, and a character named John Galt, whom people in the story have wondered about, reveals himself to the world, telling everyone why society has collapsed, and how to bring it back to life.

What’s amazing to note is that Rand wrote all this in 1957, and that the concepts she talked about apply much more today than they did then. Though it was fictional, she wrote the story as an allegory, a warning to America. She said she saw troubling trends when she wrote it that she predicted would grow in impact on this country as time passed. I think she was right to see it that way.

Edit 11-28-2013: Gleaves deleted the videos I had been using here, and created a new series on the same monologue. So I’ve updated the videos I’ve used here with his new set of videos.

Part 1: This is John Galt Speaking

Part 4: The Standard of Morality

Part 5: Free Will

Part 5 is my favorite out of the whole series. Gleaves uses clips from the movie, “The Miracle Worker.” The way this was put together is poetic. As I watched it, I reflected on myself. At times I feel like Helen Keller’s teacher, trying to reach others. At other times I feel like Helen herself, going for long stretches feeling lost, mystified, and babbling about nothing of much value. Then I have experiences that feel like her at the water pump. The connection is made, and POW! Realization! The joy I feel afterward is like her running around, seeing a little better, taking it all in with a voracious hunger. Wonderful.

Part 7: Emotions

Part 13: Death Worship

Part 14: Utopia and Objectivity

Part 15: The Mystics of Muscle

Part 16: Nihilism

Part 17: Who is John Galt

Part 18: Necessary Evil & Paradise Lost

In a way, this post is a follow-up to an interview with Judy Shelton I featured on here about a year ago. She expressed concern that, with the bent the U.S. government has now, business owners, the people who create wealth, will eventually go on strike, or “go Galt,” because society no longer appreciates the personal risks they take to create products, services, and jobs.

What I really like about this is it doesn’t just complain about society, but illustrates the difference between a non-thinking society and a thinking society, and that this difference matters a great deal. The hope is that people will “wake up” and realize that this “dream” of certainty they’ve been in is not all it’s cracked up to be. While there will always be things we don’t know, there’s a lot less that’s “unknowable” than people think, and it would behoove us to find out as much about what’s really going on as possible, because it DOES affect us.

Like I said, this philosophy is not all-inclusive in terms of the important things that make up a functioning modern society. One thing it neglects is the fact that “intellectual life” is not just in the private sector. It’s also in our universities, at least in some holdouts. There is a healthy element of competition in this system, but in a well-functioning system of this sort, the goal should not just be profit. An unfortunate fact I’ve been reading about is that universities are increasingly seeing profit as a primary goal. This narrows the focus of academic study significantly, and not always to good ends. It’s not just happening here. It’s happening in the UK as well.

So while I think Objectivism provides a valuable message to consider, I think it’s good to keep in mind that it is a vantage point from which one can be jarred, and see reality a little better, but that there are other valuable intellectual perspectives to explore and keep in mind as well.


“Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. … The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
— from President Dwight Eisenhower’s farewell address, Jan. 17, 1961

Before I begin, I’m going to recommend right off a paper called “Climate science: Is it currently designed to answer questions?”, by Dr. Richard Lindzen, as an accompaniment to this post. It really lays out a history of what’s happened to climate science, and a bit of what’s happened to science generally, during the post-WW II period. I was surprised that some of what he said was relevant to explaining what happened to ARPA computer science research as it entered the decade of the 1970s, and thereafter, though he doesn’t specifically talk about it. The footnotes make interesting reading.

The issue of political influence in science has been around for a long time. Several presidential administrations in the past have been accused of distorting science to fit their predilections. I remember years ago, possibly during the Clinton Administration, hearing about a neuroscientist who was running into resistance for trying to study the difference between male and female brains. Feminists objected, because it’s their belief that there are no significant differences between men and women. President George W. Bush’s administration was accused of blocking stem cell research for religious reasons, and of altering the reports of government scientists, particularly on the issue of global warming. When funding for narrow research areas is blocked, it doesn’t bother me so much. There are private organizations that fund science. What irks me more is when the government, or any organization, alters the reports of its scientists. What’s bothered me much more is when scientists have chosen to distort the scientific process for an agenda.

One example of this was in 2001 when the public learned that a few activist scientists had planted lynx fur on rubbing sticks that were set out by surveyors of lynx habitat. The method was to set out these sticks, and lynx would come along and rub them, leaving behind a little fur, thereby revealing where their habitat was. The intent was to determine where it would be safe to allow development in or near wilderness areas so as to not intrude on this habitat. A few scientists who were either involved in the survey, or knew of it, decided to skew the results in order to try to prevent development in the area altogether. This was caught, but it shows that not all scientists want the evidence to lead them to conclusions.

The most egregious example of the confluence of politics and science that I’ve found to date, and I will be making it the “poster child” of my concern about this, is the issue of catastrophic human-caused global warming in the climate science community. I will use the term anthropogenic global warming, or “AGW” for short. I’m not going to give a complete exposition of the case for or against this theory. I leave it to the reader to do their own research on the science, though I will provide some guidance that I consider helpful. This post is going to assume that you’re already familiar with the science and some of the “atmospherics” that have been occurring around it. The purpose of this post is to illustrate corruption in the scientific process, its consequences, and how our own societal ignorance about science allows this to happen.

There is legitimate climate research going on. I don’t want to besmirch the entire field. There is, however, a significant issue in the field that is not being dealt with honestly, and it cannot be dealt with honestly until the influences of politics, and indeed religion (a religious mindset), are acknowledged and dealt with, however unfortunate and distasteful that is. The issue I refer to is the corruption of science in order to promote non-scientific agendas.

I felt uncomfortable with the idea of writing this post, because I don’t like discussing science together with politics. The two should not mix to this degree. I’d much prefer it if everyone in the climate research field respected the scientific method, and were about exploring what the natural world is really doing, and let the chips fall where they may. What prompted me to write this is I understand enough about the issue now to be able to speak somewhat authoritatively about it, and my conclusions have been corroborated by the presentations I’ve seen a few climate scientists give on the subject. I hate seeing science corrupted, and so I’ve felt a need to speak up about it.

I will quote from Carl Sagan’s book, The Demon-Haunted World, from time to time, referring to it as “TDHW,” to provide relevant descriptions of science to contrast against what’s happening in the field of climate science.

Scientists are insistent on testing . . . theories to the breaking point. They do not trust the intuitively obvious. The truth may be puzzling or counter-intuitive. It may contradict deeply held beliefs.


I think it is important to give some background on the issue as I talk about it. Otherwise I fear my attempt at using this as an example will be too esoteric for readers. There are two camps battling out this issue of the science of AGW. For the sake of description I’ll use the labels “warmist” and “skeptic” for them. They may seem inaccurate, given the nuances of the issue, but they’re the least offensive labels I could find in the dialogue.

The warmists claim that increasing carbon dioxide from human activities (factories, energy plants, and vehicles) is causing our climate to warm up at an alarming rate. If this is not curtailed, they predict that the earth’s climate and other earth systems will become inhospitable to life. They point to the rising levels of CO2, and various periods in the temperature record to make their point, usually the last 30 years. The predictions of doom resulting from AGW that they have communicated to the public are more based on conjecture and scenarios produced by computer models than anything else. This is the perspective that we all most often hear on the news.

The skeptics claim that climate has always changed on earth, naturally. It has never been constant, and the most recent period is no exception. They also say that while CO2 is a greenhouse gas, it is not that important, since the amount of it in the atmosphere is so small (it’s probably around 390 parts per million now), and secondly its impact is not linear. It’s logarithmic, so the more CO2 is added to the atmosphere, the less impact that addition has over what existed previously. From what I hear, even warmists agree on this point. Skeptics say that water vapor (H2O) is the most influential greenhouse gas. It is the most voluminous, from measurements that have been taken. Some challenge the idea that increased CO2 has caused the warming we’ve seen at all, whether it be from human or natural sources. Some say it probably has had some small influence, but it’s not big enough to matter, and that there must be other reasons not yet discovered for the warming that’s occurred. Others don’t care either way. They say that warming is good. It’s definitely better compared to dramatic cooling, as was seen in the Little Ice Age. Most say that the human contribution of CO2 is tiny compared to its natural sources. I haven’t seen any scientific validation of this claim yet, so I don’t put a lot of weight in it. As you’ll see, I don’t consider it that relevant, either.
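The logarithmic point above can be made concrete with a small sketch. It uses the widely cited simplified expression for CO2 radiative forcing, ΔF = 5.35 × ln(C/C0) watts per square meter (from Myhre et al., 1998); the formula and the function name here are my own illustration, not something taken from the people quoted in this post:

```python
import math

def co2_forcing(c_new, c_old):
    """Simplified radiative forcing from a change in CO2 concentration
    (Myhre et al. 1998): delta_F = 5.35 * ln(C / C0), in W/m^2.
    Concentrations are in parts per million; only their ratio matters."""
    return 5.35 * math.log(c_new / c_old)

# Each doubling adds the same forcing, no matter the starting level:
print(round(co2_forcing(560, 280), 2))    # doubling from 280 ppm -> 3.71
print(round(co2_forcing(1120, 560), 2))   # doubling from 560 ppm -> 3.71

# Adding a fixed 280 ppm on top of 560 ppm does less than the first 280 did:
print(round(co2_forcing(840, 560), 2))    # -> 2.17
```

This is what “logarithmic, not linear” means in practice: equal increments of CO2 produce progressively smaller additions to forcing, which is why each further addition matters less than the one before it.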

In any case, they don’t see what the big deal is. They often point to geologic CO2 records and temperature proxies going back thousands of years to make their point, but they have some recent evidence on their side as well. They also use the geologic record and historical records to show that past warming periods (the Medieval Warm Period being the most recent–1,000 years ago) were not catastrophic, but in fact beneficial to humanity.

Some sober climate scientists say that there is a human influence on local climate, and I find that plausible, just from my own experience of traveling through different landscapes. They say that the skylines of our cities alter airflow over large areas, and the steel, stone, and asphalt/cement we use all absorb and radiate heat. This can have an effect on regional weather patterns.

Not everyone involved in distributing this information to the public is a scientist. There are many people who have other duties, such as journalists, reviewers of scientific papers, and climate modelers, who may have some scientific knowledge, but do not participate in obtaining observational data from Nature, or analyzing it.

So what is the agenda of the warmists? Well, that’s a little hard to pin down, because there are many interests involved. It seems like the common agenda is more government control of energy use, a desire to make a major move to alternative energy sources, such as wind and solar (and maybe natural gas), and a desire to set up a transfer of payments system from the First World to the Third World, a.k.a. carbon trading, which as best I can tell has more to do with international politics than climate. The issue of population control seems to be deeply entwined in their agenda as well, though it’s rarely discussed. Giving it a broader view, the people who hold this view are critics of our civilization as it’s been built. They would like to see it reoriented towards one that they see would be more environmentally friendly, and more “socially just.”

The sense I get from listening to them is they believe that our society is destroying the earth, and as our sins against the environment build up, the earth will one day make our lives a living hell (an “apocalypse,” if you will). Some will not admit to this description, but will instead prefer a more technical explanation that still amounts to a faith-based argument. Michael Crichton said in 2003 that this belief seemed religious. Lately there’s some evidence he was right. A lot of the AGW arguments I hear sound like George Michael’s “Praying for Time” from the early 1990s.

Crichton made a well-reasoned argument that environmentalism as religion does not serve us well.

Let me be clear. I am not anti-religion in all aspects of life. My concern here is not that people have religious beliefs, of whatever kind. What concerns me is the attempt to use religious beliefs as justification for government policy. I understand that environmentalism is not officially recognized as a religion in the U.S….yet. We can, however, recognize something that “walks like a duck, quacks like a duck,” etc., for ourselves. I agree with Crichton. The consciousness that environmentalism provides, that we have a role to play in the development of the natural world, a responsibility to be good stewards, is good. However, it should not be a religion. Despite the more alarmist environmentalists who try to scare people with phantoms, there are some sober environmentalists who act based on real scientific findings rather than a religious notion of how nature behaves, or would like to behave if we weren’t around to influence it. In my view those people should be supported.

To be fair, the skeptics have some political views of their own. Often they seem to have a politically conservative bent, with a belief in greater freedom and capitalism, though I think to a person they are environmentally conscious. The difference I’ve seen with them is they’re not coy, and are more willing to show what they’ve found in the evidence, and discuss it openly. They seem to act like scientists rather than proselytizers.

My experience with warmists is they want to control the message. They don’t want to discuss the scientific evidence. They seem to care more about whether people agree with them or not. The most I get out of them for “evidence” of AGW is anecdotes, even if their findings have been scientifically derived. I’m sure their findings are useful for something, but not for proving AGW. I’d be more willing to consider their arguments if they’d act like scientists. My low opinion of these people is driven not by the positions they take, but by how they behave.

We need science to be driven by the search for truth, and for that to happen we need people seeking evidence, being willing to share it openly, as well as their analysis, and allow it to be criticized and defended on its merits. Some climate scientists have been trying to do this. Some have been successful, but from what I’ve seen they represent only gradations of the “skeptic” position. Warmists have forfeited the debate by disclosing only as much information as they say supports their argument, restricting as much information as they can on areas that might be useful for disproving their argument (this gets to the issue of falsifiability, which is essential to science), and basically refusing to debate the data and the analysis, with a few rare exceptions.

The influence that warmists have had on culture, politics, and climate science has been tremendous. Skeptics have faced an uphill battle to be heard on the issue within their discipline since about the mid-1990s. Whole institutions have been set up under the assumption that AGW is catastrophic. Their mission is to fund research projects into the effects, and possible effects, of AGW, not the cause of it. Nevertheless, the people who work for these institutions, or are funded by them, are frequently cited as the “thousands of scientists around the world who have proved catastrophic AGW is real.” The trouble is that, from what I’ve heard, not much research goes into what’s causing global climate change, because the thinking is “everybody knows we’re the ones causing it”–it’s the consensus view, but that view is not based on strong evidence validating the proposition.

“Consensus” might as well be code in the scientific community for “belief in the absence of evidence,” also known as “faith,” because that’s what “consensus” tends to be. Unfortunately this happens in the scientific community in general from time to time. It’s not unique to climate science.

Science is far from a perfect instrument of knowledge. It’s just the best we have. In this respect, as in many others, it’s like democracy. Science by itself cannot advocate courses of human action, but it can certainly illuminate the possible consequences of alternative courses of action.

The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which best fit the facts. It urges on us a delicate balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous skeptical scrutiny of everything–new ideas and established wisdom. This kind of thinking is also an essential tool for a democracy in an age of change.

One of the reasons for its success is that science has built-in, error correcting machinery at its very heart. Some may consider this an overbroad characterization, but to me every time we exercise self-criticism, every time we test our ideas against the outside world, we are doing science. When we are self-indulgent and uncritical, when we confuse hopes and facts, we slide into pseudoscience and superstition.


In light of the issue I’m discussing I would revise that last sentence to say, “when we confuse hopes and fears with facts, we slide into pseudoscience and superstition.” Continuing…

Every time a scientific paper presents a bit of data, it’s accompanied by an error bar–a quiet but insistent reminder that no knowledge is complete or perfect. It’s a calibration of how much we trust what we think we know. … Except in pure mathematics, nothing is known for certain (although much is certainly false).

I thought I should elucidate the distinction that Sagan makes here between science and mathematics. Mathematics is a pure abstraction. I’ve heard those more familiar with mathematics than myself say that it’s the only thing that we can really know. However, things that are true in mathematics are not necessarily true in the real world. Sometimes people confuse mathematics with science, particularly when objects from the real world are symbolically brought into formulas and equations. Scientists make a point of trying to avoid this confusion. Any mathematical formulas that are created in scientific study, because they seem to make sense, must be tested by experimentation with the actual object that’s being studied, to see if the formulas are a good representation of reality. Mathematics is used in science as a way of modeling reality. However, this does not make it a substitute for reality, only a means for understanding it better. Tested mathematical formulas create a mental scaffolding around which we can organize and make sense of our thoughts about reality. Once a model is validated by a lot of testing, it’s often used for prediction, though it’s essential to keep in mind the limitations of the model, as much as they are known. Sometimes a new limitation is discovered even when a well established prediction is tested.

Continuing with TDHW…

Moreover, scientists are usually careful to characterize the veridical status of their attempts to understand the world–ranging from conjectures and hypotheses, which are highly tentative, all the way up to laws of Nature which are repeatedly and systematically confirmed through many interrogations of how the world works. But even laws of Nature are not absolutely certain.

Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science–by far the most successful claim to knowledge accessible to humans–teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us.

We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.

The following paragraphs are of particular interest to what I will discuss next:

One of the great commandments of science is, “Mistrust arguments from authority.” (Scientists, being primates, and thus given to dominance hierarchies, of course do not always follow this commandment.) Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else. This independence of science, its occasional unwillingness to accept conventional wisdom, makes it dangerous to doctrines less self-critical, or with pretensions of certitude.

Because science carries us toward an understanding of how the world is, rather than how we would wish it to be, its findings may not in all cases be immediately comprehensible or satisfying. It may take a little work to restructure our mindsets. Some of science is very simple. When it gets complicated, that’s usually because the world is complicated, or because we’re complicated. When we shy away from it because it seems too difficult (or because we’ve been taught so poorly), we surrender the ability to take charge of our future. We are disenfranchised. Our self-confidence erodes.

But when we pass beyond the barrier, when the findings and methods of science get through to us, when we understand and put this knowledge to use, many feel deep satisfaction.


Sagan believed that science is the province of everyone, given that we understand what it’s about. In our society we often think of science as a two-tiered thing. There are the scientists who are authorities we can trust, and then there’s the rest of us. Sagan argued against that.

In the case of the AGW issue, what I often see from warmists is the promotion of blind trust: “The science says this,” or, “The world’s scientists have spoken,” and, “therefore we must act.” That is a note of certainty that science does not in reality offer. Whether we should act or not is a value judgment, and I argue that a cost/benefit analysis should be applied to such decisions as well, taking the scientific evidence and analysis into account, along with other considerations.

Breaking it wide open

There have been a few really meaty exposés this year on what’s been going on in the climate science community around the issue of AGW. One I’ll include here is a presentation by Dr. Richard Lindzen, a climate scientist at MIT, sponsored by the Competitive Enterprise Institute. He also addressed an issue related to what Sagan talked about: the lack of critical thinking on the part of leaders and decision makers. Instead there are appeals to authority.

Up until a few hundred years ago, we in the West appealed to authority–monarchs and popes–for answers about how we should be governed, and how we should live. Thousands of years ago, the geometers (meaning “earth measurers”) of Egypt, who could measure and calculate angles so that great structures could be built, were worshipped. Temples were built for them. What created democracy was an appeal to rational argument among the people. A significant part of this came from habits formed in the discipline of science. Unfortunately with today’s social/political/intellectual environment, to discuss the climate issue rationally is to, in effect, commit heresy! What Lindzen showed in his presentation is the unscientific thinking that is passing for legitimate reasoning in climate science, along with a little of the science of climate.

I can vouch for most of the “trouble areas” that Dr. Lindzen talks about, with regard to the arguments warmists make, because I have seen them as I have studied this issue for myself, and discussed it with others. It’s as disconcerting as it looks.

The slides Lindzen used in his presentation are still available here. The notes below are from the video.

It’s ironic that we should be speaking of “ignorance” among the educated. Yet that seems to be the case. The leaders of universities should be scratching their heads and wondering why that is. Perhaps it has something to do with C. P. Snow’s “two cultures,” which I’ve brought up before. People in positions of administrative leadership seem to be more comfortable with narratives and notions of authorship than critically examining material that’s presented to them. If they are critical, they look at things only from a perspective of political priorities.

What’s interesting is this has been a persistent problem for ages. Dr. Sallie Baliunas talked about how some of the educated elite in Europe during the Little Ice Age persecuted, tortured, and executed people suspected of witchcraft after severe weather events, because it was thought that the climate could be “cooked” by sorcery. In other words, the weather was blamed on a group of people seen as evil. Since the weather events were “unnatural,” they had to be supernatural in origin; according to the beliefs of the day that could only happen by sorcery, and the people who caused it had to be eradicated. Skeptics who challenged the idea of weather “cooking” were marginalized and silenced.

Edit 3-14-2014: After being prompted to do some of my own research on this, I got the sense that Baliunas’s presentation was somewhat inaccurate. In my research I found there was a rivalry between two prominent individuals around this issue, which Baliunas correctly identifies: one accused witches of causing the aberrant weather, and the other argued that this was impossible, because the Bible said that only God controlled the weather. However, according to the sources I read, the accusations and prosecutions for witchcraft/sorcery only happened in rural areas, and were carried out by locals. If elites were involved, it was only the elites in those areas. The Church and the political leadership of Europe did not buy the idea that witches could alter the weather. Perhaps Baliunas had access to source material I didn’t, and that’s how she came to her conclusions. Some of what I found was behind a paywall, and I wasn’t willing to put up money to research this topic.

The sense I get after looking at the global warming debate for a while is there’s disagreement between warmists and skeptics about where we are along the logarithmic curve for CO2 impact, and what coefficient should be applied to it. What Lindzen says, though, is that the idea of a “tipping point” with respect to CO2 is spurious, because you don’t get “tipping points” in situations with diminishing returns, which is what the logarithmic model tells us we will get. Some might ask, “Okay, but what about the positive feedbacks from water vapor and other greenhouse gases?” Well, I think Lindzen answered that with the data he gathered.
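The “diminishing returns” point can be illustrated numerically. A commonly published simplified expression for CO2 radiative forcing is ΔF = 5.35 × ln(C/C0) W/m² (the coefficient is a standard approximation from the literature, not a figure taken from Lindzen’s presentation). Because the response is logarithmic, each doubling of concentration adds the same fixed increment, so there is no point on the curve where the response suddenly accelerates:

```python
import math

ALPHA = 5.35  # W/m^2; widely used coefficient in the simplified CO2 forcing formula
C0 = 280.0    # ppm; commonly used pre-industrial reference concentration

def co2_forcing(c_ppm):
    """Radiative forcing relative to C0, in W/m^2: dF = ALPHA * ln(C / C0)."""
    return ALPHA * math.log(c_ppm / C0)

# Each successive doubling adds the same ~3.7 W/m^2 increment
for c in (280, 560, 1120, 2240):
    print(f"{c:5d} ppm -> {co2_forcing(c):.2f} W/m^2")
```

Going from 280 to 560 ppm adds about 3.7 W/m², and going from 1120 to 2240 ppm adds the same 3.7 W/m², even though four times as much CO2 was added in the second step. That is the shape of the curve behind the diminishing-returns argument; where the two camps disagree is over the feedback coefficient that multiplies this direct effect.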

To clarify the graph that Lindzen showed towards the end, what he was saying is that as surface temperatures increased, so did the radiation that went back out into space. This contradicts the prediction made by computer models that as the earth warms, the greenhouse effect will be enhanced by a “piling on” effect, where warming will cause more water vapor to enter the atmosphere, and more ice to melt, causing more radiation to be trapped and absorbed–a positive feedback.

This study was just recently completed. Based on the scientific data that’s been published, and this presentation by Lindzen, it seems to me that the computer models the IPCC was using were not based on actual observations, but instead represent untested theories–speculation.

The audio at the end of the Q & A section gets hard to hear, so I’ve quoted it. This is Lindzen:

The answer to this is unfortunately one that Aaron Wildavsky gave 15-20 years ago before he died, which is, the people who are interested in the policy (and we all are to some extent, but some people, like you–foremost) have to genuinely familiarize themselves with the science. I’ll help. Other people will help. But you’re going to have to break a certain impasse. That impasse begins with the word “skeptic.” Whenever I’m asked, am I a climate skeptic? I always answer, “No. To the extent possible I am a climate denier.” That’s because skepticism assumes there is a good a priori case, but you have doubts about it. There isn’t even a good a priori case! And so by allowing us to be called skeptics, they have forced us to agree that they have something.

Despite Dr. Lindzen’s attempt to clarify his position from “skeptic” to “denier,” I think that’s a bad use of rhetoric, because “denier” in climate science circles has the political connotation of “holocaust denier,” which indicates that “the other side has something, and you have nothing.” Personally, I think that people like Lindzen should recognize that “climate skeptic” is a loaded term, and answer instead, “I am a skeptic of most everything, because that’s what good scientists are.” One can be skeptical of spurious claims.

It’s difficult for climate scientists from one side to even debate the other, as they should, because politics is inevitably introduced. This is a symptom of the corruption of science. Scientists should not have to defend their political or industrial affiliations with respect to scientific issues. This is tantamount to guilt by association and “attack the messenger” tactics, which are irrelevant as far as Nature is concerned. I’ve heard more than one scientist say, “Nature doesn’t give a damn about our opinions,” and it’s true. Science depends on the validity of observed data, and skeptical, probing analysis of that data. When the subject of study is human beings themselves, or products that could affect humans and the environment, then ethics comes into play, but this only extends so far as how to design experiments, or whether to do them at all, not what is discovered from observation.

This is old news by now, but a ton of e-mails and source code were stolen from The University of East Anglia’s Climate Research Unit (CRU), also called the Hadley Center, on November 19, 2009, and made public. I hadn’t heard about it until the last week in November. WattsUpWithThat.com has been publishing a series of articles on what’s being discovered in the e-mails, which provides a good synopsis. I picked out some of them that I thought summed up their contents and implications: here, here, here, and here. The following two interviews with retired climatologist Dr. Tim Ball also summed it up pretty well:

There are no forbidden questions in science, no matters too sensitive, or delicate to be probed, no sacred truths. Diversity and debate are valued. Opinions are encouraged to contend–substantively and in depth.

We insist on independent and–to the extent possible–quantitative verification of proposed tenets of belief. We are constantly prodding, challenging, seeking contradictions or small, persistent, residual errors, proposing alternate explanations, encouraging heresy. We give our highest rewards to those who convincingly disprove established beliefs.


[my emphasis in bold italics — Mark]

Ball referred to the following sites as good sources of information on climate science:



Here are articles written by Ball for the Canada Free Press

In the above interview Ball gets to one of the crucial issues that has frustrated skeptics for years: the publishing of scientific findings and peer review. He said the disclosed e-mails reveal that a small group of warmists exerted a tremendous amount of control over the process. He said he was mystified about why some climate scientists were emphasizing “peer review” 20 years ago (the peer-reviewed literature). He realizes now, after having reviewed the e-mails, that they were in effect promoting their own group. If you weren’t in their club, it’s likely you wouldn’t get published (they’d threaten editors if you did), and you wouldn’t get the coveted “peer review” that they touted so much. Of course, if you didn’t toe their line, you weren’t allowed in their club. No wonder former Vice-President Al Gore could say, “The debate is over.”

It goes without saying that publishing is the lifeblood of academia. If you don’t get published, you don’t get tenure, or you might even lose it. You might as well find another career if you can’t find another sponsor for your research.

The video below is called “Climategate: The backstory.” It looks like this interview with Ball was done earlier, probably in August or September.

The “damage control” from the Hadley e-mails incident was apparent in the media in December, around the time of the Copenhagen conference. There was an effort to distract people from the real issues, preferring instead to try to focus people’s attention on the nasty personalities involved. What galls me is this effort betrays a contempt for the public, taking advantage of the notion that we have little knowledge or interest in how science works, and so we can be easily distracted with personality issues.

I have to say the media reporting on this incident was pretty disappointing. When they talked about it at all, they frequently had on pundits who were not familiar with the science. They simply applied their reading skills to the e-mails and jumped to conclusions about what they read. In other cases they invited on PR flacks to give some counterpoint to the controversy. Warmists had a field day playing with the ignorance of correspondents and pundits. Some of the pundits were “in the ballpark”: at least their conclusions on the issue were sometimes correct, even if the reasoning behind them was not. A couple of shows actually invited real scientists on to talk about the issue. What a concept!

On a lighter note, check out this clip from the Daily Show…

Here’s an explanatory article about the significance of the “hide the decline” comment, along with background information which gives context for it. Here’s a Finnish TV documentary that touched on the major issues that were revealed in the CRU e-mails (The link is to part 1. Look for the other two parts on the right sidebar at the linked page).

Carl Sagan saw this pattern of thought before:

“A fire-breathing dragon lives in my garage.”

Suppose (I’m following a group therapy approach by the psychologist Richard Franklin) I seriously make such an assertion to you. Surely you’d want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

“Show me,” you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle–but no dragon.

“Where’s the dragon?” you ask.

“Oh, she’s right here,” I reply, waving vaguely. “I neglected to mention that she’s an invisible dragon.”

You propose spreading flour on the floor of the garage to capture the dragon’s footprints.

“Good idea,” I say, “but this dragon floats in the air.”

Then you’ll use an infrared sensor to detect the invisible fire.

“Good idea, but the invisible fire is also heatless.”

You’ll spray-paint the dragon and make her visible.

“Good idea, except she’s an incorporeal dragon, and the paint won’t stick.”

And so on. I counter every physical test you propose with a special explanation of why it won’t work.

Now, what’s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there’s no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I’m asking you to do comes down to believing, in the absence of evidence, on my say-so.

The only thing you’ve really learned from my insistence that there’s a dragon in my garage is that something funny is going on inside my head.

Now another scenario: Suppose it’s not just me. Suppose that several people of your acquaintance, including people who you’re pretty sure don’t know each other, all tell you they have dragons in their garages–but in every case the evidence is maddeningly elusive. All of us admit we’re disturbed at being gripped by so odd a conviction so ill-supported by the physical evidence. None of us is a lunatic. We speculate about what it would mean if invisible dragons were really hiding out in our garages all over the world, with us humans just catching on. I’d rather it not be true, I tell you. But maybe all those ancient European and Chinese myths about dragons weren’t myths at all…

Gratifyingly, some dragon-size footprints in the flour are now reported. But they’re never made when a skeptic is looking. An alternative explanation presents itself: On close examination it seems clear that the footprints could have been faked. Another dragon enthusiast shows up with a burnt finger and attributes it to a rare physical manifestation of the dragon’s fiery breath. But again, other possibilities exist. We understand that there are other ways to burn fingers besides the breath of fiery dragons. Such “evidence”–no matter how important the dragon advocates consider it–is far from compelling. Once again, the only sensible approach is tentatively to reject the dragon hypothesis, to be open to future physical data, and to wonder what the cause might be that so many apparently sane and sober people share the same strange delusion.


[my emphasis in bold italics — Mark]

In an earlier part of the book he said:

The hard but just rule is that if the ideas don’t work, you must throw them away. Don’t waste neurons on what doesn’t work. Devote those neurons to new ideas that better explain the data. The British physicist Michael Faraday warned of the powerful temptation

“to seek for such evidence and appearances as are in the favour of our desires, and to disregard those which oppose them . . . We receive as friendly that which agrees with [us], we resist with dislike that which opposes us; whereas the very reverse is required by every dictate of common sense.”

Meanwhile, the risks we are ignoring

The obsession with catastrophic human-caused global warming, driven by ideology, a kind of religious groupthink, and the flow of money to the tune of tens of billions of dollars, represents a misplacement of priorities. It seems to me that if we should be focusing on any catastrophic threats from Nature, we should be putting more resources into a scientifically validated, catastrophic threat that hardly anyone is paying attention to: the possibility of the extinction of the human race, or an extreme culling, not to mention the extinction of most of life on Earth, from a large asteroid or comet impact. Science has revealed that large impacts have happened several times before in Earth’s history. A large impactor will come our way again someday, and we currently have no realistic method for averting such a disaster, even if we spotted a body heading for us months in advance. The number of scientists who are monitoring bodies in space that cross Earth’s orbit could literally fit around a table at McDonald’s! Yet there are thousands of these missiles. These scientists say it is very difficult to make a case in Congress for an increase in funding for their efforts, because the likelihood of an impact seems so remote to politicians.

The New Madrid fault zone represents a huge, known risk to the Midwestern part of the U.S. Scientists have tried to warn cities along the zone about updating their building codes to withstand the next quake that will inevitably occur. But so far they have gotten a cool reception.

Alan Kay commented on Bill Kerr’s blog that regardless of what’s caused global warming (he leaves that as an open question), what we should really be worried about is a “crash” of our climate system, where it suddenly changes state from even a small “nudge.” It could even come about as a result of natural forces. I hadn’t thought about the issue from that perspective, and I’m glad he brought it up. He cited an example for such a crash (though on a smaller scale), pointing to “dead zones” in coastal waters all over the world, resulting from agricultural effluents. The example distracts a bit from his main point, but I see what he’s getting at. He said that governments have not been focused on how to prepare for this scenario of a climate “system crash,” and are instead distracted by meaningless “counter measures.”

The implications for science and our democratic republic

The values of science and the values of democracy are concordant, in many cases indistinguishable. Science and democracy began–in their civilized incarnations–in the same time and place, Greece in the seventh and sixth centuries B.C. … Science thrives on, indeed requires, the free exchange of ideas; its values are antithetical to secrecy. Science holds to no special vantage points or privileged positions. Both science and democracy encourage unconventional opinions and vigorous debate. … Science is a way to call the bluff of those who pretend to knowledge. It is a bulwark against mysticism, against superstition, against religion misapplied to where it has no business being. If we’re true to our values, it can tell us when we’re being lied to. The more widespread its language, rules, and methods, the better chance we have of preserving what Thomas Jefferson and his colleagues had in mind. But democracy can also be subverted more thoroughly through the products of science than any pre-industrial demagogue ever dreamed.


I’m going to jump ahead a bit with this quote from an interview with Carl Sagan on Charlie Rose (shown further below):

If we are not able to ask skeptical questions, to interrogate those who tell us that something is true–to be skeptical of those in authority–then we’re up for grabs for the next charlatan, political or religious, who comes ambling along. It’s a thing that Jefferson lay great stress on. It wasn’t enough, he said, to enshrine some rights in a constitution or bill of rights. The people had to be educated, and they had to practice their skepticism in their education. Otherwise, we don’t run the government. The government runs us.

On December 7, 2009 the EPA came out with its endangerment finding, saying that carbon dioxide is a pollutant that threatens public health. The agency will proceed to impose restrictions on CO2 emitters itself, since Congress has not acted to impose its own. What is all this based on now?

President Obama, you said, “We will restore science to its rightful place.” I’m still waiting for that to happen.

Science helped birth democracy. Its shadow is now being used to create conditions for a more authoritarian government. This isn’t the first time this has happened. The pseudo-science of eugenics, which was once regarded as scientific since it was ostensibly based on the theory of evolution, was used as justification for the slaughter of millions in Europe in the 1930s and 40s. It was also used as justification for shameful actions and experiments performed by our government on certain groups of people in the U.S.

Global warming has been blown up into a huge issue. There aren’t too many people who haven’t at least heard of it. We are seriously considering taking actions that could cost ordinary people, the poor in particular, and businesses a lot of money. When the stakes are this high we’d better have a good reason for it. This is like the craze over bran muffins and foods with oats in them, driven by the belief (supported by scientific studies that were misreported to the public) that they prevented cancer, only with far more serious consequences. I worry about what this does to science, because it seems that if people can get away with debauching it, why not continue doing it in the future?

One worry I have about the debauching of science is that it will delegitimize science in the eyes of the public, and encourage the same superstition and magical thinking that marked the Middle Ages. Who could blame us for rejecting it after it’s been perceived as “crying wolf” too many times?

The public has valued science up to now, because of the information it can bring us. The problem is we don’t care to understand what it is or how it works. “Just give us the facts,” is our attitude. We have blindly given the name of science a legitimacy that, like other things I’ve talked about on this blog, doesn’t take into account the quality of the findings, or the way they were obtained. It reminds me of a reference Alan Kay made to Neil Postman:

Our [scientific] artifacts are everywhere, but most people, as Neil Postman said once, have to take more things on faith now in the 20th century than they did in the Middle Ages. There’s more knowledge that most people have to believe in dogmatically or be confused about.

As a result, we have set up scientists as authorities. Some purport to tell us what to believe, and how to behave, and we as a society expect this of them. The problem with this is when a “scientific fact” is later revealed to be wrong, people feel jilted. Science itself is thought of as a collection of facts, written by our scientific “priesthood.” We expect this “priesthood” to do right by the rest of us. Science was never meant to take on this role. I think a good part of the reason for this passive attitude towards science in the public sphere is the quality and the methodology of findings are not reported to the general public. Most journalists wouldn’t understand the criteria enough to explain it to the citizenry in a way they’d understand.

The other part of the problem is that science is presented in our educational system as something that’s not very interesting. In fact most students only experience a small sliver of science, if that. It’s rather like mathematics (or arithmetic and calculation that’s called “mathematics”) for them, something they’re required to take. They just want to “get through it,” and they’re thankful when it’s over.

An issue I’m not even addressing here, though it’s worth noting, is that science is often perceived as heartless and cold, a discipline that has allowed us as a society to act without a moral sense of responsibility. This I’m sure has also contributed to the public’s aversion to science. And I can see that because of this, people might prefer “the science of global warming alarm” to “the science of skepticism.” One seems to be promoting “good action,” while the other seems like a bunch of backward, out of touch folks, who don’t care about the earth. These are emotional images, a way of thought that a lot of people the world over are prone to. However, as Sagan said in the interview below, “Science is after how the Universe really is, and not what makes us feel good.” These images of one group and the other are stereotypes, not so much the truth.

Of course a moral sense is necessary for a self-governing society like ours, but morality can be misapplied. By trying to do good we could in fact be hurting people if the solution we implement is not thought through. We may act on incomplete information, all the while thinking that we have the complete picture, thereby ignoring important factors that may require a very different solution to resolve. Our understanding of complex systems and the effects of tampering with them may also be grossly incomplete. While attempting to shape and direct a system that is behaving in a way we don’t like, we may make matters worse. Intent matters, but results matter, too. What appear to be moral actions will not always result in moral outcomes, especially in systems that are huge in scale and complexity. This applies to the environment and our economy.

As we’ve seen in our past, people eventually do figure out that the science behind a spurious claim was flawed, but it tends to take a while. By that point the damage has already been done. Perhaps scientists need to take a more active public service role in informing the public about claims that are made through news outlets. What would be better is if people understood scientific thinking, but in the absence of that, scientists could do the public a service by explaining issues from a scientific perspective, and perhaps educating the audience about what science is along the way. This would need to be done carefully, though. A real effort at this would probably expose people to notions that they are uncomfortable with. Without a sufficient grounding in the importance of science, that is, the importance of listening and considering these uncomfortable ideas, most people will just change the channel when that happens. In order for this to work, people need to be willing to think, because the activity is interesting, and sometimes produces useful results. Science cannot just be regarded as a vocation in our society. It is an essential part of the health of our democratic republic.

The danger of our two-tiered knowledge society

In all uses of science, it is insufficient–indeed it is dangerous–to produce only a small, highly competent, well-rewarded priesthood of professionals. Instead, some fundamental understanding of the findings and methods of science must be available on the broadest scale.


I’m going to turn the subject now to the matter of science and technology, and our collective ignorance, because it also has bearing on this “dangerous brew.” I found this interview with Carl Sagan, which was done shortly before he died in 1996. He talked with Charlie Rose about his then-new book, The Demon-Haunted World. He had some very prescient things to say which add to the quotes I’ve been using from his book. I found myself agreeing with what Sagan said in this interview regarding science, scientific awareness, and science vs. faith and emotions, but context is everything. He may not have been arguing from the same point of view I am, as I reveal further below. I find it interesting, though, that his quotes seem to apply very nicely to my argument. I’ve been reading this book, and I don’t see how I might be quoting him out of context. You’ll see why I’m hedging as you read further.

This is a poignant interview, because they talk about death and what that means. It’s a bit sad and ironic to see his optimism about his good health. He died from pneumonia, a complication of the bone marrow transplant he received as treatment for his myelodysplasia. His final accomplishment was completing work on a movie version of his novel, Contact, which came out in 1997. Interestingly, the movie touched on some themes from The Demon-Haunted World.

My jaw dropped when I heard Charlie Rose read that less than half of American adults in 1996 thought that our planet orbits the Sun once a year! I did a quick check of science surveys on the internet and it doesn’t look like the situation has gotten any better since then.

Sagan’s point was not that magical thinking in human beings was growing. He said it’s always been with us, but in the technological society we have built, the prominence of this kind of thinking is dangerous. This is partly because we are wielding great power without knowing it, and partly because it makes us as a people impotent on issues of science and technology. We will feel it necessary to just leave decisions about these issues up to a scientific-technological elite. I’ve argued before that we have an elite that has been making technological decisions for us, but not at a public policy level. It’s been at the level of IT administrators and senior engineers within organizations. In the realm of science, however, we clearly have an elite which has gladly taken over decisions about science at the policy level.

The climate issue points to another aspect of this. As Dr. Lindzen pointed out in his presentation (above), we have people who are misusing climate models (and it’s anyone’s guess whether it’s on purpose or due to ignorance) as a substitute for the natural phenomenon itself! I’ve talked to a few climate modelers who believe that human activities are causing catastrophic climate change, and this is how they view it: since we do not have another Earth to use as a “control,” or as a means for “repeatability,” we use computer models as a “control,” or in order to repeat an “experiment.” It’s absurd. Talking to these people is like entering the Twilight Zone. They argue as if they’re the professionals who know what they’re doing. The truth is they’re ignorant of the scientific method and its value, yet their theories of computer modeling and methodology carry a high level of legitimacy in the field of climate science. It’s what a lot of the prognosticating in the IPCC (Intergovernmental Panel on Climate Change) assessment reports is based on. This gives you an idea of the ignorance that at times passes for knowledge and wisdom in this field!

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality.

— Doron Swade, curator of the London Science Museum

As I’ve talked about before, a computer model is only a theory. That’s it. It’s a representation of reality created by imperfect human beings (programmers, though in principle it’s not that different from scientists creating theories and mathematical models of reality). It’s irrational to use a theory as a “control,” or as a proxy for the real thing in an “experiment.” It goes against what science is about, which is an acknowledgment that human beings are ignorant and flawed observers of Nature. Even if we have a theory that seems to work, there is always the possibility that in some circumstance that we cannot predict it will be wrong. This is because our knowledge of Nature will always be incomplete to some degree. What science offers, when applied rigorously, is very good approximations. Within the boundaries of those approximations we can find ideas that are useful and which work. There are no shortcuts to this, though.

Theories are of course welcome in science, but the only rational thing to do with them while using the scientific method is to test them against the real thing, and to pay attention to how well theory and reality match, in as many aspects as can be discerned.

Climate modelers who back the idea of catastrophe claim they do this when forming their models, but I’ve heard first-hand accounts from scientists about how modelers will “tweak” parameters to make a model do something “interesting.” This gets them attention, and I detect some techno-cultish behavior in this. I’ve heard second-hand accounts from scientists about how modelers will input unrealistic parameters to make the models closely match the temperature record, which they term “validating the model.” As Dr. John Christy, a scientist who studies temperature in the atmosphere and at the surface at the University of Alabama in Huntsville, once remarked, “They already knew what the correct answer was.” This is an illegitimate methodology, because it’s no better than forming a conclusion based on a data correlation. I’m sure if I worked hard enough at it, I could create a computer model that also closely tracked the temperature record, just drawing lines on the screen and/or producing numbers, from a standpoint of total ignorance of how the climate works, and I suppose by their criteria my model would be “validated.”
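To make that concrete, here is a toy sketch in Python. Everything in it is synthetic and hypothetical, and it resembles no real climate model: the “model” is nothing but a polynomial with tunable coefficients, yet it can be fit to track a temperature record closely while encoding no physics at all.

```python
# A toy illustration of "tuning to the record": a model with enough
# free parameters will track any temperature series closely, while
# encoding no understanding of climate whatsoever.
# All data and parameters here are made up.
import numpy as np

rng = np.random.default_rng(0)

# A fake 50-point "temperature record": a small trend plus noise
# (time rescaled to [0, 1] to keep the fit well-conditioned).
years = np.linspace(0.0, 1.0, 50)
record = 0.8 * years + rng.normal(0.0, 0.05, size=50)

# The "ignorant model": a degree-8 polynomial fit directly to the
# very record it is supposed to be validated against.
coeffs = np.polyfit(years, record, deg=8)
model_output = np.polyval(coeffs, years)

# By the criterion criticized above, this model is now "validated":
rmse = np.sqrt(np.mean((model_output - record) ** 2))
print(f"in-sample RMSE: {rmse:.3f}")
```

Matching the record in-sample says nothing about whether the model understands the climate; that is the point of Christy’s remark.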

I’ve cited this quote from Alan Kay before (though he did not specifically address the issue of climate modeling, or anything having to do with climate science when he said it):

You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

To clarify, Kay was talking about the application of computing to non-computational sciences.
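Kay’s inverse-cube remark can be taken literally. Here is a hypothetical sketch: ask a computer to simulate gravity with a made-up inverse-cube force law, and it complies without objection. Nothing in the language knows that Nature uses an inverse-square law.

```python
# The computer happily simulates a law of gravity that doesn't exist:
# a crude Euler integration of a particle under an inverse-CUBE
# central force (unit mass, unit force constant). Illustrative only.
import math

def step(pos, vel, dt=0.001):
    """One Euler step under the made-up inverse-cube force."""
    x, y = pos
    r = math.hypot(x, y)
    # Force magnitude 1/r^3 directed toward the origin, so the
    # acceleration is -(1/r^3) * (x/r, y/r) = -(x, y) / r^4.
    ax, ay = -x / r**4, -y / r**4
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Initial conditions that would give a circular orbit at r = 1.
pos, vel = (1.0, 0.0), (0.0, 1.0)
for _ in range(5000):
    pos, vel = step(pos, vel)

# No error, no complaint. Only an experiment against Nature could
# tell us the force law is wrong.
print(f"radius after 5000 steps: {math.hypot(*pos):.3f}")
```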

Beware of those who come bearing predictions

I have praised Carl Sagan for what he talked about in the Charlie Rose interview above (I praise him for some of his other work as well), but I feel I would be remiss if I didn’t talk about a portion of his career where he fell into doing what I’m complaining about in this post. He promoted an untested prediction for a political agenda. I’m going to talk about this, because it illustrates a temptation that scientists (who are flawed human beings like the rest of us) can succumb to.

Sagan was one of the chief proponents of the theory of nuclear winter in the 1980s during the Cold War between the U.S. and the Soviet Union. As Michael Crichton pointed out (you may want to search on Sagan’s name in the linked article to reach the relevant part), like with catastrophic AGW, this was based on a prediction that was supported by flimsy evidence. In fact, computer climate modeling had a central role in the prediction’s supposed legitimacy.

Sagan exhibited a fallacy in thinking on a number of occasions that I’ll call “belief in the mathematical scenario.” Such scenarios are supported by a concept that can be conjured up as a technical, mathematical model. Here’s the thing. Is the scenario even plausible? Does the fact that we can imagine it in a plausible way justify believing that it’s real? Does a moral belief that something is right or wrong justify promoting an unwavering belief in an untested theory that supports the moral rule, because it will cause people to “do the right thing”? How is this different from aspects of organized religion? Do these questions matter? From where I sit, it’s the academic equivalent of people making up their own myths, using the technical tool of mathematics as a legitimizer, mistaking mathematical precision for objective truth in the real world. This is a behavior that science is supposed to help us avoid!

On one level, trying to apply the scientific method to the nuclear winter prediction sounds absurd: “You want evidence confirming that a nuclear war would result in nuclear winter?? Are you nuts?” First of all, we don’t have to resort to that extreme. Scientists have found ways to physically model a scenario by using materials from Nature, but at a small scale, in order to arrive at approximations that are quite good. We don’t have to experience the real thing at full scale to get an idea of what will really happen. It’s just a matter of arriving at a realistic model, and in the case of this prediction that might’ve been difficult.

The point is that a prediction, an assertion, must be tested before it can be considered scientifically valid. It’s not science to begin with unless it’s falsifiable. And what’s worse, Sagan knew this! Without falsifiability, the appropriate scientific answer is, “We don’t know,” but that’s not what he said about this scenario. He at least admitted it was a prediction, but he also called it “science.” It was disingenuous, and he should’ve known better.

Without testing our notions, our assumptions, and our models, we are left with superstition–irrational fears of the unknown, and irrational hopes for things that defy what is possible in Nature (whether we know what’s possible is beside the point), even though they are dispensed using ideas that sound modern, and comport with what we think intelligent, educated people should know.

It doesn’t matter if the untested prediction is made seemingly plausible by mathematics, or a computer model (which is another form of mathematics). That’s mere hand waving. Prediction, mathematical or otherwise, is not science, and therefore it’s not nearly as reliable as analysis derived from the scientific method. Our predictions are hopefully derived from science, but even so, an untested prediction really is only as reliable as the experience of the person giving it.

The same sort of political dynamic that has existed in climate science came into play when the nuclear winter theory was popular: If you were skeptical about the theory of nuclear winter, that meant you were in the “minority” (or so they had people believe)–not with the “consensus.” You were accused of supporting nuclear arms, and our government’s tough “cowboy” anti-Soviet policy, and were a bad person. Such smears were unjustified, but they were used to shame and silence dissent. I don’t mean to suggest that Sagan was a communist sympathizer, or anything of that sort. I think he wanted to prevent nuclear war, period. Not a bad motive in itself, but it seems to me he was willing to sacrifice the legitimacy of science for this.

A lot of scientists who didn’t know too much about the science at issue, but didn’t want to ruffle feathers, went along with it to be a part of the accepted group. The whole thing was desperate and cynical. It’s my understanding from history that the fears exhibited by the promoters of this theory were unfounded, and I think they came about because of a fundamental misunderstanding of realpolitik.

It’s not as if this scare tactic was really necessary. The consequences of nuclear war that we knew about were horrifying enough. It’s apparent from the interview with Ted Turner in the above video that there were worries about the escalation of the nuclear arms race, perhaps the Reagan Administration’s first strike nuclear capability against the Soviet Union in particular. You’ll notice that Sagan talks about (I’m paraphrasing), “One nation bombing another before the other can respond, the attacker thinking that they will remain untouched.” People like Sagan didn’t want the U.S., or perhaps the Soviets for that matter, to think it could carry out a first strike and wipe out the other side with impunity (because the climate would “get” the other side in return). Surely, a nuclear war with a large number of blasts would’ve caused some changes in climate, but how much was anyone’s guess.

The only evidence that could’ve realistically tested the theory, to a degree, would’ve been from above ground nuclear tests, or the bombs that were dropped on Hiroshima and Nagasaki, Japan. To my knowledge, none of them gave results that would’ve contributed to the prediction’s validity.

Before the very first nuclear bomb was tested there was at least one scientist in the Manhattan Project who thought that a single nuclear blast might ignite our atmosphere. That would be a fate worse than the predicted nuclear winter. Imagine everything charred to a crisp! Others thought that while it was possible, the probability was remote. Still, the scenario was terrifying. The bomb was tested. Many other above-ground atom bomb tests followed, and we’re all still here. Not to say that nuclear testing is good for us or the environment, but the prediction didn’t come true. The point is, yes, there are terrible scenarios that can be imagined. These scenarios are made plausible based on things that we know are real, and a knowledge of mathematics, but that does not mean any of these terrible scenarios will happen.

You’ll notice if you watch the interview with Turner that Sagan even talks about catastrophic AGW! Again, what he spoke of was a prediction, not a scientifically validated conclusion. It’s hard to know what his motivation was with that, but it sounded like he was uncomfortable with the idea that our civilization was not consciously thinking about the environment, and what consequences that might have down the line. Western governments began to understand environmental issues in the 1980s, and implemented regulations to clean up what was our highly polluted environment. From what I understand though, this did not happen in other parts of the world.

Not to say that all environmental problems have been solved in the U.S. There are real environmental issues that science can inform us about today, and will need to be acted upon. One example is “dead zones,” which I referred to earlier, where coastal waters are losing their oxygen due to an interaction between nitrogen-rich compounds that agricultural operations are releasing into streams, and algae. It’s killing off all marine life in the affected areas, and these “dead zones” exist all over the world. There’s a Frontline documentary called “Poisoned Waters” that talks about it. Another is an issue that does have to do with human-induced climate forcing, but not strictly in the sense of warming or cooling the planet. The PBS show Nova talked about it in an episode called “Dimming the Sun.” Huge quantities of sooty pollution have been found to affect relative humidity on Earth, which does have a significant effect on our weather. Aside from the application of this new find to the issue of AGW, which to me was rather irrelevant, this was a very interesting show. They gave what I thought was a very thorough and compelling exposition of the science behind the “dimming” effect.

The legacy of Carl Sagan

Based on what I’ve read in Sagan’s book, if he were still alive today, he would probably still be promoting the theory of catastrophic AGW. That is something I find hard to understand, given his grasp of science, of its implications for our society, and of the seemingly innate human need to create myths. Perhaps he was not one to look inward, as well as outward. Though it’s impossible to do this now, this is an issue I’d dearly like to ask him about.

I can’t help but think that Sagan and his cohorts created the template for the pseudo-science that bedevils climate science today. Richard Lindzen’s paper, which I referred to at the beginning of this post, paints the picture of what’s happened more fully, and points to some other motivations, besides politics. One of Sagan’s phrases that I still remember is, “Extraordinary claims require extraordinary proof.” Too bad he didn’t always follow that maxim himself.

Despite this, I respect the fact that he really did try to bring scientific understanding to the masses. Sagan in my mind was a great man, but like all of us he was flawed, and even he was willing to set aside his scientific thinking and participate in the promotion of pseudo-science for non-scientific goals.

I could rant that Sagan was a hypocrite, because he cynically exploited the very ignorance he expressed concern about. However, my guess is that he saw what he thought was a dangerous situation developing in an ignorant world–a “demon-haunted” one at that. Perhaps the only way he knew how to deal with it in such “dire circumstances” was to “take what existed,” promote a scenario that was not based on much, which we ignoramuses would believe, and cynically exploit the good name of science (and his own good name) so that we would pull back from the brink. It is elitist, though I can understand the temptation.

If we really want to bring people out of ignorance it’s best to try to educate them, even though that can be hard. Sometimes people just don’t want to hear it. But if this approach is not taken, then it’s just a bunch of elites messing with people’s heads so we’ll give them a response they want. We won’t be any more enlightened. There’s too much of that already.

I guess another lesson is that even though we can see ignorance in people, when the human spirit is brought out it can manifest solutions to problems in ways that people like me would not anticipate, and things work out okay. That in essence is the genius of semi-autonomous systems like ours that have diffused power structures. It acknowledges that no one person, or group, has all the right answers. The same is true of science, when it’s done well, and relatively free markets. It’s best if we respect that, even though we may be tempted to subvert these systems for causes we ourselves deem noble.

Even so, I feel as though we put too much faith in our semi-autonomous, diffused systems. Some of us think they will solve all problems, and it’s not necessary to worry about being well educated. I think people push aside the idea too casually that more sophisticated ways of thinking and perceiving would help all of us (not just a few) make those systems more optimal.

So what are we to do?

The tenets of skepticism do not require an advanced degree to master, as most successful used car buyers demonstrate. The whole idea of a democratic application of skepticism is that everyone should have the essential tools to effectively and constructively evaluate claims to knowledge. All science asks is to employ the same levels of skepticism we use in buying a used car or in judging the quality of analgesics or beer from their television commercials.

But the tools of skepticism are generally unavailable to the citizens of our society. They’re hardly ever mentioned in the schools, even in the presentation of science, its most ardent practitioner, although skepticism repeatedly sprouts spontaneously out of the disappointments of everyday life. Our politics, economics, advertising, and religions (New Age and Old) are awash in credulity. Those who have something to sell, those who wish to influence public opinion, those in power, a skeptic might suggest, have a vested interest in discouraging skepticism.

— Carl Sagan
I noted as I wrote this post that both Sallie Baliunas and Carl Sagan said that science needed special protection in our society. Richard Lindzen indicates that this protection is paper thin. He said it’s unfortunately easy to co-opt science in our society. The only way science can be protected in my view is if we value it for what it really is. Students need to be taught that, and shown its beauty. Sagan said a key thing in the Charlie Rose interview: “Science is more than a body of knowledge. It’s a way of thinking.” I must admit some ignorance about this, but what little I’ve heard about science education in schools now indicates that it’s taught almost strictly as a body of knowledge. I suspect this is because of No Child Left Behind and standardized testing. I remember a CS professor saying a while back that in his kid’s science class there was a lot of workbook material, but very little experimentation, because the teachers were afraid to allow their students to do experiments. He didn’t explain why. My suspicion is they didn’t want the students to come to their own conclusions about what they had seen, and possibly get “confused” about what they’re supposed to know for their tests. In any case I remember exclaiming to the professor that he should get his child out of that class! I asked rhetorically, “What do they think they’re teaching?”

Even when I look at my own science education I realize that I wasn’t given a complete sense of what science was. The hypotheses were practically given to us. When we did an experiment, the steps for it were always given to us. We were always given the “correct” answer in the end, so we could compare it against the answer we came up with. This is how we calculated error: we compared the answer we had against the “correct” answer. One thing that was valuable was that we were asked to think of reasons why the error occurred. Some error always existed (it always does in real science), and we could speculate that maybe our instruments introduced some error, or that we did the procedure a bit wrong, etc. This was teaching only one part of real science: observation, being skeptical of our observations, and recognizing human fallibility. That’s valuable. On the other hand, what it also taught was the fallacy that there was a perfectly correct answer, which was achieved via mathematics formulated by “masters of science.” There’s so much more to it than that. In real science, scientists come up with their own hypotheses. They design their own experiments. When they get their results they have nothing to compare them against, unless they’re reproducing someone else’s experiment. Even then they can’t just say “the results in the original experiment are the correct answer,” because the other experiment may have had unrecognized flaws, too. The mathematical formulas we used did not become so good in one shot. They came about from scientists making a lot of mistakes, realizing what they were, and correcting for them.

I wonder how real scientists figure out what error figures/bars to put in their results. Maybe they could come from instrument ratings, or probabilities, based on an examination of the scale of the observation.
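As a rough sketch of the difference (the measurements below are made up): the school-lab method compares your result to a given “correct” answer, while one common working-scientist method, the standard error of the mean, estimates uncertainty from the spread of repeated measurements, with no answer key required. Instrument ratings would contribute a separate, systematic term on top of this.

```python
# Two ways of putting a number on error, using hypothetical repeated
# measurements of g (m/s^2) from a pendulum lab.
import statistics

measurements = [9.74, 9.86, 9.79, 9.91, 9.77, 9.83]
mean = statistics.mean(measurements)

# School-lab style: percent error against the accepted value.
accepted = 9.81
percent_error = abs(mean - accepted) / accepted * 100

# Working-scientist style: the standard error of the mean, derived
# from the scatter of the measurements themselves.
std_err = statistics.stdev(measurements) / len(measurements) ** 0.5

print(f"mean = {mean:.2f}, percent error = {percent_error:.2f}%")
print(f"mean = {mean:.2f} +/- {std_err:.2f} (standard error)")
```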

Anyway, science is really about wondering, exploring, being curious, being skeptical of your own observations, as well as those of others. It also takes into account what’s been discovered previously. Alan Kay has talked about how there’s also a kind of critical argument that goes on in science, where the weak ideas are identified and set aside, and the strong ones are allowed to rise to the top.

So much stress is put on the need for math and science education for our country’s future economic health. It’s necessary for our society’s general health, too. I hope someday we will recognize that.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

Update 8-17-09: I’ve revised this post a bit to clarify some points I made.

I received a request 2-1/2 weeks ago to write a post based on video of a speech that Alan Kay gave at Kyoto University in February, titled “Systems Thinking For Children And Adults.” Here it is. The volume in the first 10 minutes of the video is really low, so you’ll probably need to turn up your volume. The volume in the video gets readjusted louder after that.

Edit 1-2-2014: See also Kay’s 2012 speech, “Normal Considered Harmful,” which I’ve included at the bottom of this post.

On the science of computer science

Kay said what he means by a science of computing is the forward-looking study, understanding, and invention of computing. Will the “science” come to mean something like the other real sciences? Or will it be like library and social science, which means a gathering of knowledge? He said this is not the principled way that physics, chemistry, and biology have been able to revolutionize our understanding of phenomena. Likewise, will we develop a software engineering that is like the other engineering disciplines?

I’ve looked back at my CS education with more scrutiny, and given what I’ve found, I’m surprised that Kay is asking this question. Maybe I’m misunderstanding what he said, but for me CS was a gathering of knowledge a long time ago. The question for me is whether it can change from that into a real science. Perhaps he’s asking about the top universities.

When I took CS as an undergraduate in the late 80s/early 90s it was clear that some research had gone into what I was studying. All the research was in the realm of math. There was no sense of tinkering with the architectures that existed, and very little practice in analyzing them. We were taught to apply a little analysis to algorithms. There was no sense of trying to create new architectures. What we were given was pre-digested analysis of what existed. So it had gotten as far as exposing us to the “TEM” parts (of “TEMS”–Technology, Engineering, Mathematics, Science), but only in a narrow band. The (S)cience was non-existent.

What we got instead is what I’d call “small science” in the sense that we had lots of programming labs where we experimented with our own knowledge of how to write programs that worked; how to use, organize, and address memory; and how to manage complexity in our software. We were given some strategies for doing this, which were taught as catechisms. The labs gave us an opportunity to see where those strategies were most effective. We were sometimes graded on how well we applied them.

We got to experience a little bit of how computers could manipulate symbols, which I thought was real interesting. I wished that there would’ve been more of that.

One of the tracks I took while in college was focused on software engineering, which really focused on project management techniques and rules of thumb. It was not a strong engineering discipline backed by scientific findings and methods.

When I got out into the work world I felt like I had to “spread the gospel,” because what IT shops were doing was ad hoc, worse than the methodologies I was taught. I was bringing them “enlightenment” compared to what they were doing. The nature and constraints of the workplace broke me out of this narrow-mindedness, and not always in good ways.

It’s only been by doing a lot of thinking about what I’ve learned, and about my POV of computers, that I’ve been able to see that what I got out of CS was a gathering of knowledge with some best practices. At the time I had no concept that I was only getting part of the picture, even though our CS professors openly volunteered, with a wry humor, that “computer science is not a science.” They compared the term “computer science” to “social science” in the sense that it was an ill-defined field. There was the expectation that it would develop into something more cohesive, hopefully a real science, later on. Given the way we were taught, though, I guess they expected it to develop with no help from us.

Kay has complained previously that the commercial personal computing culture has contributed greatly to the deterioration of CS in academia. I have to admit I was a case in point. A big reason why I thought of CS the way I did was this culture I grew up in. Like I’ve said before, I have mixed feelings about this, because I don’t know if I would be a part of this field at all if the commercial culture he complains about never existed.

What I saw was that people were discouraged from tinkering with the hardware, seeing how it worked, much less trying to create their own computers. Not to say this was impossible, because there were people who tinkered with 8- and 16-bit computers. Of course, as I think most people in our field still know, Steve Wozniak was able to build his own computer. That’s how he and Steve Jobs created Apple. Computer kits were kind of popular in the late 1970s, but that faded by the time I really got into it.

When I used to read articles about modifying hardware there was always the caution, “Be careful or you could hose your entire machine.” These machines were expensive at the time. There were the horror stories about people who tried some machine language programming and corrupted the floppy disk that had the only copy of their program on it (i.e., hours and hours of work). So people like me didn’t venture into the “danger zone.” Companies (except for Apple with the Apple II) wouldn’t tell you about the internals of their computers without NDAs and licensing agreements, which I imagine one had to pay for handsomely. Instead we were given open access to a layer we could experiment on, which was the realm of programming, either in assembly or a HLL. There were books one could get that would tell you about memory locations for system functions, and how to manipulate features of the system in software. I never saw discussion of how to create a software computer, for example, that one could tinker with, but then the hardware probably wasn’t powerful enough for that.

By and large, CS fit the fashion of the time. The one exception I remember is that in the CS department’s orientation/introductory materials they encouraged students to build their own computers from kits (this was in the late 1980s), and try writing a few programs, before entering the CS program. I had already written plenty of my own programs, but as I said, I was intimidated by the hardware realm.

My education wasn’t the vocational school setting that it’s turning into today, but it was not as rigorous as it could have been. It met my expectations at the time. What gave me a hint that my education wasn’t as complete as I thought was that opportunities which I thought would be open to me were not available when I looked for employment after graduation. The hint was there, but I don’t think I really got it until a year or two ago.

What would computing as a real science be like?

I attended the 2009 Rebooting Computing summit on CS education in January, and one of the topics discussed was what is the science of computer science? In my opinion it was the only topic brought up there that was worth discussing at that time, but that’s just me. The consensus among the luminaries that participated was that historically science has always followed technology and engineering. The science explains why some engineering works and some doesn’t, and it provides boundaries for a type of engineering.

We asked the question, “What would a science of computing look like?” Some CS luminaries used an acronym “TEMS” (Technology, Engineering, Mathematics, Science), and there seemed to be a deliberate reason why they had those terms in that order. In other circles it’s often expressed as “STEM.” Technology is developed first. Some engineering gets developed from patterns that are seen (best practices). Some math can be derived from it. Then you have something you can work with, experiment with, and reason about–science. The part that’s been missing from CS education is the science itself: experimentation, an interest and proclivity to get into the guts of something and try out new things with whatever–the hardware, the operating system, a programming language, what have you–just because we’re curious. Or, we see that what we have is inadequate and there’s a need for something that addresses the problem better.

Alan Kay participated in the summit and gave a description of a computing science that he had experienced at Xerox PARC. They studied existing computing artifacts, tried to come up with better architectures that did the same things as the old artifacts, and then applied the new architectures to “everything else.” I imagine that this would test the limits of the architecture, and provide more avenues for other scientists to repeat the process (take an artifact, create a new architecture for it, “spread it everywhere”) and gain more improvement.

(Update 12-14-2010: I added the following 3 paragraphs after finding Dan Ingalls’s “Design Principles Behind Smalltalk” article. It clarifies the idea that a science of computing was once attempted.)

Dan Ingalls gave a brief description of a process they used at Xerox to drive their innovative research, in “Design Principles Behind Smalltalk,” published in Byte Magazine in 1981:

Our work has followed a two- to four-year cycle that can be seen to parallel the scientific method:

  • Build an application program within the current system (make an observation)
  • Based on that experience, redesign the language (formulate a theory)
  • Build a new system based on the new design (make a prediction that can be tested)

The Smalltalk-80 system marks our fifth time through this cycle.

This parallels a process that was once described to me in computer science, called “bootstrapping”: Building a language and system “B” from a “lower form” language and system “A”. What was unique here was they had a set of overarching philosophies they followed, which drove the bootstrapping process. The inventors of the C language and Unix went through a similar process to create those artifacts. Each system had different goals and design philosophies, which is reflected in their design.

To give you a “starter” idea of what this process is like, read the introduction to Design Patterns, by Gamma, Helm, Johnson, and Vlissides, and take note of how they describe coming up with their patterns. Then notice how widely those patterns have been applied to projects that have nothing to do with what the Gang of Four originally created with the patterns they came up with. The exception here is the Gang of Four didn’t redesign a language as part of their process. They invented a terminology set, and a concept of coding patterns that are repeatable. They established architectural patterns to use within an existing language. The thing is, if they had explored what was going on with the patterns mathematically, they might very well have been able to formulate a new architecture that encompassed the patterns they saw, which if successful, would’ve allowed them to create a new language.
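To make “a repeatable architectural pattern within an existing language” concrete, here is a minimal sketch of one Gang of Four pattern, Observer, in Python. The class names are illustrative, not taken from the book:

```python
# A minimal Observer pattern: a subject notifies registered observers
# whenever its state changes. The same structure recurs wherever one
# part of a program must react to changes in another (GUIs, caches,
# event systems), which is why it could be named and catalogued.

class Subject:
    """Holds state; notifies registered observers when it changes."""
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:
            obs.update(state)

class Logger:
    """One concrete observer: records every state change it sees."""
    def __init__(self):
        self.seen = []

    def update(self, state):
        self.seen.append(state)

subject = Subject()
log = Logger()
subject.attach(log)
subject.set_state("warming")
subject.set_state("cooling")
print(log.seen)  # both state changes were observed
```

Note that the pattern lives entirely within an existing language, which is the limitation discussed above: the structure is repeated by hand each time, rather than being absorbed into a new architecture.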

The problem in our field is illustrated by the fact that more often than not, there’s been no study of the other technologies where these patterns have been applied. When the Xerox Learning Research Group came up with Smalltalk (object-orientation, late-binding, GUI), Alan Kay expected that others would use the same process they did to improve on it, but instead people either copied OOP into less advanced environments and then used them to build practical software applications, or they built applications on top of Smalltalk. It’s just as I described with my CS education: The strategies get turned into a catechism by most practitioners. Rather than studying our creations, we’ve just kept building upon and using the same frameworks, and treating them with religious reverence–we dare not change them lest we lose community support. Instead of looking at how to improve upon the architecture of Smalltalk, people adopted OOP as a religion. This happened and continues to happen because almost nobody in the field is being taught to apply mathematical and scientific principles to computing (except in the sense of pursuing proofs of computability), and there’s little encouragement from funding sources to carry out this kind of research.

There are degrees of scientific thinking that software developers use. One IT software house where I worked for a year used patterns on a regular basis that we created ourselves. We understood the essential idea of patterns, though we did not understand the scientific principles Kay described. Nowhere else I worked used design patterns at all.

There has been more movement in the last few years to break out of the confines developers have been “living” in; to try and improve upon fundamental runtime/VM architecture, and build better languages on top of it. This is good, but in reality most of it has just recapitulated language features that were invented decades ago through scientific approaches to computing. There hasn’t been anything dramatically new developed yet.

The ignorance we ignore

“What is the definition of ignorance and apathy?”

“I don’t know, and I don’t care.”

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well.

— Alan Kay, The Early History of Smalltalk

A fundamental problem with our field is there’s very little appreciation for good architecture. We keep tinkering with old familiar structures, and produce technologies that are marginally better than what came before. We assume that brute force will get us by, because it always has in the past. This ignores a lot. One reason that brute force has been able to work productively in the past is because of the work of people who did not use brute force, creating: functional programming, interactive computing, word processing, hyperlinking, search, semiconductors, personal computing, the internet, object-oriented programming, IDEs, multimedia, spreadsheets, etc. This kind of research went into decline after the 1970s and has not recovered. It’s possible that the brute force mentality will run into a brick wall, because there will be no more innovative ideas to save it from itself. A symptom of this is the hand-wringing I’ve heard for a few years now over how to leverage multiple CPU cores, though Kay thinks this is the wrong direction to look in for improvement.

There’s a temptation to say “more is better” when you run into a brick wall. If one CPU core isn’t providing enough speed, add another one. If the API is not to your satisfaction, just add another layer of abstraction to cover it over. If the language you’re using is weak, create a large API to give it lots of functionality and/or a large framework to make it easier to develop apps in it. What we’re ignoring is the software architecture (all of it, including the language(s) and OS), and indeed the hardware architecture. These are the two places where we put up the greatest resistance to change. I think it’s because we acknowledge to ourselves that the people who make up our field by and large lack some basic competencies that are necessary to reconsider these structures. Even if we had the competencies, nobody would be willing to fund us for the purpose of reconsidering said structures. We’re asked to build software quickly, and with that as the sole goal we’ll never get around to reconsidering what we use. We don’t like talking about it, but we know it’s true, and we don’t want to bother gaining those competencies, because they look hard and confusing. It’s just a suspicion I have, but I think an understanding of mathematics and the scientific outlook is essential to the process of invention in computing. We can’t get to better architectures without it. Nobody else seems to mind the way things are going. They accept it. So there’s no incentive to try to bust through some perceptual barriers to get to better answers, except the sense that some of us have that what’s being done is inadequate.

In his presentation in the video, Kay pointed out the folly of brute force thinking by showing how humongous software gets with this approach, and how messy the software is architecturally. He said that the “garbage dump” that is our software is tolerated because most people can’t actually see it. Our perceptual horizons are so limited that most people can only comprehend a piece of the huge mess, if they’re able to look at it. In that case it doesn’t look so bad, but if we could see the full expanse, we would be horrified.

This clarifies what had long frustrated me about IT software development, and I’m glad Kay talked about it. I hated the fact that my bosses often egged me on towards creating a mess. They were not aware they were doing this. Their only concern was getting the computer to do what the requirements said in the quickest way possible. They didn’t care about the structure of it, because they couldn’t see it. So to them it was irrelevant whether it was built well or not. They didn’t know the difference. I could see the mess, at least as far as my project was concerned. It eventually got to the point that I could anticipate it, just from the way the project was being managed. So often I wished that the people managing me or my team, and our customers, could see what we saw. I thought that if they did they would recognize the consequences of their decisions and priorities, and we could come to an agreement about how to avoid it. That was my “in my dreams” wish.

Kay said, “Much of the applications we use … actually take longer now to load than they did 20 years ago.” I’ve read about this (h/t to Paul Murphy). (Update 3-25-2010: I used to have video of this, but it was taken down by the people who made it. So I’ll just describe it.) A few people did a side-by-side test of a 2007 Vista laptop with a dual-core Intel processor (I’m guessing 2.4 GHz) and 1 GB of RAM vs. a Mac Classic II with a 16 MHz Motorola 68030 and 2 MB of RAM. My guess is the Mac was running a SCSI hard drive (the only kind you could install on a Mac when they were made back then). I didn’t see them insert a floppy disk. They said in the video that the Mac is “1987 technology.” Even though the Mac Classic II was introduced in 1991, they’re probably referring to the 68030 CPU it uses, which came out in 1987.

The tasks the two computers were given were to boot up, load a document into a word processor, quit out of the word processor, and shut down the machine. The Mac completed the contest in 1 minute, 42 seconds, 25% faster than the Vista laptop, which took 2 minutes, 17 seconds. Someone else posted a demonstration video in 2009 of a Vista desktop PC which completed the same tasks in 1 minute, 18 seconds–23% faster than the old Mac. I think the difference was that the laptop vs. Mac demo likely used a slower processor for the laptop (vs. the 2009 demo), and probably a slower hard drive. The thing is, though, it probably took a 4 GHz dual-processor (or quad-core?) computer, faster memory, and a faster hard drive to beat the old Mac. To be fair, I’ve heard from others that the results would be much the same if you compared a modern Mac to the old Mac. The point is not which platform is better. It’s that we’ve made little progress in responsiveness.

The wide gulf between the two pieces of hardware (old Mac vs. Vista PC) is dramatic. The hardware got about 15,000-25,000% faster via Moore’s Law (though Moore’s Law strictly applies to transistor counts, not speed), but we have not seen a commensurate speed-up in system responsiveness. In some cases the newer software technology is slower than what existed 22 years ago.
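For what it’s worth, the figures above can be checked with a little arithmetic (the 2.4 GHz laptop clock speed is my guess, as noted earlier):

```python
# Sanity-check of the benchmark figures quoted above.
mac_seconds = 1 * 60 + 42         # Mac Classic II: 1:42
vista_laptop = 2 * 60 + 17        # 2007 Vista laptop: 2:17
vista_desktop = 1 * 60 + 18       # 2009 Vista desktop: 1:18

# Percent less time the faster machine took in each matchup:
print(int((1 - mac_seconds / vista_laptop) * 100))    # → 25 (Mac vs. laptop)
print(int((1 - vista_desktop / mac_seconds) * 100))   # → 23 (desktop vs. Mac)

# Raw clock-speed ratio, 2.4 GHz vs. 16 MHz:
print(2.4e9 / 16e6)  # → 150.0, i.e. 15,000%
```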

When Kay has elaborated on this in the past he’s said this is also partly due to the poor hardware architecture which was adopted by the microprocessor industry in the 1970s, and which has been marginally improved over the years. Quoting from an interview with Alan Kay in ACM Queue in 2004:

Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea [referring to the Burroughs B5000 computer].

Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.

Another factor is the configuration and speed of main and cache memory. Cache is built into the CPU, is directly accessed by it, and is very fast. Main memory has always been much slower than the CPU in microcomputers. The CPU spends the majority of its time waiting for memory, or data to stream from a hard drive or internet connection, if the data is not already in cache. This has always been true. It may be one of the main reasons why no matter how fast CPU speeds have gotten the user experience has not gotten commensurately more responsive.
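One rough way to see why memory latency dominates is a simple average-access-time model. The latency numbers below are illustrative assumptions, not measurements of any particular machine:

```python
# Average memory access time under a simple hit-rate model:
#   t_avg = hit_rate * t_cache + (1 - hit_rate) * t_main
# The nanosecond figures are illustrative assumptions only.

def avg_access_time(hit_rate, t_cache_ns, t_main_ns):
    return hit_rate * t_cache_ns + (1 - hit_rate) * t_main_ns

# Even with a 95% cache hit rate, slow main memory dominates:
# the average access is roughly 6x the cache latency.
print(avg_access_time(0.95, 1.0, 100.0))
```

The lesson of the model is that the small fraction of misses, multiplied by the large main-memory latency, swamps the fast case, which is one way to see why raw CPU clock speed alone hasn’t made systems feel commensurately faster.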

The revenge of data processing

The section of Kay’s speech where he talks about “embarrassing questions” from his wife (the slide is subtitled “Two cultures in computing”) gets to a complaint I’ve had for a while now. There have been many times while I’m writing for this blog when I’ve tried to “absent-mindedly” put the cursor somewhere on a preview page and start editing, but then realize that the technology won’t let me do it. You can always tell when a user interaction issue needs to be addressed when your subconscious tries to do something with a computer and it doesn’t work.

Speaking about her GUI apps, his wife said, “In these apps I can see and do full WYSIWYG authoring.” She said about web apps, “But with this stuff in the web browser, I can’t–I have to use modes that delay seeing what I get, and I have to start guessing,” and, “I have to edit through a keyhole.” He said what’s even more embarrassing is that the technology she likes was invented in the 1970s (things like word processing and desktop publishing in a GUI with WYSIWYG–aspects of the personal computing model), whereas the stuff she doesn’t like was invented in the 1990s (the web/terminal model). Actually, I have to quibble a bit with him about the timeline.

“The browser = next generation 3270 terminal”
3270 terminal image from Wikipedia.org

The technology she doesn’t like–the interaction model–was invented in the 1970s as well. Kay mentioned 3270 terminals earlier in his presentation. The web browser with its screens and forms is an extension of the old IBM mainframe batch terminal architecture from the 1970s. The difference is one of culture, not time. Quoting from the Wikipedia article on the 3270:

[T]he Web (and HTTP) is similar to 3270 interaction because the terminal (browser) is given more responsibility for managing presentation and user input, minimizing host interaction while still facilitating server-based information retrieval and processing.

Applications development has in many ways returned to the 3270 approach. In the 3270 era, all application functionality was provided centrally. [my emphasis]

It’s true that the appearance of the browser, and user interaction with it, have changed a lot from the 3270 days: a nicer presentation (graphics and fonts vs. green-screen text), and the ability to use a mouse with the browser UI (vs. keyboard-only with the 3270). It features document composition, which comes from the GUI world; the 3270 did not. Early on, a client scripting language, Javascript, was added, which enabled things to happen in the browser without requiring server interaction. The 3270 had no scripting language.

There’s more freedom than the 3270 allowed. You can point your browser anywhere you want, whereas a 3270 was designed to be hooked up to one IBM mainframe, and the user was not allowed to “roam” on other systems with it, except if given permission via mainframe administration policies. What’s the same, though, is the basic interaction model: filling in a form on the client end, sending that information to the server in a batch, and then receiving a response form.
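To make the batch interaction model concrete, here is a toy simulation of it. Nothing below is a real 3270 or HTTP API; it is only a sketch of the fill-form/submit/receive-form cycle:

```python
# Toy model of batch form interaction: the client fills in a whole
# form locally, submits it in one shot, and gets a response form back.
# All application logic lives centrally, on the "server" side.

def server_handle(form):
    """Central handler: validates the submitted batch, returns a response form."""
    if not form.get("name"):
        return {"error": "name is required", "form": form}
    return {"message": f"Hello, {form['name']}", "next_form": {"name": ""}}

# Client side: every field is filled before any server interaction occurs.
filled = {"name": "Ada"}
response = server_handle(filled)
print(response["message"])  # → Hello, Ada
```

Notice that the client sees no effect of its input until the whole form comes back: that round-trip delay is exactly the “keyhole” quality being described, in contrast to the immediacy discussed next.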

The idea that Kay brought to personal computing was immediacy. You immediately see the effect of what you are doing in real time. He saw it as an authoring platform. The browser model, at least in its commercial incarnation, was designed as a form submission and publishing platform.

Richard Gabriel wrote extensively about the difference between using an authoring platform and a publishing platform for writing, and its implications from the perspective of a programmer, in The Art of Lisp and Writing. The vast majority of software developers work in what is essentially a “publishing” environment, not an authoring environment, and this has been the case for most of software development’s history. The closest most developers got to an authoring environment was when Basic interpreters were installed on microcomputers, and Basic became the most popular language among programmers. The old Visual Basic offered the same environment. The interpreter got us closer to an authoring environment, but it still had one barrier: you either had to be editing code or running your program. You could not do both at the same time, though there was less of a wait to see your code run because there was no compile step. Unfortunately, with the older Basics, the programmer’s power was limited.
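The contrast can be sketched as the difference between a batch edit-compile-run cycle and a read-eval-print loop. Here is a minimal toy of the “immediate” side in Python; this is a sketch only, using eval on a fixed list of inputs, and real authoring environments go far beyond this:

```python
# Minimal read-eval-print loop: each expression's result is visible
# the moment it is entered, with no separate compile-then-run step.

def repl(expressions):
    env = {}       # a persistent environment shared across entries
    results = []
    for expr in expressions:
        results.append(eval(expr, env))  # evaluated immediately
    return results

print(repl(["2 + 3", "'ab' * 2"]))  # → [5, 'abab']
```

Even this toy shows the key property: there is no mode switch between “editing” and “running,” which is the barrier the older Basic interpreters still had.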

People’s experience with the browser has been very slowly coming back towards the idea of authoring via the AJAX kludge. Kay showed a screenshot of a more complete authoring environment inside the browser using Dan Ingalls’s Lively Kernel. It looks and behaves a lot like Squeak/Smalltalk. The programming language within Lively is Javascript.

We’ve been willing to sacrifice immediacy to get rid of having to install apps on PCs. App installation is not what Kay envisioned with the Dynabook, but that’s the model that prevailed in the personal computer marketplace. You know you’ve got a problem with your thinking if you’re coming up with a bad design to compensate for past design decisions that were also bad. The bigger problem is that our field is not even conscious that the previous bad design was avoidable.

Kay said, “A large percentage of the main software that a lot of people use today, because it’s connected to the web, is actually inferior to stuff that was done before, and for no good reason whatsoever!”

I agree with him, but there are people who would disagree on this point. They are not enlightened, but they’re a force to be reckoned with.

What’s happened with the web is the same thing that happened to PCs: The GUI was seen as a “good idea” and was grafted on to what has been essentially a minicomputer, and then a mainframe mindset. Blogger Paul Murphy used to write, from a business perspective, about how there are two cultures in computing/IT that have existed for more than 100 years. The oldest culture that’s operated continuously throughout this time is data processing. This is the culture that brought us punch cards, mainframes, and 3270 terminals. It brought us the idea that the most important function of a computer system is to encode data, retrieve it on demand, and produce reports from automated analysis. The other culture is what Murphy called “scientific computing,” and it’s closer to what Kay promotes.

The data processing culture has used web applications to recapitulate the style of IT management that existed 30 years ago. Several years ago, I heard from a few data processing believers who told me that PCs had been just a fad, a distraction. “Now we can get back to real computing,” they said. The meme from data processing is, “The data is more important than the software, and the people.” Why? Because software gets copied and devalued. People come and go. Data is what you’ve uniquely gathered and can keep proprietary.

I think this is one reason why CS enrollment has fallen. It’s becoming dead like Latin. What’s the point of going into it if the knowledge you learn is irrelevant to the IT jobs (read “most abundant technology jobs”) that are available, and there’s little to no money going into exciting computing research? You don’t need a CS degree to create your own startup, either. All you need is an application server, a database, and a scripting/development platform, like PHP. Some university CS departments have responded by following what industry is doing in an effort to seem relevant and increase enrollment (i.e., keep the department going). The main problem with this strategy is they’re being led by the nose. They’re not leading our society to better solutions.

What’s natural and what’s better

Kay tried to use some simple terms for the next part of his talk to help the audience relate to two ideas:

He gave what I think is an elegant graphical representation of one of the reasons things have turned out the way they have. He talked about modes of thought, and said that 80% of people are outer-directed and instrumental reasoners. They only look at ideas and tools in the context of whether they meet their current goals. They’re very conservative, seeking consensus before making a change. Their goals are the most important thing; tools and ideas are only valid if they help meet those goals. This mentality dominates IT. Some would say, “Well, duh! Of course that’s the way they think. Technology’s role in IT is to automate business processes.” That’s the kind of thinking that got us to where we are. As Paul Murphy has written previously, the vision of “scientific computing,” as he calls it, is to extend human capabilities, not replace/automate them.

Kay said that 1% of people look at tools and respond to them, reconsidering their goals in light of what they see as the tool’s potential. This isn’t to say that they accept the tool as it’s given to them, and adjust their goals to fit the limitations of that version of the tool. Rather, they see its potential and fashion the tool to meet it. Think about Lively Kernel, which I mentioned above. It’s a proof of this concept.

There’s a symbiosis that takes place between the tool and the tool user. As the tool is improved, new ideas and insights become more feasible, and so new avenues for improvement can be explored. As the tool develops, the user can rethink how they work with it, and so improve their thinking about processes. As I’ve thought about this, it reminds me of how Engelbart’s NLS developed. The NLS team called it “bootstrapping.” It led to a far more powerful leveraging of technology than the instrumental approach.

The “80%” dynamic Kay described happened to NLS. Most people didn’t understand the technology, how it could empower them, and how it was developed. They still don’t. In fact Engelbart is barely known for his work today. Through the work that was done in the early days of Xerox PARC, a few of his ideas managed to get into technology that we’ve used over the years. He’s most known now for the invention of the mouse, but he did much more than that.

In the instrumental approach the goals precede the tools, and are actually much more conservative than the approach of the 1-percenters. In the instrumental scenario the goals people have are similar whether the tools exist or not, and so there’s no potential for this human-tool interaction to provide insight into what might be better goals.

Kay talked about this subject at Rebooting Computing, and I think he said that a really big challenge in education is to get students to shift from the “80%” mindset to one of the other modes of thought that have to do with deep thinking, at least exposing students to this potential within themselves. I think he would say that we’re not predisposed to only think one way. It’s just that left to our own devices we tend to fall into one of these categories.

I base the following on the notes Kay had on his slides in his speech:

He said that in the commercial computing realm (which is based on the way mainframes were sold to the public years ago) it’s about “news”: The emphasis is on functionality, and people as components. This approach also says you get your hardware, OS, programming language, tools, and user interface from a vendor. You should not try to make your own. This sums up the attitude of IT in a nutshell. It’s not too far from the mentality of CS in academia, either.

The “new” in the 1970s was a focus on the end-user and what they can learn and do. “We should design how user interactions and learning will be done and work our way down to the functions they need!” In other words, rather than thinking functionality-first, think user-first–in the context of the computer being a new medium, extending human capabilities. Aren’t “functionality” and “user” equally high priorities? Users just want functionality from computers, don’t they? Ask yourself this question: Is a book’s purpose only to provide functionality? The idea at Xerox PARC was to create a friendly, inviting, creative environment that could be explored, built on, and used to create a better version of itself–an authoring platform, in a very deep sense.

Likewise, with Engelbart’s NLS, the idea was to create an information sharing and collaboration environment that improved how groups work together. Again, the emphasis was on the interaction between the computer and the user. Everything else was built on that basis.

By thinking functionality-first you turn the computer into a device, or a device server, speaking metaphorically. You’re not exposing the full power of computing to the user. I’m going to use some analogies which Chris Crawford has written about previously. In the typical IT mentality, the idea is that it’s too dangerous to expose the full power of computing to end users. They’ll mess things up, either on purpose or by mistake. Or they’ll steal or corrupt proprietary information. Such low expectations… It’s as if end users were children, or illiterate peasants who can’t be trusted with the knowledge of how to read or write. They must be read to aloud rather than reading for themselves. Most IT environments believe end users cannot be allowed to write, for they are not educated enough to write in the computer’s complex language system (akin to hieroglyphics). Some allow their workers to do some writing, but in limited, and not very expressive, languages. This “limits their power to do damage.” If they were highly educated, they wouldn’t be end users; they’d be scribes, writing what others will hear. Further, these “illiterates” are never taught the value of books (again, using a metaphor). They are only taught to believe that certain tomes, written by “experts” (scribes), are of value. It sounds Medieval if you ask me. Not that this is entirely IT’s fault. Computer science has fallen down on the job by not creating better language systems, for one thing. Our whole educational/societal thought structure also makes it difficult to break out of this dynamic.

The “news” way of thinking has fit people into specialist roles that require standardized training. The “new” emphasized learning by doing and general “literacy.”

We’re in danger of losing it

Everyone interested in seeing what the technology developed at ARPA and Xerox PARC (the internet, personal computing, object-oriented programming) was intended to represent should pay special attention to the part of Kay’s speech titled “No Gears, No Centers: ARPA/PARC Outlook.” Kay told me about most of this a couple years ago, and I incorporated it into a guest post I wrote for Paul Murphy’s blog, called “The tattered history of OOP” (see also “The PC vision was lost from the get-go”). Kay shows it better in his presentation.

I think the purpose of Kay’s criticism of what exists now is to point out what we’ve lost, and continue to lose. Since our field doesn’t understand the research that created what we have today, we’ve helplessly taken it for granted that what we have, and the method of conservative incremental improvement we’ve practiced, will always be able to handle future challenges.

Kay said that because of what he’s seen with the development of the field of computing, he doesn’t believe what we have in computer science is a real field. And we are in danger of losing altogether any remnants of the science that was developed decades ago. This is because it’s been ignored. The artifacts that are based on that research are all around us. We use them every day, but the vast majority of practitioners don’t understand the principles and outlook that made it possible to create them. This isn’t a call to reinvent the wheel, but rather a qualitative statement: We’re losing the ability to make the kind of leap that was made in the 1970s, to go beyond what that research has brought us. In fact, in Kay’s view, we’re regressing. The potential scenario he paints reminds me of the science fiction stories where people use a network of space warping portals for efficient space travel and say, “We don’t know who built it. They disappeared a long time ago.”

I think there are three forces that created this problem:

The first was the decline in funding for interesting and powerful computing research. What’s become increasingly clear to me from listening to CS luminaries who were around when this research was being done is that the world of personal computing and the internet we have today is an outgrowth–you could really say an accident–of defense funding that took place during the Cold War–ARPA, specifically research funded by the IPTO (the Information Processing Techniques Office). This isn’t to say that ARPA was like a military operation. Far from it. When it started, it was staffed by academics, and they were given wide latitude in which to do their research on computing. An important ingredient of this was the expectation that researchers would come up with big ideas, ones that were high risk. We don’t see this today.

The reason this computing research got as much funding as it did was due to the Sputnik launch. This got Americans to wake up to the fact that mathematics, science, and engineering were important, because of the perception that they were important for the defense of the country. The U.S. knew that the Soviets had computer technology they were using as part of their military operations, and the U.S. wanted to be able to compete with them in that realm.

In terms of popular perception, there’s no longer a sense that we need technical supremacy over an enemy. However, computing is still important to national defense, even taking the war against Al Qaeda into account. A few defense leaders and prominent technologists in the know have said that Al Qaeda is a very early adopter of new technologies, and an innovator in their use for its own ends.

In 1970 computing research split into part government-funded and part private, with some ARPA researchers moving to the newly created PARC facility at Xerox. This is where they created the modern form of personal computing, some aspects of which made it into the technologies we’ve been using since the mid-1980s. The groundbreaking work at PARC ended in the early 1980s.

The second force, which Kay identified, was the rise of the commercial market in PCs. He’s said that computers were introduced to society before we were ready to understand what’s really powerful about them. This market invited us in, and got us acquainted with very limited ideas about computing and programming. We went into CS in academia with an eye towards making it big like Bill Gates (never mind that Gates is actually a college drop-out), and in the process of trying to accommodate our limited ideas about computing, the CS curriculum was dumbed down. I didn’t go into CS for the dream of becoming a billionaire, but I had my eye on a career at first.

It’s been difficult for universities in general to resist this force, because it’s pretty universal. The majority of parents of college-bound youth have believed for decades that a college degree means a secure economic future–meaning a job, a career that pays well. It’s not always true, but it’s believed in our culture. This sets expectations for universities to be career “launching pads” and centers for social networking, rather than institutions that develop a student’s faculties, their ability to think, and expose them to high level perspectives to improve their perception.

The third force is, as Paul Murphy wrote, the imperative of IT since the earliest days of the 20th century, which is to use technology to extend and automate bureaucracy. I don’t see bureaucracy as a totally bad thing. I think of Alexander Hamilton’s quote about public debt, rephrased as, “A bureaucracy, if not excessive, will be to us a blessing.” However, in its traditional role it creates and follows rules and procedures formulated in a hierarchy. There’s accountability for meeting directives, not broad goals. Its thinking is deterministic and algorithmic. IT, being traditionally a bureaucratic function, models this (we should be asking ourselves, “Does it have to be limited to this?”). My sense of it is that CS responded to the economic imperatives. It felt the need to lean towards what industry was doing, and how it thinks. The way it tried to add value was by emphasizing optimization strategies. The difference is that CS used to have a strong belief in staying true to some theoretical underpinnings, which somewhat counter-balanced the strong pull from industry. That resolve is slipping.

Kay and I have talked a little about math and science education. He said that our educational system has long viewed math and science as important, and so these subjects have always been taught, but our school system hasn’t understood their significance. So the essential ideas of both tend to get lost. What the computer brings to light as a new medium is that math (as mathematicians understand it, not how it’s typically taught) and science are essential for understanding computing’s potential. They are foundations for a new literacy. Without this perspective, and the general competencies in math and science to support it, both students and universities will likely continue the status quo.

Outlook: The key ingredient

Kay’s focus on outlook really struck me, because we have such an emphasis in our society on knowledge and IQ. My view of this has certainly evolved as I’ve gotten into my studies.

Whenever you interview for a job in the IT industry you get asked about your knowledge, and maybe some questions that probe your management skills. In technology companies that actually invent stuff you are probed for your mental faculties, particularly at companies known as places “where the tech whizzes work.” I had an interview 5 years ago with a software company where the employer gave me something that resembled an IQ test. I have neither seen nor heard of a technology company that asks questions about one’s outlook.

Kay said,

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that. And in contrast to these two, IQ is just a big lump of lead. It’s one of the worst things in our field that we have clever people in it, because like Leonardo [da Vinci] none of us is clever enough to deal with the scaling problems that we’re dealing with. So we need to be less clever and be able to look at things from better points of view.

This is truly remarkable! I have never heard anyone say this, but I think he may be right. All of the technology that has been developed, that has been used by millions of people, and has been talked about lo these many years was created by very smart people. More often than not what’s been created, though, has reflected a limited vision.

Given how society has perceived computing, I doubt Kay’s vision of human-computer interaction would’ve been able to fly in the commercial marketplace, because, as he’s said, we weren’t ready for it. In my opinion society is even less ready for it now. But certainly the internal and networked computing vision that was developed at PARC could’ve been implemented in machines that the consuming public could’ve purchased decades ago. I think that’s where one could say, “You had your chance, but you blew it.” Vendors didn’t have the vision for it.

What’s needed in the future

One of Kay’s last slides referred to Marvin Minsky. I am not familiar with Minsky, and I guess I should be. He said that in terms of software we need to move from a “biology of systems” (architecture) to a “psychology of systems” (which I’m going to assume for the moment means “behavior”). I don’t really know about this, so I’m just going to leave it at that.

In a chart titled “Software Has Fallen Short” Kay made a clearer case than he has in the past for not idolizing the accomplishments that were made at ARPA/Xerox PARC. In the past he’s tried to discourage people from getting too excited about the PARC stuff, in some cases downplaying it as if it’s irrelevant. He’s always tried to get people to not fall in love with it too much, because he wanted people to improve upon it. He used to complain that the Lisp and Smalltalk communities thought that these languages were the greatest thing, and didn’t dream of creating anything better. Here he explains why it’s important to think beyond the accomplishments at PARC: It’s insufficient for what’s needed in the future. In fact the research is so far behind that the challenges are getting ahead of what the current best research can deal with.

He said that the PARC stuff is now “news,” and my guess is he means “it’s been turned into news.” He talked about this earlier in the speech. The essential ideas that were developed at PARC have not made it into the computing culture, with the exception of the internet and the GUI. Instead some of the “new” ideas developed there have been turned into superficial “news.”

Those who are interested in thinking about the future will find the slide titled “Simplest Idea” interesting. Kay threw out some concepts that are food for thought.

A statement that Kay closed with is one he’s mentioned before, and it depresses me. He said that when he’s traveled around to universities and met with CS professors, they think what they’ve got “is it.” They think they’re doing real science. All I can do is shake my head in disbelief at that. Even my CS professors understood that what they were teaching was not a science. For the most part what’s passing for CS now is no better than what I had–excepting the few top schools.

Likewise, I’ve heard accounts saying that there are some enterprise software leaders who (mistakenly) think they understand software engineering, and that it’s now a fully mature engineering discipline.

Coming full circle

Does computer science have a future? I think as before, computing is going to be at the mercy of events, and people’s emotional perceptions of computing’s importance. This is because the vast majority of people don’t understand what computing’s potential is. Today we see it as “current” not “future.” I think that people’s perception of computing will become more remote. Computing will be in devices we use every day, as we can see today, and we will see them as digital devices, where they used to be analog.

“Digital” has become the new medium, not computing. Computing has been overlaid with analog media (text, graphics, audio, video). Kay has said for a while now that this is how it is with new media. When it arises, the generation that’s around doesn’t see its potential. They see it as the “new old thing” and they just use it to optimize what they did before. Computing for now is just the magical substrate that makes “digital” work. The essential element that makes computing powerful and dynamic has been pushed to the side, with a couple of exceptions, as it often has been.

Kay didn’t talk about this in the video, but he and I have discussed this previously. There used to be a common concept in computing, which has come to be called “generative technology.” It used to be a common expectation that digital technology was open ended. You could do with it what you wanted. This idea has been severely damaged by the way that the web has developed. First, commercial PCs, which were designed to be used alone and in LANs, were thrust upon the internet using native code and careless programming, which didn’t anticipate security risks. In addition, identities became more loosely managed, and this became a problem as people figured out how to hide behind pseudonyms. Second, the web browser wasn’t initially designed as a programmable medium. A scripting language was added to browsers, but it was not made accessible through the browser. The scripting language was not designed very well, either.

Many security catastrophes ensued, each eroding people’s confidence in the safety of the internet, and the image of programming. People who didn’t know how to program, much less how the web worked, felt like they were the victims of those who did know how to program (though this went all the way back to stand-alone PCs as well when mischievous and destructive viruses circulated around in infected executables on BBSes and floppy disks). Programming became seen as a suspicious activity, rather than a creative one. And so as vendors put up defensive barriers to try to compensate for their own flawed designs, it only reinforced the initial design decision that was made when the commercial browser was conceived: that programming would be restricted to people who worked for service providers. Ordinary users should neither expect to gain access to code to modify it, nor want to. It’s gotten to the point we see today where even text formatting on message boards is restricted. HTML tags are restricted or banned altogether in favor of special codes, and you can’t use CSS at all. Programming by the public on websites is usually forbidden, because it’s seen as a security risk. And most web operators don’t see the value of letting people program on their site.

Kay complained that despite the initial idealistic visions of how the internet would develop, it’s still difficult or impossible to send a simulation to somebody on a message board, or through e-mail, to show the framework for a concept. What I think he had envisioned for the internet is a multimedia environment in which people could communicate in all sorts of media: text, graphics, audio, video, and simulations–in real time. Specialized software has made this possible using services you can pay for, but it’s still not something that people can universally access on the internet.

To get to the future we need to look at the world differently

I agree with the sentiment that in order to make the next leaps we need to not accept the world as it appears. We have to get out of what we think is the current accepted reality, what we’ve put up with and gotten used to, and look at what we want the world to be like. We should use that as inspiration for what we do next. One strategy to get that started might be to look at what we find irritating about what currently exists, what we would like to see changed, and think about it from a fundamental, structural perspective. What are some things you’ve tried to do, but given up on, because when you tried to use the tools or resources available they just weren’t up to the task? What sort of technology (perhaps a kind that doesn’t exist yet) do you think would do the job?

For further reading/exploring:

A complete system (apps. and all) in 20KLOC – Viewpoints Research

Demonstration of Lively Kernel, by Dan Ingalls

Edit 8-21-2009: A reader left a comment to an old post I wrote on Lisp and Dijkstra, quoting a speech of Dijkstra’s from 10 years ago, titled “Computing Science: Achievements and Challenges.” Towards the end of his speech he bemoaned the fact that CS in academia and industry in the U.S. had rejected the idea of proving program correctness. He attributed this to an increasing mathematical illiteracy here. I think his analysis of cause and effect is wrong. My own CS professors liked the discipline that proofs of program correctness provided, but they rejected the idea that all programs can and should be proved correct. Dijkstra held the view that CS should only focus on programs that could be proved correct.

I think Dijkstra’s critique of anti-intellectualism in the U.S. is accurate, however, including our aversion to mathematics, and I found that it answered some questions I had about what I wrote above. It also gets to the heart of one of the issues I harp on repeatedly in my blog. His third bullet point is most prescient. Quoting Dijkstra:

  • The ongoing process of becoming more and more an amathematical society is more an American specialty than anything else. (It is also a tragic accident of history.)
  • The idea of a formal design discipline is often rejected on account of vague cultural/philosophical condemnations such as “stifling creativity”; this is more pronounced in the Anglo-Saxon world where a romantic vision of “the humanities” in fact idealizes technical incompetence. Another aspect of that same trait is the cult of iterative design.
  • Industry suffers from the managerial dogma that for the sake of stability and continuity, the company should be independent of the competence of individual employees. Hence industry rejects any methodological proposal that can be viewed as making intellectual demands on its work force. Since in the US the influence of industry is more pervasive than elsewhere, the above dogma hurts American computing science most. The moral of this sad part of the story is that as long as computing science is not allowed to save the computer industry, we had better see to it that the computer industry does not kill computing science. [my emphasis]

Edit 1-2-2014: Alan Kay gave a presentation called “Normal Considered Harmful” in 2012, at the University of Illinois, where he repeated some of the same topics as his 2009 speech above, but he clarified some of the points he made. So I thought it would be valuable to refer people to it as well.

—Mark Miller, https://tekkie.wordpress.com


“A man must learn on this principle, that he is far removed from the truth”
– Democritus

Science is a way of thinking. As Neil deGrasse Tyson has said, “It is a philosophy of discovery.” I reflected recently on what being a scientist is really all about. Good scientists are constantly trying to change their perception of reality. No, they’re not using psychedelic drugs (hopefully). They are rather like art appreciators trying to see what the artist is saying more clearly, except that their method is to guess at what the “artist meant” and then test the guess by going out and using instruments to help them observe the object of the guess more clearly than our native senses can. They share their observations with other scientists so that they can be validated or invalidated by peer review. Think of it as a “sanity check” on what you’ve found. It’s really like reverse-engineering nature if you think about it. I don’t mean to mislead with these analogies. I’m not trying to say that I know there is an “artist” of the world, or an “engineer”. I have my own opinions about that. Science has not found a creator for the world and since I’m talking about science I will respect that here. I’m trying to convey how scientists discover and use different perspectives to try to get at what’s really going on in our world and universe. They try to get beyond what the untrained eye and mind can see.

In our everyday lives we have a saying, “If it’s too good to be true it probably is.” Scientists try to get beyond the obvious, because they know that if it looks obvious it probably isn’t in reality. Scientists are natural skeptics. I’ve heard the saying that the best scientists are people who are always trying to prove themselves wrong. The most succinct description I’ve heard for how scientists come up with guesses (hypotheses) is that they must come up with something that can be “falsified”. In other words, it must be something that can be observed and/or measured so that others can say, “I came up with the same thing” or “No, this is not right. You missed ‘this’, and/or you did not consider ‘that’.” Most hypotheses are proved wrong in some way. There is a constant process of “debugging” our own notions of what’s happening. If a hypothesis cannot be falsified it is not science.

Even if a hypothesis turns out to be valid, scientists try to find the limits of its validity. This is commonly called “error”. It’s a reflection of what scientists think is the confidence level of their result(s), and there is always some error in science. The way I view it is to think of yourself inside a partially opaque sphere. You can see through it some, but the shapes of the objects outside of it look foggy, unclear. Science is the process of wiping away bits of what’s obscuring the image. You create a “window” through which you can see a bit of reality, but not all of it. Part of being a good scientist is understanding where there is a decent level of clarity and what the boundaries of it are. There is always a limit on it. Even scientific instruments have the potential to introduce error into observations, and scientists must be aware of these limitations. Science is about the process of trying to expand that “window” more and more, sometimes in small steps, sometimes big ones, to see reality more clearly.

Recently I’ve begun to wonder if our schools are teaching science correctly. A week or two ago I began having a debate with a newspaper columnist by the name of Mike Ellis at the Daily Camera, Boulder, Colorado’s daily newspaper. We’d chat in various comment sections on the Daily Camera web site whenever the issue of global warming came up. At the time Ellis asserted that CO2 levels had remained very consistent for thousands of years, and had only recently begun to change to levels not seen for a very, very long time (hundreds of thousands of years, I believe he said). I’ve also seen him say that prior to the Industrial Revolution climate change happened because of changes in Earth’s orbit. One could throw in continental drift as well, in all seriousness. He said that the CO2 levels correlate very well with the trend we’ve seen of rising temperature since the Industrial Revolution, and that CO2 is entirely responsible for this. I had been paying attention to the debate on this issue off and on for several years. From what I’ve heard from the proponents of the theory of anthropogenic global warming (AGW–climatic warming caused by human activity), even they don’t make such a claim. They would say that CO2 is a significant contributor to the rise in temperature, but not that it’s responsible for all of it. I asked over at WattsUpWithThat.com about this argument he made and they were struck by it. One commenter said, “Not even the IPCC makes that claim.” What bugged me is Ellis made such a hard and fast scientific claim based on a correlation he said he saw in the data. Correlations in data can be deceiving. They can make you think you’ve found a relationship between phenomena when you haven’t. The question is what is the relationship, if any? It must be tested scientifically by observation before the relationship can be legitimately claimed to be realistic. It turns out it has been tested, but the scientific conclusion is unclear to me now. 
For now I’m taking the position that it’s not settled, at least in my own mind. I’ll need to look into this further.
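
The deceptiveness of correlation is easy to demonstrate. Here’s a rough sketch (assuming NumPy is available, and using entirely synthetic data that has nothing to do with the actual climate record): two series that each trend upward for completely unrelated reasons show a high correlation coefficient, and the apparent relationship largely vanishes once the shared trend is removed by looking at year-to-year changes instead.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)

# Two made-up series that trend upward for unrelated reasons.
series_a = 0.5 * years + rng.normal(0, 3, size=100)
series_b = 1.2 * years + rng.normal(0, 5, size=100)

# Pearson correlation of the raw series: high, purely from the shared trend.
r_raw = np.corrcoef(series_a, series_b)[0, 1]

# Correlating the year-to-year changes removes the shared trend,
# and the apparent "relationship" largely disappears.
r_diff = np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]

print(f"raw correlation: {r_raw:.2f}")
print(f"differenced:     {r_diff:.2f}")
```

This is the statistician’s point about trending time series: a strong correlation between two of them is the expected result, not a finding.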

I came upon an article in the opinion section of the Daily Camera web site titled “Global warming whodunit”, written by Ellis. He is a blogger and his credit says that he studies climate change as a hobby. Okay, so he’s not a professional scientist. I read through most of the article and thought he used an interesting literary framework for making the argument. He puts CO2 “on trial”. However, when I got to the last three paragraphs I thought, “Wait a minute. There’s something wrong here.” Ellis asserts:

Still not convinced? I loaded as much publicly available data as I could into Microsoft Excel. The result? An 88 percent correlation between global temperatures and atmospheric CO2 concentration. The temperature correlation peaks about 12 years after the CO2 stimulus, and falls off slowly over decades. This is huge evidence that CO2 drives temperatures, and that the oil we burn today causes the most warming 10 to 15 years from now. [my emphasis]

Notice his use of the term “evidence”. I thought surely he had some scientific source to back up what is unmistakably a scientific claim. It sounded kind of like the arguments we had earlier, but this time he added this 10-15 year delay factor. I and others asked Ellis in the comments section (I’m “mmille10” in the comments) to show how he came up with this conclusion. He posted the URL to a blog posting where he asserted the same thing, showing charts he had produced in Excel, created from combining two data sets (for CO2 and global average temperature) and shifting the temperature data set 12 years to show the correlation in higher relief. Take a look at them. The correlation he talks about looks quite beautiful…and obvious. In his article he talked about other correlations he tried with other purported causes, such as sun spots, but they were not as good a fit as CO2 to temperature. The implication he leaves the reader with is that CO2 is most definitely the “culprit”.
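
Mechanically, the shift-and-correlate procedure Ellis describes is simple to reproduce. Here’s a sketch (assuming NumPy is available), using made-up arrays in place of the real CO2 and temperature records; the 12-step delay is deliberately built into the synthetic data, just to show what the procedure detects. Note that detecting a lag this way says nothing by itself about causation.

```python
import numpy as np

def lagged_correlation(cause, effect, max_lag):
    """Pearson correlation of `effect` shifted back by each candidate lag.

    Returns a list of (lag, r) pairs; the lag with the highest r is the
    kind of "12-year peak" Ellis reports finding in Excel.
    """
    results = []
    for lag in range(max_lag + 1):
        if lag == 0:
            r = np.corrcoef(cause, effect)[0, 1]
        else:
            r = np.corrcoef(cause[:-lag], effect[lag:])[0, 1]
        results.append((lag, r))
    return results

# Made-up demonstration data: `effect` is `cause` delayed by 12 steps,
# plus a little noise.
rng = np.random.default_rng(1)
cause = np.cumsum(rng.normal(0, 1, 200))
effect = np.empty(200)
effect[:12] = rng.normal(0, 1, 12)
effect[12:] = cause[:-12] + rng.normal(0, 0.1, 188)

best_lag = max(lagged_correlation(cause, effect, 30), key=lambda p: p[1])[0]
print(best_lag)  # the built-in 12-step delay comes out on top
```

The procedure works as advertised on data constructed to contain a lag; the trouble, as discussed above, is that it will also happily report a “best lag” for data that contains no causal relationship at all.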

Ellis admitted in the comments that he had no scientific source to back up his claim that there was this 10-15 year relationship between CO2 and temperature (though he did reference scientific information which he said proved that CO2 causes global warming), and that he had seen nothing that contradicted his “results” (hah!), but that it didn’t matter because it was an opinion column. That’s just a cop-out. There’s an old saying I’ve heard in journalism: “We have the right to our own opinions, but we do not have the right to our own facts.” Ellis used his own statistical analysis, had the audacity to dress it up as a scientific claim, and then used it as fact in his argument. This is pseudo-science at its most brazen. It’s not even that hard to figure out that he’s doing it. His whole column hinges on this claim. He doesn’t even give the old saw of, “Most of the world’s scientists believe this is true.” He could’ve used that instead and it would’ve had more legitimacy than this red herring.

Ellis’s whole point is about the correlation that matches up so well. He begs his readers, “What else could it be?” (I’m paraphrasing) Well that’s the thing. It could be something else. The only way to eliminate or diminish that possibility is to test the relationship out in nature. I’ll ignore for the moment that Ellis said this was “evidence”, which it most certainly is not. At best it is a hypothesis, but is it falsifiable?

Leaving his pseudo-science aside, I think where Ellis made an error is he assumes that there can only be one or two major factors that affect temperature. A thought I had was that one of the data sets he threw out, because it doesn’t correlate well on its own with temperature, might actually correlate well if he put in other factors and events which climatologists also think affect temperature. Just doing a simple-minded analysis is not good enough for science.

I respect Ellis’s right to his opinions, but I think it is beyond the pale for even an opinion columnist to mislead the reading public using the platform of a newspaper of record. I haven’t read him extensively so I can’t speak to the quality of his other work. I’m talking about this one article, but due to the platform he has I found his article and his ignorance of the scientific method offensive. The Daily Camera should try for better quality than this in a city that has three major science labs (NOAA, NIST, and NCAR) and one of the premiere science and engineering universities in the country (C.U. Boulder). Publishing this drivel was an insult to our intelligence.

I kind of understand what’s going on here. Newspapers are really hurting right now financially. The Rocky Mountain News, a paper that’s been around for more than a century, went out of business a few months ago. Newspapers are desperate to find avenues to seem more relevant in order to attract readers. In this case the Camera has brought in a blogger. It hasn’t helped the quality of what they publish. I can tell you that much.

On the use of computers in science

I’m including some material here on what Dr. Alan Kay says about science education and computers, because this has implications for what I talk about above. Incidentally, Kay graduated with a B.A. in molecular biology and mathematics from the University of Colorado, back in the 1960s, I believe. He received his master’s and doctorate in computer science from the University of Utah. In addition to being a pioneer in computing, he’s done a lot of pioneering work in developing math and science education principles, using computers for childhood education, outside of academia.

He’s said the appropriate way to approach teaching scientific principles to children using computers is to create simulations of what they see. This is important, because doing it backwards (presenting a model and expecting reality to match it) is pseudoscience. He said in one presentation on this, before a group of teachers (which was called “What is Squeak?”):

You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

There was a Q & A session after his talk. I couldn’t hear the question exactly, but I think a teacher asked whether a simulation that Kay had up on screen was pseudoscience. Kay said, “This is a model. If you present it to the kids as fact, it is pseudoscience.” The idea being that a model is something that the kids should construct after having experienced the actual phenomenon, in order to explore what they have just observed. By the way, scientists who are using computers properly to create simulations use the same process. By going through it, students and scientists can learn more about the mechanics of what they have observed, and come up with insights that lead to new questions. Kids also get the added benefit of learning some mathematical principles in the process. He makes a big point about not taking a pre-existing model and just showing it to kids as if it were fact, because then you lose what science is about as a thought process.

After watching his presentation, I began wondering about how computers are used by professional scientists. For example, meteorologists use computer models as part of their weather prediction process. I don’t know for sure but I feel fairly certain that they didn’t create these models themselves. They may alter parameters that go into the model. I don’t know enough about meteorological practice to say that for sure. So are they participating in pseudo-science? I’d say one difference is meteorologists do not just say, “The computer model says X, so that’s our prediction.” They actually use several prediction models at once, because there’s not just one “right” model–they come up with different results. From what I’ve heard about this process they use them to set boundaries for what could happen, within a certain boundary of error (I’m doing some hand waving here). One question I have is do these models have an error rating? It seems to me this could theoretically be established with time, comparing a long series of model predictions with what actually happened. The question is can the error be measured?
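
In sketch form, measuring that error just means archiving each forecast, waiting for the observation, and scoring the pairs with standard measures like bias, mean absolute error, and root-mean-square error. The numbers below are invented purely for illustration:

```python
import math

# Invented example: seven archived next-day high-temperature forecasts (F)
# alongside what was actually observed.
forecast = [72.0, 68.0, 75.0, 80.0, 77.0, 65.0, 70.0]
observed = [70.0, 69.0, 71.0, 82.0, 76.0, 66.0, 73.0]

n = len(forecast)
errors = [f - o for f, o in zip(forecast, observed)]

bias = sum(errors) / n                             # systematic over/under-forecasting
mae = sum(abs(e) for e in errors) / n              # typical size of a miss
rmse = math.sqrt(sum(e * e for e in errors) / n)   # penalizes large misses more

print(f"bias: {bias:+.2f}  MAE: {mae:.2f}  RMSE: {rmse:.2f}")
```

Accumulated over years of forecasts, scores like these would give exactly the kind of “error rating” I’m wondering about; whether forecasters actually publish them this way is beyond what I know.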

Meteorologists use their own skills, gathering data like temperature, humidity, barometric pressure, etc., in addition to looking at the models to make a prediction. Even so, due to the chaotic nature of the atmosphere, the only weather prediction you can have some confidence in is the one for the next day.

I don’t know for sure but I feel fairly certain that computers are used by NASA to try to determine the flight paths of the spacecraft they launch, taking gravity wells into consideration. Even there the science is not exact, which I think is borne out by the number of failed landings that NASA has had on Mars. There have also been a number of times when NASA has had to make unplanned corrections to the flight path of a spacecraft in flight, which had nothing to do with equipment malfunctions.

What about using a computer model to demonstrate what has been found scientifically? I think if the computer is just used to display scientifically gathered data (it could be in any form: a chart, an animation, etc.), that’s different from running a model that’s actually computing its way through a process (a simulation) and saying, “This is reality.” Even if they are used in prediction, I can tell you from experience that there is some error involved, as is true of any scientific instrument. Having said this, I think if a computer is used in a scientific presentation of observed data there should be accompanying materials which demonstrate the error in the observations. People have a perception that computers are precise, exacting, and therefore reveal absolute truth. It can be a challenge to try to convey error through a demonstration on a computer.

Bringing this full circle, one might think that Mike Ellis in his article did what I just described, using a computer to display data. The difference is he drew unwarranted conclusions from the “data display” and passed it off as “huge evidence.” It would’ve been scientifically valid for Ellis to point out that a data correlation exists between CO2 and temperature. That’s interesting. It could be used as a hypothesis, which could motivate scientific research on the question. The point where he stepped into pseudoscience was when he said this correlation showed a strong cause-and-effect relationship existed between the two. He tried to do “science on a computer,” as Kay would put it.


This guy sucks!

I first noticed this on reddit today, but eventually saw it in some pingbacks to my blog (at tekkie.wordpress.com). Somebody has a “blog” hosted in Germany that is ripping off other people’s blog posts and posting them as his own at www.indquery.com. He’s copied a bunch of my posts. In one case I saw he copied a Wikipedia entry. So far, of my posts, this guy’s copied in full (by title):

Straying into dangerous territory
Squeak is like an operating system
Redefining Computing, Part 2
Redefining Computing, Part 1
The real computer revolution
Having fun playing with fire,…er C
Seaside hosting redux
Coding like writing
Squeak/anti-virus problem solved

On top of that, the blogger has ads on his site, so I don’t know. Maybe he’s making some money off of what I wrote. He never asked me for permission to reproduce my stuff. In any case, he sucks. My guess is this guy’s looking for what might be popular articles for wherever he is, and trying to attract traffic, taking all the credit for it.

To tell you the truth, I don’t mind if other people copy what I post, but at least have the decency to give me credit, and a link to this blog. That’s just basic etiquette. I do tend to put a lot of work into what I write, so I want to at least get credit for it. I’ve been doing it in the interest of sharing knowledge, not trying to profit from it in an immediately tangible way (though that thought has crossed my mind from time to time).

Anybody have any ideas about administrative action I can take to deal with this leech?

I guess I should start doing this now: Copyright 2007, Mark Miller, https://tekkie.wordpress.com. Sheesh!

Here’s the Netcraft info I found on the “blog”:

site: http://www.indquery.com
domain: indquery.com
IP Address:
Country: DE (Germany)
Domain Registry: Unknown
Organization: Unknown
Netblock owner: RadixDirect.com
nameserver: ns1.cpxadvertising.com
DNS Admin: cpxclick@gmail.com
Reverse DNS: serverpoint.com
Nameserver Organization: Unknown
