
Archive for January, 2010

I was going through my list of links for this blog (some call it a “blogroll”) and I came upon a couple items that might be of interest.

The first is that Dolphin Smalltalk appears to be getting back on its feet. You can check it out at Object Arts. I had reported 3 years ago that it was being discontinued, so I'm updating the record on that. Object Arts says it's working with Lesser Software to produce an updated commercial version of Dolphin that will run on top of Lesser's Smalltalk VM. According to the Object Arts website this product is still in development, and there's no release date yet.

Another item: since last year I've been hearing about a fork of Squeak called Pharo. According to its description it's a version of Squeak designed specifically for professional developers. From what I've read, even though people have had the impression that the squeak.org release was also for professional developers, there were some things the Pharo dev. team felt were getting in the way of making Squeak a better professional dev. tool, chiefly the EToys package, which had caused some consternation. EToys was stripped out of Pharo.

There’s a book out now called “Pharo by Example”, written by the same people who wrote “Squeak by Example”. Just from perusing the two books, they look similar. There were a couple differences I picked out.

The PbE book says that Pharo, unlike Squeak, is 100% open source. There’s been talk for some time now that while Squeak is mostly open source, there has been some code in it that was written under non-open source licenses. In the 2007-2008 time frame I had been hearing that efforts were under way to make it open source. I stopped keeping track of Squeak about a year ago, but last I checked this issue hadn’t been resolved. The Pharo team rewrote the non-open source code after they forked from the Squeak project, and I think they said that all code in the Pharo release is under a uniform license.

The second difference was that they had changed some of the fundamental architecture of how objects operate. If you’re an application developer I imagine you won’t notice a difference. Where you would notice it is at the meta-class/meta-object level.

Other than that, it’s the same Squeak, as best I can tell. According to what I’ve read Pharo is compatible with the Seaside web framework.

An introduction to the power of Smalltalk

I'm changing the subject some, but I couldn't resist talking about this, because I read a neat thing in the PbE book. I imagine it's in SbE as well. Coming from the .Net world, I had gotten used to the idea of "setters" and "getters" for class properties. When I first started looking at Squeak, I downloaded Ramon Leon's Squeak image, and I found out there was a modification to the browser in his image that would set up default "setters" and "getters" for my class's variables automatically (I may have first seen this in a screencast he produced). I thought this was neat, and I imagine other IDEs already had such a thing (like Eclipse). I used that feature for a bit, and it was a good time-saver.

PbE revealed that there's a way to have your class set up its own "setters" and "getters". You don't even need a browser tool to do it for you. You just use the #doesNotUnderstand: message handler (also known as "DNU"), and Smalltalk's ability to "compile on the fly", with a little code generation. Keep in mind that this happens at run time. It turns out that once you get the idea, it's not that hard.

Assume you have a class called DynamicAccessors (though it can be any class). You add a message handler called "doesNotUnderstand:" to it:

DynamicAccessors>>doesNotUnderstand: aMessage
    | messageName |
    messageName := aMessage selector asString.
    "If the unknown message matches an instance variable name,
     compile a getter for it and re-send the original message."
    (self class instVarNames includes: messageName)
        ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
            ^aMessage sendTo: self].
    "Otherwise fall back to the normal doesNotUnderstand: behavior."
    ^super doesNotUnderstand: aMessage

This code traps the message being sent to a DynamicAccessors instance, because no method exists yet for what's being called. It extracts the name of the message that was sent, looks to see if the class (DynamicAccessors) has an instance variable by the same name, and if so, compiles a method by that name, with a little boilerplate code that just returns the variable's value. Once the method is created, it resends the original message to itself, so that the now-compiled accessor can return the value. However, if no variable exists that matches the message name, it triggers the superclass's "doesNotUnderstand:" method, which will typically activate the debugger, halting the program and notifying the programmer that the class "doesn't understand this message."

Assuming that DynamicAccessors has an instance variable "x", but no "getter", it can be accessed by:

myDA := DynamicAccessors new.
someValue := myDA x

If you want to set up "setters" as well, you could add a little code to the doesNotUnderstand: method that looks for a parameter value being passed along with the message, and then compiles a default method for that.
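Here is a minimal sketch of how that might look. This is my own guess at an extension, not code from PbE; it assumes the usual Smalltalk convention that a setter selector is the variable name followed by a colon (e.g. #x:), and the parameter name "aValue" is just my placeholder:

DynamicAccessors>>doesNotUnderstand: aMessage
    | selector varName |
    selector := aMessage selector asString.
    (selector endsWith: ':')
        ifTrue: [
            "Looks like a setter: strip the trailing colon to get the variable name."
            varName := selector copyFrom: 1 to: selector size - 1.
            (self class instVarNames includes: varName)
                ifTrue: [
                    "Compile a one-argument method that assigns the variable."
                    self class compile: selector, ' aValue', String cr,
                        '    ', varName, ' := aValue'.
                    ^aMessage sendTo: self]]
        ifFalse: [
            (self class instVarNames includes: selector)
                ifTrue: [
                    "Same getter generation as before."
                    self class compile: selector, String cr, ' ^ ', selector.
                    ^aMessage sendTo: self]].
    ^super doesNotUnderstand: aMessage

With something like this in place, "myDA x: 42" would compile and run a default setter the first time it's sent, just as "myDA x" does for the getter.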

Of course, one might want some instance variables protected from external access and/or modification. I think that could be accomplished with a variable naming convention, or some other convention, such as a collection that contains variable names along with a notation specifying to the class how certain variables should be accessed. The above code could follow those rules, allowing access to some internal values and not others. A thought I had is that you could put this behavior in a subclass of Object, and then derive your own classes from that; the behavior would then apply to any class you choose to derive from it (and classes you don't want it applied to would just derive from Object as usual).
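As one possible sketch of the "collection" idea (again my own guess, not from the book), the class could keep a whitelist of the variables it is willing to expose, and the DNU handler could consult it before compiling anything. The method name accessibleVariables and the variable names are hypothetical:

DynamicAccessors class>>accessibleVariables
    "Hypothetical whitelist: only these instance variables get auto-generated getters."
    ^#('x' 'y')

DynamicAccessors>>doesNotUnderstand: aMessage
    | messageName |
    messageName := aMessage selector asString.
    ((self class accessibleVariables includes: messageName)
        and: [self class instVarNames includes: messageName])
            ifTrue: [self class compile: messageName, String cr, ' ^ ', messageName.
                ^aMessage sendTo: self].
    ^super doesNotUnderstand: aMessage

Any variable left out of the whitelist would still trigger the normal "does not understand" error, so it stays effectively private.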

Once an accessor is compiled, the above code will not be executed for it again, because Smalltalk will know that the accessor exists, and will just forward the message to it. You can go in and modify the method’s code however you want in a browser as well. It’s as good as if you created the accessor yourself.

Edit 5-6-2010: Heard about this recently: Squeak 4.1 has been released. From what I've read on The Weekly Squeak, Squeak has been 100% open source since version 4.0 (I was talking about this earlier in this article in relation to Pharo). 4.1 features some "touch up" work that sounds like it makes Squeak nicer to use: the description says it includes some first-time user features and a nicer, cleaner visual interface.


Every once in a while, starting in Jr. high school, I would get the opportunity to do an open subject research paper. Just to set the time period, this was in the early 1980s. We could write it on anything we wanted. Back then I had just discovered computer programming, and I was really into it. I was also beginning to see that computers were historically significant, so I chose that.

I already knew about the big mainframes, and I figured that's where the story would probably start. However, when I looked up my research materials, the earliest reference I found to the invention of the automatic computer was Charles Babbage, who began his work in 1821. I use the word "automatic" here, because I found out that the term "computer" had been around since before Babbage came along. It used to refer to a person who manually calculated numbers for mathematical tables. In fact the term was used this way up through WW II.

Babbage looked for a way to make calculations more reliable. Human computers made mistakes, and depending on the mathematical formulation of the tables, this could invalidate everything after the point the mistake was made. Babbage tried to increase reliability by bringing redundancy to the process. He would have two computers working on the same set of numbers, and they would compare each calculation. If one made a mistake it was likely that the other would find it, and the mistake could be corrected before the problem was compounded. Even so, Babbage was frustrated.

His first attempt at solving the problem was to design what he called Difference Engine #1, which would use the method of finite differences to compute polynomials. This technique had been used for years. A mathematician would calculate and “set up” some initial values, and after that less skilled computers could do the rest, since all that was required for using the method was the ability to add, based on a prior set of calculated values. Babbage wanted to automate this process (though the mathematician’s “priming” of the process would still be required).
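To make the method concrete with a quick example of my own (not from the sources above): take f(x) = x². Its values are 1, 4, 9, 16, ...; the first differences are 3, 5, 7, ...; and the second difference is a constant 2. Once the first few entries are "primed," every further value comes from addition alone: the next first difference is 7 + 2 = 9, and the next value is 16 + 9 = 25. That rote, add-only work is exactly what a less skilled human computer, or Babbage's engine, could grind through.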

He built a prototype model in 1831. It was 1/7th of the actual machine he wanted to build. It was very expensive to develop. Apparently this was due in no small part to Babbage’s ineffective management of the project, though from what I read at the time, the main factor was the lack of precise manufacturing of mechanical parts. It was very difficult for anyone to manufacture them at the tolerances Babbage’s design required. The idea of standardized mechanical parts hadn’t been developed yet, and all parts were hand made. What I remember reading is that Babbage is considered the father of modern mechanical engineering. He had to invent his own design techniques.

Babbage showed his prototype to a lot of people, and they were amazed, but he never got wide acceptance for his idea.

In 1834 Babbage had the revolutionary idea of building a programmable calculating machine, which has been called the “Analytical Engine”. The difference engine that Babbage conceived of was a computer, but it had a program “hard wired” into it (I’m using a modern term here). The inputs, and some operating parameters to the program could be changed, to set up different calculations to do, but the machine could only use the method of finite differences. The Analytical Engine concept was more in line with what we’d call a modern computer. It was what I’d now call a “meta-calculator”. The machine would only know how to calculate numbers, but a programmer could program the machine in the steps required to carry out any calculation methodology.

Ada, Lady Lovelace came into the picture at this time. Her role in Babbage's work was unclear to me from what I read. My memory is that she was an avid supporter of his work, and there has been speculation that she may have been the world's first computer programmer. This was based on the idea that even though she didn't have a computer to work on, she helped Babbage develop a programming language for it. Doron Swade of the London Science Museum has filled in this picture more and has corrected the record about her involvement. I have video of him talking about her work as part of a talk on Babbage below.

The Analytical Engine was designed to be programmable via punch cards. Babbage got this idea from a French maker of textiles, named Joseph Jacquard, who used punch cards to drive the parts of a loom to produce designs on his textiles automatically. The Analytical Engine was designed with a printing unit, so it could output results in a permanent format. Babbage never had the chance to build it, though a piece of it was built by his son, which exists in a museum today.

Babbage then created another design, called Difference Engine #2, around 1847. Like the Analytical Engine, he designed it to have a printing unit. And like the Analytical Engine he never got funding to build it.

The ending to Babbage’s life always read, “He died a disappointed, bitter old man.” My memory is the word “penniless” was thrown in as well, though I could be wrong about that. As Swade explained, part of his frustration was that most people in his society never understood his idea of using a machine to calculate numbers. They thought of machines as only being useful for doing manual labor. The idea of creating a “mechanical brain” was a bridge too far. Babbage was one of probably only a handful of people thinking of machines in this way, and his bad management habits discredited his ideas in the eyes of potential funders. His Difference Engine prototype (from his first design) was judged to be a super-expensive flop. In fact Swade says he’s found evidence that Babbage’s mishandling of this project led the British government to not fund another computing machine inventor. So he probably killed the idea of funding computer development for decades.

Bringing Difference Engine #2 to life

In 1985 Doron Swade had the idea to build a working replica of Difference Engine #2. He wanted it to be as authentic as possible, because the idea was to say, “Babbage could’ve built this, and it would’ve worked,” if only he had been able to manage a complex project of that size. Swade gathered a team to build it, going off of Babbage’s original design drawings, which the Babbage family had donated to the museum. They used authentic metallurgy, with modern machining techniques, though they used the same tolerances that Babbage would’ve been able to achieve. So they wanted to work with the same level of engineering difficulty that he would’ve had to deal with, but manufacture the parts more quickly.

The first Difference Engine replica was partly finished in 1991. The drive train and the calculation section were finished, but it did not have a printing unit. Swade went looking for sponsors so that he could build the printing unit, and found Nathan Myhrvold in 1996. Myhrvold agreed to fund the printing unit on the condition that Swade would build him another Difference Engine #2 replica. The printing unit for the first difference engine (in London) was finished in 2002. The second replica was finished in 2008, and was put on display at the Computer History Museum (CHM) in Mountain View, CA. for one year.

I had the opportunity to see the Difference Engine in operation at the CHM when I went out to Mountain View for the Rebooting Computing Summit in January 2009. What I was told was that after the year was up the engine was going to be moved to Myhrvold’s home, though the people at the museum hoped he would allow them to keep it longer. The CHM website still has a page for the Babbage Engine display, so it may still be there.

The reason I’m talking about this now is I finally found some online videos on the Difference Engine that I can show here. The first couple videos are from the unveiling of the 2nd Difference Engine #2 replica at the Computer History Museum in 2008. That’s followed by video of the opening ceremony for the exhibit, featuring Doron Swade and Nathan Myhrvold. Embedding was disabled on the first two of these videos, so follow the links.

Part 1

Part 2

What I was also told was that the Engine had been damaged in transit. It was not ready to go when they got it unwrapped. They had engineers working on it for about two weeks before they could run it.

Doron Swade & Nathan Myhrvold introduce Babbage Difference Engine #2
at the Computer History Museum in May 2008

One funny anecdote I could relate to is Swade said that Babbage thought God was a programmer. I thought, “Yeah, and he wrote his code in Lisp.” 🙂

There were many interesting details in the CHM presentation, but the part that stuck out for me the most is Swade revealed that according to the historical record he’s been able to review, Ada Lovelace did not have the kind of role in Babbage’s Analytical Engine project that many think she had. Her ideas were no less significant, however.

Swade said flat out that Ada was not the world’s first programmer, as has been speculated, and sometimes stated as fact in the computer science community for decades. He explained that while she did get an algorithm published, it was all based on Babbage’s examples. So Babbage was the first programmer. Swade said that Ada was not a brilliant mathematician, as she is sometimes portrayed. She was a novice. Again, he surmised this from the historical evidence.

What she did contribute that was significant was the idea that numbers could represent symbols, or anything really. This is a surprisingly modern concept of computing for someone coming from the 19th century. She had this vision of computing about 90 years before Alan Turing had the same idea. It’s an idea that is so pervasive in computing now we don’t even think about it. The very text that’s displayed on your screen right now, all of the graphics; all of the audio you hear when you listen to a podcast, or to music; all of the video you see online or on a DVD is just a bunch of numbers to the computer, or more accurately, a series of electrical pulses. Each is represented in a way that we recognize, and it doesn’t seem numeric to us. Instead it’s a collage of sensory elements that help us make sense of what’s inside the machine. This is the vision she had, though in a 19th century context. She didn’t imagine the electronic technology we have, nor the visual displays, but she did imagine some things akin to computer graphics and MIDI music files (Swade mentioned “musical notes” as one of the representations she thought of). Swade said that she didn’t merely put this forward as a suggestion. She was “banging on the table,” saying, “This is what’s important about the Analytical Engine!” How right she was! So her idea is a big deal, and she deserves to be regarded as a thought pioneer in the history of computing.

Babbage listened to her notions, but he didn’t understand them. He designed the Analytical Engine only as a calculating machine, nothing more, though I don’t mean to diminish the power of that vision for its time. However, it’s now undeniable that her vision was greater.

Interestingly, Swade said that Babbage also had some ideas which Turing would flesh out more later. For example, Babbage’s Difference Engine prototype had a column of numbers which would all turn to zeros when a solution to a polynomial had been found. Babbage said if this special column never turned to zeros, there was no solution. This has shades of Turing’s “halting problem” in computational mathematics. I don’t know this for sure, but it sounds like Babbage only applied this idea to polynomials. Turing applied it to all of mathematics.

Swade talked a bit about the Analytical Engine. The scale of it was immense. I had no idea! He said if it were built, the “entry level” model would be as large as a locomotive. A larger design that Babbage conceived of would’ve been half the size of an auditorium. Both would’ve required a steam engine to run them.

Swade said that there were bugs in Babbage’s design of Difference Engine #2 that needed to be worked out. I thought, “Well of course.” Since Babbage never had the chance to build his machine, he didn’t have the reality of trying to build it to force him to look at things he had neglected, and to realize that certain ideas were not going to work. Swade’s team tried to work out these bugs using parts and techniques that Babbage had devised for other devices, or which were available to him at the time the machine was designed. Again, they tried to maintain authenticity.

The following video is another presentation Doron Swade gave on Babbage at Google about a week after the CHM event. I include it here, because while he says some of the same things he said in his CHM presentation, he adds more detail, including demonstrations of how Difference Engine #2 works.

The Difference Engine is something to behold. If you get a chance to see it, either at the CHM, or at the London Science Museum (where it’s on permanent display), I really encourage you to do so. The idea that kind of floored me as I was watching it being run was that this era is the first time in history that people have had the opportunity to see this thing really work! I felt privileged to have the opportunity. My thanks go to Doron Swade and Nathan Myhrvold! Before this, all I knew about it was what I had read in history books. To see it right before me “in the flesh”, as it were, was kind of breathtaking.

The video below was running in a continuous loop near the Difference Engine display. It gives you some sense of what it’s like to watch the Engine run. What I remember is it sounded like a “heavy loom”. The way I’ve described it to others is “It sounded like no computer I’ve heard before.” The “clack” sounds you hear were more like thuds when I heard them. The technology used to record the sound does not pick this up well.

Here are a couple photos I took of it.

Here are a few books that were mentioned in the presentations above:

The Cogwheel Brain, by Doron Swade – This is the UK edition of the book Swade wrote on the history of Babbage’s work, and his own work to build the Difference Engine #2 replica.

The Difference Engine: Charles Babbage and the Quest to Build the First Computer, by Doron Swade – This is the American edition of the same book.

The Difference Engine, by William Gibson and Bruce Sterling – This is a science fiction novel set in the 19th century, attempting to imagine how things would have been different if Babbage had succeeded in building his models. It speculates on the creation of a thriving computer industry in Victorian England.


This is the first in what I hope will be many posts talking about Structure and Interpretation of Computer Programs, by Abelson and Sussman. This and future posts on this subject will be based on the online (second) edition that’s available for free. I started in on this book last year, but I haven’t been posting about it because I hadn’t gotten to any interesting parts. Now I finally have.

After I solved this problem, I really felt that this scene from The Matrix, “There is no spoon” (video), captured the experience of what it was like to finally realize the solution. First realize the truth, that “there is no spoon”–separate form from appearance (referring to Plato’s notion of “forms”). Once you do that, you can come to see, “It is not the spoon that bends. It is only yourself.” I know, I’m gettin’ “totally trippin’ deep, man,” but it was a real trip to do this problem!

This exercise is an interesting look at how programmers such as myself have viewed how we should practice our craft. It starts off innocuously:

A function f is defined by the rule that f(n) = n if n < 3, and f(n) = f(n-1) + 2f(n-2) + 3f(n-3) if n >= 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by an iterative process.

Writing it recursively is pretty easy. You just do a translation from the classical algebraic notation to Scheme code. It speaks for itself. The challenging part is writing a solution that gives you the same result using an iterative process. Before you say it can't be done, it can!

I won’t say never, but you probably won’t see me describing how I solve any of the exercises, because that makes it too easy for CS students to just read this and copy it. I want people to learn this stuff for themselves. I will, however, try to give some “nudges” in the right direction.

  • I'll say this right off the bat: This is not an algebra problem you're dealing with. It's easy to fall into thinking this, especially since you're presented with some algebra as something to implement. This is a computing problem. I'd venture to say it's more about the mathematics of computing than it is about algebra. As developers we often think of the ideal as "expressing the code in a way that means something to us." While this is ideal, it can get in the way of writing something optimally. This is one of those cases. I spent several hours trying to optimize the algebraic operations, and looking for mathematical patterns that might optimize the recursive algorithm into an iterative one. Most of the patterns I thought I had fell to pieces. It was a waste of time anyway. I did a fair amount of fooling myself into thinking that I had created an iterative algorithm when I hadn't. It turned out to be the same recursive algorithm done differently.
  • Pay attention to the examples in Section 1.2.1 (Linear Recursion and Iteration), and particularly Section 1.2.2 (Tree Recursion). Notice the difference, in the broadest sense, in the design between the recursive and iterative algorithms in the sample code.
  • As developers we’re used to thinking about dividing code into component parts, and that this is the right way to do it. The recursive algorithm lends itself to that kind of thinking. Think about information flow instead for the iterative algorithm. Think about what computers do beyond calculation, and get beyond the idea of calling a function to get a desired result.
  • It’s good to take a look at how the recursive algorithm works to get some ideas about how to implement the iterative version. Try some example walk-throughs with the recursive code and watch what develops.
  • Here's a "you missed your turn" signpost (using a driving analogy): If you're repeating calculation steps in your iterative algorithm, you're probably not writing an iterative procedure. The authors described the characteristics of a recursive and an iterative procedure earlier in the chapter. See how well your procedure fits into either description. There's a fine line between what's "iterative" and what's "recursive" in Scheme, because in both cases you have a function calling itself. The difference is that in a recursive procedure you tend to have the function calling itself more than once in the same expression, whereas in an iterative procedure you tend to have a simpler calling scheme where the function only calls itself once in an expression (though it may call itself once in more than one expression inside the function), and all you're doing is computing an intermediate result, and incrementing a counter, to feed into the next step. As a rule, you should not have operators that are "lingering," waiting for a function call to return, though the authors did show an example earlier of a function that was mostly iterative, with a little recursion thrown in now and then. With this exercise I found I was able to write a solution that was strictly iterative.
  • As I believe was mentioned earlier in this chapter (in the SICP book), remember to use the parameter list of your function as a means for changing state.
  • I will say that both the recursive and the iterative functions are pretty simple, though the iterative function uses a couple concepts that most developers are not used to. That’s why it’s tricky. When I finally got it, I thought, “Oh! There it is! That’s cool.” When you get it, it will just seem to flow very smoothly.

Happy coding!


My first reaction on hearing this news (h/t to Bill Kerr) was, "About friggin' time!" Here's a permalink to Bill's post on it.

Google discovered recently that some of its servers, as well as servers at 34 other companies, were attacked in a sophisticated cracking campaign originating in China. As far as Google's services were concerned, the attacks seemed to be targeted at trying to access the e-mails of Chinese human rights activists, and Google's intellectual property. The attack on the former failed, but the attack on the latter apparently succeeded. Google announced after this discovery that it is removing censorship measures on google.cn, its Chinese search service, and it acknowledged that it is facing the prospect of being forced to leave China altogether. Wow! Now, I understand that this was discussed as a business decision, probably from a security standpoint, but I think this burnishes Google's image, nevertheless, of "not being evil". China's government and business environment are not compatible with Google's corporate culture. This isn't the first time that Google has been harassed, apparently. The conflicts with the Chinese government have gone on a long time, partly, Google suspects, at the behest of its competitors.

Here's a link to video of a news segment on this from The NewsHour with Jim Lehrer.

Ethan Zuckerman made a good point that Google was blamed for moving into China and agreeing to its censorship guidelines, but Google has been less stringent in censoring internet communications, and more willing to do things the Chinese government doesn't like, than its competitors, such as Microsoft and Yahoo. The reason Google got beat up on more than the others is that it began with the motto "don't be evil". None of its competitors had such a mission statement.

I was originally opposed to Google entering China a few years ago for these reasons. However, apparently Google offered enough openings for human rights activists in China that they could use Google's services to organize and exert some power in Chinese politics. I hadn't heard of this until now. It turns out Google offered private Gmail accounts to Chinese users, hosted on a U.S. server, and this was one avenue that activists have been using.

Now, these same Chinese activists are worried that Google may be forced to leave, thereby removing a powerful tool they had in advancing their causes. I do feel for these people.

In a strange way, after hearing this news, I feel that if it took Google continuing the censorship in order to stay, so that Chinese activists would have something they could continue to use for their causes, then I'd say "bring on the censorship!" It's better than nothing, and I'm now realizing that this was probably Google's calculation all along. But I think the jig is up. Probably not even that would repair this situation. As I read about this it felt like the situation with NBC, Conan O'Brien, and the Tonight Show. Basically, "I have to leave, because these bozos want me to do something that will betray my sense of integrity."

Edit 5-22-2010: I was informed yesterday by Tammy Bruce that despite what some might have heard, Google has not left China. Instead it has relocated to Hong Kong, and has managed to set up what I'd call a "beachhead" where it is not censoring its searches. This does not mean that most Chinese have unfettered access to information. From what I hear the Chinese government is blocking Google, though my guess is this is not total, or else Google would have no choice but to leave entirely.


This is kind of a follow-up post to one I wrote in March, 2009, talking about what I anticipated in our economic future. This takes a look back at the past, and brings us up to the present, showing us a bit of how we got here.

I saw an interview with economist and Wall Street Journal columnist Judy Shelton a couple of months ago on C-SPAN. It was meant to be a nice sit-down chat, but she gives an interesting history of what's happened to world monetary systems since the 1930s. She said that the global economy went from instability in the 1930s, when every nation was trying to undercut every other country's currency, to stability with the creation of the International Monetary Fund (IMF). Then the IMF betrayed its original mission, President Nixon took the U.S. off the Bretton Woods system in the early 1970s, and now we're back to where we were in the 1930s in terms of international finance–each country trying to undercut everyone else's currencies.

Quite a bit of this interview is a profile of Shelton and her interests, particularly her love of Russian culture. The interview is an hour long.

The most intriguing thing to me is that she likes the idea of a Bretton Woods-style gold standard for the U.S. dollar, an idea that most economists would consider anachronistic. This is an idea that I got into shortly after I left college in the early 1990s, but later abandoned, because I confused it with the gold standard that existed prior to Bretton Woods. The older one was partly blamed for the Great Depression.

The most interesting parts of the video are at 42 minutes and 55 minutes in. At minute 42 she explains her rationale for an international gold standard, saying she’s not a “gold bug,” I guess meaning that she doesn’t like it for its own sake. Rather she sees it as a relatively stable guarantor of value, and she dislikes the fact that world currencies are now not comparable to anything that seems real to most people. Instead, what people get is a sliding measure of value in the international currency market that varies from month to month and year to year. She said that now world currency trading is just a gamble, not unlike derivatives, and this has a debilitating effect on international trade. She also advocates it so that people can count on a dollar still being worth approximately what it was when they earned it, rather than its value wasting away because of inflation.

At minute 55 she gets very frank about the fiscal and economic situation that we are now in, and how we got into it. She utterly rejects the notion that supply side economics and the free market created the mess. She puts the blame squarely on government interference in the real estate market, and the dysfunctional incentives this created, with the government on the one hand tacitly guaranteeing mortgage-backed securities, while on the other allowing speculators to use them in free wheeling bets, which ultimately caused the crash to be as bad as it was. This put the government (in other words, the tax paying public) in the position of having to back the bets to try to contain an economic collapse.

On the lighter side, I noted that Mrs. Shelton said she grew up in “The Valley” (San Fernando Valley in CA). “I’m really kind of the classic Valley girl,” she said. Wow! You wouldn’t know it by listening to her talk. Note this is not “The Valley” I sometimes refer to. To those of us with a computer/tech background, “The Valley” means Silicon Valley, centered in San Jose, CA. San Fernando is a northern suburb of Los Angeles. Anyway, this took me back to the early 1980s when I used to hear “Valley Girl” by Frank Zappa on the radio.

Valley girl
she’s a Valley girl
Valley girl
she’s a Valley girl
Okay fine
fer sure, fer sure
She’s a Valley girl
in a clothing store …

Like, totally. Good memories. 🙂


“Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. … The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present – and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”
— from President Dwight Eisenhower’s farewell address, Jan. 17, 1961

Before I begin, I’m going to recommend right off a paper called “Climate science: Is it currently designed to answer questions?”, by Dr. Richard Lindzen, as an accompaniment to this post. It really lays out a history of what’s happened to climate science, and a bit of what’s happened to science generally, during the post-WW II period. I was surprised that some of what he said was relevant to explaining what happened to ARPA computer science research as it entered the decade of the 1970s, and thereafter, though he doesn’t specifically talk about it. The footnotes make interesting reading.

The issue of political influence in science has been around for a long time. Several presidential administrations in the past have been accused of distorting science to fit their predilections. I remember years ago, possibly during the Clinton Administration, hearing about a neuroscientist who was running into resistance for trying to study the difference between male and female brains. Feminists objected, because it's their belief that there are no significant differences between men and women. President George W. Bush's administration was accused of blocking stem cell research for religious reasons, and of altering the reports of government scientists, particularly on the issue of global warming. When funding for a narrow research area is blocked, it doesn't bother me so much. There are private organizations that fund science. What irks me more is when the government, or any organization, alters the reports of its scientists. What's bothered me much more is when scientists have chosen to distort the scientific process for an agenda.

One example of this was in 2001 when the public learned that a few activist scientists had planted lynx fur on rubbing sticks that were set out by surveyors of lynx habitat. The method was to set out these sticks, and lynx would come along and rub them, leaving behind a little fur, thereby revealing where their habitat was. The intent was to determine where it would be safe to allow development in or near wilderness areas so as to not intrude on this habitat. A few scientists who were either involved in the survey, or knew of it, decided to skew the results in order to try to prevent development in the area altogether. This was caught, but it shows that not all scientists want the evidence to lead them to conclusions.

The most egregious example of the confluence of politics and science that I’ve found to date, and I will be making it the “poster child” of my concern about this, is the issue of catastrophic human-caused global warming in the climate science community. I will use the term anthropogenic global warming, or “AGW” for short. I’m not going to give a complete exposition of the case for or against this theory. I leave it to the reader to do their own research on the science, though I will provide some guidance that I consider helpful. This post is going to assume that you’re already familiar with the science and some of the “atmospherics” that have been occurring around it. The purpose of this post is to illustrate corruption in the scientific process, its consequences, and how our own societal ignorance about science allows this to happen.

There is legitimate climate research going on. I don’t want to besmirch the entire field. There is, however, a significant issue in the field that is not being dealt with honestly, and it cannot be dealt with honestly until the influences of politics, and indeed religion–a religious mindset, are acknowledged and dealt with, however unfortunate and distasteful that is. The issue I refer to is the corruption of science in order to promote non-scientific agendas.

I felt uncomfortable with the idea of writing this post, because I don’t like discussing science together with politics. The two should not mix to this degree. I’d much prefer it if everyone in the climate research field respected the scientific method, and were about exploring what the natural world is really doing, and let the chips fall where they may. What prompted me to write this is I understand enough about the issue now to be able to speak somewhat authoritatively about it, and my conclusions have been corroborated by the presentations I’ve seen a few climate scientists give on the subject. I hate seeing science corrupted, and so I’ve felt a need to speak up about it.

I will quote from Carl Sagan’s book, The Demon-Haunted World, from time to time, referring to it as “TDHW,” to provide relevant descriptions of science to contrast against what’s happening in the field of climate science.

Scientists are insistent on testing . . . theories to the breaking point. They do not trust the intuitively obvious. The truth may be puzzling or counter-intuitive. It may contradict deeply held beliefs.

— TDHW

I think it is important to give some background on the issue as I talk about it. Otherwise I fear my attempt at using this as an example will be too esoteric for readers. There are two camps battling out this issue of the science of AGW. For the sake of description I’ll use the labels “warmist” and “skeptic” for them. They may seem inaccurate, given the nuances of the issue, but they’re the least offensive labels I could find in the dialogue.

The warmists claim that increasing carbon dioxide from human activities (factories, energy plants, and vehicles) is causing our climate to warm up at an alarming rate. If this is not curtailed, they predict that the earth’s climate and other earth systems will become inhospitable to life. They point to the rising levels of CO2, and various periods in the temperature record to make their point, usually the last 30 years. The predictions of doom resulting from AGW that they have communicated to the public are more based on conjecture and scenarios produced by computer models than anything else. This is the perspective that we all most often hear on the news.

The skeptics claim that climate has always changed on earth, naturally. It has never been constant, and the most recent period is no exception. They also say that while CO2 is a greenhouse gas, it is not that important, since the amount of it in the atmosphere is so small (it’s probably around 390 parts per million now), and secondly its impact is not linear. It’s logarithmic, so the more CO2 is added to the atmosphere, the less impact that addition has over what existed previously. From what I hear, even warmists agree on this point. Skeptics say that water vapor (H2O) is the most influential greenhouse gas. It is the most voluminous, from measurements that have been taken. Some challenge the idea that increased CO2 has caused the warming we’ve seen at all, whether it be from human or natural sources. Some say it probably has had some small influence, but it’s not big enough to matter, and that there must be other reasons not yet discovered for the warming that’s occurred. Others don’t care either way. They say that warming is good. It’s definitely better compared to dramatic cooling, as was seen in the Little Ice Age. Most say that the human contribution of CO2 is tiny compared to its natural sources. I haven’t seen any scientific validation of this claim yet, so I don’t put a lot of weight in it. As you’ll see, I don’t consider it that relevant, either.

In any case, they don’t see what the big deal is. They often point to geologic CO2 records and temperature proxies going back thousands of years to make their point, but they have some recent evidence on their side as well. They also use the geologic record and historical records to show that past warming periods (the Medieval Warm Period being the most recent–1,000 years ago) were not catastrophic, but in fact beneficial to humanity.
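To put a rough number on the logarithmic point mentioned above, a commonly cited approximation in the climate literature (this is a standard formula, not something taken from the presentations discussed here) puts the radiative forcing from CO2 at about 5.35 × ln(C/C0) watts per square meter, where C0 is the starting concentration and C the new one. Going from 280 ppm to 390 ppm works out to roughly 5.35 × ln(390/280), or about 1.8 W/m^2, while adding the same 110 ppm again, from 390 to 500 ppm, gives only about 5.35 × ln(500/390), roughly 1.3 W/m^2. Each additional increment has less effect than the one before it, which is what "logarithmic" means in this context.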

Some sober climate scientists say that there is a human influence on local climate, and I find that plausible, just from my own experience of traveling through different landscapes. They say that the skylines of our cities alter airflow over large areas, and the steel, stone, and asphalt/cement we use all absorb and radiate heat. This can have an effect on regional weather patterns.

Not everyone involved in distributing this information to the public is a scientist. There are many people who have other duties, such as journalists, reviewers of scientific papers, and climate modelers, who may have some scientific knowledge, but do not participate in obtaining observational data from Nature, or analyzing it.

So what is the agenda of the warmists? Well, that’s a little hard to pin down, because there are many interests involved. It seems like the common agenda is more government control of energy use, a desire to make a major move to alternative energy sources, such as wind and solar (and maybe natural gas), and a desire to set up a transfer of payments system from the First World to the Third World, a.k.a. carbon trading, which as best I can tell has more to do with international politics than climate. The issue of population control seems to be deeply entwined in their agenda as well, though it’s rarely discussed. Giving it a broader view, the people who hold this view are critics of our civilization as it’s been built. They would like to see it reoriented towards one that they see would be more environmentally friendly, and more “socially just.”

The sense I get from listening to them is they believe that our society is destroying the earth, and as our sins against the environment build up, the earth will one day make our lives a living hell (an “apocalypse,” if you will). Some will not admit to this description, but will instead prefer a more technical explanation that still amounts to a faith-based argument. Michael Crichton said in 2003 that this belief seemed religious. Lately there’s some evidence he was right. A lot of the AGW arguments I hear sound like George Michael’s “Praying for Time” from the early 1990s.

Crichton made a well-reasoned argument that environmentalism as religion does not serve us well.

Let me be clear. I am not anti-religion in all aspects of life. My concern here is not that people have religious beliefs, of whatever kind. What concerns me is the attempt to use religious beliefs as justification for government policy. I understand that environmentalism is not officially recognized as a religion in the U.S….yet. We can, however, recognize something that “walks like a duck, quacks like a duck,” etc., for ourselves. I agree with Crichton. The consciousness that environmentalism provides, that we have a role to play in the development of the natural world, a responsibility to be good stewards, is good. However, it should not be a religion. Despite the more alarmist environmentalists who try to scare people with phantoms, there are some sober environmentalists who act based on real scientific findings rather than a religious notion of how nature behaves, or would like to behave if we weren’t around to influence it. In my view those people should be supported.

To be fair, the skeptics have some political views of their own. Often they seem to have a politically conservative bent, with a belief in greater freedom and capitalism, though I think to a person they are environmentally conscious. The difference I’ve seen with them is they’re not coy, and are more willing to show what they’ve found in the evidence, and discuss it openly. They seem to act like scientists rather than proselytizers.

My experience with warmists is they want to control the message. They don’t want to discuss the scientific evidence. They seem to care more about whether people agree with them or not. The most I get out of them for “evidence” of AGW is anecdotes, even if their findings have been scientifically derived. I’m sure their findings are useful for something, but not for proving AGW. I’d be more willing to consider their arguments if they’d act like scientists. My low opinion of these people is driven not by the positions they take, but by how they behave.

We need science to be driven by the search for truth, and for that to happen we need people seeking evidence, being willing to share it openly, as well as their analysis, and allow it to be criticized and defended on its merits. Some climate scientists have been trying to do this. Some have been successful, but from what I’ve seen they represent only gradations of the “skeptic” position. Warmists have forfeited the debate by disclosing only as much information as they say supports their argument, restricting as much information as they can on areas that might be useful for disproving their argument (this gets to the issue of falsifiability, which is essential to science), and basically refusing to debate the data and the analysis, with a few rare exceptions.

The influence that warmists have had on culture, politics, and climate science has been tremendous. Skeptics have faced an uphill battle to be heard on the issue within their discipline since about the mid-1990s. Whole institutions have been set up under the assumption that AGW is catastrophic. Their mission is to fund research projects into the effects, and possible effects, of AGW, not the cause of it. Nevertheless, the people who work for these institutions, or are funded by them, are frequently cited as the "thousands of scientists around the world who have proved catastrophic AGW is real." The only thing is, from what I've heard, there's not much research going into what's causing global climate change, because the thinking is "everybody knows we're the ones causing it"–it's the consensus view, but that's not based on strong evidence that validates the proposition.

“Consensus” might as well be code in the scientific community for “belief in the absence of evidence,” also known as “faith,” because that’s what “consensus” tends to be. Unfortunately this happens in the scientific community in general from time to time. It’s not unique to climate science.

Science is far from a perfect instrument of knowledge. It’s just the best we have. In this respect, as in many others, it’s like democracy. Science by itself cannot advocate courses of human action, but it can certainly illuminate the possible consequences of alternative courses of action.

The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which best fit the facts. It urges on us a delicate balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous skeptical scrutiny of everything–new ideas and established wisdom. This kind of thinking is also an essential tool for a democracy in an age of change.

One of the reasons for its success is that science has built-in, error correcting machinery at its very heart. Some may consider this an overbroad characterization, but to me every time we exercise self-criticism, every time we test our ideas against the outside world, we are doing science. When we are self-indulgent and uncritical, when we confuse hopes and facts, we slide into pseudoscience and superstition.

— TDHW

In light of the issue I’m discussing I would revise that last sentence to say, “when we confuse hopes and fears with facts, we slide into pseudoscience and superstition.” Continuing…

Every time a scientific paper presents a bit of data, it’s accompanied by an error bar–a quiet but insistent reminder that no knowledge is complete or perfect. It’s a calibration of how much we trust what we think we know. … Except in pure mathematics, nothing is known for certain (although much is certainly false).

I thought I should elucidate the distinction that Sagan makes here between science and mathematics. Mathematics is a pure abstraction. I’ve heard those more familiar with mathematics than myself say that it’s the only thing that we can really know. However, things that are true in mathematics are not necessarily true in the real world. Sometimes people confuse mathematics with science, particularly when objects from the real world are symbolically brought into formulas and equations. Scientists make a point of trying to avoid this confusion. Any mathematical formulas that are created in scientific study, because they seem to make sense, must be tested by experimentation with the actual object that’s being studied, to see if the formulas are a good representation of reality. Mathematics is used in science as a way of modeling reality. However, this does not make it a substitute for reality, only a means for understanding it better. Tested mathematical formulas create a mental scaffolding around which we can organize and make sense of our thoughts about reality. Once a model is validated by a lot of testing, it’s often used for prediction, though it’s essential to keep in mind the limitations of the model, as much as they are known. Sometimes a new limitation is discovered even when a well established prediction is tested.

Continuing with TDHW…

Moreover, scientists are usually careful to characterize the veridical status of their attempts to understand the world–ranging from conjectures and hypotheses, which are highly tentative, all the way up to laws of Nature which are repeatedly and systematically confirmed through many interrogations of how the world works. But even laws of Nature are not absolutely certain.

Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science–by far the most successful claim to knowledge accessible to humans–teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us.

We will always be mired in error. The most each generation can hope for is to reduce the error bars a little, and to add to the body of data to which error bars apply. The error bar is a pervasive, visible self-assessment of the reliability of our knowledge.

The following paragraphs are of particular interest to what I will discuss next:

One of the great commandments of science is, “Mistrust arguments from authority.” (Scientists, being primates, and thus given to dominance hierarchies, of course do not always follow this commandment.) Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else. This independence of science, its occasional unwillingness to accept conventional wisdom, makes it dangerous to doctrines less self-critical, or with pretensions of certitude.

Because science carries us toward an understanding of how the world is, rather than how we would wish it to be, its findings may not in all cases be immediately comprehensible or satisfying. It may take a little work to restructure our mindsets. Some of science is very simple. When it gets complicated, that’s usually because the world is complicated, or because we’re complicated. When we shy away from it because it seems too difficult (or because we’ve been taught so poorly), we surrender the ability to take charge of our future. We are disenfranchised. Our self-confidence erodes.

But when we pass beyond the barrier, when the findings and methods of science get through to us, when we understand and put this knowledge to use, many feel deep satisfaction.

— TDHW

Sagan believed that science is the province of everyone, given that we understand what it’s about. In our society we often think of science as a two-tiered thing. There are the scientists who are authorities we can trust, and then there’s the rest of us. Sagan argued against that.

In the case of the AGW issue, what I often see with warmists is the promotion of blind trust: "The science says this," or, "The world's scientists have spoken," and, "therefore we must act"–a note of certainty that in reality science does not offer. Whether we should act or not is a value judgement, and I argue that a cost/benefit analysis should be applied to such decisions as well, taking the scientific evidence and analysis into account, along with other considerations.

Breaking it wide open

There are a few really meaty exposés that have happened this year on what’s been going on in the climate science community around the issue of AGW. One of them I’ll include here is a presentation by Dr. Richard Lindzen, a climate scientist at MIT. It was sponsored by the Competitive Enterprise Institute. In addition, he also addressed an issue related to what Sagan talked about: the lack of critical thinking on the part of leaders and decision makers. Instead there are appeals to authority.

Up until a few hundred years ago, we in the West appealed to authority–monarchs and popes–for answers about how we should be governed, and how we should live. Thousands of years ago, the geometers (meaning “earth measurers”) of Egypt, who could measure and calculate angles so that great structures could be built, were worshipped. Temples were built for them. What created democracy was an appeal to rational argument among the people. A significant part of this came from habits formed in the discipline of science. Unfortunately with today’s social/political/intellectual environment, to discuss the climate issue rationally is to, in effect, commit heresy! What Lindzen showed in his presentation is the unscientific thinking that is passing for legitimate reasoning in climate science, along with a little of the science of climate.

I can vouch for most of the “trouble areas” that Dr. Lindzen talks about, with regard to the arguments warmists make, because I have seen them as I have studied this issue for myself, and discussed it with others. It’s as disconcerting as it looks.

The slides Lindzen used in his presentation are still available here. The notes below are from the video.

It’s ironic that we should be speaking of “ignorance” among the educated. Yet that seems to be the case. The leaders of universities should be scratching their heads and wondering why that is. Perhaps it has something to do with C. P. Snow’s “two cultures,” which I’ve brought up before. People in positions of administrative leadership seem to be more comfortable with narratives and notions of authorship than with critically examining material that’s presented to them. If they are critical, they look at things only from the perspective of political priorities.

What’s interesting is this has been a persistent problem for ages. Dr. Sallie Baliunas talked about how the educated elite in parts of Europe during the Little Ice Age persecuted, tortured, and executed people suspected of witchcraft after severe weather events, because it was thought that the climate could be “cooked” by sorcery. In other words, it was caused by a group of people that was seen as evil. Since the weather events were “unnatural,” they had to be supernatural in origin; according to the beliefs of the day that could only happen by sorcery, and the people who caused it had to be eradicated. Skeptics who challenged the idea of weather “cooking” were marginalized and silenced.

Edit 3-14-2014: After being prompted to do some of my own research on this, I got the sense that Baliunas’s presentation was somewhat inaccurate. In my research I found there was a rivalry between two prominent individuals around this issue, which Baliunas correctly identifies: one accused witches of causing the aberrant weather, and the other argued that this was impossible, because the Bible said that only God controlled the weather. However, according to the sources I read, the accusations and prosecutions for witchcraft/sorcery only happened in rural areas, and were carried out by locals. If elites were involved, it was only the elites in those areas. The Church and the political leadership of Europe did not buy the idea that witches could alter the weather. Perhaps Baliunas had access to source material I didn’t, and that’s how she came to her conclusions. Some of what I found was behind a “pay wall,” and I wasn’t willing to put up money to research this topic.

The sense I get after looking at the global warming debate for a while is there’s disagreement between warmists and skeptics about where we are along the logarithmic curve for CO2 impact, and what coefficient should be applied to it. What Lindzen says, though, is that the idea of a “tipping point” with respect to CO2 is spurious, because you don’t get “tipping points” in situations with diminishing returns, which is what the logarithmic model tells us we will get. Some might ask, “Okay, but what about the positive feedbacks from water vapor and other greenhouse gases?” Well, I think Lindzen answered that with the data he gathered.
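To put the “diminishing returns” point in concrete terms: the commonly cited simplified expression for CO2 radiative forcing is logarithmic in concentration, so each successive doubling of CO2 adds roughly the same increment of forcing rather than an accelerating one. Below is a minimal Python sketch of that shape. The coefficient I use is the standard value from the literature, and how forcing translates into temperature is exactly what’s disputed, so treat this as an illustration of the curve’s shape, not a claim about sensitivity.

import math

# Simplified CO2 radiative forcing: delta_F = alpha * ln(C / C0), in W/m^2.
# alpha (about 5.35) is the commonly cited value; the right coefficient, and
# how forcing maps to temperature, are the disputed quantities.
ALPHA = 5.35    # W/m^2 per natural-log unit of the concentration ratio
C0 = 280.0      # reference (pre-industrial) CO2 concentration, in ppm

def forcing(concentration_ppm):
    return ALPHA * math.log(concentration_ppm / C0)

for c in (280, 560, 1120, 2240):    # successive doublings
    print(f"{c:4d} ppm -> {forcing(c):5.2f} W/m^2")

# Each doubling adds the same ~3.7 W/m^2: diminishing returns per added
# molecule, which is why a runaway "tipping point" doesn't come out of
# this term by itself.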

To clarify the graph that Lindzen showed towards the end, what he was saying is that as surface temperatures increased, so did the radiation that went back out into space. This contradicts the prediction made by computer models that as the earth warms, the greenhouse effect will be enhanced by a “piling on” effect, where warming will cause more water vapor to enter the atmosphere, and more ice to melt, causing more radiation to be trapped and absorbed–a positive feedback.
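For readers unfamiliar with how feedbacks enter the argument, the textbook shorthand is that a feedback factor f turns a no-feedback warming into a total warming of (no-feedback warming) / (1 - f). The dispute is over the sign and size of f: the models assume it is strongly positive, while Lindzen reads the outgoing-radiation data as saying it is small or negative. Here is a quick sketch of the arithmetic; it is my own illustration with a made-up no-feedback value, not something taken from Lindzen’s slides.

# Textbook feedback shorthand: total warming = no-feedback warming / (1 - f).
# f > 0 amplifies, f < 0 damps, and only as f approaches 1 do you get the
# runaway behavior people associate with a "tipping point."
def total_warming(delta_t0, f):
    return delta_t0 / (1.0 - f)

delta_t0 = 1.2   # illustrative no-feedback warming for doubled CO2, in deg C
for f in (-0.5, 0.0, 0.5, 0.75):
    print(f"f = {f:+.2f} -> {total_warming(delta_t0, f):.1f} deg C")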

This study was completed just recently. Based on the scientific data that’s been published, and this presentation by Lindzen, it seems to me that the computer models the IPCC was using were not based on actual observations, but instead represent untested theories–speculation.

The audio at the end of the Q & A section gets hard to hear, so I’ve quoted it. This is Lindzen:

The answer to this is unfortunately one that Aaron Wildavsky gave 15-20 years ago before he died, which is, the people who are interested in the policy (and we all are to some extent, but some people, like you–foremost) have to genuinely familiarize themselves with the science. I’ll help. Other people will help. But you’re going to have to break a certain impasse. That impasse begins with the word “skeptic.” Whenever I’m asked, am I a climate skeptic? I always answer, “No. To the extent possible I am a climate denier.” That’s because skepticism assumes there is a good a priori case, but you have doubts about it. There isn’t even a good a priori case! And so by allowing us to be called skeptics, they have forced us to agree that they have something.

Despite Dr. Lindzen’s attempt to recast his position from “skeptic” to “denier,” I think that’s a bad use of rhetoric, because “denier” in climate science circles has the political connotation of “holocaust denier,” which implies that “the other side has something, and you have nothing.” Personally, I think that people like Lindzen should recognize that “climate skeptic” is a loaded term, and answer instead, “I am a skeptic of most everything, because that’s what good scientists are.” One can be skeptical of spurious claims.

It’s difficult for climate scientists from one side to even debate the other, as they should, because politics is inevitably introduced. This is a symptom of the corruption of science. Scientists should not have to defend their political or industrial affiliations with respect to scientific issues. This is tantamount to guilt by association and “attack the messenger” tactics, which are irrelevant as far as Nature is concerned. I’ve heard more than one scientist say, “Nature doesn’t give a damn about our opinions,” and it’s true. Science depends on the validity of observed data, and skeptical, probing analysis of that data. When the subject of study is human beings themselves, or products that could affect humans and the environment, then ethics comes into play, but this only extends as far as how to design experiments, or whether to do them at all, not to what is discovered from observation.

This is old news by now, but a ton of e-mails and source code were stolen from the University of East Anglia’s Climatic Research Unit (CRU) on November 19, 2009, and made public. I hadn’t heard about it until the last week in November. WattsUpWithThat.com has been publishing a series of articles on what’s being discovered in the e-mails, which provides a good synopsis. I picked out some of them that I thought summed up their contents and implications: here, here, here, and here. The following two interviews with retired climatologist Dr. Tim Ball also summed it up pretty well:

There are no forbidden questions in science, no matters too sensitive, or delicate to be probed, no sacred truths. Diversity and debate are valued. Opinions are encouraged to contend–substantively and in depth.

We insist on independent and–to the extent possible–quantitative verification of proposed tenets of belief. We are constantly prodding, challenging, seeking contradictions or small, persistent, residual errors, proposing alternate explanations, encouraging heresy. We give our highest rewards to those who convincingly disprove established beliefs.

— TDHW

[my emphasis in bold italics — Mark]

Ball referred to the following sites as good sources of information on climate science:

ClimateAudit

WattsUpWithThat

Here are articles written by Ball for the Canada Free Press

In the above interview Ball gets to one of the crucial issues that has frustrated skeptics for years: the publishing of scientific findings and peer review. He said the disclosed e-mails reveal that a small group of warmists exerted a tremendous amount of control over the process. He said he was mystified about why some climate scientists were emphasizing “peer review” 20 years ago (the peer-reviewed literature). He realizes now, after having reviewed the e-mails, that they were in effect promoting their own group. If you weren’t in their club, it’s likely you wouldn’t get published (they’d threaten editors if you did), and you wouldn’t get the coveted “peer review” that they touted so much. Of course, if you didn’t toe their line, you weren’t allowed in their club. No wonder former Vice-President Al Gore could say, “The debate is over.”

It goes without saying that publishing is the lifeblood of academia. If you don’t get published, you don’t get tenure, or you might even lose it. You might as well find another career if you can’t find another sponsor for your research.

The video below is called “Climategate: The backstory.” It looks like this interview with Ball was done earlier, probably in August or September.

The “damage control” from the CRU e-mails incident was apparent in the media in December, around the time of the Copenhagen conference. There was an effort to distract people from the real issues, trying instead to focus people’s attention on the nasty personalities involved. What galls me is this effort betrays a contempt for the public, taking advantage of the notion that we have little knowledge of or interest in how science works, and so can be easily distracted with personality issues.

I have to say the media reporting on this incident was pretty disappointing. If they talked about it at all, they frequently had pundits on who were not familiar with the science. They simply applied their reading skills to the e-mails and jumped to conclusions about what they read. In other cases they invited on PR flacks to give some counterpoint to the controversy. Warmists had a field day playing with the ignorance of correspondents and pundits. Some of the pundits were “in the ballpark.” At least their conclusions on the issue were sometimes correct, even if the reasoning behind them was not. A couple shows actually invited on real scientists to talk about the issue. What a concept!

On a lighter note, check out this clip from the Daily Show…

Here’s an explanatory article about the significance of the “hide the decline” comment, along with background information which gives context for it. Here’s a Finnish TV documentary that touched on the major issues that were revealed in the CRU e-mails (The link is to part 1. Look for the other two parts on the right sidebar at the linked page).

Carl Sagan saw this pattern of thought before:

“A fire-breathing dragon lives in my garage.”

Suppose (I’m following a group therapy approach by the psychologist Richard Franklin) I seriously make such an assertion to you. Surely you’d want to check it out, see for yourself. There have been innumerable stories of dragons over the centuries, but no real evidence. What an opportunity!

“Show me,” you say. I lead you to my garage. You look inside and see a ladder, empty paint cans, an old tricycle–but no dragon.

“Where’s the dragon?” you ask.

“Oh, she’s right here,” I reply, waving vaguely. “I neglected to mention that she’s an invisible dragon.”

You propose spreading flour on the floor of the garage to capture the dragon’s footprints.

“Good idea,” I say, “but this dragon floats in the air.”

Then you’ll use an infrared sensor to detect the invisible fire.

“Good idea, but the invisible fire is also heatless.”

You’ll spray-paint the dragon and make her visible.

“Good idea, except she’s an incorporeal dragon, and the paint won’t stick.”

And so on. I counter every physical test you propose with a special explanation of why it won’t work.

Now, what’s the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there’s no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I’m asking you to do comes down to believing, in the absence of evidence, on my say-so.

The only thing you’ve really learned from my insistence that there’s a dragon in my garage is that something funny is going on inside my head.

Now another scenario: Suppose it’s not just me. Suppose that several people of your acquaintance, including people who you’re pretty sure don’t know each other, all tell you they have dragons in their garages–but in every case the evidence is maddeningly elusive. All of us admit we’re disturbed at being gripped by so odd a conviction so ill-supported by the physical evidence. None of us is a lunatic. We speculate about what it would mean if invisible dragons were really hiding out in our garages all over the world, with us humans just catching on. I’d rather it not be true, I tell you. But maybe all those ancient European and Chinese myths about dragons weren’t myths at all…

Gratifyingly, some dragon-size footprints in the flour are now reported. But they’re never made when a skeptic is looking. An alternative explanation presents itself: On close examination it seems clear that the footprints could have been faked. Another dragon enthusiast shows up with a burnt finger and attributes it to a rare physical manifestation of the dragon’s fiery breath. But again, other possibilities exist. We understand that there are other ways to burn fingers besides the breath of fiery dragons. Such “evidence”–no matter how important the dragon advocates consider it–is far from compelling. Once again, the only sensible approach is tentatively to reject the dragon hypothesis, to be open to future physical data, and to wonder what the cause might be that so many apparently sane and sober people share the same strange delusion.

— TDHW

[my emphasis in bold italics — Mark]

In an earlier part of the book he said:

The hard but just rule is that if the ideas don’t work, you must throw them away. Don’t waste neurons on what doesn’t work. Devote those neurons to new ideas that better explain the data. The British physicist Michael Faraday warned of the powerful temptation

“to seek for such evidence and appearances as are in the favour of our desires, and to disregard those which oppose them . . . We receive as friendly that which agrees with [us], we resist with dislike that which opposes us; whereas the very reverse is required by every dictate of common sense.”

Meanwhile, the risks we are ignoring

The obsession with catastrophic human-caused global warming, driven by ideology, a kind of religious groupthink, and the flow of money to the tune of tens of billions of dollars, represents a misplacement of priorities. It seems to me that if we should be focusing on any catastrophic threats from Nature, we should be putting more resources into a scientifically validated, catastrophic threat that hardly anyone is paying attention to: the possibility of the extinction of the human race, or an extreme culling, not to mention the extinction of most life on Earth, from a large asteroid or comet impact. Science has revealed that large impacts have happened several times before in Earth’s history. A large impactor will come our way again someday, and we currently have no realistic method for averting such a disaster, even if we spotted a body heading for us months in advance. The number of scientists who monitor bodies in space that cross Earth’s orbit could literally fit around a table at McDonald’s! Yet there are thousands of these missiles. These scientists say it is very difficult to make a case in Congress for an increase in funding for their efforts, because the likelihood of an impact seems so remote to politicians.

The New Madrid fault zone represents a huge, known risk to the Midwestern part of the U.S. Scientists have tried to warn cities along the zone about updating their building codes to withstand the next quake that will inevitably occur. But so far they have gotten a cool reception.

Alan Kay commented on Bill Kerr’s blog that regardless of what’s caused global warming (he leaves that as an open question), what we should really be worried about is a “crash” of our climate system, where it suddenly changes state from even a small “nudge.” It could even come about as a result of natural forces. I hadn’t thought about the issue from that perspective, and I’m glad he brought it up. He cited an example of such a crash (though on a smaller scale), pointing to “dead zones” in coastal waters all over the world, resulting from agricultural effluents. The example distracts a bit from his main point, but I see what he’s getting at. He said that governments have not been focused on how to prepare for this scenario of a climate “system crash,” and are instead distracted by meaningless “counter measures.”

The implications for science and our democratic republic

The values of science and the values of democracy are concordant, in many cases indistinguishable. Science and democracy began–in their civilized incarnations–in the same time and place, Greece in the seventh and sixth centuries B.C. … Science thrives on, indeed requires, the free exchange of ideas; its values are antithetical to secrecy. Science holds to no special vantage points or privileged positions. Both science and democracy encourage unconventional opinions and vigorous debate. … Science is a way to call the bluff of those who pretend to knowledge. It is a bulwark against mysticism, against superstition, against religion misapplied to where it has no business being. If we’re true to our values, it can tell us when we’re being lied to. The more widespread its language, rules, and methods, the better chance we have of preserving what Thomas Jefferson and his colleagues had in mind. But democracy can also be subverted more thoroughly through the products of science than any pre-industrial demagogue ever dreamed.

— TDHW

I’m going to jump ahead a bit with this quote from an interview with Carl Sagan on Charlie Rose (shown further below):

If we are not able to ask skeptical questions, to interrogate those who tell us that something is true–to be skeptical of those in authority–then we’re up for grabs for the next charlatan, political or religious, who comes ambling along. It’s a thing that Jefferson lay great stress on. It wasn’t enough, he said, to enshrine some rights in a constitution or bill of rights. The people had to be educated, and they had to practice their skepticism in their education. Otherwise, we don’t run the government. The government runs us.

On December 7, 2009 the EPA came out with its endangerment finding, saying that carbon dioxide is a pollutant that threatens public health. The agency will proceed to impose restrictions on CO2 emitters itself, since Congress has not acted to impose its own. What is all of this based on now?

President Obama, you said, “We will restore science to its rightful place.” I’m still waiting for that to happen.

Science helped birth democracy. Its shadow is now being used to create conditions for a more authoritarian government. This isn’t the first time this has happened. The pseudo-science of eugenics, which was once regarded as scientific since it was ostensibly based on the theory of evolution, was used as justification for the slaughter of millions in Europe in the 1930s and 40s. It was also used as justification for shameful actions and experiments performed by our government on certain groups of people in the U.S.

Global warming has been blown up into a huge issue. There aren’t too many people who haven’t at least heard of it. We are seriously considering taking actions that could cost ordinary people, the poor in particular, and businesses a lot of money. When the stakes are this high, we’d better have a good reason for it. This is like the craze over bran muffins and foods with oats in them, driven by the belief (supported by scientific studies that were misreported to the public) that they prevented cancer, only it’s more serious. I worry about what this does to science, because it seems like since people can get away with debauching it, why not continue doing it in the future?

One worry I have about the debauching of science is that it will delegitimize science in the eyes of the public, and encourage the same superstition and magical thinking that marked the Middle Ages. Who could blame us for rejecting it after it’s been perceived as “crying wolf” too many times?

The public has valued science up to now, because of the information it can bring us. The problem is we don’t care to understand what it is or how it works. “Just give us the facts,” is our attitude. We have blindly given the name of science a legitimacy that, like other things I’ve talked about on this blog, doesn’t take into account the quality of the findings, or the way they were obtained. It reminds me of a reference Alan Kay made to Neil Postman:

Our [scientific] artifacts are everywhere, but most people, as Neil Postman said once, have to take more things on faith now in the 20th century than they did in the Middle Ages. There’s more knowledge that most people have to believe in dogmatically or be confused about.

As a result, we have set up scientists as authorities. Some purport to tell us what to believe, and how to behave, and we as a society expect this of them. The problem with this is when a “scientific fact” is later revealed to be wrong, people feel jilted. Science itself is thought of as a collection of facts, written by our scientific “priesthood.” We expect this “priesthood” to do right by the rest of us. Science was never meant to take on this role. I think a good part of the reason for this passive attitude towards science in the public sphere is that the quality and methodology of findings are not reported to the general public. Most journalists wouldn’t understand the criteria well enough to explain it to the citizenry in a way they’d understand.

The other part of the problem is that science is presented in our educational system as something that’s not very interesting. In fact most students only experience a small sliver of science, if that. It’s rather like mathematics (or arithmetic and calculation that’s called “mathematics”) for them, something they’re required to take. They just want to “get through it,” and they’re thankful when it’s over.

An issue I’m not even addressing here, though it’s worth noting, is that science is often perceived as heartless and cold, a discipline that has allowed us as a society to act without a moral sense of responsibility. This I’m sure has also contributed to the public’s aversion to science. And I can see that because of this, people might prefer “the science of global warming alarm” to “the science of skepticism.” One seems to be promoting “good action,” while the other seems like a bunch of backward, out of touch folks, who don’t care about the earth. These are emotional images, a way of thought that a lot of people the world over are prone to. However, as Sagan said in the interview below, “Science is after how the Universe really is, and not what makes us feel good.” These images of one group and the other are stereotypes, not so much the truth.

Of course a moral sense is necessary for a self-governing society like ours, but morality can be misapplied. By trying to do good we could in fact be hurting people if the solution we implement is not thought through. We may act on incomplete information, all the while thinking that we have the complete picture, thereby ignoring important factors that may require a very different solution to resolve. Our understanding of complex systems and the effects of tampering with them may also be grossly incomplete. While attempting to shape and direct a system that is behaving in a way we don’t like, we may make matters worse. Intent matters, but results matter, too. What appear to be moral actions will not always result in moral outcomes, especially in systems that are huge in scale and complexity. This applies to the environment and our economy.

As we’ve seen in our past, people eventually do figure out that the science behind a spurious claim was flawed, but it tends to take a while. By that point the damage has already been done. Perhaps scientists need to take a more active public service role in informing the public about claims that are made through news outlets. What would be better is if people understood scientific thinking, but in the absence of that, scientists could do the public a service by explaining issues from a scientific perspective, and perhaps educating the audience about what science is along the way. This would need to be done carefully, though. A real effort at this would probably expose people to notions that they are uncomfortable with. Without a sufficient grounding in the importance of science, that is, the importance of listening and considering these uncomfortable ideas, most people will just change the channel when that happens. In order for this to work, people need to be willing to think, because the activity is interesting, and sometimes produces useful results. Science cannot just be regarded as a vocation in our society. It is an essential part of the health of our democratic republic.

The danger of our two-tiered knowledge society

In all uses of science, it is insufficient–indeed it is dangerous–to produce only a small, highly competent, well-rewarded priesthood of professionals. Instead, some fundamental understanding of the findings and methods of science must be available on the broadest scale.

— TDHW

I’m going to turn the subject now to the matter of science and technology, and our collective ignorance, because it also has bearing on this “dangerous brew.” I found this interview with Carl Sagan, which was done shortly before he died in 1996. He talked with Charlie Rose about his then-new book, The Demon-Haunted World. He had some very prescient things to say which add to the quotes I’ve been using from his book. I found myself agreeing with what Sagan said in this interview regarding science, scientific awareness, and science vs. faith and emotions, but context is everything. He may not have been arguing from the same point of view I am, as I reveal further below. I find it interesting, though, that his quotes seem to apply very nicely to my argument. I’ve been reading this book, and I don’t see how I might be quoting him out of context. You’ll see why I’m hedging as you read further.

This is a poignant interview, because they talk about death and what that means. It’s a bit sad and ironic to see his optimism about his good health. He died from pneumonia, a complication of the bone marrow transplant he received as treatment for his myelodysplasia. His final accomplishment was completing work on a movie version of his novel, Contact, which came out in 1997. Interestingly, the movie touched on some themes from The Demon-Haunted World.

My jaw dropped when I heard Charlie Rose read that less than half of American adults in 1996 thought that our planet orbits the Sun once a year! I did a quick check of science surveys on the internet and it doesn’t look like the situation has gotten any better since then.

Sagan’s point was not that magical thinking in human beings was growing. He said it’s always been with us, but in the technological society we have built, the prominence of this kind of thinking is dangerous. This is partly because we are wielding great power without knowing it, and partly because it makes us as a people impotent on issues of science and technology. We will feel it necessary to just leave decisions about these issues up to a scientific-technological elite. I’ve argued before that we have an elite that has been making technological decisions for us, but not at a public policy level. It’s been at the level of IT administrators and senior engineers within organizations. In the realm of science, however, we clearly have an elite which has gladly taken over decisions about science at the policy level.

The climate issue points to another aspect of this. As Dr. Lindzen pointed out in his presentation (above), we have people who are misusing climate models (and it’s anyone’s guess whether it’s on purpose, or due to ignorance) as a substitute for the natural phenomenon itself! I’ve talked to a few climate modelers who believe that human activities are causing catastrophic climate change, and this is how they view it: Since we do not have another Earth to use as a “control,” or to use as a means for “repeatability,” we use computer models as a “control,” or in order to repeat an “experiment.” It’s absurd. Talking to these people is like entering the Twilight Zone. They argue as if they’re the professionals who know what they’re doing. The truth is they’re ignorant of the scientific method and its value, yet their theories of computer modeling and methodology carry a high level of legitimacy in the field of climate science. It’s what a lot of the prognosticating in the IPCC (Intergovernmental Panel on Climate Change) assessment reports is based on. This gives you an idea of the ignorance that at times passes for knowledge and wisdom in this field!

[Computers offer] a level of abstraction that makes them very much like minds, or rather makes them mind-like. And that is to say computers manipulate not reality, but representations of reality.

— Doron Swade, curator of the London Science Museum

As I’ve talked about before, a computer model is only a theory. That’s it. It’s a representation of reality created by imperfect human beings (programmers, though in principle it’s not that different from scientists creating theories and mathematical models of reality). It’s irrational to use a theory as a “control,” or as a proxy for the real thing in an “experiment.” It goes against what science is about, which is an acknowledgment that human beings are ignorant and flawed observers of Nature. Even if we have a theory that seems to work, there is always the possibility that in some circumstance that we cannot predict it will be wrong. This is because our knowledge of Nature will always be incomplete to some degree. What science offers, when applied rigorously, is very good approximations. Within the boundaries of those approximations we can find ideas that are useful and which work. There are no shortcuts to this, though.

Theories are of course welcome in science, but the only rational thing to do with them while using the scientific method is to test them against the real thing, and to pay attention to how well theory and reality match, in as many aspects as can be discerned.

Climate modelers who back the idea of catastrophe claim they do this when forming their models, but I’ve heard first-hand accounts from scientists about how modelers will “tweak” parameters to make a model do something “interesting.” This gets them attention, and I detect some techno-cultish behavior in it. I’ve heard second-hand accounts from scientists about how modelers will input unrealistic parameters to make the models closely match the temperature record, which they term “validating the model.” As Dr. John Christy, a scientist at the University of Alabama in Huntsville who studies temperature in the atmosphere and at the surface, once remarked, “They already knew what the correct answer was.” This is an illegitimate methodology, because it’s no better than forming a conclusion based on a data correlation. I’m sure that if I worked hard enough at it, I could create a computer model that closely tracked the temperature record, just by drawing lines on the screen and/or producing numbers, from a standpoint of total ignorance of how the climate works, and I suppose by their criteria my model would be “validated” (as the sketch below illustrates).
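To show how little a close match to the record proves by itself, here is a sketch along the lines of what I just described. The “record” is data I made up, and the “model” is a plain polynomial fit with no physics in it whatsoever; by the “it matches the temperature record, therefore it’s validated” standard, it would pass.

import numpy as np

# A hypothetical, made-up "temperature record": a gentle trend plus a
# wiggle plus noise. None of these numbers mean anything.
rng = np.random.default_rng(0)
years = np.arange(1900, 2010)
record = 0.006 * (years - 1900) + 0.2 * np.sin((years - 1900) / 8.0) \
         + rng.normal(0.0, 0.05, size=years.size)

# A "model" containing zero understanding of climate: a 9th-degree
# polynomial fitted directly to the record.
x = (years - 1955.0) / 55.0    # scale years to roughly [-1, 1] so the fit is well conditioned
coeffs = np.polyfit(x, record, deg=9)
model = np.polyval(coeffs, x)

rms_misfit = np.sqrt(np.mean((model - record) ** 2))
print(f"RMS misfit over the fitted period: {rms_misfit:.3f} deg C")

# It tracks the record closely, yet it has no reason to be right about
# anything outside the span it was tuned to. Matching the past is
# necessary, but nowhere near sufficient.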

I’ve cited this quote from Alan Kay before (though he did not specifically address the issue of climate modeling, or anything having to do with climate science when he said it):

You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

To clarify, Kay was talking about the application of computing to non-computational sciences.

Beware of those who come bearing predictions

I have praised Carl Sagan for what he talked about in the Charlie Rose interview above (I praise him for some of his other work as well), but I feel I would be remiss if I didn’t talk about a portion of his career where he fell into doing what I’m complaining about in this post. He promoted an untested prediction for a political agenda. I’m going to talk about this, because it illustrates a temptation that scientists (who are flawed human beings like the rest of us) can succumb to.

Sagan was one of the chief proponents of the theory of nuclear winter in the 1980s during the Cold War between the U.S. and the Soviet Union. As Michael Crichton pointed out (you may want to search on Sagan’s name in the linked article to reach the relevant part), like with catastrophic AGW, this was based on a prediction that was supported by flimsy evidence. In fact, computer climate modeling had a central role in the prediction’s supposed legitimacy.

Sagan exhibited a fallacy in thinking on a number of occasions that I’ll call “belief in the mathematical scenario.” Such scenarios are supported by a concept that can be conjured up as a technical, mathematical model. Here’s the thing. Is the scenario even plausible? Does the fact that we can imagine it in a plausible way justify believing that it’s real? Does a moral belief that something is right or wrong justify promoting an unwavering belief in an untested theory that supports the moral rule, because it will cause people to “do the right thing”? How is this different from aspects of organized religion? Do these questions matter? From where I sit, it’s the academic equivalent of people making up their own myths, using the technical tool of mathematics as a legitimizer, mistaking mathematical precision for objective truth in the real world. This is a behavior that science is supposed to help us avoid!

On one level, trying to apply the scientific method to the nuclear winter prediction sounds absurd: “You want evidence confirming that a nuclear war would result in nuclear winter?? Are you nuts?” First of all, we don’t have to resort to that extreme. Scientists have found ways to physically model a scenario by using materials from Nature, but at a small scale, in order to arrive at approximations that are quite good. We don’t have to experience the real thing at full scale to get an idea of what will really happen. It’s just a matter of arriving at a realistic model, and in the case of this prediction that might’ve been difficult.

The point is that a prediction, an assertion, must be tested before it can be considered scientifically valid. It’s not science to begin with unless it’s falsifiable. And what’s worse, Sagan knew this! Without falsifiability, the appropriate scientific answer is, “We don’t know,” but that’s not what he said about this scenario. He at least admitted it was a prediction, but he also called it “science.” It was disingenuous, and he should’ve known better.

Without testing our notions, our assumptions, and our models, we are left with superstition–irrational fears of the unknown, and irrational hopes for things that defy what is possible in Nature (whether we know what’s possible is beside the point), even though they are dispensed using ideas that sound modern, and comport with what we think intelligent, educated people should know.

It doesn’t matter if the untested prediction is made seemingly plausible by mathematics, or a computer model (which is another form of mathematics). That’s mere hand waving. Prediction, mathematical or otherwise, is not science, and therefore it’s not nearly as reliable as analysis derived from the scientific method. Our predictions are hopefully derived from science, but even so, an untested prediction really is only as reliable as the experience of the person giving it.

The same sort of political dynamic came into play at the time the nuclear winter theory was popular that has existed in climate science: If you were skeptical about the theory of nuclear winter, that meant you were in the “minority” (or so they had people believe)–not with the “consensus.” You were accused of supporting nuclear arms, and our government’s tough “cowboy” anti-Soviet policy, and of being a bad person. Such smears were unjustified, but they were used to shame and silence dissent. I don’t mean to suggest that Sagan was a communist sympathizer, or anything of that sort. I think he wanted to prevent nuclear war, period. Not a bad motive in itself, but it seems to me he was willing to sacrifice the legitimacy of science for this.

A lot of scientists who didn’t know too much about the science at issue, but didn’t want to ruffle feathers, went along with it to be a part of the accepted group. The whole thing was desperate and cynical. It’s my understanding from history that the fears exhibited by the promoters of this theory were unfounded, and I think they came about because of a fundamental misunderstanding of realpolitik.

It’s not as if this scare tactic was really necessary. The consequences of nuclear war that we knew about were horrifying enough. It’s apparent from the interview with Ted Turner in the above video that there were worries about the escalation of the nuclear arms race, and perhaps in particular about the Reagan Administration pursuing a first-strike nuclear capability against the Soviet Union. You’ll notice that Sagan talks about (I’m paraphrasing) “one nation bombing another before the other can respond, the attacker thinking that they will remain untouched.” People like Sagan didn’t want the U.S., or perhaps the Soviets for that matter, to think it could carry out a first strike and wipe out the other side with impunity (because the climate would “get” the attacker in return). Surely, a nuclear war with a large number of blasts would’ve caused some changes in climate, but how much was anyone’s guess.

The only evidence that could’ve realistically tested the theory, to a degree, would’ve been from above ground nuclear tests, or the bombs that were dropped on Hiroshima and Nagasaki, Japan. To my knowledge, none of them gave results that would’ve contributed to the prediction’s validity.

Before the very first nuclear bomb was tested there was at least one scientist in the Manhattan Project who thought that a single nuclear blast might ignite our atmosphere. That would be a fate worse than the predicted nuclear winter. Imagine everything charred to a crisp! Others thought that while it was possible, the probability was remote. Still, the scenario was terrifying. The bomb was tested. Many other above-ground atom bomb tests followed, and we’re all still here. Not to say that nuclear testing is good for us or the environment, but the prediction didn’t come true. The point is, yes, there are terrible scenarios that can be imagined. These scenarios are made plausible based on things that we know are real, and a knowledge of mathematics, but that does not mean any of these terrible scenarios will happen.

You’ll notice if you watch the interview with Turner that Sagan even talks about catastrophic AGW! Again, what he spoke of was a prediction, not a scientifically validated conclusion. It’s hard to know what his motivation was with that, but it sounded like he was uncomfortable with the idea that our civilization was not consciously thinking about the environment, and what consequences that might have down the line. Western governments began to understand environmental issues in the 1980s, and implemented regulations to clean up what was our highly polluted environment. From what I understand though, this did not happen in other parts of the world.

Not to say that all environmental problems have been solved in the U.S. There are real environmental issues that science can inform us about today, and will need to be acted upon. One example is “dead zones,” which I referred to earlier, where coastal waters are losing their oxygen due to an interaction between nitrogen-rich compounds that agricultural operations are releasing into streams, and algae. It’s killing off all marine life in the affected areas, and these “dead zones” exist all over the world. There’s a Frontline documentary called “Poisoned Waters” that talks about it. Another is an issue that does have to do with human-induced climate forcing, but not strictly in the sense of warming or cooling the planet. The PBS show Nova talked about it in an episode called “Dimming the Sun.” Huge quantities of sooty pollution have been found to affect relative humidity on Earth, which does have a significant effect on our weather. Aside from the application of this new find to the issue of AGW, which to me was rather irrelevant, this was a very interesting show. They gave what I thought was a very thorough and compelling exposition of the science behind the “dimming” effect.

The legacy of Carl Sagan

Based on what I’ve read in Sagan’s book, if he were still alive today, he would probably still be promoting the theory of catastrophic AGW. That is something I find hard to understand, given the understanding of science that he had, its implications for our society, and the seemingly innate need for humans to create their own myths, all of which he seemed to know about. Perhaps he was not one to look inward, as well as outward. Though it’s impossible to do this now, this is an issue I’d dearly like to ask him about.

I can’t help but think that Sagan and his cohorts created the template for the pseudo-science that bedevils climate science today. Richard Lindzen’s paper, which I referred to at the beginning of this post, paints the picture of what’s happened more fully, and points to some other motivations, besides politics. One of Sagan’s phrases that I still remember is, “Extraordinary claims require extraordinary proof.” Too bad he didn’t follow that maxim sometimes.

Despite this, I respect the fact that he really did try to bring scientific understanding to the masses. Sagan in my mind was a great man, but like all of us he was flawed, and even he was willing to set aside his scientific thinking and participate in the promotion of pseudo-science for non-scientific goals.

I could rant that Sagan was a hypocrite, because he cynically exploited the very ignorance he expressed concern about. However, my guess is that he saw what he thought was a dangerous situation developing in an ignorant world–a “demon-haunted” one at that. Perhaps the only way he knew how to deal with it in such “dire circumstances” was to “take what existed,” promote a scenario that was not based on much, which we ignoramuses would believe, and cynically exploit the good name of science (and his own good name) so that we would pull back from the brink. It is elitist, though I can understand the temptation.

If we really want to bring people out of ignorance it’s best to try to educate them, even though that can be hard. Sometimes people just don’t want to hear it. But if this approach is not taken, then it’s just a bunch of elites messing with people’s heads so we’ll give them a response they want. We won’t be any more enlightened. There’s too much of that already.

I guess another lesson is that even though we can see ignorance in people, when the human spirit is brought out it can manifest solutions to problems in ways that people like me would not anticipate, and things work out okay. That in essence is the genius of semi-autonomous systems like ours that have diffused power structures. It acknowledges that no one person, or group, has all the right answers. The same is true of science, when it’s done well, and relatively free markets. It’s best if we respect that, even though we may be tempted to subvert these systems for causes we ourselves deem noble.

Even so, I feel as though we put too much faith in our semi-autonomous, diffused systems. Some of us think they will solve all problems, and it’s not necessary to worry about being well educated. I think people push aside the idea too casually that more sophisticated ways of thinking and perceiving would help all of us (not just a few) make those systems more optimal.

So what are we to do?

The tenets of skepticism do not require an advanced degree to master, as most successful used car buyers demonstrate. The whole idea of a democratic application of skepticism is that everyone should have the essential tools to effectively and constructively evaluate claims to knowledge. All science asks is to employ the same levels of skepticism we use in buying a used car or in judging the quality of analgesics or beer from their television commercials.

But the tools of skepticism are generally unavailable to the citizens of our society. They’re hardly ever mentioned in the schools, even in the presentation of science, its most ardent practitioner, although skepticism repeatedly sprouts spontaneously out of the disappointments of everyday life. Our politics, economics, advertising, and religions (New Age and Old) are awash in credulity. Those who have something to sell, those who wish to influence public opinion, those in power, a skeptic might suggest, have a vested interest in discouraging skepticism.

— TDHW

I noted as I wrote this post that both Sallie Baliunas and Carl Sagan said that science needed special protection in our society. Richard Lindzen indicates that this protection is paper thin. He said it’s unfortunately easy to co-opt science in our society. The only way science can be protected, in my view, is if we value it for what it really is. Students need to be taught that, and shown its beauty. Sagan said a key thing in the Charlie Rose interview: “Science is more than a body of knowledge. It’s a way of thinking.” I must admit some ignorance about this, but what little I’ve heard about science education in schools now indicates that it’s taught almost strictly as a body of knowledge. I suspect this is because of No Child Left Behind and standardized testing. I remember a CS professor saying a while back that in his kid’s science class there was a lot of workbook material, but very little experimentation, because the teachers were afraid to allow their students to do experiments. He didn’t explain why. My suspicion is they didn’t want the students to come to their own conclusions about what they had seen, and possibly get “confused” about what they’re supposed to know for their tests. In any case I remember exclaiming to the professor that he should get his child out of that class! I asked rhetorically, “What do they think they’re teaching?”

Even when I look at my own science education I realize that I wasn’t given a complete sense of what science was. The hypotheses were practically given to us. When we did an experiment, the steps for it were always given to us. We were always given the “correct” answer in the end, so we could compare it against the answer we came up with. This is how we calculated error: we compared the answer we had against the “correct” answer. One thing that was valuable was we were asked to think of any reasons why the error occurred. Some error always existed (it always does in real science), and we could speculate that maybe our instruments introduced some error, that we may have done the procedure a bit wrong, etc. This was teaching only one part of real science: observation, being skeptical of our observations, and recognizing human fallibility. That’s valuable. On the other hand, what it also taught was the fallacy that there was a perfectly correct answer, achieved via mathematics formulated by “masters of science.”

There’s so much more to it than that. In real science, scientists come up with their own hypotheses. They design their own experiments. When they get their results they have nothing to compare them against, unless they’re reproducing someone else’s experiment. Even then they can’t just say “the results in the original experiment are the correct answer,” because the other experiment may have had unrecognized flaws, too. The process by which those mathematical formulas we used became so good was not a “one shot” deal. They came about from making a lot of mistakes, realizing what they were, and correcting for them.

I wonder how real scientists figure out what error figures/bars to put in their results. Maybe they could come from instrument ratings, or probabilities, based on an examination of the scale of the observation.
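For what it’s worth, one common answer is that error bars come from the spread of repeated measurements (reported as a standard deviation, or the standard error of the mean), combined with the stated resolution of the instrument, and propagated through whatever calculations follow. Here is a minimal sketch of just the repeated-measurements part, with made-up readings:

import statistics

# Five hypothetical readings of the same quantity (made-up numbers).
readings = [9.78, 9.82, 9.80, 9.85, 9.79]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)              # scatter among the readings
std_error = spread / (len(readings) ** 0.5)      # uncertainty of the mean

print(f"best estimate: {mean:.3f} +/- {std_error:.3f}")
# Instrument resolution and propagation of uncertainty through any
# formulas would widen this; the above is only the statistical part.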

Anyway, science is really about wondering, exploring, being curious, being skeptical of your own observations, as well as those of others. It also takes into account what’s been discovered previously. Alan Kay has talked about how there’s also a kind of critical argument that goes on in science, where the weak ideas are identified and set aside, and the strong ones are allowed to rise to the top.

So much stress is put on the need for math and science education for our country’s future economic health. It’s necessary for our society’s general health, too. I hope someday we will recognize that.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »