Answering some basic mathematical questions about rational numbers (fractions)

A couple questions have bugged me for ages about rational numbers:

  1. Why is it that we invert and multiply when dividing two fractions?
  2. Why is it that when we solve a proportion, we multiply two elements, and then divide by a third?

For example, with a proportion like:

\frac {3} {180} = \frac {2} {x}

the method I was taught was to cross-multiply the two numbers that are diagonally across from each other (2 and 180), and divide by the number opposite x (3). We solve for x with x = \frac {2 \times 180} {3}, but why?

These things weren’t explained. We were just told, “When you have this situation, do X.” It works. It produces what we’re after (which school “math” classes see as the point). But once I got into college, I talked with a fellow student who had a math minor, and he told me that in his Numerical Analysis course they explained this sort of stuff with proofs. I thought, “Gosh, you can prove this stuff?” Yeah, you can.

I’ve picked up a book that I’d started reading several years ago, “Mathematics in 10 Lessons,” by Jerry King, and he answers Question 1 directly, and Question 2 indirectly. I figured I would give proofs for both here, since I haven’t found mathematical explanations for this stuff in web searches.

I normally don’t like explaining stuff like this, because I feel like I’m spoiling the experience of discovery, but I found as I tried to answer these questions myself that I needed a lot of help (from King). My math-fu is pretty weak, and I imagine it is for many others who had a similar educational experience.

I’m going to answer Question 2 first.

The first thing he lays out in the section on rational numbers is the following definition:

\frac {m} {n} = \frac {p} {q} \Leftrightarrow mq = np, where\: n \neq 0,\, q \neq 0

I guess I should explain the double-arrow symbol I’m using (and the right-arrow symbol I’ll use below). The arrow means “implies,” and with the double-arrow, the implication goes both ways: each expression implies the other. It’s saying “if X is true, then Y is also true. And if Y is true, then X is also true.” (Mathematicians read this as “if and only if.” The right-arrow I use below just means “If X is true, then Y is also true.”)

In this case, if you have two equal fractions, then the product equality in the second expression holds. And if the product equality holds, then the equality for the terms in fraction form holds as well.
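A quick numeric example: \frac {1} {2} = \frac {3} {6}, and sure enough, 1 \times 6 = 2 \times 3 = 6.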

When I first saw this, I thought, “Wait a minute. Doesn’t this need a proof?”

Well, it turns out, it’s easy enough to prove it.

The first thing we need to understand is that you can do anything to one side of an equation so long as you do the same thing to the other side.

We can take the equal fractions and multiply them by the product of their denominators:

nq (\frac {m} {n}) = (\frac {p} {q}) nq

By cancelling like terms, we get:

mq = np

This explains Question 2, because if we take the proportion I started out with, and translate it into this equality between products, we get:

3x = 2 \times 180

Dividing both sides by 3 to solve for x, we get:

x = \frac {2 \times 180} {3}

which is what we’re taught, but now you know the why of it.
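Carrying out the arithmetic, x = \frac {360} {3} = 120. You can check it against the definition above, since 3 \times 120 = 2 \times 180 = 360.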

To answer Question 1, we have to establish a couple other things.

The first is the concept of the multiplicative inverse.

For every x ≠ 0, there’s a unique v such that xv = 1, which means that v = \frac {1} {x}.

From that, we can say:

xv = \frac {x} {1} \frac {1} {x} = \frac {x} {x} = 1

This shows such a v exists. It’s also the only one: if xv = 1 and xw = 1, then v = v \cdot 1 = v(xw) = (xv)w = 1 \cdot w = w. So the inverse of x is unique to x.

King goes forward with another proof, which will lead us to answering Question 1:

r = \frac {a} {b} \Rightarrow a = br, where\; b \neq 0

Proof: Multiplying both sides of r = \frac {a} {b} by b, we get:

b (\frac {a} {b}) = rb

By cancelling like terms, we get:

a = br

(It’s also true that a = br \Rightarrow r = \frac {a} {b}, but I won’t get into that here.)

Now, consider:

r = \frac {\frac {m} {n}} {\frac {p} {q}}, where\; n \neq 0, p \neq 0, q \neq 0

By the above proof, where we determined that r = \frac {a} {b} \Rightarrow a = br, we can say:

\frac {m} {n} = r (\frac {p} {q})

Then, multiplying both sides by \frac {q} {p} (the multiplicative inverse of \frac {p} {q}):

\frac {m} {n} \frac {q} {p} = r (\frac {p} {q}) \frac {q} {p}

By cancelling like terms, we get:

\frac {m} {n} \frac {q} {p} = (r) 1

r = \frac {m} {n} \frac {q} {p}

Therefore,

\frac {\frac {m} {n}} {\frac {p} {q}} = \frac {m} {n} \frac {q} {p}

And there you have it. This is why we invert and multiply when dividing fractions.
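A quick sanity check with numbers: \frac {\frac {3} {4}} {\frac {2} {5}} = \frac {3} {4} \frac {5} {2} = \frac {15} {8}, and it checks out, because \frac {15} {8} (\frac {2} {5}) = \frac {30} {40} = \frac {3} {4}.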

Some may ask, since the mathematical logic for these truths is fairly simple, from an algebraic perspective, why don’t math classes teach this? Well, it’s because they’re not really teaching math…

Note for commenters:

WordPress supports LaTeX. That’s how I’ve been able to publish these mathematical expressions. I’ve tested it out, and LaTeX formatting works in the comments as well. You can read up on how to format LaTeX expressions at LaTeX — Support — WordPress. You can read up on what LaTeX formatting commands to use at Mathematical expressions — ShareLaTeX under “Further Reading”.

HTML codes also work in the comments. If you want to use HTML for math expressions, just a note, you will need to use specific codes for ‘<‘ and ‘>’. I’ve seen cases in the past where people have tried using them “naked” in comments, and WordPress interprets them as HTML tags, not how they were intended. You can read up on math HTML character codes here and here. You can read up on formatting fractions in HTML here.

Related post: The beauty of mathematics denied

— Mark Miller, https://tekkie.wordpress.com


SICP: Chapter 3 and exercises 3.59, 3.60, and 3.62

Prologue

SICP reaches a point, in Chapter 3, where for significant parts of it you’re not doing any coding. It has exercises, but they’re all about thinking about the concepts, not doing anything with a computer. It has you do substitutions to see what expressions result. It has you make diagrams that focus in on particular systemic aspects of processes. It also gets into operational models, talking about simulating logic gates, and about how concurrent processing can work (expressed in hypothetical Scheme code). It’s all conceptual. Some of it was good, I thought. The practice of doing substitutions manually helps you really get what your Scheme functions are doing, rather than guessing. The rest didn’t feel that engaging. One could be forgiven for thinking that the book is getting dry at this point.

It gets into some coding again in Section 3.3, where it covers building data structures. It gets more interesting with an architecture called “streams” in Section 3.5.

One thing I will note is that the only way I was able to get the code in Section 3.5 to work in Racket was to go into “Lazy Scheme.” I don’t remember what language setting I used for the prior chapters, maybe R5RS, or “Pretty Big.” Lazy Scheme does lazy evaluation on Scheme code. One might be tempted to think that this makes the supporting structures for streams covered in this section pointless, because the underlying language is already doing the delayed evaluation that this section implements in code. Anyway, Lazy Scheme doesn’t interfere with anything in this section. It all works. It just makes the underlying structure for streams redundant. For the sake of familiarity with the code this section discusses (which I think helps in preserving one’s sanity), I think it’s best to play along and use its code.

Another thing I’ll note is this section makes extensive use of knowledge derived from calculus. Some other parts of this book do that, too, but it’s emphasized here. It helps to have that background.

I reached a stopping point in SICP, here, 5 years ago, because of a few things. One was I became inspired to pursue a history project on the research and development that led to the computer technology we use today. Another is I’d had a conversation with Alan Kay that inspired me to look more deeply at the STEPS project at Viewpoints Research, and try to make something of my own out of that. The third was Exercise 3.61 in SICP. It was a problem that really stumped me. So I gave up, and looked up an answer for it on the internet, in a vain attempt to help me understand it. The answer didn’t help me. It worked when I tried the code, but I found it too confusing to understand why it produced correct results. The experience was really disappointing, and disheartening. Looking it up was a mistake. I wished that I could forget I’d seen the answer, so I could go back to working on trying to figure it out, but I couldn’t. I’d seen it. I worked on a few more exercises after that, but then I dropped SICP. I continued working on my history research, and I got into exploring some fundamental computing concepts on processors, language parsing, and the value of understanding a processor as a computing/programming model.

I tried an idea that Kay told me about years earlier, of taking something big, studying it, and trying to improve on it. That took me in some interesting directions, but I hit a wall in my skill, which I’m now trying to get around. I figured I’d continue where I left off in SICP. One way this diversion helped me is I basically forgot the answers for the stuff I did. So, I was able to come back to the problem, using a clean slate, almost fresh. I had some faded memories of it, which didn’t help. I just had to say to myself “forget it,” and focus on the math, and the streams architecture. That’s what finally helped me solve 3.60, which then made 3.61 straightforward. That was amazing.

A note about Exercise 3.59

It was a bit difficult to know at first why the streams (cosine-series and sine-series) were coming out the way they were for this exercise. The cosine-series is what’s called an even series, because its powers are even (0, 2, 4, etc.). The sine-series is what’s called an odd series, because its powers are odd (1, 3, 5, etc.). However, when you’re processing the streams, they just compute a general model of series, with all of the terms, regardless of whether the series you’re processing has values in each of the terms or not. So, cosine-series comes out (starting at position 0) as: [1, 0, -1/2, 0, 1/24, …], since the stream is computing a_0 x^0 + a_1 x^1 + a_2 x^2 + \cdots, where a_i is each term’s coefficient, and some of the terms are negative, in this case. The coefficients of the terms that don’t apply to the series come out as 0. With sine-series, it comes out (starting at position 0) as: [0, 1, 0, -1/6, 0, 1/120, …].
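If you want to see the shape these definitions take (fair warning: this is essentially the answer to part (b) of the exercise, so skip past it if you’re still working that out yourself), here is a sketch, assuming the integrate-series function from part (a), plus cons-stream and scale-stream from SICP:

(define cosine-series
  (cons-stream 1 (integrate-series
                  (scale-stream sine-series -1)))) ; the derivative of cosine is -sine

(define sine-series
  (cons-stream 0 (integrate-series cosine-series))) ; the derivative of sine is cosine

The mutual recursion works because cons-stream delays its second argument, so each series only demands terms of the other as they’re needed.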

What’s really interesting is that exercises 3.59, 3.61, and 3.62 are pretty straightforward. Like with some of the prior exercises, all you have to do is translate the math into stream terms (and it’s a pretty direct translation from one to the other), and it works! You’re programming in mathland! I discovered this in the earlier exercises in this section, and it amazed me how expressive this is. I could pretty much write code as if I was writing out the math. I could think in terms of the math, not so much the logistics of implementing computational logic to make the math work. At the same time, this felt disconcerting to me, because when things went wrong, I wanted to know why, computationally, and I found that working with streams, it was difficult to conceptualize what was going on. I felt as though I was just supposed to trust that the math I expressed worked, just as it should. I realize now that’s what I should have done, but it was starting to feel like I was playing with magic, and I didn’t like that. It was difficult for me to trust that the math was actually working, and that if it wasn’t, it was because I was either not understanding the math, and/or not understanding the streams architecture, not because there was something malfunctioning underneath it all. I really wrestled with that, and I think it was for the good, because now that I can see how elegant it all is, it looks so beautiful!

Exercise 3.60

This exercise gave me a lot of headaches. When I first worked on it 5 years ago, I kind of got correct results with what I wrote, but not quite. I ended up looking up someone else’s solution to it, which worked, and which was kind of close to what I had. I finally figured out what I did wrong when I worked on it again recently: I needed to research how to multiply power series in a computationally efficient manner. It turns out you need to use a method called the Cauchy product, but there’s a twist, because of the way the streams architecture works. A lot of the math sources I looked up use the Cauchy product method anyway (including a source on power series multiplication that I cite below), but the reason it needs to be used here is that it automatically collects all like terms as each term of the product is produced. The problem you need to work around is that you can’t go backwards through a stream, except by using stream-ref and indexes, and I’ve gotten the sense from going through these exercises that when it comes to doing math problems, you’re generally not supposed to do that. (There is an example later in the book, covering a method Euler devised for accelerating the computation of a power series, where they use stream-ref to go backwards and forwards inside a stream, but I think that’s an exception.)

One hint I’ll leave here is that there is a way to do the Cauchy product without using mutable state in variables, nor is it necessary to do anything particularly elaborate. You can do it just using the stream architecture. Thinking about the operations involved, you don’t have to do the computations strictly left to right, because all of the operations are commutative.
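For reference, here is the Cauchy product written out as math, since it’s the pattern your stream code needs to produce. Multiplying two power series with coefficients a_i and b_j:

\left ( \sum_{i=0}^{\infty} a_i x^i \right ) \left ( \sum_{j=0}^{\infty} b_j x^j \right ) = \sum_{n=0}^{\infty} \left ( \sum_{k=0}^{n} a_k b_{n-k} \right ) x^n

So the coefficient at position n of the product is a_0 b_n + a_1 b_{n-1} + \cdots + a_n b_0, a finite sum that only uses coefficients at positions 0 through n of the two input streams. That’s what makes it computable moving forward through the streams, without ever having to back up.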

Here are a couple sources that I found helpful when trying to work this out:

Formal Power Series, from Wikipedia. Pay particular attention to the section on “Operations on Formal Power Series.” It gives succinct descriptions of multiplying power series, inverting them (for Exercise 3.61, I’d only pay attention to the mathematical expression of this), and dividing them (which you’ll need for Exercise 3.62).

I found this video on the Cauchy product very helpful in coming up with my solution for this exercise. Notice the pattern by which the products are computed. This pattern should influence your thinking in figuring out how to carry out the Cauchy product using streams.

It is crucial that you get the solution for this exercise right, because the next two exercises (3.61 and 3.62) will give you no end of headaches if you don’t. They build on each other, and this exercise.

The exercise says you can try out mul-series using sin(x) and cos(x), squaring both, and adding them together, and you should get 1. How is “1” represented in streams? Well, it makes sense that it will be in the constant-term position (the zeroth position) in the stream. That’s where a scalar value would be in a power series.

There is a related concept to working with power series called a unit series. A unit series is just a list of the coefficients in a power series. If you’ve done the previous exercises where SICP says you’re working with power series, this is what you’re really working with (though SICP doesn’t mention this), and it’s why in 3.61 it has you write a function called “invert-unit-series”.

The unit series equivalent for the scalar value 1 is [1, 0, 0, 0, 0, …].
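As a sketch of what the check can look like in code (assuming your own mul-series from this exercise, the sine-series and cosine-series from Exercise 3.59, and add-streams and stream-ref from SICP):

; sin^2(x) + cos^2(x), expressed as a power series
(define one-series
  (add-streams (mul-series sine-series sine-series)
               (mul-series cosine-series cosine-series)))

(stream-ref one-series 0) ; should be 1
(stream-ref one-series 1) ; should be 0
(stream-ref one-series 4) ; should be 0 -- and so on for every later coefficient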

A note about Exercise 3.62

The exercise talks about using your div-series function to compute tan-series. Here’s a good source for finding out how to do that:

Trigonometric Functions

The unit series for tan-series should come out as: [0, 1, 0, 1/3, 0, 2/15, 0, 17/315, …]
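For orientation, the call itself is just the identity tan x = sin x / cos x translated into streams. A sketch, assuming your div-series takes the numerator series first, then the denominator:

(define tan-series (div-series sine-series cosine-series))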

The beauty of mathematics denied, revisited: Lockhart’s Lament

Vladislav Zorov, a Quora user, brought this to my attention. It’s called Lockhart’s Lament, and it is a worthy follow-up to Jerry King’s “The Art of Mathematics.” Lockhart really fleshes out what King was talking about. Both are worth your time. Lockhart’s lament reminds me a lot of a post I wrote, called The challenge of trying to get a real science of computing in our schools. There, I talked about an episode of South Park where a wrestling teacher is confronted with a pop culture in wrestling that’s “not real wrestling” (i.e. WWE), as an illustration of the challenge that computer scientists have in trying to get “the real thing” into school curricula. The wrestling teacher is continually frustrated that no one understands what real wrestling is, from the kids who are taking his class, to the people in the community, to the administrators in his school. There is a “the inmates have taken over the asylum” feeling to all of this, where “the real thing” has been pushed to the margins, and the pop culture has taken over. The people who see “the real thing,” and value it, are on the outside, looking in. Hardly anybody on the inside can understand what they’re complaining about, but some of them are most worried that nothing they’re doing seems to be enough to make a big dent in improving the lot of their students. Quite the conundrum. It looks idiotic, but it’s best not to dismiss it as such, because the health and welfare of our society is at stake.

Lockhart’s Lament — The Sequel is also worth a look, as it talks about critics of Lockhart’s argument.

Two issues come to mind from Lockhart’s lament (and the “sequel”). The first is that since we don’t have a better term for what’s called “math” in school, it’s difficult for a lot of people to disambiguate mathematics from the benefits that a relative few students ultimately derive from “math.” I think that’s what the critics hang their hat on: Even though they’ll acknowledge that what’s taught in school is not what Lockhart wishes it was, it does have some benefits for “students” (though I’d argue for relatively few of them), and this can be demonstrated, because we see every day that some number of students who take “math” go on to productive careers that use that skill. So, they will say, it can’t be said that what’s taught in school is worthless. Something of value is being transmitted. Though, I would encourage people to take a look at the backlash against requiring “math” in high school and college as a counterpoint to that notion.

Secondly, I think Lockhart’s critics have a good point in saying that it is physically impossible for the current school system, with the scale it has to accommodate, to do what he’s talking about. Maybe a handful of schools would be capable of doing it, by finding knowledgeable staff, and offering attractive salaries. I think Lockhart understands that. His point, which doesn’t seem to get through to his critics, is, “Look. What’s being taught in schools is not math, anyway! So, it’s not as if anyone would be missing out more than they are already.” I think that’s the sticking point between him and his critics. They think that if “math” were eliminated, and only real math were taught in a handful of schools (the capacity of the available talent), a lot of otherwise talented students would miss out on promising careers in which they could benefit from using “math.”

An implicit point that Lockhart is probably making is that real math has a hard time getting a foothold in the education system, because “math” has such a comprehensive lock on it. If someone offered to teach real math in our school system, they would be rejected, because their method of teaching would be so outside the established curriculum. That’s something his critics should think about. Something is seriously wrong with a system when “the real thing” doesn’t fit into it well, and is barred because of that.

I’ve talked along similar lines with others, and a persistent critic on this topic, who is a parent of children who are going through school, has told me something similar to a criticism Lockhart anticipated. It goes something like, “Not everyone is capable of doing what you’re talking about. They need to obtain certain basic skills, or else they will not be served well by the education they receive. Schools should just focus on that.” I understand this concern. What people who think about this worry about is that in our very competitive knowledge economy, people want assurances that their children will be able to make it. They don’t feel comfortable with an “airy-fairy, let’s be creative!” account of what they see as an essential skill. That’s leaving too much to chance. However, a persistent complaint I used to hear from employers (I assume this is still the case) is that they want people who can think creatively out of the box, and they don’t see enough of that in candidates. This is precisely what Lockhart is talking about (he doesn’t mention employers, though it’s the same concern, coming from a different angle). The only way we know of to cultivate creative thinkers is to get people in the practice of doing what’s necessary to be creative, and no one can give them a step-by-step guide on how to do that. Students have to go through the experience themselves, though of course adult educators will have a role in that.

A couple of parts of Lockhart’s account really resonated with me: where he showed how one can arrive at a proof for the area of a certain type of triangle, and where he talked about students figuring out imaginary problems for themselves, getting frustrated, trying and failing, collaborating, and finally working it out. What he described sounds so similar to what my experience was when I was first learning to program computers, when I was 12 years old, and beyond. I didn’t have a programming course to help me when I was first learning to do it. I did it on my own, and with the help of others who happened to be around me. That’s how I got comfortable with it. And none of it was about, “To solve this problem, you do it this way.” I can agree that would’ve been helpful in alleviating my frustration in some instances, but I think it would’ve risked denying me the opportunity to understand something about what was really going on while I was using a programming language. You see, what we end up learning through exploration is that we often learn more than we bargained for, and that’s all to the good. That’s something we need to understand as students in order to get some value out of an educational experience.

By learning this way, we own what we learn, and as such, we also learn to respond to criticism of what we think we know. We come to understand that everything that’s been developed has been created by fallible human beings. We learn that we make mistakes in what we own as our ideas. That creates a sense of humility in us, that we don’t know everything, and that there are people who are smarter than us, and that there are people who know what we don’t know, and that there is knowledge that we will never know, because there is just so much of it out there, and ultimately, that there are things that nobody knows yet, not even the smartest among us. That usually doesn’t feel good, but it is only by developing that sense of humility, and responding to criticism well, that we improve on what we own as our ideas. As we get comfortable with this way of learning, we learn to become good at exploring, and by doing that, we become really knowledgeable about what we learn. That’s what educators are really after, is it not, to create lifelong learners? Most valuable of all, I think, is that learning this way creates independent thinkers, and indeed, people who can think out of the box, because you have a sense of what you know, and what you don’t know, and what you know is what you question, and try to correct, and what you don’t know is what you explore, and try to know. Furthermore, you have a sense of what other people know, and don’t know, because you develop a sense of what it actually means to know something! This is where I see Lockhart’s critics missing the boat: What you know is not what you’ve been told. What you know is what you have tried, experienced, analyzed, and criticized (wash, rinse, repeat).

On a related topic, Richard Feynman addressed something that should concern us re. thinking we know something because it’s what we’ve been told, vs. understanding what we know, and don’t know.

What seems to scare a lot of people is even after you’ve tried, experienced, analyzed, and criticized, the only answer you might come up with is, “I don’t know, and neither does anybody else.” That seems unacceptable. What we don’t know could hurt us. Well, yes, but that’s the reality we exist in. Isn’t it better to know that than to presume we know something we don’t?

The fundamental disconnect between what people think is valuable about education and what’s actually valuable in it is they think that to ensure that students understand something, it must be explicitly transmitted by teachers and instructional materials. It can’t just be left to chance in exploratory methods, because they might learn the wrong things, and/or they might not learn enough to come close to really mastering the subject. That notion is not to be dismissed out of hand, because that’s very possible. Some scaffolding is necessary to make it more likely that students will arrive at powerful ideas, since most ideas people come up with are bad. The real question is finding a balance of “just enough” scaffolding to allow as many students as possible to “get it,” and not “too much,” such that it ruins the learning experience. At this point, I think that’s more an art than a science, but I could be wrong.

I’m not suggesting just using a “blank page” approach, where students get no guidance from adults on what they’re doing, as many school systems have done (which they mislabeled “constructivism,” and which doesn’t work). I don’t think Lockhart is talking about that, either. I’m not suggesting autodidactic learning, nor is Lockhart. There is structure to what he is talking about, but it has an open-ended broadness to it. That’s part of what I think scares his critics. There is no sense of having a set of requirements. Lockhart would do well to talk about thresholds that he is aiming for with students. I think that would get across that he’s not willing to entertain lazy thinking. He tries to do that by talking about how students get frustrated in his scheme of imaginative work, and that they work through it, but he needs to be more explicit about it.

He admits in the “sequel” that his first article was a lament, not a proposal of what he wants to see happen. He wanted to point out the problem, not to provide an answer just yet.

The key point I want to make is that the perception driving this aversion to risk is itself a big risk. It risks creating people, and therefore a society, that only knows so much, that doesn’t know how to exceed the thresholds necessary to come up with the big leaps that advance our society, and that may not even retain the basic understanding necessary to keep the advances that have already been made. A conservative, incremental approach to existing failure will not do.

Related post: The beauty of mathematics denied

— Mark Miller, https://tekkie.wordpress.com

The Babbage Difference Engine at the Computer History Museum is going away

Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, when I was attending the Rebooting Computing Summit at the Computer History Museum, in Mountain View, CA. The people running the exhibit said that it had been commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM, and who was soon going to reclaim it, putting it in his house. They thought it might happen in April of that year. A couple years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!

Psychic elephants and evolutionary psychology

Peter Foster, a columnist for the National Post in Canada, wrote what I think is a very insightful piece on a question that’s bedeviled me for many years, in “Why Climate Change is a Moral Crusade in Search of a Scientific Theory.” I have never seen a piece of such quality published on Breitbart.com. My compliments to them. It has its problems, but there is some excellent stuff to chew on, if you can ignore the gristle. There are only a couple of ways I think I would have tried to improve on what he wrote. One is to go deeper into identifying the philosophies that are used to “justify the elephantine motivations.” As it is, Foster uses readily identifiable political labels, “liberal,” “left,” etc. as identifiers. This will please some, and piss off others. I really admired Allan Bloom’s efforts to get beyond these labels: to understand and identify the philosophies at the root of his complaints, the mistakes that he perceives were made in “translating” them into an American belief system, the philosophers who came up with them, and the consequences that have been realized through their adherents. There’s much to explore there.

The other is that Foster “trips” over an analogy that doesn’t really apply to his argument (though he thinks it does) in his reference to supposed motivations for thinking Earth was the center of the Universe, though aspects of the stubbornness of pre-Copernican thinking, to which he also refers, do apply.

He says a lot in this piece, and it’s difficult to fully grasp it without taking some time to think about what he’s saying. I will try to make your thinking a little easier.

He begins with trying to understand the reasons for, and the motivational consequences of, economic illiteracy in our society. He uses a notion from evolutionary psychology (perhaps from David Henderson, to whom he refers): that our brains have been in part evolutionarily influenced by hundreds of thousands of years (perhaps millions; who knows how far back it goes) of tribal society, and that our natural perceptions of human relations, regarding power and wealth, and what is owed as a consequence of social status, are influenced by our evolutionary past.

Here is a brief video from Reason TV on the field of evolutionary psychology, just to get some background.

Edit 2/16/2016: I’ve added 3 more paragraphs relating to another video I’m adding, since it relates more specifically to this topic.

The video, below, is intriguing, but I can’t help but wonder if the two researchers, Cosmides and Tooby, are reading current issues they’re hearing about in political discourse into “stone age thinking” unjustifiably, because how do we know what stone age thinking was? I have to admit, I have no background in anthropology or archaeology at this point. I might need that to give more weight to this inference. The topic they discuss here, about a common misunderstanding of market economics, relates back to something they discussed in the above video, about humans trying to detect and form coalitions, and how market mechanisms have the effect of appearing to interfere with coalition-building strategies. They say this leads to resentment against the market system.

What this would seem to suggest is that the idea that humans are drastically changing our planet’s climate system for the worse is a nice salve for that desire for coalition building, because it leads one to a much larger inference, that market economics (the perceived enemy of coalition strategies) is a problem that transcends national boundaries. The constant mantra of warmists that, “We must act now to solve it,” appears to demand a coalition, which to those who feel disconnected by markets feels very desirable.

One of the most frequent desires I’ve heard from those who believe that we are changing our climate for the worse is that they only want to deal with market participants, “Who care about me and my community.” What Cosmides and Tooby say is that this relates back to our innate desire to build coalitions, and is evidence that these people feel that the market system is interfering, or not cooperating, in that process. What they say, as Foster does, is that this reflects a lack of understanding of market economics, and a total ignorance of the benefits its effects bring to humanity.

Foster says that our modern political and economic system, which frustrates tribalism, has been only a brief blink of an eye in our evolutionary experience, by comparison. So we still carry an evolutionary heritage that forms our perceptions of fairness and social survival, and it emerges naturally in our perceptions of our modern systems. He says this argument is controversial, but he justifies using it by saying that there is an apparent pattern to the consequences of economic illiteracy. He notices a certain consistency in the arguments that are used to morally challenge our modern systems, one that does not seem to depend on how skilled people are in their niche areas of knowledge, or how unskilled they are in many other areas. It’s important to keep what he says about this in mind throughout the article, even though he goes on at length into other areas of research, because it ties in in an important way, though he does not refer back to it.

He doesn’t say this, but I will. At this point, I think that the only counter to the natural tendencies we have (regardless of whether Foster describes them accurately or not) is an education in the full panoply of the outlooks that formed our modern society, and an understanding of how they have advanced since their formation. In our recent economy, there’s a tendency to think about narrowing the scope of education towards specialties, since each field is so complex, and takes a long time to master, but that will not do. If people don’t get the opportunity to explore, or at least experience, these powerful ways of understanding the world, then the natural tendency towards a “low-pass filter” will dominate what one sees, and our society will have difficulty advancing, and may regress. A key phrase Foster uses is, “Believing is seeing.” We think we see with our eyes, but we actually see with our beliefs (or, in scientific terms, our mental models). So it is crucial to be conscious of our beliefs, and be capable of examining and questioning them. Philosophy plays a part in “exercising” and developing this ability, but I think this really gets into the motivation to understand the scientific outlook, because this is truly what it’s about.

A second significant area Foster explores is a notion of moral psychology from Jonathan Haidt, who talks about “subconscious elephants,” which are “driving us,” unseen. We justify our actions using moral language, giving the illusion that we understand our motivations, but we really don’t. Our pronouncements are more like PR statements, convincing others that our motivations are good, and should be allowed to move forward, unrestricted. However, without examining and understanding our motivations, and their consequences, we can’t really know whether they are good or not. Understanding this, we should be cautious about giving anyone too much power–power to use money, and power to use force–especially when they appeal to our moral sensibility to give it to them.

Central to Foster’s argument is that “climate change” is a moral crusade, a moral argument, not a scientific one, which uses the authority that our society gives to science to push aside skepticism and caution regarding the actions in “climate policy” that are taken in its name.

Foster excuses the people who promote the hypothesis of anthropogenic global warming (AGW) as fact, saying they are not frauds who are consciously deceiving the public. They are riding pernicious, very large, “elephants,” and they are not conscious of what is driving them. They are convinced of their own moral rightness, and they are honest, at least, in that belief. That should not, however, excuse their demands for more and more power.

I do not mean what I say here to be a summary of Foster’s article. I encourage you to read it. I only mean to make the complexity of what he said a bit more understandable.

Related posts: Foster’s argument has made me re-examine a bit what I said in the section on Carl Sagan in “The dangerous brew of politics, religion, technology, and the good name of science.”

I’m taking a flying leap with this, but I have a suspicion my post called “What a waste it is to lose one’s mind,” exploring what Ayn Rand said in her novel, “Atlas Shrugged,” perhaps gets into describing Haidt’s “motivational elephants.”

Reviving programming as literacy

I came across this interview with Lesley Chilcott, the producer of “An Inconvenient Truth” and “Waiting For Superman.” Kind of extending her emphasis on improving education, she produced a short 9-minute video selling the idea of “You should learn to code,” both to adults and children. It addresses two points: 1) the anticipated shortage of programmers needed to write software in the future, and 2) the increasing ubiquity of programming in all sorts of fields where people would think it wouldn’t exist, such as manufacturing and agriculture.

The interview gets interesting at 3 minutes 45 seconds in.

Michelle Fields, the interviewer, asked what I thought were some insightful questions. She started things off with:

It seems as though the next generation is so fluent in technology. How is it that they don’t know what computer programming is?

Chilcott said:

I think the reason is, you know, we all use technology every day. It’s surrounding us. Like, we can debate the pro’s and con’s of technology/social media, but the bottom line is it’s everywhere, right? So I think a lot of people know how to read it. They grow up playing with an iPhone or something like that, but they don’t know how to write it. And so when you say, “Do you know what this is,” specifically, or what this job is–and you know, those kids are in first, second, fifth grade–they know all about it, but they don’t know what the job is.

I found this answer confusing. She’s kind of on the right track, thinking of programming as “writing.” I cut her some slack, because as she admits in the interview, she’s just started programming herself. However, as I’ve said before, running software is not “reading.” It’s really more like being read to by a machine, like listening to an audio book, or someone else reading to you. You don’t have to worry about the mental tasks of pronunciation, sentence construction, or punctuation. You can just listen to the story. Running software doesn’t communicate the process that the code is generating, because there’s a lot that the person using it is not shown. This is on purpose, because most people use software to accomplish some utilitarian task unrelated to how a computer works. They’re not using it to understand a process.

The last sentence came across as muddled. I think what she meant was they know all about using technology, but they don’t know how to create it (“what the job is”).

Fields then asked,

There was this study which found that 56% of students would rather eat broccoli than learn math. Do you think that since computer programming is somewhat related to math, that that’s the reason children and students shy away from it?

Chilcott said:

It could be. That is one of the myths that exist. There is some, you know, math, but as Bill Gates and some other people said, you know, addition, subtraction–It’s much more about problem solving, and I think people like to problem-solve, they like mysteries, they like decoding things. It’s much more about that than complicated algorithms.

She’s right that there is problem solving involved with programming, but she’s either mistaken or confusing math with arithmetic when she says that the relationship between math and programming is a “myth.” I can understand why she tries to wave it off, because as Fields pointed out, most students don’t like math. I contend, as do some mathematicians, that this is due to the way it’s taught in our schools. The essence of math gets lost. Instead it’s presented as a tool for calculation, and possibly a cognitive development discipline for problem solving, both of which don’t communicate what it really is, and remove a lot of its beauty.

In reality math is pervasive in programming, but to understand why I say this you have to understand that math is not arithmetic–addition, subtraction, like she suggests. This confusion is common in our society. I talk more about this here. Having said this, it does not mean that programming is hard right off the bat. The math involved has more to do with logic and reasoning. I like the message in the video below from a couple of the programmers interviewed: “You don’t have to be a genius to know how to code. … Do you have to be a genius to do math? No.” I think that’s the right way to approach this. Math is important to programming, but it’s not just about calculating a result. While there’s some memorization involved (understanding a programming language’s rules, and knowing what different things are called), that’s not a big part of it.

The cool thing is you can accomplish some simple things in programming, to get started, without worrying about math at all. It becomes more important if you want to write complex programs, but that’s something that can wait.

My current understanding is the math in programming is about understanding the rules of a system and what statements used in that system imply, and then understanding the effects of those implications. That sounds complicated, but it’s just something that has to be learned to do anything significant with programming, and once learned will become more and more natural. I liken it to understanding how to drive a car on the road. You don’t have to learn this concept right away, though. When first starting out, you can just look at and enjoy the effects of trying out different things, exploring what a programming environment offers you.
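To make that concrete, here’s a toy sketch in Scheme. The rules of the language tell you what each expression implies, and you can follow the implications step by step down to a result:

(define (square x) (* x x))

; The rules imply a chain of rewrites:
;   (square (+ 1 2))
;   (square 3)   ; the argument expression reduces to 3
;   (* 3 3)      ; the body of square, with x replaced by 3
;   9
(square (+ 1 2)) ; evaluates to 9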

Where Chilcott shines in the interview above is when she becomes the “organizer.” She said that even though 95% of the schools have computers and internet access, only 10% have what she calls a “computer science” course. (I wish they’d go back to calling it a “programming course.” Computer science is more than what most of these schools teach, but I’m being nit-picky.) The cool thing about Code.org, a web site she promotes, is that it tries to locate a school near you that offers programming courses. If there aren’t any, no problem. You can learn some basics of programming right inside your browser using the online tools that it offers on the site.

The video Chilcott produced is called “Code Stars” in the above interview, but when I went looking for it I found it under the name “the Code.org film,” or, “What Most Schools Don’t Teach.”

Here is the full 9-minute video:

If you want the shorter videos, you can find them here.

The programming environment you see kids using in these videos is called “Scratch.”

Gabe Newell said of programming:

When you’re programming, you’re teaching possibly the stupidest thing in the entire universe–a computer–how to do something.

I see where Newell is going with this, but from my perspective it depends on what programming environment you’re using. Some programming languages have the feel of you “teaching” the system when you’re programming. Others have the feel of creating relationships between simple behaviors. Others, still, have the feel of using relationships to set up rules for a new system. Programming comes in a variety of approaches. However, the basic idea that Newell gets across is true, that computers only come with a set of simple operations, and that’s it. They don’t do very much by themselves, or even in combination. It’s important for those new to programming to learn this early on. Some of my early experiences in programming match those of new programmers even today. One of them is, when using a programming language, one is tempted to assume that the computer will infer the meaning of some programming expression from context. There is some context used in programming, but not much, and it’s highly formalized. It’s not intuitive. I can remember the first time I learned this. It was like the joke where, say, someone introduces his/her friend to a dumb, witless character in a skit. He/she says, “Say hi to my friend, Frank,” and the dummy says, “Hi to my friend Frank.” And the guy/gal says, “NO! I mean…say hello,” making a hand gesture trying to get the two to connect, and the dummy might look at the friend and say, “Hello,” but that’s it. That’s the kind of realization new programmers come to. Yeah, the computer has to have almost everything explained to it (or modeled), even things we do without thinking about it. It’s up to the programmer to make the connections between the few things the computer knows how to do, to make something larger happen.

Jack Dorsey talked about programming in a way that I think is important. His ultimate goal when he started out was to model something, and make the model malleable enough that he could manipulate it, because he wanted to use it for understanding how cities work.

Bill Gates emphasized control. This is a common early motivation for programmers. Not necessarily controlling people, but controlling the computer. What Gates was talking about was what I’d call “making your own world,” like Dorsey was saying, but he wanted to make it real. When I was in high school (late 1980s) it was a rather common project for aspiring programming students to create “matchmaking” programs, where boys and girls in the whole school would answer a simple questionnaire, and a computer program that a student had written would try to match them up by interests, not unlike some of the online dating sites that are out there now. I never heard of any students finding their true love through one of these projects, but it was fun for some people.

Vanessa Hurst said, “You don’t have to be a genius to know how to code. You need to be determined.” That’s pretty much it in a nutshell. In my experience everything else flowed from determination when I was learning how to do this. It will drive you to learn what you need to learn to get it, even if sometimes it’s subject matter you find tedious and icky. You learn to just push through it to get to the glorious feeling at the end of having accomplished what you set out to do.

Newell said at the end of the video,

The programmers of tomorrow are the wizards of the future. You’re going to look like you have magic powers compared to everybody else.

That’s true, but this has been true for a long time. In my professional work developing custom database solutions for business customers I had the experience of being viewed like a magician, because customers didn’t know how I did what I did. They just appreciated the fact that I could do it. I really don’t mean to discourage anyone, because I still enjoy programming today, and I want to encourage people to learn programming, but I feel the need to say something, because I don’t want people to get disillusioned over this. This status of “wizard,” or “magician” is not always what it’s cracked up to be. It can feel great, but there is a flip side to it that can be downright frustrating. This is because people who don’t know a whit of what you know how to do can get confused about what your true abilities are, and they can develop unrealistic expectations of you. I’ve found that wherever possible, the most pleasurable work environment is working among those who also know how to code, because we’re able to size each other up, and assign tasks appropriately. I encourage those who are pursuing software development as a career to shoot for that.

A couple things I can say for being able to code are:

  • It makes you less of a “victim” in our technology world. Once you know how to do it, you have an idea about how other programs work, and the pitfalls they can fall into that might compromise your private information, allow a computer cracker to access it, or take control of your system. You don’t have to feel scared at the alarming “hacking” or phishing reports you hear on the news, because you can be choosey about what software you use based on how it was constructed, what it’s capable of, how much power it gives you (not someone else), and not just base a decision on the features it has, or cool graphics and promotion. You can become a discriminating user of software.
  • You gain the power to create the things that suit you. You don’t have to use software that you don’t like, or that you think is being offered on unreasonable terms. You can create your own, and it can be whatever you want. It’s just a matter of the knowledge you’re willing to gather and the amount of energy you’re willing to put into developing the software.

Edit 5-20-2013: While I’m on this subject, I thought I should include this video by Mitch Resnick, who has been involved in creating Scratch at MIT. Similar to what Lesley Chilcott said above, he said, “It’s almost as if [users of new technologies] can read, but not write,” referring to how people use technology to interact. I disagreed with the notion, above, that using technology is the same as reading. Resnick hedged a bit on that. I can kind of understand why he might say this, because by running a Scratch program, it is like reading it, because you can see how code creates its results in the environment. This is not true, however, of much of the technology people use today.

Mark Guzdial asked a question a while back that I thought was important, because it brings this issue down to where a lot of people live. If the kind of literacy I’m going to talk about below is going to happen, the concept needs to be able to come down “out of the clouds” and become more pedestrian. Not to say that literacy needs to be watered down in toto (far from it), but that it should be possible to read and write to communicate everyday ideas and experiences without being super sophisticated about it. What Mark asked was, in the context of a computing medium, what would be the equivalent of a “note to grandma”? I remember suggesting Dan Ingalls’s prop-piston concept from his Lively Kernel demos as one candidate. Resnick provided what I thought were some other good ones, but in the context of Mother’s Day.

Context reversal

The challenge that faces new programmers today is different from when I learned programming as a child in one fundamental way. Today, kids are introduced to computers before they enter school. They’re just “around.” If you’ve got a cell phone, you’ve got a computer in your pocket. The technology kids use presents them with an easy-to-use interface, but the emphasis is on use, not authoring. There is so much software around it seems you can just wish for it, and it’s there. The motivation to get into programming has to be different than what motivated me.

When I was young the computer industry was still something new. It was not widespread. Most computers that were around were big mainframes that only corporations and universities could afford and manage. When the first microcomputers came out, there wasn’t much software for them. It was a lot easier to be motivated to learn programming, because if you didn’t write it, it probably didn’t exist, or it was too expensive to get (depending on your financial circumstances). The way computers operated was more technical than they are today. We didn’t have graphical user interfaces (at first). Everything was done from some kind of text command line interface that filled the entire screen. Every computer came with a programming language as well, along with a small manual giving you an introduction on how to use it.

PC-DOS, from Wikipedia

It was expected that if you bought a computer you’d learn something about programming it, even if it was just a little scripting. Sometimes the line between what was the operating system’s command line interface, and what was the programming language was blurred. So even if all you wanted to do was manipulate files and run programs, you were learning a little about programming just by learning how to use the computer. Some of today’s software developers came out of that era (including yours truly).

Computer and operating system manufacturers had stopped including programming languages with their systems by the mid-1990s. Programming languages had also been taken over by professionals. The typical languages used by developers were much harder to learn for beginners. There were educational languages around, but they had fallen behind the times. They were designed for older personal computer systems, and when the systems got more sophisticated no one had come around to update them. That began to be remedied only in the last 10 years.

Computer science was still a popular major at universities in the 1990s, due to the dot-com craze. When that bubble burst in 2000, that went away, too. So in the last 18 years we’ve had what I’d call an “educational programming winter.” Maybe we’ll see a revival. I hope so.

Literacy reconsidered

I’m directing the rest of this post to educators, because there are some issues around a programming revival I’d like to address. I’m going to share some more detailed history, and other perspectives on computer programming.

What many may not know is that we as a society have already gone through this once. From the late 1970s to the mid-1980s there was a major push to teach programming in schools as “computer literacy.” This was the regime that I went through. The problem was some mistakes were made, and this caused the educational movement behind it to collapse. I think the reason this happened was due to a misunderstanding of what’s powerful about programming, and I’d like educators to evaluate their current thinking in light of this, so that hopefully they do not repeat the mistakes of the past.

As I go through this part, I’ll mostly be quoting from a Ph.D. thesis written by John Maxwell in 2006, called “Tracing the Dynabook: A Study of Technocultural Transformations” (h/t Bill Kerr).

Back in the late 1970s microcomputers/personal computers were taking off like wildfire with Apple II’s, and Commodore VIC-20’s, and later, Commodore 64’s, and IBM PCs. They were seen as “the future.” Parents didn’t want their children to be “left behind” as “technological illiterates.” This was the first time computers were being brought into the home. It was also the first time many schools were able to grant students access to computers.

Educators thought about the “benefits” of using a computer for certain cognitive and social skills.  Programming spread in public school systems as something to teach students. Fred D’Ignazio wrote in an article called “Beyond Computer Literacy,” from 1983:

A recent national “computers in the schools” survey conducted by the Center for the Social Organization of Schools at Johns Hopkins University found that most secondary schools are using computers to teach programming. … According to the survey, the second most popular use of computers was for drill and practice, primarily for math and language arts. In addition, the majority of the teachers who responded to the survey said that they looked at the computer as a “resource” rather than as a “tool.”

…Another recent survey (conducted by the University of Maryland) echoes the Johns Hopkins survey. It found that most schools introduce computers into the curriculum to help students become literate in computer technology. But what does this literacy entail?

Because of the pervasive spread of computers throughout our society, we have all become convinced that computers are important. From what we read and hear, when our kids grow up almost everyone will have to use computers in some aspect of their lives. This makes computers, as a subject, not only important, but also relevant.

An important, relevant subject like computers should be part of a school’s curriculum. The question is how “Computers” ought to be taught.

Special computer classes are being set up so that students can play with computers, tinker with them, and learn some basic programming. Thus, on a practical level, computer literacy turns out to be mere computer exposure.

But exposure to what? Kids who are now enrolled in elementary and secondary schools are exposed to four aspects of computers. They learn that computers are programmable machines. They learn that computers are being used in all areas of society. They learn that computers make good electronic textbooks. And (something they already knew), they learn that computers are terrific game machines.

… According to the surveys, real educational results have been realized at schools which concentrate on exposing kids to computers. … Kids get to touch computers, play with them, push their buttons, order them about, and cope with computers’ incredible dumbness, their awful pickiness, their exasperating bugs, and their ridiculous quirks.

The main benefits D’Ignazio noted were ancillary. Students spent more time at school, coming in earlier and staying late. They were more attentive to their studies, and the computers fostered a sense of community, rather than competition and rivalry. If you read his article, you get a sense that there was almost a “worship” of computers on the part of educators. They didn’t understand what computers were, or what they represented, but they were so interesting! There’s a problem there. When people are fascinated by something they don’t understand, they tend to impose meanings on it that are not backed by evidence, and so miss the point. Those mistaken perceptions can be strengthened by anecdotal evidence (one of the weakest kinds). This is what happened to programming in schools.

The success of the strategy of using computers to try to improve higher-order thinking was illusory. John Maxwell’s telling of the “life and death of Logo” (my phrasing) serves as a useful analog to what happened to programming in schools generally. For those unfamiliar with it, Logo was a programming environment in which the student manipulates an object called a “turtle” via commands. The student can ask the turtle to rotate and move, and as it moves it drags a pen behind it, tracing its trail. Other versions of the language were created with more capabilities, allowing further exploration of the concepts for which it was designed. Seymour Papert, who taught children using Logo, originally intended it as a way to teach young children sophisticated math concepts, but our educational system imposed a very different definition and purpose on it. Just because something is created on a computer with the intent of it being used for a specific purpose doesn’t mean that others can’t use it for completely different, and possibly less valuable, purposes. We’ve seen this a lot with computers over the years; people “misusing” them for both constructive and destructive ends.
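
For readers who have never seen a turtle program, here is a minimal sketch of the idea. I’m writing it in Python rather than Logo, since Python’s standard turtle module was directly inspired by Logo and may be more familiar; the distances and angles are just illustrative:

    import turtle

    t = turtle.Turtle()

    # Draw a square: move the turtle forward (it drags its pen,
    # tracing a line as it goes), then rotate 90 degrees, four times over.
    for _ in range(4):
        t.forward(100)  # move 100 units
        t.right(90)     # turn 90 degrees clockwise

    turtle.done()  # keep the drawing window open

In Logo itself, the same square is famously a one-liner: REPEAT 4 [FORWARD 100 RIGHT 90].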

As I go forward with this, I just want to put out a disclaimer that I don’t have answers to the problems I point out here. I point them out to make people aware of them, to get people to pause before putting students through this again, and to point to some people who are working on trying to find answers. I present some of their learned opinions. I encourage interested readers to read up on what these people have had to say about the use of computers in education, and perhaps contact them with the idea of learning more about what they’ve found out.

I ask the reader to pay particular attention to the “benefits” that educators imposed on the idea of programming during this period that Maxwell talks about, via what Papert called “technocentrism.” You hear this being echoed in the videos above. As you go through this, I also want you to notice that Papert, and another educator by the name of Alan Kay, both of whom have thought a lot about what computers represent, have a very different idea about the importance of computers and programming than is typical in our school system, and in the computer industry.

The spark that started Logo’s rise in the educational establishment was the publication of Papert’s book, “Mindstorms: Children, Computers, and Powerful Ideas” in 1980. Through the process of Logo’s promotion…

Logo became in the marketplace (in the broad sense of the word) [a] particular black box: turtle geometry; the notion that computer programming encourages a particular kind of thinking; that programming in Logo somehow symbolizes “computer literacy.” These notions are all very dubious—Logo is capable of vastly more than turtle graphics; the “thinking skills” strategy was never part of Papert’s vocabulary; and to equate a particular activity like Logo programming with computer literacy is the equivalent of saying that (English) literacy can be reduced to reading newspaper articles—but these are the terms by which Logo became a mass phenomenon.

It was perhaps inevitable, as Papert himself notes (1987), that after such unrestrained enthusiasm, there would come a backlash. It was also perhaps inevitable given the weight that was put on it: Logo had come, within educational circles, to represent computer programming in the large, despite Papert’s frequent and eloquent statements about Logo’s role as an epistemological resource for thinking about mathematics. [my emphasis — Mark] In the spirit of the larger project of cultural history that I am attempting here, I want to keep the emphasis on what Logo represented to various constituencies, rather than appealing to a body of literature that reported how Logo “didn’t work as promised,” as many have done (e.g., Sloan 1985; Pea & Sheingold 1987). The latter, I believe, can only be evaluated in terms of this cultural history. Papert indeed found himself searching for higher ground, as he accused Logo’s growing numbers of critics of technocentrism:

“Egocentrism for Piaget does not mean ‘selfishness’—it means that the child has difficulty understanding anything independently of the self. Technocentrism refers to the tendency to give a similar centrality to a technical object—for example computers or Logo. This tendency shows up in questions like ‘What is THE effect of THE computer on cognitive development?’ or ‘Does Logo work?’ … such turns of phrase often betray a tendency to think of ‘computers’ and ‘Logo’ as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations—people and cultures—to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology.”

But by 1990, the damage was done: Logo’s image became that of a has-been technology, and its black boxes closed: in a 1996 framing of the field of educational technology, Timothy Koschmann named “Logo-as-Latin” a past paradigm of educational computing. The blunt idea that “programming” was an activity which could lead to “higher order thinking skills” (or not, as it were) had obviated Papert’s rich and subtle vision of an ego-syntonic mathematics.

By the early 1990s … Logo—and with it, programming—had faded.

The message–or black box–resulting from the rise and fall of Logo seems to have been the notion that “programming” is over-rated and esoteric, more properly relegated to the ash-heap of ed-tech history, just as in the analogy with Latin. (pp. 183-185)

To be clear, the last part of the quote refers only to the educational value placed on programming by our school system. When educators attempted to formally study and evaluate programming’s benefits on higher-order thinking and the like, they found it wanting, and so most schools gradually dropped teaching programming in the 1990s.

Maxwell addresses the conundrum of computing and programming in schools, and I think what he says is important to consider as people try to “reboot” programming in education:

[The] critical faculties of the educational establishment, which we might at least hope to have some agency in the face of large-scale corporate movement, tend to actually disengage with the critical questions (e.g., what are we trying to do here?) and retreat to a reactionary ‘humanist’ stance in which a shallow Luddism becomes a point of pride. Enter the twin bogeymen of instrumentalism and technological determinism: the instrumentalist critique runs along the lines of “the technology must be in the service of the educational objectives and not the other way around.” The determinist critique, in turn, says, ‘the use of computers encourages a mechanistic way of thinking that is a danger to natural/human/traditional ways of life’ (for variations, see, Davy 1985; Sloan 1985; Oppenheimer 1997; Bowers 2000).

Missing from either version of this critique is any idea that digital information technology might present something worth actually engaging with. De Castell, Bryson & Jenson write:

“Like an endlessly rehearsed mantra, we hear that what is essential for the implementation and integration of technology in the classroom is that teachers should become ‘comfortable’ using it. […] We have a master code capable of utilizing in one platform what have for the entire history of our species thus far been irreducibly different kinds of things–writing and speech, images and sound–every conceivable form of information can now be combined with every other kind to create a different form of communication, and what we seek is comfort and familiarity?”

Surely the power of education is transformation. And yet, given a potentially transformative situation, we seek to constrain the process, managerially, structurally, pedagogically, and philosophically, so that no transformation is possible. To be sure, this makes marketing so much easier. And so we preserve the divide between ‘expert’ and ‘end-user;’ for the ‘end-user’ is profoundly she who is unchanged, uninitiated, unempowered.

A seemingly endless literature describes study after study, project after project, trying to identify what really ‘works’ or what the critical intercepts are or what the necessary combination of ingredients might be (support, training, mentoring, instructional design, and so on); what remains is at least as strong a body of literature which suggests that this is all a waste of time.

But what is really at issue is not implementation or training or support or any of the myriad factors arising in discussions of why computers in schools don’t amount to much. What is really wrong with computers in education is that for the most part, we lack any clear sense of what to do with them, or what they might be good for. This may seem like an extreme claim, given the amount of energy and time expended, but the record to date seems to support it. If all we had are empirical studies that report on success rates and student performance, we would all be compelled to throw the computers out the window and get on with other things.

But clearly, it would be inane to try to claim that computing technology–one of the most influential defining forces in Western culture of our day, and which shows no signs of slowing down–has no place in education. We are left with a dilemma that I am sure every intellectually honest researcher in the field has had to consider: we know this stuff is important, but we don’t really understand how. And so what shall we do, right now?

It is not that there haven’t been (numerous) answers to this question. But we have tended to leave them behind with each surge of forward momentum, each innovative push, each new educational technology “paradigm” as Timothy Koschmann put it. (pp. 18-19)

The answer is not a “reboot” of programming, but rather a rethinking of it. Maxwell makes a humble suggestion: that educators stop being so blinded by “the shiny new thing,” or by some so-called “new” idea, that they lose their ability to think clearly about what’s being done with regard to computers in education, and that they deal with history and historicism. He said that the technology field has had a problem with its own history, and this tends to bleed over into how educators regard it. The tendency is to forget the past, and to downplay it (“That was neat then, but it’s irrelevant now”).

In my experience, people have associated technology’s past with memories of using it. They’ve given little if any thought to what it represented. They take for granted what it enabled them to do, and do not consider what that meant. Maxwell said that this…

…makes it difficult, if not impossible, to make sense of the role of technology in education, in society, and in politics. We are faced with a tangle of hobbles–instrumentalism, ahistoricism, fear of transformation, Snow’s “two cultures,” and a consumerist subjectivity.

An examination of the history of educational technology–and educational computing in particular–reveals riches that have been quite forgotten. There is, for instance, far more richness and depth in Papert’s philosophy and his more than two decades of practical work on Logo than is commonly remembered. And Papert is not the only one. (p. 20)

Maxwell went into what Alan Kay thought about the subject. Kay has spent almost as many years as Papert working on a meaningful context for computing and programming within education. Some of the quotes Maxwell uses are from “The Early History of Smalltalk,” (h/t Bill Kerr) which I’ll also refer to. The other sources for Kay’s quotes are included in Maxwell’s bibliography:

What is Literacy?

“The music is not in the piano.” — Alan Kay

The past three or four decades are littered with attempts to define “computer literacy” or something like it. I think that, in the best cases, at least, most of these have been attempts to establish some sort of conceptual clarity on what is good and worthwhile about computing. But none of them have won large numbers of supporters across the board.

Kay’s appeal to the historical evolution of what literacy has meant over the past few hundred years is, I think, a much more fruitful framing. His argument is thus not for computer literacy per se, but for systems literacy, of which computing is a key part.

That this is a massive undertaking is clear … and the size of the challenge is not lost on Kay. Reflecting on the difficulties they faced in trying to teach programming to children at PARC in the 1970s, he wrote that:

“The connection to literacy was painfully clear. It is not just enough to learn to read and write. There is also a literature that renders ideas. Language is used to read and write about them, but at some point the organization of ideas starts to dominate the mere language abilities. And it helps greatly to have some powerful ideas under one’s belt to better acquire more powerful ideas.”

Because literature is about ideas, Kay connects the notion of literacy firmly to literature:

“What is literature about? Literature is a conversation in writing about important ideas. That’s why Euclid’s Elements and Newton’s Principia Mathematica are as much a part of the Western world’s tradition of great books as Plato’s Dialogues. But somehow we’ve come to think of science and mathematics as being apart from literature.”

There are echoes here of Papert’s lament about mathophobia, not fear of math, but the fear of learning that underlies C.P. Snow’s “two cultures,” and which surely underlies our society’s love-hate relationship with computing. Kay’s warning that too few of us are truly fluent with the ways of thinking that have shaped the modern world finds an anchor here. How is it that Euclid and Newton, to take Kay’s favourite examples, are not part of the canon, unless one’s very particular scholarly path leads there? We might argue that we all inherit Euclid’s and Newton’s ideas, but in distilled form. But this misses something important … Kay makes this point with respect to Papert’s experiences with Logo in classrooms:

“Despite many compelling presentations and demonstrations of Logo, elementary school teachers had little or no idea what calculus was or how to go about teaching real mathematics to children in a way that illuminates how we think about mathematics and how mathematics relates to the real world.” (Maxwell, pp. 135-137)

Just a note of clarification: I refer back to what Maxwell said re. Logo and mathematics. Papert did not use his language to teach programming as an end in itself. His goal was to use a computer to teach mathematics to children. Programming with Logo was the means for doing it. This is an important concept to keep in mind as one considers what role computer programming plays in education.

The problem, in Kay’s portrayal, isn’t “computer literacy,” it’s a larger one of familiarity and fluency with the deeper intellectual content; not just that which is specific to math and science curriculum. Kay’s diagnosis runs very close to Neil Postman’s critiques of television and mass media … that we as a society have become incapable of dealing with complex issues.

“Being able to read a warning on a pill bottle or write about a summer vacation is not literacy and our society should not treat it so. Literacy, for example, is being able to fluently read and follow the 50-page argument in [Thomas] Paine’s Common Sense and being able (and happy) to fluently write a critique or defense of it.” (Maxwell, p. 137)

Extending this quote (from “The Early History of Smalltalk”), Kay went on to say:

Another kind of 20th century literacy is being able to hear about a new fatal contagious incurable disease and instantly know that a disastrous exponential relationship holds and early action is of the highest priority. Another kind of literacy would take citizens to their personal computers where they can fluently and without pain build a systems simulation of the disease to use as a comparison against further information.

At the liberal arts level we would expect that connections between each of the fluencies would form truly powerful metaphors for considering ideas in the light of others.

Continuing with Maxwell (and Kay):

“Many adults, especially politicians, have no sense of exponential progressions such as population growth, epidemics like AIDS, or even compound interest on their credit cards. In contrast, a 12-year-old child in a few lines of Logo […] can easily describe and graphically simulate the interaction of any number of bodies, or create and experience first-hand the swift exponential progressions of an epidemic. Speculations about weighty matters that would ordinarily be consigned to common sense (the worst of all reasoning methods), can now be tried out with a modest amount of effort.”

Surely this is far-fetched; but why does this seem so beyond our reach? Is this not precisely the point of traditional science education? We have enough trouble coping with arguments presented in print, let alone simulations and modeling. Postman’s argument implicates television, but television is not a techno-deterministic anomaly within an otherwise sensible cultural milieu; rather it is a manifestation of a larger pattern. What is wrong here has as much to do with our relationship with print and other media as it does with television. Kay noted that “In America, printing has failed as a carrier of important ideas for most Americans.” To think of computers and new media as extensions of print media is a dangerous intellectual move to make; books, for all their obvious virtues (stability, economy, simplicity) make a real difference in the lives of only a small number of individuals, even in the Western world. Kay put it eloquently thus: “The computer really is the next great thing after the book. But as was also true with the book, most [people] are being left behind.” This is a sobering thought for those who advocate public access to digital resources and lament a “digital divide” along traditional socioeconomic lines. Kay notes,

“As my wife once remarked to Vice President Al Gore, the ‘haves and have-nots’ of the future will not be caused so much by being connected or not to the Internet, since most important content is already available in public libraries, free and open to all. The real haves and have-nots are those who have or have not acquired the discernment to search for and make use of high content wherever it may be found.” (Maxwell, pp. 138-139)
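
As an aside, Kay’s claim that a child could describe an epidemic’s exponential progression “in a few lines” is easy to make concrete. Here is a minimal sketch in Python rather than Logo, with an invented population size and infection rate, just to show the shape of such a model:

    # Toy epidemic model: each day, every infected person infects
    # contact_rate others, scaled by the fraction of people still
    # susceptible. All parameters are invented for illustration.
    population = 1_000_000
    infected = 1.0
    contact_rate = 0.4  # new infections per infected person per day

    for day in range(1, 31):
        susceptible = population - infected
        new_cases = contact_rate * infected * (susceptible / population)
        infected += new_cases
        print(f"day {day:2d}: about {infected:10.0f} infected")

In the early days the case count multiplies by roughly 1.4 per day, the “swift exponential progression” Kay describes, until the susceptible pool runs out.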

I’m still trying to understand myself what exactly Alan Kay means by “literature” in the realm of computing. He said that it is a means for discussing important ideas, but in the context of computing, what ideas? I suspect from what’s been said here he’s talking about what I’d call “model content,” thought forms, such as the idea of an exponential progression, or the concept of velocity and acceleration, which have been fashioned in science and mathematics to describe ideas and phenomena. “Literature,” as he defined it, is a means of discussing these thought forms–important ideas–in some meaningful context.

In prior years he had worked on this in his Squeak environment, with some educators. They would show children a car moving across the screen, dropping dots as it went, illustrating velocity, and then, by modifying the model, acceleration. Then they would show them Galileo’s experiment of dropping heavy and light balls from the roof of a building (real balls from a real building), record the drop, and let the children view the video while simultaneously modeling it via programming, discovering that the same principle of acceleration applied there as well. Thus, they could see in a couple of contexts how the principle worked, how to recognize it, and how it related to the real world. The idea was that they could grasp the concepts that make up acceleration, and then integrate them into their thinking about other important matters they would encounter in the future.
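
The kernel of that demonstration is tiny. Here is a hypothetical sketch of it in Python (the time step and numbers are made up): each tick, acceleration is added to velocity and velocity is added to position, so the gaps between successive positions grow the way the spacing of the car’s dots did:

    # Uniform acceleration, step by step: velocity changes position,
    # and acceleration changes velocity. With constant acceleration,
    # the distance covered each tick grows, so the "dots" spread out.
    position, velocity, acceleration = 0.0, 0.0, 2.0

    for tick in range(1, 11):
        velocity += acceleration  # speed up each tick
        position += velocity      # move by the current speed
        print(f"tick {tick:2d}: position = {position:6.1f}")

Set acceleration to zero and give it a starting velocity instead, and the positions come out evenly spaced, which is the contrast the children were shown.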

Maxwell quoted an author named Andrea diSessa to get deeper into the concept of literacy, specifically what literacy in a type of media offers our understanding of issues:

The hidden metaphor behind transparency–that seeing is understanding–is at loggerheads with literacy. It is the opposite of how media make us smarter. Media don’t present an unadulterated “picture” of the problem we want to solve, but have their fundamental advantage in providing a different representation, with different emphases and different operational possibilities than “seeing and directly manipulating.”

What’s a good goal for computing?

The temptation in teaching and learning programming is to get students familiar enough with the concepts and a language that they can start creating things with it. But create what? The typical approaches are to let students tinker, and/or to have them create applications which gradually become more complex and feature-rich, with the idea of building confidence and competence with increasing complexity. The latter is not a bad idea in itself, but listening to Alan Kay has led me to believe that starting off this way is the equivalent of jumping to a conclusion too quickly, and misses the point of what’s powerful about computers and programming.

I like what Kay said in “The Early History of Smalltalk” about this:

A twentieth century problem is that technology has become too “easy.” When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

Edit 4-5-2013: I thought I should point out that this quote has some nuance to it that people might miss. I don’t believe Kay is saying that “programming should be hard.” Quite the contrary; one can observe from his designs that he’s advocated the opposite. Nor is he saying that technology should mold itself to what is “natural” for humans. It might require some training and practice, but once mastered, it should magnify or enhance human capabilities, thereby making previously difficult or tedious tasks easier to accomplish and incorporate into a larger goal.

Kay was making an observation about the history of technology’s relationship to society: when useful technology was hard to build, enough time and care went into making it that the result was usually made well. What he’s pointing out is that people generally take the presence of technology as a crutch, making immediate use of it towards some other goal that has little to do with what the technology represents, rather than as an invitation to revisit it, criticize its design, and try to make it better. This is an easy sell, because everyone likes something that makes their lives easier (or seems to), but we rob ourselves of something important if that becomes the only end goal. What I see him proposing is that people with some skill should impose a high standard for design on themselves, drawing inspiration for that standard from how the best art humanity has produced was developed and nurtured, while guarding against the sense of feeling small, inadequate, and overwhelmed by the challenge.

Maxwell (and Kay) explain further why this idea of “literacy” as being able to understand and communicate important ideas, which includes ideas about complexity, is something worth pursuing:

“If we look back over the last 400 years to ponder what ideas have caused the greatest changes in human society and have ushered in our modern era of democracy, science, technology and health care, it may come as a bit of a shock to realize that none of these is in story form! Newton’s treatise on the laws of motion, the force of gravity, and the behavior of the planets is set up as a sequence of arguments that imitate Euclid’s books on geometry.”

The most important ideas in modern Western culture in the past few hundred years, Kay claims, are the ones driven by argumentation, by chains of logical assertions that have not been and cannot be straightforwardly represented in narrative. …

But more recent still are forms of argumentation that defy linear representation at all: ‘complex’ systems, dynamic models, ecological relationships of interacting parts. These can be hinted at with logical or mathematical representations, but in order to flesh them out effectively, they need to be dynamically modeled. This kind of modeling is in many cases only possible once we have computational systems at our disposal, and in fact with the advent of computational media, complex systems modeling has been an area of growing research, precisely because it allows for the representation (and thus conception) of knowledge beyond what was previously possible. In her discussion of the “regime of computation” inherent in the work of thinkers like Stephen Wolfram, Edward Fredkin, and Harold Morowitz, N. Katherine Hayles explains:

“Whatever their limitations, these researchers fully understand that linear causal explanations are limited in scope and that multicausal complex systems require other modes of modeling and explanation. This seems to me a seminal insight that, despite three decades of work in chaos theory, complex systems, and simulation modeling, remains underappreciated and undertheorized in the physical sciences, and even more so in the social sciences and humanities.”

Kay’s lament too is that though these non-narrative forms of communication and understanding–both in the linear and complex varieties–are key to our modern world, a tiny fraction of people in Western society are actually fluent in them.

“In order to be completely enfranchised in the 21st century, it will be very important for children to become fluent in all three of the central forms of thinking that are now in use. […] the question is: How can we get children to explore ways of thinking beyond the one they’re ‘wired for’ (storytelling) and venture out into intellectual territory that needs to be discovered anew by every thinking person: logic and systems ‘eco-logic?'” …

In this we get Kay’s argument for ‘what computers are good for’ … It does not contradict Papert’s vision of children’s access to mathematical thinking; rather, it generalizes the principle, by applying Kay’s vision of the computer as medium, and even metamedium, capable of “simulating the details of any descriptive model.” The computer was already revolutionizing how science is done, but not general ways of thinking. Kay saw this as the promise of personal computing, with millions of users and millions of machines.

“The thing that jumped into my head was that simulation would be the basis for this new argument. […] If you’re going to talk about something really complex, a simulation is a more effective way of making your claim than, say, just a mathematical equation. If, for example, you’re talking about an epidemic, you can make claims in an essay, and you can put mathematical equations in there. Still, it is really difficult for your reader to understand what you’re actually talking about and to work out the ramifications. But it is very different if you can supply a model of your claim in the form of a working simulation, something that can be examined, and also can be changed.”

The computer is thus to be seen as a modeling tool. The models might be relatively mundane–our familiar word processors and painting programs define one end of the scale–or they might be considerably more complex. [my emphasis — Mark] It is important to keep in mind that this conception of computing is in the first instance personal–“personal dynamic media”–so that the ideal isn’t simulation and modeling on some institutional or centralized basis, but rather the kind of thing that individuals would engage in, in the same way in which individuals read and write for their own edification and practical reasons. This is what defines Kay’s vision of a literacy that encompasses logic and systems thinking as well as narrative.

And, as with Papert’s enactive mathematics, this vision seeks to make the understanding of complex systems something to which young children could realistically aspire, or that school curricula could incorporate. Note how different this is from having a ‘computer-science’ or an ‘information technology’ curriculum; what Kay is describing is more like a systems-science curriculum that happens to use computers as core tools:

“So, I think giving children a way of attacking complexity, even though for them complexity may be having a hundred simultaneously executing objects–which I think is enough complexity for anybody–gets them into that space in thinking about things that I think is more interesting than just simple input/output mechanisms.” (Maxwell, pp. 132-135)

I wanted to highlight the part about “word processors” and “paint programs,” because the idea being discussed is not limited to simulating real-world phenomena. It could be incorporated into simulating “artificial phenomena” as well. It’s a different way of looking at what you are doing and creating when you are programming. It takes you away from asking, “How do I get this thing to do what I want?” and redirects you to asking, “What entities do we want to make up this desired system, what are they like, and how can they interact to create something that we can recognize, or that otherwise leverages human capabilities?”

Maxwell said that computer science is not the important thing. Rather, what’s important is what computer science makes possible: “the study and engagement with complex or dynamic systems–and it is this latter issue which is of key importance to education.” Think about this in relation to what we do with reading and writing. We don’t learn to read and write just to be able to put characters in some sequence, and then for others to read what we’ve written. We have events and ideas, and (perhaps more esoteric to this subject) emotions and poetry, that we write about. That’s why we learn to read and write. It’s the same with computer science. If we as a society value it for communicating ideas, it’s pretty worthless if it’s just about learning to read and write code. To make the practice truly valuable to society, we need to have content, ideas, to read and write about in code. There’s a lot that can be explored with that idea in mind.

Characterizing Alan Kay’s vision for personal computing, Maxwell talked about Kay’s concept of the Dynabook:

Alan Kay’s key insight in the late 1960s was that computing would become the practice of millions of people, and that they would engage with computing to perform myriad tasks; the role of software would be to provide a flexible medium with which people could approach those myriad tasks. … [The] Dynabook’s user is an engaged participant rather than a passive, spectatorial consumer—the Dynabook’s user was supposed to be the creator of her own tools, a smarter, more capable user than the market discourse of the personal computing industry seems capable of inscribing—or at least has so far, ever since the construction of the “end-user” as documented by Bardini & Horvath. (p. 218)

Kay’s contribution begins with the observation that digital computers provide the means for yet another, newer mode of expression: the simulation and modeling of complex systems. What discursive possibilities does this new modality open up, and for whom? Kay argues that this latter communications revolution should in the first place be in the hands of children. What we are left with is a sketch of a possible new literacy; not “computer literacy” as an alternative to book literacy, but systems literacy—the realm of powerful ideas in a world in which complex systems modelling is possible and indeed commonplace, even among children. Kay’s fundamental and sustained admonition is that this literacy is the task and responsibility of education in the 21st century. The Dynabook vision presents a particular conception of what such a literacy would look like—in a liberal, individualist, decentralized, and democratic key. (p. 262)

I would encourage interested readers to read Maxwell’s paper in full. He gives a rich description of the problem of computers in the educational context, giving a much more detailed history of it than I have here, and what the best minds on the subject have tried to do to improve the situation.

The main point I want to get across is this: if we as a society really want the greatest impact from what computers can do for us, beyond being tools that do canned but useful things, I implore educators to see computers and programming environments more as apparatus, instruments, and media, rather than as agents and tools. (I mean the computers and programming environments themselves, not what’s “played” on computers; and languages and metaphors, which are the media’s means of expression, not just a means to some non-expressive end.) Sure, there will be room for them to function as agents and tools, but the main focus I see as important in this subject area is how the machine helps facilitate substantial pedagogies and illuminates epistemological concepts that would otherwise be difficult or impossible to communicate.

—Mark Miller, https://tekkie.wordpress.com

Are we future-oriented?

The death of Neil Armstrong, the first man to walk on the Moon, on August 25 got me reflecting on what was accomplished by NASA during his time. I found a YouTube channel called “The Conquest of Space,” and it’s been wonderful getting acquainted with the history I didn’t know.

I knew about the Apollo program from the time I was a kid in the 1970s. I was born two months after Apollo 11, so I only remember it in hindsight. By the time I was old enough to be conscious of the Apollo program’s existence, it had been mothballed for four or five years. I could not be ignorant of its existence. It was talked about often on TV, and in the society around me. I lived in Virginia, near Washington, D.C., in my early childhood. I remember I used to be taken regularly to the Smithsonian Air and Space Museum. Of all of their museums, it was my favorite. There, I saw video of one of the moon walks, the space suits used for the missions (as mannequins), the Command Module, and Lunar Module at full scale, artifacts of a time that had come and gone. There was hope that someday we would go back to the Moon, and go beyond it to the planets. The Air and Space Museum had an IMAX movie that was played continuously, called “To Fly.” From what I’ve read, they still show it. It was produced for the museum in 1976. I remember watching it a bunch of times. It was beautifully done, though looking back on it, it had the feel of a “demo” movie, showing off what could be done with the IMAX format. It dramatizes the history of flight, from hot air balloons in the 19th century, to the jet age, to rockets to the Moon. A cool thing about it is it talked about the change in perspective that flight offered, a “new eye.” At the end it predicted that we would have manned space missions to the planets.

Why wouldn’t we have manned missions that venture to the planets, and ultimately, perhaps a hundred years off, to other star systems? It would just be an extension of the advancements in flight we had made on earth. The idea that we would keep pushing the boundaries of our reach seemed like a given, that this technological pace we had experienced would just keep going. That’s what everything that was science-oriented was telling me. Our future was in space.

In the late 1970s Carl Sagan produced a landmark series on science called “Cosmos.” He talked about the history of space exploration, mostly from the ground, and how our destiny was to travel into space. He said, introducing the series,

The surface of the earth is the shore of the cosmic ocean. On this shore we’ve learned most of what we know. Recently we’ve waded a little way out, maybe ankle deep, and the water seems inviting. Some part of our being knows this is where we came from. We long to return, and we can…

Winding down?

As I got into my twenties, in the 1990s, I started to worry about NASA’s robustness as a space program. It started to look like a one-trick pony that only knew how to launch astronauts into low-earth orbit. “When are we going to return to the Moon,” I’d ask myself. NASA sent probes out to Jupiter, Mars, and then Saturn, following in the footsteps of Voyager 1 and 2. Surely similar questions were being asked of NASA, because I’d often hear them say that the probes were forerunners to future manned space flight, that they were gathering information that we needed to know in advance for manned missions, holding out that hope that someday we’d venture out again.

The Space Shuttle was our longest-running space program, from 1981 to 2011, 30 years. Back around the year 2000, I remember Vice President Al Gore announcing the winner of the contract to build the next-generation space shuttle, which would take the place of the older models, but it never came to be. Under the administration of George W. Bush, the Constellation program started in 2005, with the idea of further developing the International Space Station, returning astronauts to the Moon, establishing a base there for the first time, and then launching manned missions to Mars. The program was cancelled in 2010 by the Obama Administration, and nothing has replaced it. I heard some criticism of Constellation, saying that it was ill-defined and an expensive boondoggle, though it was defended by Neil Armstrong and Gene Cernan, two Apollo astronauts. Perhaps it was ill-defined and a waste of money, but it was sad to see the Space Shuttle program end, and to see that NASA no longer had a way to get astronauts into low-earth orbit, or to the International Space Station. The original idea was to have the first stage of the Constellation program follow after the space shuttles were retired. Now NASA has nothing but rockets to send out space probes and robotic rovers to bodies in space. Even the Curiosity rover mission, now on Mars, was largely developed during the Bush Administration, so I hear.

I have to remember at times that even in the 1970s, during my childhood, there was a lull in the manned space program. The Apollo program was ended during the Nixon Administration, before it was finished. There was a planned flight, with a rocket ready to go, to continue the program after Apollo 17, but it never left the ground. A Saturn V rocket that was meant for one of the later missions lies today as a display model on the grounds of the Kennedy Space Center. I have to remember as well that then, as now, the program was ended during a long, drawn-out war. Then, it was in Vietnam. Now, it’s in the Middle East.

Manned space flight ended for a time after the Apollo-Soyuz flight in 1975, which followed the SL-4 mission to the Skylab space station in 1974. It didn’t begin again for another six years, with the first launch of the Space Shuttle in 1981. The difference is the Shuttle was first conceptualized towards the end of the Apollo program. It was there as a goal. Perhaps we are experiencing the same gap in manned flight now, though I don’t have a sense that NASA has a “next mission” in mind. As best I can tell, the Obama Administration has tasked NASA with supporting private space flight. There is good reason to believe that private space flight companies will be able to send astronauts into low-earth orbit soon. That’s a consolation. The thing is, that’s likely all they’re going to do in the future: launch to low-earth orbit. They’re at the stage the Mercury program was at more than 50 years ago.

What I ask is: do we have anything beyond this in mind? Do we have a sense of building on the gains in knowledge that have been made, to venture out beyond what we now know? I grew up being told that “humans want to explore, to push the boundaries of what we know.” I guess we still say that, but maybe we’re directing that impulse in new ways here on earth, rather than into space. I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence. Astrophysicist and vocal NASA advocate Neil deGrasse Tyson has worried about this, too.

I realized a few years ago, to my dismay, that what really drove the creation of the space program, and our flights to the Moon, was not an ambition to push our frontiers of knowledge just for the sake of gaining knowledge. There was a major political aspect to it: beating the Soviets in “the space race” of the 1960s, establishing higher ground for ourselves, in a military sense. Yes, some very valuable scientific and engineering work was done in the process, but as Tyson would say, “science hitched a ride on another agenda.” That’s what it’s often done in human history. Many non-military benefits to our society flowed from what NASA once did, none of which are widely recognized today. Most people think that our technological development came from innovators in the private sector alone. The private sector did a lot, but they also drew from a tremendous resource in our space and defense research and development programs, as I’ve documented in earlier posts.

I’ll close with this great quote. It echoes what Tyson has said, though it’s fleshed out in an ethical sense, too, which I think is impressive.

The great enemy of the human race is ignorance. It’s what we don’t know that limits our progress. And everything that we learn, everything that we come to know, no matter how esoteric it seems, no matter how ivory tower-ish, will fit into the general picture as a block in its proper place that in the end will make it possible for mankind to increase and grow; become more cosmic, if you wish; become more than a species on Earth, but become a species in the Universe, with capacities and abilities we can’t imagine now. Nor do I mean greater and greater consumption of energy, or more and more massive cities.

It’s so difficult to predict, because the most important advances are exactly in the directions that we now can’t conceive, but everything we now do, every advance in knowledge we now make, contributes to that. And just because I can’t see it, and I’m an expert at this, … doesn’t mean it isn’t there. And if we refuse to take those steps, because we don’t see what the future holds, all we’re making certain of is that the future won’t exist, and that we will stagnate forever. And this is a dreadful thought. And I am very tired when people ask me, “What’s the good of it,” because the proper answer is, “You may never know, but your grandchildren will.”

— Isaac Asimov, 1973, from the NASA film “Small Steps, Giant Strides”

Then as now, this is the lament of the scientist, I think. Scientists must ask society’s permission to explore, because they usually need funds from others to do their work, and there is no immediate payback to be had from it. Justifying the funding of that work is tough, because scientific work falls outside the normal set of expectations people have about what is of value. If the benefits can’t be seen here and now, many wonder, “What’s the point?” What Asimov pointed out is that the pursuit of knowledge is its own reward, but to really gain its benefits you must be future-oriented. You have to think about and value the world in which your children and grandchildren will live, not just your own. If your focus is on the here and now, you will not value the future, and so potential future benefits of scientific research will not seem valuable, and therefore will not seem worthy of pursuit. It is a cultural mindset that is at issue.

Edit 12-10-2012: Going through some old articles I’d saved, I came upon this essay about humanity’s capacity for intellectual thought, called “Why is there Anti-Intellectualism?”, by Steven Dutch at the University of Wisconsin-Green Bay. It provides some reasonable counter-notions to my own; they seem consistent with what I’ve seen, but will still take some contemplation on my part.

There’s no science in the article. In terms of quality, at best, I’d call this an “executive summary.” Maybe there’s more detailed research behind it, but I haven’t found it yet. Dutch uses heuristics to provide his points of comparison, and uses a notion of evidence to provide some meat to the bones. He asks some reasonable questions that are worth contemplating, challenging the notion that “humans are naturally curious, and strive to explore.” He then makes observations that seem to come from his own experience. Overall, he provides a reasonable basis for answering a statement I made in this article: “I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence.” He comes down on the side of saying, in his opinion (paraphrasing), “Yes, some in the scientific community have fooled themselves on this issue.” He discusses the notion that “humans are naturally curious,” due to the behavior exhibited by children. He concludes by saying that children naturally display a shallow curiosity, which he calls “tinkering.” The harder task of creative, deep thought does not come naturally. It’s something that needs to be cultivated to take root. Hence the need for schools. The question I think we as citizens should be asking is whether our schools are actually doing this, or something else.

An example of computing as a new medium

Bret Victor, a former designer at Apple, is working on a way to use a computer to make math more meaningful. I can see that he really gets the representational aspect, that the symbols are not the math, just a way to represent it, and it’s not a particularly good way to represent it. This is not the whole of math encapsulated into something that’s easy to understand (math is about assertions and inferences of relationships, which are then proved or disproved), but it’s an alternative to using symbols for representing complex relationships.

Here’s an article talking about Bret’s work on an early version of something he’s working on for the iPad.

Great stuff, and I congratulate him on finding a good use for a computer!

Another picture on the nature of warped space-time

I was excited to see this article yesterday on a long-planned experiment conducted in space, called “Gravity Probe B”. According to the article, the results confirm predictions made by Einstein’s theory of General Relativity. The experiment made extremely precise measurements of the Earth’s gravity well. The results comport with the idea that space-time is warped by the mass of the Earth. They also match a prediction made by Einstein’s theory that the Earth’s spin produces a swirling gravity well, with a shape analogous to that of a whirlpool drawing water into itself, in this case drawing space into itself. The space-time distortion caused by the Earth appears to move as a result of Earth’s rotation, though the rates of spin are not anywhere close to each other. The rate of spin of the gravity well is much slower. What has interested the scientists on this project in this effect is it helps explain the jets of charged particles that are generated by massive black holes in space, as the whirling distortions around them are probably more extreme, spinning much faster than here on Earth.

If you’re interested in the history and the details of the theory that inspired this experiment, click on the first graphic embedded in the linked article. It will take you to another page, which also includes some video clips that provide easy-to-understand illustrations of what was observed. I particularly liked Kip Thorne’s “missing inch” model demonstration. He provided an easy-to-understand 2D model, which he transforms into a 3D model, of how they were able to show with Gravity Probe B’s instruments that the space around Earth is indeed warped.

The philosophy of science, and science in the 20th century

This is from Bryan Magee’s 15-part series, “The Great Philosophers,” broadcast on BBC2 in 1987. Here he talks with Hilary Putnam about the philosophy of science.

Alan Kay has talked about how most people don’t understand how science worked in the 20th century, much less the present century. Our schools still teach science in a 19th century fashion in terms of how people can approach knowledge. This episode of the show amply demonstrates this disconnect.

As I studied what was said here about scientific thinking up until the late 19th century, it reminded me a lot of how I was taught computer science. It was thought to be a process of adding on to existing knowledge, and sorting out any inconsistencies. With rare exceptions, alternatives to existing models were not discussed, or even made known to students.

I’m going to put the notes I took from this episode below each section of video, because I found it hard to follow the arguments without going through them slowly, and looking at what Magee and Putnam said explicitly.

Magee: The religious world view has been largely replaced by a world view purportedly derived from science, since the 17th century.

Science was seen as a process of “adding” and sorting for 300 years, up until the late 19th century. The view of knowledge was that it grew by accumulation. Science was seen as getting its success from an inductive method. For 300 years educated people thought of the Universe as just matter in motion. They thought if we went on long enough, we’d find out everything about the Universe there was to know. This whole idea has been abandoned by science, but non-scientists think that scientists still think this way.

Kant challenged the “correspondence” view of truth, that theories correspond directly to reality without nuances, that the world makes its truths apparent, and scientists just find out what that is and write it down. Kant said there’s a contribution of the thinking mind. Truth depends on what exists, and on the mind of the observer. Scientists have come to a similar view, that theories are not merely dictated to us by the “facts.”

The categories and interpretations we use, the ideas within which we organize our observations, are our contributions. The world that’s perceived by us is partly contributed to by external effects, and is partly made up of categories and ways of seeing things that come from us. [Mark says: I think Plato’s notion of “forms” also applies here, in terms of our theories. We could equate a theory of a phenomenon to its form in our own minds, and we could recognize that if a phenomenon is unfamiliar enough, different people will create different forms of it in their own minds.]

In the late 19th century the views of scientists changed to realize that the old theories could be wrong. The more modern view is that not only is our view of reality partly mind-dependent, but there are alternatives, and the concepts we impose on the world may not be the right ones, we may have to change them, and there is an interaction between what we contribute and what we find out. It was realized that there were alternative descriptions of some things that were equally as valid.

The new conception of science is that theories are constantly being replaced by newer and better ones, which are richer, that explain phenomena more fully, and there’s a mystery to our universe which will never be completely discovered. The view about “truth” that Putnam said was coming into use more and more in science is the idea that there cannot be a separation between what’s considered true and what our standards of assertability are. So the way that mind-dependence comes in is the fact that what’s true and what’s false is partly a function of what our standards of truth and falsity are. That depends on our interests, which change over time. Putnam defined “interests” in a cultural context, such as, in our modern world we’d recognize that there’s a policeman on the corner. Someone from a tribal culture, which may have no formal social services, wouldn’t recognize a policeman, but would instead see “someone in blue” on the corner.

Even facts within theories can have alternatives, as was evidenced by the theory of relativity.

A scientific theory can be useful even if nobody really understands what it means (quantum mechanics). [Mark says: You could also say the same about Newton’s theory of gravity.]

Putnam thinks the way in which the old scientific view (pre-19th century) was destructive was that since it saw scientific findings as objective facts which were accumulated over time, then everything else was considered non-knowledge, that couldn’t even be considered true or false.

Real interesting! Putnam and Magee talk about how computer science, through an interaction with computer simulations, has created a growth of knowledge about the human mind. This relates to what I’ve read about Licklider’s use of computers at MIT in the 1950s, in Waldrop’s book, “The Dream Machine.”

Edit 5-4-2011: I had a brief conversation with Alan Kay about the main theme of this program, and I put particular focus on this part, where Magee and Putnam discussed the role that computer science has played in the advancement of knowledge in other areas of research. He zeroed in on this part, saying that it was a misperception of what really happened, at least in relation to Licklider (and Engelbart). He said the advances in knowledge from computer science have been paradigm shifts, not advancements by interaction. He pointed out that the theory of evolution did not begin in the field of biology, as Magee asserts. He also said that computers did not begin as “a self-conscious analogy to the human mind,” but rather as machines to run calculations. I knew that. I thought Magee’s description of this was rather romantic, but since other historical accounts have agreed with his description, I let it slide. I have since had second thoughts about that.

Magee: Most people with college degrees have no idea what Einstein’s theory of relativity is all about, more than 70 years after it was published. It’s done very little to influence their view of the world. Isn’t there a danger that science and mathematics are racing ahead, and the whole range of insight that that’s giving us into the Universe simply isn’t filtering through to the layman?

Putnam: There was a text on Special Relativity called “Spacetime Physics” that was designed for the first month of the first college physics course, and the authors hoped that someday it would be taught in high schools.

The question and answer above really struck a chord with me. First of all, I didn’t encounter Einstein’s theory of relativity in my first semester physics course in college. All we covered, that I remember, was Newtonian mechanics, though at a more detailed and advanced level than what we got in high school.

The discussion that Magee and Putnam had about General Relativity, and the risk we take with science “racing past” the rest of society, I think, makes a good case for Alan Kay’s efforts to teach more advanced math concepts to children, because without that, they’re never going to understand it. To put this in perspective, take a look at General Relativity, and think about the understanding of mathematics that would be required to understand it.

Strangely enough, I got more exposure to Einstein’s theories of relativity when I was in Jr. high school (1982-’85) than I got at any other time. We watched parts of Carl Sagan’s Cosmos series. In it Sagan talked about Einstein’s theories of relativity, including Einstein’s notion of gravity as warped space-time. Farther back than that, I had been fascinated by the idea of black holes since I was in 4th grade. I read all I could on them, and I learned some very strange things: that not even light could escape from them, and that matter was destroyed upon entering them, for all intents and purposes, though I think the evidence still supports the idea of conservation of matter. It’s just that the matter gets separated into its component parts, and some of it is converted into energy.

Outside of school, I got exposed a bit to a theory of gravity that was different from Einstein’s or Newton’s, by a toy inventor, of all people. So I knew there were other theories besides those that were commonly accepted. What became clear to me after exposure to these ideas was that gravity is a fascinating mystery. We didn’t know what caused it then, and we don’t really know now, except that mass is involved. The question that would not go away for me was: what about mass causes it? The idea that mass causes it just because it exists never satisfied me. The other forces were explained through phenomena I could relate to, but gravity was different.

(Update 12-15-2013: I’ve updated the paragraph below with what I think is a more accurate description of the theory, and I’ve added a couple paragraphs to further explain. I was mistaken in saying that astrophysicists think that the distortion in space-time causes us to “slide” in towards the gravity well.)

What Einstein’s theory explains is that gravity is not a force in the way that we think about other forces. His theory said that matter warps space-time, and it is this warping which creates the perception of a force acting on objects and energy. The way Sagan explained it is that we are all being pushed inward, towards the center of the gravity well, by this distortion, and what keeps us from going to the center of the well is the outward force exerted by the earth we stand on. Scientists have tried to explain it using an analogy of a large ball sitting on a piece of taut fabric, which creates a dip in it. If you toss a marble onto the fabric, it will fall towards the larger ball, due to the anomaly in the fabric. This is not a great analogy, because in it, gravity is pulling the larger ball down, creating the dip in the fabric. If you can disregard that fact, what they are trying to get across is that it’s the distortion of the fabric that alters the path of the marble, not any force. Another way this analogy is imperfect is that it doesn’t illustrate how space-time is being drawn inward by something that is not yet explained. What Einstein’s theory says is that there is something about matter that causes this distortion in space-time. That could be attributed to a force, but from everything I’ve studied about the theory so far, Einstein didn’t say that. That’s left as “a problem for the reader,” so to speak.
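
There’s a standard piece of textbook math that makes the “distortion, not force” idea precise (my addition here, not something from Sagan or the program). A freely falling object follows a geodesic, the straightest possible path through curved space-time:

\frac {d^2 x^\mu} {d\tau^2} + \Gamma^\mu_{\alpha \beta} \frac {dx^\alpha} {d\tau} \frac {dx^\beta} {d\tau} = 0

The zero on the right-hand side is the whole point: there is no force term. The Γ (Christoffel) symbols encode the warping of space-time, and they alone bend the object’s path, which is what the marble-and-fabric analogy is trying to capture.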

The thing about the theory that might seem confusing is that while we are “pushed” inward towards the well, it’s not a push in the Newtonian sense. This “push” is coming from what is called “space-time” itself. It’s a push on everything that constitutes matter and energy, down to our body’s subatomic particles and photons of light.

Looking at how it is we stand on earth, rather than all matter being drawn into the well, it’s like standing on stretchy material that’s constantly being drawn into something, with a “wall” in our way (which does not “catch” the fabric). The closer the material gets to whatever is pulling on it, the more stretched it becomes. As it becomes stretched, so are we, and all the matter around us, because, as we understand it, matter is “situated” in space-time. There may be some “friction,” if you will, between us and the material (space-time) that causes us to be drawn in with it, but it’s not total, allowing this “material” to slide past us while we are repelled outward by the “wall” (forces in the matter we stand on). Hopefully my attempt to explain this isn’t too confusing. I’m groping at understanding it myself.

An example of the way school gets science wrong

High school physics was odd to me, because we talked about stuff like how electrons had the properties of both a particle and a wave (as Putnam discussed above), a very interesting idea, but when it came to gravity, all we focused on was Galileo’s and Newton’s notions of it, particularly Newton’s Universal Law of Gravitation. One of the things I remember being emphasized was that “this law is the same throughout the Universe.” For the sake of argument, from our perspective, being on Earth, I was willing to accept that claim, but I remember being a bit skeptical that it was really true. I knew that we hadn’t really explored the whole universe, and that we hadn’t even come close to testing this notion everywhere. I was open to the idea that someday we might find an exception to it, if we were ever able to explore the Universe, something that may never happen. (I didn’t even know about Mercury’s orbit, which doesn’t fit Newton’s theory as well as the orbits of the other planets.) So, for practical purposes, we could assume that it’s “universal.”
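
For reference, the law we drilled on is usually written (this is the standard textbook form):

F = G \frac {m_1 m_2} {r^2}

The claim of universality is that the gravitational constant G and the inverse-square form hold everywhere, for any two masses m_1 and m_2 separated by a distance r. Mercury’s orbit turned out to be a well-known case where careful measurements deviate slightly from what this formula predicts.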

We talked about Einstein’s theory of Special Relativity, and what was really fascinating about that was E = mc^2, that there’s a relationship between matter and energy that’s only “separated” by the square of the speed of light. My memory, though, is that we hardly talked about General Relativity at all, except perhaps in a historical context. Looking back on this, it’s rather obvious to see why. In high school science we were expected to get a little more into the details of scientific theories, and work with the math concepts more. The school system hadn’t prepared us to work with General Relativity at that level. So in effect we skipped it, but the conceit presented in class was that Newton’s law of gravity was “the truth.”
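
A quick back-of-the-envelope calculation (my own numbers, just for illustration) shows why that relationship is so striking. Take a single gram of matter:

E = mc^2 = (0.001\, kg) \times (3 \times 10^8\, m/s)^2 = 9 \times 10^{13}\, J

That’s about 90 trillion joules, roughly the energy of a 21-kiloton explosion, from an amount of matter that would fit on a fingertip.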

I remember being asked a question on a physics test: “If aliens visited Earth, would we find that they have the same knowledge of gravity as we do?” I paused. This was an interesting question to me, because I thought it was asking me to consider what understanding another race of intelligent beings would have about this phenomenon. I asked myself, if space aliens existed that were intelligent enough to build craft for interstellar travel, would they have the same ideas about gravity as we do? I answered, “Maybe.” I added something about how, since the aliens had managed to make the journey from whatever star system they came from, their technology was probably more advanced than ours (I mean, we haven’t tried this yet, so that was a good guess), and maybe they had a better understanding of gravity than we did, particularly of what caused it. I hedged a bit, but I guessed that there was probably a link between technological development and greater scientific understanding of our universe. Granted, this was a totally speculative answer, but it was a speculative question, as far as I was concerned.

My physics teacher marked this answer wrong. I was floored! I wondered, “What did she expect?” I asked her about it after class, and she said the answer she expected was something along the lines of, “Yes, because the Law of Gravity is universal.” I was so disappointed (in her). It immediately hit me: “Oh, yeah. I remember we talked about that.” I could’ve almost kicked myself for thinking that she had asked a thought-provoking question, and falling for it! I was supposed to recall what we had talked about in class. I wasn’t supposed to think on it! Duh! How could I have been so stupid? That’s really how perverse and offensive this was. It brings to mind the short story “Harrison Bergeron,” now that I think about it… However, trying not to see that she was telling me not to think, I tried to talk her through my reasoning, because I thought I gave a legitimate answer. I told her about the other notions of gravity I knew about, and the questions they raised for me. She wouldn’t hear of it. I think I said in a final protest, “Do you really think we’ve discovered everything there is to know about gravity?!” In any case, she didn’t answer me. I walked out of the classroom exasperated. It was one of the most disillusioning experiences of my life. It made me fume!

Looking back on things like this, she probably didn’t even understand what a good question she had asked. And there were other instances where this happened in my schooling. Sometimes I wanted to think through things and come up with original answers, not merely regurgitate what I had been fed, and I got penalized for it. She and I had not been getting along for most of the time I was in her class, and I think it was over issues like this. So this was nothing new, but this incident revealed a disturbing fact to me in a way that was so obvious, I couldn’t just brush it off as a misunderstanding between us: her approach to science was that we were supposed to accept what she said as truth. We were not supposed to think about it, or question it. The only thinking we were supposed to do was in calculating results from experiments, but a lot of that was applying the “correct” formulas. More memorization. Nevertheless, I got an “A” in her class.

Looking at this from a “mountaintop” view, I think this example shows the split between 20th century scientific thinking and the 19th century thinking that’s been used to teach science in schools. I saw a discussion recently where Alan Kay talked about this in connection with the No Child Left Behind policy: students who were developing an understanding of scientific thinking had to, on the one hand, gain real understanding, and on the other, remember to answer “wrongly” on the test. That summed up the experience I describe above! I’ve used my example sometimes when I’ve heard people complain about this, because I can say to them that I got the same treatment in my high school science classes, more than 20 years ago. As far as I’m concerned, this policy is just taking that idea of instruction, which has been around for years, to its logical conclusion. It’s now metastasized throughout the public education system, at least in the areas that are tested for proficiency, whereas in my day there were exceptions.

—Mark Miller, https://tekkie.wordpress.com