Vladislav Zorov, a Quora user, brought this to my attention. It is a worthy follow-up to Jerry King’s “The Art of Mathematics”: an essay called Lockhart’s Lament. Lockhart really fleshes out what King was talking about. Both are worth your time. Lockhart’s lament reminds me a lot of a post I wrote, called The challenge of trying to get a real science of computing in our schools. There, I used an episode of South Park, in which a wrestling teacher is confronted with a pop wrestling culture that’s “not real wrestling” (i.e., WWE), to illustrate the challenge computer scientists face in trying to get “the real thing” into school curricula. The wrestling teacher is continually frustrated that no one understands what real wrestling is: not the kids who are taking his class, not the people in the community, not the administrators in his school. There is a “the inmates have taken over the asylum” feeling to all of this, where “the real thing” has been pushed to the margins, and the pop culture has taken over. The people who see “the real thing” and value it are on the outside, looking in. Hardly anybody on the inside can understand what they’re complaining about, though some insiders are worried that nothing they’re doing seems to be enough to make a big dent in improving the lot of their students. Quite the conundrum. It looks idiotic, but it’s best not to dismiss it as such, because the health and welfare of our society is at stake.

Lockhart’s Lament — The Sequel is also worth a look, as it talks about critics of Lockhart’s argument.

Two issues come to mind from Lockhart’s lament (and the “sequel”). First, since we don’t have a better term for what’s called “math” in school, it’s difficult for a lot of people to distinguish mathematics from the benefits that a relative few students ultimately derive from “math.” I think that’s what the critics hang their hat on: even though they’ll acknowledge that what’s taught in school is not what Lockhart wishes it was, it does have some benefits for students (though I’d argue for relatively few of them), and this can be demonstrated, because we see every day that some number of students who take “math” go on to productive careers that use that skill. So, they will say, it can’t be said that what’s taught in school is worthless; something of value is being transmitted. Though, I would encourage people to take a look at the backlash against requiring “math” in high school and college as a counterpoint to that notion.

Secondly, I think Lockhart’s critics have a good point in saying that it is physically impossible for the current school system, at the scale it has to accommodate, to do what he’s talking about. Maybe a handful of schools would be capable of doing it, by finding knowledgeable staff and offering attractive salaries. I think Lockhart understands that. His point, which doesn’t seem to get through to his critics, is, “Look. What’s being taught in schools is not math, anyway! So, it’s not as if anyone would be missing out more than they are already.” I think that’s the sticking point between him and his critics. They think that if “math” were eliminated, and only real math were taught in a handful of schools (the capacity of the available talent), a lot of otherwise talented students would miss out on promising careers in which they could profitably use “math.”

An implicit point that Lockhart is probably making is that real math has a hard time getting a foothold in the education system, because “math” has such a comprehensive lock on it. If someone offered to teach real math in our school system, they would be rejected, because their method of teaching would be so outside the established curriculum. That’s something his critics should think about. Something is seriously wrong with a system when “the real thing” doesn’t fit into it well, and is barred because of that.

I’ve talked along similar lines with others, and a persistent critic on this topic, a parent of children who are going through school, has told me something similar to a criticism Lockhart anticipated. It goes something like, “Not everyone is capable of doing what you’re talking about. They need to obtain certain basic skills, or else they will not be served well by the education they receive. Schools should just focus on that.” I understand this concern. What worries people who think about this is that in our very competitive knowledge economy, parents want assurances that their children will be able to make it. They don’t feel comfortable with an “airy-fairy, let’s be creative!” account of what they see as an essential skill. That’s leaving too much to chance. However, a persistent complaint I used to hear from employers (I assume this is still the case) is that they want people who can think creatively, outside the box, and they don’t see enough of that in candidates. This is precisely what Lockhart is talking about (he doesn’t mention employers, but it’s the same concern, coming from a different angle). The only way we know of to cultivate creative thinkers is to get people in the practice of doing what’s necessary to be creative, and no one can give them a step-by-step guide on how to do that. Students have to go through the experience themselves, though of course adult educators will have a role in that.

A couple parts of Lockhart’s account really resonated with me: where he showed how one can arrive at a proof for the area of a certain type of triangle, and where he talked about students working out imaginative problems for themselves, getting frustrated, trying and failing, collaborating, and finally working it out. What he described sounds so similar to my experience when I was first learning to program computers, starting when I was 12 years old. I didn’t have a programming course to help me when I was first learning to do it. I did it on my own, and with the help of others who happened to be around me. That’s how I got comfortable with it. And none of it was about, “To solve this problem, you do it this way.” I can agree that would’ve been helpful in alleviating my frustration in some instances, but I think it would’ve risked denying me the opportunity to understand something about what was really going on while I was using a programming language. What we end up learning through exploration is often more than we bargained for, and that’s all to the good. That’s something we need to understand as students in order to get value out of an educational experience.

By learning this way, we own what we learn, and as such, we also learn to respond to criticism of what we think we know. We come to understand that everything that’s been developed was created by fallible human beings, and that we make mistakes in what we own as our ideas. That creates a sense of humility in us: we don’t know everything; there are people who are smarter than us; there are people who know what we don’t know; there is knowledge we will never know, because there is just so much of it out there; and ultimately, there are things that nobody knows yet, not even the smartest among us. That usually doesn’t feel good, but it is only by developing that sense of humility, and responding to criticism well, that we improve on what we own as our ideas. As we get comfortable with this way of learning, we become good at exploring, and by doing that, we become really knowledgeable about what we learn. That’s what educators are really after, is it not: to create lifelong learners? Most valuable of all, I think, is that learning this way creates independent thinkers, people who can genuinely think outside the box, because you have a sense of what you know and what you don’t know; what you know is what you question and try to correct, and what you don’t know is what you explore and try to know. Furthermore, you have a sense of what other people know and don’t know, because you develop a sense of what it actually means to know something! This is where I see Lockhart’s critics missing the boat: what you know is not what you’ve been told. What you know is what you have tried, experienced, analyzed, and criticized (wash, rinse, repeat).

On a related topic, Richard Feynman addressed something that should concern us regarding thinking we know something because it’s what we’ve been told, vs. understanding what we know and don’t know.

What seems to scare a lot of people is even after you’ve tried, experienced, analyzed, and criticized, the only answer you might come up with is, “I don’t know, and neither does anybody else.” That seems unacceptable. What we don’t know could hurt us. Well, yes, but that’s the reality we exist in. Isn’t it better to know that than to presume we know something we don’t?

The fundamental disconnect between what people think is valuable about education and what’s actually valuable in it is the belief that to ensure that students understand something, it must be explicitly transmitted by teachers and instructional materials. It can’t just be left to chance in exploratory methods, because students might learn the wrong things, and/or they might not learn enough to come close to really mastering the subject. That notion is not to be dismissed out of hand, because both outcomes are very possible. Some scaffolding is necessary to make it more likely that students will arrive at powerful ideas, since most ideas people come up with are bad. The real question is finding a balance of “just enough” scaffolding to allow as many students as possible to “get it,” and not “too much,” such that it ruins the learning experience. At this point, I think that’s more an art than a science, but I could be wrong.

I’m not suggesting just using a “blank page” approach, where students get no guidance from adults on what they’re doing, as many school systems have done (which they mislabeled “constructivism,” and which doesn’t work). I don’t think Lockhart is talking about that, either. I’m not suggesting autodidactic learning, nor is Lockhart. There is structure to what he is talking about, but it has an open-ended broadness to it. That’s part of what I think scares his critics. There is no sense of having a set of requirements. Lockhart would do well to talk about thresholds that he is aiming for with students. I think that would get across that he’s not willing to entertain lazy thinking. He tries to do that by talking about how students get frustrated in his scheme of imaginative work, and that they work through it, but he needs to be more explicit about it.

He admits in the “sequel” that his first article was a lament, not a proposal of what he wants to see happen. He wanted to point out the problem, not to provide an answer just yet.

The key point I want to make is that the perception driving people not to take this risk is itself a big risk. It risks creating people, and therefore a society, that only knows so much, that doesn’t know how to cross the thresholds necessary for the big leaps that advance our society, and that may not even have the basic understanding necessary to retain the advances that have already been made. A conservative, incremental approach to existing failure will not do.

Related post: The beauty of mathematics denied

— Mark Miller, https://tekkie.wordpress.com

It’s been 5 years since I’ve written something on SICP, almost to the day! I recently provided some assistance to a reader of my blog on this exercise (1.28, which covers the Miller-Rabin primality test). In the process, I of course had to understand something about it. As with Exercise 1.19, I didn’t think this one was written that clearly. It seemed like the authors assumed that students were already familiar with the Miller-Rabin test, or at least with modular arithmetic.

The exercise text made it sound like one could implement Miller-Rabin as a modification of the Fermat test (based on Fermat’s Little Theorem, or FLT). From providing my assistance, I found out that one can indeed do it that way, though the implementation looked a bit convoluted. When I tried solving it myself, I felt more comfortable ignoring the prior Fermat-test solutions in SICP and completely rewriting the expmod routine. Along with that, I wrote some intermediate functions whose results were ultimately passed to expmod. My own solution was iterative.

I found these two Wikipedia articles, one on modular arithmetic and the other on Miller-Rabin, to be more helpful than the SICP exercise text in understanding how to implement it. Every article I found on Miller-Rabin through a Google search discussed the algorithm for carrying out the test. It was still a bit of a challenge to translate the algorithm to Scheme code, since the write-ups assumed an imperative style, and there is a special case in the logic.

It seemed like, when it came right down to it, the main challenge was understanding how to factor n − 1 (n being the candidate tested for primality) into an odd number times a power of 2, and how to handle the special case, while still remaining within the confines of what the exercise required (do the exponentiation with the modulus function (remainder), and test for non-trivial square roots of 1 inside expmod). Once that was worked out, the test for non-trivial roots turned out to be pretty simple.
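In case it helps to see the shape of it, here’s a minimal sketch in Scheme. It’s illustrative, not my exact solution: the helper names are arbitrary, and where the exercise asks for the non-trivial-root check to live inside expmod, this sketch does the equivalent check in the driver loop, which is how most write-ups of Miller-Rabin present it.

    (define (square x) (* x x))

    ;; base^exp mod m, computed iteratively (binary exponentiation).
    (define (expmod base exp m)
      (let loop ((b (remainder base m)) (e exp) (acc 1))
        (cond ((= e 0) acc)
              ((even? e) (loop (remainder (square b) m) (quotient e 2) acc))
              (else (loop (remainder (square b) m)
                          (quotient e 2)
                          (remainder (* acc b) m))))))

    ;; Split m into 2^s * d with d odd; returns the pair (s . d).
    (define (decompose m)
      (let loop ((d m) (s 0))
        (if (odd? d)
            (cons s d)
            (loop (quotient d 2) (+ s 1)))))

    ;; One round of the test with witness a: #t if n passes,
    ;; #f if a proves n composite.
    (define (miller-rabin-round n a)
      (let* ((sd (decompose (- n 1)))
             (s  (car sd))
             (d  (cdr sd))
             (x  (expmod a d n)))
        (if (or (= x 1) (= x (- n 1)))
            #t
            (let loop ((x x) (i 1))
              (if (>= i s)
                  #f  ; ran out of squarings without hitting -1: composite
                  (let ((x2 (remainder (square x) n)))
                    (cond ((= x2 (- n 1)) #t) ; hit -1: passes this round
                          ((= x2 1) #f)       ; non-trivial square root of 1
                          (else (loop x2 (+ i 1))))))))))

Testing it against a couple of knowns (1009 is prime; 1001 = 7 × 11 × 13):

    (miller-rabin-round 1009 2) ; => #t
    (miller-rabin-round 1001 2) ; => #f

A full fast-prime?-style test would run this round with several random witnesses, since a single witness can be fooled (2047 passes with witness 2, for example).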

If you’re looking for a list of primes to test your solution against, here are the first 1,000 primes.

I thought I’d share the video below, since it has some valuable insights on what computer science should be, and what education should be, generally. The two are integrated in this presentation, and indeed, one of the projects of education should be integrating computer science into it, though not with the explicit purpose of creating more programmers for future jobs (students could always use it for that, of course). Alan Kay presents a different definition of CS than the one pursued in universities today. He refers to how Alan Perlis defined it (Perlis was the one who came up with the term “computer science”), which I’ll get to below.

This thinking about CS and education provides, among other things, a pathway toward reimagining how knowledge, literature, and art can be presented, organized, dissected, annotated, and shared in a way that’s more meaningful than can be achieved with old media. (For more on that, see my post “Getting beyond paper and linear media.”) Like the late Carl Sagan, Kay advocates here for a general model of education for citizens, not just professional careerists.

Another reason I thought of sharing this is that several years ago I remember hearing that Kay was working on rethinking the computer science curriculum. What he presents is more of a suggestion: “Start thinking about it this way.” (He takes a dim view of the concept of curriculum, as it suggests setting students on a rigid path of study with the intent of creating minds with a cookie cutter, though he is in favor of classical liberal arts education, with the prescriptions that entails.)

As he says in the presentation, there’s a lot to develop in the practice of education in order to bring this to fruition.

This is from 2015:

I wrote notes below for some of the segments he talks about, just because I think his presentation bears some interpretation for many readers. He uses metaphors a lot.

The bicycle

This is an analogy for how an apparatus, or a subject, is typically modified for education. We take the optimized, or the adult version of something, and add compensators, which make it so that beginners can use it without falling all over themselves. It’s seen as easier to present it this way, and as a skill-building experience, where in order to learn how to do something, you need to use “the real thing.” Beginners can put on a good show of using this sort of apparatus, or modified subject, but the problem is that it doesn’t teach a beginner how to become good at really using the real thing at its full potential. The compensators become a crutch. He said a better idea is to use an apparatus, or a component of the subject, that allows a beginner to get a feel for how to use the thing in a way that gets across significant aspects of its potential, but without the optimizations, or the way adults use it, which make it too complicated for a beginner to use under their own power. In this case, a lowbike is better. This beginner apparatus, or component, is more like the first version of the thing you’re trying to teach. Bicycles were originally more like scooters, without pedals, or a chain, where you’d sit in the seat, push it along with your legs, kind of “running while sitting,” glide, and turn by shifting your weight, and turning into the turn. Once a beginner gets a feel for that, they can deal with the optimized version, or a scaled down adult version, with pedals and a chain to drive the bike, because all that adds is the ability to get more power out of it. It doesn’t change any of the fundamentals of how you use it.

This gets to an issue of pedagogy, that learners need to learn something in components, rather than dealing with the whole thing at once. Once they learn one capacity, they can move on to the next challenge in learning the “whole thing.”

Radiation vs. nouns

He put forward a proposition for his talk, which is that he’s mixing a bunch of ideas together, because they overlap. This is a good metaphor, because most of his talk is about what we are as human beings, and how society should situate and view education. Only a little of it is actually on computer science, but all of it is germane to talking about computer science education.

He also gives a little advice to education reformers. He pointed out what’s wrong with education as it is right now, but rather than cursing it, he said one should make a deliberate effort to “build a tribe” or coalition with those who are causing the problem, or are in the midst of the problem, and suggest ways to bring them into a dignified position, perhaps by sharing in their honors, or, as his example illustrated, their shame. I draw some of this from Plato’s Cave metaphor.

Cooperation and competition in society

I once heard Kay talk about this many years ago. He said that, culturally, modern corporations are like the ancient hunter-gatherers. They exploit the resources of an area for a while, and once it’s exhausted, they move on, and that as a culture, they have yet to learn about democracy, which requires more of a “settlement” mentality toward what they’re doing. Here, he used an agricultural metaphor to talk about a cooperative society that creates the wealth that is then used by competitive forces within it. What he means by this is that the true wealth is the knowledge that’s ultimately used to develop new products and services. It’s not all developed inside the marketplace. He doesn’t talk about this, but I will. Even though a significant part of the wealth (as he said, you can think of it as “potential energy”) is generated inside research labs, what research labs tend to lack is knowledge of what the members of society can actually understand of this developed knowledge. That’s where the competitive forces in society come in, because they understand this a lot better. They can negotiate between how much of the new knowledge to put into a product, and how much it will cost, to reach as many people as possible. This is what happened in the computer industry of the past.

I think I understand what he’s getting at with the agricultural metaphor, because farmers don’t just want to reap a crop for one season. Their livelihood depends on maintaining fertility on their land. That requires not just exploiting what’s there season after season, or else you get the dust bowl. If instead, practices are modified to allow the existing land to become fertile again, or, in the case of hunter-gathering, aggressively managing the environment to create favorable grazing to attract game, then you can create a cycle of exploitation and care such that a group can stay in one area for a long time, without denying themselves any of the benefits of how they live. I think what he suggests is that if corporations would modify their behavior to a more settled, agricultural model, to use some of their profits to contribute to educating the society in which they exist, and to funding scientific research on a private basis, that would “regenerate the soil” for economic growth, which can then fuel more research, creating a cycle of renewal. No doubt the idea he’s presenting includes the businesses who would participate in doing this. They should be allowed to profit (“reap”) from what they “sow,” but the point is they’re not the only ones who can profit. Other players in the marketplace can also exploit the knowledge that’s generated, and profit as well. That’s what’s been done in the past with private research labs.

He attributes the lack of this to culture, of not realizing that the economic model that’s being used is not sustainable. Eventually, you use up the “soil,” and it becomes “infertile,” and “blows away,” and, in the case of hunter-gathering, the “good hunting grounds” are used up.

He makes a crucial point, though, that education is not just about jobs and competitiveness. It’s also about inculcating what citizenship really means. I’m sure if he was asked to drill down on this more, he would suggest a classical education for this, along with a modified math and science curriculum that would give students a sense of what those subjects really are like.

The sense I get is that he’s advocating something closer to an Andrew Carnegie model of corporate stewardship. Carnegie, after he made his money, directed his philanthropy toward building schools and libraries. Kay would just add science labs to that mix. (He mentions this later in his talk.)

I feel it necessary to note that not all for-profit entities would be able to participate in funding these cooperative activities, because their profit margins are so slim. I don’t think that’s what he’s expecting out of this.

What we are, to the best of our knowledge

He gives three views into human mental capacity: the way we perceive (theatrical), how much we can perceive at any moment in time (1 ± 2), and how educators should perceive us psychologically and mentally (more primate and mammalian). This relates to neuroscience and, to some extent, evolutionary psychology.

The raison d’être of computer science

The primary purpose of computer science should be developing a science of systems in process, and that means all processes: mechanical processes, technological processes, social processes, biological processes, mental processes, etc. This relates to my earlier post, “Beginning the journey of becoming a computer scientist.” It’s partly about developing a new kind of mathematics for modeling processes. Alan Turing did it, with his Turing Machine concept, though he was trying to model the process of generating mathematical statements, and doing mathematical tests on them.

Shipping the design

Kay talks about how programmers today don’t have access to anything like what designers in other fields have, where they’re able to model their design, simulate it, and then have a machine fabricate a prototype they can actually show and use.

I had an epiphany about this about 8 or 9 years ago. I was at a friend’s party, and there were a few mechanical engineers among the guests. I overheard a couple of them talking about the computer-aided design (CAD) software they were using. One talked about a “terrible” piece of CAD software he used at work. He said he had a much better CAD system at home, but it was incompatible with the data files that were used at work. As much as he would’ve loved to use it, he couldn’t. He said the system he used at work required him to group pieces of a design together as he was building the model, and once he did that, those pieces became inflexible. He couldn’t just redesign one piece of it, or separate one out individually from the model. He couldn’t move the pieces around on the model, and have them fit. Once they were grouped, that was it. It became this static thing. He said in order to redesign one piece of it, he had to take the entire model apart, piece by piece, redesign the part, and then redesign all the other pieces in the group to make the new part fit. He said he hated it, and as he talked about it, he acted like he was so disgusted with it, he wanted to throw it in the trash, like it was a piece of garbage. He said on his CAD system at home, it was wonderful, because he could separate a part from a model any time he wanted, and the system would adjust the model automatically to “make sense” out of the part being missing. He could redesign the part, and move it to a different part of the model, “attach it” somewhere, and the system would automatically adjust the model so that the new part would fit. The way he described it gave it a sense of fluidity. Whereas the system he used at work sounded rigid. It reminded me of the programming languages I had been using, where once relationships between entities were set up, it was really difficult to take pieces of it “out” and redesign them, because everything that depended on that piece would break once I redesigned it. I had to go around and redesign all the other entities that related to it to adjust to the redesign of the one piece.

I can’t remember how this worked, but another thing the engineer talked about was the system at work had some sort of “binding” mechanism that seemed to associate parts by “type,” and that this was also rigid, which reminded me a lot of the strong typing system in the languages I had been using. He said the system he had at home didn’t have this, and to him, it made more sense. Again, his description lent a sense of fluidity to the experience of using it. I thought, “My goodness! Why don’t programmers think like this? Why don’t they insist that the experience be like this guy’s CAD system at home?” For the first time in my career, I had a profound sense of just what Alan Kay talked about, here, that the computing field is retrograde. It has not advanced anywhere close to the kind of engineering that exists in other fields, where they would insist on this sort of experience. We accept so much less, whereas modern engineers have a hard time standing for it, because they know they have better options.

Don’t be fooled by large efforts below threshold

Before I begin this part, I want to share a crucial point that Kay makes, because it’s one of the big ones:

Think about what thinking is. Thinking is not being logical. Thinking is choosing the environment that you’re going to think in before you start rationalizing.

Kay had something very interesting, and startling, to say about the Apollo space program, using it as a metaphor for large reform initiatives in education generally. I recently happened upon a video of testimony he gave to a House committee on educational computing back in 1982, chaired by then-congressman Al Gore, and Kay talked about this same example back then. He said that the design of the Apollo rockets was a “hack”: not the best design for space travel, but the most expedient for the mission of getting to the Moon by the end of the 1960s. Here, in this presentation, he talks about how each complete rocket was the height of a 45-story building (1-1/2 football fields in length), most of it high explosives, with only a tiny capsule at the top that could fit 3 astronauts. This is not a model that could scale to what was needed for space travel.

It became this huge worldwide cultural event when NASA launched it and landed men on the Moon, but Kay said it was really not a great accomplishment. I remember Rep. Gore said in jest, “The walls in this room are shaking!” The camera panned around a bit, showing pictures on the wall from NASA. How could he say such a thing?! This was the biggest cultural event of the century, perhaps in all of human history. He explained the same thing here: the Apollo program didn’t advance space travel beyond the mission to the Moon. It was not technology that would get us beyond that, though in hindsight we can say that technology like it enabled launching probes throughout the Solar System.

Now, what he means by “space travel,” I’m not sure. Is it manned missions to the outer planets, or to other star systems? Kay is someone who has always thought big. So, it’s possible he was thinking of interstellar travel. What he was talking about was the problem of propulsion, getting something powerful enough to make significant discoveries in space exploration possible. He said chemical propellant just doesn’t do it. It’s good enough for launching orbital vehicles around our planet, and launching probes, but that’s really it. The rest is just wasting time below threshold.

Another thing he explained is that large initiatives which don’t cross a meaningful threshold can be harmful to efforts to advance an endeavor, because large initiatives come with extended expectations that the investment will continue to be used, and those expectations must be satisfied, or else there will be no cooperation on the initial effort. The participants will want their return on investment. He said that’s what happened with NASA. The ROI had to play out, but that ruined the program, because as it did, people could see we weren’t advancing the state of the art that much in space travel, and the science being produced out of it was usually nothing to write home about. Eventually, we got what we now see: people are tired of it, and have no enthusiasm for it, because it set expectations so low.

What he was trying to do in his House committee testimony, and what he’s trying to do here, is provide some perspective that science offers, vs. our common sense notion of how “great” something is. You cannot get qualitative improvement in an endeavor without this perspective, because otherwise you have no idea what you’re dealing with, or what progress you’re building on, if any. Looking at it from a cultural perspective is not sufficient. Yes, the Moon landing was a cultural milestone, but not a scientific or engineering milestone, and that matters.

Modern science and engineering have a sense of thresholds, that there can come a point where some qualitative leap is made, a new perspective on what you’re doing is realized that is significantly better than what existed prior. He explains that once a threshold has been crossed, you can make small improvements which continue to build on that significant leap, and those improvements will stick. The effort won’t just crash back down into mediocrity, because you know something about what you have, and you value it. It’s a paradigm shift. It is so significant, you have little reason to go back to what you were doing before. From there, you can start to learn the limits of that new perspective, and at some point, make more qualitative leaps, crossing more thresholds.

“Problem-finding”/finding the goal vs. problem-solving

Problem solving begins with a current context: “We’re having a problem with X. How do we solve it?” Problem finding asks whether we even have a good idea of what the problem is. Maybe the problems we’ve been seeing arise because we haven’t solved a different problem we don’t know about yet. “Let’s spend time trying to find that.”

Another way of expressing this is a concept I’ve heard about from economists, called “opportunity cost,” which, in one context, gets across the idea that implementing a regulation may produce better outcomes in certain categories of economic interactions, but will also prevent certain opportunities from arising which may also be positive. The rub is that these opportunities will not be known ahead of time, and will not be realized, because the regulation creates a barrier to entry that entrepreneurs and investors will find too high to overcome. This concept is difficult to communicate to many laymen, because it sounds speculative. What this concept encourages people cognizant of it to do is “consider the unseen,” to consider the possibilities that lie outside of what’s currently known. One can view “problem finding” in a similar way: not just as a way of considering the unseen, but of exploring it, finding new knowledge that was previously unknown, and therefore unseen, and then reconsidering one’s notion of what the problem really is. It’s a way of expanding your knowledge base in a domain, with the key practice being that you’re not just exploring what’s already known. You’re exploring the unknown.

The story he tells about MacCready illustrates working with a good modeling system. He needed to be able to fail with his ideas a lot, before he found something that worked. So he needed a simple enough modeling system that he could fail in, where when he crashed with a particular model, it didn’t take a lot to analyze why it didn’t work, and it didn’t take a lot to put it back together differently, so he could try again.

He made another point about Xerox PARC, that it took years to find the goal, and it involved finding many other goals, and solving them in the interim. I’ve written about this history at “A history lesson on government R&D” Part 2 and Part 3. There, you can see the continuum he talks about, where ARPA/IPTO work led into Xerox PARC.

This video with Vishal Sikka and Alan Kay gives a brief illustration of this process, and what was produced out of it.

Erosion gullies

There are a couple metaphors he uses to talk about the lack of flexibility that develops in our minds the more we focus our efforts on coping, problem solving, and optimizing how we live and work in our current circumstances. One is erosion gullies. The other is the “monkey trap.”

Erosion gullies channel water along a particular path. They develop naturally as water erodes the land it flows across. These “gullies” seem to fit with what works for us, and/or what we’re used to. They develop into habits about how we see the world–beliefs, ideas which we just accept, and don’t question. They allow some variation in the path that’s followed, but they provide boundaries that don’t allow the water to go outside the gully (leaving aside the exception of floods, for the sake of argument). He uses this to talk about how “channels” develop in our minds that direct our thinking. The more we focus our thoughts in that direction, the “deeper” the gully gets. Keep at it too long, and the “gully” won’t allow us to see anything different than what we’re used to. He says that it may become inconceivable to think that you could climb out of it. Most everything inside the “gully” will be considered “right” thinking (no reason why), and anything outside of it will be considered “wrong” (no reason why), and even threatening. This is why he mentions that wars are fought over this. “We’re all in different erosion gullies.” They don’t meet anywhere, and my “right” is your “wrong,” and vice-versa. The differences are irreconcilable, because the idea of seeing outside of them is inconceivable.

He makes two points with this. One is that we have erosion gullies regarding the stories we tell ourselves and the beliefs we hold onto. Another is that we have erosion gullies even in our sensory perceptions, which dictate what we see and don’t see. We can see things that don’t even exist, and we typically do. He uses eyewitness testimony to illustrate this.

I think what he’s saying with it is that we need to watch out for these “gullies.” They develop naturally, but it would be good if we had the flexibility to eventually get out of our “gully” and form a new “channel,” which I take as a metaphor for seeing the world differently than what we’re used to. We need a means for doing that, and what he proposes is science, since it questions what we believe and tests our ideas. We can get around our beliefs, and thereby get out of our “gullies,” to change our perspective. It doesn’t mean we abandon “gullies,” but just become aware that other “channels” (perspectives) are possible, and that we can switch between them, to see better and achieve better outcomes.

Regarding the “monkey trap,” he uses it as a metaphor for us getting on a single track, grasping for what we want, not realizing that the very act of being that focused, to the exclusion of all other possibilities, is not getting us anywhere. It’s a trap, and we’d benefit by not being so dogged in pursuing goals if they’re not getting us anywhere.

“Fast” vs. “slow”

He gets into some neuroscience that relates to how we perceive: what he called “fast” and “slow” response. You can train your mind through practice in how to use “fast” and “slow” for different activities, and they’re integral to our perception of what we’re doing, and our reactions to it, so that we don’t careen into a catastrophe, or miss important ideas in trying to deal with problems. He said that cognitive approaches to education deal with the “slow” systems, but not the “fast” ones, and that’s not enough to help students really understand a subject. As other forms of training inherently deal with the “fast” systems, educators need to think about how the “fast” systems respond to their subjects, and incorporate that into how they are taught. He anticipates this will require radically redesigning the pedagogy that’s typically used.

He says that the “fast” systems deal with the “atoms” of ideas that the “slow” system also deals with. By “atoms,” I take it he means fundamental, basic ideas or concepts for a subject. (I think of this as the “building blocks of molecules.”)

The way I take this is that the “slow” systems he’s talking about are what we use to work out hard problems. They’re what we use to sit down and ponder a problem for a while. The “fast” systems are what we use to recognize or spot possible patterns/answers quickly, a kind of quick, first-blush analysis that can make solving the problem easier. To use an example, you might be using “fast” systems now to read this text. You can do it without thinking about it. The “slow” systems are involved in interpreting what I’m saying, generating ideas that occur to you as you read it.

This is just me, but “fast” sounds like what we’d call “intuition,” because some of the thinking has already been done before we use the “slow” systems to solve the rest. It’s a thought process that takes place, and has already worked some things out, before we consciously engage in a thought process.


This is the clearest expression I’ve heard Kay make about what science actually is, not what most people think it is. He’s talked about it before in other ways, but he just comes right out and says it in this presentation, and I hope people watching it really take it in, because I see too often that people take what they’ve been taught about what science is in school and keep reiterating it for the rest of their lives. This goes on not only with people who love following what scientists say, but also in our societal institutions that we happen to associate with science.

…[Francis] Bacon wrote a book called “The Novum Organum” in 1620, where he said, “Hey, look. Our brains are messed up. We have bad brains.” He called the ways of messing up “idols.” He said we get serious errors because of our genetics. We get serious errors because of the culture we’re in. We get serious errors because of the languages we use. They don’t represent what’s actually out there. We get serious errors from the way that academia hangs on to bad ideas, and teaches them over again. These are his four “idols.” Anyone ever read Bacon? He said we need something to get around our bad brains! A set of heuristics, is the term we’d use today.

What he called for was … science, because that’s what “Novum Organum,” the rest of the title, was: “A new way of dealing with knowledge.”

Science is not the knowledge, because knowledge is in this context. What science is is a negotiation between what’s out there and what we can represent.

This is the big idea. This is the idea they don’t teach in school. This is the idea we should be teaching. It’s one of the biggest ideas of all time.

It isn’t the knowledge. It’s the relationship, because what’s out there is only knowable by a phenomena that is being filtered in every possible way. We don’t even know if our brain is capable of representing the stuff.

So, to think about science as the truth is completely wrong! It’s not the right way to look at it. But if you think about it as a negotiation between the best you can do right now and stuff that’s out there, where you’re not completely sure, you’re in a very strong position.

Science has been the most important, powerful thought system humans have ever invented, because it gave up the idea of truth, and it substituted for it a thousand variations of false, some of which are incredibly powerful. This is the big idea.

So, if we’re going to think about computing, this is one way … of thinking about, “Wow! Computers!” They are representers. We can learn about representations. We can simulate ideas. We can get a very good–much better sense of dealing with thinking about these complexities.

“Getting there”

The last part demonstrates what I’ve seen with exploration. You start out thinking you’re going to go from Point A to Point B, but you take diversions, pathways that are interesting but related to your initial search, because you find that it’s not a straight path from Point A to Point B. It’s not as straightforward as you thought. So, you try other ways of getting there. It is a kind of problem solving, but it’s really what Kay called “problem finding,” or finding the goal. In the process, the goal becomes finding a better way to get to the goal, and along the way you find problems that are worth solving, which you didn’t anticipate at all when you first set out. In that process, you’ll find things you didn’t expect to learn, but which are really valuable to your knowledge base. In your pursuit of a better way to get to your destination, you might even cross a threshold, and find that your initial goal is no longer worth pursuing, though there are better goals to pursue in this new perception you’ve obtained.

Related post: Reviving programming as literacy

—Mark Miller, https://tekkie.wordpress.com

10 years

I started this blog on May 31, 2006. It’s been 10 years.

When I started on it, I expected to be leaving technical advice for programmers and IT people, since I had been doing that in comment forums for a while. The advice I was leaving was becoming repetitive, and rather involved. The same problems kept coming up for people. I saw a blog as a way of optimizing the process. I thought I’d just write it here once, and refer people to it. Very quickly, though, a new interest was emerging for me, which grew into re-evaluating and exploring computer science, and that’s mostly what I’ve been writing about since.

The purpose I’ve held for this blog is to share what I’ve learned as I’ve tried to understand in much more depth the perspective on computing that drew me to the field when I was young. The reason I wanted to share it is I understood quickly that if it’s going to be something I’m going to pursue down the road, it has to be more than just me who sees value in it. I wanted to bring other people along. I thought of it as a “trail of breadcrumbs” for people like myself. I don’t know if it’s had that effect. It seems more like there have been certain subjects that have drawn a large audience, but the interest is focused there, and on a few related posts, and not on the blog as a whole. That’s alright. At least it’s having some impact, as I see when people leave appreciative comments, and on occasion e-mail me to extend the conversation. This is just an impression, but it appears I’m on a path that most don’t want to follow, and that may have to do with the fact that most don’t have the luxury. Nevertheless, I will continue posting about what I learn. It helps my learning process to write about it, and it helps my writing to do it with the knowledge that others will see it.

Related posts:

My journey, Part 6

The 150th post

I’ve had it on my mind to post this for a while, though I do it with hesitation, since this has been so politicized. I wrote about the ban on incandescents in 2008, in “Say goodbye to incandescent bulbs. Say hello to the ‘new normal’.”

I found this interesting nugget as I was writing this. Back then, I had an idea for how to make CFLs safer. I don’t know why no one thought of it before I did. From a consumer safety standpoint, it would seem like an obvious solution:

[M]aybe they could put a plastic bulb around the coil that would absorb the shock, or at least contain the contents if the coil broke from jarring. This would require them to make the coils smaller, but the added safety would be worth it.

Several months ago, while shopping for bulbs at the grocery store, I noticed they were selling CFLs with a hard plastic bulb, just like I suggested! Too little, too late, I guess. I am happy to say that CFLs are going away at the end of this year.

I actually stocked up on incandescents on clearance. I haven’t been using CFLs for most of my lighting. I ended up using them for a couple overhead lights, just because an inspector came through a couple years ago to make sure the apartment complex I live in was “smart regs-compliant,” and I didn’t want to embarrass my landlord by creating any excuse to call the dwelling non-compliant. “I love my minders…”

This doesn’t mean incandescents are coming back. They’re still banned by legislation passed in 2007, but at least there are now better lighting options. Since a few years ago, those who prefer incandescents can get halogen bulbs that are the same size as the old incandescents, and are just as bright. They cost more, but they’re supposed to use less electricity, and last longer. What I care about is that they operate on the same principle. They use a filament. So if you break one, all you have to do is clean up the glass, and vacuum or mop. No toxic spill. Yay!

As the guys in the video say, there are now very good LED lighting options, which are also non-toxic to you.

So I see this as reason to celebrate. The market actually succeeded in getting rid of an expensive, hazardous form of lighting that, it seems to me, our government tried to force on us. I wish the ban would be lifted. I don’t see the point in it. I know people will say it reduces pollution, because we can lower the amount of electricity we use, but as I explained years ago, this is a dumb way to go about it. Lighting is not the major source of electricity usage. Our air conditioning is the elephant in the room, each summer!

Aaron Renn had a good article on what was really going on with the light bulb ban, in “The Return of the Monkish Virtues,” written in 2012. It’s interesting reading. The idea always was to force people to think about their impact on the environment, not to actually do anything substantial about it.


“We need to do away with the myth that computer science is about computers. Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes. Science is not about tools, it is about how we use them and what we find out when we do.”
— “SIGACT trying to get children excited about CS,”
by Michael R. Fellows and Ian Parberry

There have been many times where I’ve thought about writing about this over the years, but I’ve felt like I’m not sure what I’m doing yet, or what I’m learning. So I’ve held off. I feel like I’ve reached a point now where some things have become clear, and I can finally talk about them coherently.

I’d like to explain some things that I doubt most CS students are going to learn in school, but they should, drawing from my own experience.

First, some notions about science.

Science is not just a body of knowledge (though that is an important part of it). It is also venturing into the unknown, and developing within yourself the idea of what it means to really know something, and understanding that knowing something doesn’t mean you know the truth. It means you have a pretty good idea of what the reality you are studying is like. You are able to create a description that resembles that reality to such a degree that you are able to make predictions about it, and those predictions can be confirmed within some boundaries that are either known or can be discovered, and subsequently confirmed by others without the need to revise the model. Even then, what you have is not “the truth.” What you have might be a pretty good idea for the time being. As Richard Feynman said, in science you never really know if you are right. All you can be certain of is that you are wrong. Being wrong can be very valuable, because you can learn the limits of your common-sense perceptions, and knowing that, direct your efforts toward what is more accurate, what is closer to reality.

The field of computing and computer science doesn’t have this perspective. It is not science. My CS professors used to admit this openly, but my understanding is that CS professors these days aren’t even aware of this distinction anymore. In other words, they don’t know what science is, but they presume that what they practice is science. CS, as it’s taught, has some value, but what I hope to introduce people to here is the notion that after they have graduated with a CS degree, they are in kindergarten, or perhaps first grade. They have learned some basics about how to read and write, but they have not learned what is really powerful about that skill. I will share some knowledge I have gleaned as a beginner venturing into this perspective.

The first thing to understand is that an undergraduate computer science education doesn’t tend to give you any notions about what computing is. It doesn’t explore models of computing, and in many cases it doesn’t even present much in the way of different models of programming. It presents one model of computing (the von Neumann model), but it doesn’t venture deeply into concepts of how the model works. It provides some descriptions of the entities that are supposed to make it work, but it doesn’t tie them together so that you can see the relationships, and see how the thing really works.

Science is about challenging and forming models of phenomena. A more powerful notion of computing is realized by understanding that the point of computer science is not the application of computing to problems, but creating, criticizing, and studying the strengths and weaknesses of different models of computing systems; programming is a means of modeling our ideas about them. In other words, programming languages and development environments can be used to model, explore, test, and criticize our notions about different computing system models. It behooves the computer scientist to either choose an existing computing architecture represented by an existing language, or to create a new architecture, and a new language, suited to what is being attempted, so that the focus can be maintained on what is being modeled, and on testing that model, without a lot of distraction.

As Alan Kay has said, the “language” part of “programming language” is really a user interface. I suggest that it is a user interface to a model of computing, a computing architecture. We can call it a programming model. As such, it presents a simulated environment of what is actually being accomplished in real terms, which enables a certain way of seeing the model you are trying to construct. (As I said earlier, we can use programming (with a language) to model our notions of computing.) In other words, you can avoid dealing with the guts of a certain computing architecture, because the runtime is handling those details for you. You’re just dealing with the interface to it. In this case you’re using, quite explicitly, the architecture embodied in the runtime, not an API, as a means to an end. The power doesn’t lie in an API. The power you’re seeking to exploit is the architecture represented by the language runtime (a computing model).
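To make that concrete with a small illustration of my own (not Kay’s): the following Scheme loop leans on two guarantees of the Scheme computing model itself, rather than on any API.

    ;; The runtime guarantees proper tail calls (required by the Scheme
    ;; standard), so this loop runs in constant stack space, and
    ;; arbitrary-precision exact integers (provided by virtually every
    ;; Scheme), so (* acc i) never overflows. Neither is an API call;
    ;; both are part of the computing model the language presents.
    (define (factorial n)
      (let loop ((i n) (acc 1))
        (if (= i 0)
            acc
            (loop (- i 1) (* acc i)))))

    (factorial 1000) ; a 2568-digit number; no stack growth, no overflow

Write the same loop against a C-style model and you’d be managing both concerns yourself. That difference is the architecture showing through the interface.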

This may sound like a confusing roundabout, but what I’m saying is that you can use one model of computing as a means for defining a new model of computing, whatever best suits the task. I propose that, indeed, that’s what we should be doing in computer science, if we’re not working directly with hardware.

When I talk about modeling computing systems, given their complexity, I’ve come to understand that constructing them is a gradual process of building up more succinct expressions of meaning for complex processes, with a goal of maintaining flexibility in how the sub-processes used for higher levels of meaning can be used. As a beginner, I’ve found that starting with a powerful programming language in a minimalist configuration, with just basic primitives, was very constructive for understanding this, because it forced me to venture into constructing my own means for expressing what I want to model, and to understand what is involved in doing that (see the sketch below). I’ve been using a version of Lisp for this purpose, and it’s been a very rewarding experience, but other students may have other preferences. I don’t mean to prejudice anyone toward one programming model. I recommend finding, or creating, one that fits the kind of modeling you are pursuing.
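Here’s a toy illustration of that layering (my own, and deliberately simplistic): pretend the only primitives available are cons, car, cdr, null?, if, lambda, and define, and build vocabulary upward from there.

    ;; First layer: one general way to walk a list.
    (define (foldr f init xs)
      (if (null? xs)
          init
          (f (car xs) (foldr f init (cdr xs)))))

    ;; Second layer: more specific vocabulary, each now a one-liner.
    ;; (Scheme has map and filter built in; we rebuild them to make the point.)
    (define (my-map f xs)
      (foldr (lambda (x acc) (cons (f x) acc)) '() xs))
    (define (my-filter keep? xs)
      (foldr (lambda (x acc) (if (keep? x) (cons x acc) acc)) '() xs))

    ;; Third layer: nearly a sentence in the vocabulary just built.
    (define (sum-of-squared-evens xs)
      (foldr + 0 (my-map (lambda (x) (* x x)) (my-filter even? xs))))

    (sum-of-squared-evens '(1 2 3 4 5 6)) ; => 56

Each layer is a more succinct expression of meaning than the one below it, and the lower layers stay flexible: foldr can be reused for purposes the upper layers never anticipated.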

Along with this perspective, I think, there must come an understanding of what different programming languages are really doing for you, in their fundamental architecture. What are the fundamental types in the language, and for what are they best suited? What are the constructs and facilities in the language, and what do they facilitate, in real terms? Sure, you can use them in all sorts of different ways, but to what do they lend themselves most easily? What are the fundamental functions in the runtime architecture that are experienced in the use of different features in the language? And finally, how are these things useful to you in creating the model you want to attempt? Seeing programming languages in this way provides a whole new perspective on them that has you evaluating how they facilitate your modeling practice.

Application comes along eventually, but it comes late in the process. The real point is to create the supporting architecture, and any necessary operating systems first. That’s the “hypothesis” in the science you are pursuing. Applications are tests of that hypothesis, to see if what is predicted by its creators actually bears out. The point is to develop hypotheses of computing systems so that their predictive limits can be ascertained, if they are not falsified. People will ask, “Where does practical application come in?” For the computer scientist involved in this process, it doesn’t, really. Engineers can plumb the knowledge base that’s generated through this process to develop ideas for better system software, software development systems, and application environments to deploy. However, computer science should not be in service to that goal. It can merely be useful toward it, if engineers see that it is fit to do so. The point is to explore, test, discuss, criticize, and strive to know.

These are just some preliminary thoughts on computer systems modeling. In my research, I’ve gotten indications that it’s not healthy for the discipline to be insular, just looking at its own artifacts for inspiration for new ideas. It needs to look at other fields of study to introduce unorthodox models into the discussion. I haven’t gotten into that yet. I’m still trying to understand some basic ideas of a computer system, using a more mathematical approach.

Related post: Does computer science have a future?

— Mark Miller, https://tekkie.wordpress.com

As this blog has progressed, I’ve gotten farther away from the technical, and moved more towards a focus on the best that’s been accomplished, the best that’s been thought (that I can find and recognize), with people who have been attempting to advance, or have made some contribution to what we can be as a society. I have also put a focus on what I see happening that is retrograde, which threatens the possibility of that advancement. I think it is important to point that out, because I’ve come to realize that the crucible that makes those advancements possible is fragile, and I want it to be protected, for whatever that’s worth.

As I've had conversations with people about this subject, I've come to realize why I have such a strong desire to see us be a freer nation than we are. It's because I got to see a microcosm of what's possible within a nascent field of research and commercial development, with personal computing, and later the internet. The advancements were breathtaking and exciting, and they inspired my imagination to such a height that it really seemed like the sky was the limit. They gave me visions of what our society could be, most of them not that realistic, but so inspiring. All of it took place in a context of a significant amount of government-funded and private research, but at the same time, in a legal environment that was not heavily regulated. At the time when the most exciting stuff was happening, the field was too small and unintelligible for most people to take notice of it, and society largely thought it could get along without it, and did. It was ignored, so people in it were free to try all sorts of interesting things, to have interesting thoughts about what they were accomplishing, and, for some, to test those ideas out. It wasn't all good, and since then lots of ideas I wouldn't approve of have been tried as a result of this technology, but there is so much good in it as well, from which I have benefited over the years. I am so grateful that it exists, and that so many people had the freedom to try something great with it. This experience has proven to me that the same is possible in all human endeavors, if people are free to pursue them. Not all the goals people would pursue would be things I think are good, but by the same token, so much would be possible that is good, from which people of many walks of life would benefit.

Glenn Beck wrote a column that encapsulates this sentiment so well. I'm frankly surprised he thought to write it. Some years ago, I strongly criticized Beck for writing off the potential of government-funded research in information technology. His critique was part of what inspired me to write the blog series "A history lesson on government R&D." We come at this from different backgrounds, but he sums it up so well that I thought I'd share it. It's called The Internet: What America Can And Should Be Again. Please forgive the editing mistakes, of which there are many. His purpose is to talk about political visions for the United States, so he doesn't get into the history of who built the technology. That's not his point. He captures well the idea that the people who developed the technology of the internet wanted to realize in society: a free flow of information and ideas, a proliferation of services of all sorts, and a means by which people could freely act together and build communities, if they could manage it. The key word in this is "freedom." He makes the point that it is we who make good things happen on the internet, if we're on it, and by the same token it is we who can make good things happen in the non-digital sphere, and that this can and should happen mostly in the private sector.

I think of this technological development as a precious example of what the non-digital aspects of our society can be like. I don't mean the internet as a verbatim model of our society. I mean it as an example within a society that has law, which applies to people's actions on the internet, and that already has an ostensibly representative political system; an example of the kind of freedom we can have within that, if we allow ourselves to have it. We already allow it on the internet. Why not outside of it? Why can't speech and expression, and commercial enterprise, in the non-digital realm be like what they are in the digital realm, where a lot goes as-is? Well, one possible reason why our society likes the idea of the freewheeling internet, but not a freewheeling non-digital society, is that we can turn away from the internet. We can shut off our own access to it, and restrict access (ie. parental controls). We can be selective about what we view on it. It's harder to do that in the non-digital world.

As Beck points out, we once had the freedom of the internet in a non-digital society, in the 19th century, and he presents some compelling, historically accurate examples. I understand he glosses over significant aspects of our history that were not glowing, where not everyone was free. In today's society, it's always dangerous to harken back romantically to the 19th and early 20th centuries as "golden times," because someone is liable to point out that they were not so golden. His point is that people of many walks of life (who, let's admit it, were often white) had the freedom to take many risks of their own free will, even to their lives, but they took them anyway, and the country was better off for it. It's not to ignore others who didn't have freedom at the time. It's using history as an illustration of an idea for the future, understanding that we have made significant strides in how we view people who look different, and come from different backgrounds, and in what rights they have.

The context that Glenn Beck has used over the last 10 years is that history, as a barometer of progress, is not linear. Societal progress ebbs and flows. It has meandered in this country between freedom and oppression, with different depredations visited on different groups of people in different generations. They follow some patterns, and they repeat, but the affected people are different. The depredations of the past were pretty harsh. Today's are not so harsh, but they exist nevertheless, and I think it's worth pointing them out, and saying, "This is not representative of the freedom we value."

The arc of America had been towards greater freedom, on balance, from the time of its founding up until the 1930s. Since then, we've wavered between backtracking and moving forward. I think it's very accurate to say that we've gradually lost faith in freedom over the last 13 years, and recently this loss of faith has become acute. Every time I look at people's attitudes about it, they're often afraid of freedom, thinking it will only allow the worst of humanity to roam free, and to lay waste to what everyone else has hoped for. Yes, some bad things will inevitably happen in such an environment. Bad stuff happens on the internet every day. Does that mean we should ban it, control it, contain it? If you don't allow the opportunity that enables bad things to happen, you will not get good things, either. I'm all in favor of prosecuting the guilty who hurt others, but if we're always in a preventative mode, we prevent that which could make our society so much better. You can't have one without the other; it's like trying to have your cake and eat it, too. It doesn't work. If we're so afraid of the depredations of our fellow citizens, then we don't deserve the wonderful things they might bring, and that fact is being borne out in lost opportunities.

We have an example in our midst of what’s possible. Take the idea of it seriously, and consider how deprived your present existence would be if it wasn’t allowed to be what it is now. Take its principles, and consider widening the sphere of the system that allows all that it brings, beyond the digital, into our non-digital lives, and what wonderful things that could bring to the lives of so many.

Christina Engelbart has written an excellent summary of her late father’s (Doug Engelbart’s) ideas, and has outlined what’s missing from digital media.

Collective IQ Review

Interactive computing pioneer Doug Engelbart coined the term Collective IQ to inform the IT research agenda

Doug Engelbart was recognized as a great pioneer of interactive computing. Most of his dozens of prestigious awards cited his invention of the mouse. But in his mind, the true promise of interactive computing and the digital revolution was a larger strategic vision of using the technology to facilitate society’s evolution. His research agenda focused on augmenting human intellect, boosting our Collective IQ, enhancing human effectiveness at addressing our toughest challenges in business and society – a strategy I have come to call Bootstrapping Brilliance.

In his mind, interactive computing was about interacting with computers, and more importantly, interacting with the people and with the knowledge needed to quickly and intelligently identify problems and opportunities, research the issues and options, develop responses and solutions, integrate learnings, iterate rapidly. It’s about the people, knowledge, and tools interacting, how we intelligently leverage that, that provides the brilliant outcomes we…


Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, when I was attending the Rebooting Computing Summit at the Computer History Museum in Mountain View, CA. The people running the exhibit said that it had been commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM and would soon reclaim it, putting it in his house. They thought that might happen in April of that year. A couple of years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!

I was taken with this interview on Reason.tv with Mr. O’Neill, a writer for Spiked, because he touched on so many topics that are pressing in the West. His critique of what’s motivating current anti-modern attitudes, and what they should remind us of, is so prescient that I thought it deserved a mention. He is a Brit, so his terminology will be rather confusing to American ears.

He called what’s termed “political correctness” “conservative.” I’ve heard this critique before, and it’s interesting, because it looks at group behavior from a principled standpoint, not just what’s used in common parlance. A lot of people won’t understand this, because what we call “conservative” now is in opposition to political correctness, and would be principally called something approaching “liberal” (as in “classical liberal”). I’ve talked about this with people from the UK before, and it goes back to that old saying that the United States and England are two countries separated by a common language. What we call “liberal” now, in common parlance, would be called “conservative” in their country. It’s the idea of maintaining the status quo, or even the status quo ante; of shutting out, even shutting down, any new ideas, especially anything controversial. It’s a behavior that goes along with “consolidating gains,” which is adverse to anything that would upset the applecart.

O’Neill’s most powerful argument is in regards to environmentalism. He doesn’t like it, calling it an “apology for poverty,” a justification for preventing the rest of the world from developing as the West did. He notes that it conveniently avoids the charge of racism, because it’s able to point to an amorphous threat, justified by “science,” that inoculates the campaign from such charges.

The plot thickens when O'Neill talks about himself, because he calls himself a "Marxist/libertarian." He "unpacks" that, explaining that he means "the early Marx and Engels," who, he says, talked about freeing people from poverty, and from state diktat. He summed it up quoting Trotsky: "We have to increase the power of man over Nature, and decrease the power of man over man." He also used the term "progressive," but Nick Gillespie explained that what O'Neill calls "progressive" is often what we would call "libertarian" in America. I don't know what to make of him, but I found myself agreeing a lot with what he said in this interview, at least. He and I see much the same things going on, and I think he accurately voices why I oppose what I see as anti-modern sentiment in the West.

Edit 1/11/2016: Here's a talk O'Neill gave with Nick Cater of the Centre for Independent Studies, called "Age of Endarkenment," where they contrast Enlightenment thought with what concerns "the elect" today. What he points out is that the conflict between those who want ideas of progress to flourish and those who want to suppress societal progress has happened before. It happened pre-Enlightenment, and during the Enlightenment, and it will sound a bit familiar.

I’m going to quote a part of what he said, because I think it cuts to the chase of what this is really about. He echoes what I’ve learned as I’ve gotten older:

Now what we have is the ever-increasing encroachment of the state onto every aspect of our lives: How well we are, what our physical bodies are like, what we eat, what we drink, whether we smoke, where we can smoke, and even what we think, and what we can say. The Enlightenment was really, as Kant and others said, about encouraging people to take responsibility for their lives, and to grow up. Kant says all these “guardians” have made it seem extremely dangerous to be mature, and to be in control of your life. They’ve constantly told you that it’s extremely dangerous to run your own life. And he says you’ve got to ignore them, and you’ve got to dare to know. You’ve got to break free. That’s exactly what we’ve got to say now, because we have the return of these “guardians,” although they’re no longer kind of religious pointy-hatted people, but instead a kind of chattering class, and Greens, and nanny-staters, but they are the return of these “guardians” who are convincing us that it is extremely dangerous to live your life without expert guidance, without super-nannies telling you how to raise your children, without food experts telling you what to eat, without anti-smoking campaigners telling you what’s happening to your lungs. I think we need to follow Kant’s advice, and tell these guardians to go away, and to break free of that kind of state interference.

And one important point that [John Stuart] Mill makes in relation to all this is that even if people are a bit stupid, and make the wrong decisions when they’re running their life, he said even that is preferable to them being told what to do by the state or by experts. And the reason he says that’s preferable is because through doing that they use their moral muscles. They make a decision, they make a choice, and they learn from it. And in fact Mill says very explicitly that the only way you can become a properly responsible citizen, a morally responsible citizen, is by having freedom of choice, because it’s through that process, through the process of making a choice about your life that you can take responsibility for your life. He says if someone else is telling you how to live and how to think, and what to do, then you’re no better than an ape who’s following instructions. Spinoza makes the same point. He says you’re no better than a beast if you’re told what to think, and told what to say. And the only way you can become a man, or a woman these days as well–they have to be included, is if you are allowed to think for yourself to determine what your thought process should be, how you should live, and so on. So I think the irony of today, really Nick, is that we have these states who think they are making us more responsible by telling us not to do this, and not to do that, but in fact they’re robbing us of the ability to become responsible citizens. Because the only way you can become a responsible citizen is by being free, and by making a choice, and by using your moral muscles to decide what your life’s path should be.