Alan Kay: Rethinking CS education

I thought I’d share the video below, since it has some valuable insights on what computer science should be, and what education should be, generally. The two are integrated in this presentation, and indeed, one of the projects of education should be integrating computer science into it, though not with the explicit purpose of creating more programmers for future jobs (students could always use it for that). Alan Kay presents a different definition of CS than the one pursued in universities today. He refers to how Alan Perlis defined it (Perlis was the one who came up with the term “computer science”), which I’ll get to below.

This thinking about CS and education provides, among other things, a pathway toward reimagining how knowledge, literature, and art can be presented, organized, dissected, annotated, and shared in a way that’s more meaningful than what can be achieved with old media. (For more on that, see my post “Getting beyond paper and linear media.”) As the late Carl Sagan often advocated, Kay’s presentation here argues for a general model for the education of citizens, not just professional careerists.

Another reason I thought of sharing this is that, several years ago, I remember hearing that Kay was working on rethinking the computer science curriculum. What he presents here is more of a suggestion: “Start thinking about it this way.” (He takes a dim view of the concept of curriculum, since it suggests setting students on a rigid path of study, with the intent of creating minds with a cookie cutter, though he is in favor of classical liberal arts education, with the prescriptions that entails.)

As he says in the presentation, there’s a lot to develop in the practice of education in order to bring this to fruition.

This is from 2015:

I wrote notes below on some of the segments of his talk, because I think his presentation bears some interpretation for many readers. He uses metaphors a lot.

The bicycle

This is an analogy for how an apparatus, or a subject, is typically modified for education. We take the optimized, adult version of something and add compensators, which make it so that beginners can use it without falling all over themselves. It’s seen as easier to present it this way, and as a skill-building experience: in order to learn how to do something, you need to use “the real thing.” Beginners can put on a good show of using this sort of apparatus, or modified subject, but the problem is that it doesn’t teach a beginner how to become good at using the real thing at its full potential. The compensators become a crutch.

He says a better idea is to use an apparatus, or a component of the subject, that allows a beginner to get a feel for using the thing in a way that gets across significant aspects of its potential, but without the optimizations, or the way adults use it, which make it too complicated for a beginner to use under their own power. In this case, a lowbike is better. This beginner apparatus, or component, is more like the first version of the thing you’re trying to teach. Bicycles were originally more like scooters, without pedals or a chain: you’d sit in the seat, push it along with your legs, kind of “running while sitting,” glide, and turn by shifting your weight and leaning into the turn. Once beginners get a feel for that, they can deal with the optimized version, or a scaled-down adult version, with pedals and a chain to drive the bike, because all that adds is the ability to get more power out of it. It doesn’t change any of the fundamentals of how you use it.

This gets to an issue of pedagogy, that learners need to learn something in components, rather than dealing with the whole thing at once. Once they learn one capacity, they can move on to the next challenge in learning the “whole thing.”

Radiation vs. nouns

He put forward a proposition for his talk, which is that he’s mixing a bunch of ideas together, because they overlap. This is a good metaphor, because most of his talk is about what we are as human beings, and how society should situate and view education. Only a little of it is actually on computer science, but all of it is germane to talking about computer science education.

He also gives a little advice to education reformers. He pointed out what’s wrong with education as it is right now, but rather than cursing it, he said one should make a deliberate effort to “build a tribe” or coalition with those who are causing the problem, or are in the midst of the problem, and suggest ways to bring them into a dignified position, perhaps by sharing in their honors, or, as his example illustrated, their shame. I draw some of this from Plato’s Cave metaphor.

Cooperation and competition in society

I once heard Kay talk about this many years ago. He said that, culturally, modern corporations are like the ancient hunter-gatherers. They exploit the resources of an area for a while, and once it’s exhausted, they move on, and that as a culture, they have yet to learn about democracy, which requires more of a “settlement” mentality toward what they’re doing. Here, he used an agricultural metaphor to talk about a cooperative society that creates the wealth that is then used by competitive forces within it. What he means by this is that the true wealth is the knowledge that’s ultimately used to develop new products and services. It’s not all developed inside the marketplace. He doesn’t talk about this, but I will. Even though a significant part of the wealth (as he said, you can think of it as “potential energy”) is generated inside research labs, what research labs tend to lack is knowledge of what the members of society can actually understand of this developed knowledge. That’s where the competitive forces in society come in, because they understand this a lot better. They can negotiate between how much of the new knowledge to put into a product, and how much it will cost, to reach as many people as possible. This is what happened in the computer industry of the past.

I think I understand what he’s getting at with the agricultural metaphor, though perhaps I need to be filled in more. My understanding is that farmers don’t just want to reap a crop for one season. Their livelihood depends on maintaining the fertility of their land. That requires more than exploiting what’s there season after season; do only that and you get the Dust Bowl. If instead practices are modified to allow the existing land to become fertile again, or, in the case of hunter-gathering, the environment is aggressively managed to create favorable grazing that attracts game, then you can create a cycle of exploitation and care such that a group can stay in one area for a long time, without denying themselves any of the benefits of how they live.

I think what he suggests is that if corporations would modify their behavior to a more settled, agricultural model, using some of their profits to contribute to educating the society in which they exist, and to funding scientific research on a private basis, that would “regenerate the soil” for economic growth, which can then fuel more research, creating a cycle of renewal. No doubt the idea he’s presenting includes the businesses who would participate in doing this. They should be allowed to profit (“reap”) from what they “sow,” but the point is they’re not the only ones who can profit. Other players in the marketplace can also exploit the knowledge that’s generated, and profit as well. That’s what was done in the past with private research labs.

He attributes the lack of this to culture, of not realizing that the economic model that’s being used is not sustainable. Eventually, you use up the “soil,” and it becomes “infertile,” and “blows away,” and, in the case of hunter-gathering, the “good hunting grounds” are used up.

He makes a crucial point, though, that education is not just about jobs and competitiveness. It’s also about inculcating what citizenship really means. I’m sure if he was asked to drill down on this more, he would suggest a classical education for this, along with a modified math and science curriculum that would give students a sense of what those subjects really are like.

The sense I get is that he’s advocating something closer to Andrew Carnegie’s model of corporate stewardship; after Carnegie made his money, he directed his philanthropy to building schools and libraries. Kay would just add science labs to that mix. (He mentions this later in his talk.)

I feel it necessary to note that not all for-profit entities would be able to participate in funding these cooperative activities, because their profit margins are so slim. I don’t think that’s what he’s expecting out of this.

What we are, to the best of our knowledge

He gives three views of human mental capacity: the way we perceive (theatrical), how much we can perceive at any moment in time (1 ± 2), and how educators should view human minds psychologically and mentally (more primate and mammalian). This relates to neuroscience and, to some extent, evolutionary psychology.

The raison d’être of computer science

The primary purpose of computer science should be developing a science of systems in process, and that means all processes: mechanical processes, technological processes, social processes, biological processes, mental processes, etc. This relates to my earlier post, “Beginning the journey of becoming a computer scientist.” It’s partly about developing a new kind of mathematics for modeling processes. Alan Turing did this with his Turing machine concept, though he was trying to model the process of generating mathematical statements and testing them mathematically.
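To make the idea of “a mathematics for modeling processes” a bit more concrete, here is a minimal sketch (my own illustration, not anything from Kay’s talk) of a Turing machine simulator: an entire process reduced to a table of rules, which is exactly the kind of formal object a science of processes can study.

```python
# A minimal Turing machine simulator: a process described purely as rules.
# Illustrative sketch only; names and conventions here are my own.

def run_turing_machine(rules, tape, state, blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), 0 (stay), or +1 (right)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = cells.get(pos, blank)
        state, cells[pos], move = rules[(state, symbol)]
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example process: flip every bit on the tape, halting at the first blank.
flip_rules = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("HALT", "_", 0),
}

print(run_turing_machine(flip_rules, "10110", "scan"))  # -> 01001
```

The point is not the bit-flipping; it’s that “a process” here is nothing but a small table of rules plus a stepping procedure, which makes the process itself something you can reason about, modify, and simulate.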

Shipping the design

Kay talks about how programmers today don’t have access to anything like what designers in other fields have, where they’re able to model their design, simulate it, and then have a machine fabricate a prototype that you can actually show and use.

I want to clarify one point, because I don’t think he articulated it well (I found out about this because he expressed a similar thought on Quora, and I finally understood what he meant): at one point he said that students at UCLA, one of the top 10 CS schools, use “vi terminal emulators” (it sounds like he said “bi-terminal emulators”), emulating punched cards. What he meant was that students are logging in to a Unix/Linux system, bringing up an X terminal window that is 80 columns wide (hence the punched card metaphor, since punch cards were also 80 columns wide), and using the “vi” text editor (or more likely “vim,” an improved version of vi) to write their C++ code, the primary language they use.

I had an epiphany about this gulf between the tools programmers use and the tools engineers use, about 8 or 9 years ago. I was at a friend’s party, and there were a few mechanical engineers among the guests. I overheard a couple of them talking about the computer-aided design (CAD) software they were using. One talked about a “terrible” piece of CAD software he used at work. He said he had a much better CAD system at home, but it was incompatible with the data files that were used at work. As much as he would’ve loved to use it, he couldn’t. The system he used at work required him to group pieces of a design together as he was building the model, and once he did that, those pieces became inflexible. He couldn’t redesign one piece, or separate one out individually from the model. He couldn’t move the pieces around on the model and have them fit. Once they were grouped, that was it. The design became static.

In order to redesign one piece, he had to take the entire model apart, piece by piece, redesign the part, and then redesign all the other pieces in the group to make the new part fit. He hated it, and as he talked about it, he acted so disgusted that he seemed ready to throw the whole thing in the trash. His CAD system at home, he said, was wonderful: he could separate a part from a model any time he wanted, and the system would adjust the model automatically to “make sense” of the part being missing. He could redesign the part, move it to a different place on the model, “attach it” somewhere, and the system would automatically adjust the model so that the new part would fit. The way he described it gave it a sense of fluidity, whereas the system he used at work sounded rigid.

It reminded me of the programming languages I had been using, where once relationships between entities were set up, it was really difficult to take pieces “out” and redesign them, because everything that depended on a piece would break once I redesigned it. I had to go around and redesign all the other entities that related to it, to adjust them to the redesign of the one piece.

I can’t remember exactly how this worked, but another thing the engineer talked about was that the system at work had some sort of “binding” mechanism that seemed to associate parts by “type,” and this, too, was rigid, which reminded me a lot of the strong typing systems in the languages I had been using. The system he had at home didn’t have this, and to him, it made more sense. Again, his description lent a sense of fluidity to the experience of using it. I thought, “My goodness! Why don’t programmers think like this? Why don’t they insist that the experience be like this guy’s CAD system at home?” For the first time in my career, I had a profound sense of just what Alan Kay talks about here: the computing field is retrograde. It has not advanced anywhere close to the kind of engineering that exists in other fields, where practitioners would insist on this sort of experience. We accept so much less, whereas modern engineers would have a hard time standing for it, because they know they have better options.
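Here is a small sketch of the contrast I mean, in Python (my own illustration, not from the talk; the classes and names are hypothetical): code bound to one concrete “part” is rigid the way that work CAD system was, while code bound to a capability stays fluid.

```python
# Illustrative sketch: rigid vs. flexible "binding" between parts of a design.
# GasEngine/ElectricMotor are hypothetical parts, invented for the contrast.

class GasEngine:
    def thrust(self):
        return 120

class ElectricMotor:
    def thrust(self):
        return 150

def rigid_top_speed(engine):
    # "Grouped" with one concrete part: swapping in a different part means
    # revisiting every place that names GasEngine, like dismantling the
    # whole CAD model to change one piece.
    assert isinstance(engine, GasEngine), "only GasEngine fits this design"
    return engine.thrust() * 2

def flexible_top_speed(engine):
    # Bound to a capability (anything with a .thrust() method), so a part
    # can be pulled out and replaced, and the rest of the "model" still fits.
    return engine.thrust() * 2

print(flexible_top_speed(GasEngine()))      # 240
print(flexible_top_speed(ElectricMotor()))  # 300
```

The design choice is where the coupling lives: `rigid_top_speed` hard-codes a relationship to one part, while `flexible_top_speed` only states what it needs of a part, which is closer to the fluid experience that engineer described in his home CAD system.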

Don’t be fooled by large efforts below threshold

Before I begin this part, I want to share a crucial point that Kay makes, because it’s one of the big ones:

Think about what thinking is. Thinking is not being logical. Thinking is choosing the environment that you’re going to think in before you start rationalizing.

Kay had something very interesting, and startling, to say about the Apollo space program, using that as a metaphor for large reform initiatives in education generally. I recently happened upon a video of testimony he gave to a House committee on educational computing back in 1982, chaired by then-congressman Al Gore, and Kay talked about this same example back then. He said that the way the Apollo rockets were designed was a “hack.” They were not the best design for space travel, but it was the most expedient for the mission of getting to the Moon by the end of the 1960s. Here, in this presentation, he talks about how each complete rocket was the height of a 45-story building (1-1/2 football fields in length), most of it high explosives, with only a tiny capsule at the top that could fit 3 astronauts. This is not a model that could scale to what was needed for space travel.

It became this huge worldwide cultural event when NASA launched it, and landed men on the Moon, but Kay said it was really not a great accomplishment. I remember Rep. Gore said in jest, “The walls in this room are shaking!” The camera panned around a bit, showing pictures on the wall from NASA. How could he say such a thing?! This was the biggest cultural event of the century, perhaps in all of human history. He explained the same thing here: the Apollo program didn’t advance space travel beyond the mission to the Moon. It was not technology that would get us beyond that, though in hindsight we can say that technology like it enabled launching probes throughout the Solar System.

Now, what he means by “space travel,” I’m not sure. Is it manned missions to the outer planets, or to other star systems? Kay is someone who has always thought big. So, it’s possible he was thinking of interstellar travel. What he was talking about was the problem of propulsion, getting something powerful enough to make significant discoveries in space exploration possible. He said chemical propellant just doesn’t do it. It’s good enough for launching orbital vehicles around our planet, and launching probes, but that’s really it. The rest is just wasting time below threshold.

Another thing he explained is that large initiatives which don’t cross a meaningful threshold can be harmful to advancing an endeavor, because large initiatives come with extended expectations that the investment will continue to be used, and those expectations must be satisfied, or else there will be no cooperation on the initial effort. The participants will want their return on investment. He said that’s what happened with NASA. The ROI had to play out, but that ruined the program, because as it did, people could see we weren’t advancing the state of the art that much in space travel, and the science being produced out of it was usually nothing to write home about. Eventually, we got what we now see: people are tired of it, and have no enthusiasm for it, because it set expectations so low.

What he was trying to do in his House committee testimony, and what he’s trying to do here, is provide some perspective that science offers, vs. our common sense notion of how “great” something is. You cannot get qualitative improvement in an endeavor without this perspective, because otherwise you have no idea what you’re dealing with, or what progress you’re building on, if any. Looking at it from a cultural perspective is not sufficient. Yes, the Moon landing was a cultural milestone, but not a scientific or engineering milestone, and that matters.

Modern science and engineering have a sense of thresholds, that there can come a point where some qualitative leap is made, a new perspective on what you’re doing is realized that is significantly better than what existed prior. He explains that once a threshold has been crossed, you can make small improvements which continue to build on that significant leap, and those improvements will stick. The effort won’t just crash back down into mediocrity, because you know something about what you have, and you value it. It’s a paradigm shift. It is so significant, you have little reason to go back to what you were doing before. From there, you can start to learn the limits of that new perspective, and at some point, make more qualitative leaps, crossing more thresholds.

“Problem-finding”/finding the goal vs. problem-solving

Problem solving begins within a current context: “We’re having a problem with X. How do we solve it?” Problem finding asks whether we even have a good idea of what the problem is. Maybe the problems we’ve been seeing arise because we haven’t solved a different problem we don’t know about yet. “Let’s spend time trying to find that.”

Another way of expressing this is a concept I’ve heard about from economists, called “opportunity cost,” which, in one context, gets across the idea that by implementing a regulation, it’s possible that better outcomes will be produced in certain categories of economic interactions, but it will also prevent certain opportunities from arising which may also be positive. The rub is that these opportunities will not be known ahead of time, and will not be realized, because the regulation creates a barrier to entry that entrepreneurs and investors will find too high to overcome. This concept is difficult to communicate to many laymen, because it sounds speculative.

What this concept encourages people cognizant of it to do is to “consider the unseen,” the possibilities that lie outside of what’s currently known. One can view “problem finding” in a similar way: not just considering the unseen, but exploring it, finding new knowledge that was previously unknown, and therefore unseen, and then reconsidering one’s notion of what the problem really is. It’s a way of expanding your knowledge base in a domain, with the key practice being that you’re not just exploring what’s already known. You’re exploring the unknown.

The story he tells about MacCready illustrates working with a good modeling system. He needed to be able to fail with his ideas a lot, before he found something that worked. So he needed a simple enough modeling system that he could fail in, where when he crashed with a particular model, it didn’t take a lot to analyze why it didn’t work, and it didn’t take a lot to put it back together differently, so he could try again.

He made another point about Xerox PARC, that it took years to find the goal, and it involved finding many other goals, and solving them in the interim. I’ve written about this history at “A history lesson on government R&D” Part 2 and Part 3. There, you can see the continuum he talks about, where ARPA/IPTO work led into Xerox PARC.

This video with Vishal Sikka and Alan Kay gives a brief illustration of this process, and what was produced out of it.

Erosion gullies

There are a couple metaphors he uses to talk about the lack of flexibility that develops in our minds the more we focus our efforts on coping, problem solving, and optimizing how we live and work in our current circumstances. One is erosion gullies. The other is the “monkey trap.”

Erosion gullies channel water along a particular path. They develop naturally as water erodes the land it flows across. These “gullies” seem to fit with what works for us, and/or what we’re used to. They develop into habits about how we see the world–beliefs, ideas which we just accept, and don’t question. They allow some variation in the path that’s followed, but they provide boundaries that don’t allow the water to go outside the gully (leaving aside the exception of floods, for the sake of argument). He uses this to talk about how “channels” develop in our minds that direct our thinking. The more we focus our thoughts in that direction, the “deeper” the gully gets. Keep at it too long, and the “gully” won’t allow us to see anything different than what we’re used to. He says that it may become inconceivable to think that you could climb out of it. Most everything inside the “gully” will be considered “right” thinking (no reason why), and anything outside of it will be considered “wrong” (no reason why), and even threatening. This is why he mentions that wars are fought over this. “We’re all in different erosion gullies.” They don’t meet anywhere, and my “right” is your “wrong,” and vice-versa. The differences are irreconcilable, because the idea of seeing outside of them is inconceivable.

He makes two points with this. One is that we have erosion gullies regarding the stories we tell ourselves and the beliefs we hold onto. Another is that we have erosion gullies even in our sensory perceptions, which dictate what we see and don’t see. We can see things that don’t even exist, and we typically do. He uses eyewitness testimony to illustrate this.

I think what he’s saying with it is we need to watch out for these “gullies.” They develop naturally, but it would be good if we had the flexibility to be able to eventually get out of our “gully,” and form a new “channel,” which I take is a metaphor for seeing the world differently than what we’re used to. We need a means for doing that, and what he proposes is science, since it questions what we believe, and tests our ideas. We can get around our beliefs, and thereby get out of our “gullies” to change our perspective. It doesn’t mean we abandon “gullies,” but just become aware that other “channels” (perspectives) are possible, and we can switch between them, to see better, and achieve better outcomes.

Regarding the “monkey trap,” he uses it as a metaphor for us getting on a single track, grasping for what we want, not realizing that the very act of being that focused, to the exclusion of all other possibilities, is not getting us anywhere. It’s a trap, and we’d benefit by not being so dogged in pursuing goals if they’re not getting us anywhere.

“Fast” vs. “slow”

He gets into some neuroscience that relates to how we perceive, what he calls “fast” and “slow” response. You can train your mind through practice in how to use “fast” and “slow” for different activities, and they’re integral to our perception of what we’re doing, and our reactions to it, so that we don’t careen into a catastrophe, or miss important ideas in trying to deal with problems. He said that cognitive approaches to education deal with the “slow” systems, but not the “fast” ones, and that’s not enough to help students really understand a subject. Since other forms of training inherently deal with the “fast” systems, educators need to think about how the “fast” systems respond to their subjects, and incorporate that into how those subjects are taught. He anticipates this will require radically redesigning the pedagogy that’s typically used.

He says that the “fast” systems deal with the “atoms” of ideas that the “slow” system also deals with. By “atoms,” I take it he means fundamental, basic ideas or concepts for a subject. (I think of this as the “building blocks of molecules.”)

The way I take this is that the “slow” systems he’s talking about are what we use to work out hard problems. They’re what we use to sit down and ponder a problem for a while. The “fast” systems are what we use to recognize or spot possible patterns/answers quickly, a kind of quick, first-blush analysis that can make solving the problem easier. To use an example, you might be using “fast” systems now to read this text. You can do it without thinking about it. The “slow” systems are involved in interpreting what I’m saying, generating ideas that occur to you as you read it.

This is just me, but “fast” sounds like what we’d call “intuition,” because some of the thinking has already been done before we use the “slow” systems to solve the rest. It’s a thought process that takes place, and has already worked some things out, before we consciously engage in a thought process.

Science

This is the clearest expression I’ve heard Kay make about what science actually is, not what most people think it is. He’s talked about it before in other ways, but he just comes right out and says it in this presentation, and I hope people watching it really take it in, because I see too often that people take what they’ve been taught about what science is in school and keep reiterating it for the rest of their lives. This goes on not only with people who love following what scientists say, but also in our societal institutions that we happen to associate with science.

…[Francis] Bacon wrote a book called “The Novum Organum” in 1620, where he said, “Hey, look. Our brains are messed up. We have bad brains.” He called the ways of messing up “idols.” He said we get serious errors because of our genetics. We get serious errors because of the culture we’re in. We get serious errors because of the languages we use. They don’t represent what’s actually out there. We get serious errors from the way that academia hangs on to bad ideas, and teaches them over again. These are his four “idols.” Anyone ever read Bacon? He said we need something to get around our bad brains! A set of heuristics, is the term we’d use today.

What he called for was … science, because that’s what “Novum Organum,” the rest of the title, was: “A new way of dealing with knowledge.”

Science is not the knowledge, because knowledge is in this context. What science is is a negotiation between what’s out there and what we can represent.

This is the big idea. This is the idea they don’t teach in school. This is the idea we should be teaching. It’s one of the biggest ideas of all time.

It isn’t the knowledge. It’s the relationship, because what’s out there is only knowable by a phenomena that is being filtered in every possible way. We don’t even know if our brain is capable of representing the stuff.

So, to think about science as the truth is completely wrong! It’s not the right way to look at it. But if you think about it as a negotiation between the best you can do right now and stuff that’s out there, where you’re not completely sure, you’re in a very strong position.

Science has been the most important, powerful thought system humans have ever invented, because it gave up the idea of truth, and it substituted for it a thousand variations of false, some of which are incredibly powerful. This is the big idea.

So, if we’re going to think about computing, this is one way … of thinking about, “Wow! Computers!” They are representers. We can learn about representations. We can simulate ideas. We can get a very good–much better sense of dealing with thinking about these complexities.

“Getting there”

The last part demonstrates what I’ve seen with exploration. You start out thinking you’re going to go from Point A to Point B, but you take diversions, pathways that are interesting but related to your initial search, because you find that it’s not a straight path from Point A to Point B. It’s not as straightforward as you thought. So, you try other ways of getting there. It is a kind of problem solving, but it’s really what Kay calls “problem finding,” or finding the goal. In the process, the goal is to find a better way to get to the goal, and along the way, you find problems that are worth solving, that you didn’t anticipate at all when you first got going. In that process, you’ll find things you didn’t expect to learn, but which are really valuable to your knowledge base. In your pursuit of a better way to get to your destination, you might even cross a threshold, and find that your initial goal is no longer worth pursuing, but that there are better goals to pursue in this new perception you’ve obtained.

Related posts:

Reviving programming as literacy

The necessary ingredients for computer science

—Mark Miller, https://tekkie.wordpress.com


The best summation of what I have hoped for my country

As this blog has progressed, I’ve gotten farther away from the technical, and moved more towards a focus on the best that’s been accomplished, the best that’s been thought (that I can find and recognize), with people who have been attempting to advance, or have made some contribution to what we can be as a society. I have also put a focus on what I see happening that is retrograde, which threatens the possibility of that advancement. I think it is important to point that out, because I’ve come to realize that the crucible that makes those advancements possible is fragile, and I want it to be protected, for whatever that’s worth.

As I’ve had conversations with people about this subject, I’ve been coming to realize why I have such a strong desire to see us be a freer nation than we are. It’s because I got to see a microcosm of what’s possible within a nascent field of research and commercial development, with personal computing, and later the internet, where the advancements were breathtaking and exciting, which inspired my imagination to such a height that it really seemed like the sky was the limit. It gave me visions of what our society could be, most of them not that realistic, but they were so inspiring.

It took place in a context of a significant amount of government-funded and private research, but at the same time, in a legal environment that was not heavily regulated. At the time when the most exciting stuff was happening, it was too small and unintelligible for most people to take notice of it, and society largely thought it could get along without it, and did. It was ignored, so people in it were free to try all sorts of interesting things, to have interesting thoughts about what they were accomplishing, and for some to test those ideas out.

It wasn’t all good, and since then, lots of ideas I wouldn’t approve of have been tried as a result of this technology, but there is so much that’s good in it as well, which I have benefitted from so much over the years. I am so grateful that it exists, and that so many people had the freedom to try something great with it. This experience has proven to me that the same is possible in all human endeavors, if people are free to pursue them. Not all goals that people would have in it would be things I think are good, but by the same token I think there would be so much possible that would be good, from which people of many walks of life would benefit.

Glenn Beck wrote a column that encapsulates this sentiment so well. I’m frankly surprised he thought to write it. Some years ago, I strongly criticized Beck for writing off the potential of government-funded research in information technology. His critique was part of what inspired me to write the blog series “A history lesson on government R&D.” We come at this from different backgrounds, but he sums it up so well, I thought I’d share it. It’s called “The Internet: What America Can And Should Be Again.” Please forgive the editing mistakes, of which there are many. His purpose is to talk about political visions for the United States, so he doesn’t get into the history of who built the technology. That’s not his point. He captures well what the people who developed the technology of the internet wanted to create in society: a free flow of information and ideas, a proliferation of services of all sorts, and a means by which people could freely act together and build communities, if they could manage it. The key word in this is “freedom.” He makes the point that it is we who make good things happen on the internet, if we’re on it, and by the same token it is we who can make good things happen in the non-digital sphere, and it can and should be mostly in the private sector.

I think of this technological development as a precious example of what the non-digital aspects of our society can be like. I don’t mean the internet as a verbatim model of our society. I mean it as an example within a society that has law, which applies to people’s actions on the internet, and that already has an ostensibly representational political system; an example of the kind of freedom we can have within that, if we allow ourselves to have it. We already allow it on the internet. Why not outside of it? Why can’t speech, expression, and commercial enterprise in the non-digital realm be like what they are in the digital realm, where a lot is allowed as-is? Well, one possible reason our society likes the idea of the freewheeling internet, but not a freewheeling non-digital society, is that we can turn away from the internet. We can shut off our own access to it, and restrict access (i.e., parental controls). We can be selective about what we view on it. It’s harder to do that in the non-digital world.

As Beck points out, we once had the freedom of the internet in a non-digital society, in the 19th century, and he presents some compelling, historically accurate examples. I understand he glosses over significant aspects of our history that were not glowing, where not everyone was free. In today’s society, it’s always dangerous to harken back romantically to the 19th and early 20th centuries as “golden times,” because someone is liable to point out that they were not so golden. His point is that people of many walks of life (who, let’s admit it, were often white) had the freedom to take many risks of their own free will, even to their lives, and they took them anyway, and the country was better off for it. It’s not to ignore others who didn’t have freedom at the time. It’s using history as an illustration of an idea for the future, understanding that we have made significant strides in how we view people who look different, and come from different backgrounds, and what rights they have.

The context that Glenn Beck has used over the last 10 years is that history, as a barometer of progress, is not linear. Societal progress ebbs and flows. It has meandered in this country between freedom and oppression, with different depredations visited on different groups of people in different generations. They follow some patterns, and they repeat, but the affected people are different. The depredations of the past were pretty harsh. Today’s are not so harsh, but they exist nevertheless, and I think it’s worth pointing them out, and saying, “This is not representative of the freedom we value.”

The arc of America had been towards greater freedom, on balance, from the time of its founding, up until the 1930s. Since then, we’ve wavered between backtracking and moving forward. I think it’s accurate to say that we’ve gradually lost faith in freedom over the last 13 years, and recently, this loss of faith has become acute. When I look at people’s attitudes about it, they’re often afraid of freedom, thinking it will only allow the worst of humanity to roam free, and to lay waste to what everyone else has hoped for. Yes, some bad things will inevitably happen in such an environment. Bad stuff happens on the internet every day. Does that mean we should ban it, control it, contain it? If we don’t allow the opportunity that enables bad things to happen, we will not get good things, either. I’m all in favor of prosecuting the guilty who hurt others, but if we’re always in a preventative mode, we prevent that which could make our society so much better. You can’t have one without the other. It’s like trying to have your cake and eat it, too. It doesn’t work. If we’re so afraid of the depredations of our fellow citizens, then we don’t deserve the wonderful things they might bring, and that fact is being borne out in lost opportunities.

We have an example in our midst of what’s possible. Take the idea of it seriously, and consider how deprived your present existence would be if it wasn’t allowed to be what it is now. Take its principles, and consider widening the sphere of the system that allows all that it brings, beyond the digital, into our non-digital lives, and what wonderful things that could bring to the lives of so many.

The true promise of Interactive Computing: Leveraging our Collective IQ

Christina Engelbart has written an excellent summary of her late father’s (Doug Engelbart’s) ideas, and has outlined what’s missing from digital media.

Collective IQ Review

Interactive computing pioneer Doug Engelbart coined the term Collective IQ to inform the IT research agenda

Doug Engelbart was recognized as a great pioneer of interactive computing. Most of his dozens of prestigious awards cited his invention of the mouse. But in his mind, the true promise of interactive computing and the digital revolution was a larger strategic vision of using the technology to facilitate society’s evolution. His research agenda focused on augmenting human intellect, boosting our Collective IQ, enhancing human effectiveness at addressing our toughest challenges in business and society – a strategy I have come to call Bootstrapping Brilliance.

In his mind, interactive computing was about interacting with computers, and more importantly, interacting with the people and with the knowledge needed to quickly and intelligently identify problems and opportunities, research the issues and options, develop responses and solutions, integrate learnings, iterate rapidly. It’s about the people, knowledge, and tools interacting, how we intelligently leverage that, that provides the brilliant outcomes we…

Brendan O’Neill on the zeitgeist of the anti-modern West

I was taken with this interview on Reason.tv with Mr. O’Neill, a writer for Spiked, because he touched on so many topics that are pressing in the West. His critique of what’s motivating current anti-modern attitudes, and of what those attitudes should remind us of, is so prescient that I thought it deserved a mention. He is a Brit, so his terminology will be rather confusing to American ears.

He called what’s termed “political correctness” “conservative.” I’ve heard this critique before, and it’s interesting, because it looks at group behavior from a principled standpoint, not just what’s used in common parlance. A lot of people won’t understand this, because what we call “conservative” now is in opposition to political correctness, and would be principally called something approaching “liberal” (as in “classical liberal”). I’ve talked about this with people from the UK before, and it goes back to that old saying that the United States and England are two countries separated by a common language. What we call “liberal” now, in common parlance, would be called “conservative” in their country. It’s the idea of maintaining the status quo, or even the status quo ante; of shutting out, even shutting down, any new ideas, especially anything controversial. It’s a behavior that goes along with “consolidating gains,” which is adverse to anything that would upset the applecart.

O’Neill’s most powerful argument is in regards to environmentalism. He doesn’t like it, calling it an “apology for poverty,” a justification for preventing the rest of the world from developing as the West did. He notes that it conveniently avoids the charge of racism, because it’s able to point to an amorphous threat, justified by “science,” that inoculates the campaign from such charges.

The plot thickens when O’Neill talks about himself, because he calls himself a “Marxist/libertarian.” He “unpacks” that, explaining that he means “the early Marx and Engels,” who, he says, talked about freeing people from poverty, and from state diktat. He summed it up by quoting Trotsky: “We have to increase the power of man over Nature, and decrease the power of man over man.” He also used the term “progressive,” but Nick Gillespie explained that what O’Neill calls “progressive” is often what we would call “libertarian” in America. I don’t know what to make of him, but I found myself agreeing a lot with what he said in this interview, at least. He and I see much the same things going on, and I think he accurately voices why I oppose what I see as anti-modern sentiment in the West.

Edit 1/11/2016: Here’s a talk O’Neill gave with Nick Cater of the Centre for Independent Studies, called “Age of Endarkenment,” where they contrast Enlightenment thought with what concerns “the elect” today. He points out that the conflict between those who want ideas of progress to flourish and those who want to suppress societal progress has happened before. It happened pre-Enlightenment, and during the Enlightenment, and it will sound a bit familiar.

I’m going to quote a part of what he said, because I think it cuts to the chase of what this is really about. He echoes what I’ve learned as I’ve gotten older:

Now what we have is the ever-increasing encroachment of the state onto every aspect of our lives: How well we are, what our physical bodies are like, what we eat, what we drink, whether we smoke, where we can smoke, and even what we think, and what we can say. The Enlightenment was really, as Kant and others said, about encouraging people to take responsibility for their lives, and to grow up. Kant says all these “guardians” have made it seem extremely dangerous to be mature, and to be in control of your life. They’ve constantly told you that it’s extremely dangerous to run your own life. And he says you’ve got to ignore them, and you’ve got to dare to know. You’ve got to break free. That’s exactly what we’ve got to say now, because we have the return of these “guardians,” although they’re no longer kind of religious pointy-hatted people, but instead a kind of chattering class, and Greens, and nanny-staters, but they are the return of these “guardians” who are convincing us that it is extremely dangerous to live your life without expert guidance, without super-nannies telling you how to raise your children, without food experts telling you what to eat, without anti-smoking campaigners telling you what’s happening to your lungs. I think we need to follow Kant’s advice, and tell these guardians to go away, and to break free of that kind of state interference.

And one important point that [John Stuart] Mill makes in relation to all this is that even if people are a bit stupid, and make the wrong decisions when they’re running their life, he said even that is preferable to them being told what to do by the state or by experts. And the reason he says that’s preferable is because through doing that they use their moral muscles. They make a decision, they make a choice, and they learn from it. And in fact Mill says very explicitly that the only way you can become a properly responsible citizen, a morally responsible citizen, is by having freedom of choice, because it’s through that process, through the process of making a choice about your life that you can take responsibility for your life. He says if someone else is telling you how to live and how to think, and what to do, then you’re no better than an ape who’s following instructions. Spinoza makes the same point. He says you’re no better than a beast if you’re told what to think, and told what to say. And the only way you can become a man, or a woman these days as well–they have to be included, is if you are allowed to think for yourself to determine what your thought process should be, how you should live, and so on. So I think the irony of today, really Nick, is that we have these states who think they are making us more responsible by telling us not to do this, and not to do that, but in fact they’re robbing us of the ability to become responsible citizens. Because the only way you can become a responsible citizen is by being free, and by making a choice, and by using your moral muscles to decide what your life’s path should be.

Which computer icons no longer make sense to a modern user?

This question made me conscious of the fact that the icons that computer, smartphone, and many web interfaces use are a metaphor for the way in which the industry has designed computers for a consumer market. That is, they are to be used to digitize and simulate old media.

For example, the use of the now-obsolete floppy disk to represent “save”?

Which computer icons no longer make sense to a modern user?

The way this question is asked is interesting and encouraging: these icons no longer make sense to modern users. Another interesting question is, what should replace them? Without more powerful outlooks, however, I suspect it’s going to be difficult to come up with anything that really captures the power of this medium that is computing, and we’ll just default to ourselves as metaphors.

Psychic elephants and evolutionary psychology

Peter Foster, a columnist for the National Post in Canada, wrote what I think is a very insightful piece on a question that’s bedeviled me for many years, in “Why Climate Change is a Moral Crusade in Search of a Scientific Theory.” I have never seen a piece of such quality published on Breitbart.com. My compliments to them. It has its problems, but there is some excellent stuff to chew on, if you can ignore the gristle. One way I would have tried to improve on what he wrote is to go deeper into identifying the philosophies that are used to “justify the elephantine motivations.” As it is, Foster uses readily identifiable political labels, “liberal,” “left,” etc., as identifiers. This will please some, and piss off others. I really admired Allan Bloom’s efforts to get beyond these labels: to understand and identify the philosophies themselves, the philosophers who came up with them, the mistakes he perceived in “translating” them into an American belief system (which were at the root of his complaints), and the consequences that have been realized through their adherents. There’s much to explore there.

Foster also “trips” over an analogy that doesn’t really apply to his argument (though he thinks it does), in his reference to supposed motivations for believing Earth was the center of the universe; though aspects of the stubbornness of pre-Copernican thinking, to which he also refers, do apply.

He says a lot in this piece, and it’s difficult to fully grasp it without taking some time to think about what he’s saying. I will try to make your thinking a little easier.

He begins by trying to understand the reasons for, and the motivational consequences of, economic illiteracy in our society. He uses a notion from evolutionary psychology (perhaps from David Henderson, to whom he refers): that our brains have been shaped in part by hundreds of thousands of years (perhaps millions; who knows how far back it goes) of tribal society, so that our natural perceptions of human relations, regarding power and wealth, and what is owed as a consequence of social status, are influenced by our evolutionary past.

Here is a brief video from Reason TV on the field of evolutionary psychology, just to get some background.

Edit 2/16/2016: I’ve added 3 more paragraphs relating to another video I’m adding, since it relates more specifically to this topic.

The video, below, is intriguing, but I can’t help but wonder if the two researchers, Cosmides and Tooby, are reading current issues they’re hearing about in political discourse into “stone age thinking” unjustifiably, because how do we know what stone age thinking was? I have to admit, I have no background in anthropology or archaeology at this point. I might need that to give more weight to this inference. The topic they discuss here, about a common misunderstanding of market economics, relates back to something they discussed in the above video, about humans trying to detect and form coalitions, and how market mechanisms have the effect of appearing to interfere with coalition-building strategies. They say this leads to resentment against the market system.

What this would seem to suggest is that the idea that humans are drastically changing our planet’s climate system for the worse is a nice salve for that desire for coalition building, because it leads one to a much larger inference: that market economics (the perceived enemy of coalition strategies) is a problem that transcends national boundaries. The constant mantra of warmists that “We must act now to solve it” appears to demand a coalition, which to those who feel disconnected by markets feels very desirable.

One of the most frequent desires I’ve heard from those who believe that we are changing our climate for the worse is that they only want to deal with market participants “who care about me and my community.” Cosmides and Tooby say this relates back to our innate desire to build coalitions, and is evidence that these people feel the market system is interfering, or not cooperating, in that process. What they say, as Foster does, is that this reflects a lack of understanding of market economics, and a total ignorance of the benefits its effects bring to humanity.

Foster says that our modern political and economic system, which frustrates tribalism, has only been a brief blink of an eye in our evolutionary experience, by comparison. So we still carry an evolutionary heritage that forms our perceptions of fairness and social survival, and it emerges naturally in our perceptions of our modern systems. He says this argument is controversial, but he justifies using it by saying that there is an apparent pattern to the consequences of economic illiteracy. He notices a certain consistency in the arguments that are used to morally challenge our modern systems, which does not seem to be dependent on how skilled people are in their niche areas of knowledge, or unskilled in many areas of knowledge. It’s important to keep what he says about this in mind throughout the article, even though he goes on at length into other areas of research, because it ties in in an important way, though he does not refer back to it.

He doesn’t say this, but I will. At this point, I think that the only counter to the natural tendencies we have (regardless of whether Foster describes them accurately or not) is an education in the full panoply of outlooks that formed our modern society, and an understanding of how they have advanced since their formation. In our recent economy, there’s a tendency to think about narrowing the scope of education towards specialties, since each field is so complex, and takes a long time to master, but that will not do. If people don’t get the opportunity to explore, or at least experience, these powerful ways of understanding the world, then the natural tendency towards a “low-pass filter” will dominate what one sees, and our society will have difficulty advancing, and may regress. A key phrase Foster uses is, “Believing is seeing.” We think we see with our eyes, but we actually see with our beliefs (or, in scientific terms, our mental models). So it is crucial to be conscious of our beliefs, and to be capable of examining and questioning them. Philosophy plays a part in “exercising” and developing this ability, but I think this really gets into the motivation to understand the scientific outlook, because this is truly what it’s about.

A second significant area Foster explores is a notion of moral psychology from Jonathan Haidt, who talks about “subconscious elephants,” which are “driving us,” unseen. We justify our actions using moral language, giving the illusion that we understand our motivations, but we really don’t. Our pronouncements are more like PR statements, convincing others that our motivations are good, and should be allowed to move forward, unrestricted. However, without examining and understanding our motivations, and their consequences, we can’t really know whether they are good or not. Understanding this, we should be cautious about giving anyone too much power–power to use money, and power to use force–especially when they appeal to our moral sensibility to give it to them.

Central to Foster’s argument is that “climate change” is a moral crusade, a moral argument, not a scientific one, which uses the authority our society gives to science to push aside skepticism and caution regarding the actions taken in its name under “climate policy.”

Foster excuses the people who promote the hypothesis of anthropogenic global warming (AGW) as fact, saying they are not frauds who are consciously deceiving the public. They are riding pernicious, very large, “elephants,” and they are not conscious of what is driving them. They are convinced of their own moral rightness, and they are honest, at least, in that belief. That should not, however, excuse their demands for more and more power.

I do not mean what I say here to be a summary of Foster’s article. I encourage you to read it. I only mean to make the complexity of what he said a bit more understandable.

Related posts: Foster’s argument has made me re-examine a bit what I said in the section on Carl Sagan in “The dangerous brew of politics, religion, technology, and the good name of science.”

I’m taking a flying leap with this, but I have a suspicion my post called “What a waste it is to lose one’s mind,” exploring what Ayn Rand said in her novel, “Atlas Shrugged,” perhaps gets into describing Haidt’s “motivational elephants.”

For high school students, and their parents: Some things to consider before applying for college

Back in 2009 I wrote a post called “Getting an education in America.” I went through a litany of facts which indicated we as a society were missing the point of education, and wasting a lot of time and money on useless activity. I made reference to a segment with John Stossel, back when he was still a host on ABC’s 20/20, talking about the obsession we have that “everyone must go to college.” One of the people Stossel interviewed was Marty Nemko, who made a few points:

  • The bachelor’s degree is the most overvalued product in America today.
  • The idea marketed by universities that you will earn a million dollars more over a lifetime with a bachelor’s than with a high school diploma is grossly misleading.
  • The “million dollar” figure is based on accurate stats, but of ambitious high achievers who just so happened to have gone to college, and who didn’t require the education to be successful. Their success is misattributed to their education, and it’s unethical that universities continue to use the figure in their sales pitch, saying, “It doesn’t matter what you major in.”

It turns out Nemko has had his own channel on YouTube. I happened to find a video of his from 2011 that really fleshes out the points he made in the 20/20 segment. What he says sounds very practical to me, and I encourage high school students, and their parents, to watch it before proceeding with a decision to go to college.

Nemko talks about what college really is these days: a business. He talks about how the idea that “everyone must go to college” has created a self-defeating proposition: now that so many more people, in proportion to the general population, are getting bachelor’s degrees, getting one is not a distinction anymore. It doesn’t set you apart as someone who is uniquely skilled. He advises that if you want the distinction that used to come from a bachelor’s, you should get a master’s degree. He talks about the economics of universities, and where undergraduates fit into their cost structure. This is valuable information to know, since students are going to have to deal with these realities if they go to college.

It’s not an issue of rejecting college, but of assessing whether it’s really worth it for you. He also outlines some other possibilities that could serve you a lot better, if what motivates you is not well suited to a 4-year program.

Nemko lays out his credentials. He’s gotten a few university degrees himself, and he’s worked at high levels within universities. He’s not just some gadfly who badmouths them. I think he knows of what he speaks. Take a listen.

A new visiting scholar comes to CU Boulder

The University of Colorado at Boulder started a visiting scholar program for conservative thought last year. I had my doubts about it. I don’t like the idea of “affirmative action for certain ideologies.” One would think that if a university’s mission were to educate, it wouldn’t care what one’s political leanings were. That’s a private matter. I would think a university, fully cognizant of its role in society, would look for people who are not only highly qualified and dedicated to academic work, but would also seek a philosophical balance, not an ideological one. However, it has been noted many times how politically slanted university faculty are, at least in their political party registration. Looking at the stats, one would think that the institutions have in fact become bastions for one political party or another, and listening to the accounts of some scholars and students, you’d think that the arts & humanities colleges have become training grounds for political agitators and propagandists. I don’t find that encouraging. The fact that for many years universities have not used this apparent tilt toward ideological purity as an opportunity for introspection about what they are actually teaching, but seem rather to take it as a mark of pride, is also troubling. All of the excuses I’ve heard over the years sound like prejudices against classical thought. I’d like to ask them, “Can you come up with anything qualitatively better?” (if they’ve even thought about that), but I’m afraid I’d be disappointed by the answer while they high-five each other.

Having actually witnessed a bit of the conservative thought program at CU (seeing a couple of the guest speakers), I’m pleased with it. It has an academically conservative slant, and, from what I’ve seen, avoids the “sales pitch” for itself. Instead, it argues from a philosophical perspective that is identified as conservative by society. The most refreshing thing is it’s open to dialogue.

The first professor in the program, Dr. Steven Hayward, wrote a couple excellent speeches I read on political discourse.

I thought I would highlight the profile that was written for the next professor in the program, Dr. Bradley Birzer. He appears to be a man after my own heart on these matters. I’m looking forward to what he will present.

How would you characterize the state of political discourse in the United States today?

Terrible. Absolutely terrible. But, I must admit, I write this as a 46-year-old jaded romantic who once would have given much of his life to one of the two major political parties.

Political discourse as of 2014 comes down to two things: 1) loudness, and 2) meaningless nothings. Oration is a dead art, and the news from CNN, Fox, and other outlets is just superficial talking points with some anger and show. Radio is just as bad, if not worse. As one noted journalist, Virginia Postrel, has argued, we probably shouldn’t take anything that someone such as Ann Coulter says with any real concern, as she is “a performance artist/comedian, not a serious commentator.”

Two examples, I think, help illustrate this. Look at any speech delivered by almost any prominent American from 1774 to 1870 or so. The speeches are rhetorically complicated, the vocabulary immense, and the expectations of a well-informed audience high. To compare the speech of a 1830s member of Congress with one—perhaps even the best—in 2014 is simply gut-wrenchingly embarrassing.

Another example. The authors of the Constitution expected us to discuss the most serious matters with the utmost gravity. Nothing should possess more gravitas in a republic than the issue of war. Yet, as Americans, we have not engaged in a properly constitutional debate on the meaning of war since the close of World War II. We’ve seen massive protests, some fine songs, and a lot of bumper stickers, but no meaningful dialogue.

As a humanist, I crave answers for this, and I desire a return to true—not ideological—debate and conversation. Academia has much to offer the larger political world in this.

How do you view the value of higher education today, particularly given its rising cost and rising student-loan burden?

I’m rather a devoted patriot of and for liberal education. From Socrates forward, the goal of a liberal education has been to “liberate” the human person from the everyday details of this world and the tyranny of the moment. Our citizenship, as liberally educated persons, belongs to the eternal Cosmopolis, not to D.C. or London or. . . .

But, in our own titillation with what we can create, we often forget what came before and what will need to be passed on in terms of ethics and wisdom. The best lawyer, the best engineer, the best chemist, will be a better person for knowing the great ideas of the past: the ethics of Socrates; the sacrifice of Perpetua; and the genius of Augustine.

Identifying what I’m doing

As I’ve been blogging here, there have been various issues that I’ve identified, which I think need to be remedied in the world of computing, though they’re ideas that have occurred to me as I’ve written about other subjects. I went through a period, early on in writing here, of feeling lost, but exploring about, looking at whatever interested me in the moment. My goal was to rediscover, after spending several years in college, getting my CS degree, and then several more years in professional IT services, why I thought computers were so interesting so many years ago, when I was a teenager. I had this idea in my head back then that computers would lead society to an enlightened era, a new plateau of existence, where people could try out ideas in a virtual space, the way I had experienced on early computers, with semi-interactive programming. I saw the computer as always being an authoring platform, and that the primary method of creating with it would be through programming, and then through various apparatus. I imagined that people would gain new insight about their ideas through this work, the way I had. I did not see this materialize in my professional career. So I decided to leave that world for a while. I make it sound here like it was a very clear decision to me, but I’m really saying this in hindsight. Leaving that world was more of a gradual process, and for a while I wasn’t sure I wanted to do it. It felt scary to leave the only work I knew as a professional, which gave me a sense of pride, and meaning. I saw too many problems in it, though, and it eventually got to me. I expected IT to turn out better than it did, to become more sane, to make more sense. Instead the problems in IT development seemed to get worse as time passed. It made me wonder how much of what I’d done had really improved anything, or if I was just going through the motions, pretending that I was doing something worthwhile, and getting a paycheck for it. 
On top of that, I saw a major shift occurring in software development, perhaps for the better, but it was disorienting nevertheless. I felt the need to explore in my own way, rather than going down the beaten path that someone else, or some group, had laid out for me.

I spent as much time as I could trying to find better ideas for what I wanted to do. I also spent a lot of time trying to understand Alan Kay’s perspective on computing, as I was, and am, deeply taken with it. That’s what this blog has largely been about.

When people have asked me, “What are you doing?” or, “What are you up to?” I’ve been at a loss to explain it. I could explain generically: “I’ve been doing some reading and writing,” “I’m looking at an old programming language,” or, “I’m studying an operating system.” They’re humdrum explanations, but they’re the best I could do. I didn’t have any other words for it. I felt like I couldn’t say I was wandering, going here and there, sampling this and that. I thought that would sound dangerously aimless to most of the people I know and love.

I found this video by Bret Victor, “Inventing on Principle,” and finally I felt as though someone put accurate, concise words to what I’ve been trying to do all this time.

http://vimeo.com/36579366

His examples of interactive programming are wonderful. I first experienced this feeling of a fond wish come true, watching such a demonstration, when Alan Kay showed EToys on Squeak back in 2003. I thought it was one of the most wonderful things I had witnessed. Here was what I had hoped things would progress to. It was too bad such a principle wasn’t widespread throughout the industry. It should be.

What really excited me, watching Bret’s video, is that at 18 minutes in he demonstrated a programming environment that looks similar to a programming language design I sketched out last fall, except what I had in mind looked more like the “right side” of his interactive display, which showed example code, than the “left side.”

I was intrigued by an idea I heard Alan Kay express, by way of J.C.R. Licklider, of “communicating with aliens” as a general operating principle in computing. Kay said that one can think of it more simply as trying to communicate with another human being who doesn’t understand English. You can still communicate somewhat accurately by making gestures in a context. If I had to describe my sketch in concise terms, I’d call it “programming by suggestion.” I had a problem in mind of translating data from one format into another, and I thought rather than using a traditional approach of inputting data and then hard-coding what I wanted it translated into, “Why not try to express what I want to do using suggestions,” saying, “I want something like this…,” using some mutually recognizable patterns as primitives, and having the computer figure out what I want by “taking the hints,” and creating inferences about them. It sounds more like AI, though I was trying to avoid going down that path if I could, because I have zero skill in AI. Easier said than done. I haven’t given up on the idea, though as usual I’ve gone off on some other interesting diversions since then. I had an idea of building up to what I want from simpler system designs. I may yet continue to pursue that avenue.
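The kind of inference I have in mind here is close to what’s sometimes called “programming by example.” As a toy illustration only (this is my sketch, not the design I described above; the function names and the field-matching heuristic are invented for the example), a program can “take the hint” from a single example input/output pair and infer a format translation to apply to fresh data:

```python
# A minimal sketch of "programming by suggestion": rather than hard-coding
# a data translation, supply one example of the result you want ("I want
# something like this...") and let the program infer the mapping.

def infer_mapping(example_in, example_out):
    """Guess which input field each output field came from,
    by matching values within a single example record."""
    mapping = {}
    for out_key, out_val in example_out.items():
        for in_key, in_val in example_in.items():
            if in_val == out_val:
                mapping[out_key] = in_key
                break
    return mapping

def translate(record, mapping):
    """Apply the inferred field mapping to a new record."""
    return {out_key: record[in_key] for out_key, in_key in mapping.items()}

# The "suggestion": one example of the translation we want.
example_in = {"first": "Ada", "last": "Lovelace", "born": 1815}
example_out = {"name": "Ada", "surname": "Lovelace"}

mapping = infer_mapping(example_in, example_out)

# The computer "takes the hint" and applies it to fresh data.
print(translate({"first": "Alan", "last": "Kay", "born": 1940}, mapping))
# prints {'name': 'Alan', 'surname': 'Kay'}
```

A real system along these lines would need much richer pattern matching than equality of values, and a way to rank competing inferences, which is where it starts shading into AI.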

I’ve used each problem I’ve encountered in pursuit of my technical goals as an opportunity to pursue an unexamined idea I consider “advanced” in difficulty. So I’m not throwing any of the ideas away. I’m just putting them on the shelf for a while, while I address what I think are some underlying issues, both with the idea itself, and my own knowledge of computing generally.

Bret hit the nail on the head for me about 35 minutes into his talk. He said that pursuing the way of life he described is a kind of social activism using technology. It’s not in the form of writing polemics, or trying to create a new law, or change an existing one. Rather, it’s creating a new way for computers to operate as a statement of principles. This gets to what I’ve been trying to do. He said finding your principle is a journey of your own making, and it can take time to define. I really liked that he addressed the question of why you’re here. I’ve felt that pursuit very strongly since I was in high school, and I’ve gone down some dead ends since then. He said it took him 10 years to find his principle. Along the way he felt unsure just what he was pursuing. He saw things that bothered him, and things that interested him, and he paid attention to these things. It took time for him to figure out whether something was just interesting for a time, or whether it was something that fit into his reason for being. For me, I’ve been at this in earnest since 2006, when I was in my mid-30s, and I still don’t have a defined sense of what my principle of work is. One reason for that is I only started working along these lines a year and a half ago. I feel like a toddler. I have a sense of what my interest centers around, though: I want to see system design improved. I’ve stated the different aspects of this in my blog from time to time. Still, I find new avenues to explore, lately in non-technical contexts.

I’ve long had an interest in what makes a modern civilization tick. I thought of minoring in political science when I was in college. My advisor joked, “So, you’ll get your computer science degree, and then run for office, eh?” Not exactly what I had in mind… I had no idea how the two related in the slightest. I ended up not going through with the minor, as it required more reading than I could stomach, though I took a few poli-sci courses in the meantime. In hindsight, I was interested in politics (and continue to be) because I saw that it was the way our society made decisions about how we were going to relate and live with each other in a future society. I think I also saw it as a way for society to solve its problems, though I’m increasingly dubious of that view. That interest still tugs at me, though recently it’s drawn me closer to studying the different ways that humans think, and perhaps how we can think better, as this relates a great deal to how we conduct politics, which affects how we relate to and live with each other as a society. I see this as mattering a great deal, so it may become a major avenue as I progress in trying to define what I’m doing. I don’t see politics as a goal of mine anymore. It’s more a means for getting to other pursuits I am developing on societal issues.

When Bret talked about what “hurt” him to see, this focus on politics came into view for me. When I think about the one thing that pains me, it’s seeing people use weak arguments (created using a weak outlook for the subject) to advance major causes that involve large numbers of people’s lives, because I think the end result can only be misguided, and perhaps dangerous to a free society in the long run. When I see this I feel compelled to intervene in the discussion in some way, and to try to educate the people involved in a better way of seeing the issue they’re concerned about, though I’m a total amateur at it. Words often fail to explain the ideas I’m trying to get across, because the people receiving them don’t understand the outlook I’m assuming, which suggests that I either need a different way of expressing those ideas, or I need to get into educating children about powerful outlooks, or both. Besides that, most adults don’t like to have their ideas challenged, so their minds are closed to new ideas right from the start. And I’ve realized that while it’s legitimate to try to act on this principle, I need to approach how I apply it differently. Beating one’s head against a wall is not a productive use of time.

It has sometimes gotten me reflecting on how societies, going back more than 100 years, degenerated into regimented autocracies, which also used terribly weak arguments to justify their existence, with equally terrible consequences. We are not immune from such a fate. Such regimes were created by human beings just like us. I have some ideas for what I can do to address this concern, and that of computing at the same time, though I feel I now have a more realistic sense of the breadth of the effect I can have within my lifetime, which is quite limited, even if I were to fully develop the principles for my work. A way has already been paved for what I describe by another techno-social activist, but I don’t have a real sense yet of what going down that avenue means, or even if reconciling the two is really what I want to do. As I said, I’m still in the process of finding all this out.

Thinking on this, a difference between myself and Bret (or at least what he’s expressed so far) is that like Alan Kay and Doug Engelbart, I think I see my work in a societal context that goes beyond the development of technology. Kay has dedicated himself to developing a better way to educate people in the great ideas humans have invented. He uses computers as a part of that, but it’s not based in techno-centrism. His goal is not just to create more powerful “mind amplifiers” or media. The reason he does it is as part of a larger goal of creating the locus for a better future society.

I am grateful to Bret for talking so openly about his principle, and how he arrived at it. It’s reassuring to be able to put a term to what I’m trying to achieve.

Related posts:

Coding like writing

The “My Journey” series

Why I do this

Are we future-oriented?

The death of Neil Armstrong, the first man to walk on the Moon, on August 25 got me reflecting on what was accomplished by NASA during his time. I found a YouTube channel called “The Conquest of Space,” and it’s been wonderful getting acquainted with the history I didn’t know.

I knew about the Apollo program from the time I was a kid in the 1970s. I was born two months after Apollo 11, so I only remember it in hindsight. By the time I was old enough to be conscious of the Apollo program’s existence, it had been mothballed for four or five years. I could not be ignorant of its existence, though. It was talked about often on TV, and in the society around me. I lived in Virginia, near Washington, D.C., in my early childhood, and I remember I used to be taken regularly to the Smithsonian Air and Space Museum. Of all of their museums, it was my favorite. There, I saw video of one of the moon walks, the space suits used for the missions (on mannequins), and the Command Module and Lunar Module at full scale: artifacts of a time that had come and gone. There was hope that someday we would go back to the Moon, and go beyond it to the planets. The Air and Space Museum had an IMAX movie that was played continuously, called “To Fly.” From what I’ve read, they still show it. It was produced for the museum in 1976. I remember watching it a bunch of times. It was beautifully done, though looking back on it, it had the feel of a “demo” movie, showing off what could be done with the IMAX format. It dramatizes the history of flight, from hot air balloons in the 19th century, to the jet age, to rockets to the Moon. A cool thing about it is that it talked about the change in perspective that flight offered, a “new eye.” At the end it predicted that we would have manned space missions to the planets.

Why wouldn’t we have manned missions that venture to the planets, and ultimately, perhaps a hundred years off, to other star systems? It would just be an extension of the advancements in flight we had made on earth. The idea that we would keep pushing the boundaries of our reach seemed like a given, that this technological pace we had experienced would just keep going. That’s what everything that was science-oriented was telling me. Our future was in space.

In 1980 Carl Sagan presented a landmark television series on science called “Cosmos.” He talked about the history of space exploration, mostly from the ground, and how our destiny was to travel into space. He said, introducing the series,

The surface of the earth is the shore of the cosmic ocean. On this shore we’ve learned most of what we know. Recently we’ve waded a little way out, maybe ankle deep, and the water seems inviting. Some part of our being knows this is where we came from. We long to return, and we can…

Winding down?

As I got into my twenties, in the 1990s, I started to worry about NASA’s robustness as a space program. It started to look like a one-trick pony that only knew how to launch astronauts into low-earth orbit. “When are we going to return to the Moon,” I’d ask myself. NASA sent probes out to Jupiter, Mars, and then Saturn, following in the footsteps of Voyager 1 and 2. Surely similar questions were being asked of NASA, because I’d often hear them say that the probes were forerunners to future manned space flight, that they were gathering information that we needed to know in advance for manned missions, holding out that hope that someday we’d venture out again.

The Space Shuttle was our longest running space program, spanning 30 years, from 1981 to 2011. Back in the late 1990s I remember Vice President Al Gore announcing the winner of the contract to build the next-generation space shuttle (the X-33), which would take the place of the older models, but it never came to be. Under the administration of George W. Bush, the Constellation program started in 2005, with the idea of further developing the International Space Station, returning astronauts to the Moon, establishing a base there for the first time, and then launching manned missions to Mars. This program was cancelled in 2010 by the Obama Administration, and nothing has replaced it. I heard some criticism of Constellation, saying that it was ill-defined, and an expensive boondoggle, though it was defended by Neil Armstrong and Gene Cernan, two Apollo astronauts. Perhaps it was ill-defined, and a waste of money, but it felt sad to see the Space Shuttle program end, and to see that NASA had no way of its own to get astronauts into low-earth orbit, or to the International Space Station. The original idea was to have the first stage of the Constellation program follow after the space shuttles were retired. Now NASA has nothing but rockets to send out space probes and robotic rovers to bodies in space. Even the Curiosity rover mission, now on Mars, was largely developed during the Bush Administration, so I hear.

I have to remember at times that even in the 1970s, during my childhood, there was a lull in the manned space program. The Apollo program was ended in the Nixon Administration, before it was finished. There was a planned flight, with a rocket ready to go, to continue the program after Apollo 17, but it never left the ground. A Saturn V rocket that was meant for one of the later missions lies today as a display model on the grounds of the Kennedy Space Center. I have to remember as well that then, as now, the program was ended during a long, drawn-out war. Then, it was in Vietnam. Now, it’s in the Middle East.

American manned space flight ended for a time after the final Skylab crew returned in 1974, and the Apollo-Soyuz flight in 1975. It didn’t begin again for another six years, with the first launch of the Space Shuttle in 1981. The difference is the Shuttle was first conceptualized towards the end of the Apollo program. It was there as a goal. Perhaps we are experiencing the same gap in manned flight now, though I don’t have a sense that NASA has a “next mission” in mind. As best I can tell, the Obama Administration has tasked NASA with supporting private space flight. There is good reason to believe that private space flight companies will be able to send astronauts into low-earth orbit soon. That’s a consolation. The thing is, that’s likely all they’re going to do for the foreseeable future: launch to low-earth orbit. They’re at the stage the Mercury program was at more than 50 years ago.

What I ask is: do we have anything beyond this in mind? Do we have a sense of building on the gains in knowledge that have been made, to venture out beyond what we now know? I grew up being told that “humans want to explore, to push the boundaries of what we know.” I guess we still do, but maybe we’re directing that impulse in new ways here on earth, rather than into space. I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence. Astrophysicist, and vocal advocate for NASA, Neil deGrasse Tyson has worried about this, too.

I realized a few years ago, to my dismay, that what really drove the creation of the space program, and our flights to the Moon, was not an ambition to push our frontiers of knowledge just for the sake of gaining knowledge. There was a major political aspect to it: beating the Soviets in “the space race” of the 1960s, establishing higher ground for ourselves, in a military sense. Yes, some very valuable scientific and engineering work was done in the process, but as Tyson would say, “science hitched a ride on another agenda.” That’s what it’s often done in human history. Many non-military benefits to our society flowed from what NASA once did, none of which are widely recognized today. Most people think that our technological development came from innovators in the private sector alone. The private sector did a lot, but they also drew from a tremendous resource in our space and defense research and development programs, as I’ve documented in earlier posts.

I’ll close with this great quote. It echoes what Tyson has said, though it’s fleshed out in an ethical sense, too, which I think is impressive.

The great enemy of the human race is ignorance. It’s what we don’t know that limits our progress. And everything that we learn, everything that we come to know, no matter how esoteric it seems, no matter how ivory tower-ish, will fit into the general picture, a block in its proper place, that in the end will make it possible for mankind to increase and grow; become more cosmic, if you wish; become more than a species on Earth, but become a species in the Universe, with capacities and abilities we can’t imagine now. Nor do I mean greater and greater consumption of energy, or more and more massive cities.

It’s so difficult to predict, because the most important advances are exactly in the directions that we now can’t conceive, but everything we now do, every advance in knowledge we now make, contributes to that. And just because I can’t see it, and I’m an expert at this, … doesn’t mean it isn’t there. And if we refuse to take those steps, because we don’t see what the future holds, all we’re making certain of is that the future won’t exist, and that we will stagnate forever. And this is a dreadful thought. And I am very tired when people ask me, “What’s the good of it,” because the proper answer is, “You may never know, but your grandchildren will.”

— Isaac Asimov, 1973, from the NASA film “Small Steps, Giant Strides”

Then as now, this is the lament of the scientist, I think. Scientists must ask society’s permission to explore, because they usually need funds from others to do their work, and there is no immediate payback to be had from it. Justifying the funding of that work is tough for this reason: scientific work falls outside the normal set of expectations people have about what is of value. If the benefits can’t be seen here and now, many wonder, “What’s the point?” What Asimov pointed out is that the pursuit of knowledge is its own reward, but to really gain its benefits you must be future-oriented. You have to think about and value the world in which your children and grandchildren will live, not your own. If your focus is on the here and now, you will not value the future, and so potential future benefits of scientific research will not seem valuable, and therefore will not seem worthy of pursuit. It is a cultural mindset that is at issue.

Edit 12-10-2012: Going through some old articles I’d saved, I came upon an essay about humanity’s capacity for intellectual thought, called “Why is there Anti-Intellectualism?”, by Steven Dutch at the University of Wisconsin-Green Bay. It provides some reasonable counter-notions to my own, ones that seem to confirm what I’ve seen, though it will still take some contemplation on my part.

There’s no science in the article. In terms of quality, at best, I’d call it an “executive summary.” Maybe there’s more detailed research behind it, but I haven’t found it yet. Dutch uses heuristics to provide his points of comparison, and uses a notion of evidence to put some meat on the bones. He asks some reasonable questions that are worth contemplating, challenging the notion that “humans are naturally curious, and strive to explore.” He then makes observations that seem to come from his own experience. Overall, he provides a reasonable basis for answering a statement I made in this article: “I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence.” He comes down on the side of saying, in his opinion (paraphrasing), “Yes, some in the scientific community have fooled themselves on this issue.” He discusses the notion that “humans are naturally curious,” due to the behavior exhibited by children. He concludes by saying that children naturally display a shallow curiosity, which he calls “tinkering.” The harder task of creative, deep thought does not come naturally. It’s something that needs to be cultivated to take root. Hence the need for schools. The question I think we as citizens should be asking is whether our schools are actually doing this, or something else.