
I thought I’d share the video below, since it has some valuable insights on what computer science should be, and what education should be, generally. It’s all integrated together in this presentation, and indeed, one of the projects of education should be integrating computer science into it, though not with the explicit purpose of creating more programmers for future jobs, even though students could always use it for that. Alan Kay presents a different definition of CS than the one pursued in universities today. He refers to how Alan Perlis defined it (Perlis was the one who came up with the term “computer science”), which I’ll get to below.

This thinking about CS and education provides, among other things, a pathway toward reimagining how knowledge, literature, and art can be presented, organized, dissected, annotated, and shared in a way that’s more meaningful than can be achieved with old media. (For more on that, see my post “Getting beyond paper and linear media.”) Like the late Carl Sagan, Kay advocates here for a general model for the education of citizens, not just professional careerists.

Another reason I thought of sharing this is that several years ago I remember hearing that Kay was working on rethinking the computer science curriculum. What he presents here is more of a suggestion: “Start thinking about it this way.” (He takes a dim view of the concept of curriculum, since it suggests setting students on a rigid path of study, with the intent of creating minds with a cookie cutter, though he is in favor of a classical liberal arts education, with the prescriptions that entails.)

As he says in the presentation, there’s a lot to develop in the practice of education in order to bring this to fruition.

This is from 2015:

I wrote notes below for some of the segments he talks about, just because I think his presentation bears some interpretation for many readers. He uses metaphors a lot.

The bicycle

This is an analogy for how an apparatus, or a subject, is typically modified for education. We take the optimized, adult version of something and add compensators, which let beginners use it without falling all over themselves. It’s seen as easier to present it this way, and as a skill-building experience: in order to learn how to do something, you need to use “the real thing.” Beginners can put on a good show of using this sort of apparatus, or modified subject, but the problem is that it doesn’t teach a beginner how to become good at using the real thing at its full potential. The compensators become a crutch.

A better idea, he said, is to use an apparatus, or a component of the subject, that lets a beginner get a feel for the thing in a way that conveys significant aspects of its potential, but without the optimizations, or the way adults use it, which make it too complicated for a beginner to operate under their own power. In this case, a lowbike is better. This beginner apparatus, or component, is more like the first version of the thing you’re trying to teach. Bicycles were originally more like scooters, without pedals or a chain: you’d sit in the seat, push it along with your legs, kind of “running while sitting,” glide, and turn by shifting your weight and leaning into the turn.

Once a beginner gets a feel for that, they can deal with the optimized version, or a scaled down adult version, with pedals and a chain to drive the bike, because all that adds is the ability to get more power out of it. It doesn’t change any of the fundamentals of how you use it.

This gets to an issue of pedagogy, that learners need to learn something in components, rather than dealing with the whole thing at once. Once they learn one capacity, they can move on to the next challenge in learning the “whole thing.”

Radiation vs. nouns

He put forward a proposition for his talk: he’s mixing a bunch of ideas together, because they overlap. The metaphor fits, because most of his talk is about what we are as human beings, and how society should situate and view education. Only a little of it is actually about computer science, but all of it is germane to talking about computer science education.

He also gives a little advice to education reformers. He pointed out what’s wrong with education as it is right now, but rather than cursing it, he said one should make a deliberate effort to “build a tribe” or coalition with those who are causing the problem, or are in the midst of it, and suggest ways to bring them into a dignified position, perhaps by sharing in their honors, or, as his example illustrated, their shame. I draw some of this interpretation from Plato’s cave allegory.

Cooperation and competition in society

I once heard Kay talk about this many years ago. He said that, culturally, modern corporations are like the ancient hunter-gatherers: they exploit the resources of an area for a while, and once those are exhausted, they move on. As a culture, they have yet to learn about democracy, which requires more of a “settlement” mentality toward what they’re doing. Here, he used an agricultural metaphor to talk about a cooperative society that creates the wealth that is then used by competitive forces within it. What he means is that the true wealth is the knowledge that’s ultimately used to develop new products and services; it’s not all developed inside the marketplace. He doesn’t talk about this, but I will: even though a significant part of the wealth (as he said, you can think of it as “potential energy”) is generated inside research labs, what research labs tend to lack is knowledge of what the members of society can actually understand of this developed knowledge. That’s where the competitive forces in society come in, because they understand this a lot better. They can negotiate between how much of the new knowledge to put into a product, and how much it will cost, to reach as many people as possible. This is what happened in the computer industry in decades past.

I think I understand what he’s getting at with the agricultural metaphor, because farmers don’t just want to reap a crop for one season. Their livelihood depends on maintaining the fertility of their land. That requires more than just exploiting what’s there season after season; otherwise you get the Dust Bowl. If instead practices are modified to allow the existing land to become fertile again, or, in the case of hunter-gathering, the environment is aggressively managed to create favorable grazing to attract game, then you can create a cycle of exploitation and care such that a group can stay in one area for a long time, without denying themselves any of the benefits of how they live. I think what he suggests is that if corporations would modify their behavior to a more settled, agricultural model, using some of their profits to help educate the society in which they exist, and to fund scientific research on a private basis, that would “regenerate the soil” for economic growth, which can then fuel more research, creating a cycle of renewal. No doubt the idea he’s presenting includes the businesses that would participate in doing this. They should be allowed to profit (“reap”) from what they “sow,” but the point is they’re not the only ones who can profit. Other players in the marketplace can also exploit the knowledge that’s generated, and profit as well. That’s what was done in the past with private research labs.

He attributes the lack of this to culture: not realizing that the economic model being used is not sustainable. Eventually, you use up the “soil,” and it becomes “infertile” and “blows away”; in the case of hunter-gathering, the “good hunting grounds” are used up.

He makes a crucial point, though, that education is not just about jobs and competitiveness. It’s also about inculcating what citizenship really means. I’m sure if he were asked to drill down on this more, he would suggest a classical education, along with a modified math and science curriculum that would give students a sense of what those subjects are really like.

The sense I get is that he’s advocating something close to an Andrew Carnegie model of corporate stewardship: after Carnegie made his money, he directed his philanthropy toward building schools and libraries. Kay would just add science labs to that mix. (He mentions this later in his talk.)

I feel it necessary to note that not all for-profit entities would be able to participate in funding these cooperative activities, because their profit margins are so slim. I don’t think that’s what he’s expecting out of this.

What we are, to the best of our knowledge

He gives three views into human mental capacity: the way we perceive (theatrical), how much we can perceive at any moment in time (1 ± 2), and how educators should view us psychologically and mentally (as more primate and mammalian). This relates to neuroscience, and to some extent, evolutionary psychology.

The raison d’être of computer science

The primary purpose of computer science should be developing a science of systems in process, and that means all processes: mechanical processes, technological processes, social processes, biological processes, mental processes, etc. This relates to my earlier post, “Beginning the journey of becoming a computer scientist.” It’s partly about developing a new kind of mathematics for modeling processes. Alan Turing did this with his Turing machine concept, though he was trying to model the process of generating mathematical statements, and performing mathematical tests on them.
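
To make this concrete, here is a minimal sketch of Turing’s idea in Scheme (my own construction, for illustration; it isn’t from Kay’s talk or Turing’s paper): a process reduced to nothing but a rule table, a tape, and a read/write head. This little machine just flips the bits of its input and halts when it reaches a blank.

;; A rule has the form: ((state symbol) next-state symbol-to-write move)
(define rules
  '(((start 1) start 0 right)   ; read a 1: write 0, move right
    ((start 0) start 1 right)   ; read a 0: write 1, move right
    ((start _) halt  _ stay)))  ; read a blank: halt

(define (lookup state symbol)
  (cond ((assoc (list state symbol) rules) => cdr)
        (else #f)))

(define (take lst n)            ; first n elements of lst
  (if (zero? n) '() (cons (car lst) (take (cdr lst) (- n 1)))))

(define (tape-read tape pos)    ; cells beyond the end read as blanks
  (if (< pos (length tape)) (list-ref tape pos) '_))

(define (tape-write tape pos sym)
  (if (< pos (length tape))
      (append (take tape pos) (list sym) (list-tail tape (+ pos 1)))
      (append tape (list sym))))

(define (run state tape pos)    ; the whole "process": look up a rule, apply it, repeat
  (if (eq? state 'halt)
      tape
      (let ((rule (lookup state (tape-read tape pos))))
        (if (not rule)
            tape                ; no applicable rule: stop
            (run (car rule)
                 (tape-write tape pos (cadr rule))
                 (case (caddr rule)
                   ((right) (+ pos 1))
                   ((left)  (- pos 1))
                   (else    pos)))))))

(display (run 'start '(1 0 1 1) 0))   ; prints (0 1 0 0 _)

The program itself is beside the point; the point is that once a process is pinned down in a form like this, it can be reasoned about mathematically: what it can and can’t compute, whether it halts, and so on.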

Shipping the design

Kay talks about how programmers today don’t have access to anything like what designers in other fields have, where they’re able to model their design, simulate it, and then have a machine fabricate a prototype that they can actually show and use.

I had an epiphany about this 8 or 9 years ago. I was at a friend’s party, and there were a few mechanical engineers among the guests. I overheard a couple of them talking about the computer-aided design (CAD) software they were using. One talked about a “terrible” piece of CAD software he used at work. He said he had a much better CAD system at home, but it was incompatible with the data files that were used at work, so as much as he would’ve loved to use it, he couldn’t.

The system he used at work required him to group pieces of a design together as he was building the model, and once he did that, those pieces became inflexible. He couldn’t redesign one piece, or separate one out individually from the model. He couldn’t move the pieces around on the model and have them fit. Once they were grouped, that was it; the group became a static thing. To redesign one piece, he had to take the entire model apart, piece by piece, redesign the part, and then redesign all the other pieces in the group to make the new part fit. He hated it, and as he talked about it, he acted so disgusted that he seemed ready to throw it in the trash like a piece of garbage.

His CAD system at home, he said, was wonderful. He could separate a part from a model any time he wanted, and the system would adjust the model automatically to “make sense” of the part being missing. He could redesign the part, move it to a different place on the model, “attach it” somewhere, and the system would automatically adjust the model so that the new part would fit. The way he described it gave it a sense of fluidity, whereas the system he used at work sounded rigid. It reminded me of the programming languages I had been using, where once relationships between entities were set up, it was really difficult to take a piece “out” and redesign it, because everything that depended on it would break once I redesigned it. I had to go around and redesign all the other entities that related to it, to adjust them to the redesign of the one piece.

I can’t remember exactly how this worked, but another thing the engineer talked about was that the system at work had some sort of “binding” mechanism that seemed to associate parts by “type,” and that this was also rigid, which reminded me a lot of the strong typing systems in the languages I had been using. He said the system he had at home didn’t have this, and to him it made more sense. Again, his description lent a sense of fluidity to the experience of using it. I thought, “My goodness! Why don’t programmers think like this? Why don’t they insist that the experience be like this guy’s CAD system at home?” For the first time in my career, I had a profound sense of just what Alan Kay talks about here: that the computing field is retrograde. It has not advanced anywhere close to the kind of engineering that exists in other fields, where practitioners would insist on this sort of experience. We accept so much less, whereas modern engineers have a hard time standing for it, because they know they have better options.
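
For contrast, here is a tiny sketch in Scheme (with invented names; it doesn’t model any real CAD system) of the kind of late binding that would give programming the fluidity of that home CAD system. The “frame” depends only on the message its wheel answers, so a part can be pulled out and redesigned without reworking anything that depends on it:

;; A "part" is just a closure that answers messages.
(define (make-wheel diameter)
  (lambda (msg)
    (case msg
      ((height) diameter)
      (else 'unknown))))

(define wheel (make-wheel 60))

(define (frame-height)          ; depends only on the 'height message,
  (+ 50 (wheel 'height)))       ; not on how the wheel is built inside

(display (frame-height))        ; prints 110

(set! wheel (make-wheel 70))    ; "detach" the part and swap in a redesign
(display (frame-height))        ; prints 120: the rest of the model adjusts

In the rigid version, frame-height would have been built against the innards of one particular wheel, and swapping the wheel out would mean redesigning everything that touched it.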

Don’t be fooled by large efforts below threshold

Before I begin this part, I want to share a crucial point that Kay makes, because it’s one of the big ones:

Think about what thinking is. Thinking is not being logical. Thinking is choosing the environment that you’re going to think in before you start rationalizing.

Kay had something very interesting, and startling, to say about the Apollo space program, using it as a metaphor for large reform initiatives in education generally. I recently happened upon a video of testimony he gave to a House committee on educational computing back in 1982, chaired by then-Congressman Al Gore, and Kay talked about this same example back then. He said that the way the Apollo rockets were designed was a “hack.” They were not the best design for space travel, but they were the most expedient for the mission of getting to the Moon by the end of the 1960s. Here, in this presentation, he talks about how each complete rocket was the height of a 45-story building (1-1/2 football fields in length), most of it high explosives, with only a tiny capsule at the top that could fit 3 astronauts. This is not a model that could scale to what was needed for space travel.

It became a huge worldwide cultural event when NASA launched it and landed men on the Moon, but Kay said it was really not a great accomplishment. I remember Rep. Gore saying in jest, “The walls in this room are shaking!” as the camera panned around a bit, showing pictures on the wall from NASA. How could he say such a thing? This was the biggest cultural event of the century, perhaps in all of human history. He explained the same thing here: the Apollo program didn’t advance space travel beyond the mission to the Moon. It was not technology that would get us beyond that, though in hindsight we can say that technology like it enabled launching probes throughout the Solar System.

Now, what he means by “space travel,” I’m not sure. Is it manned missions to the outer planets, or to other star systems? Kay is someone who has always thought big. So, it’s possible he was thinking of interstellar travel. What he was talking about was the problem of propulsion, getting something powerful enough to make significant discoveries in space exploration possible. He said chemical propellant just doesn’t do it. It’s good enough for launching orbital vehicles around our planet, and launching probes, but that’s really it. The rest is just wasting time below threshold.

Another thing he explained is that large initiatives which don’t cross a meaningful threshold can be harmful to advancing an endeavor, because large initiatives come with extended expectations that the investment will continue to be used, and those expectations must be satisfied, or else there will be no cooperation in making the initial effort. The participants will want their return on investment. He said that’s what happened with NASA. The ROI had to play out, but that ruined the program, because as it played out, people could see we weren’t advancing the state of the art much in space travel, and the science being produced from it was usually nothing to write home about. Eventually, we got what we now see: people are tired of it, and have no enthusiasm for it, because it set expectations so low.

What he was trying to do in his House committee testimony, and what he’s trying to do here, is provide the perspective that science offers, versus our common-sense notion of how “great” something is. You cannot get qualitative improvement in an endeavor without this perspective, because otherwise you have no idea what you’re dealing with, or what progress you’re building on, if any. Looking at it from a cultural perspective is not sufficient. Yes, the Moon landing was a cultural milestone, but it was not a scientific or engineering milestone, and that matters.

Modern science and engineering have a sense of thresholds, that there can come a point where some qualitative leap is made, a new perspective on what you’re doing is realized that is significantly better than what existed prior. He explains that once a threshold has been crossed, you can make small improvements which continue to build on that significant leap, and those improvements will stick. The effort won’t just crash back down into mediocrity, because you know something about what you have, and you value it. It’s a paradigm shift. It is so significant, you have little reason to go back to what you were doing before. From there, you can start to learn the limits of that new perspective, and at some point, make more qualitative leaps, crossing more thresholds.

“Problem-finding”/finding the goal vs. problem-solving

Problem solving begins with a current context: “We’re having a problem with X. How do we solve it?” Problem finding asks whether we even have a good idea of what the problem is. Maybe the problems we’ve been seeing arise because we haven’t solved a different problem we don’t know about yet. “Let’s spend time trying to find that.”

The story he tells about Paul MacCready illustrates working with a good modeling system. He needed to be able to fail with his ideas a lot before he found something that worked. So he needed a modeling system simple enough to fail in, where when a particular model crashed, it didn’t take a lot to analyze why it didn’t work, and it didn’t take a lot to put it back together differently so he could try again.

He made another point about Xerox PARC, that it took years to find the goal, and it involved finding many other goals, and solving them in the interim. I’ve written about this history at “A history lesson on government R&D” Part 2 and Part 3. There, you can see the continuum he talks about, where ARPA/IPTO work led into Xerox PARC.

This video with Vishal Sikka and Alan Kay gives a brief illustration of this process, and what was produced out of it.

Erosion gullies

There are a couple of metaphors he uses to talk about the lack of flexibility that develops in our minds the more we focus our efforts on coping, problem solving, and optimizing how we live and work in our current circumstances. One is erosion gullies. The other is the “monkey trap.”

Erosion gullies channel water along a particular path. They develop naturally as water erodes the land it flows across. These “gullies” seem to fit with what works for us, and/or what we’re used to. They develop into habits about how we see the world: beliefs, ideas which we just accept and don’t question. They allow some variation in the path that’s followed, but they provide boundaries that don’t allow the water to go outside the gully (leaving aside the exception of floods, for the sake of argument). He uses this to talk about how “channels” develop in our minds that direct our thinking. The more we focus our thoughts in that direction, the “deeper” the gully gets. Keep at it too long, and the “gully” won’t allow us to see anything different from what we’re used to. He says it may become inconceivable to think that you could climb out of it. Most everything inside the “gully” will be considered “right” thinking (no reason why), and anything outside of it will be considered “wrong” (no reason why), and even threatening. This is why he mentions that wars are fought over this. “We’re all in different erosion gullies.” They don’t meet anywhere, and my “right” is your “wrong,” and vice-versa. The differences are irreconcilable, because the idea of seeing outside of them is inconceivable.

He makes two points with this. One is that we have erosion gullies regarding the stories we tell ourselves, and the beliefs we hold onto. The other is that we have erosion gullies even in our sensory perceptions, which dictate what we see and don’t see. We can see things that don’t even exist, and typically we do. He uses eyewitness testimony to illustrate this.

I think what he’s saying is that we need to watch out for these “gullies.” They develop naturally, but it would be good if we had the flexibility to eventually get out of our “gully” and form a new “channel,” which I take to be a metaphor for seeing the world differently than we’re used to. We need a means for doing that, and what he proposes is science, since it questions what we believe and tests our ideas. We can get around our beliefs, and thereby get out of our “gullies,” to change our perspective. It doesn’t mean we abandon “gullies,” but rather that we become aware that other “channels” (perspectives) are possible, and that we can switch between them, to see better and achieve better outcomes.

Regarding the “monkey trap,” he uses it as a metaphor for us getting on a single track, grasping for what we want, not realizing that the very act of being that focused, to the exclusion of all other possibilities, is not getting us anywhere. It’s a trap, and we’d benefit by not being so dogged in pursuing goals if they’re not getting us anywhere.

“Fast” vs. “slow”

He gets into some neuroscience that relates to how we perceive: what he called “fast” and “slow” response. You can train your mind through practice in how to use “fast” and “slow” for different activities, and they’re integral to our perception of what we’re doing and our reactions to it, so that we don’t careen into a catastrophe, or miss important ideas in trying to deal with problems. He said that cognitive approaches to education deal with the “slow” systems, but not the “fast” ones, and that’s not enough to help students really understand a subject. Since other forms of training inherently deal with the “fast” systems, educators need to think about how the “fast” systems respond to their subjects, and incorporate that into how they are taught. He anticipates this will require radically redesigning the pedagogy that’s typically used.

He says that the “fast” systems deal with the “atoms” of ideas that the “slow” system also deals with. By “atoms,” I take it he means fundamental, basic ideas or concepts for a subject. (I think of this as the “building blocks of molecules.”)

The way I take this is that the “slow” systems he’s talking about are what we use to work out hard problems. They’re what we use to sit down and ponder a problem for a while. The “fast” systems are what we use to recognize or spot possible patterns/answers quickly, a kind of quick, first-blush analysis that can make solving the problem easier. To use an example, you might be using “fast” systems now to read this text. You can do it without thinking about it. The “slow” systems are involved in interpreting what I’m saying, generating ideas that occur to you as you read it.

This is just me, but “fast” sounds like what we’d call “intuition,” because some of the thinking has already been done before we use the “slow” systems to solve the rest. It’s a thought process that has already worked some things out before we consciously engage in one.

Science

This is the clearest expression I’ve heard Kay make about what science actually is, as opposed to what most people think it is. He’s talked about it before in other ways, but he just comes right out and says it in this presentation, and I hope people watching it really take it in, because I see too often that people take what they were taught in school about what science is and keep reiterating it for the rest of their lives. This goes on not only with people who love following what scientists say, but also in the societal institutions we associate with science.

…[Francis] Bacon wrote a book called “The Novum Organum” in 1620, where he said, “Hey, look. Our brains are messed up. We have bad brains.” He called the ways of messing up “idols.” He said we get serious errors because of our genetics. We get serious errors because of the culture we’re in. We get serious errors because of the languages we use. They don’t represent what’s actually out there. We get serious errors from the way that academia hangs on to bad ideas, and teaches them over again. These are his four “idols.” Anyone ever read Bacon? He said we need something to get around our bad brains! A set of heuristics, is the term we’d use today.

What he called for was … science, because that’s what “Novum Organum,” the rest of the title, was: “A new way of dealing with knowledge.”

Science is not the knowledge, because knowledge is in this context. What science is is a negotiation between what’s out there and what we can represent.

This is the big idea. This is the idea they don’t teach in school. This is the idea we should be teaching. It’s one of the biggest ideas of all time.

It isn’t the knowledge. It’s the relationship, because what’s out there is only knowable by a phenomena that is being filtered in every possible way. We don’t even know if our brain is capable of representing the stuff.

So, to think about science as the truth is completely wrong! It’s not the right way to look at it. But if you think about it as a negotiation between the best you can do right now and stuff that’s out there, where you’re not completely sure, you’re in a very strong position.

Science has been the most important, powerful thought system humans have ever invented, because it gave up the idea of truth, and it substituted for it a thousand variations of false, some of which are incredibly powerful. This is the big idea.

So, if we’re going to think about computing, this is one way … of thinking about, “Wow! Computers!” They are representers. We can learn about representations. We can simulate ideas. We can get a very good–much better sense of dealing with thinking about these complexities.

“Getting there”

The last part demonstrates what I’ve seen with exploration. You start out thinking you’re going to go from Point A to Point B, but you take diversions, pathways that are interesting but related to your initial search, because you find that it’s not a straight path from Point A to Point B. It’s not as straightforward as you thought. So you try other ways of getting there. It is a kind of problem solving, but it’s really what Kay called “problem finding,” or finding the goal. In the process, the goal becomes finding a better way to get to the goal, and along the way you find problems that are worth solving which you didn’t anticipate at all when you first got going. In that process you’ll find things you didn’t expect to learn, but which are really valuable to your knowledge base. In pursuing a better way to get to your destination, you might even cross a threshold, and find that your initial goal is no longer worth pursuing, but there are better goals to pursue in this new perception you’ve obtained.

—Mark Miller, https://tekkie.wordpress.com

10 years

I started this blog on May 31, 2006. It’s been 10 years.

When I started on it, I expected to be leaving technical advice for programmers and IT people, since I had been doing that in comment forums for a while. The advice I was leaving was becoming repetitive, and rather involved. The same problems kept coming up for people. I saw a blog as a way of optimizing the process. I thought I’d just write it here once, and refer people to it. Very quickly, though, a new interest was emerging for me, which grew into re-evaluating and exploring computer science, and that’s mostly what I’ve been writing about since.

The purpose I’ve held for this blog is to share what I’ve learned as I’ve tried to understand in much more depth the perspective on computing that drew me to the field when I was young. The reason I wanted to share it is I understood quickly that if it’s going to be something I’m going to pursue down the road, it has to be more than just me who sees value in it. I wanted to bring other people along. I thought of it as a “trail of breadcrumbs” for people like myself. I don’t know if it’s had that effect. It seems more like there have been certain subjects that have drawn a large audience, but the interest is focused there, and on a few related posts, and not on the blog as a whole. That’s alright. At least it’s having some impact, as I see when people leave appreciative comments, and on occasion e-mail me to extend the conversation. This is just an impression, but it appears I’m on a path that most don’t want to follow, and that may have to do with the fact that most don’t have the luxury. Nevertheless, I will continue posting about what I learn. It helps my learning process to write about it, and it helps my writing to do it with the knowledge that others will see it.

Related posts:

My journey, Part 6

The 150th post

I’ve had it on my mind to post this for a while, though I do it with hesitation, since this has been so politicized. I wrote about the ban on incandescents in 2008, in “Say goodbye to incandescent bulbs. Say hello to the ‘new normal’.”

I found this interesting nugget as I was writing this. Back then, I had an idea for how to make CFLs safer. I don’t know why no one thought of it before I did. From a consumer safety standpoint, it would seem like an obvious solution:

[M]aybe they could put a plastic bulb around the coil that would absorb the shock, or at least contain the contents if the coil broke from jarring. This would require them to make the coils smaller, but the added safety would be worth it.

Several months ago, while shopping for bulbs at the grocery store, I noticed they were selling CFLs with a hard plastic bulb, just like I suggested! Too little, too late, I guess. I am happy to say that CFLs are going away at the end of this year.

I actually stocked up on incandescents on clearance. I haven’t been using CFLs for most of my lighting. I ended up using them for a couple overhead lights, just because an inspector came through a couple years ago to make sure the apartment complex I live in was “smart regs-compliant,” and I didn’t want to embarrass my landlord by creating any excuse to call the dwelling non-compliant. “I love my minders…”

This doesn’t mean incandescents are coming back. They’re still banned by legislation passed in 2007, but at least there are now better lighting options. For a few years now, those who prefer incandescents have been able to get halogen bulbs that are the same size as the old incandescents, and just as bright. They cost more, but they’re supposed to use less electricity and last longer. What I care about is that they operate on the same principle: they use a filament. So if you break one, all you have to do is clean up the glass, and vacuum or mop. No toxic spill. Yay!

As the guys in the video say, there are now very good LED lighting options, which are also non-toxic to you.

So I see this as reason to celebrate. The market actually succeeded in getting rid of an expensive, hazardous form of lighting that, it seems to me, our government tried to force on us. I wish the ban would be lifted. I don’t see the point in it. I know people will say it reduces pollution, because we can lower the amount of electricity we use, but as I explained years ago, this is a dumb way to go about it. Lighting is not the major source of electricity usage. Our air conditioning is the elephant in the room, each summer!

Aaron Renn had a good article on what was really going on with the light bulb ban, in “The Return of the Monkish Virtues,” written in 2012. It’s interesting reading. The idea always was to force people to think about their impact on the environment, not to actually do anything substantial about it.

 

“We need to do away with the myth that computer science is about computers. Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes. Science is not about tools, it is about how we use them and what we find out when we do.”
— “SIGACT trying to get children excited about CS,”
by Michael R. Fellows and Ian Parberry

There have been many times where I’ve thought about writing about this over the years, but I’ve felt like I’m not sure what I’m doing yet, or what I’m learning. So I’ve held off. I feel like I’ve reached a point now where some things have become clear, and I can finally talk about them coherently.

I’d like to explain some things that I doubt most CS students are going to learn in school, but they should, drawing from my own experience.

First, some notions about science.

Science is not just a body of knowledge (though that is an important part of it). It is also venturing into the unknown, and developing the idea within yourself about what it means to really know something, and to understand that knowing something doesn’t mean you know the truth. It means you have a pretty good idea of what the reality of what you are studying is like. You are able to create a description that somehow resembles the reality of what you’re studying to such a degree that you are able to make predictions about it, and those predictions can be confirmed within some boundaries that are either known, or can be discovered, and subsequently confirmed by others without the need to revise the model. Even then, what you have is not “the truth.” What you have might be a pretty good idea for the time being. As Richard Feynman said, in science you never really know if you are right. All you can be certain of is that you are wrong. Being wrong can be very valuable, because you can learn the limits of your common sense perceptions, and knowing that, direct your efforts towards what is more accurate, what is closer to reality.

The field of computing and computer science doesn’t have this perspective. It is not science. My CS professors used to admit this openly, but my understanding is CS professors these days aren’t even aware of this distinction anymore. In other words, they don’t know what science is, but they presume that what they practice is science. CS, as it’s taught, has some value, but what I hope to introduce people to here is the notion that after they have graduated with a CS degree, they are in kindergarten, or perhaps first grade. They have learned some basics about how to read and write, but they have not learned what is really powerful about that skill. I will share some knowledge I have gleaned as someone who has ventured as a beginner in this perspective.

The first thing to understand is that an undergraduate computer science education doesn’t tend to give you any notions about what computing is. It doesn’t explore models of computing, and in many cases it doesn’t even present much in the way of different models of programming. It presents one model of computing (the von Neumann model), but it doesn’t venture deeply into concepts of how the model works. It provides some descriptions of entities that are supposed to make it work, but it doesn’t tie them together so that you can see the relationships, and see how the thing really works.
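
For illustration, here is the core of the von Neumann model in a few lines of Scheme (my own sketch, with an instruction set invented for the example): a single memory holds both program and data, and a processor runs a fetch-decode-execute cycle over it.

;; Memory holds instructions (cells 0-2) and data (cells 4-5) alike.
(define memory
  (vector '(load 4)    ; cell 0: acc <- memory[4]
          '(add 5)     ; cell 1: acc <- acc + memory[5]
          '(halt)      ; cell 2: stop
          0            ; cell 3: unused
          7            ; cell 4: data
          8))          ; cell 5: data

(define (run pc acc)
  (let ((inst (vector-ref memory pc)))             ; fetch
    (case (car inst)                               ; decode, then execute
      ((load) (run (+ pc 1) (vector-ref memory (cadr inst))))
      ((add)  (run (+ pc 1) (+ acc (vector-ref memory (cadr inst)))))
      ((halt) acc)
      (else (error "unknown instruction" inst)))))

(display (run 0 0))   ; prints 15

Seeing the model at this level (one memory, one thread of control, instructions and data in the same place) also makes it easier to see what other models, such as message passing or dataflow, actually change.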

Science is about challenging and forming models of phenomena. A more powerful notion of computing is realized by understanding that the point of computer science is not the application of computing to problems, but the creation, criticism, and study of the strengths and weaknesses of different models of computing systems, with programming as a means of modeling our ideas about them. In other words, programming languages and development environments can be used to model, explore, test, and criticize our notions about different computing system models. It behooves the computer scientist either to choose an existing computing architecture, represented by an existing language, or to create a new architecture and a new language suitable to what is being attempted, so that the focus can stay on what is being modeled, and on testing that model, without a lot of distraction.

As Alan Kay has said, the “language” part of “programming language” is really a user interface. I suggest that it is a user interface to a model of computing, a computing architecture. We can call it a programming model. As such, it presents a simulated environment of what is actually being accomplished in real terms, which enables a certain way of seeing the model you are trying to construct. (As I said earlier, we can use programming (with a language) to model our notions of computing.) In other words, you can get out of dealing with the guts of a certain computing architecture, because the runtime is handling those details for you. You’re just dealing with the interface to it. In this case, you’re using, quite explicitly, the architecture embodied in the runtime, not an API, as a means to an end. The power doesn’t lie in an API. The power you’re seeking to exploit is the architecture represented by the language runtime (a computing model).

This may sound like a confusing roundabout, but what I’m saying is that you can use a model of computing as a means for defining a new model of computing, whatever best suits the task. I propose that, indeed, that’s what we should be doing in computer science, if we’re not working directly with hardware.
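
As a toy version of that idea (again, my own sketch for illustration), here is Scheme, one computing model, being used to define another: a tiny language of arithmetic expressions with variables. The evaluator is the new “architecture,” and the s-expressions are its programs:

(define (evaluate expr env)
  (cond ((number? expr) expr)                    ; literals evaluate to themselves
        ((symbol? expr) (cdr (assq expr env)))   ; variables: look up in the environment
        ((eq? (car expr) 'add)
         (+ (evaluate (cadr expr) env)
            (evaluate (caddr expr) env)))
        ((eq? (car expr) 'mul)
         (* (evaluate (cadr expr) env)
            (evaluate (caddr expr) env)))
        (else (error "unknown form" expr))))

(display (evaluate '(add x (mul 2 y))
                   '((x . 3) (y . 4))))          ; prints 11

Growing a little language like this, adding conditionals, functions, or whatever the model under study calls for, is exactly the kind of “defining a new model within a model” I mean.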

When I talk about modeling computing systems, given their complexity, I’ve come to understand that constructing them is a gradual process of building up more succinct expressions of meaning for complex processes, with a goal of maintaining flexibility in how the sub-processes that make up higher levels of meaning can be used. As a beginner, I’ve found that starting with a powerful programming language in a minimalist configuration, with just basic primitives, was very constructive for understanding this, because it forced me to construct my own means for expressing what I wanted to model, and to understand what is involved in doing that. I’ve been using a version of Lisp for this purpose, and it’s been a very rewarding experience, but other students may have other preferences. I don’t mean to prejudice anyone toward one programming model. I recommend finding, or creating, one that fits the kind of modeling you are pursuing.
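
To give one concrete taste of what “just basic primitives” can mean, here is a classic exercise from the Lisp/Scheme tradition (not anything unique to my project): with nothing but lambda, you can build your own data structures, such as pairs.

(define (my-cons a b) (lambda (pick) (pick a b)))  ; a pair is a closure over a and b
(define (my-car p) (p (lambda (a b) a)))           ; ask the pair for its first element
(define (my-cdr p) (p (lambda (a b) b)))           ; ...or for its second

(display (my-car (my-cons 1 2)))   ; prints 1
(display (my-cdr (my-cons 1 2)))   ; prints 2

Working at this level forces you to notice what a language’s built-in facilities are actually doing for you, which is the kind of understanding the minimalist exercise is after.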

Along with this perspective, I think, there must come an understanding of what different programming languages are really doing for you, in their fundamental architecture. What are the fundamental types in the language, and for what are they best suited? What are the constructs and facilities in the language, and what do they facilitate, in real terms? Sure, you can use them in all sorts of different ways, but to what do they lend themselves most easily? What are the fundamental functions in the runtime architecture that are experienced in the use of different features in the language? And finally, how are these things useful to you in creating the model you want to attempt? Seeing programming languages in this way provides a whole new perspective on them that has you evaluating how they facilitate your modeling practice.

Application comes along eventually, but it comes late in the process. The real point is to create the supporting architecture, and any necessary operating systems first. That’s the “hypothesis” in the science you are pursuing. Applications are tests of that hypothesis, to see if what is predicted by its creators actually bears out. The point is to develop hypotheses of computing systems so that their predictive limits can be ascertained, if they are not falsified. People will ask, “Where does practical application come in?” For the computer scientist involved in this process, it doesn’t, really. Engineers can plumb the knowledge base that’s generated through this process to develop ideas for better system software, software development systems, and application environments to deploy. However, computer science should not be in service to that goal. It can merely be useful toward it, if engineers see that it is fit to do so. The point is to explore, test, discuss, criticize, and strive to know.

These are just some preliminary thoughts on computer systems modeling. In my research, I’ve gotten indications that it’s not healthy for the discipline to be insular, just looking at its own artifacts for inspiration for new ideas. It needs to look at other fields of study to introduce unorthodox models into the discussion. I haven’t gotten into that yet. I’m still trying to understand some basic ideas of a computer system, using a more mathematical approach.

Related post: Does computer science have a future?

— Mark Miller, https://tekkie.wordpress.com

As this blog has progressed, I’ve gotten farther away from the technical, and moved more towards a focus on the best that’s been accomplished, the best that’s been thought (that I can find and recognize), with people who have been attempting to advance, or have made some contribution to what we can be as a society. I have also put a focus on what I see happening that is retrograde, which threatens the possibility of that advancement. I think it is important to point that out, because I’ve come to realize that the crucible that makes those advancements possible is fragile, and I want it to be protected, for whatever that’s worth.

As I’ve had conversations with people about this subject, I’ve come to realize why I have such a strong desire to see us be a freer nation than we are. It’s because I got to see a microcosm of what’s possible within a nascent field of research and commercial development, with personal computing, and later the internet, where the advancements were breathtaking and exciting, and inspired my imagination to such a height that it really seemed like the sky was the limit. It gave me visions of what our society could be, most of them not that realistic, but so inspiring. It took place in a context of a significant amount of government-funded and private research, but at the same time, in a legal environment that was not heavily regulated. At the time when the most exciting stuff was happening, it was too small and unintelligible for most people to take notice of it, and society largely thought it could get along without it, and did. It was ignored, so people in it were free to try all sorts of interesting things, to have interesting thoughts about what they were accomplishing, and, for some, to test those ideas out.

It wasn’t all good, and since then lots of ideas I wouldn’t approve of have been tried as a result of this technology, but there is so much that’s good in it as well, which I have benefitted from so much over the years. I am so grateful that it exists, and that so many people had the freedom to try something great with it. This experience has proven to me that the same is possible in all human endeavors, if people are free to pursue them. Not all the goals people would pursue would be things I think are good, but by the same token, so much would be possible that would be good, from which people of many walks of life would benefit.

Glenn Beck wrote a column that encapsulates this sentiment so well. I’m frankly surprised he thought to write it. Some years ago, I strongly criticized Beck for writing off the potential of government-funded research in information technology. His critique was part of what inspired me to write the blog series “A history lesson on government R&D.” We come at this from different backgrounds, but he sums it up so well, I thought I’d share it. It’s called The Internet: What America Can And Should Be Again. Please forgive the editing mistakes, of which there are many. His purpose is to talk about political visions for the United States, so he doesn’t get into the history of who built the technology. That’s not his point. He captures well the idea that the people who developed the technology of the internet wanted to create in society: a free flow of information and ideas, a proliferation of services of all sorts, and a means by which people could freely act together and build communities, if they could manage it. The key word in this is “freedom.” He makes the point that it is we who make good things happen on the internet, if we’re on it, and by the same token it is we who can make good things happen in the non-digital sphere, and it can and should be mostly in the private sector.

I think of this technological development as a precious example of what the non-digital aspects of our society can be like. I don’t mean the internet as a verbatim model of our society. I mean it as an example within a society that has law, which applies to people’s actions on the internet, and that already has an ostensibly representational political system; an example of the kind of freedom we can have within that, if we allow ourselves to have it. We already allow it on the internet. Why not outside of it? Why can’t speech and expression, and commercial enterprise, in the non-digital realm be like what they are in the digital realm, where a lot goes as-is? Well, one possible reason our society likes the idea of the freewheeling internet, but not a freewheeling non-digital society, is that we can turn away from the internet. We can shut off our own access to it, and restrict access (i.e., parental controls). We can be selective about what we view on it. It’s harder to do that in the non-digital world.

As Beck points out, we once had the freedom of the internet in a non-digital society, in the 19th century, and he presents some compelling, historically accurate examples. I understand he glosses over significant aspects of our history that were not glowing, where not everyone was free. In today’s society, it’s always dangerous to harken back romantically to the 19th and early 20th centuries as “golden times,” because someone is liable to point out that they were not so golden. His point is that people of many walks of life (who, let’s admit it, were often white) had the freedom to take many risks of their own free will, even risks to their lives, and they took them, and the country was better off for it. It’s not to ignore others who didn’t have freedom at the time. It’s using history as an illustration of an idea for the future, understanding that we have made significant strides in how we view people who look different, and come from different backgrounds, and in what rights they have.

The context that Glenn Beck has used over the last 10 years is that history, as a barometer of progress, is not linear. Societal progress ebbs and flows. It has meandered in this country between freedom and oppression, with different depredations visited on different groups of people in different generations. They follow some patterns, and they repeat, but the affected people are different. The depredations of the past were pretty harsh. Today’s, not so much, but they exist nevertheless, and I think it’s worth pointing them out, and saying, “This is not representative of the freedom we value.”

The arc of America had been towards greater freedom, on balance, from the time of its founding up until the 1930s. Since then, we’ve wavered between backtracking and moving forward. I think it’s very accurate to say that we’ve gradually lost faith in freedom over the last 13 years. Recently, this loss of faith has become acute. Every time I look at people’s attitudes about it, they’re often afraid of freedom, thinking it will only allow the worst of humanity to roam free, to lay waste to what everyone else has hoped for. Yes, some bad things will inevitably happen in such an environment. Bad stuff happens on the internet every day. Does that mean we should ban it, control it, contain it? If you don’t allow the opportunity that enables bad things to happen, you will not get good things, either. I’m all in favor of prosecuting the guilty who hurt others, but if we’re always in a preventative mode, we prevent that which could make our society so much better. You can’t have one without the other. It’s like trying to have your cake and eat it, too. It doesn’t work. If we’re so afraid of the depredations of our fellow citizens, then we don’t deserve the wonderful things they might bring, and that fact is being borne out in lost opportunities.

We have an example in our midst of what’s possible. Take the idea of it seriously, and consider how deprived your present existence would be if it wasn’t allowed to be what it is now. Take its principles, and consider widening the sphere of the system that allows all that it brings, beyond the digital, into our non-digital lives, and what wonderful things that could bring to the lives of so many.

Christina Engelbart has written an excellent summary of her late father’s (Doug Engelbart’s) ideas, and has outlined what’s missing from digital media.

Collective IQ Review

Interactive computing pioneer Doug Engelbart
coined the term Collective IQ
to inform the IT research agenda

Doug Engelbart was recognized as a great pioneer of interactive computing. Most of his dozens of prestigious awards cited his invention of the mouse. But in his mind, the true promise of interactive computing and the digital revolution was a larger strategic vision of using the technology to facilitate society’s evolution. His research agenda focused on augmenting human intellect, boosting our Collective IQ, enhancing human effectiveness at addressing our toughest challenges in business and society – a strategy I have come to call Bootstrapping Brilliance.

In his mind, interactive computing was about interacting with computers, and more importantly, interacting with the people and with the knowledge needed to quickly and intelligently identify problems and opportunities, research the issues and options, develop responses and solutions, integrate learnings, iterate rapidly. It’s about the people, knowledge, and tools interacting, how we intelligently leverage that, that provides the brilliant outcomes we…


Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, when I was attending the Rebooting Computing Summit at the Computer History Museum in Mountain View, CA. The people running the exhibit said that it had been commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM and was soon going to reclaim it, putting it in his house. They thought that might happen in April of that year. A couple years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!

I was taken with this interview on Reason.tv with Brendan O’Neill, a writer for Spiked, because he touched on so many topics that are pressing in the West. His critique of what’s motivating current anti-modern attitudes, and what they should remind us of, is so prescient that I thought it deserved a mention. He is a Brit, so his terminology will be rather confusing to American ears.

He called what’s termed “political correctness” “conservative.” I’ve heard this critique before, and it’s interesting, because it looks at group behavior from a principled standpoint, not just in terms of common parlance. A lot of people won’t understand this, because what we call “conservative” now is in opposition to political correctness, and would be principally called something approaching “liberal” (as in “classical liberal”). I’ve talked about this with people from the UK before, and it goes back to that old saying that the United States and England are two countries separated by a common language. What we call “liberal” now, in common parlance, would be called “conservative” in their country. It’s the idea of maintaining the status quo, or even the status quo ante; of shutting out, even shutting down, any new ideas, especially anything controversial. It’s a behavior that goes along with “consolidating gains,” and is averse to anything that would upset the applecart.

O’Neill’s most powerful argument is in regard to environmentalism. He doesn’t like it, calling it an “apology for poverty,” a justification for preventing the rest of the world from developing as the West did. He notes that it conveniently avoids the charge of racism, because it’s able to point to an amorphous threat, justified by “science,” that inoculates the campaign against such charges.

The plot thickens when O’Neill talks about himself, because he calls himself a “Marxist/libertarian.” He “unpacks” that, explaining that he means the early Marx and Engels, who talked about freeing people from poverty, and from state diktat. He summed it up by quoting Trotsky: “We have to increase the power of man over Nature, and decrease the power of man over man.” He also used the term “progressive,” but Nick Gillespie explained that what O’Neill calls “progressive” is often what we would call “libertarian” in America. I don’t know quite what to make of him, but I found myself agreeing a lot with what he said in this interview, at least. He and I see much the same things going on, and I think he accurately voices why I oppose what I see as anti-modern sentiment in the West.

Edit 1/11/2016: Here’s a talk O’Neill gave with Nick Cater of the Centre for Independent Studies, called “Age of Endarkenment,” where they contrast Enlightenment thought with what concerns “the elect” today. He points out that the conflict between those who want ideas of progress to flourish and those who want to suppress societal progress has happened before: it happened pre-Enlightenment, and during the Enlightenment, and it will sound a bit familiar.

I’m going to quote a part of what he said, because I think it cuts to the chase of what this is really about. He echoes what I’ve learned as I’ve gotten older:

Now what we have is the ever-increasing encroachment of the state onto every aspect of our lives: How well we are, what our physical bodies are like, what we eat, what we drink, whether we smoke, where we can smoke, and even what we think, and what we can say. The Enlightenment was really, as Kant and others said, about encouraging people to take responsibility for their lives, and to grow up. Kant says all these “guardians” have made it seem extremely dangerous to be mature, and to be in control of your life. They’ve constantly told you that it’s extremely dangerous to run your own life. And he says you’ve got to ignore them, and you’ve got to dare to know. You’ve got to break free. That’s exactly what we’ve got to say now, because we have the return of these “guardians,” although they’re no longer kind of religious pointy-hatted people, but instead a kind of chattering class, and Greens, and nanny-staters, but they are the return of these “guardians” who are convincing us that it is extremely dangerous to live your life without expert guidance, without super-nannies telling you how to raise your children, without food experts telling you what to eat, without anti-smoking campaigners telling you what’s happening to your lungs. I think we need to follow Kant’s advice, and tell these guardians to go away, and to break free of that kind of state interference.

And one important point that [John Stuart] Mill makes in relation to all this is that even if people are a bit stupid, and make the wrong decisions when they’re running their life, he said even that is preferable to them being told what to do by the state or by experts. And the reason he says that’s preferable is because through doing that they use their moral muscles. They make a decision, they make a choice, and they learn from it. And in fact Mill says very explicitly that the only way you can become a properly responsible citizen, a morally responsible citizen, is by having freedom of choice, because it’s through that process, through the process of making a choice about your life that you can take responsibility for your life. He says if someone else is telling you how to live and how to think, and what to do, then you’re no better than an ape who’s following instructions. Spinoza makes the same point. He says you’re no better than a beast if you’re told what to think, and told what to say. And the only way you can become a man, or a woman these days as well–they have to be included, is if you are allowed to think for yourself to determine what your thought process should be, how you should live, and so on. So I think the irony of today, really Nick, is that we have these states who think they are making us more responsible by telling us not to do this, and not to do that, but in fact they’re robbing us of the ability to become responsible citizens. Because the only way you can become a responsible citizen is by being free, and by making a choice, and by using your moral muscles to decide what your life’s path should be.

This has been on my mind for a while, since I had a brush with it. I’ve been using Jungle Disk cloud-based backup since about 2007, and I’ve been pretty satisfied with it. I had to take my laptop “into the shop” early this year, and I used another laptop while mine was being repaired. I had the thought of getting a few things I was working on from my cloud backup, so I tried setting up the Jungle Disk client. I was dismayed to learn that I couldn’t get access to my backed-up files, because I didn’t have my Amazon S3 Access Key. I remember going through this before, and being able to recover my key from Amazon’s cloud service after giving Amazon my sign-in credentials. I couldn’t find the option for it this time. After doing some research online, I found out they stopped offering that. So, if you don’t have your access key and you want your data back, you are SOL. You can’t get any of it back, period, even if you give Amazon’s cloud service your correct sign-in credentials. I also read stories from very disappointed customers who had a computer crash, didn’t have their key, and had no way to recover their data from the backup they had been paying for all those years. This is an issue that Jungle Disk should’ve notified customers about a while ago. It’s disconcerting that they haven’t, but since I know about it now, and my situation was not catastrophic, it’s not enough to make me want to stop using it. The price can’t be beat, but this is a case where you get what you pay for as well.

My advice: Write down–on a piece of paper, or in some digital note that you know is secure and recoverable if something bad happens to your device–your Amazon S3 Access Key. Do. It. NOW! You can find it by bringing up Jungle Disk’s Activity Monitor, and then going to its Desktop Configuration screen. Look under Application Settings, and then Jungle Disk Account information. It’s a 20-character, alphanumeric code. Once you have it, you’re good to go.

Edit 12/23/2015: I forgot to mention that to re-establish a connection with your backup account, you also need what’s called a Secret Key. You set this up when you first set up Jungle Disk, and you should keep it with your S3 Access Key. From my research, though, the most essential thing for re-establishing the connection with your backup account is keeping a copy of your S3 Access Key. The Secret Key is important, but you can generate a new one if you don’t know what it is. Amazon no longer reveals your Secret Key; the article “Where’s My Secret Access Key?” talks about this. It sounds relatively painless to generate a new one, so I assume you don’t lose access to your backup files by doing this. Amazon’s system limits you to two generated Secret Keys at a time. You can generate more than two over the life of your account, but if you’re at the limit, you first have to delete one of the old ones. The article explains how to do that.
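For readers comfortable with a little scripting, here’s a minimal sketch of what generating a replacement key pair looks like, using Python and Amazon’s boto3 library. To be clear, this is my own illustration, not anything from Jungle Disk: it assumes your backup runs under an IAM user (root-account keys are managed through the AWS Security Credentials page instead), and “your-iam-user” is a placeholder you’d replace with your own user name.

    import boto3

    # A sketch only: assumes an IAM user, with admin credentials
    # already configured for boto3 on this machine.
    iam = boto3.client("iam")
    USER = "your-iam-user"  # placeholder, not a real user name

    # List the access keys the user already has; AWS allows at most two.
    for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
        print(key["AccessKeyId"], key["Status"], key["CreateDate"])

    # If both slots are taken, delete the stale key first, for example:
    # iam.delete_access_key(UserName=USER, AccessKeyId="AKIA...")  # hypothetical ID

    # Create a new key pair. The secret is returned only this once,
    # so record it immediately; Amazon won't show it to you again.
    new_key = iam.create_access_key(UserName=USER)["AccessKey"]
    print("Access Key ID:    ", new_key["AccessKeyId"])
    print("Secret Access Key:", new_key["SecretAccessKey"])

The point to take from this is the same one the article makes: the secret is shown only at creation time, which is exactly why you should write both values down the moment you generate them.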

I’m really late with this, because it came out in late May, when I was super busy with a trip I was planning. I totally missed the announcement until I happened upon it recently. In past posts I had made a mention or two about Disney working on a sequel to Tron Legacy. Well, they announced that it isn’t happening. It’s been cancelled. The official announcement said that the 2017 movie release schedule was just too full of other live-action films Disney is planning. “RaginRonin” on YouTube gave what I think is the best synopsis of movie industry pundit analysis. It sounds like it comes down to one thing: Disney is averse to live-action films outside the two categories that have been successful for them: Pirates of the Caribbean, and anything related to its classic fairy tale franchises. Other than that, they want to focus on their core competency, which is animation.

All pundit analysis focused on one movie: Tomorrowland. It was a Disney live-action sci-fi movie that flopped. They figured Disney took one look at that and said, “We can’t do another one of those.”

One thing contradicts this, though, and I’m a bit surprised that no one I’ve heard so far has picked up on it: Disney is coming out with a live-action sci-fi film soon, called Star Wars: The Force Awakens… though it is being done through its Lucasfilm division, with Lucasfilm’s staff, not Disney’s. Maybe they think that will make a difference in their live-action fortunes. Disney paid a lot of money for Lucasfilm, and so of course they want it to produce. They want another series. No, more than one series!

As with the first Tron film in 1982, Legacy did well at the box office, but not well enough to wow Disney. Apparently they were expecting a billion-dollar film in combined domestic and international ticket sales, and it grossed $400 million instead. Secondly, Tron: Uprising, the animated TV series that was produced for Disney XD and got some critical acclaim, did not do well enough for Disney’s taste, and was cancelled after one season. Though I think the criticism that “of course it didn’t do well, since they put it on an HD channel when most viewers don’t have HD” is valid, it should also be said that it wasn’t a “killer app” that drew people to HD, either. Maybe it’s more accurate to say that Tron as a property is not a hot seller for Disney, period. It’s profitable, but it doesn’t knock their socks off.

One pundit said he’s confident Disney will return to Tron sometime in the future, just as it did with Legacy, but the way things look now, Disney wants to focus on its profitable properties. I can buy that, but I wonder if the real challenge was the story. Olivia Wilde, the actress who played “Quorra” in Legacy, hinted at this in an April interview: shooting for the sequel, for which the original cast was slated to return, was scheduled for October, yet they didn’t have a screenplay. They had plenty of time to come up with one; Disney hired a writer for the sequel a couple years ago.

This has happened before. As I’ve talked about in previous posts, there was an attempt to make a Tron sequel back in 2003. It was supposed to be a combined video game and movie release, called Tron 2.0. The video game came out for PCs, and later, game consoles. It had a clear, dramatic storyline that jumped off from the characters, and a bit of the story, from the original Tron, and the whole aesthetic was very nostalgic. A lot of the focus was on computer viruses and various forms of malware, with some pretty interesting storylines around the main characters. I had to admit, though, that it took the focus off of what was really interesting about Tron: the philosophical and political arguments it made about what computing’s role should be in society. Steven Lisberger, who was driving the effort at Disney at the time, said an idea he had was to ask (I’m paraphrasing), “What is this thing called the internet? What should it represent?” He said, “It’s got to be something more than a glorified phone system!” Disney had developed some concept art for the movie. It looked like it might have a chance, but it was cancelled.

Tron Legacy, which came out in 2010, was a worthy successor to the first movie in this regard, and I think that had something to do with it getting the green light: someone had finally come up with something profound to say about computing’s role in society (I talk about this here). I think there’s more to this story than the market for live-action sci-fi movies from Disney. I think they hadn’t found something for a sequel to communicate, and they were running up against their production deadline. I suspect that Lisberger and Kosinski did not want to rush out something unworthy of the title. Canceling it will give someone down the road more time to do it right.
