Posts Tagged ‘computer science’

Prologue

SICP reaches a point, in Chapter 3, where for significant parts of it you’re not doing any coding. It has exercises, but they’re all about thinking through the concepts, not doing anything with a computer. It has you do substitutions to see what expressions result. It has you make diagrams that focus on particular systemic aspects of processes. It also gets into operational models, talking about simulating logic gates and how concurrent processing can work (expressed in hypothetical Scheme code). It’s all conceptual. Some of it was good, I thought. The practice of doing substitutions manually helps you really get what your Scheme functions are doing, rather than guessing. The rest didn’t feel that engaging. One could be forgiven for thinking that the book is getting dry at this point.

It gets into some coding again in Section 3.3, where it covers building data structures. It gets more interesting with an architecture called “streams” in Section 3.5.

One thing I will note is that the only way I was able to get the code in Section 3.5 to work in Racket was to go into “Lazy Scheme.” I don’t remember what language setting I used for the prior chapters; maybe R5RS, or “Pretty Big.” Lazy Scheme does lazy evaluation on Scheme code, so one could be tempted to think that it makes the supporting structures for streams covered in this section pointless, because the underlying language is already doing the delayed evaluation that this section implements in code. Lazy Scheme doesn’t interfere with anything in this section; it all works. It just makes the underlying structure for streams redundant. For the sake of familiarity with the code this section discusses (which I think helps in preserving one’s sanity), I think it’s best to play along and use its code.

Another thing I’ll note is this section makes extensive use of knowledge derived from calculus. Some other parts of this book do that, too, but it’s emphasized here. It helps to have that background.

I reached a stopping point in SICP, here, 5 years ago, because of a few things. One was that I became inspired to pursue a history project on the research and development that led to the computer technology we use today. Another was that I’d had a conversation with Alan Kay that inspired me to look more deeply at the STEPS project at Viewpoints Research, and try to make something of my own out of that. The third was Exercise 3.61 in SICP. It was a problem that really stumped me. So I gave up, and looked up an answer for it on the internet, in a vain attempt to help me understand it. The answer didn’t help me. It worked when I tried the code, but I found it too confusing to understand why it produced correct results. The experience was really disappointing, and disheartening. Looking it up was a mistake. I wished that I could forget I’d seen the answer, so I could go back to working on trying to figure it out, but I couldn’t. I’d seen it. I worked on a few more exercises after that, but then I dropped SICP. I continued working on my history research, and I got into exploring some fundamental computing concepts on processors, language parsing, and the value of understanding a processor as a computing/programming model.

I tried an idea that Kay told me about years earlier, of taking something big, studying it, and trying to improve on it. That took me in some interesting directions, but I hit a wall in my skill, which I’m now trying to get around. I figured I’d continue where I left off in SICP. One way this diversion helped me is that I basically forgot the answers for the stuff I did. So, I was able to come back to the problem with a nearly clean slate. I had some faded memories of it, which didn’t help. I just had to say to myself, “Forget it,” and focus on the math, and the streams architecture. That’s what finally helped me solve 3.60, which then made 3.61 straightforward. That was amazing.

A note about Exercise 3.59

It was a bit difficult to know at first why the streams (cosine-series and sine-series) were coming out the way they were for this exercise. The cosine series is what’s called an even series, because its powers are even (0, 2, 4, etc.). The sine series is what’s called an odd series, because its powers are odd (1, 3, 5, etc.). However, when you’re processing the streams, they just compute a general model of a series, with all of the terms, regardless of whether the series you’re processing has values in each of the terms or not. So, cosine-series comes out (starting at position 0) as: [1, 0, -1/2, 0, 1/24, …], since the stream is modeling a₀x⁰ + a₁x¹ + a₂x² + …, where aᵢ is each term’s coefficient (and some of the coefficients are negative, in this case). The coefficients of the terms that don’t apply to the series come out as 0. With sine-series, it comes out (starting at position 0) as: [0, 1, 0, -1/6, 0, 1/120, …].
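
For inspecting these streams, I found it handy to have a little helper that pulls the first n coefficients into an ordinary list, so you can see what a series looks like. This is just a sketch of my own (it’s not part of the exercise), and it assumes the stream-car and stream-cdr primitives from Section 3.5 are available:

(define (stream-head s n)
  (if (= n 0)
      '()
      (cons (stream-car s)
            (stream-head (stream-cdr s) (- n 1)))))

; (stream-head cosine-series 6) should give (1 0 -1/2 0 1/24 0)
; (stream-head sine-series 6)   should give (0 1 0 -1/6 0 1/120)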

What’s really interesting is that exercises 3.59, 3.61, and 3.62 are pretty straightforward. Like with some of the prior exercises, all you have to do is translate the math into stream terms (and it’s a pretty direct translation from one to the other), and it works! You’re programming in mathland! I discovered this in the earlier exercises in this section, and it amazed me how expressive this is. I could pretty much write code as if I was writing out the math. I could think in terms of the math, not so much the logistics of implementing computational logic to make the math work. At the same time, this felt disconcerting to me, because when things went wrong, I wanted to know why, computationally, and I found that working with streams, it was difficult to conceptualize what was going on. I felt as though I was just supposed to trust that the math I expressed worked, just as it should. I realize now that’s what I should have done. It was starting to feel like I was playing with magic. I didn’t like that. It was difficult for me to trust that the math was actually working, and that if it wasn’t, it was because I was either not understanding the math and/or not understanding the streams architecture, not because there was something malfunctioning underneath it all. I really wrestled with that, and I think it was for the good, because now that I can see how elegant it all is, it looks so beautiful!

Exercise 3.60

This exercise gave me a lot of headaches. When I first worked on it 5 years ago, I kind of got correct results with what I wrote, but not quite. I ended up looking up someone else’s solution to it, which worked, and which was fairly close to what I had. I finally figured out what I did wrong when I worked on it again recently: I needed to research how to multiply power series in a computationally efficient manner. It turns out you need to use a method called the Cauchy product, but there’s a twist, because of the way the streams architecture works. A lot of the math sources I looked up use the Cauchy product anyway (including a source on power series multiplication that I cite below), but the reason it needs to be used here is that it automatically collects all like terms as each term of the product is produced. The problem you need to work around is that you can’t go backwards through a stream, except by using stream-ref and indexes, and I’ve gotten the sense from these exercises that when it comes to doing math problems, you’re generally not supposed to do that. (There is an example later in the section, where the book discusses a method Euler devised for accelerating the computation of a series and uses stream-ref to go backwards and forwards inside a stream, but I think that’s an exception.)
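
For reference, and as a pointer rather than a solution: the Cauchy product of A(x) = a₀ + a₁x + a₂x² + … and B(x) = b₀ + b₁x + b₂x² + … is the series whose nth coefficient is cₙ = a₀bₙ + a₁bₙ₋₁ + … + aₙb₀. Notice that each coefficient of the product already has all of its like terms collected, which is exactly the property that matters when you can only move forward through a stream.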

One hint I’ll leave here is that there is a way to do the Cauchy product without using mutable state in variables, and without doing anything particularly elaborate. You can do it using just the streams architecture. Thinking about the operations involved, you don’t have to do the computations strictly left to right, because all of the operations are commutative.

Here are a couple sources that I found helpful when trying to work this out:

Formal Power Series, from Wikipedia. Pay particular attention to the section on “Operations on Formal Power Series.” It gives succinct descriptions of multiplying power series, inverting them (though for Exercise 3.61 I’d only pay attention to the mathematical expression of this), and dividing them (which you’ll need for Exercise 3.62).

I found this video on the Cauchy product very helpful in coming up with my solution for this exercise. Notice the pattern by which the products are computed. This pattern should influence your thinking in figuring out how to carry out the Cauchy product using streams.

It is crucial that you get the solution for this exercise right, because the next two exercises (3.61 and 3.62) will give you no end of headaches if you don’t. They build on each other, and on this exercise.

The exercise says you can try out mul-series using sin(x) and cos(x), squaring both, and adding them together, and you should get 1. How is “1” represented in streams? Well, it makes sense that it will be in the constant-term position (the zeroth position) in the stream. That’s where a scalar value would be in a power series.

There is a concept related to working with power series called a unit series. A unit series is just the list of the coefficients in a power series. If you’ve done the previous exercises where SICP says you’re working with power series, this is what you’re really working with (though SICP doesn’t mention this), and it’s why in 3.61 it has you write a function called “invert-unit-series”.

The unit series equivalent for the scalar value 1 is [1, 0, 0, 0, 0, …].
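
In stream form, one way to write that down (just a sketch, assuming the cons-stream special form from Section 3.5) is:

(define zeros (cons-stream 0 zeros))       ; 0, 0, 0, ...
(define one-series (cons-stream 1 zeros))  ; 1, 0, 0, 0, ..., i.e. the scalar 1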

A note about Exercise 3.62

The exercise talks about using your div-series function to compute tan-series. Here is a good source for finding out how to do that:

Trigonometric Functions

The unit series for tan-series should come out as: [0, 1, 0, 1/3, 0, 2/15, 0, 17/315, …]
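
Since the exercise itself tells you to use div-series to generate tan-series, I don’t think it gives anything away to note that the call follows directly from the identity tan x = sin x / cos x. A sketch, assuming the sine-series and cosine-series definitions from Exercise 3.59 and a working div-series from this exercise:

(define tan-series (div-series sine-series cosine-series))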

Read Full Post »

This is going to strike people as a rant, but it’s not. It’s a recommendation to the fields of software engineering and, by extension, computer science at large, on how to elevate the discussion in software engineering. This post was inspired by a question I was asked to answer on Quora about whether object-oriented programming or functional programming offers better opportunities for modularizing software design into simpler units.

So, I’ll start off by saying that what software engineering typically calls “OOP” is not what I consider OOP. The discussion that this question was about is likely comparing abstract data types, in concept, with functional programming (FP). What they’re also probably talking about is how well each programming model handles data structures, how the different models implement modular functionality, and which is preferable for dealing with said data structures.

Debating stuff like this is so far off from what the discussion could be about. It is a fact that if an OOP model is what is desired, one can create a better one in FP than what these so-called “OOP” proponents are likely talking about. It would take some work, but it could be done. It wouldn’t surprise me if there already are libraries for FP languages that do this.

The point is that what’s actually hampering the discussion about which programming model is better is the set of ideas being discussed, and more broadly, the goals. A modern, developed engineering discipline would understand this. Yes, both programming models are capable of decomposing tasks, but the whole discussion about which does it “better” is off the mark. It has a weak goal in mind.

I remember recently answering a question on Quora regarding a dispute between two people over which of two singers, who had passed away in recent years, was “better.” I said that you can’t begin to discuss which one was better on a reasonable basis until you look at what genres they were in. In this case, I said each singer used a different style. They weren’t trying to communicate the same things. They weren’t using the same techniques. They were in different genres, so there’s little point in comparing them, unless you’re talking about what style of music you like better. By analogy, each programming model has strengths and weaknesses relative to what you’re trying to accomplish, but in order to use them to best effect, you have to consider whether each is even a good fit architecturally for the system you’re trying to build. There may be different FP languages that are a better fit than others, or maybe none of them fit well. Likewise, for the procedural languages that these proponents are probably calling “OOP.” Calling one “better” than another is missing the point. It depends on what you’re trying to do.

Comparisons of programming models in software engineering tend to wind down, at some point, to familiarity: which languages can function as a platform that a large pool of developers know how to use, because no one wants to be left holding the bag if developers decide to up and leave. I think this argument misses a larger problem: how do you replace the domain knowledge those developers had? The most valuable part of a developer’s knowledge base is their understanding of the intent of the software design; for example, how the target business for the system operates, and how that intersects with technical decisions that were made in the software they worked on. Usually, that knowledge is all in their heads. It’s hardly documented. It doesn’t matter that you can replace those developers with people with the same programming skills. Sure, they can read the code, but they don’t understand why it was designed the way it was, and without that, it’s going to be difficult for them to do much that’s meaningful with it. That knowledge is either with other people who are still working at the same business, and/or with the ones who left. Either way, the new people are going to have to be brought up to speed to be productive, which likely means the people who are still there are going to have to take time away from their critical duties to train them. Productivity suffers even if the new people are experts in the required programming skills.

The problem with this approach, from a modern engineering perspective, goes back to the saying that if all someone knows how to use is a hammer, every problem looks like a nail to them. The problem is with the discipline itself; that this mentality dominates the thinking in it. And I must say, computer science has not been helping with this, either. Rather than exploring how to make the process of building a better-fit programming model easier, they’ve been teaching a strict set of programming models for the purpose of employing students in industry. I could go on about other nagging problems that are quite evident in the industry that they are ignoring.

I could be oversimplifying this, but my understanding is that modern engineering has much more of a form-follows-function orientation, and it focuses on technological properties. It does not take a pre-engineered product as a given. It looks at its parts. It takes all of the requirements of a project into consideration, and then applies analysis techniques and this knowledge of technological properties to finding a solution. It focuses a lot of attention on architecture, trying to make efficient use of materials and labor, to control costs. This focus tends to make the scheduling and budgeting for the project more predictable, since estimates are based on known constraints. Engineers also operate on the principle that simpler models (but not too simple) fail less often and are easier to maintain than overcomplicated ones. They use analysis tools that help them model the constraints of the design, again using technological properties as the basis, taking cognitive load off of the engineers.

Another thing is they don’t think about “what’s easier” for the engineers. They think about “what’s easier” for the people who will be using what’s ultimately created, including engineers who will be maintaining it, at least within cost constraints, and reliability requirements. The engineers tasked with creating the end product are supposed to be doing the hard part of trying to find the architecture and materials that fit the requirements for the people who will be using it, and paying for it.

Clarifying this analogy, what I’m talking about when I say “architecture” is not, “What data structure/container should we use?” It’s more akin to the question, “Should we use OOP or FP?”, but it’s more than that. It involves thinking about what relationships between information and semantics best suit the domain for which the system is being developed, so that software engineers can better express the design (hopefully as close to a “spec” as possible), and what computing engine design to use in processing that programming model, so it runs most efficiently. When I talk about “constraints of materials,” I’m analogizing that to hardware and software runtimes, and to their speed and load capacity, in terms of things like frequency of requests, and memory. In short, what I’m saying is that some language and VM/runtime design might be necessary for this analogy to hold.

What this could accomplish is ultimately documenting the process that’s needed—still in a formal language—and using that documentation to run the process. So, rather than requiring software engineers to understand the business process, they can instead focus on the description and semantics of the engine that allows the description of the process to be run.

What’s needed is thinking like, “What kind of system model do we need,” and an industry that supports that kind of thinking. It needs to be much more technically competent to do this. I know this is sounding like wishful thinking, since people in the field are always trying to address problems that are right in front of them, and there’s no time to think about this. Secondly, I’m sure it sounds like I’m saying that software engineers should be engaged in something that’s impossibly hard, since I’m sure many are thinking that bugs are bad enough in existing software systems, and now I’m talking about getting software engineers involved in developing the very languages that will be used to describe the processes. That sounds like I’m asking for more trouble.

I’m talking about taking software engineers out of the task of describing customer processes, and putting them to work on the syntactic and semantic engines that enable people familiar with the processes to describe them. Perhaps I’m wrong, but I think this reorientation would make hiring programming staff based on technical skills easier, in principle, since not as much business knowledge would be necessary to make them productive.

Thirdly, “What about development communities?” Useful technology cannot just stand on its own. It needs an industry, a market around it. I agree, but I think, as I already said, it needs a more technically competent industry around it, one that can think in terms of the engineering processes I’ve described.

It seems to me one reason the industry doesn’t focus on this is that we’ve gotten so used to the idea that our languages and computing systems need to be complex, that they need to be like Swiss Army knives that can handle every conceivable need, because we seem to need those features in the systems that have already been implemented. The reality is they’re complex because a) they have been built using semantic systems that are not well suited to the problem they’re trying to solve, and b) they’re really designed to be “catch all” systems that anticipate a wide variety of customer needs. So, the problem you’re trying to solve is but a subset of that. We’ve been coping with the “Swiss Army knife” designs of others for decades. What’s actually needed is a different knowledge set that eschews the features we don’t need for the projects we want to complete, that focuses on just the elements that are needed, with a practice that focuses on design, and its improvement.

Very few software engineers and computer scientists have had the experience of using a language that was tailored to the task they were working on. We’ve come to think that we need feature-rich languages and/or feature-rich libraries to finish projects. I say no. That is a habit, thinking that programming languages are communication protocols not just with computers, but with software engineers. What would be better is a semantic design scheme for semantic engines, having languages on top of them, in which the project can be spec’d out, and executed.

As things stand, what I’m talking about is impractical. It’s likely there are not enough software engineers around with the skills necessary to do what I’m describing to handle the demand for computational services. However, what’s been going on for ages in software engineering has mostly been a record of failure, with relatively few successes (the vast majority of software projects fail). Discussions like the one described in the question that inspired this post are not helping the problem. What’s needed is a different kind of discussion; I suggest using the topic scheme I’ve outlined here.

I’m saying that software engineering (SE) needs to take a look at what modern engineering disciplines do, and do their best to model that. CS needs to wonder what scientific discipline it can most emulate, which is what’s going to be needed if SE is going to improve. Both disciplines are stagnating, and are being surpassed by information technology management as a viable solution scheme for solving computational problems. However, that leads into a different problem, which I talked about 9 years ago in IT: You’re doing it completely wrong.

Related posts:

Alan Kay: Rethinking CS education

The future is not now

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

It’s been 5 years since I’ve written something on SICP–almost to the day! I recently provided some assistance to a reader of my blog on this exercise (Exercise 1.28). In the process, I of course had to understand something about it. Like with Exercise 1.19, I didn’t think this one was written that clearly. It seemed like the authors assumed that the students were familiar with the Miller-Rabin test, or at least modular arithmetic.

The exercise text made it sound like one could implement Miller-Rabin as a modification of the Fermat’s Little Theorem (FLT) test. From providing my assistance, I found out that one could indeed do it as a modification of the FLT code, though the implementation looked a bit convoluted. When I tried solving it myself, I felt more comfortable ignoring the prior solutions for FLT in SICP, and completely rewriting the expmod routine. Along with that, I wrote some intermediate functions, which produced results that were ultimately passed to expmod. My own solution was iterative.

I found these two Wikipedia articles, one on modular arithmetic, the other on Miller-Rabin, to be more helpful than the SICP exercise text in understanding how to implement it. Every article I found on Miller-Rabin, doing a Google search, discussed the algorithm for doing the test. It was still a bit of a challenge to translate the algorithm to Scheme code, since they assumed an imperative style, and there is a special case in the logic.

It seemed like when it came right down to it, the main challenge was understanding how to calculate factors of n – 1 (n being the candidate tested for primality), and how to handle the special case, while still remaining within the confines of what the exercise required (to do the exponentiation, use the modulus function (remainder), and test for non-trivial roots inside of expmod). Once that was worked out, the testing for non-trivial roots turned out to be pretty simple.
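
To give a flavor of that last part without giving away a full solution, here is a minimal sketch of just the non-trivial-root test on its own (the name and the way it’s factored out are my own, not the book’s): given the candidate n and a value x, it asks whether x squares to 1 modulo n while being neither 1 nor n - 1.

; Sketch only: true if x is a non-trivial square root of 1 modulo n,
; i.e. x squared mod n is 1, although x is neither 1 nor n - 1.
(define (nontrivial-sqrt? x n)
  (and (not (= x 1))
       (not (= x (- n 1)))
       (= (remainder (* x x) n) 1)))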

If you’re looking for a list of primes to test against your solution, here are the first 1,000 known primes.

Read Full Post »

“We need to do away with the myth that computer science is about computers. Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes. Science is not about tools, it is about how we use them and what we find out when we do.”
— “SIGACT trying to get children excited about CS,”
by Michael R. Fellows and Ian Parberry

There have been many times where I’ve thought about writing about this over the years, but I’ve felt like I’m not sure what I’m doing yet, or what I’m learning. So I’ve held off. I feel like I’ve reached a point now where some things have become clear, and I can finally talk about them coherently.

I’d like to explain some things that I doubt most CS students are going to learn in school, but they should, drawing from my own experience.

First, some notions about science.

Science is not just a body of knowledge (though that is an important part of it). It is also venturing into the unknown, and developing within yourself the idea of what it means to really know something, and to understand that knowing something doesn’t mean you know the truth. It means you have a pretty good idea of what the reality of what you are studying is like. You are able to create a description that resembles that reality to such a degree that you are able to make predictions about it, and those predictions can be confirmed within some boundaries that are either known, or can be discovered, and subsequently confirmed by others without the need to revise the model. Even then, what you have is not “the truth.” What you have might be a pretty good idea for the time being. As Richard Feynman said, in science you never really know if you are right. All you can be certain of is that you are wrong. Being wrong can be very valuable, because you can learn the limits of your common sense perceptions, and knowing that, direct your efforts towards what is more accurate, what is closer to reality.

The field of computing and computer science doesn’t have this perspective. It is not science. My CS professors used to admit this openly, but my understanding is that CS professors these days aren’t even aware of this distinction anymore. In other words, they don’t know what science is, but they presume that what they practice is science. CS, as it’s taught, has some value, but what I hope to introduce people to here is the notion that after they have graduated with a CS degree, they are in kindergarten, or perhaps first grade. They have learned some basics about how to read and write, but they have not learned what is really powerful about that skill. I will share some knowledge I have gleaned as someone who has ventured into this perspective as a beginner.

The first thing to understand is that an undergraduate computer science education doesn’t tend to give you any notions about what computing is. It doesn’t explore models of computing, and in many cases, it doesn’t even present much in the way of different models of programming. It presents one model of computing (the Von Neumann model), but it doesn’t venture deeply into concepts of how the model works. It provides some descriptions of entities that are supposed to make it work, but it doesn’t tie them together so that you can see the relationships; see how the thing really works.

Science is about challenging and forming models of phenomena. A more powerful notion of computing is realized by understanding that the point of computer science is not about the application of computing to problems, but about creating, criticizing, and studying the strengths and weaknesses of different models of computing systems, and that programming is a means of modeling our ideas about them. In other words, programming languages and development environments can be used to model, explore, test, and criticize our notions about different computing system models. It behooves the computer scientist to either choose an existing computing architecture represented by an existing language, or to create a new architecture, and a new language, that is suitable to what is being attempted, so that the focus can be maintained on what is being modeled, and on testing that model without a lot of distraction.

As Alan Kay has said, the “language” part of “programming language” is really a user interface. I suggest that it is a user interface to a model of computing, a computing architecture. We can call it a programming model. As such, it presents a simulated environment of what is actually being accomplished in real terms, which enables a certain way of seeing the model you are trying to construct. (As I said earlier, we can use programming (with a language) to model our notions of computing.) In other words, you can get out of dealing with the guts of a certain computing architecture, because the runtime is handling those details for you. You’re just dealing with the interface to it. In this case, you’re using, quite explicitly, the architecture embodied in the runtime, not an API, as a means to an end. The power doesn’t lie in an API. The power you’re seeking to exploit is the architecture represented by the language runtime (a computing model).

This may sound like a confusing round-about, but what I’m saying is you can use a model of computing as a means for defining a new model of computing, whatever best suits the task. I propose that, indeed, that’s what we should be doing in computer science, if we’re not working directly with hardware.

When I talk about modeling computing systems, given their complexity, I’ve come to understand that constructing them is a gradual process of building up more succinct expressions of meaning for complex processes, with a goal of maintaining flexibility in how the sub-processes that make up higher levels of meaning can be used. As a beginner, I’ve found that starting with a powerful programming language in a minimalist configuration, with just basic primitives, was very constructive for understanding this, because it forced me to venture into constructing my own means for expressing what I want to model, and to understand what is involved in doing that. I’ve been using a version of Lisp for this purpose, and it’s been a very rewarding experience, but other students may have other preferences. I don’t mean to prejudice anyone toward one programming model. I recommend finding, or creating, one that fits the kind of modeling you are pursuing.

Along with this perspective, I think, there must come an understanding of what different programming languages are really doing for you, in their fundamental architecture. What are the fundamental types in the language, and for what are they best suited? What are the constructs and facilities in the language, and what do they facilitate, in real terms? Sure, you can use them in all sorts of different ways, but to what do they lend themselves most easily? What are the fundamental functions in the runtime architecture that are experienced in the use of different features in the language? And finally, how are these things useful to you in creating the model you want to attempt? Seeing programming languages in this way provides a whole new perspective on them that has you evaluating how they facilitate your modeling practice.

Application comes along eventually, but it comes late in the process. The real point is to create the supporting architecture, and any necessary operating systems first. That’s the “hypothesis” in the science you are pursuing. Applications are tests of that hypothesis, to see if what is predicted by its creators actually bears out. The point is to develop hypotheses of computing systems so that their predictive limits can be ascertained, if they are not falsified. People will ask, “Where does practical application come in?” For the computer scientist involved in this process, it doesn’t, really. Engineers can plumb the knowledge base that’s generated through this process to develop ideas for better system software, software development systems, and application environments to deploy. However, computer science should not be in service to that goal. It can merely be useful toward it, if engineers see that it is fit to do so. The point is to explore, test, discuss, criticize, and strive to know.

These are just some preliminary thoughts on computer systems modeling. In my research, I’ve gotten indications that it’s not healthy for the discipline to be insular, just looking at its own artifacts for inspiration for new ideas. It needs to look at other fields of study to introduce unorthodox models into the discussion. I haven’t gotten into that yet. I’m still trying to understand some basic ideas of a computer system, using a more mathematical approach.

Related post: Does computer science have a future?

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

This presentation by Bret Victor could alternately be called “Back To The Future,” but he called it, “The Future of Programming.” What’s striking is this is still a possible future for programming, though it was all really conceivable back in 1973, the time period in which Bret set it.

I found this video through Mark Guzdial. It is the best presentation on programming and computing I’ve seen since I watched what Alan Kay has had to say on the subject. Bret did a “time machine” performance set in 1973 on what was accomplished by some great engineers working on computers in the 1960s and ’70s, and generally what those accomplishments meant to computing. I’ve covered some of this history in my post “A history lesson in government R&D, Part 2,” though I de-emphasized the programming aspect.

Bret posed as someone presenting new information in 1973 (complete with faux overhead projector), and as someone trying to predict what the future of programming and computing would be like, given these accomplishments and given where then-current thinking would lead if it progressed.

The main theme, which has been a bit of a revelation to me recently, is this idea of programming being an exercise in telling the computer what you want, and having the computer figure out how to deliver it, and even computers figuring out how to get what they need from each other, without the programmer having to spell out how to do these things step by step. What Bret’s presentation suggested to me is that one approach to doing this is having a library of “solvers,” operating under a set of principles (or perhaps a single principle), that the computing system can invoke at any time, in a number of combinations, in an attempt to accomplish that goal; that operational parameters are not fixed, but relatively fluid.

Alan Kay talked about Licklider’s idea of “communicating with aliens,” and how this relates to computing, in this presentation he gave at SRII a couple of years ago. A “feature” of many of Alan’s videos is that you don’t get to see the slides he’s showing, which can make it difficult to follow what he’s talking about. So I’ll provide a couple of reference points. At about 17 minutes in, “this guy” he’s talking about is Licklider, and a paper he wrote called “Man-Computer Symbiosis.” At about 44 minutes in, I believe Alan is talking about the Smalltalk programming environment.

http://vimeo.com/22463791

I happened to find this from “The Dream Machine,” by M. Mitchell Waldrop, as well, sourced from an article written by J.C.R. Licklider and Robert Taylor called, “The Computer as a Communication Device,” in a publication called “Science & Technology” in 1968, also reprinted in “In Memoriam: J.C.R. Licklider, 1915-1990,” published in Digital Systems Research Center Reports, vol. 61, 1990:

“Modeling, we believe, is basic and central to communications.” Conversely, he and Taylor continued, “[a successful communication] we now define concisely as ‘cooperative modeling’–cooperation in the construction, maintenance, and use of a model. [Indeed], when people communicate face to face, they externalize their models so they can be sure they are talking about the same thing. Even such a simple externalized model as a flow diagram or an outline–because it can be seen by all the communicators–serves as a focus for discussion. It changes the nature of communication: When communicators have no such common framework, they merely make speeches at each other; but when they have a manipulable model before them, they utter a few words, point, sketch, nod, or object.”

I remember many years ago hearing Alan Kay say that what’s happened in computing since the 1970s “has not been that exciting.” Bret Victor ably justified that sentiment. What he got across (to me, anyway) was a sense of tragedy that the thinking of the time did not propagate and progress from there. Academic computer science, and the computer industry acted as if this knowledge barely existed. The greater potential tragedy Bret expressed was, “What if this knowledge was forgotten?”

The very end of Bret’s presentation made me think of Bob Barton, a man that Alan has sometimes referred to when talking about this subject. Barton was someone who Alan seemed very grateful to have met as a graduate student, because he disabused the computer science students at the University of Utah of their notions of what computing was. Alan said that Barton freed their minds on the subject, which opened up a world of possibilities that they could not previously see. In a way, Bret tried to do the same thing by leaving the ending of his presentation open-ended. He did not say that meta-programming “is the future,” just that it is one really interesting idea that was developed decades ago. Many people in the field think they know what computing and programming are, but these are still unanswered questions. I’d add to that message that what’s needed is a spirit of adventure and exploration. Like our predecessors, we’ll find some really interesting answers along the way, which will be picked up by others and incorporated into the devices the public uses, as happened before.

I hope that students just entering computer science will see this and carry the ideas from it with them as they go through their academic program. What Bret presents is a perspective on computer science that is not shared by much of academic CS today, but if CS is to be revitalized one thing it needs to do is “get” what this perspective means, and why it has value. I believe it is the perspective of a real computer scientist.

Related post: Does computer science have a future?

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I heard earlier this month that Google Wave was going to be shut down at the end of the year, because it was not a hit with academia (h/t to Mark Guzdial). I was surprised. What was interesting and rather shocking is that one of the reasons it failed was that academics couldn’t figure out how to use it. They said it was too complex. In other words, the product required some training. I’ve used Wave, and it’s about as easy to use as an e-mail application, though there are some extra concepts that need to be acquired to use it. I saw it as kind of a cross between e-mail and teleconferencing. Maybe if people had been told this they would’ve been able to pick up the concepts more easily. It’s hard for me to say.

I thought for sure Wave would be a hit somewhere. I assumed business and bloggers would adopt it. The local Lisp user group I’ve been attending has been using Wave for several months now, and we’ve made it an integral part of our meetings. I think we’ll get along without it, but we were sorry to hear that it’s going away.

As I’ve reflected on it I’ve realized that Wave has some significant shortcomings, though it was a promising technology. I first heard about it in a comment that was left on my blog a year ago in response to my post “Does computer science have a future?” Come to think of it, Google Wave’s fate is an interesting commentary on that topic. Anyway, I watched Google’s feature video where it was debuted. It reminded me a bit of Engelbart’s NLS from 1968, though it really didn’t hold a candle to NLS. It had a few similarities I could recognize, but its vision was much more limited. As the source article for this story reveals, the Google Wave team used the idea of extending the concept of e-mail as their inspiration, adding idioms of instant messaging to the metaphor. That explains some things. What I think is a shame is they could’ve done so much more with it. One thing I could think of right off the bat was having the ability to link between waves, and entries in different waves, so that people could refer back and forth to previous/subsequent conversations on related topics. I mean, how hard would that have been to add? In business terms wouldn’t this have “added value” to the product concept? What about bringing over an innovation from NLS and adding video conferencing? Maybe businesses would’ve found Wave appealing if it had some of these things. It’s hard to say, but apparently the Wave team didn’t think of them.

At our Lisp user group meeting last week we got to talking about the problems with Wave after we learned that it was going away. A topic that came up was that Google realized Wave was going to raise costs in computing resources, given the way it was designed, as a client/server model. All Waves originated on Google’s servers. An example I heard thrown out was that if one person had 10,000 friends all signed up on one wave, and they were all on there at the same time, every single keystroke made by one person would cause Google’s server to have to send the update to every one of the people signed up for the wave–10,000 people. There was something about how they couldn’t monetize the service effectively as well, especially for that kind of load.

As they talked about this it reminded me of what Alan Kay said about Engelbart’s NLS system at ETech 2003. NLS had a similar client/server system design to Google Wave. I remember he said that the scaling factor with NLS was 2N, because of its system architecture. This clearly was not going to scale well. It was perhaps because of this that several members of Engelbart’s team left to go work on the concept of personal computing at Xerox PARC. What they were after at PARC was a network scaling factor of N², and the idea was they would accomplish this using a peer-to-peer architecture. I speculated that this tied in with the invention of Ethernet at PARC, which is a peer-to-peer networking scheme (one that protocols like TCP/IP would later run over). I remember Kay and a fellow engineer showed off the concept with Croquet at ETech. It was quite relevant to what I talk about here, actually, because the demo showed how updates of multiple graphical objects on one Squeak instance (Croquet was written in Squeak) were propagated very quickly to another instance over the network, without a mediating central server. And the scheme scaled to multiple computers on the same network.

Google Wave’s story is a sorry tale, a clear case where the industry’s collective ignorance of computer history has hurt technological progress. In this case it’s possible that one reason Wave did not go well is that the design team did not learn lessons that had already been learned about 38 years ago.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

As I’ve gone through SICP I’ve noticed a couple exercises that are very similar to programming problems we were given when I took Comparative Programming Languages in my sophomore year of college, when we were covering Lisp: 8 Queens (in Section 2.2.3 of SICP), and symbolic differentiation (in Section 2.3.2). I wonder if our professor got these problems out of this book, though they could’ve been classic computer science or AI problems. I’m going to focus on the symbolic differentiation exercises in this post.

The problem we were given in Comparative Programming Languages was to create a series of functions that would carry out all of the symbolic differentiation rules (for addition/subtraction, multiplication, division, and exponentiation) for expressions that had two, maybe a few more operands per operator (except for exponents), and simplify the resulting algebraic expressions. Our professor added, “I’ll give extra credit to anyone who implements the Chain Rule.” He gave us some helper functions to use, but we were expected to create the bulk of the code ourselves. In addition we could use no destructive operations (no set or setq). He wanted everything done in a pure functional style right off the bat. We had about a week or so to create a solution. We had only started covering Lisp the week before, and it was the first time in my life I had ever seen it. He never explained to us how a functional language worked. He was one of the hardest CS professors I had. What made it worse was he was terrible at teaching what he was trying to convey. He often came up with these really hard but interesting problems, though.

Creating the rules for addition/subtraction, and exponentiation, was pretty easy once I had beaten my head against the wall for a while trying to understand how Lisp worked (I don’t think I ended up implementing the rules for multiplication and division). The hard part, which I didn’t even try, was simplifying the resulting expressions. I didn’t have the skill for it.

Having gone through that experience I was surprised by the treatment SICP gave to this exercise. On the first iteration it gave away almost everything I had tried to figure out myself for this assignment. I was wondering, “Gosh. This seemed like such a challenging book in the first chapter. It’s practically giving away the answers here!”

I’ll give the code for “deriv” as I think it’ll provide some clarity to what I’m talking about:

(define (deriv exp var)
   (cond ((number? exp) 0)
         ((variable? exp)
          (if (same-variable? exp var) 1 0))
         ((sum? exp)
          (make-sum (deriv (addend exp) var)
                    (deriv (augend exp) var)))
         ((product? exp)
          (make-sum
             (make-product (multiplier exp)
                           (deriv (multiplicand exp) var))
             (make-product (deriv (multiplier exp) var)
                           (multiplicand exp))))
         (else (error "unknown type expression -- DERIV" exp))))

The other functions that are needed to make this work are in the book. In my own version I modified the “else” clause, because PLT Scheme doesn’t implement an “error” function; I use “display” instead.

SICP doesn’t ask the student to implement the differentiation rule for division, or the Chain Rule, just addition/subtraction, multiplication, and one of the exponentiation rules. The point wasn’t to make the exercise something that was hard to implement. It wasn’t about challenging the student on “Do you know how to do symbolic differentiation in code, recognize patterns in the algebra, and figure out how to simplify all expressions no matter what they turn out to be?” It wasn’t about making the student focus on mapping the rules of algebra into the computing domain (though there is some of that). It was about realizing the benefits of abstraction, and what a realization it is!

SICP asked me to do something which I had not been asked to do in all the years that I took CS, and had only occasionally been asked to do in my work. It asked me to take an existing structure and make it do something for which it did not appear to be intended. It felt like hacking, almost a little kludge-ish. I can see why I wasn’t asked to do this in college. What CS generally tried to do at the time was create good and proper programmers, instilling certain best practices. Part of that was writing clear, concise code, which was generally self-explanatory. A significant part of it as well was teaching structured programming, and demanding that students use it, in order to write clear code. That was the thinking.

What amazed me is SICP had me using the same “deriv” function–making no changes to it at all, except to add abstractions for new operations (such as exponentiation and subtraction)–to operate on different kinds of expressions, such as:

(+ (+ (expt x 2) (* 2 x)) 3)

(+ (+ x 3) (* 2 x) 3)

(((x + 3) + (2 * x)) + 3)

(x + 3 + 2 * x + 3) — (with operator precedence)

It could deal with all of them. All that was required was for me to change the implementation beneath the abstractions “deriv” used for each type of expression.
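
To illustrate what changing the implementation beneath the abstractions looks like, here’s a sketch (my own, not the book’s code) of how the sum abstraction might be redefined for fully parenthesized infix expressions like ((x + 3) + (2 * x)), leaving deriv itself untouched; the product abstraction gets reworked the same way:

; Sum abstraction for fully parenthesized infix expressions.
(define (sum? e) (and (pair? e) (eq? (cadr e) '+)))
(define (addend e) (car e))
(define (augend e) (caddr e))
(define (make-sum a1 a2)
  (cond ((and (number? a1) (number? a2)) (+ a1 a2))  ; fold constants
        ((and (number? a1) (= a1 0)) a2)             ; 0 + x = x
        ((and (number? a2) (= a2 0)) a1)             ; x + 0 = x
        (else (list a1 '+ a2))))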

It brought to mind what Alan Kay said about objects–that an object is a computer. The reason I made this connection is the exercises forced me to break the semantic mapping I thought “deriv” was implementing, so that I could see “deriv” as just a function that computed a result–a specialized computer. Kay said that the abstraction is in the messages that are passed between objects in OOP, not in the objects. Here, I could see that the abstraction was in the function calls that were being made inside of “deriv”, not the code of “deriv”. The code is a pattern of use which is structured to follow certain rules. I could look at the function algebraically. In reality, the function names don’t mean anything that’s hard and fast. I thought to myself, “If I changed the implementation beneath the abstractions yet again–still not changing ‘deriv’–I might be able to do useful computing work on a different set of symbols, ones that aren’t even related to classical algebra.”

I recalled a conversation I remember reading online several years ago where a bunch of programmers were arguing over what were the best practices for creating abstractions in OOP. The popular opinion was that the objects should be named after real world objects, and that their behavior should closely model what happened in the real world. One programmer, though, had a different opinion. Whoever it was said that the goal shouldn’t be to structure objects and their relationships according to the real world equivalents, virtualizing the real world. Rather the goal should be to find computational processes that happen to model the real world equivalents well. So in other words, the goal, according to the commenter, should be to find a computer (I use the term “find” broadly, meaning to create one) that does what you need to do, and does it well, not try to take any old computing model and force it into a fashion that you think would model the real world equivalents well. The computer world is not some idealized version of the real world. It is its own world, but there are places where computing and real world processes intersect. The challenge is in finding them.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I want to discuss what’s introduced in Section 2.1.3 of SICP, because I think it’s a really important concept.

When we think of data in typical CS and IT settings, we think of strings and numbers, tables, column names (or a reference from an ordinal value), XML, etc. We think of things like “Item #63A955F006-006”, “Customer name: Barbara Owens”, “Invoice total: $2,336.00”.

How do we access that data? We typically use SQL, either directly or through stored procedures, using relations to serialize/deserialize data to/from a persistence engine that uses indexing (an RDBMS). This returns an organized series of strings and numbers, which are then typically organized into objects for a time by the programmer, or a framework. And then data is input into them, and/or extracted from them to display. We send data over a network, or put it into the persistence engine. Data goes back and forth between containers, each of which use different systems of access, but the data is always considered to be “dead” numbers and strings.

Section 2.1.2 introduces the idea of data abstraction, and I’m sure CS students have long been taught this concept, that data abstraction leads to more maintainable programs because you “program to the interface, not to the object”. This way you can change the underlying implementation without affecting too much how the rest of the program runs. SICP makes the same point.

In 2.1.2 it uses a simple abstraction, a rational number (a fraction), as an example. It defined 3 functions: make-rat, numer, and denom. Make-rat would create a rational number abstraction out of two numbers: a numerator and denominator. Numer and denom return the numerator and denominator from the abstraction, respectively. It calls make-rat a “constructor”, and numer and denom “selectors”.
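
The implementation behind those three is about as simple as it gets. Roughly (leaving aside the gcd reduction the book adds), it’s just a pair:

(define (make-rat n d) (cons n d))  ; constructor: glue numerator and denominator together
(define (numer x) (car x))          ; selector: pull out the numerator
(define (denom x) (cdr x))          ; selector: pull out the denominator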

In 2.1.3 it questions our notion of “data”, though it starts from a higher level than most people would. It asserts that data can be thought of as this constructor and set of selectors, so long as a governing principle is applied which says a relationship must exist between the selectors such that the abstraction operates the same way as the actual concept would. So if you use (in pseudo-code):

x = (make-rat n d)

then (numer x) / (denom x) must always produce the same result as n / d. The book doesn’t mention this at this point, but in order to be really useful, the abstraction would need to work the same way that a rational number would when arithmetic operators are applied to it.

So rather than creating a decimal out of a fraction, as would be the case in most languages:

x = 1 / 4 (which equals 0.25)

You can instead say:

x = (make-rat 1 4)

and this representation does not have to be a decimal, just the ratio 1 over 4, but it can be used in the same way arithmetically as the decimal value. In other words, it is data!
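
The book’s add-rat illustrates what “used in the same way arithmetically” means in practice: the arithmetic is written entirely in terms of the constructor and selectors, never in terms of how the rational number is actually stored underneath.

(define (add-rat x y)
  (make-rat (+ (* (numer x) (denom y))
               (* (numer y) (denom x)))
            (* (denom x) (denom y))))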

In Section 2.1.2, the abstraction used for a rational number was a “pair”, a cons structure (created by executing (cons a b)). In 2.1.3 it shows that the same thing that was done to create a rational number data abstraction can be applied to the concept of a “pair”:

This point of view can serve to define not only “high-level” data objects, such as rational numbers, but lower-level objects as well. Consider the notion of a pair, which we used in order to define our rational numbers. We never actually said what a pair was, only that the language supplied procedures cons, car, and cdr for operating on pairs. But the only thing we need to know about these three operations is that if we glue two objects together using cons we can retrieve the objects using car and cdr. That is, the operations satisfy the condition that, for any objects x and y, if z is (cons x y) then (car z) is x and (cdr z) is y. Indeed, we mentioned that these three procedures are included as primitives in our language. However, any triple of procedures that satisfies the above condition can be used as the basis for implementing pairs. This point is illustrated strikingly by the fact that we could implement cons, car, and cdr without using any data structures at all but only using procedures. Here are the definitions:

(define (cons x y)
  (define (dispatch m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m))))
  dispatch) 

(define (car z) (z 0)) 

(define (cdr z) (z 1))

This use of procedures corresponds to nothing like our intuitive notion of what data should be. Nevertheless, all we need to do to show that this is a valid way to represent pairs is to verify that these procedures satisfy the condition given above.

This example also demonstrates that the ability to manipulate procedures as objects automatically provides the ability to represent compound data. This may seem a curiosity now, but procedural representations of data will play a central role in our programming repertoire. This style of programming is often called message passing.

So, just to make it clear, what “cons” does is construct a procedure that “cons” calls “dispatch” (I say it this way, because “dispatch” is defined in cons’s scope), and return that procedure to the caller of “cons”. The procedure that is returned can then be used as data since the above “car” and “cdr” procedures produce the same result as the “car” and “cdr” procedures we’ve been using in the Scheme language.

SICP mentions in this section that message passing will be used further as a basic tool in Chapter 3 for more advanced concepts. Note that I have talked about message passing before. It is central to the power of object-oriented programming.

Note as well that the text implies an analog between procedures, in a Lisp language, and objects. The above example is perhaps a bad way to write code to get that across. In the corresponding lecture for this section (Lecture 2b, “Compound Data”) Abelson used a lambda ( (lambda (m) (cond …)) ) instead of “(define (dispatch m) …)”. What they meant to get across is that just as you can use “cons” multiple times in Scheme to create as many pairs as you want, you can use this rendition of “cons” to do the same. Instead of creating multiple blobs of memory which just contain symbols and pointers (as pairs are traditionally understood), you’re creating multiple instances of the same procedure, each one with the different pair values that were assigned embedded inside them. These procedure instances exist as their own entities. There is not just one procedure established in memory called “dispatch”, which is called with pieces of memory to process, or anything like that. These procedure instances are entities “floating” in memory, and are activated when parameters are passed to them (as the above “car” and “cdr” do). These procedures are likewise garbage collected when nothing is referring to them anymore (they pass out of existence).
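
For reference, the lecture’s formulation looks roughly like this (this is my paraphrase of it, using the same 0/1 indexing as the book’s version so the “car” and “cdr” above still work): each call to “cons” returns a fresh anonymous procedure with x and y captured inside it.

; Roughly the lecture's version: cons returns an anonymous procedure directly.
(define (cons x y)
  (lambda (m)
    (cond ((= m 0) x)
          ((= m 1) y)
          (else (error "Argument not 0 or 1 -- CONS" m)))))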

In the above example, “cons” is the constructor (of a procedure/object called “dispatch”), and “car” and “cdr” are the selectors. What we have here is a primitive form of object-orientation, similar to what is known in the Smalltalk world, with the mechanics of method dispatch exposed. The inference from this is that objects can also be data.
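
To make the message-passing flavor more obvious, the same idea can be written with symbolic messages instead of 0 and 1. This is my own variation, not code from the book:

(define (make-point x y)
  (define (dispatch m)
    (cond ((eq? m 'x) x)
          ((eq? m 'y) y)
          (else (error "Unknown message -- MAKE-POINT" m))))
  dispatch)

(define p (make-point 3 4))

(p 'x)   ; "send the message x to p", returning 3
(p 'y)   ; "send the message y to p", returning 4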

What SICP begins to introduce here is a blurring of the lines between code and data, though at this point it only goes one way, “Code is data.” Perhaps at some point down the road it will get to sophisticated notions of the inverse: “Data is code”. The book got into this concept in the first chapter just a little, but it was really simple stuff.
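
To illustrate the distinction as I understand it, here is a small sketch. The first part is what this section is doing; the second uses quote and eval (I’m using the built-in list, car, and cons here, not the procedural versions above, and the exact calling convention for eval varies between Scheme dialects, so take that line as an assumption):

; "Code is data": procedures stored and passed around like any other values
(define operations (list + *))
((car operations) 2 3)   ; => 5, applying the + that was stored in the list

; "Data is code": a quoted list treated as an expression and handed to the evaluator
(define expr '(+ 2 3))
(eval expr (interaction-environment))   ; => 5, in Schemes that provide interaction-environment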

Getting back to our traditional notions of data, I hope this will begin to show how limited those notions are. If procedures and objects can be data, why can’t we transmit, receive, persist, and search for them just as we can with “dead” data (strings and numbers)? In our traditional notions of CS and IT (assuming they’re focused on using OOP) we use classes to define methods for dealing with data, and elemental interactions with other computing functions, such as data storage, display, interaction between users and other computers, etc. We have a clear notion of separating code (functionality) from data, with the exception of objects that get created while a program is running. In that case we put a temporary “persona” over the real data to make it easier to deal with, but the “persona” is removed when data is stored and/or we exit the program. What we in the whole field neglect is that objects can be data, but they can only really be such if we can do everything with them that we now do with the basic “dead” forms of data.

There is at least one solution out there that is designed for enterprises, and which provides inherent object persistence and search capabilities, called Gemstone. It’s been around for 20+ years. The thing is most people in IT have never heard of it, and would probably find the concept too scary and/or strange to accept. From what I’ve read, it sounds like a good concept. I think such a system would make application design, and even IT system design if it was done right, a lot more straightforward, at least for systems where OO architecture is a good fit. Disclaimer: I do not work for Gemstone, and I’m not attempting to advertise for them.

As I think about this more advanced concept, I can see some of where the language designers of Smalltalk were going with it. In it, objects and their classes, containing information, can be persisted and retrieved in the system very easily. A weakness I notice now in the language is there’s no representational way to create instance-specific functionality, as exists in the above Scheme example. Note that the values that are “consed” are inlined in “dispatch”. There’s a way to put instance-specific code into an object in Smalltalk, but it involves creating a closure–an anonymous object–and you would essentially have to copy the code for “dispatch”. You would have to treat Smalltalk like a functional language to get this ability. The reason this interests me is I remember Alan Kay saying once that he doesn’t like the idea of state being stored inside of an object. Well, the above Scheme example is a good one for demonstrating “stateless OOP”.

Edit 7-9-2010: After I posted this article I got the idea to kind of answer my own question, because it felt insensitive of me to just leave it hanging out there when there are some answers that people already know. The question was, “Why don’t we have systems of data that allow us to store, retrieve, search, and transmit procedures, objects, etc. as data?” I’ve addressed this issue before from a different perspective. Alan Kay briefly got into it in one of his speeches that’s been on the internet. Now I’m expanding on it.

The question was somewhat rhetorical to begin with. It’s become apparent to me as I’ve looked at my programming experience from a different perspective that a few major things have prevented it from being resolved. One is we view computers as devices for automation, not as a new medium. So the idea of computers adding meta-knowledge to information, and that this meta-knowledge would be valuable to the user, is foreign, though slowly it’s becoming less so with time, particularly in the consumer space. What’s missing is a confidence that end users will be capable of, and want to, manipulate and modify that meta-knowledge. A second reason is that each language has had its own internal modeling system, and for quite a while language designers have gotten away with this, because most people weren’t paying attention to that. This is beginning to change, as we can see in the .Net and Java worlds, where a common modeling system for languages is preferred. A third reason is that with the introduction of the internet, openness to programming is seen as a security risk. This is partly because of bad system design, and partly because, in the name of trying to be “all things to all people,” we have systems that compromise and accept some risk: they preserve backward compatibility with older models of interaction that became less secure once wide-scale networking arrived, and they cater to programmers’ desires for latitude and features so that the platform will be more widely accepted, or so the thinking goes.

No one’s figured out a way to robustly allow any one modeling system to interact or integrate with other modeling systems. There have been solutions developed to allow different modeling systems to interact, either via XML over networking protocols, or by developing other languages on top of a shared runtime. I’ve seen efforts to transmit objects between two specific runtimes, .Net and the JVM, but this area of research is not well developed. Each modeling system creates its own “walled garden,” usually with some sort of “escape hatch” to allow a program written in one system to interact with another (usually a C interface or some FFI (foreign function interface)), which isn’t that satisfactory. It’s become a bit like what used to be a plethora of word processing and spreadsheet document file formats, each incompatible with the others.

Programmers find file format standards easier to deal with, because they only involve loading binary data into a data structure, and/or parsing a text file into such a structure, but this limits what can be done with the information that’s in a document, because it takes control away from the recipient over how the information can be used and presented. As I’ve discussed earlier, even our notions of “document” are limiting. Even though it could serve a greater purpose, it seems not much time has been invested in trying to make different modeling systems work seamlessly with each other. I’m not saying it’s easy, but it would help unleash the power of computing.

Code is encoded knowledge. Knowledge should be fungible such that if you want to take advantage of it in a modeling system that is different from the one it was originally encoded in, it should still be accessible through some structure that is compatible with the target system. I understand that if the modeling system is low level, or more primitive, the result is not going to look pretty, but it could still be accessed through some interface. What I think should be a goal is a system that makes such intermingling easy to create, rather than trying to create “runtime adapters”. This would involve understanding what makes a modeling system possible, and focusing on creating a “system for modeling systems”. You can see people striving for this in a way with the creation of alternative languages that run on the JVM, but which have features that are difficult to include with the Java language itself. Java is just used as a base layer upon which more advanced modeling systems can be built. This has happened with C as well, but Java at least allows (though the support is minimal) a relatively more dynamic environment. What I’m talking about would be in a similar vein, except it would allow for more “machine models” to be created, as different modeling systems expect to run on different machine architectures, regardless of whether they’re real or virtual.

I think the reason for this complexity is, as Alan Kay has said, we haven’t figured out what computing is yet. So what language designers create reduces everything down to one or more aspects of computing, which is embedded in the design of languages.

So my question is really a challenge for future reference.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

Once again, I return to the technique of using a metaphor created by a recent South Park episode (Season 13, Episode 10) to illustrate the way things are (in an entirely different subject). Warning: There is adult, offensive language in the South Park video clips I link to below. My usual admonition applies. If you are easily offended, please skip them.

The episode is about the popular perception of wrestling, promoted by the WWE, vs. the real sport. It uses the same theme as an earlier episode I used, which satirized Guitar Hero: The pop culture takes something which involves discipline and hard work, and has some redeeming quality, and turns it into something else entirely with no redeeming value. Rather than challenging us to be something more, it revels in what we are.

This is a follow-up post to “Does computer science have a future?” It fleshes out the challenge of trying to promote a real science in computing…with some satire.

I’ve been getting more exposure to the evolution of CS in universities, which extends to the issue of propping up CS in our high schools (which often just means AP CS), though there’s been some talk of trying to get it into a K-12 framework.

University CS programs have been hurting badly since the dot-com crash of 2001. Enrollments in CS dropped off dramatically to levels not seen since the 1970s. We’re talking about the days when mainframes and minicomputers seemed to be the only options. It’s not too much of a stretch to say that we’ve returned to that configuration of mainframes and terminals, and the thinking that “the world needs only 5 computers.”

After looking at the arc of the history of computing over the last 30 years, one could be excused for thinking that an open network like the internet and an open platform like the personal computer were all just a “bad trip,” and that things have been brought back to a sense of reasonable sobriety. This hides a deeper issue: Who is going to understand computing enough to deal with it competently? Can our universities orient themselves to bring students who have been raised in an environment of the web, video game consoles, iPods, and smart phones–who have their own ideas about what computers are–to an understanding of what computing is, so they can carry technological development forward?

Going even deeper, can our universities bring themselves to understand the implications of computing for their own outlook on the world, and convey that to their students? I ask this last question partly because of my own experience with computing and how it has allowed me to think and perceive. I also ask it because Alan Kay has brought it up several times as an epistemological subject that he’d like to see schools ponder, and in some way impart to students: thinking about systems in general, not just in computers.

The point of the internet was to create a platform upon which networks could be built, so they could be used and experimented with. It wasn’t just built so that people could publish porn, opinion, news, music, and video, and so that e-commerce, information transfer, chat, and internet gaming could take place.

The personal computer concept was not just created so that we could run apps. on it, and use it like a terminal. It was created with the intent of making a new medium, one where ideas could be exchanged and transactions could take place, to be sure, but also so that people could mold it into their own creation if they wanted. People could become authors with it, just as with pen, paper, and the typewriter, but in a much different sense.

But, back to where we are now…

Most high schools across the country have dropped programming courses completely, or they’re just hanging on by a thread (and in those cases an AP course is usually all that’s left). The only courses in computer literacy most schools have offered in the last several years are in how to use Microsoft Word and Excel (and maybe how to write scripts in Office using VBA), and how to use the web, Google, and e-mail.

CS departments in universities are desperate to find something that gives them relevance again, and there’s a sense of “adapt or die.” I’ve had conversations with several computing educators at the university and high school level about my own emerging perspective on CS, which is largely based on the material that Alan Kay and Chris Crawford have put forward. I’ve gotten the ear of some of them. I’ve participated in discussions in online forums where there are many CS professors, but only a rare few seem to have the background and open-mindedness to understand and consider what I have to say. These few like entertaining the ideas for a bit, but then forget about them, because they don’t translate into their notions of productive classroom activities (implementations). I must admit I don’t have much of a clue about how to translate these ideas into classroom activities yet, either, much less how to sell them to curriculum committees. I’m just discovering this stuff as I go. I guess I thought that by putting the ideas out there, along with a few other like-minded people, at least a few teachers would catch on, become inspired, and start really thinking about translating them into curriculum and activities.

I’ve been getting a fair sense from afar about how hard it is for computing educators to even think about doing this. For one, academics need support from each other. Since these ideas are alien to most CS educators, it’s difficult to get support for them. I’ve been surprised, because I’ve always thought that it’s the teachers who come up with the curriculum, and in this “time of crisis” for CS they would be interested in a new perspective. It’s actually an old perspective, but it hasn’t been implemented successfully on a wide scale yet, and it’s a more advanced perspective than what has been taught in CS in most places for decades.

The challenge is understandable to me now at one level, because any alternative point of view on CS, even the ones currently being taught in some innovative university programs, runs into the buzz saw of the pre-existing student perspective on computing (prepare yourself, there’s an analogy coming):

WWE: “You took my girl, and my job”

They come in with a view of computing driven by the way the pop culture defines it. In the story (follow the above link), the WWE defines wrestling for the kids. Trying to teach anything akin to “the real thing” (using computing in deep and creative ways to create representations of our ideas and test them, and possibly learn something new in the process) tends to get weird looks from students (though not from all):

The fine art of “wrastling”

There’s a temptation on the part of some to take away from this experience the idea that the material they’re presenting is no longer relevant, and that the students’ perspectives should be folded into what’s taught. I agree with this up to a point. I think that the presentation of the material can be altered to help students better relate to the subject, but I draw the line at watering down the end goal. I think that subjects have their own intrinsic worth. Rather than trying to find a way to help more students pass by throwing out subjects that seem too difficult, there should instead be remedial support that helps willing students rise up to a level where they can understand the material. To me, the whole idea of a university education is to get students to think and perceive at a more sophisticated level than when they came in. CS should be no different. You’d be surprised, though, how many CS professors think the more important goal is to teach only about existing technology and methodologies, the pop culture.

Talking to most other CS educators at the high school and university level has been an eye-opening and disheartening experience, because we get our terminology confused. I mean to say one thing, and to my amazement I get people who honestly think I’m saying the complete opposite. In other cases we have a different understanding of the same subject, and so the terms we use get assigned different meanings, and I come to a point where I realize we can’t understand each other. The background material for where we’re coming from is just too different.

Many CS professors (whom I have not met, though I’ve been hearing about them) apparently think all of this “real thing” stuff is quaint, but they have real work to do, like “training IT professionals for the 21st century.” Phah! Guess what, people. Java is the new Cobol (in lieu of Fortran, no less…). Web apps. are the new IBM 3270 apps. Remember those days? Welcome to the 1970s! And by the way, isn’t CS starting to look like CIS? Just an observation…

(A little background on the clip below: In the story, after the failed attempt at learning real wrestling (in the clip above), the South Park kids form their own “wrestling show,” which is just play acting with violence, complete with the same scandal-ridden themes of the “WWE wrestling” they admire so much. A bunch of “redneck” adults in South Park gather around to watch in rapt attention.)

“What you mean it ain’t real?”

Note I am not trying to call anyone “stupid” (the wrestling teacher in South Park has a penchant for calling people that), though I share in the character’s frustration with people who are unwilling to challenge their own assumptions.

I get a sense that with many CS professors the bottom line is it’s all about supporting the standard curriculum in order to turn out what they believe are trained IT professionals who are capable of working with existing technology to create solutions employers need. I wonder if they’ve talked to any software developers who work out in the field. I’m sure they’d get an earful about how CS graduates tend to not be of high quality anymore.

The faculty support their departmental goals and the existing educational field. The belief is, “Dammit! We’ve gotta have something called ‘CS’, because something is better than nothing.” This ends up meaning that they accommodate the dominant perspective that students have on computing. This ultimately goes back to the dominant industry vision, which now has nothing to do with CS.

When I’ve found a receptive CS professor, what I’ve suggested as a remedy, along with “moving the needle” back to something resembling a real science, is that the CS community needs to invent educational tools: Their own educational artifacts, and their own programming languages. This still goes on, though it’s not happening nearly as much as it did 40 years ago. I think that the perspectives of students need to be taken into consideration. The models used for teaching need to create a bridge between how they think of computing coming in, and where the CS discipline can take them.

The standard languages that most CS departments have been using for the past 14 years or so are C++, and now Java. C++ came from Bell Labs. Java came from Sun Microsystems. Both were created for industrial use by professionals, not people who’ve never programmed a computer before in their lives! In addition, they’re not that great for doing “real CS”. I’ve only heard of Scheme being recommended for that (along with some select course material, of course). I haven’t worked with it, but Python seems to get recommended from time to time by innovative people in CS. It’s now used in the introductory CS course at MIT. My point is, though, there needs to be more of an effort put into things like this in the CS academic community.

What’s at stake

In my previous post, which I link to above, I asked the question Alan Kay asked in his presentation: “Does CS have a future?” I kind of punted, giving a general answer that has characterized the field through the last several decades. I figured it would continue to be the case. This is going to sound real silly, but this next scene in South Park actually helped me see what Kay was talking about: We could lose it all! The strand that spans the gulf between ignorance and “the real thing” (a connection which is already extremely tenuous) could snap, and CS would be all but dead:

This isn’t “wrastling”!

What came to mind is that it wouldn’t necessarily mean that the programs called “CS” would be dead, but what they teach would end up being something so different and shallow as to not even resemble the traditional CS curriculum that’s existed for 30+ years.

What I’ve learned is there’s a lot of resistance to bringing a more powerful form of CS into universities and our school system. This is not only coming from the academics. It’s also coming from the students. This is because, ironically, it’s seen as weak and irrelevant. I’ve already given you a feel for where the student perspective is at. Alan Kay identified a reason why most CS academics don’t go for it: They think what they’re teaching is it! They think they’re teaching “real CS.” They have little to no knowledge of the innovative CS that was done in the 1960s and 70s. They couldn’t even recall it if they tried, because they never studied it. They’re historically illiterate, just like our industry.

High school “CS” teachers (why not just call their courses “programming”? That’s what they used to be called) are hapless followers of the university system. They’re less educated than the CS professors about what they do. From what I’ve heard about the standards committees that govern curriculum for the public schools, they’d have no clue about what “real CS” is if it were shown to them. This isn’t so much a gripe of mine as an acknowledgment that the public schools are not designed for teaching the different modes of thought that have been invented through the ages. There’s no point in complaining about a design, except to identify that it’s flawed, and why. The answer is to redesign the system and then criticize it if the intent of the design is not being met.

My sense of the historical ignorance in our field, having been a part of it since I was a teenager, is people take it as an article of faith that there’s been a linear progression of technological improvement. So why look at the old artifacts? They’re old and they’re obviously worse than what we have now. That’s the thinking. They don’t have to verify this notion. They know it in their bones. If they’re exposed to technical details of some of the amazing artifacts of the past, they’ll find fault with the design (from their context). How the design came about is not seen as valuable in and of itself. Only the usefulness that can be derived from it is seen as worthy. Most CS academics are not too distant from this level of ignorance. I get a sense that this comes from their own sense of inadequacy. They know they couldn’t design something like it. They wouldn’t expect themselves to. As far as they’re concerned electrical engineers and programming “gods” created that stuff. They also tend to not have the first clue about the context in which it was designed. They’ll pick out some subsystem that’s antiquated by what’s used now and throw the baby out with the bathwater, calling the whole thing worthless, or at best an antique that should be displayed dead in a museum. You may get the concession out of them that it was useful as a prototype for its time, but what’s the point in studying it now? The very idea of understanding the context for the design is an alien thought to them.

The designs of these old systems weren’t perfect. They were always expected to be works in progress. What gets missed is the sheer magnitude of the improvement that occurred, the leaps in thinking about computing’s potential, and what these new perspectives allowed the researchers to think about in terms of improving their designs, the quality of which was orders of magnitude greater than what came before (that point is often missed in modern analysis). Technologists often have the mindset of, “What can this technology do for me now?” They take for granted what they have, as in, “Now that I know how to use and do this, that old thing over there is ancient! My god. Why do you think that thing is so amazing?”

Another thing I’ve seen people dismiss is the idea that the people who created those amazing artifacts had an outlook that’s any different from their own, except for the fact that they were probably “artists”. That’s a polite way of dismissing what they did. Since they were so unique and “out there,” many of their ideas can be safely ignored, rather like how many in the open source community politely dismiss the ideas of the Free Software Foundation, and Richard Stallman. The stance is basically, “Thanks, but no thanks.” Yes, they came up with some significant new ideas, but it’s only useful to study them insofar as some of their ideas helped expand the marketability of technology. Their other ideas are too esoteric to matter. In fact we secretly think they’re a little nuts.

As I have written about on this blog, I share Kay’s view that computing has great historical and societal significance. What I’ve come to realize is that most CS academics don’t share that view. Sure, they think computing is significant, but only in terms of doing things faster than we once did, and in allowing people to communicate and work more efficiently at vast distances. The reasons are not as simple as closed-mindedness. Kay probably nailed this: Their outlook is limited.

What it comes down to is most computer science academics are not scientists in any way, shape, or form. They’re not curious about their field of work. They’re not interested in examining their own assumptions, the way scientists do. Yet they dominate the academic field.

I’ve given some thought to some analogies for the CS major, as it exists today. One that came to mind is it’s like an English major, because the main focus seems to be on teaching students to program, and to do it well. As I’ve said before, programming in the realm of computing is just reading and writing. Where the English major excels beyond what CS teaches is there’s usually a study of some classics. This doesn’t happen in most CS curricula.

In discussions I’ve heard among CS academics, they now regard the field as an engineering discipline. That sounds correct. It fits how they perceive their mission. What generally goes unrecognized is that good engineering is backed by good science. Right now we have engineering backed by no science. All the field has are rules of thumb created from anecdotal evidence. The evidence is not closely scrutinized to see where the technique fits in other applications, or if it could be improved upon.

I am not arguing for a field that is bereft of an engineering focus. Clearly engineering is needed in computing, but without a science behind it, the engineering is poor, and will remain poor until a science is developed to support it.

My own vision of a science of computing could be described as a “science of design,” whose fundamental area of study would be architectures of computing. Lately Alan Kay has been talking about a “systems science,” which as he talked about in the presentation I link to above, includes not just architecture, but a “psychology of systems,” which I translated to mean incorporating a concept of collective “behavior” in a system of interactive systems. Perhaps that’s a shallow interpretation.

As you can tell, I now hold a dim view of traditional CS. However, I think it would be disruptive to our society to just say “let it die”. There are a lot of legacy systems around that need maintaining, and some of them are damn important to our society’s ability to function. There’s a need for knowledgeable people to do that. At the same time, Kay points ahead to the problem of increasing system complexity with poor architectural designs. It’s unsustainable. So it’s not enough to just train maintainers. Our technological society needs people who know how to design complex systems that can scale, and if nothing else, provide efficient, reliable service. That’s just a minimum. Hopefully one day our ambitions will go beyond that.

The discussion we’re not having

I happened to catch a presentation on C-SPAN last month by the National Academy of Engineering and the National Research Council on bringing engineering education into the K-12 curriculum (the meeting actually took place in September). I’m including links to the two-part presentation here, because the people involved were having the kind of discussion that I think that CS academics should be having, but they’re not. The quality of the discussion was top notch, in my opinion.

Engineering in K-12 Education, Part 1, 3 hours

Engineering in K-12 Education, Part 2, 2 hours

What was clear in this presentation, particularly in Part 2, was that the people promoting engineering in K-12 have a LOT of heavy lifting to do to achieve their goals. They are just at the infant stage right now.

Readers might be saying, “How can they possibly think of teaching engineering in primary school? What about all the other subjects they have to learn?” First of all, the organizations that held this event said that they weren’t pressing for adding on classes to the existing curriculum, because they understand that the existing school structure is already full of subjects. Rather, they wanted to try integrating engineering principles into the existing course structure, particularly in math and science. Secondly, they emphasized the engineering principles of design and systems thinking. So it didn’t sound like it would involve shoving advanced math down the throats of first graders, though teaching advanced math principles (again, principles, not drill and practice) in a fashion that young students can understand wouldn’t be such a bad idea. Perhaps that could be accomplished through such a curriculum.

I had a discussion recently with Prof. Mark Guzdial, some other CS professors, and high school teachers on his blog, related to this subject. Someone asked a similar question to what I posed above: “I agree [with what you say about a real science of computing]. But how in the world do you sell that to a K-12 education board?” I came to a similar conclusion as the NAE/NRC: Integrate CS principles wherever you can. For example, I suggested that by introducing linguistics into the English curriculum, it would then be easier to introduce concepts such as parse trees and natural language processing into it, and it would be germane. Linguistics is an epistemological subject. Computing and epistemology seem to be cousins. That was just one example I could think of.

If CS academics would actually think about what computing is for a while, and its relevance to learning, they wouldn’t have to ask someone like me these questions. I’m just a guy who got a BSCS years ago, went into IT for several years, and is now engaged in independent study to expose himself to a better form of CS in hopes of fulfilling a dream.

Universities need to be places where students actually get educated, not just “credentialed.” Even if we’re thinking about the economic prospects of the U.S., we’re fooling ourselves if we think that just by handing a student a diploma that they’re being given a bright future. Our civil and economic future is going to be built on our ability to think, perceive, and create, even if most of us don’t realize it. Our educational systems had best spend time understanding what it will take to create educated citizens.

Read Full Post »

I came upon a video recently, titled “Dangerous Knowledge,” produced by the BBC. It profiles the life and times of three mathematicians (Georg Cantor, Kurt Gödel, and Alan Turing) and one scientist (Ludwig Boltzmann). For some reason it drew me in. I can’t embed the video here, but you can watch it by following the link.

The show makes allusions to an atheism which I find difficult to relate to this subject of crumbling certainty, the rise of contradictions or imperfections in logical systems, and the decay of what was thought to be permanent or static. Commentators in the show kept making reference to this idea that “God is dead.” It feels irrelevant to me. The important thing going on in the story is the destruction of the concept of the Newtonian mechanical universe. This is not the destruction of Newton’s theories of motion, but rather people’s misperception of his theories. Perhaps this is a European idea. Their concept of God, as it’s portrayed in the show, seemed to be deeply tied to this idea of certainty. Since these four people (not to mention the world around them) were destroying the idea of certainty, their concept of God was being destroyed as well.

Edit 11/17/2012: This was also about the limits of some of our mental perceptual tools, math and science. I think what drew me to this is that it felt closer to the truth: the notion that mathematics provides clarity and truth, and that our world or Universe is permanent, was just a prejudice which doesn’t hold up against scrutiny; in fact our notions of truth have limits, and everything in our world is in flux, constantly in a state of change.

I had heard of Cantor when I read “The Art of Mathematics,” by Jerry King. He talked about how Cantor had come up with this very controversial idea that there were infinities of different sizes, using the mathematical concept of sets. He proved the paradoxical idea, for example, that the infinite set of even (or odd) numbers was the same size as the infinite set of natural numbers.
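
The sense in which two infinite sets are “the same size” is that their members can be matched up one-for-one, with nothing left over on either side. Here is a quick illustration of the pairing between the naturals and the evens in Scheme (my own sketch, not something from the book or the program); the pattern obviously continues without end:

; pair each natural number n with the even number 2n
(map (lambda (n) (list n (* 2 n))) '(0 1 2 3 4 5))
; => ((0 0) (1 2) (2 4) (3 6) (4 8) (5 10))
; every natural appears exactly once on the left, and every even number exactly once on the right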

What’s interesting to me is that through this program you can see a line of thought, beginning with Cantor, running through several decades, to some of the first baby steps of computer science with Turing. Turing’s inspiration for his mathematical concept of the computer (the Turing machine) came from the work of Gödel, and another mathematician named Hilbert. The work of Gödel’s that Turing was specifically interested in was inspired by the paradoxes raised by Cantor.

The video below is from the television production of Breaking the Code, which was originally written as a play of the same name. It’s about Turing’s life, his work deciphering the Nazis’ Enigma cipher system, and his ideas about computing. The TV production puts the emphasis on Turing’s homosexuality and how that put him in conflict with his own government. The play got much more into his ideas about computing. There are a couple of scenes in the TV version that talk about his ideas. This is one of them. I think it’s a perfect epilogue to “Dangerous Knowledge”:

Read Full Post »

Older Posts »