Archive for May, 2010

SICP Exercise 2.32

This is a really interesting one! I highly recommend working on it. I rank it up there with Exercise 1.11, in that once you get the answer, you'll be amazed that the method really works. This is the kind of problem where functional programming really shines: all the rules of the paradigm make the solution work out beautifully.

The end result, what should come out of the “subsets” function, comes from the classical mathematical notion of subsets. The set of all subsets of a set (its power set) always contains the original set itself and the empty set, and you can see both in the example the book gives. Since order does not matter in a set, you don't need to deal with rearrangements of the same elements, like (1 2) and (2 1); in fact, such repetitions should not show up in the result at all.
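
For reference, here is the example from the book, showing what the function should produce for a three-element set:

    (subsets (list 1 2 3))
    ; => (() (3) (2) (2 3) (1) (1 3) (1 2) (1 2 3))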

I'm not totally positive about this, but I think the way you arrive at the answer has nothing to do with classical mathematics. Once again, I think this is solved using computing processes, not ones from classical mathematics. In that way it's like Exercise 1.11 as well.

What I did to begin solving this was to look at the logic I was given (reproduced below) and do a walkthrough of it until I reached the point where I needed to fill in my own logic. I found it helpful to use the example set and compare what I was getting against the expected result. As you back out of the recursion, look at what can be produced by using “map” with some logic that you think makes sense at each stage, and let your own logic evolve, if need be, toward what you need to produce at each stage.
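
For reference, this is the scaffold the book gives you, with the blank you have to fill in (I've written the book's nil as '(), which is what modern Scheme interpreters expect):

    (define (subsets s)
      (if (null? s)
          (list '())
          (let ((rest (subsets (cdr s))))
            (append rest (map <??> rest)))))

The whole exercise is coming up with what goes in place of <??>.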

Happy programming!

SICP Exercise 2.6

I've jumped ahead some, since I'm going through the book with a local Lisp users group. They've decided not to go through all of the exercises, just some that we select. I may backtrack at some point to cover some of the ones I missed.

I won't say much about this exercise. If it mystifies you from the start, look up information on the “lambda calculus”. I had to review it as well before I started, because it had been a couple of years since I last looked at anything like this.

I found that it didn't help at all to try to execute things in a Scheme interpreter; that misses the point. What did help was to write down, or type out in a text editor, a walkthrough of the logic in the “zero” and “add-1” functions (reproduced below) as I used them, to figure out part of the exercise.
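
For reference, these are the definitions the book gives you to work from (they're part of the problem statement, so nothing here gives the answer away):

    (define zero (lambda (f) (lambda (x) x)))

    (define (add-1 n)
      (lambda (f) (lambda (x) (f ((n f) x)))))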

Getting to “one” and “two” was pretty easy. What the exercise says to do (it says “use substitution”) is use the functions you are given to figure out what “one” and “two” are: expand (add-1 zero) by hand, carry out the logic, and see what you get.

The challenging part is figuring out what the “+” operation should be. It helps to look at this at a conceptual level. You don't have to do a mathematical proof to get it, though knowing how to do an inductive proof helps give you ideas for how to proceed (if you need practice with inductive proofs, try Exercise 1.13). Keep in mind that this is a computing problem, not a classical mathematics problem (though there is definitely mathematics of a sort going on as you work through the logic).
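
One aside: while the interpreter won't help you derive the answer, it can help you check one. A little helper like the following (my own addition, not something from the book) converts a Church numeral back into an ordinary integer by applying it to an increment function and 0:

    ; Convert a Church numeral into an ordinary integer, for checking
    ; your work. (My own helper, not part of the exercise.)
    (define (church->int n)
      ((n (lambda (k) (+ k 1))) 0))

    ; If your definitions are right, (church->int two) should evaluate
    ; to 2, and applying your "+" to one and two should convert to 3.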

The idea of the exercise is to get a different idea of what the concept of number can mean via computing. It helps to be familiar with mathematical logic here, to see what you're really doing, though I think you can get by without it if that's not one of your competencies. If you've ever seen a mathematical explanation for the existence of the natural numbers, this is reminiscent of that.

I don’t want to give away the answer, except to mention an article I found, called “The Genius of Alonzo Church (rerun)”, by Mark Chu-Carroll. NOTE: Do not read this article until you’ve done the problem! It will give away the answer. I only mention it because Mark says something quite beautiful about what the answer means. It ties right into a concept in object-oriented programming.

“Information has structure. The computer should enable that structure to be modeled, and for that model to be manipulated in any representation that is appropriate to our understanding of it.” — just a thought I had

I came upon a few videos that were all basically about the same thing: we should be getting beyond paper with our computers, because paper is the old medium and the computer is a new one. A rare few people have been working to find out how the computer can be uniquely beneficial to society, to find things it can do that paper can't. I will focus on two of them in this post: Doug Engelbart and Ted Nelson. They came up with some answers 40+ years ago, but old habits die hard, and for the most part these ideas were ignored; most people couldn't understand them. There is a strong temptation with new technology to optimize the old, to make it work better, and this is what society has chosen to do with computers thus far.

This could be wrong, but it seems to me the reason this state of affairs exists is that most people don't understand computing concepts as a set of ideas. Most of us know how to use a computer, but we don't have a real literacy in computing. This, I think, has produced the bane of real computer scientists' existence: applications, or what Ted Nelson calls “lumps”. A lot of people will say they “don't understand computers”; they just use them. The argument has been made for decades that people don't need to understand computers, any more than they need to understand the engine and transmission in the car they drive. So most people now understand computers the way they understand other machines: a machine is something that gets a specific task done. The “realization” that's been made is that with a computer you can “change it into any machine you want”, and so we go from “virtual machine” to “virtual machine”, each one carrying out specific tasks.

There's no sense of generality beyond the GUI, because that would require understanding some things that people trained in computer science understand, for example that information can be subdivided into small structures, and that links can be made between these structures (a toy sketch of the idea follows below). Computer science typically emphasizes doing this for efficient access to information. Engelbart and Nelson used the same principle for a different purpose: recognizing that information has structure, and that organizing it with relationships and links makes it more accessible in terms of understanding it.
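
To make that a little more concrete, here is a toy sketch in Scheme (the language from the SICP posts above). It is entirely my own illustration, hypothetical names and all, not Nelson's or Engelbart's actual designs: each piece of information is stored once, “documents” are just ordered lists of links to pieces, and the same piece can appear in any number of documents without ever being duplicated:

    ; Each piece of information is stored exactly once, under an identifier.
    (define pieces
      '((p1 . "Information has structure.")
        (p2 . "Links relate pieces to one another.")))

    ; Follow a link to the piece it names.
    (define (resolve id) (cdr (assq id pieces)))

    ; A "document" is nothing but an ordered list of links.
    (define memo    '(p1 p2))
    (define summary '(p1))

    ; Render a document by resolving each of its links.
    (define (render doc) (map resolve doc))

    (render memo)    ; => ("Information has structure." "Links relate pieces to one another.")
    (render summary) ; => ("Information has structure.")

Edit a piece once, and every document that links to it sees the change. That is the opposite of the copy-and-paste duplication that application formats force on us.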

Business has been going for the “paperless office” for 30 years, but what it's really been doing is evolving from physical paper to “digital paper”; in effect, the work process has not changed. Doug Engelbart and Ted Nelson point the finger at Xerox PARC for this concept of computing, which is a sad irony. While there were some great ideas developed at PARC, as far as I know only a couple, the bitmapped display and Ethernet, have been adopted by the wider industry. Alan Kay said recently that the GUI developed at PARC, once heralded as a crowning achievement by the computer industry, was really just a means to an end: to give children an interface so they could learn computing ideas, according to their capabilities. He didn't intend for adults to use that interface, or at least the style of it that his team had developed. It wasn't the really important idea that came out of the Learning Research Group.

The video below demonstrates some functions on Nelson's data structure that would typically be done today by function-specific applications. The potential I see is that a scheme like this could replace data-oriented applications, and make information more relevant and cohesive, without having to duplicate information into different application formats, and without having to open different applications to access the same information in different hierarchies. This would of course depend on people understanding computing concepts, not treating the computer as the guide for how to store and retrieve information. That's not too far-fetched, given that plenty of people have learned how to use spreadsheets over the last few decades.

When I first got into studying Alan Kay’s work 4 years ago, I remember he said that when he was working at PARC, he and his team thought they had “safely killed off applications”, and in my opinion, after viewing the above video, it’s easy to see why. The idea they had was to look at what computers could deal with, as a medium, find an intersection with human capabilities which could be used for their goal of societal advancement, and come up with a generalized computing structure that would fill the bill, and then some. The problem was the wider culture had no idea what these people had been up to. It was illiterate of computing ideas. So the very notion of what they were doing was alien. Not to mention that a few people at PARC were actively contributing ideas that supported the concept of applications, and making computers model what could be done with paper. Applications kept chugging along out in industry, like they always had.

As Kay has pointed out, this is not the first time this sort of thing has happened in our history. It happened to books after the introduction of the printing press. We are now rapidly moving to a technology world where old media is being digitized: books, magazines, images, video, and audio. The technology that is being created to support this doesn’t provide structured access to the underlying bits of information. As Nelson says, they are just provided as “lumps”. The point is this material is being digitized in its original format. It’s serial, linear. As with everything else in technology, it’s happening because this is what we’re used to, not because we’re really recognizing what the computer’s full potential is, even though many people think we are. We can understand this limited view by recognizing that most consumers use computers as data processing engines whose sole purpose is to compress/decompress data, convert a stream of bits into readable type, convert from digital to analog (and vice versa), persist “blobs” of information, and transmit them over networks. There is some structure to what we do with computers, but as Nelson points out, it’s simplistic. This makes us have to work harder when we start dealing with complexity.

What I like about Nelson’s emphasis is he wants to make it easier for people to study content in a deep and engaging way.

The computer industry has not yet really tried to create a new medium. The companies that have won out historically don’t have a concept for what that would be. Even if they did, they would know that it would alienate their customers if they made it into a product.

Doug Engelbart built upon the same ideas Nelson discussed to pursue a higher goal: improving the effectiveness of groups in pursuing knowledge and figuring things out. The way he tried to do this was to use the computer to take the complexity of an issue and make it clear enough that we could understand the issue more fully than we otherwise would. His ultimate goal was to create a systemic process for improving a group's effectiveness (i.e., a process for improving the group's improvement process) in understanding information and carrying on constructive arguments about it. What's kind of a mind-bender is that he applied this same principle of group improvement to the process of learning how to more fully understand and create NLS itself!

An idea I’m giving short shrift here (undeservedly) is the “bootstrapping” concept that Engelbart talked about. I have talked about it some in a previous post. I’ve had a feeling for a while now that there’s something very powerful about it, but I have not come to completely understand it yet. Engelbart explains it some in the video.

Edit 5/18/2010: Christina Engelbart, Doug Engelbart's daughter and Executive Director of the Doug Engelbart Institute, left a comment complimenting my post here. I'd like to highlight the post she wrote on her blog, called “Collective IQ”, which she says was inspired by mine. Gosh, I'm flattered. 🙂 Anyway, be sure to give her article a read. She goes more deeply into the subject of augmenting the human intellect than I did here.

—Mark Miller, https://tekkie.wordpress.com
