Archive for January, 2011

The 150th post

This is my 150th post, so I thought it would be a good time to do a retrospective of where I’ve been. I would’ve done this earlier, but I “missed” the 100th post I wrote back in September 2008. It just blew right past me.

Honestly, when I started this blog I wasn’t sure it was going to go anywhere. I was inspired to start it because I kept running into people on tech forums who had the same technical problems, or repeated the same tired arguments, and I got tired of repeating myself trying to correct them. I figured I would just write my thoughts once, and the next time someone talked about the same thing, I would provide a link to what I wrote here.

I started blogging (kinda) in 2004. I had set up my own website with a .Net service provider back in 2003, I think, so that I could create some ASP.Net demos that other people could see. This was during the “tech depression” after the dot-com crash of 2001. I was looking for work, and looking for opportunities to market myself. I got the idea to start writing and publishing on that site, probably from seeing other ASP.Net developers do the same thing. The only thing was, I hadn’t set up any automated system for publishing what I wrote. I had some ideas about that, but I never got around to it. The main reason it was a pain to try to blog on my own was that I wrote my articles in Microsoft Word, and exported them to HTML. That was never fun. Word put in TONS of tags for all sorts of stuff I didn’t need, and there was no “simple HTML export.” My memory is that the way Word did things created problems on my website, but I can’t remember what. I don’t remember what I did to make things copacetic. Maybe I just did my own HTML “back porting” to fix the problems. So I only wrote a couple of articles back then.

I think I had found other people who had blogs around this time, though I wasn’t really clear on what they were. How were they different from regular websites? I didn’t find the answer to that for a while. I finally did a search for blogging services in 2006. I found it was easy to set one up online, and the fact that they were free was a nice perk! I checked around at a bunch of different blogging sites. I had heard (I think) that Chris Sells of Microsoft fame had a popular blog on WordPress, so I checked this place out. The main thing I was looking for at the time was the ability to backdate posts, because I wanted to import the two articles I had on my website, and date them to when I actually wrote them. WordPress was the only one I found that allowed this. So this is where I set up shop.

I’ve sometimes had to suffer through bugs in the WordPress blog editor. I’ve seen a few major ones. One of them made me want to tear my hair out a few years ago, but in most cases these were one-off instances where I wanted to do something new with an article, so I didn’t run into this crap that often. In any case, all of the nuisances I encountered were eventually fixed. If there’s one thing I’d really like, it’s the ability to edit an article as if I were in “preview” mode. I hate this mode of working where I have to type something up in an editor that kinda-sorta does WYSIWYG, but not quite, and the only time I see what it’s really going to look like to readers is when I hit “preview,” and I can’t edit that! It would be a wonderful improvement if I could edit in a “live” WYSIWYG mode. We used to have this on PCs, dammit! (There I go again!…)

I’ve already posted the top posts for 2010. Here are the top 10 most read articles that I’ve written since I started this blog on May 31, 2006. This is based on hit statistics to date, with #1 being the most popular:

(Update 5-13-2013: I deleted “The joy of Squeak” from my blog, since the source material I wrote about (the episode of Leo Laporte’s show, “The Lab,” which featured Squeak) no longer exists on the internet.)

1. Great moments in modern computer history, posted 8/22/2006, 5,762 hits

2. Java: Let it be, posted 3/6/2008, 5,301 hits

3. Squeak is like an operating system, posted 7/19/2007, 4,011 hits

4. Lisp: Surprise! You’re soaking in it, posted 5/10/2007, 3,909 hits

5. Exploring Squeak and Seaside, posted 10/10/2006, 3,836 hits

6. A history of the Atari ST and Commodore Amiga, posted 5/16/2008, 2,764 hits

7. Having fun playing with fire, er…C, posted 7/2/2007, 2,313 hits

8. Microsoft patches Visual Studio 2005 for Vista. Confused? Answers here, posted 3/12/2007, 2,171 hits

9. How to secure Windows XP against malware, posted 9/22/2006, 2,121 hits

10. The joy of Squeak, posted 3/17/2008, 2,094 hits

“Java: Let it be” was the most popular post I had written, for a couple years, but “Great moments in modern computer history” has gotten a consistent amount of attention since I posted it, and finally edged out my Java article as the most popular.

My favorites

Since I started this blog for myself (though I’m thankful that others have found it valuable enough to read along), I’ve put in my own favorites as well. These are articles I’ve referred back to sometimes, and represent good memories or some realization I’ve made. I’ll just list them from the most recent to the oldest:

SICP: Exercise 1.19 – optimizing Fibonacci, posted 11/22/2010

SICP: Exercise 2.6: Church numerals, posted 5/22/2010

Getting beyond paper and linear media, posted 5/6/2010

Realizing Babbage, posted 1/30/2010

SICP Exercise 1.11: “There is no spoon”, posted 1/18/2010

The death of certainty and the birth of computer science, posted 8/29/2009

Why I do this, posted 8/17/2009

Does computer science have a future?, posted 8/12/2009

Michael Jackson dead at 50, posted 6/26/2009

The beauty of mathematics denied, posted 6/19/2009

Tales of inventing the future, posted 1/23/2009

The “My Journey” series, posted from 12/29/2008 through 1/18/2009

The culture of “air guitar”, posted 6/10/2008

Kitties, posted 5/4/2008

“Reminiscing”, parts 3, 4, and 6, posted from 10/31 through 11/13, 2007

Redefining computing, Part 2, posted 7/11/2007

Having fun playing with fire, er…C, posted 7/2/2007

On our low-pass filter, posted 3/12/2007

Great moments in modern computer history, posted 8/22/2006

Rediscovering Dr. Dijkstra and Giving Lisp a Second Chance, posted 5/31/2006

My favorites have changed with time. If I had made this list two years ago my emphasis would’ve been different. My tastes have changed as I’ve learned to see things differently.

As you can tell I haven’t written anything in the last 2-1/2 years that’s been a “big hit” with the internet reading public. As I compare the popular posts with my favorites, there are only two that are in both lists: “Great moments in modern computer history”, and “Having fun playing with fire, er…C”. I notice that all of the other ones that were popular have to do with a platform or a programming language. It’s nice to see that topics on computer/software history seem to be popular, though. “Great moments in modern computer history”, “Having fun playing with fire, er…C”, and “Java: Let it be” all had an emphasis on software history.

On I go…


** Warning: This article contains spoiler information. Do not read further if you don’t want the story revealed. I wrote this article for people who have seen the movie. **

I was disappointed in Tron Legacy at first. I didn’t get the same thrill out of it that I got out of Tron when I first saw it at age 14. In some ways it met my expectations. Based on the previews, I figured it would suggest that technologists have gotten obsessive about technology, and they need to “get out more.” It did that, but at first blush it appeared to do nothing more. I thought about what I had watched, and some things came more into focus that made me like it a lot more.

I’ve read a couple movie reviews on it, and I feel as though they missed the point, but that should not be surprising. Like with the original Tron, this movie works on a few levels. If you are a typical moviegoer, the movie’s story line will entertain you, and the special effects will dazzle you. A couple reviews have said that it follows the story line of a particular kind of a fairy tale. I think it does, but this is just superficial.

With Tron in 1982, the “surface” story was a bit like the movie The Incredible Shrinking Man in that a character is transported into a micro-reality, and everything that once appeared small and insignificant became huge and menacing. The main character, Kevin Flynn, had to face the games he created, inside the system created by his former employer. A virtual, mysterious and reclusive master overlord (the MCP) sought to grab up other entities (called “programs”) and systems to merge with itself, so it could become more powerful. A recurring theme was a kind of atheism. Programs were expected to believe that only their own reality existed, that there were no such things as “users” (something greater than themselves, off in another reality they could not relate to, but which had a direct relationship with them). This was so that the programs would feel helpless, and would not fight the overlord. Flynn, a user, is sucked in because the system is so arrogant it thinks it can defeat him as well.

The message embedded in the film, which technologists would understand, was political: Were we going to have centralized control of computing, which had implications for our rights, or was computer access going to be democratized (a “free system”), so that people could have transparent access to their alter-egos inside the computer world? This was a futuristic concept in one sense, because most people were not aware of this relationship, even though it was very real at the time (but not in the sense of “little computer people”). I thought of it as expressing as well that the computer world reflected the consciousness of whichever user had the most influence over it (i.e., Dillinger).

The director of “Tron,” Steven Lisberger, talked about how we had alter-egos in the computer world in the form of our tax records, and our financial transactions, and that in the future this alter-ego was only going to grow in its sophistication as more data was gathered about us. The future that was chosen largely agrees with the preferred outcome in the movie. Though we have this alter-ego that exists in the computer world, computer access was democratized, just not quite in the way the movie predicted.

There was a metaphysical message that’s more universal: Just as computer programs have users, perhaps we have “users” as well in some reality to which we can’t relate. The creators of the movie deliberately tried to make the real world look a little like the computer world to make this point. The theme that Lisberger has talked about many times now is that perhaps we all have a “better self,” and the question is are we going to strive to access that better self, or are we going to go through life never trying to get in touch with it?

What drew me into “Tron” when I first saw it in about 1983 was the idea that in the computer world things could be shaped by our thoughts and consciousness. I had a feel for that, since I had started programming computers 2 years earlier. Dr. Walter Gibbs’s confrontation with Dillinger particularly resonated with me:

You can remove men like Alan and me from the system, but we helped create it! And our spirit remains in every program we design for this computer!

Tron Legacy is a decidedly different movie from the old Tron. It has some elements that are reminiscent of it, but the message is different. I won’t talk too much about the fairy tale aspect, but instead focus on the message that I think is meant for technologists. This will be my own interpretation of it. This is what it inspired for me.

Instead of talking about a complaint about current conditions, as if they had no antecedent, the movie subtly complains about a problem that’s existed from the time when “Tron” was made, in our world: The legacy of the technical mentality that came into dominance at the same time that the democratization of computer access occurred, and has existed ever since.

On the surface, in the real world (in the movie), the computer industry is slouching towards cynical commercialism. Kevin Flynn disappeared 21 years earlier, leaving behind his son, Sam. Encom lost its visionary, and innovation at the company gradually slowed. In the present, the idea of technological innovation is dead. Encom is set to release yet another version of its operating system (Version 12), which they claim is the most secure they’ve ever released. Alan Bradley, a member of the board, asks something to the effect of, “What’s really new about this product?” He’s told, “We put the number 12 on it.” They decide to sell the OS commercially (as I recall, it was given away freely in prior versions, according to the history told in the movie). Alan is part of the company, but he doesn’t have much power. Instead of talking about what their R&D has produced (if any R&D existed), one of the executives touts the fact that Encom stock will soon be traded 24 hours a day, all around the world. The company has lost its soul, and is now only concerned with coasting, milking its past innovation for all it’s worth.

Sam exists outside Encom for the most part, but is the company’s largest stockholder. In a nod to the free software crowd, when he hears about the Encom announcement, he decides to break into the company (like his father did many years earlier), hack into its data center and make the operating system freely available on the internet (odd…I thought the operating system was the most secure ever…), dashing the company’s plans to sell it. Shortly thereafter, Alan shows up at Sam’s garage apartment, telling him he received a page from an old number his father used at the “Flynn’s” arcade. Sam is alienated and uninterested, saying his father is dead. He seems lost, and without purpose. His only involvement in the story is to create mischief. Going deeper into this, we can see in mischief a desire to be involved, to change things, and yet not take responsibility for it, to not really try to do better. Maybe the reason is there’s a sense of incompatibility with one’s surroundings, but the mischief makers can’t quite put their finger on what the problem is. So their only answer is to attack what is.

For years Alan said that Flynn was still alive. He persists with Sam, saying there must be a good reason his father disappeared, that it wasn’t because he had intentionally abandoned him. He throws Sam the keys to the old arcade, and thus the voyage “down the rabbit hole” begins…

Inside the computer world, Sam goes through an initiation similar to the one his father went through in “Tron,” and then he is entered into gladiatorial games–most of the same games that his father competed in, only more advanced and modernized. Sam competes well, and survives. After an escape from “the game grid” much like the one his father pulled off in the original movie (except with the help of a computer character named Quorra), Sam meets his father, Kevin Flynn, in an isolated cave (though with very nice accommodations). The look of this “cave” is reminiscent of the end scene in 2001: A Space Odyssey. I won’t go into the details of what Sam and Kevin talk about. What I found interesting was that Kevin had spent a significant amount of time studying philosophy. Based on this background, he plays the role of a wise, though defeated, sage.

Kevin tells the story of how he became trapped in a world of his own creation (rather like in “Tron,” but this time Kevin never found a way out). A theme that emerges is the danger of perfectionism, a seductive quality of computer systems. This is embodied in a program Kevin created, named Clu. In the beginning of the computer world, Clu was helpful. As the system was being built from the inside, some mysterious entities “emerged” in the system. Kevin called them “isomorphs.” He marveled at them, and hoped they would become a part of the system. Their programming had such complexity and sophistication he had trouble understanding their makeup.

I recognize the idea of “emergence” from my days studying CS in college. There were many people back then who had this romantic idea that as the internet grew larger and larger, an “intelligence” would eventually “emerge” out of the complexity.

Later in the history told in the movie, Clu turned dogmatic about perfection. He killed off the isomorphs, and threatened Kevin. Kevin tried fighting Clu, but the more he did so, the stronger Clu got. So he hid in his cave, all this time. Meanwhile Clu built the game grid into his vision of “the perfect system.” Everything is “perfect” in his world. One would think this is ideal, but there is a flip side. Imperfections are rejected. Eventually Kevin came to understand that his desire to create “the perfect system” led to one that’s hostile, not utopian as he had imagined. He realizes he made a mistake. There is an interesting parallel between this story line and what happened with Encom, and indeed what happened with the computer industry in our world. By being trapped in his own system, being exposed to the isomorphs, and seeing how his vision was incompatible with this wonderful and mysterious new entity, and himself, Kevin is forced to come face to face with himself, and the vision he had for the computer world. He is given the opportunity to reconsider everything.

There were some subtle messages conveyed. I noticed that anytime one of the programs in the gladiatorial games, or one of Clu’s henchmen got hit with a weapon, or hit a barrier, they died instantly–derezzing. However, with Quorra, Kevin’s companion in the cave, when she gets hurt in a fight, the damaged part of her derezzes, but she remains alive. What this communicated to me is that Kevin and Clu imposed designs on the system whose only purpose was to serve a single goal. They imposed an image of perfection on everything they created, which meant that any imperfection that was introduced into one of these programs (a “wound”) caused it to fall apart and die. Is this not like our software?

Quorra was not created by Flynn, and her system did not demand perfection. She was fault-tolerant. If a piece of her system was damaged, the rest of her was affected (she goes into a “dormant” state), but she did not die. Sam realizes after she is damaged that Quorra is an isomorph. The last of her kind.

I realized, reading an article just recently on Category Theory and its application to programmable systems, called “Programmers go bananas,” by José Ortega-Ruiz, that “isomorph” is a term used in mathematics. Just translating the term, it means “equal form,” but if you read the article, you’ll get somewhat of a sense of what Quorra and the isomorphs represented:

A category captures a mathematical world of objects and their relationships. The canonical example of a category is Set, which contains, as objects, (finite) sets and, as arrows, (total) functions between them. But categories go far beyond modeling sets. For instance, one can define a category whose objects are natural numbers, and the ‘arrows’ are provided by the relation “less or equal” (that is, we say that there is an arrow joining two numbers a and b if a is less or equal than b). What we are trying to do with such a definition is to somehow capture the essence of ordered sets: not only integers are ordered but also dates, lemmings on a row, a rock’s trajectory or the types of the Smalltalk class hierarchy. In order to abstract what all those categories have in common we need a way to go from one category to another preserving the shared structure in the process. We need what the mathematicians call an isomorphism, which is the technically precise manner of stating that two systems are, in a deep sense, analogous [my emphasis]; this searching for commonality amounts to looking for concepts or abstractions, which is what mathematics and (good) programming is all about (and, arguably, intelligence itself, if you are to believe, for instance, Douglas Hofstadter‘s ideas).

Ortega-Ruiz went on to talk about relationships between objects and categories being isomorphic if one object, or a set of objects in a category O, could be transformed into another object/category O’, and back to O again. In other words, there was a way to make two different entities “equal,” or equivalent with each other, via transforming functions (or functors). I think perhaps this is what they were getting at in the movie. Maybe an isomorph was equivalent to a biological entity, perhaps even a human, in the computer world, but in computational terms, not biological.
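To make the idea concrete, here is a toy sketch of my own (not from Ortega-Ruiz’s article): an isomorphism is just a pair of maps between two representations that undo each other exactly, so nothing is lost going either way.

```python
# Toy illustration of an isomorphism: a pair of maps f : O -> O' and
# g : O' -> O that compose to the identity in both directions.

def to_unary(n: int) -> str:
    """Map a natural number to a unary, Church-style string: 3 -> 'SSSZ'."""
    return "S" * n + "Z"

def from_unary(s: str) -> int:
    """Inverse map: count the 'S' marks before the terminating 'Z'."""
    assert s.endswith("Z") and set(s[:-1]) <= {"S"}
    return len(s) - 1

# Round-tripping in both directions returns the original value, so the
# natural numbers and these strings are isomorphic representations.
for n in range(10):
    assert from_unary(to_unary(n)) == n
assert to_unary(from_unary("SSZ")) == "SSZ"
```

Two very different-looking things (numbers and strings of marks) turn out to be “the same” in the structural sense the article describes.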

I offer a few quotes from my post, “Redefining computing, Part 2,” to help fill in the picture some more regarding the biological/computational analogy. In this post, I used Alan Kay’s keynote address at OOPSLA ’97, called “The Computer Revolution Hasn’t Happened Yet.” The goal of his presentation was to talk about software and network architecture, and he used a biological example as a point of inspiration, specifically an E. coli bacterium. He starts by talking about the small parts of the bacterium:

Those “popcorn” things are protein molecules that have about 5,000 atoms in them, and as you can see on the slide, when you get rid of the small molecules like water, and calcium ions, and potassium ions, and so forth, which constitute about 70% of the mass of this thing, the 30% that remains has about 120 million components that interact with each other in an informational way, and each one of these components carries quite a bit of information [my emphasis]. The simple-minded way of thinking of these things is it works kind of like OPS5 [OPS5 is an AI language that uses a set of condition-action rules to represent knowledge. It was developed in the late 1970s]. There’s a pattern matcher, and then there are things that happen if patterns are matched successfully. So the state that’s involved in that is about 100 Gigs. … but it’s still pretty impressive as [an] amount of computation, and maybe the most interesting thing about this structure is that the rapidity of computation seriously rivals that of computers today, particularly when you’re considering it’s done in parallel. For example, one of those popcorn-sized things moves its own length in just 2 nanoseconds. So one way of visualizing that is if an atom was the size of a tennis ball, then one of these protein molecules would be about the size of a Volkswagon, and it’s moving its own length in 2 nanoseconds. That’s about 8 feet on our scale of things. And can anybody do the arithmetic to tell me what fraction of the speed of light moving 8 feet in 2 nanoseconds is?…[there’s a response from the audience] Four times! Yeah. Four times the speed of light [he moves his arm up]–scale. So if you ever wondered why chemistry works, this is why. The thermal agitation down there is so unbelievably violent, that we could not imagine it, even with the aid of computers. 
There’s nothing to be seen inside one of these things until you kill it, because it is just a complete blur of activity, and under good conditions it only takes about 15 to 18 minutes for one of these to completely duplicate itself. …

Another fact to relate this to us, is that these bacteria are about 1/500th the size of the cells in our bodies, which instead of 120 million informational components, have about 60 billion, and we have between 10^12, maybe 10^13, maybe even more of these cells in our body.

So to a person whose “blue” context might have been biology, something like a computer could not possibly be regarded as particularly complex, or large, or fast. Slow. Small. Stupid. That’s what computers are. So the question is how can we get them to realize their destiny?

So the shift in point of view here is from–There’s this problem, if you take things like doghouses, they don’t scale [in size] by a factor of 100 very well. If you take things like clocks, they don’t scale by a factor of 100 very well. Take things like cells, they not only scale by factors of 100, but by factors of a trillion. And the question is how do they do it, and how might we adapt this idea for building complex systems?

So a lot of the problem here is both deciding that the biological metaphor [my emphasis] is the one that is going to win out over the next 25 years or so, and then committing to it enough to get it so it can be practical at all of the levels of scale that we actually need. Then we have one trick we can do that biology doesn’t know how to do, which is we can take the DNA out of the cells, and that allows us to deal with cystic fibrosis much more easily than the way it’s done today. And systems do have cystic fibrosis, and some of you may know that cystic fibrosis today for some people is treated by infecting them with a virus, a modified cold virus, giving them a lung infection, but the defective gene for cystic fibrosis is in this cold virus, and the cold virus is too weak to actually destroy the lungs like pneumonia does, but it is strong enough to insert a copy of that gene in every cell in the lungs. And that is what does the trick. That’s a very complicated way of reprogramming an organism’s DNA once it has gotten started.
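As an aside, Kay’s “four times the speed of light” figure from the earlier passage checks out numerically. Here is the arithmetic, using the standard feet-per-meter conversion:

```python
# Checking Kay's back-of-the-envelope figure: a scaled-up protein molecule
# "moving 8 feet in 2 nanoseconds" compared against the speed of light.

FEET_PER_METER = 3.28084
C = 299_792_458  # speed of light, m/s

distance_m = 8 / FEET_PER_METER   # 8 feet in meters (~2.44 m)
time_s = 2e-9                     # 2 nanoseconds
speed = distance_m / time_s       # ~1.22e9 m/s

ratio = speed / C
print(round(ratio, 2))            # 4.07 -- about four times c, as Kay says
```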

Recall that when Kevin works on Quorra’s damaged body, he brings up a model of her internal programming, which looks like DNA. Recall as well that when Alan Bradley talked to Sam about the page he got, he told Sam about a conversation he had with Kevin Flynn before he disappeared. Kevin said that he had found something that was revolutionary, that would change science, religion, medicine, etc. I can surmise that Kevin was talking about the isomorphs. When I thought back on that, I thought about what I quoted above.

Moving on with Alan Kay’s presentation, here’s a quote that gets close to what I think is the heart of the matter for “Tron Legacy.” Kay brings up a slide that on one side has a picture of a crane, and on the other has a picture of a collection of cells. More metaphors:

And here’s one that we haven’t really faced up to much yet, that now we’ll have to construct this stuff, and soon we’ll be required to grow it. [my emphasis] So it’s very easy, for instance, to grow a baby 6 inches. They do it about 10 times in their life. You never have to take it down for maintenance. But if you try and grow a 747, you’re faced with an unbelievable problem, because it’s in this simple-minded mechanical world in which the only object has been to make the artifact in the first place, not to fix it, not to change it, not to let it live for 100 years.

So let me ask a question. I won’t take names, but how many people here still use a language that essentially forces you–the development system forces you to develop outside of the language [perhaps he means “outside the VM environment”?], compile and reload, and go, even if it’s fast, like Virtual Cafe (sic). How many here still do that? Let’s just see. Come on. Admit it. We can have a Texas tent meeting later. Yeah, so if you think about that, that cannot possibly be other than a dead end for building complex systems, where much of the building of complex systems is in part going to go to trying to understand what the possibilities for interoperability is with things that already exist.

Now, I just played a very minor part in the design of the ARPANet. I was one of 30 graduate students who went to systems design meetings to try and formulate design principles for the ARPANet, also about 30 years ago, and if you think about–the ARPANet of course became the internet–and from the time it started running, which is around 1969 or so, to this day, it has expanded by a factor of about 100 million. So that’s pretty good. Eight orders of magnitude. And as far as anybody can tell–I talked to Larry Roberts about this the other day–there’s not one physical atom in the internet today that was in the original ARPANet, and there is not one line of code in the internet today that was in the original ARPANet. Of course if we’d had IBM mainframes in the original ARPANet that wouldn’t have been true. So this is a system that has expanded by 100 million, and has changed every atom and every bit, and has never had to stop! That is the metaphor we absolutely must apply to what we think are smaller things. When we think programming is small, that’s why your programs are so big!

Another thing I thought of, relating to the scene where Quorra is damaged, and then repaired, is the way the Squeak system operates, just as an example. It’s a fault-tolerant system that, when a thread encounters a fault, pauses the program, but the program doesn’t die. You can operate on it while it’s still “alive.” The same goes for the kernel of the system if you encounter a “kernel panic.” The whole system pauses, and it’s possible to cancel the operation that caused the panic, and continue to use the system. As I thought back on the movie, I could not help but see the parallels to my own experience learning about this “new” way of doing things.
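The contrast between “derez on any fault” and “pause and stay alive” can be sketched in a few lines. This is my own loose analogy, not actual Squeak code: instead of letting an exception kill everything, a component that faults is parked in a dormant state, available for repair, while the rest of the system keeps running.

```python
# A loose analogy (not Squeak itself): components that pause on a fault
# instead of taking the whole system down with them.

class Component:
    def __init__(self, name, step):
        self.name = name
        self.step = step          # the component's unit of work
        self.state = "running"
        self.fault = None

    def run_once(self):
        if self.state != "running":
            return                # dormant components are skipped, not dead
        try:
            self.step()
        except Exception as exc:
            # Fault-tolerant behavior: record the fault and go dormant,
            # leaving the component available for later repair and resume.
            self.state = "dormant"
            self.fault = exc

    def resume(self):
        self.state = "running"    # after "repair," the component wakes up

def broken():
    raise RuntimeError("damaged subsystem")

system = [Component("quorra", broken), Component("grid", lambda: None)]
for c in system:
    c.run_once()

print([(c.name, c.state) for c in system])
# prints [('quorra', 'dormant'), ('grid', 'running')]
```

The faulting component goes dormant (like Quorra after her injury) while the rest of the system stays alive, which is the property I found so striking in Squeak.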

The problem with creating a “perfect system” is that it requires the imposition of rules that demand perfection. This makes everything brittle. The slightest disturbance violates the perfection and causes the whole thing to fall apart. Also, perfection is limiting, because it must have criteria, and those criteria must necessarily be limited, because we do not know everything, and never will. Not that striving for perfection is itself bad. Having a high bar, even if it’s unachievable, can cause us to strive to be the best that we can be. However, if we’re creating systems for people to use, demanding perfection in the system in effect demands perfection from the people who design and use it. This produces systems that are vulnerable and brittle. This makes people feel scared to use them, because if they do something wrong, it’ll go to pieces on them, or misinterpret their actions, and do something they don’t want. It also allows hackers to exploit the human weaknesses of the software’s designers to cause havoc in their systems and/or steal information.

Just thinking this out into the story of “Tron Legacy,” Quorra’s characteristics were probably the reason Clu tried to kill off the isomorphs. He didn’t understand their makeup, or their motivation. Their very being did not have the same purpose as he had for the overall system. They were not designed to contribute to a singular goal. Instead, their makeup promoted self-sustaining, self-motivated entities. Their allowance for imperfection, their refusal to adhere to a singular goal, in his mind, was dangerous to the overall system goal.

The “end game” in the story goes as such. Sam’s entry into the system opened a portal inside the computer system back to the real world. This presents an opportunity for Clu. He has been secretly generating an army that he hopes to use to take over the real world, and he plans to use the opportunity of the open portal. This particular sequence brought to mind the formation of the Grand Army of the Republic in the 2nd movie in the most recent Star Wars trilogy, Attack of the Clones. It also had a “sorcerer’s apprentice” (from Fantasia) quality about it, in that Kevin created something, but despite his intentions, his creation got out of his control, and became dangerous.

To me, the main computer characters in the movie were metaphors for different philosophies of what computer systems are supposed to be. Clu and his army are symbolic of the dominant technical mentality in computing today that imposes perfection on itself, and humanity, a demand that can never completely be satisfied. Quorra represents a different idea, and I’ve occasionally puzzled over it. Were the origins of the isomorphs meant to convey an evolutionary process for computer systems whereby they can evolve on their own, or was that just a romantic idea to give the story a sense of mystery? The best way I can relate to what Quorra is “made of” is that she and the isomorphs are a computational “organism” that is made up of elements in a sophisticated architecture that is analogous to human anatomy, in terms of the scale her software is able to achieve, the complexity it is able to encompass, the intelligence she has, her curiosity, and her affinity for humanity. She represents the ideal in the story, in contrast to Flynn’s notions of perfection.

I think the basic idea of the movie is that in the early going, people like Flynn were smart, witty, and clever. They were fascinated by what they could create in the computer world, and they could create a lot, but it was divorced from humanity, and they became fascinated by the effort of trying to make that creation better. What they missed is that the ideal creation in the computer world is more like us, both in terms of its structure and intelligence.

The final scene in the movie was a bit of a surprise. At first blush it was the most disappointing part for me, because I wondered, “How did they make *that* work??” I was tempted to extend the “isomorph” idea still further as I wrote this post, to say that a computer entity could be transformed into a human, just as the term “isomorphic” suggests, but I don’t know if that’s what the creators of the movie were going for. After all, Clu thought he could bring an entire army of computer entities into the real world, including himself. How was that going to work? I think that interpretation was too literal. Going along with the idea that Quorra is a symbolic character, what I took away from it was that Sam was taking this new idea out of Flynn’s “enclave,” and bringing it into the world. He said to Alan Bradley after they got out that he was going to take the helm at Encom. Sam had found a purpose in his life. It was a message of hope. It was a way for him to honor his father, to learn from his mistakes, and try to do better. There’s also a hopeful message that computers will join us in our world, and not require us to spend a lot of time in the world we’ve created inside the computer.

“Tron Legacy” asks technologists to reassess what’s been created at a much deeper technical level than I expected. It does not use technical jargon much, but subtly suggests some very sophisticated ideas. The philosophical issues it presents have deep implications for technology that are much more involved than how our computer access is organized (a technical theme of the first movie). It prompts us to ask uncomfortable, fundamental questions about “what we have wrought” in our information age, not in the sense of the content we have produced, but about how we have designed the systems we have given to the world. It also prompts us to ask uncomfortable questions about, “What do we need to do to advance this?” How do we get to a point where we can create the next leap that will bring us closer to this ideal? I think it deals with issues I have talked about on this blog, but extends far beyond them.

It will be interesting to see the DVD when it comes out. I wonder what the creators of the movie will say about their inspirations for what they put into it. I have not played the video game Tron Evolution, which I’ve heard tells the story from the time of the first movie up to “Tron Legacy.” Maybe it tells a very different story.

I leave you with a PC graphics/sound demo called “Memories from the MCP,” created in 2005 by a group called “Brain Control”. It looks pretty cool, and is vaguely Tron-ish, with a look and sound reminiscent of “Tron Legacy.”

Edit 12-14-2012: I noticed there was some traffic coming to this post from Reddit. I looked at the discussion going on there, and someone referred to this video, created by Bob Plissken, called “Tron: Fight For The Users.” It kind of dovetails with what I talk about here, but it offers a different perspective. I like that it shows how both “Tron” and “Tron Legacy” try to get across similar messages. I don’t entirely agree with the idea that “we are the users of ourselves.” I’d say we are currently the users of notions of system organization, given to us by designers. These notions have had implications, which we have seen play out over the last 20 years in particular. Still, I like this message for its depth of perception. This sort of deep look at the symbolic implications of the stories, as they relate to the technological society we live in, is the way I like to look at the Tron movies.

At the end it refers people to a blog called Yori Lives. I checked it out, and a post called “Tron and the Future” nicely explains the message in this video.

On another subject, there’s been a lot of buzz lately about a sequel to “Tron Legacy” (for lack of a better term, it seems people are calling it “Tron 3”). Apparently there is some reason to be excited about it, as there’s been news that Disney is actively pursuing this. Yori Lives also talks about it.

— Mark Miller, https://tekkie.wordpress.com

Read Full Post »

Antikythera Mechanism, from Wikipedia.org

I heard about this mechanism some years ago. At first the speculation was it was an “ancient computer,” but no description of it was given. That was fantastic enough. “The ancient Greeks created a computer? Come on!” In the last several years it’s been described definitively as an astronomical computer. Pat_S over at tammybruce.com wrote a post on Christmas Eve on the latest research results. (Note: This site normally devotes itself to political topics. Just so you know.) There are a few great videos you can watch from there. The video over at Nature Magazine’s website is pretty interesting (there are links to this from Pat_S’s article).

The mechanism was discovered in a Roman shipwreck, off the island of Antikythera in the Mediterranean. The ship was bringing treasures from the Greek world. It’s estimated the mechanism was made in 140 BC. At first it appeared to be just a lump of rock (the mechanism was encased in it), and it was pretty much ignored. Then one day the material split, and someone was able to see what looked like gears inside. Recently researchers started using sophisticated X-ray technology on it, and they’ve been able to get a very detailed view of how the mechanism worked. They’ve even gotten detailed views of writing that was embossed on the parts.

The mechanism has a sophistication that has excited researchers. It’s way beyond what historians thought the Greeks were capable of. It has a compactness of design that was not duplicated by later, similar mechanisms, which appeared only about 1,400 years after this device is estimated to have been made.

This is an exciting find. I would rank it up there with the discovery of the Archimedes Palimpsest, which reveals some things about Archimedes’s mathematical knowledge. This was also quite stunning.

I recently wrote a post on the computers and designs that Charles Babbage created, and I said that when I was in Jr. high school he was the earliest creator of mechanical computers I had found. What I did not say (though I talked about it in an earlier post) is that when I was in high school, I had found out about an earlier mechanism, the Pascaline, invented by Blaise Pascal in 1642, though it was much simpler. It was an adding machine. In terms of the sophistication of computing devices, the Antikythera Mechanism bests, as far as we know, everything made before Babbage, who began his work on automatic computers in 1821.

The Middle Ages have been an interesting time in history for me, particularly as they contrast with the ancient Greek and Roman civilizations. The lesson we can learn from looking at this history is that progress is not necessarily linear and inevitable. Archimedes, it turns out, discovered some principles of Calculus 1,900 years before Newton, but that knowledge was eventually forgotten sometime in the Middle Ages. A story I heard about why this knowledge died out in the Western world was that Archimedes’s work was considered sacrosanct. He was “the great master” of mathematics, and no one was allowed to question his work, or try to improve on it. So his knowledge was just handed down from generation to generation, for centuries. It became dead, because it wasn’t presented as a “live” subject, something that could grow and evolve. Eventually it became devalued and forgotten. Newton rediscovered this knowledge of Calculus through his own work in mathematics, about 500 years after it had passed out of people’s knowledge base.

If you read about the works of Hero (or “Heron”) of Alexandria (10-70 AD), you’ll find that he discovered some basic principles of steam power, and jet propulsion. Again, knowledge that was lost (or kept, but ignored) for 1,700 years, and then reacquired, and advanced. I understand that the reason this particular area of knowledge was not advanced in society when it was discovered, at least according to modern analysis, is that the Roman society Heron lived in (though Heron was Greek) had no social use for advanced steam power. It’s not that the technology couldn’t have been developed, but that there was no incentive to take it to the next level. For one thing they had slaves to do the work, and this was an institution that was useful to the Romans for maintaining their dominance over other cultures. It would take a modern, reformed understanding of political power and economics to make advanced steam power something that was useful and accepted by society.

It’s a humbling thing to realize that knowledge that could be advanced, even today, can be lost for hundreds of years, even if it is documented, and available for others to learn. It’s a fragile thing. It’s also kind of discouraging that knowledge is not always applied to societal progress, because the social/cultural environment is not compatible with it. It has to wait for a later age, and perhaps be forgotten and rediscovered.

Read Full Post »

Top posts of 2010

I noticed other bloggers are talking about their top posts from the past year. So I thought I’d get in on it.

Edit 1-7-2011: I should’ve noted that these are not the top posts I wrote in 2010, but of all the articles on this blog, these were the most viewed in 2010.


Does computer science have a future?, 13 comments


A history of the Atari ST and Commodore Amiga, 4 comments


Great moments in modern computer history, 8 comments


The beauty of mathematics denied, 3 comments


Thoughts on the 3D GUI, 4 comments

Read Full Post »