The Tron sequel is coming along

I wrote a post a year ago talking about how a Tron sequel was “possible”, given that a Tron “test reel” (video) was shown by some people from Disney at last year’s Comic-Con. I’ve been reading updates (there are newer updates here as well) over at Tron 2.0 News, and a Tron movie sequel has definitely been in the works. A couple of months after the “test reel” was shown at Comic-Con, it was announced that Jeff Bridges had signed on to reprise his role as Kevin Flynn. Bridges said that shooting for the sequel would begin in Spring 2009. Later, several more actors were announced, including Bruce Boxleitner, reprising his role as Alan Bradley (no word yet on whether the character “Tron” will be making an appearance). A new generation of main characters was introduced, played by twenty-something actors, one of them playing Flynn’s son.

One of the early announcements was that the film was going to be shown in 3D. That’s a hot trend right now.

An interesting story that Tron 2.0 News covered recently is that the “test reel” was never supposed to have been shown at Comic-Con, or anywhere else, in the first place. As of last July, Disney was on the fence about making a sequel. They didn’t want the “test reel” shown to anybody, because they didn’t want to raise expectations and then disappoint their audience. It really was just a test to see if the technology could handle the vision, and it was supposed to stay “inside Disney only”. It sounds like a few renegade executives at Disney snuck it into Comic-Con in hopes of forcing Disney’s hand. According to the story, some people at Disney got taken to the woodshed for this. It didn’t take long for Disney to decide to go ahead with the sequel, though, probably because of the excitement generated by the “sneak preview” at Comic-Con and the bootleg video taken there, which went viral on the internet.

Disney created an unreleased test reel for the first “Tron” film back when it was first being developed. It was just a test to show the capabilities of the gels, backlighting, and rotoscoping techniques that were going to be used in the final film. It had a story line that was not in the released film: a character inside a computer fights a bad guy and then frees another character from prison.

Tron 2.0 News revealed a while back that John Lasseter, who came back to Disney when it merged with Pixar, was instrumental in getting the sequel “test reel” made. It sounds like he had to push hard for it, even working on it semi-secretly with an outside CGI firm. Lasseter was blown away by the original “Tron” movie when he worked at Disney back then. He’s been famously quoted as saying that if Tron hadn’t been made, “there would’ve been no Toy Story” (“Toy Story” being Pixar’s first feature film). In the end, we Tron fans may have Lasseter to thank for doing what needed to be done to get Disney to make the sequel.

According to the latest news I’ve read, shooting for the sequel wrapped up about a week ago in Canada. The post-production CGI work is projected to take another year. The early projection is that the movie will be released around Christmas 2010, but I could easily imagine the release being pushed back to 2011. An exciting tidbit is that an IMAX version may be in the works. That would be very cool!

The current buzz is that Disney is going to release an “updated” teaser trailer for it at this year’s Comic-Con. Hopefully it’ll be released on the internet. I’ll be looking for it!

Another piece of news is that Disney is working on a video game to be released with the movie…a different one from “Tron 2.0”…and this time they’re finally getting the timing of the two releases right! A PC/Mac game called “Tron 2.0”, with a sequel story line, was released by Disney several years ago. An Xbox version eventually came out as well, to lackluster reviews. The game was supposed to be released alongside a movie sequel that Disney had planned for around 2003/04, but that project died, and the game came out, oddly, without a movie to go with it.

The plot of the game was that Alan Bradley and Dr. Lora Baines (having gotten married) had a son, Jet, played as a twenty-something in the game. Lora died under mysterious circumstances (not shown in the game, just talked about by the characters), but Alan preserved her consciousness in an AI sub-system called “ma3a”. Alan was still alive and working for Encom. I forget what happened to Flynn. There might’ve been some story line about how he “rezzed” himself inside Encom’s computers, but he doesn’t show up in the game at all.

Jet is the main character in the game. He gets sucked into Encom’s computer system (through the “laser process”) when a corruption in the system is detected. The theme of virus corruption plays prominently in the game. Jet is immediately considered part of the corruption by the system’s guards, so he has to fight the system while pleading his case that he’s not a threat. He meets up with “ma3a” to try to fight the corruption. A mystery emerges about “ma3a”, but in the process of trying to discover what it is, tragedy strikes. In my opinion this is the reason to play the game; it’s a really interesting plot twist. Jet continues trying to fight the corruption and find his way out of the computer world, with some help from his father once he’s able to make contact with him. Along the way Jet has some powerful flashbacks that reveal his family’s past.

Meanwhile, in the real world, an evil, thuggish corporation takes over Encom, and the higher-ups imprison Alan to get him to give up some technology secrets. Jet discovers what’s been happening in the real world (and Alan discovers what happened to Jet) and uses the computer system to help his father escape captivity. In turn, his father is better able to help him from the outside.

I thought the story line that was put into the game was really interesting, and it made the game worthwhile to play all the way through. Everything was great…until the ending. It was written like an afterthought. It sucked. Still, I like the game. I’ve been a fan of it for a long time. There’s a good story, some great “eye candy”, and some good “retro” parts that took me back years.

Bruce Boxleitner and Cindy Morgan lent their voices to their game characters, Alan Bradley and “ma3a”. Syd Mead designed a new light cycle, and perhaps some of the other stuff for the game.

========================================

A bit of the plot has been revealed for the movie sequel. This is obviously spoiler material, so you may wish to skip it.

The backstory is that Flynn disappeared into the Encom system years ago and has been missing ever since. In the present day, Flynn’s son investigates his father’s disappearance and along the way gets sucked into the Encom system himself. He finds his father inside the system, and together with a female character they go on a journey much more perilous than the one Flynn took with Tron and Yori (computer programs written by Alan and Lora) in the first movie. Obviously Alan Bradley plays a part in the story somehow, but that has not been revealed yet.

========================================

The producers of the sequel say that while it will pay homage to the first film in parts, they’re creating it as a stand-alone movie. The audience will not need to have seen the first film to understand it. This makes sense, as it’ll have been nearly 30 years since the first film was made. It reminds me of the way the newer Battlestar Galactica series was done. There were a few references to things from the original TV series, though it was disorienting the way they recast the original series’ events as “the first war”. I always thought it would’ve been better if they had just left the old references out and presented the series as a remake. For the most part they created it anew.

Edit 7-27-09: Disney has released on YouTube an updated version of the “test reel” that they showed at Comic-Con 2008. It looks and sounds A LOT clearer than the out-of-focus, fuzzy bootleg version that’s been on the internet for a year! The title they seem to be going with for the movie is “Tron Legacy”. And they say right on it that they expect the movie to be released in 2010. They also mention “IMAX 3D”. Awesome! 🙂

“I’m not a scientist, but I play one on TV…”

“A man must learn on this principle, that he is far removed from the truth”
– Democritus

Science is a way of thinking. As Neil deGrasse Tyson has said, “It is a philosophy of discovery.” I reflected recently on what being a scientist is really all about. Good scientists are constantly trying to change their perception of reality. No, they’re not using psychedelic drugs (hopefully). They are more like art appreciators trying to see what the artist is saying more clearly, except that their method is to guess at what the “artist meant” and then test the guess, using instruments to observe the object of the guess more clearly than our native senses can. They share their observations with other scientists so that they can be validated or invalidated by peer review. Think of it as a “sanity check” on what’s been found. It’s really like reverse-engineering nature, if you think about it. I don’t mean to mislead with these analogies. I’m not trying to say that I know there is an “artist” of the world, or an “engineer”. I have my own opinions about that. Science has not found a creator for the world, and since I’m talking about science I will respect that here. I’m trying to convey how scientists discover and use different perspectives to get at what’s really going on in our world and universe. They try to get beyond what the untrained eye and mind can see.

In our everyday lives we have a saying: “If it seems too good to be true, it probably is.” Scientists try to get beyond the obvious, because they know that if something looks obvious it probably isn’t that simple in reality. Scientists are natural skeptics. I’ve heard it said that the best scientists are the ones who are always trying to prove themselves wrong. The most succinct description I’ve heard for how scientists come up with guesses (hypotheses) is that they must come up with something that can be “falsified”. In other words, it must be something that can be observed and/or measured, so that others can say, “I came up with the same thing,” or, “No, this is not right. You missed this, or you did not consider that.” Most hypotheses are proved wrong in some way. There is a constant process of “debugging” our own notions of what’s happening. If a hypothesis cannot be falsified, it is not science.

Even if a hypothesis turns out to be valid, scientists try to find the limits of its validity. This is commonly called “error”. It’s a reflection of how much confidence scientists have in their results, and there is always some error in science. The way I view it is to imagine yourself inside a partially opaque sphere. You can see through it a little, but the shapes of the objects outside look foggy and unclear. Science is the process of wiping away bits of what’s obscuring the image. You create a “window” through which you can see a bit of reality, but not all of it. Part of being a good scientist is understanding where there is a decent level of clarity and what its boundaries are. There is always a limit. Even scientific instruments can introduce error into observations, and scientists must be aware of these limitations. Science is the process of trying to expand that “window” more and more, sometimes in small steps, sometimes in big ones, to see reality more clearly.

Recently I’ve begun to wonder if our schools are teaching science correctly. A week or two ago I got into a debate with a newspaper columnist named Mike Ellis at the Daily Camera, Boulder, Colorado’s daily newspaper. We’d chat in various comment sections on the Daily Camera web site whenever the issue of global warming came up. At the time, Ellis asserted that CO2 levels had remained very consistent for thousands of years and had only recently begun to change, to levels not seen for a very, very long time (hundreds of thousands of years, I believe he said). I’ve also seen him say that prior to the Industrial Revolution, climate change happened because of changes in Earth’s orbit. One could throw in continental drift as well, in all seriousness. He said that CO2 levels correlate very well with the rising temperature trend we’ve seen since the Industrial Revolution, and that CO2 is entirely responsible for it.

I had been paying attention to the debate on this issue off and on for several years. From what I’ve heard from the proponents of the theory of anthropogenic global warming (AGW, climatic warming caused by human activity), even they don’t make such a claim. They would say that CO2 is a significant contributor to the rise in temperature, but not that it’s responsible for all of it. I asked over at WattsUpWithThat.com about this argument, and they were struck by it. One commenter said, “Not even the IPCC makes that claim.” What bugged me is that Ellis made such a hard and fast scientific claim based on a correlation he said he saw in the data. Correlations in data can be deceiving. They can make you think you’ve found a relationship between phenomena when you haven’t. The question is: what is the relationship, if any? It must be tested scientifically, by observation, before the relationship can legitimately be claimed to be real. It turns out it has been tested, but the scientific conclusion is unclear to me now. For now I’m taking the position that it’s not settled, at least in my own mind. I’ll need to look into this further.

I came upon an article in the opinion section of the Daily Camera web site titled “Global warming whodunit”, written by Ellis. He is a blogger, and his credit says that he studies climate change as a hobby. Okay, so he’s not a professional scientist. I read through most of the article and thought he used an interesting literary framework for making his argument: he puts CO2 “on trial”. However, when I got to the last three paragraphs I thought, “Wait a minute. There’s something wrong here.” Ellis asserts:

Still not convinced? I loaded as much publicly available data as I could into Microsoft Excel. The result? An 88 percent correlation between global temperatures and atmospheric CO2 concentration. The temperature correlation peaks about 12 years after the CO2 stimulus, and falls off slowly over decades. This is huge evidence that CO2 drives temperatures, and that the oil we burn today causes the most warming 10 to 15 years from now. [my emphasis]

Notice his use of the term “evidence”. I thought surely he had some scientific source to back up what is unmistakably a scientific claim. It sounded kind of like the arguments we’d had earlier, but this time he added this 10-to-15-year delay factor. I and others asked Ellis in the comments section (I’m “mmille10” in the comments) to show how he came up with this conclusion. He posted the URL of a blog post where he asserted the same thing, showing charts he had produced in Excel by combining two data sets (for CO2 and global average temperature) and shifting the temperature data set 12 years to show the correlation in higher relief. Take a look at them. The correlation he talks about looks quite beautiful…and obvious. In his article he talked about other correlations he tried with other purported causes, such as sunspots, but they were not as good a fit as CO2 to temperature. The implication he leaves the reader with is that CO2 is most definitely the “culprit”.
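Out of curiosity about what such an analysis involves, here is a minimal sketch of a lag-scan correlation in Python. It’s my guess at the general shape of Ellis’s method, not a reproduction of it, and it uses synthetic stand-in data (real work would load published CO2 and temperature series):

```python
# A lag-scan correlation sketch with synthetic stand-in data; real work
# would load published annual CO2 and temperature series instead.
import numpy as np

rng = np.random.default_rng(0)
n = 150                                   # 150 "years" of annual values
co2 = np.cumsum(rng.normal(0.5, 1.0, n))  # stand-in rising CO2 series
temp = np.zeros(n)
temp[12:] = 0.01 * co2[:-12]              # built to follow CO2 by 12 "years"
temp += rng.normal(0.0, 0.02, n)          # measurement noise

def lagged_correlation(driver, response, lag):
    """Pearson correlation between a driver and a response shifted by lag years."""
    if lag == 0:
        return np.corrcoef(driver, response)[0, 1]
    return np.corrcoef(driver[:-lag], response[lag:])[0, 1]

# Scan lags 0-30 years and report the lag with the strongest correlation.
best = max(range(31), key=lambda lag: lagged_correlation(co2, temp, lag))
print(best, lagged_correlation(co2, temp, best))
```

The catch, of course, is that a beautiful correlation at some lag is still just a correlation; a scan like this will find one in almost any pair of drifting series.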

Ellis admitted in the comments that he had no scientific source to back up his claim of a 10-to-15-year relationship between CO2 and temperature (though he did reference scientific information which he said proved that CO2 causes global warming), and that he had seen nothing that contradicted his “results” (hah!), but that it didn’t matter because it was an opinion column. That’s just a cop-out. There’s an old saying I’ve heard in journalism: “We have the right to our own opinions, but we do not have the right to our own facts.” Ellis did his own statistical analysis, had the audacity to dress it up as a scientific claim, and then used it as fact in his argument. This is pseudo-science at its most brazen, and it’s not even that hard to figure out that’s what he’s doing. His whole column hinges on this claim. He doesn’t even fall back on the old saw, “Most of the world’s scientists believe this is true.” That would’ve had more legitimacy than this red herring.

Ellis’s whole point is about the correlation that matches up so well. He begs his readers, “What else could it be?” (I’m paraphrasing.) Well, that’s the thing. It could be something else. The only way to eliminate or diminish that possibility is to test the relationship out in nature. I’ll ignore for the moment that Ellis called this “evidence”, which it most certainly is not. At best it is a hypothesis, but is it falsifiable?

Leaving his pseudo-science aside, I think where Ellis made an error is that he assumes there can only be one or two major factors affecting temperature. A thought I had was that one of the data sets he threw out, because it doesn’t correlate well with temperature by itself, might actually contribute if he combined it with the other factors and events that climatologists also think affect temperature (see the sketch below). A simple-minded, one-factor-at-a-time analysis is not good enough for science.
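To make that concrete, here is a minimal sketch of the multi-factor version of the analysis, again with synthetic stand-in data and hypothetical factor choices. Instead of correlating each candidate driver with temperature one at a time, it fits temperature against several drivers at once with ordinary least squares:

```python
# A multi-factor fit sketch with synthetic stand-in data; real work would
# use published data sets for each candidate driver.
import numpy as np

rng = np.random.default_rng(1)
n = 150
co2 = np.cumsum(rng.normal(0.5, 1.0, n))                     # stand-in drivers
sunspots = 50 + 40 * np.sin(np.arange(n) * 2 * np.pi / 11)   # 11-"year" cycle
aerosols = rng.normal(0.0, 1.0, n)
temp = 0.01 * co2 + 0.002 * sunspots + rng.normal(0.0, 0.05, n)

# Design matrix: one column per candidate driver, plus an intercept column.
X = np.column_stack([co2, sunspots, aerosols, np.ones(n)])

# Least-squares fit: temp is approximated by X @ coeffs.
coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(coeffs)  # one weight per driver; near-zero weights suggest weak drivers
```

A factor that looks weak on its own can still carry weight in a joint fit, which is exactly the possibility that a one-factor-at-a-time analysis throws away.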

I respect Ellis’s right to his opinions, but I think it goes beyond the pale for even an opinion columnist to mislead the reading public from the platform of a newspaper of record. I haven’t read him extensively, so I can’t speak to the quality of his other work; I’m talking about this one article. But given the platform he has, I found the article and its ignorance of the scientific method offensive. The Daily Camera should aim for better quality than this in a city that has three major science labs (NOAA, NIST, and NCAR) and one of the premier science and engineering universities in the country (CU Boulder). Publishing this drivel was an insult to our intelligence.

I kind of understand what’s going on here. Newspapers are really hurting financially right now. The Rocky Mountain News, a paper that had been around for more than a century, went out of business a few months ago. Newspapers are desperate to find ways to seem more relevant and attract readers. In this case the Camera has brought in a blogger. It hasn’t helped the quality of what they publish; I can tell you that much.

On the use of computers in science

I’m including some material here on what Dr. Alan Kay says about science education and computers, because it has implications for what I talk about above. Incidentally, Kay graduated with a B.A. in molecular biology and mathematics from the University of Colorado, back in the 1960s, I believe. He received his master’s and doctorate in computer science from the University of Utah. In addition to being a pioneer in computing, he’s done a lot of pioneering work outside of academia on math and science education principles, using computers for childhood education.

He’s said that the appropriate way to approach teaching scientific principles to children using computers is to have them create simulations of what they see. This is important, because doing it backwards (presenting a model and expecting reality to match it) is pseudoscience. In one presentation on this before a group of teachers, a talk called “What is Squeak?”, he said:

You can’t do science on a computer or with a book, because [with] the computer–like a book, like a movie–you can make up anything. We can have an inverse cube law of gravity on here, and the computer doesn’t care. No language system that we have knows what reality is like. That’s why we have to go out and negotiate with reality by doing experiments.

There was a Q & A session after his talk. I couldn’t hear the question exactly, but I think a teacher asked whether a simulation Kay had up on screen was pseudoscience. Kay said, “This is a model. If you present it to the kids as fact, it is pseudoscience.” The idea is that a model is something the kids should construct after having experienced the actual phenomenon, in order to explore what they have just observed. Scientists who use computers properly to create simulations follow the same process. By going through it, students and scientists can learn more about the mechanics of what they have observed, and come up with insights that lead to new questions. Kids also get the added benefit of learning some mathematical principles in the process. He makes a big point about not taking a pre-existing model and just showing it to kids as if it were fact, because then you lose what science is about as a thought process.
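Kay’s inverse cube remark is easy to demonstrate. Here is a toy sketch of my own (not from his talk): the computer will integrate a 1/r^3 “gravity” law just as happily as the real 1/r^2 law, and nothing in the program complains. Only experiment can tell you which exponent nature uses:

```python
# A toy integrator (my own illustration, not from Kay's talk): drop a body
# from rest toward a mass at the origin under a 1/r**exponent force law.
def fall(exponent, steps=800, dt=0.001):
    r, v = 1.0, 0.0               # initial distance and radial velocity
    for _ in range(steps):
        a = -1.0 / r ** exponent  # acceleration; the program doesn't care
        v += a * dt               # which exponent we picked
        r += v * dt
    return r                      # distance remaining after 0.8 time units

print(fall(2))  # inverse square law (the one experiments support)
print(fall(3))  # inverse cube law (computed without complaint)
```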

After watching his presentation, I began wondering how computers are used by professional scientists. For example, meteorologists use computer models as part of their weather prediction process. I don’t know for sure, but I feel fairly certain that they didn’t create these models themselves. They may alter parameters that go into a model; I don’t know enough about meteorological practice to say for sure. So are they participating in pseudo-science? I’d say one difference is that meteorologists do not just say, “The computer model says X, so that’s our prediction.” They actually use several prediction models at once, because there’s not just one “right” model; different models come up with different results. From what I’ve heard, they use the models to set boundaries for what could happen, within a certain margin of error (I’m doing some hand waving here). One question I have is whether these models have an error rating. It seems to me this could theoretically be established over time, by comparing a long series of model predictions with what actually happened. The question is whether the error can be measured.
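Measuring that error would be, at least in principle, straightforward bookkeeping. Here is a minimal sketch with stand-in numbers: line up a series of archived forecasts with what was later observed, and summarize the gap as a root-mean-square error:

```python
# A minimal error-rating sketch with stand-in numbers: pair archived
# forecasts with later observations and summarize the gap as RMSE.
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired predictions and observations."""
    pairs = list(zip(predicted, observed))
    return math.sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))

# Stand-in numbers: next-day high temperature forecasts vs. measurements.
forecasts = [71.0, 68.5, 74.0, 80.0, 77.5]
actuals = [73.0, 67.0, 75.5, 78.0, 79.0]
print(rmse(forecasts, actuals))  # one "error rating" for this model
```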

Meteorologists use their own skills, gathering data like temperature, humidity, barometric pressure, etc., in addition to looking at the models, to make a prediction. Even so, due to the chaotic nature of the atmosphere, the only weather prediction you can have much confidence in is the one for the next day.

I don’t know for sure, but I feel fairly certain that NASA uses computers to determine the flight paths of the spacecraft it launches, taking gravity wells into consideration. Even there the science is not exact, which I think is borne out by the number of failed landings NASA has had on Mars. There have also been a number of times when NASA has had to make unplanned corrections to a spacecraft’s flight path mid-mission, corrections that had nothing to do with equipment malfunctions.

What about using a computer model to demonstrate what has been found scientifically? I think if the computer is just used to display scientifically gathered data (in any form: a chart, an animation, etc.), that’s different from running a model that’s actually computing its way through a process (a simulation) and saying, “This is reality.” Even when models are used in prediction, I can tell you from experience that there is some error involved, as is true of any scientific instrument. Having said this, I think if a computer is used in a scientific presentation of observed data, there should be accompanying material that shows the error in the observations. People have a perception that computers are precise and exacting, and therefore reveal absolute truth. It can be a challenge to convey error through a demonstration on a computer.
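As a small sketch of my own (with stand-in numbers), one way to do this when displaying data on a computer is to never plot observed values as bare, exact-looking points, but to attach the uncertainty estimate to every point:

```python
# A data-display sketch with stand-in numbers: show each observed value
# together with its uncertainty instead of as a bare, exact-looking point.
import matplotlib.pyplot as plt

years = [2000, 2001, 2002, 2003, 2004]
observed = [14.31, 14.48, 14.56, 14.55, 14.49]  # stand-in measurements
uncertainty = [0.05] * len(years)               # stand-in error estimates

plt.errorbar(years, observed, yerr=uncertainty, fmt="o", capsize=3)
plt.xlabel("Year")
plt.ylabel("Global mean temperature (deg C)")
plt.title("Observed data displayed with its measurement error")
plt.show()
```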

Bringing this full circle, one might think that Mike Ellis, in his article, did what I just described: used a computer to display data. The difference is that he drew unwarranted conclusions from the “data display” and passed it off as “huge evidence.” It would’ve been scientifically valid for Ellis to point out that a data correlation exists between CO2 and temperature. That’s interesting. It could serve as a hypothesis, and as motivation to do scientific research on it. The point where he stepped into pseudoscience was when he said this correlation showed that a strong cause-and-effect relationship existed between the two. He tried to do “science on a computer,” as Kay would put it.