
I’ve had it on my mind to post this for a while, though I do it with some hesitation, since the topic has been so politicized. I wrote about the ban on incandescents in 2008, in “Say goodbye to incandescent bulbs. Say hello to the ‘new normal’.”

I found an interesting nugget as I was writing this post. Back then, I had an idea for how to make CFLs safer. I don’t know why no one thought of it before I did. From a consumer safety standpoint, it would seem like an obvious solution:

[M]aybe they could put a plastic bulb around the coil that would absorb the shock, or at least contain the contents if the coil broke from jarring. This would require them to make the coils smaller, but the added safety would be worth it.

Several months ago, while shopping for bulbs at the grocery store, I noticed they were selling CFLs with a hard plastic bulb, just like I suggested! Too little, too late, I guess. I am happy to say that CFLs are going away at the end of this year.

I actually stocked up on incandescents on clearance. I haven’t been using CFLs for most of my lighting. I ended up using them for a couple overhead lights, just because an inspector came through a couple years ago to make sure the apartment complex I live in was “smart regs-compliant,” and I didn’t want to embarrass my landlord by creating any excuse to call the dwelling non-compliant. “I love my minders…”

This doesn’t mean incandescents are coming back. They’re still banned by legislation passed in 2007, but at least there are now better lighting options. For the last few years, those who prefer incandescents have been able to get halogen bulbs that are the same size as the old incandescents, and just as bright. They cost more, but they’re supposed to use less electricity, and last longer. What I care about is that they operate on the same principle: they use a filament. So if you break one, all you have to do is clean up the glass, and vacuum or mop. No toxic spill. Yay!

As the guys in the video say, there are now very good LED lighting options, which are also non-toxic to you.

So I see this as reason to celebrate. The market actually succeeded in getting rid of an expensive, hazardous form of lighting that, it seems to me, our government tried to force on us. I wish the ban would be lifted. I don’t see the point in it. I know people will say it reduces pollution, because we can lower the amount of electricity we use, but as I explained years ago, this is a dumb way to go about it. Lighting is not the major source of electricity usage. Our air conditioning is the elephant in the room, each summer!

Aaron Renn had a good article on what was really going on with the light bulb ban, in “The Return of the Monkish Virtues,” written in 2012. It’s interesting reading. The idea was always to force people to think about their impact on the environment, not to actually do anything substantial about it.

 

“We need to do away with the myth that computer science is about computers. Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes. Science is not about tools, it is about how we use them and what we find out when we do.”
— “SIGACT trying to get children excited about CS,”
by Michael R. Fellows and Ian Parberry

There have been many times over the years when I’ve thought about writing about this, but I’ve held off, because I felt like I wasn’t sure yet what I was doing, or what I was learning. I feel like I’ve reached a point now where some things have become clear, and I can finally talk about them coherently.

I’d like to explain some things that I doubt most CS students are going to learn in school, but they should, drawing from my own experience.

First, some notions about science.

Science is not just a body of knowledge (though that is an important part of it). It is also venturing into the unknown, and developing within yourself an idea of what it means to really know something, and an understanding that knowing something doesn’t mean you know the truth. It means you have a pretty good idea of what the reality you are studying is like. You are able to create a description that resembles that reality closely enough that you are able to make predictions about it, and those predictions can be confirmed within some boundaries that are either known, or can be discovered, and subsequently confirmed by others without the need to revise the model. Even then, what you have is not “the truth.” What you have might be a pretty good idea for the time being. As Richard Feynman said, in science you never really know if you are right. All you can be certain of is that you are wrong. Being wrong can be very valuable, because you can learn the limits of your common-sense perceptions, and knowing that, direct your efforts toward what is more accurate, what is closer to reality.

The field of computing and computer science doesn’t have this perspective. It is not science. My CS professors used to admit this openly, but my understanding is that CS professors these days aren’t even aware of the distinction anymore. In other words, they don’t know what science is, but they presume that what they practice is science. CS, as it’s taught, has some value, but what I hope to introduce people to here is the notion that after they have graduated with a CS degree, they are in kindergarten, or perhaps first grade. They have learned some basics about how to read and write, but they have not learned what is really powerful about that skill. I will share some knowledge I have gleaned as someone who has ventured into this perspective as a beginner.

The first thing to understand is that an undergraduate computer science education doesn’t tend to give you any notions about what computing is. It doesn’t explore models of computing, and in many cases, it doesn’t even present much in the way of different models of programming. It presents one model of computing (the von Neumann model), but it doesn’t venture deeply into how that model works. It provides some descriptions of entities that are supposed to make it work, but it doesn’t tie them together so that you can see the relationships, and see how the thing really works.

Science is about challenging and forming models of phenomena. A more powerful notion of computing is realized by understanding that the point of computer science is not the application of computing to problems, but creating, criticizing, and studying the strengths and weaknesses of different models of computing systems, and that programming is a means of modeling our ideas about them. In other words, programming languages and development environments can be used to model, explore, test, and criticize our notions about different computing system models. It behooves the computer scientist to either choose an existing computing architecture represented by an existing language, or to create a new architecture, and a new language, that is suitable to what is being attempted, so that the focus can be maintained on what is being modeled, and on testing that model, without a lot of distraction.

As Alan Kay has said, the “language” part of “programming language” is really a user interface. I suggest that it is a user interface to a model of computing, a computing architecture. We can call it a programming model. As such, it presents a simulated environment of what is actually being accomplished in real terms, which enables a certain way of seeing the model you are trying to construct. (As I said earlier, we can use programming (with a language) to model our notions of computing.) In other words, you can get out of dealing with the guts of a certain computing architecture, because the runtime is handling those details for you. You’re just dealing with the interface to it. In this case, you’re using, quite explicitly, the architecture embodied in the runtime, not an API, as a means to an end. The power doesn’t lie in an API. The power you’re seeking to exploit is the architecture represented by the language runtime (a computing model).

This may sound like a confusing roundabout, but what I’m saying is that you can use a model of computing as a means for defining a new model of computing, whatever best suits the task. I propose that, indeed, that’s what we should be doing in computer science, if we’re not working directly with hardware.
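To make that a bit more concrete, here is a minimal sketch of the idea. I’m writing it in Python only because it’s widely readable (the exercise is even more natural in a Lisp), and the toy expression language, along with every name in it, is something I made up purely for illustration. The host language’s runtime is one computing model; the little evaluator defines another one on top of it:

# A toy computing model: prefix expressions represented as nested tuples,
# evaluated against an environment. The host language supplies the machinery;
# the tuple language is the new model being defined on top of it.

def evaluate(expr, env):
    """Evaluate a toy expression: a number, a variable name, or (op, arg, ...)."""
    if isinstance(expr, (int, float)):   # literal number
        return expr
    if isinstance(expr, str):            # variable reference
        return env[expr]
    op, *args = expr
    if op == "if":                       # special form: only one branch is evaluated
        test, then_branch, else_branch = args
        return evaluate(then_branch if evaluate(test, env) else else_branch, env)
    values = [evaluate(a, env) for a in args]   # ordinary operator: evaluate arguments first
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError("unknown operator: " + str(op))

# These "programs" run on the toy model, not directly on Python's own rules.
print(evaluate(("+", 1, ("*", "x", 3)), {"x": 4}))                    # prints 13
print(evaluate(("if", ("+", 0, 0), 10, ("*", "x", "x")), {"x": 4}))   # prints 16

The particulars don’t matter. What matters is that once the evaluator exists, anything written in the toy language runs against the model it embodies; the host runtime disappears behind that interface, which is the sense in which the “language” is a user interface to a computing model.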

When I talk about modeling computing systems, given their complexity, I’ve come to understand that constructing them is a gradual process of building up more succinct expressions of meaning for complex processes, with a goal of maintaining flexibility in how the sub-processes used for higher levels of meaning can be reused. As a beginner, I’ve found that starting with a powerful programming language in a minimalist configuration, with just basic primitives, was very constructive for understanding this, because it forced me to venture into constructing my own means for expressing what I wanted to model, and to understand what is involved in doing that. I’ve been using a version of Lisp for this purpose, and it’s been a very rewarding experience, but other students may have other preferences. I don’t mean to prejudice anyone toward one programming model. I recommend finding, or creating, one that fits the kind of modeling you are pursuing.
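As a small, contrived taste of what building up from primitives feels like (again in Python for readability, and again with names I invented just for this sketch), suppose the only primitives available were function definition and application. Pairs, lists, and list operations can all be grown out of that:

# Pretend the only primitives are defining and applying functions.
# Pairs can be represented as closures, and everything else built from pairs.

def cons(head, tail):
    """Build a pair as a closure that remembers its two parts."""
    return lambda select: head if select else tail

def car(pair):
    return pair(True)

def cdr(pair):
    return pair(False)

NIL = None   # marker for the empty list

def to_list(*items):
    """Build a linked list of pairs from ordinary arguments."""
    result = NIL
    for item in reversed(items):
        result = cons(item, result)
    return result

def length(lst):
    return 0 if lst is NIL else 1 + length(cdr(lst))

def map_list(fn, lst):
    return NIL if lst is NIL else cons(fn(car(lst)), map_list(fn, cdr(lst)))

numbers = to_list(1, 2, 3)
doubled = map_list(lambda n: n * 2, numbers)
print(length(doubled), car(doubled), car(cdr(doubled)))   # prints: 3 2 4

Nobody would build production lists this way, but going through the exercise makes it plain that the higher-level vocabulary you end up thinking in is something you constructed, and could have constructed differently to suit a different model.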

Along with this perspective, I think, there must come an understanding of what different programming languages are really doing for you, in their fundamental architecture. What are the fundamental types in the language, and for what are they best suited? What are the constructs and facilities in the language, and what do they facilitate, in real terms? Sure, you can use them in all sorts of different ways, but to what do they lend themselves most easily? What are the fundamental functions in the runtime architecture that are experienced in the use of different features in the language? And finally, how are these things useful to you in creating the model you want to attempt? Seeing programming languages in this way provides a whole new perspective on them that has you evaluating how they facilitate your modeling practice.

Application comes along eventually, but it comes late in the process. The real point is to create the supporting architecture, and any necessary operating systems first. That’s the “hypothesis” in the science you are pursuing. Applications are tests of that hypothesis, to see if what is predicted by its creators actually bears out. The point is to develop hypotheses of computing systems so that their predictive limits can be ascertained, if they are not falsified. People will ask, “Where does practical application come in?” For the computer scientist involved in this process, it doesn’t, really. Engineers can plumb the knowledge base that’s generated through this process to develop ideas for better system software, software development systems, and application environments to deploy. However, computer science should not be in service to that goal. It can merely be useful toward it, if engineers see that it is fit to do so. The point is to explore, test, discuss, criticize, and strive to know.

These are just some preliminary thoughts on computer systems modeling. In my research, I’ve gotten indications that it’s not healthy for the discipline to be insular, just looking at its own artifacts for inspiration for new ideas. It needs to look at other fields of study to introduce unorthodox models into the discussion. I haven’t gotten into that yet. I’m still trying to understand some basic ideas of a computer system, using a more mathematical approach.

Related post: Does computer science have a future?

— Mark Miller, https://tekkie.wordpress.com

As this blog has progressed, I’ve gotten farther away from the technical, and moved more towards a focus on the best that’s been accomplished, the best that’s been thought (that I can find and recognize), with people who have been attempting to advance, or have made some contribution to what we can be as a society. I have also put a focus on what I see happening that is retrograde, which threatens the possibility of that advancement. I think it is important to point that out, because I’ve come to realize that the crucible that makes those advancements possible is fragile, and I want it to be protected, for whatever that’s worth.

As I’ve had conversations with people about this subject, I’ve been coming to realize why I have such a strong desire to see us be a freer nation than we are. It’s because I got to see a microcosm of what’s possible within a nascent field of research and commercial development, with personal computing, and later the internet, where the advancements were breathtaking and exciting, which inspired my imagination to such a height that it really seemed like the sky was the limit. It gave me visions of what our society could be, most of them not that realistic, but they were so inspiring. It took place in a context of a significant amount of government-funded, and private research, but at the same time, in a legal environment that was not heavily regulated. At the time when the most exciting stuff was happening, it was too small and unintelligible for most people to take notice of it, and society largely thought it could get along without it, and did. It was ignored, so people in it were free to try all sorts of interesting things, to have interesting thoughts about what they were accomplishing, and for some to test those ideas out. It wasn’t all good, and since then, lots of ideas I wouldn’t approve of have been tried as a result of this technology, but there is so much that’s good in it as well, which I have benefitted from so much over the years. I am so grateful that it exists, and that so many people had the freedom to try something great with it. This experience has proven to me that the same is possible in all human endeavors, if people are free to pursue them. Not all goals that people would have in it would be things I think are good, but by the same token I think there would be so much possible that would be good, from which people of many walks of life would benefit.

Glenn Beck wrote a column that encapsulates this sentiment so well. I’m frankly surprised he thought to write it. Some years ago, I strongly criticized Beck for writing off the potential of government-funded research in information technology. His critique was part of what inspired me to write the blog series “A history lesson on government R&D.” We come at this from different backgrounds, but he sums it up so well, I thought I’d share it. It’s called The Internet: What America Can And Should Be Again. Please forgive the editing mistakes, of which there are many. His purpose is to talk about political visions for the United States, so he doesn’t get into the history of who built the technology. That’s not his point. He captures well what the people who developed the technology of the internet wanted to create in society: a free flow of information and ideas, a proliferation of services of all sorts, and a means by which people could freely act together and build communities, if they could manage it. The key word in this is “freedom.” He makes the point that it is we who make good things happen on the internet, if we’re on it, and by the same token it is we who can make good things happen in the non-digital sphere, and it can and should be mostly in the private sector.

I think of this technological development as a precious example of what the non-digital aspects of our society can be like. I don’t mean the internet as a verbatim model of our society. I mean it as an example within a society that has law, which applies to people’s actions on the internet, and that has an ostensibly representational political system already; an example of the kind of freedom we can have within that, if we allow ourselves to have it. We already allow it on the internet. Why not outside of it? Why can’t speech and expression, and commercial enterprise in the non-digital realm be like what they are in the digital realm, where a lot goes as-is? Well, one possible reason why our society likes the idea of the freewheeling internet, but not a freewheeling non-digital society, is that we can turn away from the internet. We can shut off our own access to it, and restrict access (e.g., parental controls). We can be selective about what we view on it. It’s harder to do that in the non-digital world.

As Beck points out, we once had the freedom of the internet in a non-digital society, in the 19th century, and he presents some compelling, historically accurate examples. I understand he glosses over significant aspects of our history that were not glowing, where not everyone was free. In today’s society, it’s always dangerous to harken back romantically to the 19th and early 20th centuries as “golden times,” because someone is liable to point out that they were not so golden. His point is to say that people of many walks of life (who, let’s admit it, were often white) had the freedom to take many risks of their own free will, even to their lives, but they took them anyway, and the country was better off for it. It’s not to ignore others who didn’t have freedom at the time. It’s using history as an illustration of an idea for the future, understanding that we have made significant strides in how we view people who look different, and come from different backgrounds, and what rights they have.

The context that Glenn Beck has used over the last 10 years is that history, as a barometer of progress, is not linear. Societal progress ebbs and flows. It has meandered in this country between freedom and oppression, with different depredations visited on different groups of people in different generations. They follow some patterns, and they repeat, but the affected people are different. The depredations of the past were pretty harsh. Today, not so much, but they exist nevertheless, and I think it’s worth pointing them out, and saying, “This is not representative of the freedom we value.”

The arc of America had been toward greater freedom, on balance, from the time of its founding up until the 1930s. Since then, we’ve wavered between backtracking and moving forward. I think it’s very accurate to say that we’ve gradually lost faith in it over the last 13 years. Recently, this loss of faith has become acute. Every time I look at people’s attitudes about it, they’re often afraid of freedom, thinking it will only allow the worst of humanity to roam free, and to lay waste to what everyone else has hoped for. Yes, some bad things will inevitably happen in such an environment. Bad stuff happens on the internet every day. Does that mean we should ban it, control it, contain it? If you don’t allow the opportunity that enables bad things to happen, you will not get good things, either. I’m all in favor of prosecuting the guilty who hurt others, but if we’re always in a preventative mode, we prevent that which could make our society so much better. You can’t have one without the other. It’s like trying to have your cake and eat it, too. It doesn’t work. If we’re so afraid of the depredations of our fellow citizens, then we don’t deserve the wonderful things they might bring, and that fact is being borne out in lost opportunities.

We have an example in our midst of what’s possible. Take the idea of it seriously, and consider how deprived your present existence would be if it wasn’t allowed to be what it is now. Take its principles, and consider widening the sphere of the system that allows all that it brings, beyond the digital, into our non-digital lives, and what wonderful things that could bring to the lives of so many.

Christina Engelbart has written an excellent summary of her late father’s (Doug Engelbart’s) ideas, and has outlined what’s missing from digital media.

Collective IQ Review

Interactive computing pioneer Doug Engelbart coined the term Collective IQ to inform the IT research agenda

Doug Engelbart was recognized as a great pioneer of interactive computing. Most of his dozens of prestigious awards cited his invention of the mouse. But in his mind, the true promise of interactive computing and the digital revolution was a larger strategic vision of using the technology to facilitate society’s evolution. His research agenda focused on augmenting human intellect, boosting our Collective IQ, enhancing human effectiveness at addressing our toughest challenges in business and society – a strategy I have come to call Bootstrapping Brilliance.

In his mind, interactive computing was about interacting with computers, and more importantly, interacting with the people and with the knowledge needed to quickly and intelligently identify problems and opportunities, research the issues and options, develop responses and solutions, integrate learnings, iterate rapidly. It’s about the people, knowledge, and tools interacting, how we intelligently leverage that, that provides the brilliant outcomes we…


Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, when I was attending the Rebooting Computing Summit at the Computer History Museum in Mountain View, CA. The people running the exhibit said that it had been commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM and would soon reclaim it and put it in his house. They thought that might happen in April of that year. A couple of years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!

I was taken with this interview on Reason.tv with Mr. O’Neill, a writer for Spiked, because he touched on so many topics that are pressing in the West. His critique of what’s motivating current anti-modern attitudes, and what they should remind us of, is so prescient that I thought it deserved a mention. He is a Brit, so his terminology will be rather confusing to American ears.

He called what’s termed “political correctness” “conservative.” I’ve heard this critique before, and it’s interesting, because it looks at group behavior from a principled standpoint, not just what’s used in common parlance. A lot of people won’t understand this, because what we call “conservative” now is in opposition to political correctness, and would be principally called something approaching “liberal” (as in “classical liberal”). I’ve talked about this with people from the UK before, and it goes back to that old saying that the United States and England are two countries separated by a common language. What we call “liberal” now, in common parlance, would be called “conservative” in their country. It’s the idea of maintaining the status quo, or even the status quo ante; of shutting out, even shutting down, any new ideas, especially anything controversial. It’s a behavior that goes along with “consolidating gains,” which is adverse to anything that would upset the applecart.

O’Neill’s most powerful argument is in regard to environmentalism. He doesn’t like it, calling it an “apology for poverty,” a justification for preventing the rest of the world from developing as the West did. He notes that it conveniently avoids the charge of racism, because it’s able to point to an amorphous threat, justified by “science,” which inoculates the campaign against such charges.

The plot thickens when O’Neill talks about himself, because he calls himself a “Marxist/libertarian.” He “unpacks” that, and explains that what he means is “the early Marx and Engels,” who, he says, talked about freeing people from poverty, and from state diktat. He summed it up quoting Trotsky: “We have to increase the power of man over Nature, and decrease the power of man over man.” He also used the term “progressive,” but Nick Gillespie explained that what O’Neill calls “progressive” is often what we would call “libertarian” in America. I don’t know what to make of him, but I found myself agreeing with a lot of what he said in this interview, at least. He and I see much the same things going on, and I think he accurately voices why I oppose what I see as anti-modern sentiment in the West.

Edit 1/11/2016: Here’s a talk O’Neill gave with Nick Cater of the Centre for Independent Studies, called “Age of Endarkenment,” in which they contrast Enlightenment thought with what concerns “the elect” today. What he points out is that the conflict between those who want ideas of progress to flourish and those who want to suppress societal progress has happened before. It happened pre-Enlightenment, and during the Enlightenment, and it will sound a bit familiar.

I’m going to quote a part of what he said, because I think it cuts to the chase of what this is really about. He echoes what I’ve learned as I’ve gotten older:

Now what we have is the ever-increasing encroachment of the state onto every aspect of our lives: How well we are, what our physical bodies are like, what we eat, what we drink, whether we smoke, where we can smoke, and even what we think, and what we can say. The Enlightenment was really, as Kant and others said, about encouraging people to take responsibility for their lives, and to grow up. Kant says all these “guardians” have made it seem extremely dangerous to be mature, and to be in control of your life. They’ve constantly told you that it’s extremely dangerous to run your own life. And he says you’ve got to ignore them, and you’ve got to dare to know. You’ve got to break free. That’s exactly what we’ve got to say now, because we have the return of these “guardians,” although they’re no longer kind of religious pointy-hatted people, but instead a kind of chattering class, and Greens, and nanny-staters, but they are the return of these “guardians” who are convincing us that it is extremely dangerous to live your life without expert guidance, without super-nannies telling you how to raise your children, without food experts telling you what to eat, without anti-smoking campaigners telling you what’s happening to your lungs. I think we need to follow Kant’s advice, and tell these guardians to go away, and to break free of that kind of state interference.

And one important point that [John Stuart] Mill makes in relation to all this is that even if people are a bit stupid, and make the wrong decisions when they’re running their life, he said even that is preferable to them being told what to do by the state or by experts. And the reason he says that’s preferable is because through doing that they use their moral muscles. They make a decision, they make a choice, and they learn from it. And in fact Mill says very explicitly that the only way you can become a properly responsible citizen, a morally responsible citizen, is by having freedom of choice, because it’s through that process, through the process of making a choice about your life that you can take responsibility for your life. He says if someone else is telling you how to live and how to think, and what to do, then you’re no better than an ape who’s following instructions. Spinoza makes the same point. He says you’re no better than a beast if you’re told what to think, and told what to say. And the only way you can become a man, or a woman these days as well–they have to be included, is if you are allowed to think for yourself to determine what your thought process should be, how you should live, and so on. So I think the irony of today, really Nick, is that we have these states who think they are making us more responsible by telling us not to do this, and not to do that, but in fact they’re robbing us of the ability to become responsible citizens. Because the only way you can become a responsible citizen is by being free, and by making a choice, and by using your moral muscles to decide what your life’s path should be.

This has been on my mind for a while, since I had a brush with it. I’ve been using Jungle Disk cloud-based backup since about 2007, and I’ve been pretty satisfied with it. I had to take my laptop “into the shop” early this year, and I used another laptop while mine was being repaired. I had the thought of getting a few things I was working on from my cloud backup, so I tried setting up the Jungle Disk client. I was dismayed to learn that I couldn’t get access to my backed-up files, because I didn’t have my Amazon S3 Access Key. I remember going through this before, and being able to recover my key from Amazon’s cloud service after giving Amazon my sign-in credentials. I couldn’t find the option for it this time. After doing some research online, I found out they stopped offering that. So, if you don’t have your access key and you want your data back, you are SOL. You can’t get any of it back, period, even if you give Amazon’s cloud services your correct sign-on credentials. I also read stories from very disappointed customers who had a computer crash, didn’t have their key, and had no means to recover their data from the backup they had been paying for for years. This is an issue that Jungle Disk should’ve notified customers about a while ago. It’s disconcerting that they haven’t done so, but since I know about it now, and my situation was not catastrophic, it’s not enough to make me want to stop using the service. The price can’t be beat, but this is a case where you get what you pay for as well.

My advice: write down your Amazon S3 Access Key, on a piece of paper or in a digital note that you know is secure and recoverable if something bad happens to your device. Do. It. NOW! You can find it by bringing up Jungle Disk’s Activity Monitor, and then going to its Desktop Configuration screen. Look under Application Settings, and then Jungle Disk Account Information. It’s a 20-character alphanumeric code. Once you have it, you’re good to go.

Edit 12/23/2015: I forgot to mention that to re-establish a connection with your backup account, you also need what’s called a Secret Key. You set this up when you first set up Jungle Disk. You should keep it with your S3 Access Key. From my research, though, it seems the most essential thing for re-establishing the connection with your backup account is keeping a copy of your S3 Access Key. The Secret Key is important, but you can generate a new one if you don’t know what it is. Amazon no longer reveals your Secret Key; the article “Where’s My Secret Access Key?” talks about this. It sounds relatively painless to generate a new one, so I assume you don’t lose access to your backup files by doing this. Amazon’s system limits you to two generated Secret Keys “at a time.” You can generate more than two, but if you reach this limit, you first have to delete one of the old ones. The article explains how to do that.
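For what it’s worth, the same bookkeeping can also be done outside the web console. The snippet below is just a rough sketch using Amazon’s boto3 Python library, assuming you already have working AWS credentials configured on the machine for the account Jungle Disk uses; it lists the access key IDs that exist and creates a fresh access key/secret key pair. Jungle Disk itself still has to be given the new values through its own configuration screens.

import boto3

# Rough sketch: inspect and rotate the access keys on the AWS identity
# that the configured credentials belong to.
iam = boto3.client("iam")

# List the access key IDs that already exist. The secret halves are never
# shown again after creation, which is why writing them down matters.
for key in iam.list_access_keys()["AccessKeyMetadata"]:
    print(key["AccessKeyId"], key["Status"], key["CreateDate"])

# Create a new access key / secret key pair. This response is the only time
# the secret key is visible, so record both values somewhere safe immediately.
new_key = iam.create_access_key()["AccessKey"]
print("Access Key ID:", new_key["AccessKeyId"])
print("Secret Key:   ", new_key["SecretAccessKey"])

If you run into the two-key limit mentioned above, an old key can be removed first with iam.delete_access_key(AccessKeyId=...) before creating the new one.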

I’m really late with this, because it came out in late May, when I was super busy with a trip I was planning. I totally missed the announcement until I happened upon it recently. In past posts I had made a mention or two about Disney working on a sequel to Tron Legacy. Well, they announced that it isn’t happening. It’s been cancelled. The official announcement said that the movie release schedule in 2017 was just too full of other live-action films Disney is planning. “RaginRonin” on YouTube gave what I think is the best synopsis of movie industry pundit analysis. It sounds like it comes down to this: Disney is averse to live-action films that don’t relate to two properties that have been successful for them, Pirates of the Caribbean and anything related to its classic fairy tale franchise. Other than that, they want to focus on their core competency, which is animation.

All pundit analysis focused on one movie: Tomorrowland. It was a Disney live-action sci-fi movie that flopped. They figured Disney took one look at that and said, “We can’t do another one of those.”

One thing contradicts this, though, and I’m a bit surprised no one I’ve heard so far picked up on this: Disney is coming out with a live-action sci-fi film soon. It’s called Star Wars: The Force Awakens… though it is being done through its Lucasfilm division, and it’s their staff doing it, not Disney’s. Maybe they think that will make a difference in their live-action fortunes. Disney paid a lot of money for Lucasfilm, and so of course they want it to produce. They want another series. No, more than one series!

As with the first Tron film in 1982, Legacy did well at the box office, but not well enough to wow Disney. Apparently they were expecting it to be a billion-dollar film in ticket sales domestically and internationally, and it grossed $400 million instead. Secondly, Tron: Uprising, the animated TV series that was produced for Disney XD and got some critical acclaim, did not do well enough for Disney’s taste, and was cancelled after one season. Though I think the criticism that “of course it didn’t do well, since they put it on an HD channel when most viewers don’t have HD” is valid, it also should be said that the show wasn’t a “killer app” that drew people to HD, either. Maybe it’s more accurate to say that Tron as a property is not a hot seller for Disney, period. It’s profitable, but it doesn’t knock their socks off.

One pundit said that he’s confident Disney will return to Tron sometime in the future, just as it did with Legacy, but the way things are looking now, Disney wants to focus on its profitable properties. I can buy that, but I wonder if the challenge was the story. Olivia Wilde, the actress who played “Quorra” in Legacy, mentioned this in an April interview. Shooting for the sequel, for which the original cast was slated to return, was scheduled for October, yet they didn’t have a screenplay. They had plenty of time to come up with one; Disney hired a writer for this sequel a couple of years ago.

This has happened before. As I’ve talked about in previous posts, there was an attempt to make a Tron sequel back in 2003. It was supposed to be a combination release of a video game and a movie, called Tron 2.0. The video game came out for PCs, and later, game consoles. There was a clear, dramatic storyline in the game that jumped off from the characters, and a bit of the story, from the original Tron. The whole aesthetic of the video game was very nostalgic. A lot of the focus was on the subject of computer viruses, and various forms of malware, and some pretty interesting story lines about the main characters. I had to admit, though, that it took the focus off of what was really interesting about Tron, which was the philosophical and political arguments it made about what computing’s role should be in society. Steven Lisberger, who was driving the effort at Disney at the time, said that an idea he had was to talk about (I’m paraphrasing), “What is this thing called the internet? What should it represent?” He said, “It’s got to be something more than a glorified phone system!” Disney had developed some concept art for the movie. It looked like it might have a chance, but it was cancelled. Tron Legacy, which came out in 2010, was a worthy successor to the first movie in this regard, and I think that had something to do with it getting the green light. Someone had finally come up with something profound to say about computing’s role in society (I talk about this here). I think there’s more to this story than the market for live-action sci-fi movies from Disney. I think they haven’t found something for a sequel to communicate, and they were running up against their production deadline. I suspect that Lisberger and Kosinski did not want to rush out something that was unworthy of the title. Canceling it will give more time for someone down the road to do it right.

Several years ago, while I was taking in as much of Alan Kay’s philosophy as I could, I remember him saying that he wanted to see science be integrated into education. He felt it necessary to clarify this: he didn’t mean teaching everything the way people think of science–as experimental proofs of what’s true–but rather science in the sense of its Latin root, scientia, “knowledge,” from scire, “to know.” In other words, make practices of science the central operating principle of how students of all ages learn, with the objective of learning how to know, and situating knowledge within models of epistemology (how one knows what one knows). Back when I heard this, I didn’t have a good sense of what he meant, but I think I have a better sense now.

Kay has characterized the concept of knowledge that is taught in education, as we typically know it, as “memory.” Students are expected to take in facts and concepts which are delivered to them, and then their retention is tested. This is carried out in history and science curricula. In arithmetic, they are taught to remember methods of applying operations to compute numbers. In mathematics they are taught to memorize or understand rules for symbol manipulation, and then are asked to apply the rules properly. In rare instances, they’re tasked with understanding concepts, not just applying rules.

Edit 9/16/2015: I updated the paragraph below to flesh out some ideas, so as to minimize misunderstanding.

What I realized recently is that missing from this construction of education are the ideas of being skeptical and/or critical of one’s own knowledge, of venturing into the unknown, and of trying to make something known out of it, based on analysis of evidence, with the goal of achieving greater accuracy about what’s really there. Secondly, it also misses out on creating a practice of improving on notions of what is known, through practices of investigation and inquiry. These are qualities of science, but they’re not only applicable to what we think of as the sciences; they also apply to what we think of as non-scientific subjects, such as history, mathematics, and the arts, to name just a few. Instead, the focus is on transmitting what’s deemed to be known. There is scant practice in venturing into the unknown, or in improving on what’s known. After all, who made what is known, as far as a curriculum is concerned, but other people, who may or may not have done the job of analyzing what is known very well? This isn’t to say that students shouldn’t be exposed to notions of what is known, but I think they ought to also be taught to question it, be given the means and opportunity to experience what it’s like to try to improve on its accuracy, and realize its significance to other questions and issues. Furthermore, that effort on the part of the student must be open to scrutiny and rational, principled criticism by teachers and fellow students. I think it’d even be good to have professionals in the field brought into the process to do the same, once students reach some maturity. Knowledge comes not just through the effort to improve, but through arguments pro and con on that effort.

A second ingredient Kay has talked about in recent years is the need for outlooks. He said in a presentation at Kyoto University in 2009:

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that.

He has called outlooks “brainlets,” which have been developed over time for getting around our misperceptions, so we can see more clearly. One such outlook is science. A couple others are logic, and mathematics. And there are more.

The education system we have has some generality to it, but as a society we have put it to a very utilitarian task, and as I think is accurately reflected in the quote from Kay, by doing this we rob ourselves of the ability to gain important insights into our work, our worth, and our world. The sense I get about this perspective is that as a society, we use education better when we use it to develop how to think and perceive, not to develop utilitarian skills that apply in an A-to-B fashion to some future employment. This isn’t to say that skills used and needed in real-world jobs are unimportant. Quite the contrary, but really, academic school is no substitute for learning job skills on the job. Schools try in some ways to substitute for it, but I have not seen one that has succeeded.

What I’ll call “skills of mind” are different from “skills of work.” Both are important, and I have little doubt that the former can be useful to some employers, but the point is it’s useful to people as members of society, because outlooks can help people understand the industry they work in, the economy, society, and world they live in better than they can without them. I know, because I have experienced the contrast in perception between those who use powerful outlooks to understand societal issues, and those who don’t, who fumble into mishaps, never understanding why, always blaming outside forces for it. What pains me is that I know we are capable of great things, but in order to achieve them, we cannot just apply what seems like common sense to every issue we face. That results in sound and fury, signifying nothing. To achieve great things, we must be able to see better than the skills with which we were born and raised can afford us.

This question made me conscious of the fact that the icons that computer/smartphone and many web interfaces use are a metaphor for the way in which the industry has designed computers for a consumer market. That is, they are meant to digitize and simulate old media.

For example, the use of the now-obsolete floppy disk to represent “save”?

Which computer icons no longer make sense to a modern user?

The way this question is asked is interesting and encouraging. These icons no longer make sense to modern users. Another interesting question is: what should replace them? However, without powerful outlooks, I suspect it’s going to be difficult to come up with anything that really captures the power of this medium that is computing, and we’ll just use the default of ourselves as metaphors.
