
As this blog has progressed, I've moved farther away from the technical, and more towards a focus on the best that's been accomplished and the best that's been thought (that I can find and recognize), by people who have been attempting to advance, or have made some contribution to, what we can be as a society. I have also put a focus on what I see happening that is retrograde, which threatens the possibility of that advancement. I think it is important to point that out, because I've come to realize that the crucible that makes those advancements possible is fragile, and I want it to be protected, for whatever that's worth.

As I've had conversations with people about this subject, I've come to realize why I have such a strong desire to see us be a freer nation than we are. It's because I got to see a microcosm of what's possible within a nascent field of research and commercial development, with personal computing, and later the internet, where the advancements were breathtaking and exciting. They inspired my imagination to such a height that it really seemed like the sky was the limit, and gave me visions of what our society could be, most of them not that realistic, but so inspiring. It took place in a context of a significant amount of government-funded and private research, but at the same time, in a legal environment that was not heavily regulated. At the time the most exciting stuff was happening, it was too small and unintelligible for most people to take notice of it, and society largely thought it could get along without it, and did. It was ignored, so people in it were free to try all sorts of interesting things, to have interesting thoughts about what they were accomplishing, and for some to test those ideas out. It wasn't all good, and since then lots of ideas I wouldn't approve of have been tried as a result of this technology, but there is so much that's good in it as well, which I have benefited from so much over the years. I am so grateful that it exists, and that so many people had the freedom to try something great with it. This experience has proven to me that the same is possible in all human endeavors, if people are free to pursue them. Not all of the goals people would pursue would be things I think are good, but by the same token, so much would be possible that would be good, from which people of many walks of life would benefit.

Glenn Beck wrote a column that encapsulates this sentiment so well. I'm frankly surprised he thought to write it. Some years ago, I strongly criticized Beck for writing off the potential of government-funded research in information technology. His critique was part of what inspired me to write the blog series "A history lesson on government R&D." We come at this from different backgrounds, but he sums it up so well, I thought I'd share it. It's called The Internet: What America Can And Should Be Again. Please forgive the editing mistakes, of which there are many. His purpose is to talk about political visions for the United States, so he doesn't get into the history of who built the technology. That's not his point. He captures well what the people who developed the technology of the internet wanted to create in society: a free flow of information and ideas, a proliferation of services of all sorts, and a means by which people could freely act together and build communities, if they could manage it. The key word in this is "freedom." He makes the point that it is we who make good things happen on the internet, if we're on it, and by the same token it is we who can make good things happen in the non-digital sphere, and that this can and should happen mostly in the private sector.

I think of this technological development as a precious example of what the non-digital aspects of our society could be like. I don't mean the internet as a verbatim model of our society. I mean it as an example within a society that has law, which applies to people's actions on the internet, and that already has an ostensibly representative political system; an example of the kind of freedom we can have within that, if we allow ourselves to have it. We already allow it on the internet. Why not outside of it? Why can't speech and expression, and commercial enterprise, in the non-digital realm be like what they are in the digital realm, where a lot goes? Well, one possible reason our society likes the idea of the freewheeling internet, but not a freewheeling non-digital society, is that we can turn away from the internet. We can shut off our own access to it, and restrict access (e.g., parental controls). We can be selective about what we view on it. It's harder to do that in the non-digital world.

As Beck points out, we once had the freedom of the internet in a non-digital society, in the 19th century, and he presents some compelling, historically accurate examples. I understand he glosses over significant aspects of our history that were not glowing, where not everyone was free. In today's society, it's always dangerous to hearken back romantically to the 19th and early 20th centuries as "golden times," because someone is liable to point out that they were not so golden. His point is that people of many walks of life (who, let's admit it, were often white) had the freedom to take many risks of their own free will, even risks to their lives, and they took them anyway, and the country was better off for it. It's not to ignore others who didn't have freedom at the time. It's using history as an illustration of an idea for the future, understanding that we have made significant strides in how we view people who look different and come from different backgrounds, and what rights they have.

A framing Glenn Beck has used over the last 10 years is that history, as a barometer of progress, is not linear. Societal progress ebbs and flows. It has meandered in this country between freedom and oppression, with different depredations visited on different groups of people in different generations. They follow some patterns, and they repeat, but the affected people are different. The depredations of the past were pretty harsh. Today's are less so, but they exist nevertheless, and I think it's worth pointing them out and saying, "This is not representative of the freedom we value."

The arc of America had been towards greater freedom, on balance, from the time of its founding up until the 1930s. Since then, we've wavered between backtracking and moving forward. I think it's accurate to say that we've gradually lost faith in freedom over the last 13 years, and recently that loss of faith has become acute. Every time I look at people's attitudes about it, they're often afraid of freedom, thinking it will only allow the worst of humanity to roam free, and to lay waste to what everyone else has hoped for. Yes, some bad things will inevitably happen in such an environment. Bad stuff happens on the internet every day. Does that mean we should ban it, control it, contain it? If you don't allow the opportunity that enables bad things to happen, you will not get good things, either. I'm all in favor of prosecuting the guilty who hurt others, but if we're always in a preventative mode, we prevent that which could make our society so much better. You can't have one without the other. It's like trying to have your cake and eat it, too. It doesn't work. If we're so afraid of the depredations of our fellow citizens, then we don't deserve the wonderful things they might bring, and that fact is being borne out in lost opportunities.

We have an example in our midst of what's possible. Take the idea of it seriously, and consider how deprived your present existence would be if it hadn't been allowed to become what it is now. Take its principles, and consider widening the sphere of the system that allows all that it brings, beyond the digital, into our non-digital lives, and what wonderful things that could bring to the lives of so many.

Christina Engelbart has written an excellent summary of her late father’s (Doug Engelbart’s) ideas, and has outlined what’s missing from digital media.

From Collective IQ Review:

Interactive computing pioneer Doug Engelbart coined the term Collective IQ to inform the IT research agenda

Doug Engelbart was recognized as a great pioneer of interactive computing. Most of his dozens of prestigious awards cited his invention of the mouse. But in his mind, the true promise of interactive computing and the digital revolution was a larger strategic vision of using the technology to facilitate society’s evolution. His research agenda focused on augmenting human intellect, boosting our Collective IQ, enhancing human effectiveness at addressing our toughest challenges in business and society – a strategy I have come to call Bootstrapping Brilliance.

In his mind, interactive computing was about interacting with computers, and more importantly, interacting with the people and with the knowledge needed to quickly and intelligently identify problems and opportunities, research the issues and options, develop responses and solutions, integrate learnings, iterate rapidly. It’s about the people, knowledge, and tools interacting, how we intelligently leverage that, that provides the brilliant outcomes we…


Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, when I was attending the Rebooting Computing Summit at the Computer History Museum in Mountain View, CA. The people running the exhibit said that it had been commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM and would soon reclaim it to put in his house. They thought that might happen in April of that year. A couple of years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!

I was taken with this interview on Reason.tv with Mr. O’Neill, a writer for Spiked, because he touched on so many topics that are pressing in the West. His critique of what’s motivating current anti-modern attitudes, and what they should remind us of, is so prescient that I thought it deserved a mention. He is a Brit, so his terminology will be rather confusing to American ears.

He called what's termed "political correctness" "conservative." I've heard this critique before, and it's interesting, because it looks at group behavior from a principled standpoint, not just according to common parlance. A lot of people won't understand this, because what we call "conservative" now is in opposition to political correctness, and in principle would be called something approaching "liberal" (as in "classical liberal"). I've talked about this with people from the UK before, and it goes back to that old saying that the United States and England are two countries separated by a common language. What we call "liberal" now, in common parlance, would be called "conservative" in their country. It's the idea of maintaining the status quo, or even the status quo ante; of shutting out, even shutting down, any new ideas, especially anything controversial. It's a behavior that goes along with "consolidating gains," one that is averse to anything that would upset the applecart.

O’Neill’s most powerful argument is in regards to environmentalism. He doesn’t like it, calling it an “apology for poverty,” a justification for preventing the rest of the world from developing as the West did. He notes that it conveniently avoids the charge of racism, because it’s able to point to an amorphous threat, justified by “science,” that inoculates the campaign from such charges.

The plot thickens when O'Neill talks about himself, because he calls himself a "Marxist/libertarian." He "unpacks" that, explaining that he means "the early Marx and Engels," who, he says, talked about freeing people from poverty, and from state diktat. He summed it up by quoting Trotsky: "We have to increase the power of man over Nature, and decrease the power of man over man." He also used the term "progressive," but Nick Gillespie explained that what O'Neill calls "progressive" is often what we would call "libertarian" in America. I don't know what to make of him, but I found myself agreeing a lot with what he said, in this interview at least. He and I see much the same things going on, and I think he accurately voices why I oppose what I see as anti-modern sentiment in the West.

Edit 1/11/2016: Here's a talk O'Neill gave with Nick Cater of the Centre for Independent Studies, called "Age of Endarkenment," where they contrast Enlightenment thought with what concerns "the elect" today. What he points out is that the conflict between those who want ideas of progress to flourish and those who want to suppress societal progress has happened before. It happened pre-Enlightenment and during the Enlightenment, and it will sound a bit familiar.

I’m going to quote a part of what he said, because I think it cuts to the chase of what this is really about. He echoes what I’ve learned as I’ve gotten older:

Now what we have is the ever-increasing encroachment of the state onto every aspect of our lives: How well we are, what our physical bodies are like, what we eat, what we drink, whether we smoke, where we can smoke, and even what we think, and what we can say. The Enlightenment was really, as Kant and others said, about encouraging people to take responsibility for their lives, and to grow up. Kant says all these “guardians” have made it seem extremely dangerous to be mature, and to be in control of your life. They’ve constantly told you that it’s extremely dangerous to run your own life. And he says you’ve got to ignore them, and you’ve got to dare to know. You’ve got to break free. That’s exactly what we’ve got to say now, because we have the return of these “guardians,” although they’re no longer kind of religious pointy-hatted people, but instead a kind of chattering class, and Greens, and nanny-staters, but they are the return of these “guardians” who are convincing us that it is extremely dangerous to live your life without expert guidance, without super-nannies telling you how to raise your children, without food experts telling you what to eat, without anti-smoking campaigners telling you what’s happening to your lungs. I think we need to follow Kant’s advice, and tell these guardians to go away, and to break free of that kind of state interference.

And one important point that [John Stuart] Mill makes in relation to all this is that even if people are a bit stupid, and make the wrong decisions when they’re running their life, he said even that is preferable to them being told what to do by the state or by experts. And the reason he says that’s preferable is because through doing that they use their moral muscles. They make a decision, they make a choice, and they learn from it. And in fact Mill says very explicitly that the only way you can become a properly responsible citizen, a morally responsible citizen, is by having freedom of choice, because it’s through that process, through the process of making a choice about your life that you can take responsibility for your life. He says if someone else is telling you how to live and how to think, and what to do, then you’re no better than an ape who’s following instructions. Spinoza makes the same point. He says you’re no better than a beast if you’re told what to think, and told what to say. And the only way you can become a man, or a woman these days as well–they have to be included, is if you are allowed to think for yourself to determine what your thought process should be, how you should live, and so on. So I think the irony of today, really Nick, is that we have these states who think they are making us more responsible by telling us not to do this, and not to do that, but in fact they’re robbing us of the ability to become responsible citizens. Because the only way you can become a responsible citizen is by being free, and by making a choice, and by using your moral muscles to decide what your life’s path should be.

This has been on my mind for a while, since I had a brush with it. I've been using Jungle Disk cloud-based backup since about 2007, and I've been pretty satisfied with it. I had to take my laptop "into the shop" early this year, and I used another laptop while mine was being repaired. I had the thought of getting a few things I was working on from my cloud backup, so I tried setting up the Jungle Disk client. I was dismayed to learn that I couldn't get access to my backed-up files, because I didn't have my Amazon S3 Access Key. I remembered going through this before, and being able to recover my key from Amazon's cloud service after giving Amazon my sign-in credentials. I couldn't find the option for it this time. After doing some research online, I found out they stopped offering that. So, if you don't have your access key and you want your data back, you are SOL. You can't get any of it back, period, even if you give Amazon's cloud services your correct sign-on credentials. I also read stories from very disappointed customers who had a computer crash, didn't have their key, and had no means to recover their data from the backup they had been paying for, for years. This is an issue that Jungle Disk should have notified customers about a while ago. It's disconcerting that they haven't done so, but since I know about it now, and my situation was not catastrophic, it's not enough to make me want to stop using the service. The price can't be beat, but this is a case where you get what you pay for as well.

My advice: write down your Amazon S3 Access Key on a piece of paper, or in some digital note that you know is secure and recoverable if something bad happens to your device. Do. It. NOW! You can find it by bringing up Jungle Disk's Activity Monitor, and then going to its Desktop Configuration screen. Look under Application Settings, and then Jungle Disk Account Information. It's a 20-character, alphanumeric code. Once you have it, you're good to go.

Edit 12/23/2015: I forgot to mention that to re-establish a connection with your backup account, you also need what's called a Secret Key. You set this up when you first set up Jungle Disk, and you should keep it with your S3 Access Key. From my research, though, the most essential thing for re-establishing the connection with your backup account is keeping a copy of your S3 Access Key. The Secret Key is important, but you can generate a new one if you don't know what it is; Amazon no longer reveals your Secret Key. The article "Where's My Secret Access Key?" talks about this. It sounds relatively painless to generate a new one, so I assume you don't lose access to your backup files by doing it. Amazon's system limits you to two access keys (each with its own Secret Key) at a time. You can generate more than two over the life of your account, but you have to delete one of the old ones first if you reach this limit. The article explains how to do that.
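For those comfortable with scripting, here is a minimal sketch of what that key rotation might look like, using the AWS SDK for Python (boto3). This is my own illustration, not anything from Jungle Disk's or Amazon's documentation; it assumes your backup is tied to an IAM user (I've called it "jungledisk-backup," a hypothetical name), and that you still have working credentials to run it with.

```python
import boto3

iam = boto3.client("iam")
USER = "jungledisk-backup"  # hypothetical IAM user name; substitute your own

# AWS allows at most two access keys per user, so check what already exists.
existing = iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]

# If you're at the two-key limit, delete the oldest one first
# (make sure it isn't the key your backup client is still configured with).
if len(existing) >= 2:
    oldest = sorted(existing, key=lambda k: k["CreateDate"])[0]
    iam.delete_access_key(UserName=USER, AccessKeyId=oldest["AccessKeyId"])

# Create a replacement key pair. The Secret Access Key appears only in this
# one response, so write both values down somewhere safe immediately.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("Access Key ID:    ", new_key["AccessKeyId"])
print("Secret Access Key:", new_key["SecretAccessKey"])
```

Treat this as a sketch of the idea. The point is the same one the article makes: the secret is shown once, at creation time, and after that your only recourse is to make a new one.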

I'm really late with this, because it came out in late May, when I was super busy with a trip I was planning. I totally missed the announcement until I happened upon it recently. In past posts I made a mention or two of Disney working on a sequel to Tron: Legacy. Well, they announced that it isn't happening. It's been cancelled. The official announcement said that the 2017 movie release schedule was just too full of other live-action films Disney is planning. "RaginRonin" on YouTube gave what I think is the best synopsis of movie industry pundits' analysis. It sounds like it comes down to one thing: Disney is averse to live-action films that don't relate to the two properties that have been successful for it: Pirates of the Caribbean, and anything related to its classic fairy tale franchise. Other than that, they want to focus on their core competency, which is animation.

All pundit analysis focused on one movie: Tomorrowland. It was a Disney live-action sci-fi movie that flopped. They figured Disney took one look at that and said, “We can’t do another one of those.”

One thing contradicts this, though, and I’m a bit surprised no one I’ve heard so far picked up on this: Disney is coming out with a live-action sci-fi film soon. It’s called Star Wars: The Force Awakens… though it is being done through its Lucasfilm division, and it’s their staff doing it, not Disney’s. Maybe they think that will make a difference in their live-action fortunes. Disney paid a lot of money for Lucasfilm, and so of course they want it to produce. They want another series. No, more than one series!

Like the first Tron film in 1982, Legacy did well at the box office, but not well enough to wow Disney. Apparently they were expecting a billion-dollar film in domestic and international ticket sales, and it grossed $400 million instead. Secondly, Tron: Uprising, the animated TV series produced for Disney XD, which got some critical acclaim, did not do well enough for Disney's taste, and was cancelled after one season. Though I think the criticism that "of course it didn't do well, since they put it on an HD channel when most viewers don't have HD" is valid, it also should be said that it wasn't a "killer app" that drew people to HD, either. Maybe it's more accurate to say that Tron as a property is not a hot seller for Disney, period. It's profitable, but it doesn't knock their socks off.

One pundit said he's confident Disney will return to Tron sometime in the future, just as it did with Legacy, but the way things look now, Disney wants to focus on its profitable properties. I can buy that, but I wonder if the challenge was the story. Olivia Wilde, the actress who played "Quorra" in Legacy, mentioned this in an April interview. Shooting for the sequel, for which the original cast was slated, was scheduled for October, yet they didn't have a screenplay. They had plenty of time to come up with one; Disney hired a writer for this sequel a couple of years ago.

This has happened before. As I've talked about in previous posts, there was an attempt to make a Tron sequel back in 2003. It was supposed to be a combination release of a video game and a movie, called Tron 2.0. The video game came out for PCs, and later, game consoles. There was a clear, dramatic storyline in the game that jumped off from the characters, and a bit of the story, from the original Tron. The whole aesthetic of the video game was very nostalgic. A lot of the focus was on the subject of computer viruses, and various forms of malware, and some pretty interesting story lines about the main characters. I had to admit, though, that it took the focus off of what was really interesting about Tron, which was the philosophical and political arguments it made about what computing's role should be in society. Steven Lisberger, who was driving the effort at Disney at the time, said that an idea he had was to talk about (I'm paraphrasing), "What is this thing called the internet? What should it represent?" He said, "It's got to be something more than a glorified phone system!" Disney had developed some concept art for the movie. It looked like it might have a chance, but it was cancelled.

Tron Legacy, which came out in 2010, was a worthy successor to the first movie in this regard, and I think that had something to do with it getting the green light. Someone had finally come up with something profound to say about computing's role in society (I talk about this here). I think there's more to this story than the market for live-action sci-fi movies from Disney. I think they haven't found something for a sequel to communicate, and they were running up against their production deadline. I suspect that Lisberger and Kosinski did not want to rush out something that was unworthy of the title. Canceling it will give more time for someone down the road to do it right.

Several years ago, while I was taking in as much of Alan Kay's philosophy as I could, I remember him saying that he wanted to see science integrated into education. He felt it necessary to clarify this: he didn't mean teaching everything the way people think of science, as experimental proofs of what's true, but rather science in the sense of its root word, scientia, meaning "to know." In other words, make the practices of science the central operating principle of how students of all ages learn, with the objective of learning how to know, and situating knowledge within models of epistemology (how one knows what one knows). Back when I heard this, I didn't have a good sense of what he meant, but I think I have a better sense now.

Kay has characterized the concept of knowledge that is taught in education, as we typically know it, as “memory.” Students are expected to take in facts and concepts which are delivered to them, and then their retention is tested. This is carried out in history and science curricula. In arithmetic, they are taught to remember methods of applying operations to compute numbers. In mathematics they are taught to memorize or understand rules for symbol manipulation, and then are asked to apply the rules properly. In rare instances, they’re tasked with understanding concepts, not just applying rules.

Edit 9/16/2015: I updated the paragraph below to flesh out some ideas, so as to minimize misunderstanding.

What I realized recently is that what's missing from this construction of education is the practice of being skeptical and/or critical of one's own knowledge, of venturing into the unknown and trying to make something known out of it, based on analysis of evidence, with the goal of achieving greater accuracy about what's really there. Secondly, it also misses out on creating a practice of improving on notions of what is known, through investigation and inquiry. These are qualities of science, but they're not only applicable to what we think of as the sciences; they also apply to what we think of as non-scientific subjects, such as history, mathematics, and the arts, to name just a few. Instead, the focus is on transmitting what's deemed to be known. There is scant practice in venturing into the unknown, or in improving on what's known. After all, who made what is "known," as far as a curriculum is concerned, but other people, who may or may not have done a good job of analyzing it. This isn't to say that students shouldn't be exposed to notions of what is known, but I think they ought to also be taught to question it, be given the means and opportunity to experience what it's like to try to improve on its accuracy, and realize its significance to other questions and issues. Furthermore, that effort on the part of the student must be open to scrutiny and rational, principled criticism by teachers and fellow students. I think it would even be good to bring professionals in the field into the process to do the same, once students reach some maturity. Knowledge comes not just through the effort to improve, but through arguments pro and con on that effort.

A second ingredient Kay has talked about in recent years is the need for outlooks. He said in a presentation at Kyoto University in 2009:

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that.

He has called outlooks “brainlets,” which have been developed over time for getting around our misperceptions, so we can see more clearly. One such outlook is science. A couple others are logic, and mathematics. And there are more.

The education system we have has some generality to it, but as a society we have put it to a very utilitarian task, and as I think is accurately reflected in the quote from Kay, by doing this we rob ourselves of the ability to gain important insights into our work, our worth, and our world. The sense I get from this perspective is that as a society, we use education better when we use it to develop how to think and perceive, not to develop utilitarian skills that apply in an A-to-B fashion to some future employment. This isn't to say that the skills used and needed in real-world jobs are unimportant. Quite the contrary, but really, academic schooling is no substitute for learning job skills on the job. Schools try in some ways to substitute for it, but I have not seen one that has succeeded.

What I'll call "skills of mind" are different from "skills of work." Both are important, and I have little doubt that the former can be useful to some employers, but the point is that they're useful to people as members of society, because outlooks can help people understand the industry they work in, and the economy, society, and world they live in, better than they could without them. I know, because I have experienced the contrast in perception between those who use powerful outlooks to understand societal issues, and those who don't, who fumble into mishaps, never understanding why, always blaming outside forces for it. What pains me is that I know we are capable of great things, but in order to achieve them, we cannot just apply what seems like common sense to every issue we face. That results in sound and fury, signifying nothing. To achieve great things, we must be able to see better than the skills with which we were born and raised can afford us.

Which computer icons no longer make sense to a modern user?

For example, the use of the now-obsolete floppy disk to represent "save"?

This question made me conscious of the fact that the icons computer, smartphone, and many web interfaces use are a metaphor for the way in which the industry has designed computers for a consumer market. That is, they are to be used to digitize and simulate old media.

The way this question is asked is interesting and encouraging: these icons no longer make sense to modern users. Another interesting question is what should replace them. However, without powerful outlooks, I suspect it's going to be difficult to come up with anything that really captures the power of this medium that is computing, and we'll just default to using ourselves as metaphors.

In my time on Quora.com, I’ve answered a bunch of questions on object-oriented programming (OOP). In giving my answers, I’ve tried my best to hew as closely as I can to what I’ve understood of Alan Kay’s older notion of OOP from Smalltalk. The theme of most of them is, “What you think is OOP is not OOP.” I did this partly because it’s what’s most familiar to me, and partly because writing about it helped me think clearer thoughts about this older notion. I hoped that other Quora readers would be jostled out of their sense of complacency about OO architecture, and it seems I succeeded in doing that with a few of them. I’ve had the thought recently that I should share some of my answers on this topic with readers on here, since they are more specific descriptions than what I’ve shared previously. I’ve linked to these answers below (they are titled as the questions to which I responded).

What is Alan Kay’s definition of object-oriented? (Quora member Andrea Ferro had a good answer to this as well.)

A big thing I realized while writing this answer is that Kay’s notion of OOP doesn’t really have to do with a specific programming language, or a programming “paradigm.” As I said in my answer, it’s a method of system organization. One can use an object-oriented methodology in setting up a server farm. It’s just that Kay has used the same idea in isolating and localizing code, and setting up a system of communication within an operating system.

Another idea that had been creeping into my brain as I answered questions on OOP is that his notion of interfaces is really an abstraction. Interfaces were the true types in his message-passing notion of OOP, and interfaces can and should span classes. So, types were supposed to span classes as well! The image that came to mind is that interfaces can be thought of as sitting between communicating objects. Messages, in a sense, pass through them to dispatch logic in the receiving object, which then determines what actual functionality is executed as a result. Even the concept of behavior is an abstraction, a step removed from classes, because the whole idea of interfaces is that you can replace the class that carries out a behavior with a completely new implementation (supporting the same interface), and the expected behavior will be exactly the same. In one of my answers I said that objects in OOP "emulate" behavior. That word choice was deliberate.

This finally made more sense to me than it ever had before: why Kay said that OOP is about what goes on between objects, not what goes on within the objects themselves (which are just endpoints for messages). The abstraction is interstitial; it lives in the messaging that passes between objects, and in the interfaces.

This is a conceptual description. The way interfaces were implemented in Smalltalk was as collections of methods in classes. They were strictly a "gentleman's agreement," both in terms of the messages to which they matched, and their behavior. They did not have any sort of type identifiers, except for programmers recognizing a collection of method headers (and their concomitant behaviors) as an interface.
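To make that concrete, here's a rough sketch in Python rather than Smalltalk (the class and message names are mine, invented for illustration, not taken from any of the answers below). Two unrelated classes honor the same small "store" protocol without sharing a base class or any declared type, and the sender dispatches by message name at runtime, so either object can stand in for the other as long as it emulates the expected behavior.

```python
class DiskStore:
    """One implementation of an informal 'store' protocol (put/get)."""
    def put(self, key, value):
        print(f"writing {key!r} to disk")

    def get(self, key):
        return f"<contents of {key!r} read from disk>"


class MemoryStore:
    """A completely different implementation honoring the same messages."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def send(receiver, message, *args):
    """A late-bound message send: the receiver alone decides what code runs,
    loosely analogous to a Smalltalk message send."""
    return getattr(receiver, message)(*args)


# The interface (put/get) sits between the sender and the receivers.
# It isn't declared anywhere; it's a gentleman's agreement both classes honor.
for store in (DiskStore(), MemoryStore()):
    send(store, "put", "notes", "remember the milk")
    print(send(store, "get", "notes"))
```

The point of the sketch is that the "type" here is the agreement itself, not either class, which is the sense in which interfaces span classes.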

Are utility classes good OO design?

Object-Oriented Programming: In layman’s terms, what is the difference between abstraction, encapsulation and information hiding?

What is the philosophical genesis of object-oriented programming methodology?

Why are C# and Java not considered as Object Oriented Language compared to the original concept by Alan Kay?

The above answer also applies to C++.

What does Alan Kay mean when he said: “OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP”?

What does Darwin’s theory of evolution bring to object oriented programming methodology?

This last answer gets into what I think are some really interesting thoughts that one can have about OOP. I posted a video in my answer from a presentation by philosopher Daniel Dennett on "Free Will, Determinism and Evolution." Dennett takes an interesting approach to philosophy. He seems almost like a scientist, but not quite. In this presentation, he uses Conway's Game of Life to illustrate a point about free will in complex, very, very, very large scale systems. He proposes a theory that determinism is necessary for free will (not contrary to it), and that as systems survive, evolve, and grow, free will becomes more and more possible, whereby multiple systems that are given the same inputs will make different decisions (once they reach a structured complexity that makes it possible for them to make what could be called "decisions"). I got the distinct impression after watching this that this idea has implications for where object-orientation can go in the direction of artificial intelligence. It's hard for me to say what this would require of objects, though.
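As an aside, the deterministic substrate Dennett builds on is easy to see in miniature. Here's a tiny toy version of Conway's Game of Life (my own sketch, not code from his talk): the rules are fixed and purely local, so the same starting pattern always unfolds the same way, every time you run it.

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.
    'live' is a set of (x, y) coordinates of live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 neighbors, or with 2 if it
    # was already alive. Same input, same output, without exception.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A "glider" pattern: run this from the same seed as many times as you like,
# and the history of generations is identical every time.
generation = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    generation = step(generation)
    print(sorted(generation))
```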

—Mark Miller, https://tekkie.wordpress.com

Peter Foster, a columnist for the National Post in Canada, wrote what I think is a very insightful piece on a question that's bedeviled me for many years, in "Why Climate Change is a Moral Crusade in Search of a Scientific Theory." I have never seen a piece of such quality published on Breitbart.com. My compliments to them. It has its problems, but there is some excellent stuff to chew on, if you can ignore the gristle. The only ways I think I would have tried to improve on what he wrote are, one, to go deeper into identifying the philosophies that are used to "justify the elephantine motivations." As it is, Foster uses readily identifiable political labels, "liberal," "left," etc., as identifiers. This will please some, and piss off others. I really admired Allan Bloom's efforts to get beyond these labels: to understand and identify the philosophies, the mistakes he perceived were made in "translating" them into an American belief system (which were at the root of his complaints), the philosophers who came up with them, and the consequences that have been realized through their adherents. There's much to explore there.

Two, Foster "trips" over an analogy that doesn't really apply to his argument (though he thinks it does) in his reference to supposed motivations for thinking Earth was the center of the universe, though aspects of the stubbornness of pre-Copernican thinking, to which he also refers, do apply.

He says a lot in this piece, and it’s difficult to fully grasp it without taking some time to think about what he’s saying. I will try to make your thinking a little easier.

He begins by trying to understand the reasons for, and the motivational consequences of, economic illiteracy in our society. He uses a notion from evolutionary psychology (perhaps from David Henderson, to whom he refers): that our brains have been shaped in part by hundreds of thousands of years (perhaps millions; who knows how far back it goes) of tribal society, so that our natural perceptions of human relations, regarding power and wealth, and what is owed as a consequence of social status, are influenced by our evolutionary past.

Here is a brief video from Reason TV on the field of evolutionary psychology, just to get some background.

Edit 2/16/2016: I’ve added 3 more paragraphs relating to another video I’m adding, since it relates more specifically to this topic.

The video below is intriguing, but I can't help but wonder if the two researchers, Cosmides and Tooby, are unjustifiably reading current issues they're hearing about in political discourse into "stone age thinking," because how do we know what stone-age thinking was? I have to admit, I have no background in anthropology or archaeology at this point; I might need that to give more weight to this inference. The topic they discuss here, a common misunderstanding of market economics, relates back to something they discussed in the above video: humans trying to detect and form coalitions, and how market mechanisms have the effect of appearing to interfere with coalition-building strategies. They say this leads to resentment against the market system.

What this would seem to suggest is that the idea that humans are drastically changing our planet's climate system for the worse is a nice salve for that desire for coalition building, because it leads to a much larger inference: that market economics (the perceived enemy of coalition strategies) is a problem that transcends national boundaries. The constant mantra of warmists that "we must act now to solve it" appears to demand a coalition, which to those who feel disconnected by markets feels very desirable.

One of the most frequent desires I've heard from those who believe we are changing our climate for the worse is that they only want to deal with market participants "who care about me and my community." What Cosmides and Tooby say is that this relates back to our innate desire to build coalitions, and is evidence that these people feel the market system is interfering with, or not cooperating in, that process. What they say, as Foster does, is that this reflects a lack of understanding of market economics, and a total ignorance of the benefits its effects bring to humanity.

Foster says that our modern political and economic system, which frustrates tribalism, has existed for only a brief blink of an eye in our evolutionary experience, by comparison. So we still carry an evolutionary heritage that forms our perceptions of fairness and social survival, and it emerges naturally in our perceptions of our modern systems. He says this argument is controversial, but he justifies using it by saying that there is an apparent pattern to the consequences of economic illiteracy. He notices a certain consistency in the arguments used to morally challenge our modern systems, one that does not seem to depend on how skilled people are in their niche areas of knowledge, or how unskilled they are in many other areas. It's important to keep what he says about this in mind throughout the article, even though he goes on at length into other areas of research, because it ties in in an important way, though he does not refer back to it.

He doesn't say this, but I will. At this point, I think that the only counter to the natural tendencies we have (regardless of whether Foster describes them accurately or not) is an education in the full panoply of the outlooks that formed our modern society, and an understanding of how they have advanced since their formation. In our recent economy, there's a tendency to think about narrowing the scope of education towards specialties, since each field is so complex and takes a long time to master, but that will not do. If people don't get the opportunity to explore, or at least experience, these powerful ways of understanding the world, then the natural tendency towards a "low-pass filter" will dominate what one sees, and our society will have difficulty advancing, and may regress. A key phrase Foster uses is, "Believing is seeing." We think we see with our eyes, but we actually see with our beliefs (or, in scientific terms, our mental models). So it is crucial to be conscious of our beliefs, and to be capable of examining and questioning them. Philosophy plays a part in "exercising" and developing this ability, but I think this really gets into the motivation to understand the scientific outlook, because this is truly what it's about.

A second significant area Foster explores is a notion from moral psychology, from Jonathan Haidt, who talks about "subconscious elephants" that are "driving us," unseen. We justify our actions using moral language, giving the illusion that we understand our motivations, but we really don't. Our pronouncements are more like PR statements, convincing others that our motivations are good and should be allowed to move forward, unrestricted. However, without examining and understanding our motivations, and their consequences, we can't really know whether they are good or not. Understanding this, we should be cautious about giving anyone too much power (power to use money, and power to use force), especially when they appeal to our moral sensibility to give it to them.

Central to Foster's argument is that "climate change" is a moral crusade, a moral argument rather than a scientific one, which uses the authority our society gives to science to push aside skepticism and caution regarding the actions taken in its name, and the technical premises that motivate them. Foster excuses the people who promote the hypothesis of anthropogenic global warming (AGW) as fact, saying they are not frauds who are consciously deceiving the public. They are riding pernicious, very large "elephants," and they are not conscious of what is driving them. They are convinced of their own moral rightness, and they are honest, at least, in that belief. That should not, however, excuse their demands for more and more power.

I do not mean what I say here to be a summary of Foster’s article. I encourage you to read it. I only mean to make the complexity of what he said a bit more understandable.

Related posts: Foster’s argument has made me re-examine a bit what I said in the section on Carl Sagan in “The dangerous brew of politics, religion, technology, and the good name of science.”

I’m taking a flying leap with this, but I have a suspicion my post called “What a waste it is to lose one’s mind,” exploring what Ayn Rand said in her novel, “Atlas Shrugged,” perhaps gets into describing Haidt’s “motivational elephants.”
