The result of capitalizing on peak envy

I consider this a worthy follow-up to my post, Our political pathology and its consequences, because it looks at a related, pernicious phenomenon that is spreading through organizations of all stripes. The most important thing to understand about this is that it’s a deliberate strategy of organizational takeover. As the people in the video below discussed, it sounds unbelievable to the uninitiated, but it’s true. If you pay enough attention to what is happening to organizations that are being hit with this, you can see it.

This is from Aug. 9, 2019:

What James Lindsay and Peter Boghossian described is a takeover of administrative positions, using the cudgel of some kind of “justice.” What ensues is the attraction of followers who believe in this particular ideology, and then the “middle” of the organization, which it needs in order to survive, leaves, because its members find the environment uninteresting, and threatening. Then comes either organizational death, or the organization becomes a client of some wealthy funder that keeps it on life support, and pays the salaries of the people who feed on the carcass, and continue spreading this ideology elsewhere.

The people who promote this have found a formula that works to achieve their aims, and they use it over and over again. What seems to feed it is a) envy of the target organization, and b) the target’s lack of faith in itself. All it needs is some shoving, and it collapses from within. As these people discuss in the above video, this dynamic creates social isolation, though the dream of those who are pursuing this strategy is to create a unified vision in all of the world’s inhabitants. They are utopians. If no one disagrees about anything, the thinking goes, conflict will end, which will have manifold social benefits. Their strategy is to destroy everything that is not in line with this ideology of “equity,” so it will be the only choice left to fulfill people’s needs and desires.

As one can surmise, individual thought and action are sacrificed. Reason, logic, truth, beauty, and all that goes with them, are also sacrificed; no philosophy, no religion, no art, no mathematics, no science, no engineering, with one carve-out: the only moral imperative, the only belief allowed, is in malignant indiscriminate neutrality. Applying reason or logic, in any form, in any forum, is to be deemed a hate crime. What’s often added to this is a carve-out for a single identity group: the privileged identity becomes the only thing that can’t be criticized. This is an incarnation of identity politics, typically found in organizations colonized by the ideology of intersectionality.

This is being phased in over time. The idea is to eventually dissolve all faith in any system of belief, save this notion of “equity.”

What is being pursued is not achievable. No utopian dream is, though as the panel in the video describes, valuable institutions for our society are being destroyed in the pursuit.

I have sometimes wondered if the reason they are being destroyed is precisely because they are already weak. If this form of destruction did not come for them, then some other would. All of it is avoidable. All I think it really requires is clarity about what one believes, and why one believes it, and being stubborn enough to say no to what opposes it, and possibly to “die on that hill.” Part of what this means is being able to categorize things accurately, and insisting on arguing within strong contexts, which means understanding and valuing those contexts for what they achieve. What I see, though, is that this knowledge is very isolated. At best, it exists in the single digits, percentage-wise, in our population.

The following is from Tim Pool, from August 28, 2019, talking about the cultish nature of this:

Edit 6/22/2020: I decided to add in this video, a conversation with Karlyn Borysenko and Kari Smith, because these are two women who were part of this ideology for many years, and found their way out of it. I thought it was a good talk. If you don’t have time for this, there’s a good (much shorter) excerpt of the same conversation at A former social justice warrior explains why social justice is a toxic, evil ideology. It’s from June 10, 2020.

The following is a conversation between Benjamin Boyce and Simon Chen from 2019. Chen and his parents experienced the Cultural Revolution in China. He sees a very similar pattern of social behavior emerging here.

The following video is from Benjamin Boyce, from May 9, 2018.

The part at the beginning is not what concerns me, but rather the former professor’s message in response to his firing over it. Though it is delivered in the context of a university’s mission, what this professor speaks to is the heart of what is being grappled with here: Civilizational suicide. It has happened before, particularly in the first half of the 20th century, in certain countries in Europe. It can happen here. The only thing that’s missing from this picture is a viable military invader, or paramilitary force, to complete the deed, and effect a military takeover, leading either to dictatorship, or a puppet government. Though, perhaps I’m missing the potential military battlefield.

Our political pathology and its consequences

This is a post I’ve been sitting on for a couple of years. The main reason is that the source I use, a YouTuber named Aaron Clarey, makes a prognosis for our society that I wasn’t too sure about, and that I don’t like very much. I think we have enough evidence, though, that it’s accurate for the foreseeable future. So, I’m going ahead with it.

This is something I’d been trying to put my finger on for a long time. I feel as though the quality of our political dialogue has fallen off a cliff in the last 14 or so years. What I mean is that it’s largely vacuous, cynical, and spiteful, and I can remember a time when it wasn’t like this. I’ve feared that this decline in our dialogue will lead to our society repeating, inside the United States, the horrors that have occurred in authoritarian regimes; not necessarily in racial terms (though that could happen), but the same horrors nonetheless. I’ve seen some of the symptoms of this pathology, but I hadn’t put the pieces together. The following videos, put together by Aaron Clarey in 2010, seem to paint a coherent picture of this. They cut a bit below the surface, to get at the inner lives of the people who influence our politics, and set government and corporate policies. It is just a hypothesis, but it matches the symptoms I’ve seen. It is not a clinical study of these influencers. I nevertheless find it compelling.

The topic is “crusaderism.” Clarey’s introduction was a bit disingenuous, because he said he was going to talk about the problem, “and what can be done about it.” As I get to below, he doesn’t really talk about what can be done about it. I mean, he ticks off some remedies that I strongly agree with, but you can tell his attitude is, “You can do this, but it’s likely going to be for naught. So, don’t bother.” There is reason to believe he was right about that.

He mostly covered the political Left, but he said a similar dynamic has been happening on the Right. His criticism of the Right seemed to match what I’ve seen with people who say they are conservatives, but who can’t justify their positions, or articulate anything about policy.

The short of Clarey’s hypothesis was that there are people who come from wealthy families, and are not willing to struggle, to get their hands dirty, to make a contribution to society. They go to college and get “easy A” degrees, and when they come to realize they can’t add anything of worth to society with what they’ve learned, through some sort of productive occupation, they feel the need to validate their egos. So they enter politics. They look for a cause; it doesn’t matter what it is, and they latch onto it like a dog with a bone. They advocate on behalf of the cause, but it’s really advocacy for themselves. They don’t care about the cause in the sense of trying to help society ameliorate a social or technical problem; they don’t believe any of what they’re saying in furtherance of it. The cause is a proxy for their life’s meaning, not something truly moral, where they’d care whether their efforts resulted in some societal improvement. They advocate for it because being active in something that has an effect on society gives their lives some meaning. They find purpose in having power over other people, and they use a guise of helpfulness as a shield against the pushback they get. Clarey said they find validation for their egos by being destroyers of what other people have worked to build, because after all, it’s easier to destroy than to create, and it affects something beyond their home life. As I’ll get to later, I think this particular bit of analysis might skim over an underlying cause for “the destructive principle” that Clarey talked about.

His thesis is that this pathology has come to dominate our politics. One reason for this is that the activity is well-funded. In some cases, their “causes” serve as paid jobs for them, funded by non-profits, and organizations like the United Nations. In other cases, they’re “trust fund babies,” and create a “bonfire of their vanity,” using their own money to promote their “cause.”

Before I continue, I feel it necessary to put a couple disclaimers here:

First, please excuse the typographical errors in Aaron’s presentation.

Second, I don’t agree with a couple of his assertions:

In Part 6, he talked about stats on useful vs. useless college degrees, and claimed that most undergraduates are getting useless degrees. He justified this with a chart, showing degrees in some categories, like arts and humanities vs. engineering. In the stats I’ve looked at, the useful degrees students have been getting outnumber the useless ones (though this may depend on what he and I each consider “useful”). I was looking at these stats several years after he made this presentation, however, so they may have changed.

In Part 7, he made a statement about “raising taxes to pay for increased spending.” I would strongly encourage people to look into the analysis that economist Art Laffer did on tax revenue as a percentage of GDP. The results are surprising, and worth looking at.

I looked up the Bob Geldof quote Clarey used in the third video, “We must do something, even if it doesn’t work,” because I was sure readers would find it so nonsensical that they would want to know if he really said it. I found the quote, attributed to Geldof, in a 2014 Forbes.com article, “How Charity Can be Selfish: Father Sirico on bad almsgiving,” though the wording is a bit different:

We need to do something, even if it doesn’t work or help.

It’s from a collection of short documentary films called “PovertyCure.”

The subject matter that Sirico and Jerry Bowyer talked about in the article, when discussing Geldof, sounds like it would relate to William Easterly’s book, “The Tyranny of Experts,” where Easterly describes the dysfunctional nature of our foreign aid efforts.

A note about Al Gore, since Clarey talked about him. Gore also “tried” with respect to the internet. He sponsored a bill in the late ’80s that ultimately led to the broadband version of the internet that we all eventually came to enjoy. A forward-looking move, though admittedly, he did not work on building the broadband internet. He did not manage the process of building it out, or of actually installing it. Other people at the NSF, various government agencies, and at universities around the country did that, several years before the internet was privatized.

In the segment below, Clarey made some connections between the useless degrees people obtain, the non-productive careers they enter, and the government policies that get enacted, which dampen economic activity. This results in declining wealth in society. Nevertheless, people try to live beyond their means, because they’re not willing to accept a decline in their standard of living. This results in increasing private and public debt. He also explained that this is why we see the rent-seeking that political interests have instituted in government policy. It’s a way of not producing value for society, but getting an income, anyway. This is why, if a problem appears to be solved, these people find a new cause to champion. For they must have something to occupy themselves, and give their existence purpose. “The problem” can never really be solved. Otherwise, what would they do with their lives?

It could be easy to dismiss this part of his presentation, but I think if one looks at financial indicators on a societal level, his analysis stands up. There are some critical fiscal problems that are not being addressed at all. As this continues, there will be negative consequences to our economy. That is a sure thing. It’s just a matter of time.

Just a correction to what Clarey said at about 6:15 in the above video. He talked about average annual hours worked per capita, by country, and showed that in the U.S. we’d gone from about 2,000 down to “8,000.” What he meant was “about 1,800,” though by the chart he showed, I’d put it between 1,700 and 1,800.

We are living a lie. I agree with Clarey that this lie cannot continue, but he said that even as our economic system collapses in the future, it’s unlikely that our society will acknowledge the lie. He talked about an economic model he called “the carrion economy,” which I think is worth listening to and pondering; not in the sense of advocating for it, but of understanding what’s really happening. Like scavengers, rather than producing wealth, we increasingly feed on the “carcass,” until there’s nothing left. This is not what we should want, but it’s what we as a society have been creating.

This next part got to some hard things to face: What do we do about this? He presented some things you can do, which he called “the path of most resistance,” because if you do any of these things, you’re going to be swimming against the prevailing tide in our society. However, you can do these things knowing that you are at least promoting the well-being of the next generation. Whether the next generation ends up being better off than the previous one is another matter. These things cannot, in isolation, work to improve our standard of living.

The other hard part is facing the fact that the odds are against these efforts improving much of anything. So, he closed with what he recommended, which is “do nothing.” Don’t waste your time on a futile effort. Go Galt. He published a book called, “Enjoy The Decline,” where he talked about what’s coming, and what you can do to maximize your enjoyment of life while society grinds itself into the dirt. Of course, this sounds like a terribly immoral course of action. I am resistant to accepting it myself, but he considered it the most likely, and practical, course.

The reason is the irony of history: as a society grows wealthier, it creates the effect he described, in which the elite of society lose their virtue, because the social and economic conditions don’t demand that they maintain it in order to gain what they want. They don’t need to work to produce something of value, and so their motivations turn toward the useless, ignorant, and counter-productive. As a consequence of their lazy thought, they become resentful, and attack the very society that has given them all they have. I’ve heard this from multiple sources who are knowledgeable about history. In fact, one historian, Thomas West, reviewed the political theory the Founders had for the United States in his book, “The Political Theory of the American Founding.” He said that many of them pondered in their private writings, “What if our republic succeeds? What do we do then?”, because history had shown that too much wealth in a society leads to its downfall. This was one of their great unanswered questions. What they hoped for was that our society’s cultural institutions would keep us on the right track morally, maintaining our society’s essential character, but it was a vague hope. They really didn’t know what to do about it, since history provided no guidance, only repeated cycles of societal ascension and decline.

Many years ago, I sat through a presentation given by Mike Rosen, who presented a lot of evidence from government statistics, and he concluded the same thing as Clarey did: Our entitlement programs are unsustainable, and they are going to collapse, but not before they collapse our economy. Instead of making them sustainable, which would require some measures the majority of us are not going to like, we’re going to squeeze these programs until the money runs out, exhausting draconian economic measures to maintain them along the way. He gave the presentation not to offer solutions that might give us hope, because in his view, we passed by our chances to do anything about this. His message instead was, “Prepare for the worst.” Rosen has a background in corporate finance, and I give his analysis credence. Indeed, the die is cast.

The decline is going to take time. We won’t likely see obvious signs of societal decay for a while, perhaps for many years. But as I’ve watched the political decisions our society has made in light of structural problems that have gone unaddressed for decades, I’ve seen us be incapable of having the critical discussions, and making the critical decisions, that must be had, and made, if we are to maintain and advance our society into the distant future.

We are a reactionary society. We only deal with crises when they arise, not before. That makes us freak out in the midst of crises, but it’s what we prefer. Arguing for addressing looming crises ahead of time doesn’t go anywhere, because too many people don’t want to focus on it. It doesn’t serve our short-term interests, and it’s too easily dismissed as alarmism promoted by demagogues who are seeking attention, and/or power. So, as I said above, there’s a strong argument for just preparing for the crap to hit the fan, because you can pretty well predict that’s what’s going to happen, no matter what you try to do to avert it.

Getting to a point I made at the beginning about Clarey’s analysis, Stephen Hicks gave a speech covering an envy dynamic that Nietzsche predicted would arise in the future (the future from the perspective of someone in the 19th century). It gets to something that I don’t think Clarey explained that well: Why do our elite attack things that work well in our society, which promote the very things they say they support? Clarey just said that they attack big, obvious targets. Hicks said that there’s a deeper motivation: envy, due to their sense of weakness. It fits with what Clarey said: They come to realize they can’t do anything that our society values. Yet, they have the expectation of power and prestige, of being someone whom people look up to. The fact that they don’t have anything society values makes them feel worthless. It gnaws at them, and hatred grows within them, because they feel entitled to be given a certain value in society. Nietzsche said, though, that they can’t live with themselves just hating their circumstances, and the society in which they live. So first, they form their own morality, one that justifies their weakness. Second, because of their hatred of the society that does not value them, they attack its strengths, attempting to discredit its worth among as many people as they can. As Clarey would say, they don’t care about society. It’s all about them, and their validation.

I think a YouTuber named Benjamin Boyce best summarized the mentality that Aaron Clarey talked about, in his video, “Revenge of the Talentless.” He contrasted this with what developing a talent means. I like how he presented it, because it’s basic, true, and approachable; something to which those without a talent can aspire.

A couple of guys on YouTube, calling themselves A Dormant Dynasty, produced the video below, based on an article on “biological Leninism” by a different author, simply named “spandrell”. I’m including it here because it adds another dimension to the discussion: the political relationship between the elites that Clarey talked about, and the dependents who support them. A common trait that the elite class described here shares psychologically with the dependent class, in this analysis, is that the dependent class also feels that society does not value them. This elite approaches the dependents, and promises to give them their worth, so long as they grant the elite their loyalty. What transpires is that the dependent class comes to feel that it only gains its value in society from this association. Without it, its members feel worthless and powerless. This creates a powerful coalition structure that is united against the people who know that they have value to society; who seek to increase their standard of living by trading that value for a share of society’s wealth.

One thing I take exception to is that these guys don’t always pick the best graphics for presenting their ideas. They have some cleverness, which keeps it entertaining, but at times I think they go over the line into a cynicism that detracts from what they’re trying to get across. In this video, they use a graphic that uses the term “unfit” to describe the dependent class. I don’t agree with this terminology, as it makes them seem irredeemable. I’d rather think of them as “unpurposed.”

What I kind of like about the above video is that these guys take a long view with this analysis, that societies go through cycles of growth and decline (on one axis), and cycle through systems of social organization (on another axis), from the feudal, to the totalitarian, to the more liberal (not necessarily in that order).

This is a theory of the cycles of history, given by the same duo, which I think is worth contemplating, given what we are seeing. It’s a grim prognosis for the foreseeable future. I’m not fatalistic about this, but it may be something to prepare for, in whatever way you see fit.

This video covers some background before getting to that. I’ve skipped over most of it, to get to the point. If you want to see more of it, move the slider back to the beginning.

Related posts:

Psychic elephants and evolutionary psychology

Coming to grips with a frustrating truth

—Mark Miller, https://tekkie.wordpress.com

A starting point on virtual machines and architecture

Stack machines

Learning about stack machines is an adjustment for someone like me, since I’m used to register machines. I tried my hand at it, reading the last chapter of “Smalltalk-80: The Language and its Implementation” to look at the description of, and the source code for, the Smalltalk VM, and got lost pretty quickly.

I read the Wikipedia articles on stack machines, Burroughs large systems, and the Burroughs instruction set. I think these are good backgrounders for what one needs to understand before digging into an implementation.

Trivia note: Four years ago, just for exploration, I tried my hand at learning the Forth programming language, reading “Starting Forth,” by Leo Brodie, and doing the exercises. If you know Forth, and look at the Burroughs instruction set, you will see similarities. Charles Moore got inspiration for creating Forth from the Burroughs systems.

I also found the following video helpful in organizing my thoughts on this subject. I was reading “Squeak: Open Personal Computing and Multimedia,” by Mark Guzdial and Kim Rose, and the chapters on the Squeak VM and its memory management got me asking some questions about the basic components of a VM. The title of the video below is “How to build a virtual machine,” but it’s really about “How to build a virtual stack machine.” The term “virtual machine” means a simulation of a computer architecture. In the video, Terence Parr showed how to build a very simple virtual stack machine in Java. I like the scale of this example. It’s not threatening. If you have basic programming skills, you can do everything you see in this walk-through, and get a stack machine that works! He did not, however, demonstrate how to build a compiler, so you can write high-level source code for the stack machine, nor a garbage collector for memory management, nor a debugger, so you can step through the code, look at values interactively, and trace through the stack. 🙂 He did demonstrate a trace function that you can use for primitive debugging.

He created some simple programs to execute in bytecode, by writing some extra code as part of the VM project. The set of bytecodes (just a set of numbers) is submitted to the virtual processor by making a function call. The result of each program (a single value) shows up on the virtual machine stack.
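
To give a flavor of what such a program looks like, here is a sketch of a hand-assembled bytecode program. This is my own illustration, not code copied from the video, so treat the opcode names, their numbering, and the submission call as assumptions:

// My own sketch of a bytecode program, with hypothetical opcode numbering.
// ICONST pushes the number that follows it; IADD pops two values and
// pushes their sum. This program computes 1 + 2 and prints the result.
static final int ICONST = 1, IADD = 2, PRINT = 3, HALT = 4;

static int[] program = {
   ICONST, 1,    // push 1
   ICONST, 2,    // push 2
   IADD,         // pop both, push 3
   PRINT,        // pop and print the top of the stack
   HALT          // stop the fetch-execute loop
};
// Submitted to the virtual processor with something like:
//    VM vm = new VM(program);
//    vm.cpu();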

Parr’s presentation gets a bit disorganized when he talks about the call stack. Just have some patience. He clears it up towards the end.

Edit 6/22/2020: I realized, once I tried following Terence’s presentation to recreate his VM, that he didn’t clearly show all of his code. You can see most of the code he uses by pausing the video when he has parts of it shown in his IDE, but there’s some he doesn’t show at all, or that he modifies a lot. So, I’m going to paste in some code here to try to clear up the left-out and confusing spots.

He changed the cpu() routine a lot. So, I figured I would show how it ended up in the later part of his presentation, when he added stack tracing, and memory dump capability. Also, I’ve included the code he used for an important routine for tracing, called stackString().

void cpu()
{
   // The fetch-decode-execute loop: run until the instruction
   // pointer moves past the end of the bytecode array.
   while(ip < code.length)
   {
      int opcode = code[ip];
      // Trace: print the disassembled instruction before executing it...
      if(trace) System.err.printf("%-35s", disInstr());
      ip++;
      switch(opcode)
      {
      // case statements for bytecode operators
      }
      // ...and the stack contents after executing it.
      if(trace) System.err.println(stackString());
   }

   // Final trace output, after the loop ends.
   if(trace)
   {
      System.err.printf("%-35s", disassemble());
   }

   if(trace) System.err.println(stackString());
   if(trace) dumpDataMemory();
}

// Render the operand stack, from the bottom (index 0) up through
// the stack pointer (sp), as a string for trace output.
String stackString()
{
   StringBuilder buf = new StringBuilder();
   buf.append("stack=[");
   for(int i = 0; i <= sp; i++)
   {
      int o = stack[i];
      buf.append(" ");
      buf.append(o);
   }
   buf.append(" ]");
   return buf.toString();
}

In the third phase of his presentation, when he showed how to compute Fibonacci numbers with this VM, he didn’t show some bytecode operators that are necessary for making the Fibonacci program work. I’m showing the code here for those bytecodes, which I wrote myself. I must admit, I have not tested this code in Java, as I translated Terence’s code into a different language to try it out. If this doesn’t work, I apologize in advance. I can say the logic is sound. So, any errors this code generates should just require a little debugging of my misunderstanding of how Java specifies things.

These case statements go inside the switch statement in the cpu() function. Note that Terence created a couple of constants, TRUE and FALSE, which hold the values 1 and 0, respectively; you should include those in your code as well:

case ILT:                     // integer less-than
   int val1 = stack[sp--];    // pop the right operand
   int val2 = stack[sp--];    // pop the left operand
   if (val2 < val1)
   {
      stack[++sp] = TRUE;     // push 1
   }
   else
   {
      stack[++sp] = FALSE;    // push 0
   }
   break;
case ISUB:                    // integer subtraction
   val1 = stack[sp--];
   val2 = stack[sp--];
   stack[++sp] = val2 - val1; // left operand minus right operand
   break;
case IMUL:                    // integer multiplication
   val1 = stack[sp--];
   val2 = stack[sp--];
   stack[++sp] = val2 * val1;
   break;
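
For these cases to compile, the opcode constants have to be defined somewhere in the VM class, alongside TRUE and FALSE. Here is a sketch of what that might look like; the numeric values are my own placeholders, since I don’t know the exact numbering Terence used. What matters is only that they match the numbers in your bytecode programs and in the disassembler’s instruction table:

static final int ILT  = 1;   // integer less-than (hypothetical numbering)
static final int ISUB = 2;   // integer subtract
static final int IMUL = 3;   // integer multiply

static final int TRUE  = 1;  // the boolean constants Terence defined,
static final int FALSE = 0;  // used by the ILT case above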

You may or may not notice this, but in the rush of assembling the code for his presentation, he ended up duplicating code in disInstr() and disassemble() (he shows the code for both). When he’s through putting everything together, these functions end up with the exact same code in them, though his code calls both of them. Of course, under normal circumstances, he would’ve just used a single function to disassemble the bytecodes.

To get some ideas for “where to go from here,” I liked reading How to build a simple stack machine, by Nicola Malizia. Though Nicola’s English is wobbly, what I like is that he sketches out some basic logic for implementing control structures, variables, and procedures in a way that’s mostly compatible with the machine model that Terence laid out. An element he adds is labels for branching, so we can have mnemonics for addresses, instead of just numbers. Nicola defined a language for doing all of this.

Where he departs from Terence’s model is that he implements what I’d call an interpreter, not a VM (though these distinctions get blurry). He doesn’t create an assembler for his machine, so he has no bytecodes to execute. In Nicola’s scheme, the instruction pointer literally indexes an array of instruction strings that are parsed as the program executes. However, he implements a stack, with stack frames.
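
To make the distinction concrete, here is a minimal sketch of that kind of string-parsing interpreter loop. I’ve written it in Java to match the earlier examples; it is my own illustration, not Nicola’s code:

import java.util.ArrayDeque;
import java.util.Deque;

public class StringInterpreter {
   public static void main(String[] args) {
      // No assembler, no bytecodes: the "instruction pointer" indexes
      // an array of text instructions, parsed on every execution step.
      String[] program = { "push 2", "push 3", "mul", "print", "halt" };
      Deque<Integer> stack = new ArrayDeque<>();
      int ip = 0;
      while (!program[ip].equals("halt")) {
         String[] parts = program[ip].split(" ");  // parsing happens at run time
         switch (parts[0]) {
            case "push":  stack.push(Integer.parseInt(parts[1])); break;
            case "mul":   stack.push(stack.pop() * stack.pop()); break;
            case "print": System.out.println(stack.peek()); break;
         }
         ip++;
      }
   }
}

A VM in Terence’s style does the parsing once, up front (that’s the assembler’s job), and then executes numbers; here, the text is re-parsed every time an instruction runs.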

Getting acquainted with the concept of architecture

In my learning process, I’ve liked working with a low-scale (8-bit) architecture from my youth (in an emulator), based on the 6502 processor. It’s kept things more comfortable, as I’m getting used to ideas about using computing for modeling. I really enjoyed the presentation below, by David Holz, which is about a 16-bit VM that runs on another vintage 8-bit computer, the Commodore 64. The C-64 uses a 6510 processor, a very close relative of the 6502. This 16-bit VM is called AcheronVM.

If you’re not already familiar with it, you’ll probably want to learn some 6502 assembly language, to understand what Holz talked about, since his scheme uses 6502 code as a kind of microcode for the 16-bit virtual processor. 6502 assembly is a simple CISC instruction set (compared to today’s processors).

What I particularly like is that his presentation gets across a basic idea in something Alan Kay told me many years ago, when I first started corresponding with him:

[T]o me, most of programming is really architectural design (and most of what is wrong with programming today is that it is not at all about architectural design but just about tinkering a few effectors to add on to an already disastrous mess).

What I used to understand about the term “architectural design” had to do with designing data structures and APIs (perhaps abstract data types), but that’s not really what Kay was talking about. I had some experience with it by going through the third chapter in SICP, seeing what’s possible with computing mathematical series using lazy evaluation (I wrote about a few exercises in that chapter, at SICP: Chapter 3 and exercises 3.59, 3.60, and 3.62). What comes through in Holz’s presentation is the importance (and power) of architecture in the design of a VM.
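
To give a small taste of the SICP material I mentioned, here is a sketch of computing a mathematical series with lazy evaluation. SICP does this in Scheme; I’ve written the illustration in Java, to match the other examples here. The terms of the series are described as an infinite stream, and nothing is computed until a terminal operation demands it:

import java.util.stream.IntStream;

public class LazySeries {
   public static void main(String[] args) {
      // The Leibniz series 4*(1 - 1/3 + 1/5 - ...) converges to pi.
      // IntStream.iterate describes an infinite sequence of term indices;
      // no term is computed until sum() pulls it through the pipeline.
      double pi = 4 * IntStream.iterate(0, n -> n + 1)
            .limit(100_000)
            .mapToDouble(n -> Math.pow(-1, n) / (2 * n + 1))
            .sum();
      System.out.println(pi);   // 3.14158...
   }
}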

A few years back, I read about the K-Machine, a computer that Lisp Machines, Inc. never released. The linked article talks about a design feature that Holz also used in his architecture, called “tag bits.” It’s a method of encoding meta-information about an instruction, or an address. It was also used in the Burroughs machines.
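
As a rough illustration of the idea (my own sketch, in Java; not how the K-Machine, AcheronVM, or the Burroughs machines actually lay out their words), imagine reserving the low bits of every machine word as a tag that classifies what the rest of the word means:

final class TaggedWord {
   // A hypothetical 2-bit tag in the low bits of each word.
   static final int TAG_BITS    = 2;
   static final int TAG_MASK    = 0b11;
   static final int TAG_FIXNUM  = 0b00;  // remaining bits are a small integer
   static final int TAG_POINTER = 0b01;  // remaining bits are an address
   static final int TAG_CHAR    = 0b10;  // remaining bits are a character code

   static int makeFixnum(int n)     { return (n << TAG_BITS) | TAG_FIXNUM; }
   static int makePointer(int addr) { return (addr << TAG_BITS) | TAG_POINTER; }
   static int tagOf(int word)       { return word & TAG_MASK; }
   static int valueOf(int word)     { return word >> TAG_BITS; }

   public static void main(String[] args) {
      int w = makeFixnum(21);
      if (tagOf(w) == TAG_FIXNUM)             // check the type before using the value
         System.out.println(valueOf(w) * 2);  // prints 42
   }
}

The payoff is that the machine itself can check, at run time, that an operation is being applied to the right kind of data, instead of trusting the compiler to have gotten everything right.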


I’ve been interested in this topic for years, because I had a “2×4 across the head” experience while I was working in IT services in the mid-90s. I was working on a software project that was part of an in-house, data-driven application platform: an interpreter that implemented a page description language, which also allowed some “scripting.” It was used for generating reports. I did a lot of wrestling with it, since it ran on MS-DOS, and was written in C. The combination of these was an awful experience. At the time, it inspired me to think about how to make the interpreter’s architecture better (because I didn’t know any other approach was possible). I’m using “architecture” here in the old sense of how I thought about it. The approach I used ended up being like trying to graft onto, and augment, a badly designed machine. In hindsight, what I was really trying to improve upon was the combination of running C on MS-DOS, without modifying the architecture offered by either. My effort produced some improvement, but the code was a lot of make-work. Looking back, what I came up with was the kind of coding scheme that justified the use of meta-languages with code generators in the 2000s, as it became very repetitive. What I’d say is that when you see that pattern developing, it might actually be a signal that you need to improve the architecture you’re using (here, I’m using the term in the new sense), because what you’re really doing is compensating for a systemic weakness, exemplified by the operating system and/or the programming language you’re using. I had no idea how to do that. It wasn’t part of the curriculum in my computer science education, which I regret.

As a commenter on Terence Parr’s video said, “It’s too bad that this isn’t part of a software engineer’s education anymore.” The commenter mentioned a couple of examples of where this design method was used to good effect in commercial software in the 1980s, and mentioned Don Knuth’s book, “The Art of Computer Programming,” which uses a VM called MMIX.

Though, I can’t help reflecting on something else Kay said back in the ’70s, “People who are really serious about software should make their own hardware.” For me, that’s going to have to wait for another day.

Alan Kay: Basic reading material for computer science students (and programmers)

Once again, Alan Kay gave some basic starting material for those who want to understand what computing makes possible. He seemed to take the bootstrapping approach to “build something better than what’s out there” as a basic project that every aspiring computer scientist/programmer should work on, beginning with some good ideas that have already been made.

Experienced programmers and computer scientists, what are some really old (or even nearly forgotten) books you think every new programmer should read?

I love that “2006” and “2008” (in another answer) must be considered “really old” (which is what the question requests) …

I’m still a big fan of the “Lisp 1.5 Programmers Manual” (MIT Press — still in print). This version of the language is no longer with us, but the book — first written ca 1962 — by John McCarthy, who invented, and his colleagues, who implemented, is a perfect classic.

It starts with a version of John’s first papers about Lisp, and develops the ideas in a few pages of examples to culminate on page 13 with Lisp eval and apply defined in itself. There are many other thought provoking ideas and examples throughout the rest of the book.

The way to grow from this book is to deeply learn what they did and how they did it, and then try to rewrite page 13 in a number of ways. How nicely can this be written in “a lisp” using recursion. How nicely can this be written without recursion? (In both cases, look ahead in the book to see that Lisp 1.5 had gotten to the idea of EXPRs and FEXPRs (functions which don’t eval their arguments before the call — thus they can be used to replace all the “special forms” — do a Lisp made from FEXPRs and get the rest by definition, etc.).

What is a neat bootstrapping path? How could you combine this with Val Shorre’s “Meta II” programmatic parser to make a really extensible language? What does it take to get to “objects”? What are three or four really interesting (and different ways) to think about objects here? (Hints: how many different ways can you define “closures” in a language that executes? What about using Lisp atoms as a model for objects? Etc.)

The idea is that Lisp is not just a language but a really deep “building material” that is tidy enough to “think with” not just make things (it’s a “building material” for thoughts as well as computer processes).

Dani Richard reminded me to mention: “Computation: Finite and Infinite Machines” by Marvin Minsky (Prentice-Hall, 1967), which — since it is one of my favorite books of all time — I’m surprised I didn’t include in the original list. Marvin could really write, and in this book he is at his best. It is actually a “math book” — with lots of ideas, theorems, proofs, etc., — but presented in the friendliest way imaginable by a great mind who treated everyone — including children — as equal to him, and as fellow appreciators of great ideas. There are lots of interesting things to ponder in this book, but perhaps it is the approach that beckons to the reader to start thinking “like this” that is the most rewarding.

“Advances in Programming and Non-Numerical Computation” (Ed. L. Fox) mid-60s. The papers presented at a 1963 summer workshop in the UK. The most provocative ones were by Christopher Strachey and several by Peter Landin. This was one of the books that Bob Barton had us read in his famous advanced systems design class in 1967.

Try “The Mythical Man-Month” by Fred Brooks, for an early look and experience with timeless truths (and gotchas) from systems building with teams …

Try “The Sciences of the Artificial” by Herb Simon. A much stronger way to think about computing — and what “Computer Science” might mean — by a much stronger thinker than most today.

“A Programming Language” by Ken Iverson (ca 1962). This has the same thought expanding properties of Lisp. And, like Lisp, the way to learn from these really old ideas is to concentrate on what is unique and powerful in the approach (we know how to better improve both Lisp and APL today, but the deep essence is perhaps easier to grasp in the original manifestations of the ideas). Another book that Barton had us read.

I like Dave Fisher’s 1970 CMU Thesis — “Control Structures for Programming Languages” — most especially the first 100 pages. Still a real gem for helping to think about design and implementations.

More recent: (80s) “The Meta-Object Protocol” by Kiczales, et al. The first section and example is a must to read and understand.

Joe Armstrong’s PhD thesis — after many years of valuable experience with Erlang — was published as a book ca 2003 …

Lots more out there for curious minds ….

The cost of deinstitutionalization

I am linking to a video (I can’t embed it here), because it is a comprehensive look at a problem I’ve been seeing for many years: the history of what caused it, and its costs to the cohesion of our society. I am speaking of the societal impact of mental illness, due to the fact that mental institutions have been closed, and that it is much more difficult to institutionalize the mentally ill than it was 65 years ago.

To watch the video, click on this link: The Destruction of America’s Mental Health Care System (and its consequences)

This problem has been close to the top of my mind since the late ’90s, because of the repeated pattern I’ve seen: in most cases, when a rampage killing happens, it is committed by someone with a severe mental illness, such as schizophrenia.

In my own mind, it is inhumane that we expect such people to try to make it in our society. They lack the mental capacity to govern their own actions, which is a basic, and necessary, expectation for those who live in it.

What Stefan Molyneux lays out in great detail is that by shutting down state mental institutions, which has been going on since the late 1950s, we have merely converted those who would have been kept in mental hospitals in the past into inmates in our prison system, since so many of them end up committing violent crimes, or into homeless people who live and sleep on the streets. The idea that housing them in mental hospitals violates their rights is tragically comical, given this evidence. Neither the streets, nor our prison system, is an appropriate place for the mentally ill. What more evidence does one need that these people cannot function in a society?

We can agree that there are mild forms of mental illness that do not warrant institutionalization, such as depression. However, to say that most forms of mental illness do not deserve consideration for it does, I think, a disservice to the mentally ill themselves, because putting them in prison, or leaving them to face the responsibilities and stimuli of a society they struggle to understand and deal with, unnecessarily leads them, and those they potentially harm, into lives of tragedy.

Simple computers

A while back, Alan Kay and I got into a discussion about what a computer is, and he wished to correct my notion of them. I had this idea that a computer is something that can do an operation (like store/fetch memory, do an arithmetic operation), do comparisons, branch, and do this repeatedly, without the need of human intervention. I contrasted this with a typical, non-programmable calculator, which I thought wasn’t a computer. He said that a computer is something that can help people realize larger ideas, that it doesn’t necessarily need what I outlined, and that even a calculator is a computer. He said you can make a simple computer using two rulers. He used this example with children when he was involved with education.

I came upon “Hands-On: Smarty Cat is Junior’s First Slide Rule” from Hackaday. The main subject is a children’s version of the slide rule, called Smarty Cat, that can do addition, subtraction, multiplication, and division: a four-function calculator for integers. The article also shows the two-ruler example that Alan talked about, for doing addition and subtraction on integers and some rational numbers. It uses this as an intro to talking about real slide rules, which operate on the same principle, but facilitate more sophisticated operations.
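
The principle is worth spelling out, because it’s the whole trick: two rulers add lengths, and if the scales are marked logarithmically, adding lengths multiplies numbers. Here is a tiny sketch of that idea, in Java (my own illustration, not from the article):

public class SlideRule {
   public static void main(String[] args) {
      // Two plain rulers add lengths: slide one ruler to the 3 mark on the
      // other, read off at 4, and you get 3 + 4 = 7. A slide rule marks its
      // scales logarithmically, so the same sliding motion multiplies:
      // log(a) + log(b) = log(a*b).
      double a = 3, b = 7;
      double lengthA = Math.log10(a);   // physical distance along the scale
      double lengthB = Math.log10(b);
      double product = Math.pow(10, lengthA + lengthB);
      System.out.println(product);      // ~21.0
   }
}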

Ransomware attacks on government IT systems: One reason we need better software architecture

I got word that there has been a spate of ransomware attacks on municipal governments, and in one case, the State of Colorado’s Dept. of Transportation (CDOT).

I’ve done a little research, looking for details about how this happened. The best I’m getting is that the governments that got infected didn’t keep their software up to date with the latest versions and patches. They also didn’t maintain good network security. That seems to be the way these ransomware attacks happen. However, in the case of the City of Baltimore, the attack method was different from those in the past. The infection appears to have been installed intentionally on the government’s computers by someone, or some group, who got into its systems through some means, perhaps through a remote desktop capability that was left unsecured. It didn’t come in through an infected website, or through a viral e-mail used as bait.

Baltimore’s out-of-date and underfunded IT system was ripe for ransomware attack – Baltimore Brew

Some time back, I got a tip from Alan Kay to look at Mark S. Miller‘s work (no relation) on creating secure software infrastructure, using object capabilities, in an Actor sort of way. It was educational, largely because he said that in the 1960s and ’70s, there were two schools of thought on how to design a secure system architecture. One was using access control lists (ACLs), based around user roles. The other was based around what’s been called “capabilities” (or “cap”). The access control list model is what almost every computer system now in existence uses, and it is the primary cause of the security vulnerabilities we’ve been dealing with for three decades.

He said the problem with ACLs is that whenever you run a piece of software logged in using a certain user credential, that software has all the power and authority that you have. It doesn’t matter if you ran a piece of software by mistake, or someone is impersonating you. The software has as much run of the system as you do. If it manages to elevate its privileges through a hack, making itself appear to the system as the “root” user (or “Admin”), then it has complete run of the system, no holds barred.

The capability model doesn’t allow this. To use an analogy, it sandboxes software, based on the capabilities, or designation, that spawned it, not on the user’s credentials. Capabilities are defined for each object and component in the system, based on what resources it needs. If one component spawns another, the spawned component may define some capabilities of its own, but otherwise, it gets only the capabilities that the component that spawned it allows it to have, drawn only from the spawner’s own repertoire. A user can give software more capabilities, if they desire it.

The strength of this model is that there is no avenue for a piece of software to suddenly gain access to all of the system’s resources, just because it fooled you into running it, or someone got your credentials and installed it. Even if an impersonator gains access to the system, they don’t have the ability to rampage through the system’s most sensitive resources, because the system doesn’t use a user-role access model.

Part of the security infrastructure of object capabilities is that objects are not globally accessed. Using the Actor model of access, object references are only obtained through “introduction,” through an interface.
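
Here is a minimal sketch of the idea in Java (my own illustration; not Miller’s E or Caja). Authority travels only along object references: code holding a ReadCap can read exactly one file, no matter what user account it runs under, and it has no way to name any other resource:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

interface ReadCap {                       // a capability: the reference is the authority
   String read() throws IOException;
}

final class ReadOnlyFile implements ReadCap {
   private final Path path;               // encapsulated; unreachable from outside
   ReadOnlyFile(Path path) { this.path = path; }
   public String read() throws IOException { return Files.readString(path); }
}

class ReportGenerator {
   private final ReadCap input;
   // "Introduction": this component's only authority is what it is handed here.
   ReportGenerator(ReadCap input) { this.input = input; }
   String run() throws IOException {
      return "Report: " + input.read();   // can read this one file; cannot write,
   }                                      // delete, or reach any other file
}

class Demo {
   public static void main(String[] args) throws IOException {
      ReadCap cap = new ReadOnlyFile(Path.of("data.txt"));   // hypothetical input file
      System.out.println(new ReportGenerator(cap).run());
   }
}

Of course, Java itself still offers ambient authority (any code can open any file on its own), so this only illustrates the discipline; languages like E are built so that it can’t be bypassed.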

I’m including a couple of videos from Miller to illustrate this principle, since he created a couple of programming languages (and runtimes), called E and Caja, that use this capability model in Windows, and in JavaScript/EcmaScript, respectively.

The first video demonstrates software written in E, from 2002.

This next one demonstrates a web app written in Caja, from 2011. A major topic of discussion here is using object capabilities in a distributed environment. So, cryptographic keys are an important part of it, providing secret identifiers between objects, to restrict system access to sensitive resources.

In the Caja demo, he showed that an object that’s unknown to the system can be brought into it, and allowed to run, without breaking the object’s operation, and without threatening the system’s security. Again, this is because of the architecture’s sandboxing, which is deep. It’s also because of the object-oriented nature of this system. Everything is late-bound, and everything executes through interfaces. Emulation objects that use the same interfaces as sensitive system resources can provide sandboxed versions of them.

Miller has approached capabilities from a standpoint of pragmatism: “the operating system ship has sailed.” We’re stuck with the access control list model as our OS foundation for running software, because too much depends on it. We have a very loooooong legacy of software that uses it, and it’s not practical to just pitch it and start over. He’s talked about “tunneling” from ACLs to capabilities, as a gradualist path to a better, more secure model for designing software. This still leaves some security vulnerabilities open for attack in his system, because he hasn’t rebuilt the entire operating system from scratch. He created a layer (runtimes), on top of which his own software runs, so that it can operate in this way, without falling prey to attack as much as the other software on the system.

This pragmatic approach has its virtues, but it would not have prevented the attacks I’m talking about above, because they used system resources that are built into the operating system. What these attacks highlighted for me is that the system architecture we’re using does not provide adequate security for operating on a network. Sure, you can keep patching and upgrading your system software to stay ahead of cyber threats, but get behind on that, while not adequately compensating for the system security model’s inherent weaknesses, and you compromise a large chunk of your IT investment.

As I am not terribly familiar with the capability model yet, this is as much as I’ll say about it. I think Miller’s demos of it were very impressive. What this is meant to get across is the need for better software architecture, not in the sense of using something like design patterns, but a different system of semantics that is surfaced through one or more programming interfaces designed to use it.

The architecture Miller demonstrated is object-oriented, and I mean that in the real sense of the word, not in the sense of Java, C++, C#, etc., but in the sense, as I said earlier, of the Actor model, which carries on some fundamental features that were invented in Smalltalk, and adds some more. The implementations he created seem to be synchronous programming models (unlike Actor), using the operating system’s multitasking facility to run things concurrently.

It would behoove government and industry to fund basic research into secure system software architecture for the very reason that our governments now depend so much on computers running securely. What Miller demonstrated is one idea for addressing that, but it’s as old as a lot of other ideas in computer science, and it no doubt has some weaknesses that could be overcome by a better model. Though, reading Miller’s commentary, he said he’s biased in favor of thinking that the capability model is the best. He hasn’t seen anything better.

Google+ is gone

It shut down April 2nd. I got word of this last fall. Google originally said they were going to keep it up for another year, but then chopped that timeline in half.

I’ve gotten used to the fact that many Google services are temporary. I had been on Google+ since 2012, it seems. Its best feature was its groups. It wasn’t as nice a platform as Facebook, and the groups I was interested in were not very active, but I had some of my best conversations on there. The experience was really great if I turned off G+’s ability to suggest other posts outside of my interests. There was quite a bit of “trash” on there that I would just as soon ignore, and a nice thing about G+ was that it allowed me to do that.

I’ve preserved most of my G+ posts on Mark’s favorites, a WordPress blog I set up out of frustration with Facebook many years ago. I’ve been using it as a “dumping ground” for stuff that’s been of some interest to me, but that I don’t think the groups and people I’ve met on other social media platforms would care about.

I expect I’ll be posting most new stuff on here, and occasionally posting small items of interest on “Mark’s favorites”.

On improving a computing model

I’ve been struggling for at least a year to come up with a way to talk about this that makes some sort of sense, since I would like to help people start somewhere in understanding what it means to get on the track of “computer science as an aspirational goal.” For me, the first steps have been to understand that some tools have been invented that can be used in this work, and to work on a conceptual model of a processor. Part of that is beginning to understand what constitutes a processor, and so it’s important to do some study of processors, even if it’s just at a conceptual level. That doesn’t mean you have to start out understanding the electronics, though that can also be interesting. It could be (as has been the case with me) looking at the logic that supports the machine instructions you use with the processor.

The existing tools are ones you’ve heard about, if you’ve spent any time looking at what makes up traditional computing systems, and what developers use: operating systems, compilers, interpreters, text editors, debuggers, and probably processor/logic board emulators. These get you up to a level where, rather than focusing on transistor logic, or machine code, you can think about building processors out of other conceptual processors, with the idea being that you can build something that carries out processes that map to clear, consistent, formal meanings in your mind.

What started me on this was referring back to a brief discussion I had with Alan Kay, back in the late 2000s, on what his conception of programming was. His basic response to my question was, “Well, I don’t think I have a real conception of programming”. In part of this same response he said,

Smalltalk partly came from “20 examples that had to be done much better”, and “take the most difficult thing you will have to do, do it, and then build everything else out of that architecture”.

He emphasized the importance of architecture, saying that “most of programming is really architectural design.”

It took me several years to understand what he meant by that, but once I did, I wanted to start on figuring out how to do it.

A suggestion he made was to work on something big. He suggested taking a look at an operating system, or a programming language, or a word processor, and then work on trying to find a better architecture for doing what the original thing did.

I got started on this about 5 years ago, as I remember. As I look back on it, the process of doing this reminds me a lot of what Kay said at the end of the video I talked about in Alan Kay: Rethinking CS education.

He talked about a process that looks like a straight line, at first, but you run into some obstacles. In my experience, these obstacles are your own ignorance. The more you explore, the more you discover some interesting questions about your own thinking on it, and some other goals that might be of value, which take you some distance away from the original goal you had, but which you feel motivated to explore. You make some progress on those, more than on what you started out trying to do, but you still run into other obstacles, again from your own ignorance. When you explore some more, you find other goals that, again, take you farther away from your original goal, and you make even more progress on those. If you think on it, everything you’re doing is somehow related to the goal, but it can give you some concern, because you may think that you’re wasting time on trivial goals, rather than the goal you had. The way I’ve thought about this is like “backing up from the problem I thought I was trying to solve,” looking at more basic problems. It seems to me the “terrain” that Kay was showing in the virtual map was really the contours of one’s own ignorance.

Describing how I got into this (you don’t have to take this path), I looked at a very simple operating system, from a computer I used when I was a teenager. I looked at a disassembly of its memory, since it was built into ROM. The first realization from looking at the code was that an operating system really is just a program, or a program with an attached library of code that the hardware and user software often jump into temporarily to accomplish standard tasks. The other realization was the answer to the question, “How does this program get started?” (It turns out microprocessors have a pre-programmed boot address in the computer’s address space, which is set up as boot code in ROM, where they begin fetching and executing instructions once you power on the processor.) In modern computers, the boot ROM is what’s traditionally called a BIOS (Basic Input/Output System), but in the computer I was studying, it was the OS.
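
To make that boot process concrete, here is a tiny sketch in Java of what an emulator does at power-on, using the 6502’s convention, since that’s the processor family in the computer I was studying:

public class PowerOn {
   public static void main(String[] args) {
      int[] memory = new int[0x10000];   // the 64K address space; ROM mapped at the top
      // ... load the ROM image into the top of memory ...
      int lo = memory[0xFFFC] & 0xFF;    // the 6502 reads its starting address from
      int hi = memory[0xFFFD] & 0xFF;    // the "reset vector" at $FFFC-$FFFD (low byte first)
      int pc = (hi << 8) | lo;           // execution begins here, in the boot ROM
      System.out.printf("boot address: $%04X%n", pc);
   }
}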

The basic outline of my project was to study this OS, and try to see if I could improve on its architecture, reimplement it in the new architecture, and then “make everything else” out of that architecture.

I quickly realized a couple of things. The first was that I didn’t want to be dealing just in assembly language for the whole project. The second was a sneaking feeling, which I’ve learned to listen to in my learning process, that (in this case) I was ignorant of the underlying processor architecture, and needed to learn it. My answer to both of these problems was that I needed to model some higher-level process concepts (to get out of working in assembly, yet still use it to generate a new implementation out of a higher-level model; in other words, a compiler), and I needed to construct a conceptual model of the CPU, so I could gain some understanding of the machine the OS runs on. In my learning process, I’ve found it most helpful to understand what’s going on underneath what I’m using to get things done, because it actually does matter for solving problems.

I needed a tool for both creating a programming model (for my higher-level process concepts), and for modeling the CPU. I chose Lisp. It felt like a natural candidate for this, though this is not a prescription for what you should use. Do some exploration, and make your own choices.

I started out using an old dialect of Lisp that was watered down to run on the old computer I was studying. I had a goal with it that I’ve since learned was not a good idea: I wanted to see if I could use the old computer to write its own OS, using this method. What I found through some historical research was that when Kay talked about what they did at Xerox PARC, they definitely didn’t use the method I’m describing; anything but! They used more powerful hardware to model the computer they were building, to write its system software. Nevertheless, I stuck with the setup I had, because I was not ready to do this. I found a different benefit from using it, though: The Lisp interpreter I’ve been using starts out with so little that, in order to get to some basic constructs I wanted, I had to write them myself. I found that I learned Lisp better this way than by using a modern Lisp environment, because in the latter, I kept being tempted to learn to use what was already available, rather than “learn by doing.”

I had to step back from my goal, again, to learn some basics of how to gain power out of Lisp. I have not regretted this decision. In the process, I’ve been learning how to improve on the Lisp environment I’ve been using. So, it’s become a multi-stage process. I’ve put the idea of improving the OS on hold, and focused on how I can improve the Lisp environment, to make it into something that’s powerful for me, in a way that I can understand, and that serves my larger goal of understanding the target system.

As I’ve worked with this underpowered setup, I’ve been getting indications now and then that I’m going to need to move up to a more powerful programming environment (probably a different Lisp), because I’m inevitably going to outgrow the current implementation’s limitations.

The deviations I’m describing here are about first pursuing a goal as a motivator for taking some action, realizing that I had more to learn about the means of achieving it, putting the initial goal on hold, and spending time learning the means. It’s a process of motivated exploration. It can be a challenge at times to keep my eye on the initial goal that got me started on this whole thing, which I still have, because I get so focused on learning basic stuff. I just have to remind myself from time to time. It’s like wanting to climb Everest when you have no experience climbing mountains. There are some basic skills you have to develop, some basic equipment you have to get, and physical conditioning you have to do first. Just focusing on that can seem like the entire goal after a while. Part of the challenge is reminding yourself where you want to get to, without getting impatient about it.

Patience is something I’ve had to put some deliberate focus on, because I can feel like I’m not moving fast enough, given how long it’s taken me to get to where I am. I think back to how I started out programming for the first time, when I was about 12 years old. This process has felt a lot like that, and I’ve used that experience as a reminder of what I’m doing, as opposed to the expectations I formed in the more experienced context I am trying to give up: my many years in the “pop culture” of computing. When I was first learning programming, it took a few years before I started feeling confident in my skill, where I could solve problems more and more by myself, without a lot of hand-holding from someone more experienced.

I remind myself of a quote from “The Early History of Smalltalk”:

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

What I see myself doing with this attempt is gaining practice in improving computing models. I’ve selected an existing model that is somewhat familiar to me as my platform to practice on, though it contains some elements that are utterly unfamiliar. It contains enough context that I can grasp the basic building blocks quickly, making it less intimidating. (Part of this approach is that I’m working with my own psychology: what will motivate me to keep at this.) This approach has also enabled me to push aside the paraphernalia of modern systems, which I find distracting in this process, for the purpose of gaining practice. (I haven’t thrown away modern systems for most things. I’m writing this blog post from one.)

You cannot plan discovery. So, I’m focused on exploring, improving my understanding, being receptive, and questioning assumptions. I’ve found that being open to certain influences, even ones that take me away from goals in an unplanned fashion (just from events in life forcing me to reprioritize my activity), can be very helpful in thinking new thoughts. That’s the goal: improving the way I think, learn, and see.

Coming to grips with a frustrating truth

I’d heard about Mensa many years ago, and for many years I was kind of interested in it. Every once in a while I’d see “intelligence quizzes,” which were supposed to get one interested in the group (it worked). Mensa requires an IQ test, and a minimum score, to join. I looked at some samples of the discussions members had on topics related to societal issues, though, and it all looked pretty mundane. It wasn’t the intelligent discussion I expected, which was surprising. Later, I found out I’m below their entrance threshold. Based on taking that peek, though, I don’t feel like I’m missing anything.

I came upon a video by Stefan Molyneux recently (it was made in 2012; Update 7/2/2020: It’s now gone), and it seems to explain what I’ve been seeing generally for more than ten years in the same sort of societal discussions (though I can’t say what the IQ level of the participants was). I’ve frequently run into people who seem to have some proactive mental ability, and yet what they come out with when thinking about the society they live in is way below par. I see it on Quora all the time. Most of the answers to political questions are the dregs of the site–really bad. Until I saw this analysis, I had no explanation for this inconsistency, other than that perhaps certain people with less-than-stellar intelligence are drawn to the political questions. Molyneux said it’s the result of a kind of cognitive abuse.

The reason I’m bothering with this at all is that seeing what he described play out has bothered me for many years, though I’d assumed it was due to popular ignorance. It’s part of what’s driven my desire to get into education, though now I feel I have to be more humble about whether that’s really a good answer to this.

I found his explanation for this confusing at first. I needed to listen to it a few times, and take notes. I’ll attempt to summarize.

This is a generalization, but the point is to apply it to people who are more intelligent than average, and refuse to allow inquiry into their beliefs about society:

Children who are in the “gifted” categories of IQ are told a certain moral message when they’re young, about how they are to behave. However, when those same children try to apply that morality to their parents, and the adults around them–in other words, demand consistency–they are punished, humiliated, and/or shamed for it. They eventually figure out that morality has been used to control them, not teach them. (This gave me the thought, based on other material by Molyneux, that perhaps this is one reason atheism is so prevalent in this IQ category. Rather than morality being a tool to uplift people to a higher state of being, it’s seen purely as a cynical means of control, which they understandably reject.) As soon as they try to treat morality as morality–in other words, as a universal set of rules by which everyone in their society is governed–they are attacked as immoral, uncaring, brutish, and wrong, and are slandered. This is traumatic to a young mind trying to make sense of its world.

The contradiction they encounter is that they’re told they’re evil for not following these rules as children, and then they’re told they’re evil for attempting to apply those same rules to other adults when they grow up. They are punished for attempting to tell the truth, even though they were told when they were young that telling the truth is a virtue (and that lying is evil). If they attempt to tell the truth about their society, they are punished by the same adults who cared for them.

The image he paints is, to me, analogous to Pavlov’s dog, where all of its attempts to follow its instincts in a productive way are punished, leading to it quivering in a corner, confused, afraid, and despondent, unable to respond at all in the presence of food. In this case, all attempts to apply a moral code consistently are punished, leading to a disabled sense of social morality, and a rejection of inquiry into this battered belief system, in an attempt to protect the wound.

Molyneux comes to an ugly truth about this situation. This inability to question one’s societal beliefs is the product of a master-slave society: In slave societies, rules are applied to the slaves that are not applied to the masters, who operate by a different set of rules. Morality that is dispensed to the ignorant is used as a cynical cover for control. Those subjected to this inconsistent reality deal with it by trying their best to not look at it. Instead of pushing through the shaming, and demanding consistency, risking the rejection from the society in which they grew up that this entails, they blindly accept the master-slave dichotomy, and say, “That’s just the way it is.” They attack those who question it, because engaging with the question leads them back to the pain they suffered when they questioned it themselves.

He also addressed a psychological phenomenon called “projection.” He said,

… they must take the horrors of their own soul and project them upon the naive and honest questioner. Every term that is used as an attack against you for engaging in these conversations is an apt and deeply known description of their own souls, or what’s left of them.

Molyneux sort of addressed the evolutionary reasons for motivated reasoning in another video (Update 7/2/2020: Also now gone), but I have liked Jonathan Haidt’s explanation for it better, since he gets into the group dynamic of shared beliefs, and justifies them by saying that they played some role in the survival of our species, up until recently: Those who had this group-belief trait lived to reproduce. Those who did not died out. That isn’t to say that it’s essential to our survival today, but that it deserves our respectful treatment, since it was a trait that got us to where we are.

What’s also interesting is that Molyneux related the trait of motivated reasoning to the practice of science, quoting Max Planck (I’ve heard scientists talk about this) in saying that science really only advances when the older generation of scientists dies. This creates room for other ideas, supported by evidence and critical analysis, to flourish, perhaps setting a new paradigm. If so, that paradigm becomes a new sort of orthodoxy in the discipline for another generation or so, until it, too, dies away with the scientists who came up with it, repeating the cycle.

Related posts:

Psychic elephants and evolutionary psychology

The dangerous brew of politics, religion, technology, and the good name of science