On the necessity of a proper respect for the irrational in the pursuit of truth

I’m featuring a video by Dr. Steve Turley, because I think this is an important point. He discusses what I would call moral education, which I have also concluded is missing from education. The video is from Apr. 11, 2017. He uses the uproar at the University of Missouri (“Mizzou”) as the lead-in to this topic.

We need education in how to reason, and to understand what knowledge is. We need education in how to categorize modes of thought, and to look at the strengths and weaknesses of each. We also need education in what is good and what isn’t, by which I mean, broadly, to understand that there is such a thing as better and worse, not only for oneself, but for the society in which we exist, and that it is moral and ethical to prefer the better, even though there will always be negative consequences for some as a result of doing so (in modern terms, we’d roughly call this a “cost/benefit analysis”). It should also be pointed out that we in the West have been learning about that which is better. It’s not a static thing. We’ve learned that in order to learn what is better, we need to venture beyond that which we think is good, to try new things, and see whether they add to our betterment, or not.

What’s been challenged in the last 15 years is the very ideal of betterment itself: the pursuit of truth. I think Turley’s point regarding “the ordering of our loves” is profound here.

Where I would say we especially need education, in this regard, is where the perception of what benefits us comes into conflict with what someone else believes benefits them, or what we believe benefits society. How do we choose who has the better claim? Who will get the prerogative to act on it, and who must demur? Society has a claim on moral virtue, if it is going to survive, but who speaks for it other than a collective, common understanding of some virtues? The common virtues I speak of for our society are things like freedom of speech, freedom of religion, freedom of assembly, equal protection under the law, equal rights, etc. At times, I’ve called these virtues a meta-morality. They are not part of how we conduct our daily lives, person to person, but we appeal to them when making societal decisions. Even these common virtues have been challenged in what is taught in our schools. Thereby, their perceived value to our society has been diminished, precisely because they have a moral foundation.

These virtues are not givens. People knowledgeable in history know this.

What’s becoming increasingly clear to me is that what we regard as empirical and philosophical knowledge has a moral basis for its practice, which is the pursuit of truth. The pursuit of truth is in itself a moral claim to virtuous action, precisely because it, too, has negative consequences for those who do not pursue truth, and see little value in it. It is a deliberate choice that accepts this reality, and prefers it over the historical alternatives, which have resulted in a lower quality of life, and a lack of dignity for the vast majority of people, for most of our tragic human history.

I’ll note that with truth also comes beauty, something we also value. I don’t speak just of physical beauty. It also manifests in how we express ourselves, how we act with others, and what we contribute. It’s important, as it nourishes what we would otherwise call “the soul of humanity.”

I’m reminded that the mathematician Jerry King pointed out that many of the most beautiful mathematical truths have also turned out to be useful in science and engineering. So, beauty is not entirely frivolous, as we are tempted to think.

Truth and beauty can be taken to excess. As Aristotle said, any virtue taken to excess becomes a vice. In a healthy moral system, its virtues check each other, hemming in excess.

The benefits of the pursuit of truth are things that our society has taken for granted, but which are now being highlighted by the increasing power of postmodern thought in our education system, and increasingly in our politics, and the policies under which we live.

Turley has brought up repeatedly how in the late 19th century, American universities stopped teaching moral virtues as part of the curriculum, and just focused on empiricism as the basis of knowledge. One might wonder, if this started then, why what we see now in our social state didn’t emerge much earlier. My guess is that what made up for it, and may have allowed schools to take this turn without a more obvious impact, is that Americans were still mostly engaged with organized religions, in whatever belief system they chose, and so their religious leadership gave them a moral education that schools no longer provided. Perhaps people became convinced moral education would be safe there. I think we see now this is not the case, as organized religious practice, particularly the Christian variety, has been on the decline in the United States in the last 15 or so years, and has been in decline in Western Europe for even longer.

The fact is science cannot answer the question of how we should relate to each other. It cannot tell us what is better. It can inform us in making that determination, but it can’t tell us who is more worthy to live or die; who should live free, and who should be punished for violating our society’s standard of peaceful coexistence. It cannot answer what that standard should be. Yet, all of this is of critical importance. Without moral and ethical standards of good quality, we are prey to standards that are slipshod, short-sighted, and which end up degrading our quality of life, and the life of our society. The reason is that it’s impossible for humans to act on the basic things that sustain our lives without having a belief system. It’s a fool’s bargain to neglect teaching quality moralities and ethics to the next generation, believing that science can substitute for them. One reason we have tried to substitute one for the other is that the basis for these quality belief systems–the theological stories that give rise to them–has problems when looked at under the lens of our society’s analytical tools, or these belief systems cause some negative consequences in society. We’ve seen disappointing, and to us horrifying, consequences when people have followed religious practices that are of poor quality, or when they’ve followed perverted versions of our society’s religious traditions. What’s beginning to be apparent, though, is that when we neglect teaching quality moralities in our society, we undermine the very basis for preferring our analytical tools for determining what’s better. Without a quality moral/ethical foundation, why even pursue truth, when we can imagine that we can create our own world out of whole cloth with our desires (which ends up degrading quality of life)?

Science doesn’t allow for that way of thinking, but if people see science as a limiting method, rather than a liberating one–when “moral truth is relative,” and our potential is seen as stifled; tricked away from us by our traditions–why use science to inform us about anything? Or, why shouldn’t science only be used as a voice of authority for a moralizing movement of our choosing–a ruse?

For a pursuit of truth to exist, it must first have a moral/ethical foundation that values it, the sense that truth is right, good, and elevates and liberates humanity, by virtue of the fact that it works relatively well for the societies that base themselves on it, and gives people a dignity for which they can strive, and which they can deserve.

The phrase “pursuit of truth” implies a kind of relativism, because it states that the truth is not perfectly known. What we should have in a pluralistic society is multiple moralities contending against each other in discussion and debate, because, since the truth is not known–though it is assumed as an absolute, with reason, to exist–different moralities are going to have some better ideas for addressing it than others, and people can reformulate what moralities they want to follow, based on their judgment of which ideas are better for them. However, what we used to do was make modifications to long-standing traditions, as some among us explored their own moralities. That practice had value. What’s been done recently is to throw out moral traditions entirely, sometimes keeping their names, but throwing out their substance. This came partly from our exercise of rational thought, and partly because we saw the traditions as power plays upon us, which limited our potential.

Throwing out religious traditions doesn’t mean our proclivity to believe in myths ends. What’s happened instead is new, irrational moral paralogies, as James Lindsay has called them, have arisen, seemingly formulated out of whole cloth, but which have roots in political traditions. They do not offer the kind of personal or societal life that traditional religions and philosophies teach. People may believe that these pseudo-religions offer a better life, but from what I’m seeing, they do not. However, that is up to each individual to judge.

Circling back to what I said about the need for a moral education, I don’t mean to suggest it must return to schools. My real intent with this is to appeal to our desire for a better life, and a better future for the next generation. We need to have a discussion in our society about how this should be carried out.

A point that I think needs to be made in our present circumstance is that since we need a moral basis for the pursuit of truth, I think we need to talk to those who would discourage the practice of various traditional religions–because of their failure when subjected to some rational analysis–to point out that what they’re doing is destructive to their own goals. They are undermining the basis for pursuing truth, and as I think we’re seeing, that leads to a growing chaos.

A thought that I’m sure is disquieting to proponents of rationalism is that our betterment through rational means rests on an irrational foundation. We’ve mistakenly believed that we can toss aside our traditional irrational foundation, and substitute a rationally-derived orthodoxy in its place, because of our desire to be consistent. Along with that, we’ve brought in a worship of what rational thought, we think, has given us entirely on its own. Given human nature, though, this is really asking to have our cake, and eat it, too. While it has seemed to us that giving space to the irrational destroys the benefits of what rational modes of thought give us, in reality, shunning and discrediting certain traditions that have irrational bases creates the environment for lower irrational beliefs to take their place. By doing this, we are embracing a contradiction to our nature. We pride ourselves on our consistency of thought for doing this, but neglect that we are acting in denial of what’s really going on, and of what is required to maintain and advance our society. Indeed, as we have been watching play out in our society for years, rationality can undermine its own foundation.

If anyone brings up the importance of a belief in a deity, our intelligentsia mocks them for having “imaginary friends.” Well, let me address that. I will not assert here that deities do or do not exist, but looking at it from a standpoint I find very useful for regarding such matters, there are such things as important notional inventions that we are neglecting here. Take our notion of time, for example. Yes, we notice that there is a progression of change. Nothing stays the same. To help us deal with this, and measure this change, humans invented a notion of time, but this is our formal idea of change. That notion is not objectively real, but could our society survive if we completely eliminated this idea, because of this? No, it could not. I contend that people’s religious traditions are in a similar category. We take those away, and disrespect them, at our peril. People need moral guidance from traditions that have been instrumental in furthering our civilization. We are not born with that guidance, and a secular “going through the motions” of passing on the religiously-derived notions of right and wrong (by the way!) falls to pieces on critical analysis, because, “Why do any of it?… Why should I care how other people feel, or think about what I do, or whether what I do makes them suffer? What if my desires are the most important to me, or if it’s my identity, and everything I think that’s owed? What if that’s all there is to it? What if I don’t care who judges me?”

If you listen to Scott Adams, and what he describes regarding human cognition and psychology, this will become clearer. If you look around you at the irrational moral claims that dominate our popular discussion now, you can see that nothing about our human nature has changed by quashing our religious traditions, and the moral lessons they convey.

Qualitative differences in morality matter. They make a difference in our quality of life. Making sense requires the pursuit of truth. This requires moral imperatives that value that pursuit, and motivate people toward it. This requires acknowledging and accepting that our irrational nature exists, and must be accommodated, but also kept under some control. That’s where the pursuit of truth becomes essential. It is a tool for keeping our irrational nature from going “off road” to such an extent that it ruins us.

What’s grown up in place of our displaced foundation is an irrational discourse that does not value the pursuit of truth. It behooves us to realize this, to understand what we are losing, and that our future does not have to follow linearly from our present.

Statesmen, my dear Sir, may plan and speculate for Liberty, but it is Religion and Morality alone, which can establish the Principles upon which Freedom can securely stand.

–John Adams, letter to Zabdiel Adams (21 June 1776)

The psychology of totalitarianism

I happened to find the linked video below, with Prof. Mattias Desmet, a psychoanalytic psychotherapist, discussing this matter with Dr. Reiner Fuellmich, an attorney, and Dr. Wolfgang Wodarg, an internist, pulmonologist, and social medicine specialist, at the Corona Foundation Committee (Stiftung Corona Ausschuss), from August 2021. It struck me as an important discussion, as it gets to the roots of totalitarian political systems that we all need to be aware of.

See video at this link:

Prof. Mattias Desmet – The Psychology of Totalitarianism

The following are my notes from this video.

Desmet talked about four mass psychological factors that can become present in society, which promote “mass formation,” as he called it, for a totalitarian political system:

  • Social isolation/lack of social bonds among the mass population
  • A lack of sense-making in the mass population
  • Many people experiencing a lot of free-floating anxiety
  • A lot of free-floating psychological discontent in the population

By “free-floating” he means a sense of “I’m anxious, or I’m feeling angry/depressed/disappointed in life, but I don’t know why.”

He cited as evidence for this (my guess is he was referring to Germany) the amount of antidepressants that were being taken by the population 2-3 years ago, hundreds of millions of doses.

He put emphasis on the fact that free-floating anxiety is the most painful psychological condition a person can experience.

He then discussed the triggers that can move this mass phenomenon toward totalitarianism:

  • First, mass media in the society provides a “story” (a notion of a sequence of events) that describes an object of anxiety, and at the same time puts forward a convincing strategy for dealing with this anxiety-causing phenomenon. This causes the free-floating anxiety being experienced by the mass population to become defined in the object that is put before them. Now their anxiety is no longer seen as free-floating. It seems to have a cause. People are then willing to implement the strategy they’re given to deal with this object, in an attempt to relieve themselves of the anxiety they’ve been feeling, no matter what the cost.
  • Second, masses of people engage in what they see as an epic battle with this object of anxiety. This causes a new kind of social bond to emerge between these people who had been socially isolated. Along with that, they collectively find a new kind of sense-making for themselves. It is not rigorous. It is not that rational, but it gives them a sense of making sense of the world they experience. Their life is then directed at battling the object of anxiety. It is through this that they find social connections with other people who are engaged in the same sense of fighting against this object. There is a dramatic flip from social isolation to a massive social connection, through this sense of fighting a war against the cause of their collective anxiety. This then leads to what he described as “mental intoxication,” which is equal to mass hypnosis.

Mass formation is a form of hypnosis

Once this happens, it doesn’t matter whether the story that this population has been given can be rationally, scientifically torn to pieces. What matters is their social connection, which led them to this mental intoxication. They will continue to conduct themselves as if the story is true, no matter what. The reason is they will do anything to avoid going back (this is their fear) to their prior state of free-floating anxiety, where it had no definition, identity, or discernible source, and their previous social isolation. They fear that if they accept anything counter to what brought them out of their prior state, they will go back to their prior state. The motive is as simple as pleasure over pain. Searching for truth is irrelevant.

So, Desmet said, the crucial matter is acknowledging this painful state of anxiety, and then searching for how we got into this state of social isolation, lack of social bond, and lack of sense-making, which led to free-floating anxiety, and massive psychological discontent.

He crystallized this mass social phenomenon as a symptomatic solution to a very real psychological problem.

It’s his contention (and I am sympathetic to this POV) that the sense of crisis over Covid-19 is really much more of a societal and psychological crisis than a biological one.

He said that the mental intoxication that’s experienced leads to a narrowing of attention, to only pay attention to what the story they’ve been given tells them is important. This explains why these people only see the harm done by Covid-19, and are oblivious to the collateral damage done by the lockdowns. They are also unable to feel empathy for the victims of the lockdowns. He emphasized this is not from selfishness, but from the effects of this intoxication, the “mass formation,” as he’s termed it.

He said this effect is so powerful, it so focuses their attention, that you can diminish their physical well-being, and they won’t notice it. This goes back to what he said about how it’s a kind of hypnosis. People who are hypnotized can be injured, and be oblivious to the pain.

  • A third action that takes place in mass formation is the population at large becomes intolerant of dissonant voices (dissent). I imagine this is because it’s seen as interfering with the sense of social connection, and the intoxication it produces. Again, the people in the throes of this mass phenomenon do this, because they fear going back to their prior state of free-floating anxiety, and social disconnection.

He indicated that mass formation is not widely known among psychologists, and so they are not aware of it happening in their world today.

Desmet was asked by Fuellmich what characterizes the totalitarian leaders, “What kind of person does this?”

Desmet said:

  • They don’t have the same kind of mentality as common criminals, even though their ideology is criminal. They know how to follow their society’s social rules.
  • When they are in power, they make up their own rules for the society, and follow them.
  • They are true believers in their ideologies, and they believe they are creating a paradise.
  • They feel like it’s acceptable to sacrifice a portion of their own population to realize their paradise.

He recommended people read two books by Hannah Arendt.

He said that from what’s been observed of such “mass formation” events, it’s impossible to wake up masses of people who are under the influence of it, unless by some catastrophic event. However, he also said free speech is extremely important for tamping down the severity of the crimes committed under these conditions:

You can make the hypnosis less deep by continuing to talk, and that’s what we all have to do.

Dr. Justus Hoffmann, an attorney, made the point that what makes totalitarian regimes so attractive in the short term is that totalitarians create very orderly societies. He said this makes it difficult to talk to people about the danger of such a regime, because they say, “Look, there’s no chaos. … We still have rule of law. Everything’s fine.” Such regimes have a very strict rule of law. He contended they create more law, more agencies, more policing, etc.

Desmet disagreed, saying that there’s a distinction. Totalitarians do not enforce law, they impose rules, and they’re rules that they make up from moment to moment. There is no consistency either in the rules or in how they are created.

Desmet talked about a typical distribution with the mass phenomenon: Thirty percent of the population are taken with the story that explains their sense of anxiety, and they create an atmosphere of fear around contradicting that story. Another 40% quietly do not accept the story, but are too afraid to publicly contradict it. There’s another 25-30% who do not accept the story, and speak out.

There was some speculation about what kind of people were resistant to mass formation/the totalitarian drive, and those who are most amenable to it. Desmet seemed more sure about the people who are most likely to join in the mass formation; that they are people who believe they are doing everything to help “the others” (probably society).

Everything is done out of a sense of citizenship. They do it all for the [collective], for the community. They’re convinced of that. That’s also what Hitler said, “I expect of every German that he sacrifices his life without hesitation,” he said, “for the German people.” … That is what Stalin said [as well].

Fuellmich pointed out that it’s been his experience that people who have less formal education, but work a trade, are very educated on weighty issues, and are far more open to having discussions about them than are academics. Desmet responded that Gustave Le Bon saw this in the 19th century, that the higher degree of education you have, the more susceptible you are to mass formation. Viviane Fischer, an attorney, asked why that is. Desmet said that it comes down to what is seen as the purpose of education: Whether it’s an exercise in learning to think for yourself, or whether it’s to convince you to think about everybody else, before yourself. Wodarg added, “You learn to obey.”

They got back to the question of, “What do we do about this?” Desmet threw another activity on the table: Humor is important to “breaking the spell” of mass formation. He said that mass formation relies upon everyone recognizing one authority. The more that someone gives authority to a figure, the more susceptible they are to being hypnotized by that figure. He said it’s important for the humor to be gentle and polite. If it’s too aggressive, it arouses the aggression of the masses. This kind of gentle, polite humor is a good antidote to mass formation, because it subtly delegitimizes the authority without arousing the aggressive response from the masses.

Desmet came back to the topic of cause, though, saying that even if many people come out of their hypnosis in the current sense of crisis, they will fall prey to some other sense of crisis in the future, and go right back into this behavior of mass formation, because what causes this behavior is their sense of anxiousness, disappointment in life, lack of social connection, and lack of sense-making.

He said it’s his educated opinion that a root cause is our culture’s materialistic, mechanistic view of ourselves that causes destruction of our social structures; of social connection, and the feeling in ourselves that “life makes sense.” If you hold the belief that you are just a machine, then by definition, this implies that life is senseless. He asks, what’s the sense of a life that is reduced to just a little part of the larger machine of the Universe? If that’s all we are, then one can reasonably ask what is the point of having meaningful social relations? You don’t have to follow ethical principles, because “the machine” governs what happens and doesn’t happen. There is no right or wrong way for anything to happen. It just is, and will be. This concept destroys one’s “psychological energy,” as he put it, one’s sense of social connectedness, and you end up in this free-floating anxiety he’s talked about.

Wodarg added that in this concept of being “a small piece in the bigger machine,” you also get this sense that you’re a burden for the machine, “It doesn’t need you.” He said the healthier materialist concept is that “You are ‘the machine’. You’re a wonder.” You are not a small cog in the larger mechanism. You are the universe that’s worth something.

Fischer prompted Desmet to take the long view, that the 40% “silent majority” will eventually “run the other way” from this totalitarianism, because a constant in history is that totalitarianism is always self-destructive. The 30% that are hypnotized will never snap out of their delusion, no matter how much destruction happens as the result of their actions and decisions.

Fischer asked whether any sort of positive reward bestowed by the authority on the compliant was necessary to get people to buy in to the totalitarianism. Desmet said that Le Bon observed that the masses prefer harsh and strict leaders who are cruel to their own people. I’m not sure what he meant by “harsh and cruel,” because it hasn’t been my experience that the majority prefers what I think “harsh and cruel” means.

Fischer noted at the end of their discussion that they were livestreaming on a bunch of video services, including YouTube, but that YouTube took down its livestream. That says a lot about them, doesn’t it?…

I’ll end with a nice summary of Solzhenitsyn’s “The Gulag Archipelago,” which covers a couple of the same points:

Edit 10/13/2021: Dr. Peter McCullough, who has been treating Covid patients, has been observing what Dr. Desmet describes, with fellow doctors, and other professionals. He calls it a “trance.” I encourage you to listen to what he says starting at 1:10:00 in the following video, because he illustrated what Desmet was talking about:

Vaccination—Concerns, Challenges, and Questions Dr Peter McCullough

The rest of the video is worth watching, as well, but it’s solely on the data relating to Covid treatment, and what therapies have been shown to work.

Some good comments from Alan Kay on goals for improvement in programming systems

This came out of a question about what prompted him to invent object-oriented programming, but he gets into some goals that have not been implemented yet, which sound like good ideas.


The earliest programming was in the forms of the earliest computers: to find resources in memory — usually numbers, or numbers standing for something (like a text character) — and doing something with them: often changing them or making something and putting the results in memory. Control was done by simple instructions that could test and compare, and branch to one part of code or another: often to a part of code that had already been done to create a loop. An instruction not in the hardware could be simulated if there was a way to branch and capture where the branch originated, thus producing the idea of “subroutine” (first used in full glory with a “library” on arguably the first working programmable computer, the EDSAC by Maurice Wilkes at Cambridge, late 40s).
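The branch-and-capture mechanism described above can be sketched with a toy interpreter. Everything here (the instruction names, the little program) is my own invention for illustration; it is not EDSAC code. The point is only that a machine with no "call" instruction can still have subroutines, if the caller captures its own address before branching:

```python
# Toy illustration of branch-and-capture: the machine has no CALL
# instruction, so the caller records the address after itself into a
# "link", branches to the subroutine, and the subroutine's final
# instruction jumps back to the captured address.
def run(program):
    pc, link, acc = 0, None, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":                 # acc += arg
            acc += arg
            pc += 1
        elif op == "BRANCH_AND_LINK":   # capture return address, then branch
            link = pc + 1
            pc = arg
        elif op == "RETURN":            # jump back to the captured address
            pc = link
        elif op == "HALT":
            return acc
    return acc

# Main code at addresses 0-2; an "add 5" subroutine at addresses 3-4.
prog = [
    ("ADD", 5),               # 0: acc = 5
    ("BRANCH_AND_LINK", 3),   # 1: "call" the subroutine at address 3
    ("HALT", None),           # 2: done
    ("ADD", 5),               # 3: subroutine body: add 5 to acc
    ("RETURN", None),         # 4: jump back to the caller
]
print(run(prog))  # 10
```

With only one link register, a subroutine cannot call another without losing its own return address, which is exactly the limitation that later pushed machines toward return-address stacks.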

Beginning programming was and is most often taught in this style, and it has been noted that the first programming language and style one learns tends to manifest most deeply throughout the rest of a career. Not a lot has changed 70 years later, partly because many languages started off with this style in mind, and thus the new languages were attempts to make this style more convenient to use (Lisp and APL were different early interesting exceptions).

Another way to look at this is to note that (1) the degrees of freedom of a computer, and of the possible problems to be solved, coupled with the limitations of the human mind, means that anticipating all the tools needed will be essentially impossible. This means that *how to define new things* becomes more and more important, and can start to dominate the “do this, do that” style.

Along with this (2) soon came *systems* — dynamic relationships “larger” than simple programs. Programs are simple systems, but the idea doesn’t scale up very well to deal with qualitatively new properties that arise. Historically, this never quite subsumed “programming” (and the teaching of “programming”). It gave rise to a different group of computerists and did not affect “ordinary programming” very much.

I think it is fair to say today that the majority of programmers reflect this history: most do not regard *definition* as a central part of their job, and most do not exhibit “systems consciousness” in their designs and results.

I think quite a bit of this has to do with the ways programming is taught today (more about this gets even more off topic).

Looking at this, the earliest real “computer scientists” could see that e.g. subroutines were an extension mechanism, but they were weak — for example, to make a new kind of “data structure” was fragile and could not be made a real extension to the language. This led to a search for “extensible languages”.

Other computer scientists could see that “data structures” were not a great idea e.g. sending a data structure somewhere required the receiving programmer to know many details, and the structure itself might not fit well on a different kind of computer. A vanilla data structure was vulnerable to having a field changed by an assignment statement “somewhere” in the code by “somebody”. And so forth.
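The fragility being described can be made concrete with a small Python sketch (the account example and all its names are mine, not Kay's): a plain record can have a field smashed by any assignment anywhere in the program, while an encapsulated object only exposes operations that preserve its invariant.

```python
from dataclasses import dataclass

# A "vanilla" data structure: any code, anywhere, can smash a field.
@dataclass
class Account:
    balance: int

acct = Account(balance=100)
acct.balance = -999   # "somebody", "somewhere": nothing stops this

# Encapsulated alternative: the field is reachable only through
# operations that preserve the invariant (balance never negative).
class GuardedAccount:
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance

g = GuardedAccount(100)
g.withdraw(30)
print(g.balance)          # 70
try:
    g.withdraw(1000)
except ValueError:
    print("rejected")     # the invariant held
```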

Most of the programmers were used to the idea of commanding changes to “data”, and so some of the fixes were mechanisms that allowed data structures to be invented and defined: one of the major styles today is “abstract data structures”.

Along with all this were several ideas for dealing with simple smashing of variables (and the essential "variable" that is a data field). This was scattershot and reinvented in different ways. The most prominent way in strong use today is for very large structures: "data bases" that are controlled by the intermediaries of "atomic transactions" and "versioning", which effectively wrap the state with many procedures to ensure that a valid history is kept and relationships between parts of the data base are not violated. Eventually, it was realized that "data" didn't capture all the important questions that could be asked — for example: "date of birth" could be "data", but "age of" had to be computed on the fly. This was originally done externally; for some data bases, procedures could be included. (This required a "data base" to eventually be able to do what a whole computer could do — maybe "data" is not the operative idea here, but instead "dynamic relationships relative to time" works better. If so, then the current implementations of "data bases" are poor.)
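The two ideas in that paragraph can be hinted at with a small sketch (my own illustration, using only the Python standard library): updates never overwrite state, so a valid history is always kept, and "age of" is a relationship computed relative to time, never stored as "data".

```python
from datetime import date

# Minimal sketch of an append-only versioned record: each commit
# appends a new version, so old versions remain readable and a
# valid history is always kept.
class VersionedRecord:
    def __init__(self, **fields):
        self._versions = [dict(fields)]

    def commit(self, **changes):
        new = dict(self._versions[-1])
        new.update(changes)
        self._versions.append(new)   # old versions are never touched

    def at_version(self, n):
        return self._versions[n]

    @property
    def current(self):
        return self._versions[-1]

    # "Age of" is a dynamic relationship relative to time,
    # computed on the fly rather than stored as a field.
    def age_on(self, today):
        born = self.current["date_of_birth"]
        return today.year - born.year - (
            (today.month, today.day) < (born.month, born.day))

person = VersionedRecord(name="Ada", date_of_birth=date(1815, 12, 10))
person.commit(name="Ada Lovelace")
print(person.at_version(0)["name"])        # "Ada": history preserved
print(person.age_on(date(1852, 11, 27)))   # 36
```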

In computer terms, modern “data bases” are subsets of the idea of a “server”.

Another line of thought — which goes back before there were workable computers — is that (3) certain easy-enough-to-make computers can simulate any kind of mechanism/computer that can be thought of. This partly led to several landmark early systems such as Sketchpad, and the language Simula.

If you take in the above, and carry it to the extreme, it’s worth noting that only one abstract idea is needed to make anything and everything else: the notion of “computer” itself. Every other kind of thing can — by definition — be represented in terms of a virtual computer. These entities (I’m sorry I used the term “objects” for them) are used like servers, and mimic the behaviors (of literally any kind) that are desired.
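A minimal sketch of this “object as virtual computer/server” idea, using a hypothetical Counter of my own invention:

```python
# Each "object" is a little virtual computer: it receives messages
# and decides for itself how to respond. Nothing outside it can
# legitimately touch its state directly.
class Counter:
    def __init__(self):
        self._count = 0  # private state of this virtual machine

    def receive(self, message):
        # a single entry point, like a server handling requests
        if message == "increment":
            self._count += 1
            return self._count
        if message == "read":
            return self._count
        raise ValueError(f"does not understand: {message}")

c = Counter()
c.receive("increment")
c.receive("increment")
print(c.receive("read"))  # 2
```

What the object does in response to a message is entirely its own business; it could count, log, forward the request to another machine, or simulate something else entirely, and the sender would be none the wiser.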

A key point here is that just having practical means for creating objects doesn’t indicate what should be simulated with them. And here is where the actual history has been, and continues to be, unfortunate. The most common use of the idea — still today — has been to simulate old familiar ideas such as procedures and data structures, complete with imperative destructive commands to bash state. This again goes back partly to the way programming is still taught, and to the rather high percentage of programmers today who are uncomfortable with design and “meta”.

For example, since “an object” is a completely encapsulated virtual computer, it could be set up to act like a transactional, versioned data base. Or something much better and more useful than that.

Note that most interesting representations of things do “change over time” so something has to be done to deal with this problem. So-called “Functional Programming” has to add features — e.g. “monads” — to allow state to advance “in a more functional way”. This might not be the nicest way to deal with this problem, but something does have to be done.

And note that if you have gotten religious about “FP”, then it is really easy to make a pure FP system and language by using the universal definitional properties of “real objects” (being able to define what you want is the deep main idea!). But before you do, it will be good to ponder in larger terms.

As Bob Barton once remarked “Good ideas don’t often scale” — and neither do most simple programming paradigms. This means that another of the new things that can be built with “objects” — but have to be invented first — are less fragile ways to organize systems.

Along the Barton “qualitative changes” line of hints, one could start contemplating a kind of “request for goals” kind of organization where the semantics of the worlds being dealt with are more richly human and the main center of discourse is about the “whats that are needed” rather than the “hows” that the system ultimately uses.

This was one of the impulses behind some of the HLLs in the 50s and 60s, but the field gave up too early. The original idea behind a “compiler” was to take a “what” and do the work necessary to find and synthesize the “hows” to accomplish the “what”. 60 years ago the “whats” were limited enough to allow compilers to find the “hows”. But the field decided to sit on these and not uplift the “whats” that would require the compilers to do much more work and use more knowledge to synthesize the “hows”. This is another way to miss out on the changes of scaling.

In a “real object language” — with “universal objects” — it should be possible to define new ways to program and define and design any new ideas in computing — I think this is necessary, and that it has to be done “as a language” in order to be graceful enough to be learnable and usable.

Historically and psychologically, *tools* have had a somewhat separate status from what is made with tools (and the people who make tools, and make tools to help make tools, etc., are also somewhat separate from the average maker). But a computer is always also a tool-making shop and factory; you don’t have to go to the hardware store to buy a hammer, etc. This requires a change in mindset in order to really do computing.

At Xerox Parc in the 70s, we made a “real object language” to walk both sides of the street: (a) we wanted to invent and make a whole graphical personal computing system, and (b) we wanted to be able to easily remake the tools we used for this as we learned more. I.e., we wanted to “co-evolve” our ignorance in both areas to reflect our increased understanding. We were motivated both by “beauty” and by the fact that we had to go super high level in order to fit our big ideas into the tiny Alto.

This process resulted in five languages, one every two years (thanks to the amazing Dan Ingalls and his colleagues), with one deep qualitative change between the 2nd and 3rd languages. That these languages could be useful “right away” was due to the way they were made (and partly because the languages contained considerable facilities for improving and changing themselves). To make progress on the personal computing parts, the constructs made in the languages had to be extremely high level so that the system could be rewritten and reformulated every few years.

The 5th version of this process was released to the public in the 80s, and to our enormous surprise was not qualitatively improved again, despite the fact that it included the tools and the reflective capabilities to do this. The general programmers used the language as though it came “right” from a vendor, and chose not to delve into even higher level semantics that could help with the new problems brought by the new scalings of Moore’s Law. (This was critical because there were some things we didn’t do at Parc because of the scalings that would be needed to deal with “10 years later” scalings, etc.)

To answer the current question after the “long wind” here: there are usually enough things “not right enough” in computing to need new inventions to help. Most people try to patch their favorite ways of doing things. A few will try to raise the outlook and come up with new ways to look at things. The deep “object” idea, being one of “universal definition” can be used for both purposes. Using it for the former tends to just put off real trouble a little bit. I think programming is in real trouble, and needs another round of deep rethinking and reinventing. Good results from this will be relatively easy to model using “real objects”.

The sociology of our science

Philosopher Matthew Crawford talked in the interview below about what science in its technical practice has become, at least in certain fields, which takes it away from its telos. He blames what I’ve heard termed “big science,” because it’s made scientific research big and bureaucratic, requiring the raising of lots of money to build equipment, and to maintain it, so that scientists can push the boundaries of what we know. The problem is that the nature of the organization necessary to build this apparatus is political. And so politics is going to be a part of what’s happening with it, while scientists are trying to work within these organizations to carry out their research. It inevitably comes into their research, as part of these organizations, because the power dynamic within them favors the political actors over the search for truth.

What this suggests to me is that “big science” has a point of diminishing returns, where once you reach a tipping point, the endeavor becomes more political than scientific, and so there really isn’t a point in going further in the growth of these particular, ostensibly scientific institutions, because the political focus of the organization just increases as it grows, crowding out the science that it was originally intended to foster. He also discussed some dynamics that have occurred with the climate issue, specifically, as illustrations of this social effect he was talking about.

Related posts:

The dangerous brew of politics, religion, technology, and the good name of science

Psychic elephants and evolutionary psychology

Reconsidering Darwinian evolution

I liked reading Giving Up Darwin: A fond farewell to a brilliant and beautiful theory, by David Gelernter, because it’s the most thought-provoking article I’ve read in a long time. As I read it, I really wondered if he was going to come out as a believer in Intelligent Design, because he says that theory makes some important points about Darwinian evolution, but that’s not where he goes. He doesn’t reveal this until the end, where he pokes holes in ID, as well. Really what he says is we need a new theory. Some aspects of Darwin’s theory of evolution still hold, but some need reconsideration, because accumulating evidence falsifies them.

I’d think to a scientist, this is an exciting prospect, because it means there’s something significant to discover about the morphogenesis of the species we see in the fossil record.

Why the computer revolution hasn’t happened yet

I consider this a follow-up to another post I wrote called Reviving programming as literacy. I wrote the following answer in 2020 to a question on Quora, asking What happened in the past 80 years that produced a much cruder world than the rich one that science, engineering, math, tinkering, and systems-thinking experts in the ARPA-IPTO/PARC community predicted? I got my first upvote from Alan Kay for it, which I take as a “seal of approval” that it must’ve been a pretty good account of what’s happened, or that it at least has a good insight or two. I thought I’d share it here.

I think the short answer is that the world proceeded with its own momentum, despite what ARPA/PARC offered. Nothing “happened,” which is to say that nothing in society was fundamentally changed by it. Many would ask how one could say that, since the claim would be made that the computer, and some of what was invented at PARC, was “transformative” to our society. But what that claim really refers to is the optimization of existing processes and goals, not the changing of fundamental assumptions about what’s possible, given what computing represents as a promising opportunity to explore ideas about system processes. It’s true that optimization opens up possibilities that would be difficult to achieve otherwise, but my point is that what ARPA/PARC anticipated was that computing would help us to think as no humans have thought before, not just do as no humans have done before. These are not by any means the same transformations. The former was what ARPA/PARC was after, and my understanding is this is what many of the researchers experienced. That experience, though, didn’t get much outside of “the lab,” and while it has expanded into the sciences, becoming a fundamentally important tool for scientists to do their work, it is still “in the lab.”

What Alan Kay, one of the ARPA/PARC researchers, realized was that there was more work to be done to lay the groundwork for it. He, Seymour Papert, Jerome Bruner, and others tried to bootstrap some processes in society, which Kay thought would do that. Their efforts failed, though: they either didn’t last long, or the intent was “lost in translation,” lasting for many years doing something that ended up being unproductive, and failing later.

The buzzsaw they ran into was the expectations and understandings of parents and educators regarding what education was about. A complaint I’ve heard from Kay is that educators tend not to have a good grasp of what math and science are, even if they teach those subjects. So, with the approach that was being used by Kay and others, it was impossible to get the fundamental point across in a way that would scale.

I remember listening to a small presentation Kay gave 11 years ago, where he talked about a study on education reform that was done in the UK years earlier. It was found that in order for any order-of-magnitude improvement in the curriculum to take hold successfully in the study groups, a change in the curriculum must also involve the parents, as well as the children and teachers, because parents fundamentally want to be able to help their kids with their homework. If parents can’t understand what the curriculum is going after, why the change is being made, understand the material being given to their kids, and buy into the benefits of it, they resist it intensely, and the reform effort fails. So, any improvement in education really requires educating the community in and around schools. Just treating the school system as the authority, and the teachers as transmitters of knowledge to children, without involving the parents, did not work.

A natural tendency among parents and educators is to transmit the culture in which they were raised to the children, thus providing continuity. This includes what and how they were taught in school. This is not to say that any change from that is good, but there is resistance to anything but that. To “live in the future,” and help students get there, requires acquiring some mental tools to realize that you are in a context, and that other contexts, some of them more powerful, are possible.

A warning about our current state

I have been paying a lot of attention to a growing ideology in our country for the past couple years, almost to the exclusion of my CS study. It’s not something I enjoy, but I cannot avoid a train that’s coming down the track at me.

It’s an ideology that masquerades as a truth teller, revealing the “true nature” of our country; revealing it as an oppressor of anyone who lacks “whiteness,” as a clear and present threat to such people. It’s been setting up an alienating war focus against the society we all inhabit. Looking at history, it’s a trend that should really worry us. The cultural markers described here have led to societal suicide in countries around the world, when they are allowed to metastasize. There’s growing awareness of it in my country, but I’m not sure if it’s enough yet.

One perspective on it, in its immediate form, is described in a segment on Tucker Carlson Tonight, from January 19, 2021.

James Lindsay, Peter Boghossian, and Helen Pluckrose have done a lot to analyze the history of this ideology, which uses postmodernism as a basis. It’s commonly called by various names: intersectionality, critical race theory, etc. It has gained power in our society, and these three researchers have tried to expose the shortcomings of this belief system, which had its genesis inside our academic institutions.

Lindsay founded New Discourses to talk about his findings, and to help educate about them.

The first video I’ve put here from New Discourses gives a heads-up about what will be a driving force in government policy, and in many quarters, corporate policy, and why it’s happening. It dovetails with what Carlson described above.

Another video by Lindsay, below, gives background on all this, and why a knowledge of history should cause alarm about the rise of this trend: It’s the kind of thing that’s led to oppression of out-groups, and mass death. In this case, the out-group that is being set up for recriminations is anyone who displays “whiteness,” or “white supremacy,” and the interesting thing is it’s not really about race at all. It’s about an outlook on life. What’s being targeted is belief systems other than the one I’m talking about here.

In this ideology, the very idea of discrimination has been so twisted out of shape that it doesn’t mean what it used to, which is judging someone solely based on things they can’t change (as if you could tell all about them just from looking at them). That was the sin that our society used to know. Now, “whiteness” or “white supremacy” just means something like doing well in school, or on the job. Though, white people in particular are suspect by default, in this ideology. However, it doesn’t even matter if your skin color is white. If you’re not, and you display qualities like diligence, a work ethic, etc., you’re charged with “internalizing whiteness.” This actually has a long history. I can remember twenty years ago hearing about certain blacks being characterized as “acting white,” just because they did well in school, for example. It was used as a smear then, too.

As described in the videos above, there are people in power now who want to weaponize this ideology. If you’ve been paying attention, you’ve seen this for years, with people losing work over this stuff.

I’m not posting this to be a voice of doom. I’m putting this here to spread the message, so that people of good will can do their part to stop this early, and peacefully. Perhaps there is still time.

This ideology is not going to solve discrimination, though it markets itself as the solution. It is only going to enhance racial divides; the dehumanizing of certain people, which if allowed to reach its logical conclusion, will lead to the worst horror.

The problem with trying to address problems like this is it may not be evident to you. It may seem like a small problem now, if you’ve even heard of it, or seen someone negatively affected by it. All too often, if I hear people talk about it, they say they choose to keep quiet. They’re hoping to avoid the negative consequences of stepping out of line, like running away from a grizzly bear, thinking it will eat the slower ones first. A problem with that strategy is this ideology is not static. It changes who and what it targets, always looking to take someone else down.

What should be clear by now is that this is not going to just blow over. The more we stay out of its way, the more it grows. The time to confront it is now, however you think best to do it, if it is in your midst.

I want to be clear. This is not a call for people to separate from the rest of society and “find friends” in racial identity groups of their own. I think that would be one of the worst things. Though, that could very well happen, as a result of this ideology putting people of certain races down at every turn. What I’m calling for is for people to assert America’s civic virtues of tolerance, and equal rights. Both are anathema to this ideology, which sees them through a cynical eye, saying it’s all a cover for oppression, using disparity stats to make the point. Equal outcomes are what’s demanded by this ideology. Its promoters believe that lowering standards will accomplish the desired goal, since lower standards are easier for more people to meet. They don’t say that directly, but they have various ways of accomplishing this goal, calling it by various names, such as “diversity,” “equity,” “positive discrimination,” “anti-racism,” etc. Any deviation from this strategy is seen as discriminatory, and therefore wrong. The problem is this lowering of standards for the sake of equal outcomes squashes all other priorities that make an advanced society like ours function.

Edit 2/1/2021: Bari Weiss wrote a great article with some more suggestions for reversing this trend, in 10 ways to fight back against woke culture.

One could chalk up what I’m describing as the malaise of a self-loathing society, but every time I’ve looked at it, what’s really at root is laziness; the unwillingness to do the hard work of improving ourselves, and the society around us. This results in us “pressing the ‘easy’ button” on hard problems, which results in lots of misperceptions about each other, and our world. It makes for a worse future.

Here’s a description of that from Benjamin Boyce.

Another “culprit,” if you will, is a fundamental belief that all people are owed all good things that society has to offer, because we all are equal, and we should all have a say. This is conflating things that don’t go together well, but democratic societies throughout history have shown a proclivity to believe this. Such societies need institutions that work against this tendency, so that they can continue to function. Right now, the only one that really does is private enterprise.

This next video is about Lindsay’s article Psychopathy and the Origins of Totalitarianism. The important take-away from this is to understand the distinction between reality and pseudo-reality, and the social dynamics that enforce pseudo-realities, and enable them to spread far and wide. Being conscious of this enables us to be aware of when a pseudo-reality is being put into effect, and opens opportunities to stop it. This relates intimately to Lindsay’s first video here.

Edit 3/22/2021: Re: Lindsay’s point that Very Smart People are the useful idiots, I thought I’d illustrate. 🙂 The following is the battle of wits from the movie, “The Princess Bride.”

Edit 8/2/2021: This segment with Tucker Carlson from 2/2/21 seemed like a good follow-up to what was covered here:

Related articles:

The result of capitalizing on peak envy

Our political pathology and its consequences

Trying to arrive at clarity on the state of higher education

Explaining the landscape of the internet

After having some conversations about what’s transpired with social media, I’ve been prompted to write this, because it seems like there’s confusion about who has access to what, and where.

There’s much gnashing of teeth (and celebrating) over social media banning people, and technology companies banning access to certain apps. I have less worry about that, because I’ve been on the internet since 1989. I have some understanding of how these things work. Though, perhaps I should worry about the negative social effects of these bans, as many keep warning. What I hope to do with this post is educate people about what’s really going on, from a technical perspective, and to reassure people who want to continue getting the content they want, regardless of what the lords and ladies of social media and technology think and do.

First of all, social media and (specific) technology companies are not the internet.

This has been obvious to me, since I knew the internet way before these companies had anything to do with it. So, when I started hearing complaints about bans on social media and app stores, it didn’t seem like a big deal to me. I could just find the people I followed, and services I connected to through social media, through the web, if I wanted. If they are big enough personalities, they often have their own websites.

However, after talking to some tech-connected, but less tech-savvy people, I’ve come to the uncomfortable conclusion that a lot of people don’t know this. They think that people are being banned off the internet, so that nobody can hear them. These bans are disruptive, in the sense that it takes work to move what people have shared to another platform. The move is disruptive to our social networks (the people with whom we feel connected emotionally), and takes time for people to catch on to where others went. It’s a setback, but in reality, it’s not a death blow.

Each social media platform is in fact a closed, proprietary network that uses the internet as the way for people to reach it. To some extent, these social media networks allow pathways out of their network to the internet, or to other social networks. What the most popular social media platforms have been doing is controlling those entry and exit points, and also exercising control over the relative visibility of what remains on their platform. The complaints really boil down to the fact that these services don’t operate by any discernible rules. Their decisions and punishments are arbitrary and capricious.

The app stores also exercise some control, if you use apps from them, because, as we’ve seen, those stores will say they don’t want to carry certain ones. Some of the apps that have “gotten into trouble” access social media, and exclusively allow access to those platforms.

My sense has been that when people think they’re being denied access to someone online, or some service, because social media has suspended or banned them, it’s because the complainers are using phone/tablet apps, which access specific services on the internet. It seems like people have preferred these apps to using a web browser (maybe because they think that’s old hat). They feel like the apps make accessing these services easier. The downside (if you consider it a downside) is that if that’s all you use, it strengthens the control those services exercise over what you can see, because those apps don’t allow you to see anything outside of their services. (You have to switch apps to switch to another service.)

If you want pretty much free sharing with potentially large numbers of others, you should consider using services like Parler, MeWe, and if you’re into video, something like BitChute. My understanding is they’re committed to free expression, free speech, with minimal restrictions. Facebook, Twitter, and to some extent, YouTube, are not what they used to be. They’re increasingly curated, using minders to watch what is posted, so that nobody “gets any ideas” (like they’d know what those are).

To get out of the limitations I’ve described, I’d really recommend going somewhat “old school”: using a web browser, and using a search engine to find the content you want (probably something like DuckDuckGo, rather than Google, since Google uses many of the same tactics as Facebook, Twitter, etc.). I’d also recommend using e-mail to stay in contact with people, at least as a backup, rather than relying so much on social media groups, and private/direct messaging. A problem with this, though, is that a lot of people may have e-mail addresses, but don’t use them. So, staying in touch with individuals or groups via e-mail can be a challenge.

I’m writing this primarily for an American audience. Though, what I’m saying applies to some other parts of the world, as well.

The internet is still a free place (while it lasts). If you want free access to information and people, spend some time on the web, as well as on social media. Get used to both. Get outside the closed networks.

Update: News arrived shortly after I posted this that Parler may go offline for a while, if not permanently. Amazon’s cloud service pulled the plug on them today. Unfortunately, Parler put all its eggs in one basket. This is something I was worried about with cloud services many years ago, when they were starting to become popular. Not that I expected service would be pulled over politics, but rather for any reason.

Edit 2/1/2021: I thought this interview with Jeff Brown of Brownstone Research would be interesting to readers. Part of what I think is good here is that he gets into some reasons that this censorship is happening. It ties into something I’ve suspected: There’s a large cultural component to this. These tech companies have in a sense been overrun with people with a monolithic ideology, both inside and outside them. From what he describes, these people are pushing their weight around, to force these companies into taking these actions.

What he describes is reminiscent of what happened with Brendan Eich at Mozilla several years ago, except it’s happening on a much larger scale. In that case, Eich, the CEO of Mozilla, was forced out by the illegal disclosure (doxxing) of a campaign contribution he had made something like 6 years earlier, and by the development community around Mozilla, which threatened retributive actions against the company if he didn’t leave. That was it, as far as I know. (Though, that was bad enough.) In this case, people by the tens of thousands are being kicked out of these platforms for similar reasons. And, of course, as we know, Parler was taken down, seemingly because President Trump announced he was moving over to that platform, after being forcefully taken off Twitter.

Brown talked about part of what could be done about this culture of internet censorship, anticipating what might be down the road: censorship of websites at the level of the internet service provider (commonly called an ISP, your connection to the internet), and/or the Domain Name Service (commonly called DNS). Though, I don’t see that on the immediate horizon, partly because the courts would frown on it, in many cases, since many ISPs are already regulated as utilities, common carriers. They can’t just go around blocking content without exposing themselves to lawsuits; not yet, anyway.

For those not familiar with DNS, it acts as a “phone directory” for the internet. When you use a web browser, for example, and give it the name of a website (the hostname part of any URL), your browser contacts the domain name service at your internet service provider. DNS returns what’s called the Internet Protocol address (commonly referred to as an “IP address”) for that name to your computer or device. This address is analogous to a phone number for a person or business, which your browser then uses to connect with, and talk to, that service.
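You can see that lookup directly with Python’s standard library. (I’m using “localhost” here so the example resolves without leaving your machine; a real hostname would query your configured DNS server.)

```python
import socket

# DNS in one line: give it a name, get back an IP address --
# the "phone directory" lookup a browser does before connecting.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

Everything else the browser does, fetching pages, posting forms, happens over a connection to that returned address; the name itself never travels past this lookup step.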

Brown talked about a solution that uses blockchain technology, called Handshake, which acts as a peer-to-peer replacement for DNS, and as something akin to a Virtual Private Network (commonly called a VPN). This (a) makes your URL lookups independent of your ISP, and (b) hides your internet connections from it, thus making ISP/DNS censorship more difficult to pull off.

The “bad news,” Brown said, is that it takes technical know-how to set Handshake up, right now. It hasn’t been fashioned into an easy-to-use package. He also said you use a special browser with it, which I guess makes sense, because your existing browser is set up only to use a domain name service that’s set up by your ISP.

This is not to say that if you went this route you’d need to use a different ISP. He’s saying you’d keep your existing internet connection, but a lot of your internet activity would be carried out independently of your ISP’s domain name service, and your connections to internet sites would be kept hidden. As an added benefit, this setup would also increase your online privacy.

It was encouraging to hear about this, since I’ve been anticipating for a few years that our presence online would move to a peer-to-peer model, rather than client-server, which we’ve been using for ages. More to the point, what I’ve been anticipating is that we’d get away from social media services altogether, by going to a peer-to-peer model for making social connections. This would allow us to share content independently of needing accounts on all sorts of platforms, which are controlled by people with agendas that might be different from what we want, individually.

We really should have that, because that’s more like real life! Think about it: When you meet people, or go to a community festival, or patronize a business, or do any of the innumerable things that people in a society do, do you have to sign up with a company to do each one of those activities? Why should we be doing that online? We should be able to make our own personal connections, form our own groups, make payments, without needing intermediaries. Sure, to do some things, you’d need to sign up with a service to go to conferences, make appointments, go to exclusive events, as a few examples, but so much of our digital presence should model what we’re able to do without it.

Perspective on disease and public policy

It has become apparent to me that just as Darwin’s theory of natural selection was once controversial in society, particularly as it applied to human origins, modern science continues to upset people. We also have this confusing thing going on where our society claims to own scientific knowledge, when science disagrees with what it’s doing, and so society fails to reach its stated goals, and we blame non-factors for the failure. We are acting quite superstitious. What doesn’t help is that we have people with the title of scientist who are espousing knowledge not based in science, but which is accepted as “official science” that everyone is expected to believe and follow. It’s not unusual, though. Thomas Sowell wrote about it in his book, “Intellectuals and Society,” saying that some politicians reverse the process of discovery. Rather than following the evidence, they hire experts who will massage the evidence to fit the politicians’ desired agenda, and basically rubber stamp it (without appearing publicly to do so). That doesn’t always happen, but it happens enough that it negatively affects our society. I’d like to share Dr. Morrissey’s contrasting account about his knowledge and experience on disease, and what is actually being enforced in government policy in various places around the world. It’s a tragic story, and I think it deserves to be highlighted, because it’s really a choice between knowledge and ignorance, and the joy and suffering that each brings.

The second topic that is not being addressed in our public discussions, because it seems verboten, is risk. What I see frequently is that too many people in advanced societies are not mature enough to discuss it. Many seem inexperienced in taking risks, and in what taking them means. They may only be experienced in the notion of tragedy, and any notion of taking a risk disgusts them, though in fact they take risks all the time. I’ve seen an increasing trend since the 1990s of people saying that all risk must be eliminated. That is the equivalent of demanding the defeat of gravity. It can’t be done. What are we willing to sacrifice on the altar of this impossibility?

Alan Kay on goals in education

I wanted to repost Alan Kay’s answer from Quora to the question What needs to be done in order to improve Anki to reach the promise of the Dynabook’s “teacher for every learner”?, because I think it sums up so well his thinking on education, broadly, which dovetails with some thoughts I’ve had on it.

He responded to a question about Anki, a flashcard program that uses spaced repetition to improve memorization, asking whether there is a way to improve it that would bring it in line with the goals of Kay’s original conception of the Dynabook. For people who are not familiar with the Dynabook concept, I’ll point you to a post I wrote, part of which was on Xerox PARC’s research on networked personal computing, A history lesson on government R&D, Part 3 (if you go there, scroll down to the section on “Personal computing”).
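As context for the question Kay was answering, it may help to see how little machinery spaced repetition actually involves. Below is a minimal sketch of the published SM-2 scheduling algorithm, the ancestor of the approach Anki’s scheduler descends from; the function name and state layout here are my own illustration, not Anki’s actual code.

```python
def sm2(quality, repetitions, interval, easiness):
    """One review step of the SM-2 algorithm.

    quality: self-rated recall, 0 (blackout) to 5 (perfect).
    Returns updated (repetitions, interval_in_days, easiness).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence tomorrow.
        return 0, 1, easiness
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    # Adjust the easiness factor; it never drops below 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, easiness

# A card recalled perfectly three times in a row: intervals go 1, 6, 16 days.
state = (0, 0, 2.5)  # (repetitions, interval, easiness)
for q in (5, 5, 5):
    state = sm2(q, *state)
```

The point, for Kay’s answer below, is that this is purely a memory-retention schedule: nothing in it touches the learner’s context or world-view, which is what his critique turns on.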

There’s definitely a way to think of learning as ultimately being able to remember — and every culture has found a lot of things that need to be remembered, are able to get children to eventually remember them, and have some of their behaviors be in accordance with their memories.

But if we look at history, we find large changes in context of both what kinds of things to learn, and what it means to learn them. For example, the invention of writing brought not just a huge extension of oral knowledge, but an even more critical change of context: getting literate is a qualitative change, not just a quantitative one. A large goal of “learning to read and write” is to cross that qualitative threshold.

A change so large that it is hard to think of as an extension of the prevailing thinking patterns in the era of its birth, was the invention of “modern science” less than 500 years ago. It started with the return of accurate map making of all kinds and was catalyzed by the gradual realization that much of “the world was not as it seems” and by being able to make generalizations that could generate some of the maps. One of the most important larger perspectives on this has its 400th anniversary this year: Francis Bacon’s “A new organization for knowledge” (Novum Organum Scientiarum), in which he points out that we humans have “bad brain/minds” stemming from a number of sources, including our genetics, cultures, languages, and poor teaching. He proposed a “new science” that would be a set of approaches, methods, and tools that would act as heuristics to try to get around the biases and self-generated noise from our “bad brains”.

His proposed “new science” is what today we call “science”. Before this, “science” meant “a gathering of knowledge” and “to gather it”. After this, it meant to move from knowledge to context and method and tools — and to new behaviors. This has led to not just a lot of new knowledge, but very different knowledge: qualitatively different knowledge in qualitatively different contexts.

A trap here is that ordinary language can be used to discuss these three contexts (oral, literate, scientific) whether or not the discussants actually have these contexts: things can be said and heard without the contexts behind them (this was one of Bacon’s four main “bad brain” traits).

E.g. people who can read but have not taken on the scientific world-view can think they understand what science is, and can learn and memorize many sentences “about” science, without actually touching what those sentences mean.

Just as interesting, is the difficulty — for those who have gotten literate — of touching what is really going on — especially the feelings — in oral traditional societies. Music and poetry are bridges, but important parts of the innocence and id-ness are hard to get to. “Ecstatic music” can sometimes dominate one’s literate thought — especially when performing it.

To make an analogy here: in our society, there are courses in “music appreciation” that mostly use “sentences” about “sounds”, “relationships”, “composers”, etc., in which most testing can be (and is) done via checking “the memory” of these “sentences”.

By contrast in “real deal music”, real music teachers treat their students as “growing musicians” and play with them as a large part of the guidance to help them “get larger”, to “make Technique be the servant of Art, not the master”, etc. It’s primarily an emotive art form …

A nice quote — which has many web pages — is:

“Talking about Music is like Dancing about Architecture”

(attributed to many people from Stravinsky to Frank Zappa). If you *do* music, you can barely talk about it just a little. The further away from inhabiting music, the less the words can map. (And: note that the quote brilliantly achieves a meta way to do a bit of what it says is difficult …)

The Dynabook idea — “a personal computer for children of all ages” — was primarily about aiding “growth in contexts”* and my initial ideas about it were partly about asking questions such as:

“If we make an analogy to writing/reading/printing-press, what are the qualitatively new kinds of thinking that a personal computer could help to grow?”

I got started along these lines via Seymour Papert’s ideas regarding children, mathematics and computing (my mind was blown forever). I added in ideas from McLuhan, Bruner, Montessori, etc., and … Bacon … to start thinking about how a personal computer for children could help them take on the large world-view of science as “real science learning” (not “science appreciation”).

(Via Papert), the dynamic math part of quite a bit of science can be nicely handled by inventing special programming languages for children. But science is not math — math is a way to map ideas about phenomena — so an additional and important part of learning science requires actually touching the world around us in ways that are more elemental than “sentences” — even the “consistent sentences” of maths.

In an ideal world, this would be aided by adults and older children. In the world we live in, most children never get this kind of help from older children, parents, or teachers (this is crazy, but humanity is basically “crazy”).

Another way to look at this is that — as far as science goes — it almost doesn’t matter what part of the world you are born into and grow up in: the chances of getting to touch the real thing are low everywhere.

Several of Montessori’s many deep ideas were key for me.

One is that children learn their world-view not in class but by living in that world. She said the problem was that the calendar said 20th century but their homes were 10th century. So she decided to have her school *be* the 20th century, to embody it in all the ways she could think of in the environment itself.

Another deep idea is that what is actually important is for children to do their learning by actively thinking and doing — and with verve and deep interests. She cared much more about children concentrating like crazy on something that interested them than about what that thing was. She invented “toys” that were “interesting” and let the children choose those that appealed to them (she wanted them to learn what deep concentration without interruptions was like, and that teachers were there to help and not hinder).

In other words, she wanted to help as many children as possible become much more autodidactic.

(Note that this has much in common with getting to be a deep reader or musician — it doesn’t much matter in the beginning what the titles are, what matters is learning how to stay with something difficult because you want to learn it — if the environment has been well seeded, then all will work out well. More directed choices can and will be done later. And note this is even the case with learning to speak!)

After doing many systems and interfaces over quite a few years (~25) we finally got a system that was like the Montessori toys part of her school (Etoys), and then, in a Montessori/Bruner type of school (the Open Magnet School in LA), we got to see what could be done with children, the right kinds of teachers, and a great environment to play in and with.

What never got done, was to handle the needs of children who don’t have the needed kind of peers, teachers or parents around to help them. This help is not just to answer questions but to provide a kind of “community of motivation” and “culture” that is what human beings need to be human. (The by-chance forms of this tend to be very much reverted to oral society practices because of our genetics — and much of this will be anti-modern, and even anti-civilization. This is a very difficult set of designs to pull off, especially ca. where we are now.)


To answer your question: the spirit of Anki is not close to what the Dynabook was all about. It could possibly be a technical aid for some kinds of patterning, but it seems to miss what “contexts” are all about.


Here’s another way to think of some of this stuff, and in a “crazier” fashion.

There have been a number of excellent books over the years about the idea that the “invention of prose via writing killed off ‘the gods’ ”. These are worth finding and pondering.*

The two main problems are (a) we need “the gods”; and (b) “the gods” can be very good or bad for us (“they” don’t care).

It’s worth pondering that from the perspective of science, a metaphor is a lie, but from the perspective of “the gods”, a metaphor is true.

The dilemma of our species — and ourselves — is that we have both of these processes in our brain/minds, we need them both, and we need to learn how to allow both to work**.

Learning something really deeply and fluently goes way beyond (and *before*) conscious thought — important parts of the learning are taken to where “the gods” still lurk.

And, just as you don’t make up reasons for breathing (which “the gods” also handle for you), the reasons for doing these deep things move from “reasoning” to “seasoning” — for life itself.

“Artists are people who can’t not do their Art”.

It doesn’t have to do with talent or opinion … This is a critical perspective for thinking about we humans, and what one of the facets of “identity” could mean … Consider the relationship between the quote above and children …

When you are fluent in music, much of the real-time action is being done “by ‘the gods’ “, whether playing, improvising, composing etc. You are not the same person you were when you were just getting started. Music can get pedantic and over-analyzed, but this can be banished by experiencing some of it that is so overwhelming that it can’t really be analyzed in the midst of the experience (this is not just certain “classical” pieces, but some of “pop” music can really get there as well). This produces the “oceanic feeling” that Romain Rolland asked Freud about.

“Goosebumps are a kind of ‘basic ground’ for ‘humanity’ ”

It’s interesting and important that “the gods” can be found at the grounding of very new contexts such as modern science, and that the two can be made to go together.***

To use this weirder way to look at things:

“Education has to lift us from our genetic prisons, while keeping ‘the gods’ alive and reachable”.


* For example: Eric Havelock’s “Preface To Plato”, and especially Julian Jaynes’ “The Origin of Consciousness in the Breakdown of the Bicameral Mind” (my vote for the most thought-provoking book that is perhaps a bit off).

** See Daniel Kahneman’s “Thinking, Fast and Slow”, and ponder his “System 1”.

*** See Hadamard’s “The Psychology of Invention in the Mathematical Field”, and Koestler’s “The Act of Creation”.
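A side note on Kay’s remark that “the dynamic math part of quite a bit of science can be nicely handled by inventing special programming languages for children”: the canonical example is Papert’s Logo, whose turtle lets a child discover, for instance, that a circle is just “go forward a little, turn a little,” repeated. Here is a minimal sketch of that idea in Python; the Turtle class is my own toy stand-in, not Logo itself.

```python
import math

# A toy "turtle" in the spirit of Papert's Logo. The classic insight:
# repeating "forward a little, turn a little" traces a circle, which is
# differential geometry a child can act out with their own body.

class Turtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees

    def forward(self, distance):
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))

    def right(self, angle):
        self.heading -= angle

t = Turtle()
for _ in range(360):   # Logo's REPEAT 360 [FORWARD 1 RIGHT 1]
    t.forward(1)
    t.right(1)

# The path closes: the turtle ends up (almost exactly) back where it started.
```

The turtle’s position after the loop is within floating-point error of the origin, which is the kind of “touching the math” (rather than memorizing sentences about it) that Kay is pointing at.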