On the necessity of a proper respect for the irrational in the pursuit of truth

I’m featuring a video by Dr. Steve Turley, because I think he makes an important point. He discusses what I would call moral education, something I too have concluded is missing from education. The video is from Apr. 11, 2017. He uses the uproar at the University of Missouri (“Mizzou”) as the lead-in to this topic.

We need education in how to reason, and in what knowledge is. We need education in how to categorize modes of thought, and in the strengths and weaknesses of each. We also need education in what is good and what isn’t. By that I mean, broadly, understanding that there is such a thing as better and worse, not only for oneself, but for the society in which we exist, and that it is moral and ethical to prefer the better, even though there will always be negative consequences for some as a result (in modern terms, we’d roughly call this a “cost/benefit analysis”). It should also be pointed out that we in the West have been learning about what is better. It’s not a static thing. We’ve learned that in order to find what is better, we need to venture beyond what we already think is good, to try new things, and to see whether they add to our betterment or not.

What’s been challenged in the last 15 years is the very ideal of betterment itself: the pursuit of truth. I think Turley’s point regarding “the ordering of our loves” is profound here.

Where I would say we especially need education, in this regard, is where the perception of what benefits us comes into conflict with what someone else believes benefits them, or with what we believe benefits society. How do we choose who has the better claim? Who will get the prerogative to act on it, and who must demur? Society has a claim on moral virtue, if it is going to survive, but who speaks for it other than a collective, common understanding of some virtues? The common virtues I speak of for our society are things like freedom of speech, freedom of religion, freedom of assembly, equal protection under the law, equal rights, etc. At times, I’ve called these virtues a meta-morality. They are not part of how we conduct our daily lives, person to person, but we appeal to them when making societal decisions. Even these common virtues have come under challenge in what is taught in our schools. As a result, their perceived value to our society has been diminished, precisely because they have a moral foundation.

These virtues are not givens. People knowledgeable in history know this.

What’s becoming increasingly clear to me is that what we regard as empirical and philosophical knowledge has a moral basis for its practice: the pursuit of truth. The pursuit of truth is itself a moral claim to virtuous action, precisely because it, too, has negative consequences for those who do not pursue truth, and see little value in it. It is a deliberate choice that accepts this reality, and prefers it over the historical alternatives, which, for most of our tragic human history, have meant a lower quality of life and a lack of dignity for the vast majority of people.

I’ll note that with truth also comes beauty, something we also value. I don’t speak just of physical beauty. It also manifests in how we express ourselves, how we act with others, and what we contribute. It’s important, as it nourishes what we would otherwise call “the soul of humanity.”

I’m reminded that the mathematician Jerry King pointed out that many of the most beautiful mathematical truths have also turned out to be useful in science and engineering. So, beauty is not entirely frivolous, as we are tempted to think.

Truth and beauty can be taken to excess. As Aristotle said, any virtue taken to excess becomes a vice. In a healthy moral system, its virtues check each other, hemming in excess.

The benefits of the pursuit of truth are things that our society has taken for granted, but which are now being highlighted by the increasing power of postmodern thought in our education system, and increasingly in our politics, and the policies under which we live.

Turley has brought up repeatedly how in the late 19th century, American universities stopped teaching moral virtues as part of the curriculum, and just focused on empiricism as the basis of knowledge. One might wonder, if this started then, why what we see now in our social state didn’t emerge much earlier. My guess is that what made up for it, and may have allowed schools to take this turn without a more obvious impact, is that Americans were still mostly engaged with organized religions, in whatever belief system they chose, and so their religious leadership gave them a moral education that schools no longer provided. Perhaps people became convinced moral education would be safe there. I think we see now this is not the case, as organized religious practice, particularly the Christian variety, has been on the decline in the United States for the last 15 or so years, and has been in decline in Western Europe for even longer.

The fact is science cannot answer the question of how we should relate to each other. It cannot tell us what is better. It can inform us in making that determination, but it can’t tell us who is more worthy to live or die; who should live free, and who should be punished for violating our society’s standard of peaceful coexistence. It cannot answer what that standard should be. Yet all of this is of critical importance. Without moral and ethical standards of good quality, we are prey to standards that are slipshod and short-sighted, and which end up degrading our quality of life, and the life of our society. The reason is that it’s impossible for humans to act on the basic things that sustain our lives without having a belief system. It’s a fool’s bargain to neglect teaching quality moralities and ethics to the next generation, believing that science can substitute for them.

One reason we have tried to substitute one for the other is that the basis for these quality belief systems–the theological stories that give rise to them–has problems when looked at under the lens of our society’s analytical tools, or these belief systems cause some negative consequences in society. We’ve seen disappointing, and to us horrifying, consequences when people have followed religious practices of poor quality, or perverted versions of our society’s religious traditions. What’s beginning to be apparent, though, is that when we neglect teaching quality moralities in our society, we undermine the very basis for preferring our analytical tools for determining what’s better. Without a quality moral/ethical foundation, why even pursue truth, when we can imagine that we can create our own world out of whole cloth from our desires (which ends up degrading quality of life)?

Science doesn’t allow for that way of thinking, but if people see science as a limiting method, rather than a liberating one–when “moral truth is relative,” and our potential is seen as stifled; tricked away from us by our traditions–why use science to inform us about anything? Or, why shouldn’t science only be used as a voice of authority for a moralizing movement of our choosing–a ruse?

For a pursuit of truth to exist, it must first have a moral/ethical foundation that values it, the sense that truth is right, good, and elevates and liberates humanity, by virtue of the fact that it works relatively well for the societies that base themselves on it, and gives people a dignity for which they can strive, and which they can deserve.

The phrase “pursuit of truth” implies a kind of relativism, because it states that the truth is not perfectly known. What we should have in a pluralistic society is multiple moralities contending against each other in discussion and debate. Since the truth is not known–though it is assumed, with reason, to exist as an absolute–different moralities are going to have some better ideas for approaching it than others, and people can reformulate what moralities they want to follow, based on their judgment of which ideas are better for them. However, what we used to do was make modifications to long-standing traditions, as some among us explored their own moralities. That practice had value. What’s been done recently is to throw out moral traditions entirely, sometimes keeping their names, but throwing out their substance. This came partly from our exercise of rational thought, and partly because we saw the traditions as power plays upon us, which limited our potential.

Throwing out religious traditions doesn’t mean our proclivity to believe in myths ends. What’s happened instead is that new, irrational moral paralogies, as James Lindsay has called them, have arisen, seemingly formulated out of whole cloth, but with roots in political traditions. They do not offer the kind of personal or societal life that traditional religions and philosophies teach. People may believe that these pseudo-religions offer a better life, but from what I’m seeing, they do not. However, that is up to each individual to judge.

Circling back to what I said about the need for a moral education, I don’t mean to suggest it must return to schools. My real intent with this is to appeal to our desire for a better life, and a better future for the next generation. We need to have a discussion in our society about how this should be carried out.

A point that I think needs to be made in our present circumstance is that since we need a moral basis for the pursuit of truth, we need to talk to those who would discourage the practice of various traditional religions–because of their failure when subjected to some rational analysis–and point out that what they’re doing is destructive to their own goals. They are undermining the basis for pursuing truth, and as I think we’re seeing, that leads to a growing chaos.

A thought that I’m sure is disquieting to proponents of rationalism is that our betterment through rational means rests on an irrational foundation. We’ve mistakenly believed that we can toss aside our traditional irrational foundation and substitute a rationally-derived orthodoxy in its place, out of our desire to be consistent. Along with that, we’ve brought in a worship of what we think rational thought has given us entirely on its own. Given human nature, though, this is really asking to have our cake and eat it, too. While it has seemed to us that giving space to the irrational destroys the benefits of rational modes of thought, in fact, shunning and discrediting certain traditions that have irrational bases creates the environment for lower irrational beliefs to take their place. By doing this, we are embracing a contradiction of our nature. We pride ourselves on our consistency of thought for doing this, but neglect that we are acting in denial of what’s really going on, and of what is required to maintain and advance our society. Indeed, as we have been watching play out in our society for years, rationality can undermine its own foundation.

If anyone brings up the importance of a belief in a deity, our intelligentsia mocks them for having “imaginary friends.” Well, let me address that. I will not assert here that deities do or do not exist, but looking at it from a standpoint I find very useful for regarding such matters, there are important notional inventions that we are neglecting here. Take our notion of time, for example. Yes, we notice that there is a progression of change. Nothing stays the same. To help us deal with this change, and measure it, humans invented the notion of time; it is our formal idea of change. That notion is not objectively real, but could our society survive if we completely eliminated the idea on that account? No, it could not. I contend that people’s religious traditions are in a similar category. We take those away, and disrespect them, at our peril. People need moral guidance from traditions that have been instrumental in furthering our civilization. We are not born with that guidance, and a secular “going through the motions” of passing on the (religiously-derived, by the way!) notions of right and wrong falls to pieces under critical analysis, because, “Why do any of it?… Why should I care how other people feel, or think about what I do, or whether what I do makes them suffer? What if my desires are the most important thing to me, or if it’s my identity, and everything I think I’m owed? What if that’s all there is to it? What if I don’t care who judges me?”

If you listen to Scott Adams, and what he describes regarding human cognition and psychology, this will become clearer. If you look around you at the irrational moral claims that dominate our popular discussion now, you can see that nothing about our human nature has changed by quashing our religious traditions, and the moral lessons they convey.

Qualitative differences in morality matter. They make a difference in our quality of life. Making sense of the world requires the pursuit of truth. That pursuit requires moral imperatives that value it, and motivate people toward it. It also requires acknowledging and accepting that our irrational nature exists, and must be accommodated, but also kept under some control. That’s where the pursuit of truth becomes essential. It is a tool for keeping our irrational nature from going “off road” to such an extent that it ruins us.

What’s grown up in place of our displaced foundation is an irrational discourse that does not value the pursuit of truth. It behooves us to realize this, to understand what we are losing, and that our future does not have to follow linearly from our present.

Statesmen, my dear Sir, may plan and speculate for Liberty, but it is Religion and Morality alone, which can establish the Principles upon which Freedom can securely stand.

–John Adams, letter to Zabdiel Adams (21 June 1776)

Edit 11/27/2021: Jordan Peterson (in 2017) gave a good summation of one part of my argument here. It’s fair to say his presentation was an inspiration for me to write this post.

Why the computer revolution hasn’t happened yet

I consider this a follow-up to another post I wrote called Reviving programming as literacy. I wrote the following answer in 2020 to a question on Quora, asking What happened in the past 80 years that produced a much cruder world than the rich one that science, engineering, math, tinkering, and systems-thinking experts in the ARPA-IPTO/PARC community predicted? I got my first upvote from Alan Kay for it, which I take as a “seal of approval” that it must’ve been a pretty good account of what’s happened, or that it at least has a good insight or two. I thought I’d share it here.

I think the short answer is the world proceeded with its own momentum, despite what ARPA/PARC offered. Nothing “happened,” which is to say that nothing in society was fundamentally changed by it. Many would ask how one could say that, since the claim would be made that the computer, and some of what was invented at PARC, was “transformative” to our society. But what that claim really describes is the optimization of existing processes and goals, not a change in fundamental assumptions about what’s possible, given what computing represents: a promising opportunity to explore ideas about system processes. It’s true that optimization opens up possibilities that would be difficult to achieve otherwise, but my point is that what ARPA/PARC anticipated was that computing would help us to think as no humans have thought before, not just do as no humans have done before. These are not by any means the same transformations. The former was what ARPA/PARC was after, and my understanding is this is what many of the researchers experienced. That experience, though, didn’t get much outside of “the lab,” and while it has expanded into the sciences, becoming a fundamentally important tool for scientists to do their work, it is still “in the lab.”

What Alan Kay, one of the ARPA/PARC researchers, realized was that there was more work to be done to lay the groundwork for it. He, Seymour Papert, Jerome Bruner, and others tried to bootstrap processes in society that Kay thought would do that. Their efforts failed, though: they either didn’t last long, or the intent was “lost in translation,” carrying on for many years doing something that ended up being unproductive, and ending in failure later.

The buzzsaw they ran into was the expectations and understandings of parents and educators re. what education was about. A complaint I’ve heard from Kay is that educators tend to not have a good grasp of what math and science are, even if they teach those subjects. So, with the approach that was being used by Kay and others, it was impossible to get the fundamental point across in a way that would scale.

I remember listening to a small presentation Kay gave 11 years ago, where he talked about a study on education reform that was done in the UK years earlier. It was found that in order for any order-of-magnitude improvement in the curriculum to take hold successfully in the study groups, a change in the curriculum must also involve the parents, as well as the children and teachers, because parents fundamentally want to be able to help their kids with their homework. If parents can’t understand what the curriculum is going after and why the change is being made, can’t understand the material being given to their kids, and don’t buy into the benefits of it, they resist it intensely, and the reform effort fails. So, any improvement in education really requires educating the community in and around schools. Just treating the school system as the authority, and the teachers as transmitters of knowledge to children, without involving the parents, did not work.

A natural tendency among parents and educators is to transmit the culture in which they were raised to the children, thus providing continuity. This includes what and how they were taught in school. This is not to say that any change from that is good, but there is resistance to anything but that. To “live in the future,” and help students get there, requires acquiring some mental tools to realize that you are in a context, and that other contexts, some of them more powerful, are possible.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Alan Kay on goals in education

I wanted to repost Alan Kay’s answer from Quora to the question What needs to be done in order to improve Anki to reach the promise of the Dynabook’s “teacher for every learner”?, because I think it sums up so well his thinking on education, broadly, which dovetails with some thoughts I’ve had on it.

He responded to a question about Anki, a flashcard program designed to improve memorization through spaced repetition, asking if there was a way to improve it that would bring it in line with the goals of Kay’s original conception of the Dynabook. For people who are not familiar with the Dynabook concept, I’ll point you to a post I wrote, part of which covers Xerox PARC’s research on networked personal computing, A history lesson on government R&D, Part 3 (if you go there, scroll down to the section on “Personal computing”).

There’s definitely a way to think of learning as ultimately being able to remember — and every culture has found a lot of things that need to be remembered, are able to get children to eventually remember them, and have some of their behaviors be in accordance with their memories.

But if we look at history, we find large changes in context of both what kinds of things to learn, and what it means to learn them. For example, the invention of writing brought not just a huge extension of oral knowledge, but an even more critical change of context: getting literate is a qualitative change, not just a quantitative one. A large goal of “learning to read and write” is to cross that qualitative threshold.

A change so large that it is hard to think of as an extension of the prevailing thinking patterns in the era of its birth, was the invention of “modern science” less than 500 years ago. It started with the return of accurate map making of all kinds and was catalyzed by the gradual realization that much of “the world was not as it seems” and by being able to make generalizations that could generate some of the maps. One of the most important larger perspectives on this has its 400th anniversary this year: Francis Bacon’s “A new organization for knowledge” (Novum Organum Scientia), in which he points out that we humans have “bad brain/minds” stemming from a number of sources, including our genetics, cultures, languages, and poor teaching. He proposed a “new science” that would be a set of approaches, methods and tools, that would act as heuristics to try to get around the biases and self generated noise from our “bad brains”.

His proposed “new science” is what today we call “science”. Before this, “science” meant “a gathering of knowledge” and “to gather it”. After this, it meant to move from knowledge to context and method and tools — and to new behaviors. This has led to not just a lot of new knowledge, but very different knowledge: qualitatively different knowledge in qualitatively different contexts.

A trap here with the use of ordinary language for discussing these three contexts — oral, literate, scientific — is that things can be said and heard whether or not the discussants also have these contexts (this was one of Bacon’s four main “bad brain” traits).

E.g. people who can read but have not taken on the scientific world-view can think they understand what science is, and can learn and memorize many sentences “about” science, without actually touching what they actually mean.

Just as interesting, is the difficulty — for those who have gotten literate — of touching what is really going on — especially the feelings — in oral traditional societies. Music and poetry are bridges, but important parts of the innocence and id-ness are hard to get to. “Ecstatic music” can sometimes dominate one’s literate thought — especially when performing it.

To make an analogy here: in our society, there are courses in “music appreciation” that mostly use “sentences” about “sounds”, “relationships”, “composers”, etc., in which most testing can be (and is) done via checking “the memory” of these “sentences”.

By contrast in “real deal music”, real music teachers treat their students as “growing musicians” and play with them as a large part of the guidance to help them “get larger”, to “make Technique be the servant of Art, not the master”, etc. It’s primarily an emotive art form …

A nice quote — which has many web pages — is:

“Talking about Music is like Dancing about Architecture”

(attributed to many people from Stravinsky to Frank Zappa). If you *do* music, you can barely talk about it just a little. The further away from inhabiting music, the less the words can map. (And: note that the quote brilliantly achieves a meta way to do a bit of what it says is difficult …)

The Dynabook idea — “a personal computer for children of all ages” — was primarily about aiding “growth in contexts”* and my initial ideas about it were partly about asking questions such as:

“If we make an analogy to writing/reading/printing-press, what are the qualitatively new kinds of thinking that a personal computer could help to grow?”

I got started along these lines via Seymour Papert’s ideas regarding children, mathematics and computing (my mind was blown forever). I added in ideas from McLuhan, Bruner, Montessori, etc., and … Bacon … to start thinking about how a personal computer for children could help them take on the large world-view of science as “real science learning” (not “science appreciation”).

(Via Papert), the dynamic math part of quite a bit of science can be nicely handled by inventing special programming languages for children. But science is not math — math is a way to map ideas about phenomena — so an additional and important part of learning science requires actually touching the world around us in ways that are more elemental than “sentences” — even the “consistent sentences” of maths.

In an ideal world, this would be aided by adults and older children. In the world we live in, most children never get this kind of help from older children, parents, or teachers (this is crazy, but humanity is basically “crazy”).

Another way to look at this is that — as far as science goes — it almost doesn’t matter what part of the world you are born into and grow up in: the chances of getting to touch the real thing are low everywhere.

Several of Montessori’s many deep ideas were key for me.

One is that children learn their world-view not in class but by living in that world. She said the problem was that the calendar said 20th century but their homes were 10th century. So she decided to have her school *be* the 20th century, to embody it in all the ways she could think of in the environment itself.

Another deep idea is that what is actually important is for children to do their learning by actively thinking and doing — and with verve and deep interests. She cared much more about children concentrating like crazy on something that interested them than about what that thing was. She invented “toys” that were “interesting” and let the children choose those that appealed to them (she wanted them to learn what deep concentration without interruptions was like, and that teachers were there to help and not hinder).

In other words, she wanted to help as many children as possible become much more autodidactic.

(Note that this has much in common with getting to be a deep reader or musician — it doesn’t much matter in the beginning what the titles are, what matters is learning how to stay with something difficult because you want to learn it — if the environment has been well seeded, then all will work out well. More directed choices can and will be done later. And note this is even the case with learning to speak!)

After doing many systems and interfaces over quite a few years (~25) we finally got a system that was like the Montessori toys part of her school (Etoys), and then, in a Montessori/Bruner type of school (the Open Magnet School in LA), we got to see what could be done with children, the right kinds of teachers, and a great environment to play in and with.

What never got done, was to handle the needs of children who don’t have the needed kind of peers, teachers or parents around to help them. This help is not just to answer questions but to provide a kind of “community of motivation” and “culture” that is what human beings need to be human. (The by-chance forms of this tend to be very much reverted to oral society practices because of our genetics — and much of this will be anti-modern, and even anti-civilization. This is a very difficult set of designs to pull off, especially ca. where we are now.)


To answer your question: the spirit of Anki is not close to what the Dynabook was all about. It could possibly be a technical aid for some kinds of patterning, but it seems to miss what “contexts” are all about.


Here’s another way to think of some of this stuff, and in a “crazier” fashion.

There have been a number of excellent books over the years about the idea that the “invention of prose via writing killed off ‘the gods’ ”. These are worth finding and pondering.*

The two main problems are (a) we need “the gods”; and (b) “the gods” can be very good or bad for us (“they” don’t care).

It’s worth pondering that from the perspective of science, a metaphor is a lie, but from the perspective of “the gods”, a metaphor is true.

The dilemma of our species — and ourselves — is that we have both of these processes in our brain/minds, we need them both, and we need to learn how to allow both to work**.

Learning something really deeply and fluently goes way beyond (and *before*) conscious thought — important parts of the learning are taken to where “the gods” still lurk.

And, just as you don’t make up reasons for breathing (which “the gods” also handle for you), the reasons for doing these deep things move from “reasoning” to “seasoning” — for life itself.

“Artists are people who can’t not do their Art”.

It doesn’t have to do with talent or opinion … This is a critical perspective for thinking about we humans, and what one of the facets of “identity” could mean … Consider the relationship between the quote above and children …

When you are fluent in music, much of the real-time action is being done “by ‘the gods’ “, whether playing, improvising, composing etc. You are not the same person you were when you were just getting started. Music can get pedantic and over-analyzed, but this can be banished by experiencing some of it that is so overwhelming that it can’t really be analyzed in the midst of the experience (this is not just certain “classical” pieces, but some of “pop” music can really get there as well). This produces the “oceanic feeling” that Romain Rolland asked Freud about.

“Goosebumps are a kind of ‘basic ground’ for ‘humanity’ ”

It’s interesting and important that “the gods” can be found at the grounding of very new contexts such as modern science, and that the two can be made to go together.***

To use this weirder way to look at things:

“Education has to lift us from our genetic prisons, while keeping ‘the gods’ alive and reachable”.


* For example: Eric Havelock’s “Preface To Plato”, and especially Julian Jaynes’ “The Origin of Consciousness in the Breakdown of the Bicameral Mind” (my vote for the most thought provoking book that is perhaps a bit off).

** See Daniel Kahneman’s “Thinking, Fast and Slow”, and ponder his “System 1”

*** See Hadamard’s “The Psychology of Invention in the Mathematical Field”, and Koestler’s “Act of Creation”.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Alan Kay: Basic reading material for computer science students (and programmers)

Once again, Alan Kay gave some basic starting material for those who want to understand what computing makes possible. He seemed to take the bootstrapping approach of “build something better than what’s out there” as a basic project that every aspiring computer scientist/programmer should work on, beginning with some good ideas that have already been developed.

Experienced programmers and computer scientists, what are some really old (or even nearly forgotten) books you think every new programmer should read?

I love that “2006” and “2008” (in another answer) must be considered “really old” (which is what the question requests) …

I’m still a big fan of the “Lisp 1.5 Programmers Manual” (MIT Press — still in print). This version of the language is no longer with us, but the book — first written ca 1962 — by John McCarthy, who invented, and his colleagues, who implemented, is a perfect classic.

It starts with a version of John’s first papers about Lisp, and develops the ideas in a few pages of examples to culminate on page 13 with Lisp eval and apply defined in itself. There are many other thought provoking ideas and examples throughout the rest of the book.

The way to grow from this book is to deeply learn what they did and how they did it, and then try to rewrite page 13 in a number of ways. How nicely can this be written in “a lisp” using recursion? How nicely can this be written without recursion? (In both cases, look ahead in the book to see that Lisp 1.5 had gotten to the idea of EXPRs and FEXPRs — functions which don’t eval their arguments before the call — thus they can be used to replace all the “special forms”. Do a Lisp made from FEXPRs and get the rest by definition, etc.) [I’ve included a small sketch of the eval/apply idea in code, after Kay’s answer below.]

What is a neat bootstrapping path? How could you combine this with Val Schorre’s “Meta II” programmatic parser to make a really extensible language? What does it take to get to “objects”? What are three or four really interesting (and different) ways to think about objects here? (Hints: how many different ways can you define “closures” in a language that executes? What about using Lisp atoms as a model for objects? Etc.)

The idea is that Lisp is not just a language but a really deep “building material” that is tidy enough to “think with” not just make things (it’s a “building material” for thoughts as well as computer processes).

Dani Richard reminded me to mention: “Computation: Finite and Infinite Machines” by Marvin Minsky (Prentice-Hall, 1967), which — since it is one of my favorite books of all time — I’m surprised I didn’t include in the original list. Marvin could really write, and in this book he is at his best. It is actually a “math book” — with lots of ideas, theorems, proofs, etc., — but presented in the friendliest way imaginable by a great mind who treated everyone — including children — as equal to him, and as fellow appreciators of great ideas. There are lots of interesting things to ponder in this book, but perhaps it is the approach that beckons to the reader to start thinking “like this” that is the most rewarding.

“Advances in Programming and Non-Numerical Computation” (Ed. L. Fox) mid-60s. The papers presented at a 1963 summer workshop in the UK. The most provocative ones were by Christopher Strachey and several by Peter Landin. This was one of the books that Bob Barton had us read in his famous advanced systems design class in 1967.

Try “The Mythical Man-Month” by Fred Brooks, for an early look and experience with timeless truths (and gotchas) from systems building with teams …

Try “The Sciences of the Artificial” by Herb Simon. A much stronger way to think about computing — and what “Computer Science” might mean — by a much stronger thinker than most today.

“A Programming Language” by Ken Iverson (ca 1962). This has the same thought expanding properties of Lisp. And, like Lisp, the way to learn from these really old ideas is to concentrate on what is unique and powerful in the approach (we know how to better improve both Lisp and APL today, but the deep essence is perhaps easier to grasp in the original manifestations of the ideas). Another book that Barton had us read.

I like Dave Fisher’s 1970 CMU Thesis — “Control Structures for Programming Languages” — most especially the first 100 pages. Still a real gem for helping to think about design and implementations.

More recent: (80s) “The Meta-Object Protocol” by Kiczales, et al. The first section and example is a must to read and understand.

Joe Armstrong’s PhD thesis — after many years of valuable experience with Erlang — was published as a book ca 2003 …

Lots more out there for curious minds ….
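Note: to make the “page 13” eval/apply exercise a bit more concrete for readers who don’t have the Lisp 1.5 manual handy, here’s a minimal sketch of the idea. It’s my own toy reconstruction, not the Lisp 1.5 code, and it’s in Python rather than Lisp so more readers can run it; the function names, the environment representation, and the tiny set of special forms are all my simplifications.

```python
# A minimal sketch of McCarthy-style eval/apply (my simplification, not
# the Lisp 1.5 definitions). Python lists and strings stand in for
# S-expressions; environments are dicts chained through an 'outer' link.

def lookup(name, env):
    # Walk the chain of environment frames looking for a binding.
    while env is not None:
        if name in env['vars']:
            return env['vars'][name]
        env = env['outer']
    raise NameError(name)

def l_eval(expr, env):
    if isinstance(expr, str):              # a symbol: look it up
        return lookup(expr, env)
    if not isinstance(expr, list):         # a number or other literal
        return expr
    op = expr[0]
    if op == 'quote':                      # (quote x) -> x, unevaluated
        return expr[1]
    if op == 'if':                         # (if test then else)
        branch = expr[2] if l_eval(expr[1], env) else expr[3]
        return l_eval(branch, env)
    if op == 'lambda':                     # (lambda (params) body) -> closure
        return ('closure', expr[1], expr[2], env)
    # Ordinary application: evaluate operator and operands, then apply.
    fn = l_eval(op, env)
    args = [l_eval(a, env) for a in expr[1:]]
    return l_apply(fn, args)

def l_apply(fn, args):
    if callable(fn):                       # a primitive, e.g. +
        return fn(*args)
    _, params, body, defn_env = fn         # a closure made by 'lambda'
    frame = {'vars': dict(zip(params, args)), 'outer': defn_env}
    return l_eval(body, frame)

# Example: ((lambda (x) (if x (quote yes) (quote no))) 1)  =>  'yes'
global_env = {'vars': {'+': lambda a, b: a + b}, 'outer': None}
print(l_eval([['lambda', ['x'],
               ['if', 'x', ['quote', 'yes'], ['quote', 'no']]], 1],
             global_env))
```

One way to see Kay’s FEXPR question in this sketch: quote, if, and lambda are hard-coded into l_eval as special forms. A FEXPR receives its arguments unevaluated, so a language built on FEXPRs could define those branches as ordinary functions in the language itself, rather than as cases baked into the evaluator.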

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Alan Kay’s advice to computer science students

I’m once again going to quote a Quora answer verbatim, because I think there’s a lot of value in it. Alan Kay answered What book(s) would you recommend to a computer science student?

My basic answer is: read a lot outside of the computer field.

It is worth trying to understand what “science” means in “Computer Science” and what “engineering” means in “Software Engineering”.

“Science” in its modern sense means trying to reconcile phenomena into models that are as explanatory and predictive as possible. There can be “Sciences of the Artificial” (see the important book by Herb Simon). One way to think of this is that if people (especially engineers) build bridges, then these present phenomena for scientists to understand by making models. The fun of this is that the science will almost always indicate new and better ways to make bridges, so friendly collegial relationships between scientists and engineers can really make progress.

An example in computing is John McCarthy thinking about computers in the late 50s, the really large range of things they can do (maybe AI?), and creating a model of computing as a language that could serve as its own metalanguage (LISP). My favorite book on this is “The Lisp 1.5 Manual” from MIT Press (written by McCarthy et al.). The first part of this book is still a classic on how to think in general, and about computing in particular.

(A later book inspired by all this is “Smalltalk: the language and its implementation” (by Adele Goldberg and Dave Robson — the “Blue Book”). Also contains a complete implementation in Smalltalk written in itself, etc.)

A still later book that I like a lot that is “real computer science” is “The Art of the Metaobject Protocol” by Kiczales, Bobrow, and des Rivières. The early part of this book especially is quite illuminating.

An early thesis (1970) that is real computer science is “A Control Definition Language” by Dave Fisher (CMU).

Perhaps my favorite book about computing might seem far afield, but it is wonderful and the writing is wonderful: “Computation: Finite and Infinite Machines” by Marvin Minsky (ca 1967). Just a beautiful book.

To help with “science”, I usually recommend a variety of books: Newton’s “Principia” (the ultimate science book and founding document), “The Molecular Biology of the Cell” by Bruce Alberts, et al. There’s a book of Maxwell’s papers, etc.

You need to wind up realizing that “Computer Science” is still an aspiration, not an accomplished field.

“Engineering” means “designing and building things in principled expert ways”. The level of this is very high for the engineering fields of Civil, Mechanical, Electrical, Biological, etc. Engineering. These should be studied carefully to get the larger sense of what it means to do “engineering”.

To help with “engineering” try reading about the making of the Empire State Building, Boulder Dam, the Golden Gate Bridge, etc. I like “Now It Can Be Told” by Maj Gen Leslie Groves (the honcho on the Manhattan Project). He’s an engineer, and this history is very much not from the Los Alamos POV (which he also was in charge of) but about Oak Ridge, Hanford, etc and the amazing mobilization of 600,000 plus people and lots of money to do the engineering necessary to create the materials needed.

Then think about where “software engineering” isn’t — again, you need to wind up realizing that “software engineering” in any “engineering” sense is at best still an aspiration not a done deal.

Computing is also a kind of “media” and “intermediary”, so you need to understand what these do for us and to us. Read Marshall McLuhan, Neil Postman, Innis, Havelock, etc. Mark Miller (comment below) just reminded me that I’ve recommended “Technics and Human Development,” Vol. 1 of Lewis Mumford’s “The Myth of the Machine” series, as a great predecessor of both the media environment ideas and of an important facet of anthropology.

I don’t know of a great anthropology book (maybe someone can suggest), but the understanding of human beings is the most important thing to accomplish in your education. In a comment below, Matt Gaboury recommended “Human Universals” (I think he means the book by Donald Brown.) This book certainly should be read and understood — it is not in the same class as books about a field, like “Molecular Biology of the Cell”.

I like Ed Tufte’s books on “Envisioning Information”: read all of them.

Bertrand Russell’s books are still very good just for thinking more deeply about “this and that” (“A History of Western Philosophy” is still terrific).

Multiple points of view are the only way to fight against human desires to believe and create religions, so my favorite current history book to read is: “Destiny Disrupted” by Tamim Ansary. He grew up in Afghanistan, moved to the US at age 16, and is able to write a clear illuminating history of the world from the time of Mohammed from the point of view of this world, and without special pleading.

Note: I checked out “The Art of the Metaobject Protocol,” and it recommends that if you’re unfamiliar with CLOS (the Common Lisp Object System), you learn that first before getting into the book. For that, it recommends “Object-Oriented Programming in Common LISP: A Programmer’s Guide to CLOS,” by Sonya Keene.

As I’ve been taking this track, reconsidering what I think about computer science, software engineering, and what I can do to advance society, making computing a part of that, I’ve been realizing that one of Kay’s points is very important: if we’re going to make things for people to use, we need to understand people well–specifically, how the human system interfaces with computer systems, and how that interaction can help people better their condition in the larger system we call “society” (this includes its economy, but also many other social facets), and more broadly, the systems of our planet. This means spending a significant amount of time on things besides computer stuff. I can tell you from experience, this is a strange feeling, because I feel like I should be spending more time on technical subjects, since that’s what I’ve done in the past, and it’s been the stock in trade of my profession. I want that to be part of my thinking, but not all that I eat and sleep.

Related material:

The necessary ingredients of computer science

Edit 2/9/2019: I highly recommend people watch Episode 10 of the 1985 TV series “The Day The Universe Changed” with James Burke, called “Worlds Without End.”

The original topic Alan Kay wrote about was in answer to a question about books to read, but I thought I should include this, since it’s the clearest introduction I’ve seen to scientific epistemology.

In an earlier post, “Alan Kay: Rethinking CS education,” I cited a presentation where Kay talked about “erosion gullies” in our minds, how they channel “water” (thoughts, ideas) to go only in particular directions. Burke gave an excellent demonstration of these mental gullies, explaining that they’re a natural part of how our brains work, and there is no getting away from them. Secondly, he pointed out that they’re the only way we can understand anything. So, the idea is not to reject these gullies, but to be aware that they exist, and to gain skill in getting out of one, and into a different one.

What I hope people draw out of what Burke said is that while you can’t really be certain of anything, that doesn’t mean you can’t draw, out of uncertainty, knowledge that’s useful, reliable, and virtuous to exercise in the world. That is the essence of our modern science, and of modern society. It is why we are able to live as we do, and how we can advance to a better society. I’ll add that rejecting this notion is to reject modernity, and any future worth having.

The ending of this episode is remarkable to watch in hindsight, 34 years on, because he addressed the impact that the computer would have on the social construct we call society–its epistemological and political implications–and I think he was quite accurate. He put more of a positive gloss on it than I would at this point in history, though his analysis was tinged with a sense of uncertainty about the future (at that point in time) that is worth contemplating in our present.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Trying to arrive at clarity on the state of higher education

I’ve been exploring issues in higher education over the last 10 years, trying to understand what I’ve been seeing in both the political discourse that’s been happening in the Western world for the past 12 or so years, and in campus politics over the same period, which has been having severe consequences for the humanities, and has been making its way into the academic fields of math and science.

Rob Montz, a Brown University graduate, has done the best job I’ve seen anywhere of both getting deep into a severe problem at many universities, and condensing his analysis down to something that many people can understand. I think there is more to the problem than what Montz has pointed out, but he has put his finger on something that I haven’t seen anyone else talk about, and it really has to do with this question:

Are students going to school to learn something, or are they just there to get a piece of paper that they think will increase their socio-economic status?

I’m not objecting if they are there to do both. What I’m asking is if education is even in the equation in students’ experience. You would think that it is, since we think of universities as places of learning. However, think back to your college experience (if you went to college). Do you remember that the popular stereotype was that they were places to “party, drink, and lose your virginity”? That didn’t have to be your experience, but there was some truth to that perception. What Montz revealed is that perception has grown into the main expectation of going to college, not a “side benefit,” though students still expect that there’s a degree at the end of it. There’s just no expectation that they’ll need to learn anything to get it.

What I asked above is a serious question, because it has profound implications for the kind of society we will live in going forward. A student body and education system that does not care about instilling the ideas that have made Western civilization possible will not live in a free society in the future. This means that we can forget about our rights to live our lives as we see fit, and we can forget about having our own ideas about the world. Further, we can forget about having the privilege of living in a society where our ideas can be corrected through considering challenges to them. That won’t be allowed. How would ideas change instead, you might ask? Look at history. It’s bloody…

We are not born with the ideas that make a free society possible. They are passed down from generation to generation. If that process is interrupted, because society has taken them for granted, and doesn’t want the responsibility of learning them, then we can expect a more autocratic society and political system to result, because that is the kind of society that humans naturally create.

A key thing that Montz talked about is what I’ve heard Alan Kay raise a red flag about, which is that universities are not “guarding their subjects,” by which he means they are not guarding what it means to be educated. They are, in effect, selling undergraduate degrees to the highest bidders, because that’s all most parents and students want. An indicator of this that Kay used is that literacy scores of college graduates have been declining rapidly for decades. In other words, you can get a college degree, and not know how to comprehend anything more complex than street signs, or labels on a prescription bottle, and not know how to critically analyze an argument, or debate it. These are not just skills that are “nice to have.” They are essential to a functioning free society.

What Montz revealed is if you don’t want to learn anything, that’s fine by many universities now. Not that it’s fine with many of the professors who teach there, but it’s fine by the administrations who run them. They’re happy to get your tuition.

Montz has been producing a series of videos on this issue, coming out with a new one every year, starting in 2016. He started his journey at his alma mater, Brown University:

In the next installment, Montz went to Yale, and revealed that it’s turning into a country club. While, as he said, there are still pockets of genuine intellectual discipline taking place there, you’re more likely to find that young people are there for the extracurricular activities, not to learn something and develop their minds and character. What’s scandalous about this is that the school is not resisting this. It is encouraging it!

The thing is, this is not just happening at Yale. It is developing at universities across the country. There are even signs of it at my alma mater, Colorado State University (see the video below).

Next, Montz went to the University of Chicago to see why they’re not quite turning out like many other universities. They still believe in the pursuit of truth, though there is a question that lingers about how long that’s going to hold up.

Canadian Prof. Gad Saad describing events at Colorado State University, from February 2017:

Dr. Frank Furedi talked about the attitudes of students towards principles that in the West are considered fundamental, and how universities are treating students. In short, as the video title says, students “think freedom is not a big deal.” They prefer comfort, because they’re not there to learn. Universities used to expect that students were entering adulthood, and were expected to take on more adult responsibilities while in school. Now, he says, students are treated like “biologically mature … clever children.” Others have put this another way, that an undergraduate education now is just an extension of high school. It’s not up to par with what a university education was, say, when I went to college.

Dr. Jonathan Haidt has revealed that another part of what has been happening is that universities that used to be devoted to a search for truth are in the process of being converted into a new kind of religious school, pursuing a doctrine of politically defined social justice. Parents who are sending their children off to college have a choice to make about what they want them to pursue (this goes beyond the student’s major). However, universities are not always up front about this. They may in their promotions talk about education in the traditional way, but in fact may be inculcating this doctrine. Haidt has set up a website called Heterodox Academy to show which universities are in fact pursuing which goals, in the hopes that it will help people choose accordingly. The site doesn’t just show universities that its creators might prefer. If you want a religious education, of whatever variety, it will list schools that do that as well, but it ranks schools according to principles outlined by the University of Chicago, re. whether they are pursuing truth, or some kind of orthodoxy.

For people who are trying to understand this subject, I highly recommend an old book, written by the late Allan Bloom, called “The Closing of the American Mind.” He doesn’t exactly describe what’s going on now on college campuses, because he wrote it over 30 years ago, but it will sound familiar. I found it really valuable for contrasting what the philosophy of the university was at the time that he wrote the book, as opposed to what it had been in the prior decades. He preferred the latter, though he acknowledged that it was incomplete, and needed revision. He suggested a course of action for revising what the university should be, which apparently modern academics have completely rejected. I still think it’s worth trying.

Edit 6/26/2019: I wrote the paragraph below with the original blog post (which I’ve crossed out). I’ve had second thoughts about it since then. I thought maybe universities could be replaced by online course curricula, but listening to educators, that’s been tried, and the results are not usually up to par. Most people don’t have the discipline at home for the focused learning that a traditional university education requires. So, the only reasonable choice is that the universities need to reform themselves, returning to the pursuit of truth. If that doesn’t happen, then the choice is either their orthodoxy takes over, or our society does without them–both of which are terrible options.

I don’t know if this will come to fruition in my lifetime, but I can see the possibility that someday the physical university as we have known it will disappear, and accreditation will take a new form, because of the internet. In that way, a recovery of the quality of higher education might be possible. I have my doubts that getting the old form back is possible. It may not even be desirable.

Edit 1/31/2024 – Here’s an update from Rob Montz, profiling the case of Amy Wax, a law professor at the Univ. of Pennsylvania, from 2023. The situation in our universities has gotten worse. U. Penn is looking to take the unprecedented step of firing her without cause, eviscerating the legal and professional concept of tenure, all because people are triggered by what she says.

I take it they’re inviting a lawsuit…

Related articles:

Brendan O’Neill on the zeitgeist of the anti-modern West

For high school students, and their parents: Some things to consider before applying for college

Edit 11/6/2018: What Happened to Our Universities?, by Philip Carl Salzman

Edit 6/16/2021: I largely agree with what Bill Maher had to say about this:

A word of advice before you get into SICP

If you’ve been reading along with my posts on exercises from “Structure and Interpretation of Computer Programs” (SICP), this is going to come pretty late. The book is a math-heavy text. It expects you to know the math it presents already, so for someone like me, who got a typical non-math approach to math in school, it seems abrupt, but not stifling. If you’re wondering what I mean by “non-math approach,” I talked about it in “The beauty of mathematics denied.” Paul Lockhart also talked about it in “A Mathematician’s Lament.”

I’ve been reading a book called “Mathematics in 10 Lessons: The Grand Tour,” by Jerry King (I referred to another book by King in “The beauty of mathematics denied”), and it’s helped me better understand the math presented in SICP. I could recommend this book, but I’m sure it’s not the only one that would suffice. As I’ve gone through King’s book, I’ve had some complaints about it. He tried to write it for people with little to no math background, but I think he only did an “okay” job with that. I think it’s possible there’s another book out there that does a better job of accomplishing the same goal.

What I recommend is getting a book that helps you understand math from a mathematician’s perspective before getting into SICP, since the typical math education a lot of students get doesn’t quite get you up to its level. It’s not essential, as I’ve been able to get some ways through SICP without this background, but I think having it would have helped make going through this book a little easier.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

The beauty of mathematics denied, revisited: Lockhart’s Lament

Vladislav Zorov, a Quora user, brought this to my attention: Lockhart’s Lament, a worthy follow-up to Jerry King’s “The Art of Mathematics.” Lockhart really fleshes out what King was talking about. Both are worth your time. Lockhart’s lament reminds me a lot of a post I wrote, called The challenge of trying to get a real science of computing in our schools. I talked about an episode of South Park to illustrate this challenge, where a wrestling teacher is confronted with a pop culture in wrestling that’s “not real wrestling” (i.e., WWE), as an illustration of the challenge that computer scientists have in trying to get “the real thing” into school curricula. The wrestling teacher is continually frustrated that no one understands what real wrestling is, from the kids who are taking his class, to the people in the community, to the administrators in his school. There is a “the inmates have taken over the asylum” feeling to all of this, where “the real thing” has been pushed to the margins, and the pop culture has taken over. The people who see “the real thing,” and value it, are on the outside, looking in. Hardly anybody on the inside can understand what they’re complaining about, but some of them are most worried that nothing they’re doing seems to be enough to make a big dent in improving the lot of their students. Quite the conundrum. It looks idiotic, but it’s best not to dismiss it as such, because the health and welfare of our society is at stake.

Lockhart’s Lament — The Sequel is also worth a look, as it talks about critics of Lockhart’s argument.

Two issues come to mind from Lockhart’s lament (and the “sequel”). The first is that since we don’t have a better term for what’s called “math” in school, it’s difficult for a lot of people to distinguish mathematics from the benefits that a relative few students ultimately derive from “math.” I think that’s what the critics hang their hat on: even though they’ll acknowledge that what’s taught in school is not what Lockhart wishes it was, it does have some benefits for “students” (though I’d argue relatively few of them), and this can be demonstrated, because we see every day that some number of students who take “math” go on to productive careers that use that skill. So, they will say, it can’t be said that what’s taught in school is worthless. Something of value is being transmitted. Though, I would encourage people to take a look at the backlash against requiring “math” in high school and college as a counterpoint to that notion.

Secondly, I think Lockhart’s critics have a good point in saying that it is physically impossible for the current school system, with the scale it has to accommodate, to do what he’s talking about. Maybe a handful of schools would be capable of doing it, by finding knowledgeable staff and offering attractive salaries. I think Lockhart understands that. His point, which doesn’t seem to get through to his critics, is, “Look. What’s being taught in schools is not math, anyway! So, it’s not as if anyone would be missing out more than they already are.” I think that’s the sticking point between him and his critics. They think that if “math” were eliminated, and only real math were taught in a handful of schools (the capacity of the available talent), a lot of otherwise talented students would miss out on promising careers in which they could have benefited from using “math.”

An implicit point that Lockhart is probably making is that real math has a hard time getting a foothold in the education system, because “math” has such a comprehensive lock on it. If someone offered to teach real math in our school system, they would be rejected, because their method of teaching would be so outside the established curriculum. That’s something his critics should think about. Something is seriously wrong with a system when “the real thing” doesn’t fit into it well, and is barred because of that.

I’ve talked along similar lines with others, and a persistent critic on this topic, a parent of children who are going through school, has told me something similar to a criticism Lockhart anticipated. It goes something like, “Not everyone is capable of doing what you’re talking about. They need to obtain certain basic skills, or else they will not be served well by the education they receive. Schools should just focus on that.” I understand this concern. In our very competitive knowledge economy, people want assurances that their children will be able to make it. They don’t feel comfortable with an “airy-fairy, let’s be creative!” account of what they see as an essential skill. That’s leaving too much to chance. However, a persistent complaint I used to hear from employers (I assume this is still the case) is that they want people who can think creatively, outside the box, and they don’t see enough of that in candidates. This is precisely what Lockhart is talking about (he doesn’t mention employers, though it’s the same concern, coming from a different angle). The only way we know of to cultivate creative thinkers is to get people in the practice of doing what’s necessary to be creative, and no one can give them a step-by-step guide on how to do that. Students have to go through the experience themselves, though of course adult educators will have a role in that.

A couple of parts of Lockhart’s account really resonated with me: where he showed how one can arrive at a proof for the area of a certain type of triangle, and where he talked about students figuring out imaginary problems for themselves, getting frustrated, trying and failing, collaborating, and finally working it out. What he described sounds so similar to my experience when I was first learning to program computers, starting when I was 12 years old. I didn’t have a programming course to help me. I did it on my own, and with the help of others who happened to be around me. That’s how I got comfortable with it. And none of it was about, “To solve this problem, you do it this way.” I can agree that would’ve alleviated my frustration in some instances, but I think it would’ve risked denying me the opportunity to understand something about what was really going on while I was using a programming language. You see, through exploration we often learn more than we bargained for, and that’s all to the good. That’s something we need to understand as students in order to get some value out of an educational experience.
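For readers who haven’t seen the essay, the triangle argument I’m referring to (my paraphrase of Lockhart’s example, so treat the wording as a sketch) goes roughly like this: take a triangle inscribed in a rectangle, with its apex on the top edge. Drop a vertical line from the apex to the base. It splits the rectangle into two smaller rectangles, and the triangle fills exactly half of each of them, so it fills half of the whole box:

\[
\text{area of triangle} = \tfrac{1}{2}\,b\,h
\]

where \(b\) and \(h\) are the base and height of the rectangle. The point of the example is not the formula, which every student is handed anyway, but that a student can discover the argument behind it.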

By learning this way, we own what we learn, and as such, we also learn to respond to criticism of what we think we know. We come to understand that everything that’s been developed has been created by fallible human beings. We learn that we make mistakes in what we own as our ideas. That creates a sense of humility in us: we don’t know everything; there are people who are smarter than us; there are people who know what we don’t know; there is knowledge we will never know, because there is just so much of it out there; and ultimately, there are things that nobody knows yet, not even the smartest among us. That usually doesn’t feel good, but it is only by developing that sense of humility, and by responding to criticism well, that we improve on what we own as our ideas.

As we get comfortable with this way of learning, we learn to become good at exploring, and by doing that, we become really knowledgeable about what we learn. That’s what educators are really after, is it not, to create lifelong learners? Most valuable of all, I think, is that learning this way creates independent thinkers, and indeed people who can think outside the box, because you have a sense of what you know and what you don’t know; what you know is what you question and try to correct, and what you don’t know is what you explore and try to know. Furthermore, you have a sense of what other people know and don’t know, because you develop a sense of what it actually means to know something! This is where I see Lockhart’s critics missing the boat: what you know is not what you’ve been told. What you know is what you have tried, experienced, analyzed, and criticized (wash, rinse, repeat).

On a related topic, Richard Feynman addressed something that should concern us regarding thinking we know something because it’s what we’ve been told, versus understanding what we know and don’t know.

What seems to scare a lot of people is that even after you’ve tried, experienced, analyzed, and criticized, the only answer you might come up with is, “I don’t know, and neither does anybody else.” That seems unacceptable. What we don’t know could hurt us. Well, yes, but that’s the reality we exist in. Isn’t it better to know that than to presume we know something we don’t?

The fundamental disconnect between what people think is valuable about education and what’s actually valuable in it is that they think that to ensure students understand something, it must be explicitly transmitted by teachers and instructional materials. It can’t just be left to chance in exploratory methods, because students might learn the wrong things, and/or they might not learn enough to come close to really mastering the subject. That notion is not to be dismissed out of hand, because both are very possible. Some scaffolding is necessary to make it more likely that students will arrive at powerful ideas, since most ideas people come up with are bad. The real question is finding a balance of “just enough” scaffolding to allow as many students as possible to “get it,” and not “too much,” such that it ruins the learning experience. At this point, I think that’s more an art than a science, but I could be wrong.

I’m not suggesting just using a “blank page” approach, where students get no guidance from adults on what they’re doing, as many school systems have done (which they mislabeled “constructivism,” and which doesn’t work). I don’t think Lockhart is talking about that, either. I’m not suggesting autodidactic learning, nor is Lockhart. There is structure to what he is talking about, but it has an open-ended broadness to it. That’s part of what I think scares his critics. There is no sense of having a set of requirements. Lockhart would do well to talk about thresholds that he is aiming for with students. I think that would get across that he’s not willing to entertain lazy thinking. He tries to do that by talking about how students get frustrated in his scheme of imaginative work, and that they work through it, but he needs to be more explicit about it.

He admits in the “sequel” that his first article was a lament, not a proposal of what he wants to see happen. He wanted to point out the problem, not to provide an answer just yet.

The key point I want to make is that the perception driving the refusal to take this risk is itself a big risk. It risks creating people, and therefore a society, that only knows so much, that doesn’t know how to cross the thresholds necessary for the big leaps that advance our society, and that may not even retain the basic understanding necessary to keep the advances that have already been made. A conservative, incremental approach to existing failure will not do.

Related post: The beauty of mathematics denied

— Mark Miller, https://tekkie.wordpress.com

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Alan Kay: Rethinking CS education

I thought I’d share the video below, since it has some valuable insights on what computer science should be, and what education should be, generally. It’s all integrated together in this presentation, and indeed, one of the projects of education should be integrating computer science into it, though not with the explicit purpose of creating more programmers for future jobs (students could always use it for that). Alan Kay presents a different definition of CS than is pursued in universities today. He refers to how Alan Perlis defined it (Perlis was the one who came up with the term “computer science”), which I’ll get to below.

This thinking about CS and education provides, among other things, a pathway toward reimagining how knowledge, literature, and art can be presented, organized, dissected, annotated, and shared in a way that’s more meaningful than can be achieved with old media. (For more on that, see my post “Getting beyond paper and linear media.”) As with what the late Carl Sagan often advocated, Kay’s presentation here advocates for a general model for the education of citizens, not just professional careerists.

Another reason I thought of sharing this is that several years ago I remember hearing that Kay was working on rethinking computer science curriculum. What he presents is more a suggestion: “Start thinking about it this way.” (He takes a dim view of the concept of curriculum, as it suggests setting students on a rigid path of study with the intent of creating minds with a cookie cutter, though he is in favor of classical liberal arts education, with the prescriptions that entails.)

As he says in the presentation, there’s a lot to develop in the practice of education in order to bring this to fruition.

This is from 2015:

I wrote notes below for some of the segments he talks about, just because I think his presentation bears some interpretation for many readers. He uses metaphors a lot.

The bicycle

This is an analogy for how an apparatus, or a subject, is typically modified for education. We take the optimized, adult version of something and add compensators, which make it so that beginners can use it without falling all over themselves. It’s seen as easier to present it this way, and as a skill-building experience, where in order to learn how to do something, you need to use “the real thing.” Beginners can put on a good show of using this sort of apparatus, or modified subject, but the problem is that it doesn’t teach a beginner how to become good at really using the real thing at its full potential. The compensators become a crutch. He said a better idea is to use an apparatus, or a component of the subject, that allows a beginner to get a feel for using the thing in a way that gets across significant aspects of its potential, but without the optimizations, or the way adults use it, which make it too complicated for a beginner to use under their own power. In this case, a lowbike is better. This beginner apparatus, or component, is more like the first version of the thing you’re trying to teach.

Bicycles were originally more like scooters, without pedals or a chain, where you’d sit in the seat, push it along with your legs, kind of “running while sitting,” glide, and turn by shifting your weight and turning into the turn. Once a beginner gets a feel for that, they can deal with the optimized version, or a scaled-down adult version, with pedals and a chain to drive the bike, because all that adds is the ability to get more power out of it. It doesn’t change any of the fundamentals of how you use it.

This gets to an issue of pedagogy, that learners need to learn something in components, rather than dealing with the whole thing at once. Once they learn one capacity, they can move on to the next challenge in learning the “whole thing.”

Radiation vs. nouns

He put forward a proposition for his talk, which is that he’s mixing a bunch of ideas together, because they overlap. This is a good metaphor, because most of his talk is about what we are as human beings, and how society should situate and view education. Only a little of it is actually on computer science, but all of it is germane to talking about computer science education.

He also gives a little advice to education reformers. He pointed out what’s wrong with education as it is right now, but rather than cursing it, he said one should make a deliberate effort to “build a tribe” or coalition with those who are causing the problem, or are in the midst of the problem, and suggest ways to bring them into a dignified position, perhaps by sharing in their honors, or, as his example illustrated, their shame. I draw some of this from Plato’s Cave metaphor.

Cooperation and competition in society

I once heard Kay talk about this many years ago. He said that, culturally, modern corporations are like the ancient hunter-gatherers: they exploit the resources of an area for a while, and once it’s exhausted, they move on. As a culture, they have yet to learn about democracy, which requires more of a “settlement” mentality toward what they’re doing. Here, he used an agricultural metaphor to talk about a cooperative society that creates the wealth that is then used by competitive forces within it. What he means by this is that the true wealth is the knowledge that’s ultimately used to develop new products and services. It’s not all developed inside the marketplace. He doesn’t talk about this, but I will. Even though a significant part of the wealth (as he said, you can think of it as “potential energy”) is generated inside research labs, what research labs tend to lack is knowledge of what the members of society can actually understand of this developed knowledge. That’s where the competitive forces in society come in, because they understand this a lot better. They can negotiate between how much of the new knowledge to put into a product and how much it will cost, to reach as many people as possible. This is what happened in the computer industry of the past.

I think I understand what he’s getting at with the agricultural metaphor, though perhaps I need to be filled in more. My understanding is that farmers don’t just want to reap a crop for one season. Their livelihood depends on maintaining fertility on their land. That requires not just exploiting what’s there season after season, or else you get the Dust Bowl. If instead practices are modified to allow the existing land to become fertile again, or, in the case of hunter-gathering, to aggressively manage the environment to create favorable grazing to attract game, then you can create a cycle of exploitation and care such that a group can stay in one area for a long time, without denying themselves any of the benefits of how they live. I think what he suggests is that if corporations would modify their behavior to a more settled, agricultural model, using some of their profits to contribute to educating the society in which they exist, and to funding scientific research on a private basis, that would “regenerate the soil” for economic growth, which can then fuel more research, creating a cycle of renewal. No doubt the idea he’s presenting includes the businesses that would participate in doing this. They should be allowed to profit (“reap”) from what they “sow,” but the point is they’re not the only ones who can profit. Other players in the marketplace can also exploit the knowledge that’s generated, and profit as well. That’s what’s been done in the past with private research labs.

He attributes the lack of this to culture: to not realizing that the economic model being used is not sustainable. Eventually, you use up the “soil,” and it becomes “infertile” and “blows away,” and, in the case of hunter-gathering, the “good hunting grounds” are used up.

He makes a crucial point, though, that education is not just about jobs and competitiveness. It’s also about inculcating what citizenship really means. I’m sure if he was asked to drill down on this more, he would suggest a classical education for this, along with a modified math and science curriculum that would give students a sense of what those subjects really are like.

The sense I get is that he’s advocating something closer to an Andrew Carnegie model of corporate stewardship. Carnegie, after he made his money, directed his philanthropy toward building schools and libraries. Kay would just add science labs to that mix. (He mentions this later in his talk.)

I feel it necessary to note that not all for-profit entities would be able to participate in funding these cooperative activities, because their profit margins are so slim. I don’t think that’s what he’s expecting out of this.

What we are, to the best of our knowledge

He gives three views into human mental capacity: the way we perceive (theatrical), how much we can perceive at any moment in time (1 ± 2), and how educators should regard us psychologically and mentally (as more primate and mammalian). This relates to neuroscience, and to some extent, evolutionary psychology.

The raison d’être of computer science

The primary purpose of computer science should be developing a science of systems in process, and that means all processes: mechanical processes, technological processes, social processes, biological processes, mental processes, etc. This relates to my earlier post, “Beginning the journey of becoming a computer scientist.” It’s partly about developing a new kind of mathematics for modeling processes. Alan Turing did it, with his Turing Machine concept, though he was trying to model the process of generating mathematical statements, and doing mathematical tests on them.
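To make “a new kind of mathematics for modeling processes” concrete, here is a minimal sketch of a Turing machine simulator in Python (my illustration, not Kay’s or Turing’s formulation; the function, state names, and tape encoding are all invented for the example). The machine below models one simple process, incrementing a binary number:

```python
# A minimal Turing machine simulator (an illustrative sketch).
# `program` maps (state, symbol) -> (new_state, new_symbol, move).

def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        state, tape[pos], move = program[(state, symbol)]
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Increment a binary number: scan to the rightmost digit, then carry left.
program = {
    ("start", "0"): ("start", "0", "R"),  # scan right over the digits
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),  # fell off the right end; go back
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("halt",  "1", "L"),  # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", "L"),  # overflow: write a new leading 1
}

print(run_turing_machine(program, "1011"))  # prints 1100 (11 + 1 = 12)
```

The machine itself is trivial, but it shows the shape of the idea: a process, of any kind, described as states, observations, and transitions that a simple rule-follower can carry out.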

Shipping the design

Kay talks about how programmers today don’t have access to anything like what designers in other fields have, where they’re able to model their design, simulate it, and then have a machine fabricate a prototype that they can actually show and use.

I want to clarify one point, because I don’t think he articulated it well (I found out about this because he expressed a similar thought on Quora, and I finally understood what he meant): at one point he said that students at UCLA, one of the Top 10 CS schools, use “vi terminal emulators” (he sounds like he said “bi-terminal emulators”), emulating punched cards. What he meant by this was that students are logging in to a Unix/Linux system, bringing up an X-Windows terminal window, which is 80 columns wide (hence the punched card metaphor, because punch cards were also 80 columns wide), and using the “vi” text editor (or more likely “vim,” an improved version of vi) to write their C++ code, the primary language they use.

I had an epiphany about this gulf between the tools that programmers use and the tools that engineers use, about 8 or 9 years ago. I was at a friend’s party, and there were a few mechanical engineers among the guests. I overheard a couple of them talking about the computer-aided design (CAD) software they were using. One talked about a “terrible” piece of CAD software he used at work. He said he had a much better CAD system at home, but it was incompatible with the data files that were used at work. As much as he would’ve loved to use it, he couldn’t.

The system he used at work required him to group pieces of a design together as he was building the model, and once he did that, those pieces became inflexible. He couldn’t redesign one piece, or separate one out individually from the model. He couldn’t move the pieces around on the model and have them fit. Once they were grouped, that was it. It became a static thing. In order to redesign one piece, he had to take the entire model apart, piece by piece, redesign the part, and then redesign all the other pieces in the group to make the new part fit. He hated it, and as he talked about it, he acted so disgusted that he seemed ready to throw it in the trash like a piece of garbage. On his CAD system at home, by contrast, he could separate a part from a model any time he wanted, and the system would adjust the model automatically to “make sense” of the part being missing. He could redesign the part, move it to a different part of the model, “attach it” somewhere, and the system would automatically adjust the model so that the new part would fit. The way he described it gave it a sense of fluidity, whereas the system he used at work sounded rigid.

It reminded me of the programming languages I had been using, where once relationships between entities were set up, it was really difficult to take pieces “out” and redesign them, because everything that depended on that piece would break once I redesigned it. I had to go around and redesign all the other entities that related to it, to adjust them to the redesign of the one piece.

I can’t remember how this worked, but another thing the engineer talked about was that the system at work had some sort of “binding” mechanism that seemed to associate parts by “type,” and that this was also rigid, which reminded me a lot of the strong typing system in the languages I had been using. He said the system he had at home didn’t have this, and to him, it made more sense. Again, his description lent a sense of fluidity to the experience of using it. I thought, “My goodness! Why don’t programmers think like this? Why don’t they insist that the experience be like this guy’s CAD system at home?” For the first time in my career, I had a profound sense of just what Alan Kay talks about here: that the computing field is retrograde. It has not advanced anywhere close to the kind of engineering that exists in other fields, where they would insist on this sort of experience. We accept so much less, whereas modern engineers won’t stand for it, because they know they have better options.
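To make the programming side of the analogy concrete, here is a small sketch (my own illustration; the class and method names are invented, and this is not the engineer’s CAD system or any design Kay named). The first version hard-wires a part into the whole, like the CAD system at work; the second depends only on what a part can do, so parts can be detached and swapped, like the system at home:

```python
# Rigid design: RigidReport is welded to CsvSource. Swapping the data
# source means editing RigidReport itself, and anything built around it.
class CsvSource:
    def rows(self):
        return [["a", 1], ["b", 2]]

class RigidReport:
    def __init__(self):
        self.source = CsvSource()  # hard-wired dependency

    def render(self):
        return "\n".join(str(r) for r in self.source.rows())

# Flexible design: FlexibleReport depends only on the capability
# "has rows()". Any part that answers rows() can be plugged in, and
# nothing else has to be redesigned to make the new part "fit."
class JsonSource:
    def rows(self):
        return [["c", 3]]

class FlexibleReport:
    def __init__(self, source):
        self.source = source  # any object with a rows() method

    def render(self):
        return "\n".join(str(r) for r in self.source.rows())

print(FlexibleReport(CsvSource()).render())
print(FlexibleReport(JsonSource()).render())  # swapped with no redesign
```

This is only a toy, but it captures the contrast I felt: in the rigid version, one change ripples through everything grouped around it; in the flexible one, the relationships adjust around the swapped part.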

Don’t be fooled by large efforts below threshold

Before I begin this part, I want to share a crucial point that Kay makes, because it’s one of the big ones:

Think about what thinking is. Thinking is not being logical. Thinking is choosing the environment that you’re going to think in before you start rationalizing.

Kay had something very interesting, and startling, to say about the Apollo space program, using that as a metaphor for large reform initiatives in education generally. I recently happened upon a video of testimony he gave to a House committee on educational computing back in 1982, chaired by then-congressman Al Gore, and Kay talked about this same example back then. He said that the way the Apollo rockets were designed was a “hack.” They were not the best design for space travel, but it was the most expedient for the mission of getting to the Moon by the end of the 1960s. Here, in this presentation, he talks about how each complete rocket was the height of a 45-story building (1-1/2 football fields in length), most of it high explosives, with only a tiny capsule at the top that could fit 3 astronauts. This is not a model that could scale to what was needed for space travel.

It became a huge worldwide cultural event when NASA launched it and landed men on the Moon, but Kay said it was really not a great accomplishment. I remember Rep. Gore said in jest, “The walls in this room are shaking!” The camera panned around a bit, showing pictures on the wall from NASA. How could he say such a thing?! This was the biggest cultural event of the century, perhaps in all of human history. He explained the same thing here: the Apollo program didn’t advance space travel beyond the mission to the Moon. It was not technology that would get us beyond that, though in hindsight we can say that technology like it enabled launching probes throughout the Solar System.

Now, what he means by “space travel,” I’m not sure. Is it manned missions to the outer planets, or to other star systems? Kay is someone who has always thought big. So, it’s possible he was thinking of interstellar travel. What he was talking about was the problem of propulsion, getting something powerful enough to make significant discoveries in space exploration possible. He said chemical propellant just doesn’t do it. It’s good enough for launching orbital vehicles around our planet, and launching probes, but that’s really it. The rest is just wasting time below threshold.

Another thing he explained is that large initiatives which don’t cross a meaningful threshold can be harmful to advancing an endeavor, because large initiatives come with extended expectations that the investment will continue to be used, and those expectations must be satisfied, or else there will be no cooperation in making the initial effort. The participants will want their return on investment. He said that’s what happened with NASA. The ROI had to play out, but that ruined the program, because as it did, people could see we weren’t advancing the state of the art that much in space travel, and the science being produced out of it was usually nothing to write home about. Eventually, we got what we now see: people are tired of it, and have no enthusiasm for it, because it set expectations so low.

What he was trying to do in his House committee testimony, and what he’s trying to do here, is provide some perspective that science offers, vs. our common sense notion of how “great” something is. You cannot get qualitative improvement in an endeavor without this perspective, because otherwise you have no idea what you’re dealing with, or what progress you’re building on, if any. Looking at it from a cultural perspective is not sufficient. Yes, the Moon landing was a cultural milestone, but not a scientific or engineering milestone, and that matters.

Modern science and engineering have a sense of thresholds, that there can come a point where some qualitative leap is made, a new perspective on what you’re doing is realized that is significantly better than what existed prior. He explains that once a threshold has been crossed, you can make small improvements which continue to build on that significant leap, and those improvements will stick. The effort won’t just crash back down into mediocrity, because you know something about what you have, and you value it. It’s a paradigm shift. It is so significant, you have little reason to go back to what you were doing before. From there, you can start to learn the limits of that new perspective, and at some point, make more qualitative leaps, crossing more thresholds.

“Problem-finding”/finding the goal vs. problem-solving

Problem solving begins with a current context: “We’re having a problem with X. How do we solve it?” Problem finding asks whether we even have a good idea of what the problem is. Maybe the problems we’ve been seeing arise because we haven’t solved a different problem we don’t know about yet. “Let’s spend time trying to find that.”

Another way of expressing this is a concept I’ve heard about from economists, called “opportunity cost,” which, in one context, gets across the idea that by implementing a regulation, it’s possible that better outcomes will be produced in certain categories of economic interactions, but it will also prevent certain opportunities from arising which may also be positive. The rub is that these opportunities will not be known ahead of time, and will not be realized, because the regulation creates a barrier to entry that entrepreneurs and investors will find too high to overcome. This concept is difficult to communicate to many laymen, because it sounds speculative. It encourages people cognizant of it to “consider the unseen,” to consider the possibilities that lie outside of what’s currently known. One can view “problem finding” in a similar way: not just considering the unseen, but exploring it, finding new knowledge that was previously unknown, and therefore unseen, and then reconsidering one’s notion of what the problem really is. It’s a way of expanding your knowledge base in a domain, with the key practice being that you’re not just exploring what’s already known. You’re exploring the unknown.

The story he tells about MacCready illustrates working with a good modeling system. He needed to be able to fail with his ideas a lot, before he found something that worked. So he needed a simple enough modeling system that he could fail in, where when he crashed with a particular model, it didn’t take a lot to analyze why it didn’t work, and it didn’t take a lot to put it back together differently, so he could try again.

He made another point about Xerox PARC, that it took years to find the goal, and it involved finding many other goals, and solving them in the interim. I’ve written about this history at “A history lesson on government R&D” Part 2 and Part 3. There, you can see the continuum he talks about, where ARPA/IPTO work led into Xerox PARC.

This video with Vishal Sikka and Alan Kay gives a brief illustration of this process, and what was produced out of it.

Erosion gullies

There are a couple metaphors he uses to talk about the lack of flexibility that develops in our minds the more we focus our efforts on coping, problem solving, and optimizing how we live and work in our current circumstances. One is erosion gullies. The other is the “monkey trap.”

Erosion gullies channel water along a particular path. They develop naturally as water erodes the land it flows across. These “gullies” seem to fit with what works for us, and/or what we’re used to. They develop into habits about how we see the world: beliefs, ideas which we just accept and don’t question. They allow some variation in the path that’s followed, but they provide boundaries that don’t allow the water to go outside the gully (leaving aside the exception of floods, for the sake of argument). He uses this to talk about how “channels” develop in our minds that direct our thinking. The more we focus our thoughts in that direction, the “deeper” the gully gets. Keep at it too long, and the “gully” won’t allow us to see anything different from what we’re used to. He says it may become inconceivable to think that you could climb out of it. Most everything inside the “gully” will be considered “right” thinking (no reason why), and anything outside of it will be considered “wrong” (no reason why), and even threatening. This is why he mentions that wars are fought over this. “We’re all in different erosion gullies.” They don’t meet anywhere, and my “right” is your “wrong,” and vice-versa. The differences are irreconcilable, because the idea of seeing outside of them is inconceivable.

He makes two points with this. One is that we have erosion gullies regarding the stories we tell ourselves and the beliefs we hold onto. Another is that we have erosion gullies even in our sensory perceptions, which dictate what we see and don’t see. We can see things that don’t even exist, and we typically do. He uses eyewitness testimony to illustrate this.

I think what he’s saying is that we need to watch out for these “gullies.” They develop naturally, but it would be good if we had the flexibility to eventually get out of our “gully” and form a new “channel,” which I take to be a metaphor for seeing the world differently than what we’re used to. We need a means for doing that, and what he proposes is science, since it questions what we believe and tests our ideas. We can get around our beliefs, and thereby get out of our “gullies,” to change our perspective. It doesn’t mean we abandon “gullies,” but just become aware that other “channels” (perspectives) are possible, and we can switch between them, to see better and achieve better outcomes.

Regarding the “monkey trap,” he uses it as a metaphor for us getting on a single track, grasping for what we want, not realizing that the very act of being that focused, to the exclusion of all other possibilities, is not getting us anywhere. It’s a trap, and we’d benefit by not being so dogged in pursuing goals if they’re not getting us anywhere.

Edit 2/9/2019: I highly recommend watching the last episode of a 1985 PBS/BBC mini-series by James Burke, called “The Day The Universe Changed.” The episode is called “Worlds Without End.” Burke gave a nice, long-form exposition of what our “erosion gullies” are like (he called them structures), and made clear that they are a fundamental part of how our brains work. Part of what he demonstrated was the same as what Kay demonstrates about how our perceptual systems distort reality to fit what our brains believe should exist. That’s part of what “erosion gullies” are. We can’t get away from them. Burke also explained that they are the only way we can understand things. So, the idea is not to reject them, but to understand that they exist, and to gain skill in getting out of one and into another, which is hopefully more accurate, and/or leads to greater human flourishing.

An implication of this is that we can really be certain of nothing, but an important caveat that I can’t stress enough is that we can gain useful knowledge out of that uncertainty that is reliable, within constraints (which is to say “within a structure,” to use Burke’s term, or a “gully,” using Kay’s term). This is the essence of modern science, and of our modern society.

“Fast” vs. “slow”

He gets into some neuroscience relating to how we perceive: what he called “fast” and “slow” response. You can train your mind through practice in how to use “fast” and “slow” for different activities, and they’re integral to our perception of what we’re doing, and our reactions to it, so that we don’t careen into a catastrophe, or miss important ideas in trying to deal with problems. He said that cognitive approaches to education deal with the “slow” systems, but not the “fast” ones, and that’s not enough to help students really understand a subject. As other forms of training inherently deal with the “fast” systems, educators need to think about how the “fast” systems respond to their subjects, and incorporate that into how they are taught. He anticipates this will require radically redesigning the pedagogy that’s typically used.

He says that the “fast” systems deal with the “atoms” of ideas that the “slow” system also deals with. By “atoms,” I take it he means fundamental, basic ideas or concepts for a subject. (I think of this as the “building blocks of molecules.”)

The way I take this is that the “slow” systems he’s talking about are what we use to work out hard problems. They’re what we use to sit down and ponder a problem for a while. The “fast” systems are what we use to recognize or spot possible patterns/answers quickly, a kind of quick, first-blush analysis that can make solving the problem easier. To use an example, you might be using “fast” systems now to read this text. You can do it without thinking about it. The “slow” systems are involved in interpreting what I’m saying, generating ideas that occur to you as you read it.

This is just me, but “fast” sounds like what we’d call “intuition”: some of the thinking has already been done before we use the “slow” systems to solve the rest. It’s a thought process that has already worked some things out before we consciously engage.

Science

This is the clearest expression I’ve heard Kay make about what science actually is, not what most people think it is. He’s talked about it before in other ways, but he just comes right out and says it in this presentation, and I hope people watching it really take it in, because I see too often that people take what they’ve been taught about what science is in school and keep reiterating it for the rest of their lives. This goes on not only with people who love following what scientists say, but also in our societal institutions that we happen to associate with science.

…[Francis] Bacon wrote a book called “The Novum Organum” in 1620, where he said, “Hey, look. Our brains are messed up. We have bad brains.” He called the ways of messing up “idols.” He said we get serious errors because of our genetics. We get serious errors because of the culture we’re in. We get serious errors because of the languages we use. They don’t represent what’s actually out there. We get serious errors from the way that academia hangs on to bad ideas, and teaches them over again. These are his four “idols.” Anyone ever read Bacon? He said we need something to get around our bad brains! A set of heuristics, is the term we’d use today.

What he called for was … science, because that’s what “Novum Organum,” the rest of the title, was: “A new way of dealing with knowledge.”

Science is not the knowledge, because knowledge is in this context. What science is is a negotiation between what’s out there and what we can represent.

This is the big idea. This is the idea they don’t teach in school. This is the idea we should be teaching. It’s one of the biggest ideas of all time.

It isn’t the knowledge. It’s the relationship, because what’s out there is only knowable by a phenomena that is being filtered in every possible way. We don’t even know if our brain is capable of representing the stuff.

So, to think about science as the truth is completely wrong! It’s not the right way to look at it. But if you think about it as a negotiation between the best you can do right now and stuff that’s out there, where you’re not completely sure, you’re in a very strong position.

Science has been the most important, powerful thought system humans have ever invented, because it gave up the idea of truth, and it substituted for it a thousand variations of false, some of which are incredibly powerful. This is the big idea.

So, if we’re going to think about computing, this is one way … of thinking about, “Wow! Computers!” They are representers. We can learn about representations. We can simulate ideas. We can get a very good–much better sense of dealing with thinking about these complexities.

“Getting there”

The last part demonstrates what I’ve seen with exploration. You start out thinking you’re going to go from Point A to Point B, but you take diversions, pathways that are interesting but related to your initial search, because you find that it’s not a straight path from Point A to Point B. It’s not as straightforward as you thought. So, you try other ways of getting there. It is a kind of problem solving, but it’s really what Kay called “problem finding,” or finding the goal. Along the way, the goal becomes finding a better way to get to the goal, and you find problems that are worth solving that you didn’t anticipate at all when you first got going. In that process, you’ll find things you didn’t expect to learn, but which are really valuable to your knowledge base. In your pursuit of a better way to get to your destination, you might even cross a threshold, and find that your initial goal is no longer worth pursuing, but that there are better goals to pursue in this new perception you’ve obtained.

Related posts:

Reviving programming as literacy

The necessary ingredients for computer science

Alan Kay’s advice to computer science students

Alan Kay: Basic reading material for computer science students (and programmers)

—Mark Miller, https://tekkie.wordpress.com

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

A first stab at understanding what education is missing

Several years ago, while I was taking in as much of Alan Kay’s philosophy as I could, I remember him saying that he wanted to see science integrated into education. He felt it necessary to clarify this: he didn’t mean teaching everything the way people think of science, as experimental proofs of what’s true, but rather science in the sense of its Latin root, scientia, “knowledge” (from scire, “to know”). In other words, make practices of science the central operating principle of how students of all ages learn, with the objective of learning how to know, and situating knowledge within models of epistemology (how one knows what one knows). Back when I heard this, I didn’t have a good sense of what he meant, but I think I have a better sense now.

Kay has characterized the concept of knowledge that is taught in education, as we typically know it, as “memory.” Students are expected to take in facts and concepts which are delivered to them, and then their retention is tested. This is carried out in history and science curricula. In arithmetic, they are taught to remember methods of applying operations to compute numbers. In mathematics they are taught to memorize or understand rules for symbol manipulation, and then are asked to apply the rules properly. In rare instances, they’re tasked with understanding concepts, not just applying rules.

Edit 9/16/2015: I updated the paragraph below to flesh out some ideas, so as to minimize misunderstanding.

What I realized recently is that what’s missing from this construction of education are the ideas of being skeptical and/or critical of one’s own knowledge, and of venturing into the unknown and trying to make something known out of it, based on analysis of evidence, with the goal of achieving greater accuracy to what’s really there. Secondly, it also misses creating a practice of improving on notions of what is known, through practices of investigation and inquiry. These are qualities of science, but they’re not only applicable to what we think of as the sciences; they also apply to what we think of as non-scientific subjects, such as history, mathematics, and the arts, to name just a few. Instead, the focus is on transmitting what’s deemed to be known. There is scant practice in venturing into the unknown, or in improving on what’s known. After all, who made what is known, as far as a curriculum is concerned, but other people, who may or may not have done the job of analyzing what is known very well? This isn’t to say that students shouldn’t be exposed to notions of what is known, but I think they ought to also be taught to question it, be given the means and opportunity to experience what it’s like to try to improve on its accuracy, and realize its significance to other questions and issues. Furthermore, that effort on the part of the student must be open to scrutiny and rational, principled criticism by teachers and fellow students. I think it would even be good to have professionals in the field brought into the process to do the same, once students reach some maturity. Knowledge comes not just through the effort to improve, but through arguments pro and con on that effort.

A second ingredient Kay has talked about in recent years is the need for outlooks. He said in a presentation at Kyoto University in 2009:

What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

Knowledge is ‘silver,’ but outlook is ‘gold.’ I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that.

He has called outlooks “brainlets,” which have been developed over time for getting around our misperceptions, so we can see more clearly. One such outlook is science. A couple of others are logic and mathematics. And there are more.

The education system we have has some generality to it, but as a society we have put it to a very utilitarian task, and as I think is accurately reflected in the quote from Kay, we rob ourselves of the ability to gain important insights on our work, our worth, and our world by doing this. The sense I get about this perspective is that as a society, we use education better when we use it to develop how to think and perceive, not to develop utilitarian skills that apply in an A-to-B fashion to some future employment. This isn’t to say that skills used, and needed in real-world jobs are unimportant. Quite the contrary, but really, academic school is no substitute for learning job skills on the job. They try in some ways to substitute for it, but I have not seen one that has succeeded.

What I’ll call “skills of mind” are different from “skills of work.” Both are important, and I have little doubt that the former can be useful to some employers, but the point is it’s useful to people as members of society, because outlooks can help people understand the industry they work in, the economy, society, and world they live in better than they can without them. I know, because I have experienced the contrast in perception between those who use powerful outlooks to understand societal issues, and those who don’t, who fumble into mishaps, never understanding why, always blaming outside forces for it. What pains me is that I know we are capable of great things, but in order to achieve them, we cannot just apply what seems like common sense to every issue we face. That results in sound and fury, signifying nothing. To achieve great things, we must be able to see better than the skills with which we were born and raised can afford us.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.