Google+ is gone

It shut down April 2nd. I got word of this last fall. Google originally said they were going to keep it up for another year, but then chopped that timeline in half.

I’ve gotten used to the fact that many Google services are temporary. I had been on Google+ since 2012, it seems. Its best feature was its groups. It wasn’t as nice a platform as Facebook, and the groups I was interested in were not very active, but I had some of my best conversations on there. The experience was really great once I turned off G+’s ability to suggest other posts outside of my interests. There was quite a bit of “trash” on there that I would just as soon ignore, and a nice thing about G+ is that it allowed me to do that.

I’ve preserved most of my G+ posts on “Mark’s favorites,” a WordPress blog I set up out of frustration with Facebook many years ago. I’ve been using it as a “dumping ground” for stuff that’s of some interest to me, but that I don’t think the groups and people I’ve met on other social media platforms would care about.

I expect I’ll be posting most new stuff on here, and occasionally posting small items of interest on “Mark’s favorites”.

On improving a computing model

I’ve been struggling for at least a year to come up with a way to talk about this that makes some sort of sense, since I would like to help people find somewhere to start in understanding what it means to get on the track of “computer science as an aspirational goal.” For me, the first steps have been to understand that some tools have been invented that can be used in this work, and to work on a conceptual model of a processor. Part of that is beginning to understand what constitutes a processor, and so it’s important to do some study of processors, even if it’s just at a conceptual level. That doesn’t mean you have to start out understanding the electronics, though that can also be interesting. It could be (as has been the case with me) looking at the logic that supports the machine instructions you use with the processor.

The existing tools are ones you’ve heard about, if you’ve spent any time looking at what makes up traditional computing systems, and what developers use: operating systems, compilers, interpreters, text editors, debuggers, and probably processor/logic board emulators. These get you up to a level where, rather than focusing on transistor logic, or machine code, you can think about building processors out of other conceptual processors, with the idea being that you can build something that carries out processes that map to clear, consistent, formal meanings in your mind.

What started me on this was referring back to a brief discussion I had with Alan Kay, back in the late 2000s, on what his conception of programming was. His basic response to my question was, “Well, I don’t think I have a real conception of programming”. In part of this same response he said,

Smalltalk partly came from “20 examples that had to be done much better”, and “take the most difficult thing you will have to do, do it, and then build everything else out of that architecture”.

He emphasized the importance of architecture, saying that “most of programming is really architectural design.”

It took me several years to understand what he meant by that, but once I did, I wanted to start on figuring out how to do it.

A suggestion he made was to work on something big. He suggested taking a look at an operating system, or a programming language, or a word processor, and then work on trying to find a better architecture for doing what the original thing did.

I got started on this about 5 years ago, as I remember. As I look back on it, the process of doing this reminds me a lot of what Kay said at the end of the video I talked about in Alan Kay: Rethinking CS education.

He talked about a process that looks like a straight line at first, but then you run into some obstacles. In my experience, these obstacles are your own ignorance. The more you explore, the more you discover some interesting questions about your own thinking, and some other goals that might be of value, which take you some distance away from the original goal you had, but which you feel motivated to explore. You make some progress on those, more than on what you started out trying to do, but you still run into other obstacles, again from your own ignorance. When you explore some more, you find other goals that, again, take you farther from your original goal, and you make even more progress on them. If you think about it, everything you’re doing is somehow related to the goal, but it can give you some concern, because you may think you’re wasting time on trivial goals rather than the one you had. The way I’ve thought about this is as “backing up from the problem I thought I was trying to solve,” and looking at more basic problems. It seems to me the “terrain” Kay was showing in the virtual map was really the contours of one’s own ignorance.

Describing how I got into this (you don’t have to take this path): I looked at a very simple operating system, from a computer I used when I was a teenager. I looked at a disassembly of its memory, since it was built into ROM. The first realization from looking at the code was that an operating system really is just a program, or a program with an attached library of code that the hardware and user software jump into temporarily to accomplish standard tasks. The other realization came from answering the question, “How does this program get started?” (It turns out microprocessors have a pre-programmed boot address in the computer’s address space, which is mapped to boot code in ROM, where they begin fetching and executing instructions once you power on the processor.) In modern computers, the boot ROM is what’s traditionally called a BIOS (Basic Input/Output System), but in the computer I was studying, it was the OS.
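To make that concrete, here’s a toy sketch in Scheme-style Lisp of “How does this program get started?”, using the 6502’s convention as the example (it reads its reset vector from addresses $FFFC and $FFFD); the ROM contents below are made up, not taken from the machine I studied.

  (define memory (make-vector #x10000 0))   ; a 64K address space, initially zeroed
  (vector-set! memory #xFFFC #x00)          ; low byte of the boot address, in "ROM"
  (vector-set! memory #xFFFD #xE0)          ; high byte of the boot address

  (define (power-on mem)
    ;; the processor fetches the 16-bit reset vector and starts executing there
    (+ (vector-ref mem #xFFFC)
       (* 256 (vector-ref mem #xFFFD))))

  (power-on memory)   ; => 57344, i.e. #xE000: execution begins in ROM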

The basic outline of my project was to study this OS, and try to see if I could improve on its architecture, reimplement it in the new architecture, and then “make everything else” out of that architecture.

I quickly realized a couple of things. The first was that I didn’t want to be dealing just in assembly language for the whole project. The second was a sneaking feeling, of a kind I’ve learned to listen to in my learning process, that (in this case) I was ignorant of the underlying processor architecture, and I needed to learn that. My answer to both of these problems was that I needed to model some higher-level process concepts (to get out of working in assembly, yet still use it to generate a new implementation out of a higher-level model, that is, a compiler), and I needed to construct a conceptual model of the CPU, so I could gain some understanding of the machine the OS runs on. In my learning process, I’ve found it most helpful to understand what’s going on underneath what I’m using to get things done, because it actually does matter for solving problems.

I needed a tool for both creating a programming model (for my higher-level process concepts), and for modeling the CPU. I chose Lisp. It felt like a natural candidate for this, though this is not a prescription for what you should use. Do some exploration, and make your own choices.
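To give a flavor of what I mean by a conceptual model of a processor, here’s a minimal sketch in Scheme-style Lisp: a fetch-decode-execute loop over a vector of instructions. The instruction set (load, add, halt) and the single accumulator are made up for illustration; they’re not the real machine I’m modeling.

  (define (run program pc acc)
    (let ((instr (vector-ref program pc)))
      (case (car instr)
        ((load) (run program (+ pc 1) (cadr instr)))          ; acc <- constant
        ((add)  (run program (+ pc 1) (+ acc (cadr instr))))  ; acc <- acc + constant
        ((halt) acc)                                          ; stop; return the accumulator
        (else   (error "unknown instruction" instr)))))

  ;; Example: load 2, add 3, halt  =>  5
  (run (vector '(load 2) '(add 3) '(halt)) 0 0)

Even a toy like this makes the vocabulary of the real thing (program counter, instruction fetch, decode, registers) something you can manipulate and extend, which is the point of building the model.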

I started out using an old dialect of Lisp that was watered down to run on the old computer I was studying. I had a goal with it that I’ve since learned was not a good idea: I wanted to see if I could use the old computer to write its own OS, using this method. What I’ve since learned through some historical research is that when Kay talked about what they did at Xerox PARC, they definitely didn’t use the method I’m describing, anything but! They used more powerful hardware to model the computer they were building, in order to write its system software. Nevertheless, I stuck with the setup I had, because I was not ready to do that. I found a different benefit from using it, though: the Lisp interpreter I’ve been using starts out with so little that, in order to get some basic constructs I wanted, I had to write them myself. I found that I learned Lisp better this way than I would have in a modern Lisp environment, because there I kept being tempted to use what was already available, rather than “learn by doing.”
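To illustrate what “learn by doing” looked like (the examples below are my own, in Scheme-style Lisp; the constructs the old interpreter was actually missing were different ones): when something as basic as map isn’t built in, you end up writing it yourself, and you understand it better for having done so.

  (define (my-map f lst)
    (if (null? lst)
        '()
        (cons (f (car lst)) (my-map f (cdr lst)))))

  (define (my-filter keep? lst)
    (cond ((null? lst) '())
          ((keep? (car lst)) (cons (car lst) (my-filter keep? (cdr lst))))
          (else (my-filter keep? (cdr lst)))))

  (my-map (lambda (x) (* x x)) '(1 2 3))   ; => (1 4 9)
  (my-filter odd? '(1 2 3 4 5))            ; => (1 3 5)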

I had to step back from my goal, again, to learn some basics of how to gain power out of Lisp. I have not regretted this decision. In the process, I’ve been learning about how to improve on the Lisp environment I’ve been using. So, it’s become a multi-stage process. I’ve put the idea of improving the OS on hold, and focused on how I can improve the Lisp environment, to make it into something that’s powerful for me, in a way that I can both understand, and in a way that serves the larger goal I have with understanding the target system.

As I’ve worked with this underpowered setup, I’ve been getting indications now and then that I’m going to need to move up to a more powerful programming environment (probably a different Lisp), because I’m inevitably going to outgrow the current implementation’s limitations.

The deviations I’m describing here are about first pursuing a goal as a motivator for taking some action, realizing that I had more to learn about the means of achieving the goal, putting the initial goal on hold, and spending time learning the means. It’s a process of motivated exploration. It can be a challenge at times to keep my eye on the initial goal that got me started on this whole thing, which I still have, because I get so focused on learning basic stuff. I just have to remind myself from time to time. It’s like wanting to climb Everest when you have no experience climbing mountains. There are some basic skills you have to develop, some basic equipment you have to get, and physical conditioning you have to do first. Just focusing on that can seem like the entire goal after a while. Part of the challenge is reminding yourself where you want to get to, without getting impatient about it. Patience is something I’ve had to focus on deliberately, because I can feel like I’m not moving fast enough, given how long it’s taken me to get to where I am.

I think back to how I started out programming for the first time, when I was about 12 years old. The process of doing this has felt a lot like that, and I’ve used that experience as a reminder of what I’m doing, as opposed to the expectations I carry from the more experienced context I’m trying to give up, from my many years in the “pop culture” of computing. When I was first learning programming, it took a few years before I started feeling confident in my skill, where I could solve problems more and more by myself, without a lot of hand-holding from someone more experienced.

I remind myself of a quote from “The Early History of Smalltalk”:

A twentieth century problem is that technology has become too “easy”. When it was hard to do anything whether good or bad, enough time was taken so that the result was usually good. Now we can make things almost trivially, especially in software, but most of the designs are trivial as well. This is inverse vandalism: the making of things because you can. Couple this to even less sophisticated buyers and you have generated an exploitation marketplace similar to that set up for teenagers. A counter to this is to generate enormous dissatisfaction with one’s designs using the entire history of human art as a standard and goal. Then the trick is to decouple the dissatisfaction from self worth–otherwise it is either too depressing or one stops too soon with trivial results.

What I see myself doing with this attempt is gaining practice in improving computing models. I’ve selected an existing model that is somewhat familiar to me as my platform to practice on, but it contains some elements that are utterly unfamiliar. It contains enough context that I can grasp basic building blocks quickly, making it less intimidating. (Part of this approach is that I’m working with my own psychology: what will motivate me to keep at this.) This approach has also enabled me to push aside the paraphernalia of modern systems that I find distracting in this process, for the purpose of gaining practice. (I haven’t thrown away modern systems for most things. I’m writing this blog from one.)

You cannot plan discovery. So, I’m focused on exploring, improving my understanding, being receptive, and questioning assumptions. I’ve found that being open to certain influences, even ones that take me away from goals in an unplanned fashion (just from events in life forcing me to reprioritize my activity), can be very helpful in thinking new thoughts. That’s the goal: improving the way I think, learn, and see.

Coming to grips with a frustrating truth

I’d heard about Mensa many years ago, and for many years I was kind of interested in it. Every once in a while I’d see “intelligence quizzes,” which were supposed to get one interested in the group (it worked). Mensa requires an IQ test, and a minimum score, to join. I looked at some samples of the discussions members had on topics related to societal issues, though, and it all looked pretty mundane. It wasn’t the intelligent discussion I expected, which was surprising. Later, I found out I’m below their entrance threshold. Based on that peek, though, I don’t feel like I’m missing anything.

I came upon the following video by Stefan Molyneux recently (it was made in 2012), and it seems to explain what I’ve been seeing generally for more than ten years in the same sort of societal discussions (though I can’t say what the IQ level of the participants was). I’ve frequently run into people who seem to have some proactive mental ability, and yet what they come out with when thinking about the society they live in is way below par. I see it on Quora all the time. Most of the answers to political questions are the dregs of the site–really bad. Until I saw this analysis, I had no explanation for this inconsistency, other than perhaps that certain people with less than stellar intelligence are drawn to the political questions. Molyneux said it’s the result of a kind of cognitive abuse.

The reason I’m bothering with this at all is seeing what he described play out has bothered me for many years, though I’ve assumed it’s due to popular ignorance. It’s part of what’s driven my desire to get into education, though now I feel I have to be more humble about whether that’s really a good answer to this.

I found his explanation of the rationale for this confusing at first. I’ve needed to listen to it a few times, and take notes. I’ll attempt to summarize.

This is a generalization, but the point is to apply it to people who are more intelligent than average, and who refuse to allow inquiry into their beliefs about society:

Children who are in the “gifted” categories of IQ are told a certain moral message when they’re young, about how they are to behave. However, when those same children try to apply that morality to their parents, and the adults around them–in other words, demand consistency–they are punished, humiliated, and/or shamed for it. They eventually figure out that morality has been used to control them, not teach them. (This gave me the thought, based on other material by Molyneux, that perhaps this is one reason atheism is so prevalent among this IQ category. Rather than morality being a tool to uplift people to a higher state of being, it’s seen purely as a cynical means of control, which they understandably reject.) As soon as they try to treat morality as morality, in other words, as a universal set of rules by which everyone in their society is governed, they are attacked as immoral, uncaring, brutish, wrong, and are slandered. This is traumatic to a young mind trying to make sense of their world.

The contradiction they encounter is they’re told they’re evil for not following these rules as a child, and then they’re told they’re evil for attempting to apply those same rules to other adults when they grow up. They are punished for attempting to tell the truth, even though they were told when they were young that telling the truth is a virtue (and that lying is evil). If they attempt to tell the truth about their society, they are punished by the same adults who cared for them.

The image he paints is, to me, analogous to Pavlov’s dog, where all of its attempts to follow its instincts in a productive way are punished, leading to it quivering in a corner, confused, afraid, and despondent, unable to respond at all in the presence of food. In this case, all attempts to apply a moral code consistently are punished, leading to a disabled sense of social morality, and a rejection of inquiry into this battered belief system, in an attempt to protect the wound.

Molyneux comes to an ugly truth of this situation. This inability to question one’s societal beliefs is the product of a master-slave society: In slave societies, rules are applied to the slaves that are not applied to the masters. They operate by a different set of rules. Morality that is dispensed to the ignorant is used as a cynical cover for control. Those subjected to this inconsistent reality deal with it by trying their best to not look at it. Instead of pushing through the shaming, and demanding consistency, risking the rejection that entails from the society in which they grew up, they blindly accept the master-slave dichotomy, and say, “That’s just the way it is.” Those who question it are attacked by these same people, because engaging in that leads them back to the pain they suffered when they did that themselves.

He also addressed a psychological phenomenon called “projection.” He said,

… they must take the horrors of their own soul and project them upon the naive and honest questioner. Every term that is used as an attack against you for engaging in these conversations is an apt and deeply known description of their own souls, or what’s left of them.

I also found this summary video helpful in understanding motivated reasoning, why we’re wired to reject rational thought, and evidence, and to prefer beliefs that are inculcated and reinforced through our social groups, and their authority figures.

Molyneux sort of addressed the evolutionary reasons for it, but I have liked Jonathan Haidt’s explanation for it better, since he gets into the group dynamic of shared beliefs, and justifies them, saying that they played some role in the survival of our species, up until recently: Those who had this group-belief trait lived to reproduce. Those who did not died out. That isn’t to say that it’s essential to our survival today, but that it deserves our respectful treatment, since it was a trait that got us to where we are.

What’s also interesting is that Molyneux relates the trait of motivated reasoning to the practice of science, quoting Max Planck (I’ve heard scientists talk about this) in saying that science really only advances when the older generation of scientists dies. This creates room for other ideas, supported by evidence, and critical analysis, to flourish, perhaps setting a new paradigm. If so, it becomes a new sort of orthodoxy in a scientific discipline for another generation or so, until it (the orthodoxy), too, dies away with the scientists who came up with it, repeating the cycle.

Related posts:

Psychic elephants and evolutionary psychology

“What a waste it is to lose one’s mind”

The dangerous brew of politics, religion, technology, and the good name of science

Alan Kay’s advice to computer science students

I’m once again going to quote a Quora answer verbatim, because I think there’s a lot of value in it. Alan Kay answered “What book(s) would you recommend to a computer science student?”

My basic answer is: read a lot outside of the computer field.

It is worth trying to understand what “science” means in “Computer Science” and what “engineering” means in “Software Engineering”.

“Science” in its modern sense means trying to reconcile phenomena into models that are as explanatory and predictive as possible. There can be “Sciences of the Artificial” (see the important book by Herb Simon). One way to think of this is that if people (especially engineers) build bridges, then these present phenomena for scientists to understand by making models. The fun of this is that the science will almost always indicate new and better ways to make bridges, so friendly collegial relationships between scientists and engineers can really make progress.

An example in computing is John McCarthy thinking about computers in the late 50s, the really large range of things they can do (maybe AI?), and creating a model of computing as a language that could serve as its own metalanguage (LISP). My favorite book on this is “The Lisp 1.5 Manual” from MIT Press (written by McCarthy et al.). The first part of this book is still a classic on how to think in general, and about computing in particular.

(A later book inspired by all this is “Smalltalk: the language and its implementation” (by Adele Goldberg and Dave Robson — the “Blue Book”). Also contains a complete implementation in Smalltalk written in itself, etc.)

A still later book that I like a lot that is “real computer science” is “The Art of the Metaobject Protocol” by Kiczales, des Rivières, and Bobrow. The early part of this book especially is quite illuminating.

An early thesis (1970) that is real computer science is “A Control Definition Language” by Dave Fisher (CMU).

Perhaps my favorite book about computing might seem far afield, but it is wonderful and the writing is wonderful: “Computation: Finite and Infinite Machines” by Marvin Minsky (ca 1967). Just a beautiful book.

To help with “science”, I usually recommend a variety of books: Newton’s “Principia” (the ultimate science book and founding document), “The Molecular Biology of the Cell” by Bruce Alberts, et al. There’s a book of Maxwell’s papers, etc.

You need to wind up realizing that “Computer Science” is still an aspiration, not an accomplished field.

“Engineering” means “designing and building things in principled expert ways”. The level of this is very high for the engineering fields of Civil, Mechanical, Electrical, Biological, etc. Engineering. These should be studied carefully to get the larger sense of what it means to do “engineering”.

To help with “engineering” try reading about the making of the Empire State Building, Boulder Dam, the Golden Gate Bridge, etc. I like “Now It Can Be Told” by Maj Gen Leslie Groves (the honcho on the Manhattan Project). He’s an engineer, and this history is very much not from the Los Alamos POV (which he also was in charge of) but about Oak Ridge, Hanford, etc and the amazing mobilization of 600,000 plus people and lots of money to do the engineering necessary to create the materials needed.

Then think about where “software engineering” isn’t — again, you need to wind up realizing that “software engineering” in any “engineering” sense is at best still an aspiration not a done deal.

Computing is also a kind of “media” and “intermediary”, so you need to understand what these do for us and to us. Read Marshall McLuhan, Neil Postman, Innis, Havelock, etc. Mark Miller (comment below) just reminded me that I’ve recommended “Technics and Human Development,” Vol. 1 of Lewis Mumford’s “The Myth of the Machine” series, as a great predecessor of both the media environment ideas and of an important facet of anthropology.

I don’t know of a great anthropology book (maybe someone can suggest), but the understanding of human beings is the most important thing to accomplish in your education. In a comment below, Matt Gaboury recommended “Human Universals” (I think he means the book by Donald Brown.) This book certainly should be read and understood — it is not in the same class as books about a field, like “Molecular Biology of the Cell”.

I like Ed Tufte’s books on “Envisioning Information”: read all of them.

Bertrand Russell’s books are still very good just for thinking more deeply about “this and that” (“A History of Western Philosophy” is still terrific).

Multiple points of view are the only way to fight against human desires to believe and create religions, so my favorite current history book to read is: “Destiny Disrupted” by Tamim Ansary. He grew up in Afghanistan, moved to the US at age 16, and is able to write a clear illuminating history of the world from the time of Mohammed from the point of view of this world, and without special pleading.

Note: I checked out “The Art of the Metaobject Protocol,” and it recommended that if you’re unfamiliar with CLOS (the Common Lisp Object System), you learn that before getting into this book. It recommended “Object-Oriented Programming in Common LISP: A Programmer’s Guide to CLOS,” by Sonya Keene.

As I’ve been taking this track, reconsidering what I think about computer science, software engineering, and what I can do to advance society with computing as a part of that, I’ve been realizing that one of Kay’s points is very important. If we’re going to make things for people to use, we need to understand people well: specifically, how the human system interfaces with computer systems, and how that interaction can help people better their condition in the larger system we call “society” (this includes its economy, but it should include many other social facets), and more broadly, in the systems of our planet. This means spending a significant amount of time on things besides computer stuff. I can tell you from experience, this is a strange feeling, because I feel like I should be spending more time on technical subjects, since that’s what I’ve done in the past, and it’s been the stock in trade of my profession. I want that to be part of my thinking, but not what I eat and sleep.

Related material:

The necessary ingredients of computer science

Edit 2/9/2019: I highly recommend people watch Episode 10 of the 1985 TV series “The Day The Universe Changed” with James Burke, called “Worlds Without End.”

The original topic Alan Kay wrote about was in answer to a question about books to read, but I thought I should include this, since it’s the clearest introduction I’ve seen to scientific epistemology.

In an earlier post, “Alan Kay: Rethinking CS education,” I cited a presentation where Kay talked about “erosion gullies” in our minds, how they channel “water” (thoughts, ideas) to go only in particular directions. Burke gave an excellent demonstration of these mental gullies, explaining that they’re a natural part of how our brains work, and there is no getting away from them. Secondly, he pointed out that they’re the only way we can understand anything. So, the idea is not to reject these gullies, but to be aware that they exist, and to gain skill in getting out of one, and into a different one.

What I hope people draw out of what Burke said is that while you can’t really be certain of anything, that doesn’t mean you can’t draw knowledge that’s useful, reliable, and virtuous to exercise in the world, out of uncertainty. That is the essence of our modern science, and modern society. It is why we are able to live as we do, and how we can advance to a better society. I’ll add that rejecting this notion is to reject modernity, and any future worth having.

The ending of this episode is remarkable to watch in hindsight, 34 years on, because he addressed the impact that the computer would have on our social construct that we call society; its epistemological, and political implications, which I think are quite accurate. He put more of a positive gloss on it than I would, at this point in history, though his analysis was tinged with some sense of uncertainty about the future (at that point in time) that is worth contemplating in our present.

Trying to arrive at clarity on the state of higher education

I’ve been exploring issues in higher education over the last 10 years, trying to understand what I’ve been seeing in both the political discourse of the Western world over the past 12 or so years and campus politics over the same period, which has had severe consequences for the humanities, and has been making its way into the academic fields of math and science.

Rob Montz, a Brown University graduate, has done the best job I’ve seen anywhere of both getting deep into a severe problem at many universities, and condensing his analysis down to something that many people can understand. I think there is more to the problem than what Montz has pointed out, but he has put his finger on something that I haven’t seen anyone else talk about, and it really has to do with this question:

Are students going to school to learn something, or are they just there to get a piece of paper that they think will increase their socio-economic status?

I’m not objecting if they are there to do both. What I’m asking is whether education is even in the equation in students’ experience. You would think that it is, since we think of universities as places of learning. However, think back to your college experience (if you went to college). Do you remember that the popular stereotype was that they were places to “party, drink, and lose your virginity”? That didn’t have to be your experience, but there was some truth to that perception. What Montz revealed is that that perception has grown into the main expectation of going to college, not a “side benefit,” though students still expect that there’s a degree at the end of it. There’s just no expectation that they’ll need to learn anything to get it.

What I asked above is a serious question, because it has profound implications for the kind of society we will live in going forward. A student body and education system that does not care about instilling the ideas that have made Western civilization possible will not live in a free society in the future. This means that we can forget about our rights to live our lives as we see fit, and we can forget about having our own ideas about the world. Further, we can forget about having the privilege of living in a society where our ideas can be corrected through considering challenges to them. That won’t be allowed. How would ideas change instead, you might ask? Look at history. It’s bloody…

We are not born with the ideas that make a free society possible. They are passed down from generation to generation. If that process is interrupted, because society has taken them for granted, and doesn’t want the responsibility of learning them, then we can expect a more autocratic society and political system to result, because that is the kind of society that humans naturally create.

A key thing that Montz talked about is what I’ve heard Alan Kay raise a red flag about, which is that universities are not “guarding their subjects,” by which he means they are not guarding what it means to be educated. They are, in effect, selling undergraduate degrees to the highest bidders, because that’s all most parents and students want. An indicator of this that Kay used is that literacy scores of college graduates have been declining rapidly for decades. In other words, you can get a college degree, and not know how to comprehend anything more complex than street signs, or labels on a prescription bottle, and not know how to critically analyze an argument, or debate it. These are not just skills that are “nice to have.” They are essential to a functioning free society.

What Montz revealed is if you don’t want to learn anything, that’s fine by many universities now. Not that it’s fine with many of the professors who teach there, but it’s fine by the administrations who run them. They’re happy to get your tuition.

Montz has been producing a series of videos on this issue, coming out with a new one every year, starting in 2016. He started his journey at his alma mater, Brown University:

In the next installment, Montz went to Yale, and revealed that it’s turning into a country club. While, as he said, there are still pockets of genuine intellectual discipline taking place there, you’re more likely to find that young people are there for the extracurricular activities, not to learn something and develop their minds and character. What’s scandalous about this is that the school is not resisting this. It is encouraging it!

The thing is, this is not just happening at Yale. It is developing at universities across the country. There are even signs of it at my alma mater, Colorado State University (see the video below).

Next, Montz went to the University of Chicago to see why they’re not quite turning out like many other universities. They still believe in the pursuit of truth, though there is a question that lingers about how long that’s going to hold up.

Canadian Prof. Gad Saad describing events at Colorado State University, from February 2017:

Dr. Frank Furedi talked about the attitudes of students towards principles that in the West are considered fundamental, and how universities are treating students. In short, as the video title says, students “think freedom is not a big deal.” They prefer comfort, because they’re not there to learn. Universities used to expect that students were entering adulthood, and were expected to take on more adult responsibilities while in school. Now, he says, students are treated like “biologically mature … clever children.” Others have put this another way, that an undergraduate education now is just an extension of high school. It’s not up to par with what a university education was, say, when I went to college.

Dr. Jonathan Haidt has revealed that another part of what has been happening is that universities that used to be devoted to a search for truth are in the process of being converted into a new kind of religious school that is pursuing a doctrine of politically defined social justice. Parents who are sending their children off to college have a choice to make about what they want them to pursue (this goes beyond the student’s major). However, universities are not always up front about this. They may in their promotions talk about education in the traditional way, but in fact may be inculcating this doctrine. Haidt has set up a website called Heterodox Academy to show which universities are in fact pursuing which goals, in the hopes that it will help people choose accordingly. The site doesn’t just show universities that Haidt and his colleagues might prefer. If you want a religious education, of whatever variety, they’ll list schools that do that as well, but they rank them according to principles outlined by the University of Chicago re. whether they are pursuing truth, or some kind of orthodoxy.

For people who are trying to understand this subject, I highly recommend an old book, written by the late Allan Bloom, called “The Closing of the American Mind.” He doesn’t exactly describe what’s going on now on college campuses, because he wrote it over 30 years ago, but it will sound familiar. I found it really valuable for contrasting what the philosophy of the university was at the time that he wrote the book, as opposed to what it had been in the prior decades. He preferred the latter, though he acknowledged that it was incomplete, and needed revision. He suggested a course of action for revising what the university should be, which apparently modern academics have completely rejected. I still think it’s worth trying.

I don’t know if this will come to fruition in my lifetime, but I can see the possibility that someday the physical university as we have known it will disappear, and accreditation will take a new form, because of the internet. In that way, a recovery of the quality of higher education might be possible. I have my doubts that getting the old form back is possible. It may not even be desirable.

Related articles:

Brendan O’Neill on the zeitgeist of the anti-modern West

For high school students, and their parents: Some things to consider before applying for college

Edit 11/6/2018: What Happened to Our Universities?, by Philip Carl Salzman

 

How I lost weight

No, this is not an ad for a weight-loss product. This is a personal testimonial of how I learned to control my weight. I wanted to share it, because I think there are some facts that anyone who’s trying to lose weight should know. It turned out to be so simple, I’ve been surprised that I didn’t hear this from people more often. Here is the most important thing you need to know if you want to lose weight:

1,400 calories a day.

That’s it.

To lose weight, reduce your rate of caloric intake from a baseline of 2,000 calories per day, which is roughly what your body uses every day, to about 1,400. This is not a hard number. Just consider it a target you’re shooting at. Sometimes you’ll go over. Sometimes you’ll go under, but try to hit it each day. Don’t worry if you go over it. If you go 100 or 200 over, you’ll still lose weight, just probably not as fast. The idea is you’re limiting the rate that you consume calories.

Note: I’m assuming this is information for adults. I have no idea what the appropriate caloric intake should be for children or teenagers. I would suggest if there are people under the age of 20, or so, who need to lose weight that they look up specific information about their calorie requirements.

Also, if you have a medical condition, it would probably be a good idea to consult with your doctor before going on a weight loss plan, just to get some pointers on what to do and not do.

The idea with this method is that your body will continue using its baseline of calories, but you will force it to use some of your stored fat for energy, thereby reducing your stored fat.

I learned about this by trying to find a way to lose weight by searching on Quora, and finding an answer posted by retired physicist Richard Muller. See his answer to the question What is the best way to reduce belly fat and flatten your stomach? It laid out the basic parameters I needed:

  • Subtract 600 calories from the 2,000 baseline, and eat that amount (1,400 calories) each day
  • Eat at minimum 1,000 calories per day (don’t go below that!), though you should shoot for the 1,400 goal each day
  • You will feel hungry doing this, but you should not feel starving. As you follow this plan, you will learn the difference. If at any point you feel light-headed and anemic, that’s a sign you need to eat more soon. Generally, you want to eat calories at a rate that you avoid this. Muller described the hunger you will usually feel as you do this as a “dull ache.” I’d say that’s accurate. You feel hungry, but you don’t feel desperate to eat something. You just feel like you’d like to eat something. “Starving” feels like you’ve got a gaping hole in your stomach, you could eat a horse, and are anemic. That’s a sign you need to eat some more, because you’re not taking in calories at a high enough rate.
  • There are times where you will have cravings, where your stomach is bothering you incessantly, saying “Feed me!”, even though you’ve eaten enough calories for the time being. It’s not a “starving” feeling, but it’s bothering you, constantly making you think of food. Muller said in that case, eat some celery. It will calm your appetite. I found this usually worked, and I found out one reason why: One stalk of celery is 1 calorie! So, eat as much of it as you like! It doesn’t taste bad. Before I got onto this, I used to keep a bundle of celery in my refrigerator to use in salad. While in this regime, I kept two bundles of celery around, because I used it more often. It really helped keep me on my calorie budget sometimes.
  • This surprised me: Rigorous exercise will burn fat, but it’s actually less efficient than just consuming fewer calories per day. A common thing I had heard for decades was that to lose weight, you needed to exercise. That’s what most everyone talked about, with rare exception. Anytime I heard of someone trying to lose weight, they were always working out, or doing regular exercise, like running, doing aerobics, lifting weights, playing basketball, etc. In the advertising we see for weight loss, we often see people using exercise machines, or doing some workout program. All of those are fine to do. Your body needs exercise, but it’s for building strength, and supporting your general health. It’s not the best solution for weight loss. The ideal is to reduce your caloric intake and exercise. Your body needs exercise anyway. So do that, but to lose weight, reduce your rate of caloric intake. It turns out, for practical purposes, exercise/fitness, and weight loss are separate activities, unless you’re a dedicated athlete.

Once I started on this regime, I was really surprised how many calories some foods have. Before I started on this, I often ate twos of things. It became pretty clear how I had gained weight, because I was eating twice what I probably should have.

I reflected on how I used to hear fast food critics complain that a McDonald’s Big Mac is 540 calories. That was not excessive compared to what I had been eating. I thought I was eating healthy, since I often got my groceries from natural food stores. I looked at the prepared foods I’d get at these places from time to time, and my jaw dropped, because what seemed like small items would have 700-900 calories in them! I’d still get these things when I was losing weight, but I’d eat half, or a third, one day, and save the rest for another day.

A good rule I found that kept me within my calorie limit was to cut portions I used to eat in half. The exception being vegetable salad (without dressing or cheese). I could eat as much of that as I wanted, since a bare vegetable salad, even a large serving, is very low in calories. Fruit salad is a different story…

The way I thought about it was I wasn’t “reducing calories,” but reducing the rate I consumed calories. Spreading them out more over time will cause you to lose weight. You can eat what you want. Just slow it down, not by seconds or minutes, but hours.

Just because it’s about calories doesn’t mean that you don’t need to think about nutrition. You need a certain amount of protein, fiber, and carbohydrates. So, factor that in as you budget what you eat.

I learned to do what I call “flying by instruments,” and I encourage you to do it as well. If you’re a pilot who’s serious about flying (I’m not a pilot), a skill you learn is not just to “fly by sight,” which is just trusting what your senses are telling you. You advance to paying less attention to your senses, and trusting what the plane’s instruments are telling you. What I mean is that I wrote down my calories each day, adding them up, and even though I felt hungry, I’d look at my calorie count, and if I’d had enough for the time being, I wouldn’t eat, because my “instruments” were telling me “you’re good. Do something else for a while.” Likewise, even if I didn’t feel more hungry than usual, but my calorie count told me I needed to eat more, I would eat anyway. The goal is to not do what you were doing before, because your senses can deceive you. That’s how you gained the weight.

I tried to pace the calories I consumed throughout the day, about 300-400 calories in a meal. Really what “calories per day” means is “calories in 24 hours.” I found that it worked better if I could set a “reset point” for my daily calorie budget that fit with when I could find time to eat. It doesn’t have to match with when the next day starts (Midnight). You can pick any time of day you want, like 5, or 7 o’clock in the evening. Be consistent about it, though. Otherwise, there’s no point in doing any of this.

I found particularly with the internet, it was pretty easy to find out the calories in any food. If I didn’t have calorie information on food packaging to draw from, I could do a search on “calorie counter <name of food>”, where “<name of food>” was something like “cinnamon roll,” and I’d get fairly reliable information for it. If it was a specific brand of something I’d bought, like a sandwich at a store, I could sometimes find calorie information that was specific to that item. Other times, I would go by type of food like “hoagie,” or most any fruit, etc., and I’d have to make a rough estimate. That method worked well.

One thing I found really nice about Muller’s method is that anything below my body’s baseline daily calorie usage would cause me to lose weight, so I could go some amount over my target, and not fear that I was going to stop losing weight, or start gaining weight.

There’s no reason to “make up” for going over your calorie budget, by eating less than your budget the next day. That would actually be bad! If you go over one day, just hit the budget goal the next day, and thereafter, just like you have before. You’ll be fine, and you will continue losing weight.

The way I’m describing this is rather strict, by sticking to a fixed target, but that’s because this is what I felt was necessary for me to really accomplish my goal. I had to stick to something consistent, psychologically. You can make your weight loss plan anything you want. If you don’t like how you feel on 1,400 a day, you can up the budget to something more. As long as it’s below your body’s baseline calorie use in a day, you’ll continue losing weight. You can make it whatever works for you. You can even eat more some days, and less on others. What you’d want to avoid is making a habit of eating at, or beyond your body’s baseline, because if you do that, you won’t lose weight, or you could end up gaining some weight, and that’s just going to be demoralizing.

I found it was important to work with my psychology. An appealing thing about Muller’s plan was he said by following it, I would lose 1 pound of fat a week. I thought that would motivate me. It was an impressive enough amount that I could keep up with the regime.
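As a rough sanity check on that figure (the 3,500-calories-per-pound number is a common approximation I’m adding here; it isn’t from Muller’s answer): a 600-calorie daily deficit comes to 600 × 7 = 4,200 calories a week, and 4,200 ÷ 3,500 ≈ 1.2 pounds of fat, which squares with his estimate of about a pound a week.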

I weighed 240 pounds in March 2017. After being on this regime for about a year, I weighed 178 pounds (though I had lost most of the weight after 9 months, but I spent a few months trying to find a reliable “balance point” where I could maintain my weight). The 2,000-calorie baseline is actually a rough estimate. Each person is different. I found my balance point is about 1,900-2,000 calories.

I bought a body fat caliper tool, the Accu-Measure Fitness 3000, for about $6.50 to determine when to stop losing weight. It turns out it’s better to have more body fat when you’re older than when you were younger. I’m 48, so my ideal fat percentage is about 18%. I’ve got a little flab on me, but it’s where I need to be. I’ve gotten back to eating normally, and I’m enjoying it.

I’m trying a rougher method of tracking my calories than what I used for losing weight, because I didn’t want to be adding up my calories for the rest of my life. Now what I do is if I eat something that’s, say, 230 calories, I just call it “2”. If it’s 250, or above, I call it “3”. I round them up or down, just keep the most significant digit, and add it up that way. It’s easier to remember for how much I’ve eaten. I go up to 19 or 20 in a day, and keep it at that.
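To make the rounding concrete (the numbers in this example are made up): a day of 230 + 540 + 410 + 680 calories becomes 2 + 5 + 4 + 7 = 18, which tells me I have room for one more small item before I hit 19 or 20.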

Fitness

I thought I’d talk some about developing your fitness, because there’s a technique to that as well. I happened upon a book that was mostly about recovering from sports injuries, but I found its recommendation for “ramping up” your exercise regime worked well for me, even though I had not suffered an injury.

I am by no means a dedicated athlete. Throughout my life, I would exercise from time to time, more like sporadically, but otherwise I’d had a habit of being pretty sedentary. So, my muscles have not been well developed. One of the best recommendations I’ve seen about exercise is to pick an activity that you can stick with, something that will fit into your daily routine, and/or be something that you look forward to. I’d tried walking, going a couple miles each time I’d go out to exercise, but I got bored with that after several years. A couple years ago, I got this idea that I might like swimming better. So, I tried that, and as I expected, I liked it quite a bit better. A question I grappled with when I started was how long should I swim? I ran into the same issue I’d had with my eating habits before I realized what was causing me to gain weight: I just did what “felt right,” what I enjoyed. I’d swim for too long, and then I’d be sore for days. I liked the water, so I was staying in the pool for about an hour, taking breaks when I’d get tired.

This book I started reading said this is the wrong approach. You need to start small, and increase the time you spend exercising gradually. When you first start out, exercise for about 10, maybe 15 minutes, and then stop. Give your body a couple days where you’re not exercising vigorously. Then exercise again, but add 2-3 minutes onto the time you spent last time, then stop. Wait a couple days. Do it again. Add on a few more minutes to your last time, etc. The author said exercising 3 times a week was a good rate that would give you the optimal amount of exercise to build strength and stamina.
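To put rough numbers on that (my own arithmetic, using the middle of his ranges): if you start at 12 minutes and add 3 minutes each session, three sessions a week, then reaching an hour takes 16 more sessions (12 + 3 × 16 = 60), or a little over five weeks.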

Another point he made is it’s important to keep up with the exercise plan every week, because each day that you don’t exercise beyond the 2-day breaks you take, some amount of your strength conditioning goes away. Taking the 2-day breaks is good, because it gives your muscles a chance to rest after exercise, which is important for building strength. However, if you go for a week without exercising, he said you need to start over, meaning you start again with your initial time (10, maybe 15 minutes), and do the same thing of letting 2 days pass, and doing it again, with some more time than the last. He said as a consolation that you’ll notice that you recover your stamina more quickly than when you started on this regime. I’ve found this to be true.

The goal with this technique is not to tire yourself out, and get to the point where your muscles are sore. The author talked about how dedicated athletes do this, stressing their bodies to the point of pain, and they can do it, once they’ve developed their muscles to the level they have, and it can help them gain strength more quickly. They put extreme stress on their bodies. This is where the phrase, “No pain, no gain,” comes from, but it is not good for people who have not exercised vigorously for a long period of time. It can lead to tissue damage, or if it doesn’t, it makes you sore for days, which lessens your motivation to keep the exercise schedule you need to build strength and endurance, and you don’t get anywhere.

If you keep up with this plan, you will eventually reach your peak performance, where you can maintain the most strength and endurance your body can muster. You’re not going to advance your strength and stamina forever. You will eventually reach a maximum amount of time you can exercise, and a maximum amount of stress your body can endure. At that point, you just keep up with that limit.

— Mark Miller, https://tekkie.wordpress.com

A word of advice before you get into SICP

If you’ve been reading along with my posts on exercises from “Structure and Interpretation of Computer Programs” (SICP), this is going to come pretty late. The book is a math-heavy text. It expects you to know the math it presents already, so for someone like me who got a typical non-math approach to math in school, it seems abrupt, but not stifling. If you’re wondering what I mean by “non-math approach,” I talked about it in “The beauty of mathematics denied,” and Paul Lockhart also talked about it in “A Mathematician’s Lament.”

I’ve been reading a book called “Mathematics in 10 Lessons: The Grand Tour,” by Jerry King (I referred to another book by King in “The beauty of mathematics denied”), and it’s helped me better understand the math presented in SICP. I could recommend this book, but I’m sure it’s not the only one that would suffice. As I’ve gone through King’s book, I’ve had some complaints about it. He tried to write it for people with little to no math background, but I think he only did an “okay” job with that. I think it’s possible there’s another book out there that does a better job of accomplishing the same goal.

What I recommend is getting a book that helps you understand math from a mathematician’s perspective before getting into SICP, since the typical math education a lot of students get doesn’t quite get you up to its level. It’s not essential, as I’ve been able to get some ways through SICP without this background, but I think having it would have helped make going through this book a little easier.

Answering some basic mathematical questions about rational numbers (fractions)

A couple questions have bugged me for ages about rational numbers:

  1. Why is it that we invert and multiply when dividing two fractions?
  2. Why is it that when we solve a proportion, we multiply two elements, and then divide by a third?

For example, with a proportion like:

\frac {3} {180} = \frac {2} {x}

the method I was taught was to cross-multiply the two numbers that are diagonally across from each other (2 and 180), and divide by the number opposite x (3). We solve for x with x = \frac {2 \times 180} {3}, but why?

These things weren’t explained. We were just told, “When you have this situation, do X.” It works. It produces what we’re after (which school “math” classes see as the point), but once I got into college, I talked with a fellow student who had a math minor, and he told me while he was taking Numerical Analysis that they explained this sort of stuff with proofs. I thought, “Gosh, you can prove this stuff?” Yeah, you can.

I’ve picked up a book that I’d started reading several years ago, “Mathematics in 10 Lessons,” by Jerry King, and he answers Question 1 directly, and Question 2 indirectly. I figured I would give proofs for both here, since I haven’t found mathematical explanations for this stuff in web searches.

I normally don’t like explaining stuff like this, because I feel like I’m spoiling the experience of discovery, but I found as I tried to answer these questions myself that I needed a lot of help (from King). My math-fu is pretty weak, and I imagine it is for many others who had a similar educational experience.

I’m going to answer Question 2 first.

The first thing he lays out in the section on rational numbers is the following definition:

\frac {m} {n} = \frac {p} {q} \Leftrightarrow mq = np, where\: n \neq 0,\, q \neq 0

I guess I should explain the double-arrow symbol I’m using (and the right-arrow symbol I’ll use below). It means “implies,” but in this case, with the double-arrow, both expressions imply each other. It’s saying “if X is true, then Y is also true. And if Y is true, then X is also true.” (The right-arrow I use below just means “If X is true, then Y is also true.”)

In this case, if you have two equal fractions, then the product equality in the second expression holds. And if the product equality holds, then the equality for the terms in fraction form holds as well.

When I first saw this, I thought, “Wait a minute. Doesn’t this need a proof?”

Well, it turns out, it’s easy enough to prove it.

The first thing we need to understand is that you can do anything to one side of an equation so long as you do the same thing to the other side.

We can take the equal fractions and multiply them by the product of their denominators:

nq (\frac {m} {n}) = (\frac {p} {q}) nq

by cancelling like terms, we get:

{mq = np}

This explains Question 2, because if we take the proportion I started out with, and translate it into this equality between products, we get:

{3x = 2 \times 180}

To solve for x, we get:

x = \frac {2 \times 180} {3}

which is what we’re taught, but now you know the why of it. It turns out that you don’t actually work with the quantities in the proportion as fractions. The fractional form is just used to relate the quantities to each other, metaphorically. The way you solve for x uses the form of the product equality relationship.
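Carrying out the arithmetic, just to finish the example:

x = \frac {2 \times 180} {3} = \frac {360} {3} = 120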

To answer Question 1, we have to establish a couple other things.

The first is the concept of the multiplicative inverse.

For every x (with x ≠ 0), there’s a unique v such that xv = 1, which means that v = \frac {1} {x}.

From that, we can say:

xv = \frac {x} {1} \frac {1} {x} = \frac {x} {x} = 1

From this, we can say that the inverse of x is unique to x: if xv = 1 and xw = 1, then v = v(xw) = (vx)w = (1)w = w, so any two inverses of x must be equal.

King goes forward with another proof, which will lead us to answering Question 1:

Theorem 1:

r = \frac {a} {b} \Rightarrow a = br, where\; b \neq 0

Proof: Multiplying both sides by b,

b (\frac {a} {b}) = rb

by cancelling like terms, we get:

{a = br}

(It’s also true that a = br \Rightarrow r = \frac {a} {b}, but I won’t get into that here.)

Now onto Theorem 2:

r = \frac {\frac {m} {n}} {\frac {p} {q}}, where\; n \neq 0, p \neq 0, q \neq 0

By Theorem 1, we can say:

\frac {m} {n} = r (\frac {p} {q})

Then, multiplying both sides by \frac {q} {p} (the multiplicative inverse of \frac {p} {q}):

\frac {m} {n} \frac {q} {p} = r (\frac {p} {q}) \frac {q} {p}

By cancelling like terms, we get:

\frac {m} {n} \frac {q} {p} = (r) 1

r = \frac {m} {n} \frac {q} {p}

Therefore,

\frac {\frac {m} {n}} {\frac {p} {q}} = \frac {m} {n} \frac {q} {p}

And there you have it. This is why we invert and multiply when dividing fractions.
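To see it with some actual numbers (mine, not King’s):

\frac {\frac {3} {4}} {\frac {2} {5}} = \frac {3} {4} \frac {5} {2} = \frac {15} {8}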

Edit 1/11/2018: King says a bit later in the book that, given the definition above (which ties an equality between fractions to an equality between products of their terms), and given Theorem 1, it is mathematically correct to say that division is just a restatement of multiplication. Interesting! This does not mean that division and multiplication give equal results: \frac {a} {b} \neq ab, except when b equals 1 or -1. It means that there’s a relationship between products and rational numbers.
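A small example of what that means (again, numbers of my own choosing): saying that 15 divided by 3 is 5 is just another way of stating a fact about multiplication:

\frac {15} {3} = 5 \Leftrightarrow 15 = 3 \times 5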

Some may ask: since the mathematical logic for these truths is fairly simple from an algebraic perspective, why don’t math classes teach this? Well, it’s because they’re not really teaching math…

Note for commenters:

WordPress supports LaTeX. That’s how I’ve been able to publish these mathematical expressions. I’ve tested it out, and LaTeX formatting works in the comments as well. You can read up on how to format LaTeX expressions at LaTeX — Support — WordPress. You can read up on what LaTeX formatting commands to use at Mathematical expressions — ShareLaTeX under “Further Reading”.
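For example, if I remember the WordPress.com convention correctly, you get an expression like the ones in this post to render in a comment by wrapping it in the “latex” shortcode, something like this (the fraction itself is just a stand-in):

$latex \frac {m} {n} = \frac {p} {q}$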

HTML codes also work in the comments. If you want to use HTML for math expressions, just a note: you will need to use the specific character codes for ‘<’ and ‘>’ (&lt; and &gt;). I’ve seen cases in the past where people have tried using them “naked” in comments, and WordPress interprets them as HTML tags, not how they were intended. You can read up on math HTML character codes here and here. You can read up on formatting fractions in HTML here.

Related post: The beauty of mathematics denied

— Mark Miller, https://tekkie.wordpress.com

An interview with Stewart Cheifet re. The Computer Chronicles

One of my favorite shows when I was growing up, one I tried not to miss each week, was “The Computer Chronicles.” It aired on PBS from 1983 to 2002. The last couple of years of the show fell flat with me; it seemed to lack direction, which was very unlike the prior 17 years. Still, I enjoyed it while it lasted.

Randy Kindig did an interview with Cheifet. They talked about the history of how the show got started, and how it ended. He told a bunch of notable stories about what certain guests were like, and some funny stories. Some screw-ups happened on the show, and some happened before they taped a product demonstration. It was great hearing from Cheifet again.

The necessary ingredients for computer science

This is the clearest explanation I’ve seen Alan Kay give for what the ARPA community’s conception of computer science was. His answer on Quora was directed particularly at someone who was considering entering a CS major. Rather than summarize what he said, I’m just going to let him speak for himself, and I’d rather preserve it here in case anything happens to Quora down the road. There is a lot of gold here.

As a beginner, what are the best ways to approach Computer Science?

If you are just looking to get a job in computing, don’t bother to read further.

First, there are several things to get clear with regard to any field.

  • What is the best conception of what the field is about?
  • What is the best above threshold knowledge to date?
  • How incomplete is the field; how much will it need to change?

When I’ve personally asked most people for a definition of “Computer Science” I’ve gotten back an engineering definition, not one of a science. Part of what is wrong with “CS” these days is both a confusion about what it is, and that the current notion is a weak idea.

The good news is that there is some above threshold knowledge. The sobering news is that it is hard to find in any undergrad curriculum. So it must be ferreted out these days.

Finally, most of the field is yet to be invented — or even discovered. So the strategies for becoming a Computer Scientist have to include learning how to invent, learning how to change, learning how to criticize, learning how to convince.

Most people in the liberal arts would not confuse learning a language like English and learning to read and write well in it, with the main contents of the liberal arts — which, in a nutshell, are ideas. The liberal arts spans many fields, including mathematics, science, philosophy, history, literature, etc. and we want to be fluent about reading and writing and understanding these ideas.

So programming is a good thing to learn, but it is most certainly not at the center of the field of computer science! When the first ever Turing Award winner says something, we should pay attention, and Al Perlis — who was one of if not the definer of the term said: “Computer Science is the Science of Processes”, and he meant all processes, not just those that run on digital computers. Most of the processes in the world to study are more complex than most of the ones we’ve been able to build on computers, so just looking at computers is looking at the least stuff you can look at.

Another way to figure out what you should be doing, is to realize that CS is also a “blank canvas” to “something” kind of field — it produces artifacts that can be studied scientifically, just as the building of bridges has led to “bridge science”. Gravitational and weather forces keep bridge designers honest, but analogous forces are extremely weak in computing, and this allows people who don’t know much to get away with murder (rather like the fashion field, where dumb designs can even become fads, and failures are not fatal). Getting into a “learned science of designs that happen to be dumb” is not the best path!

We (my research community) found that having an undergraduate degree in something really difficult and developed helped a lot (a) as a bullshit detector for BS in computing (of which there is a lot), (b) as a guide to what a real “Computer Science” field should be and could be like, and (c) to provide a lot of skills on the one hand and heuristic lore on the other for how to make real progress. Having a parallel interest in the arts, especially theater, provides considerable additional perspective on what UI design is really about, and also in the large, what computing should be about.

So I always advise young people -not- to major in computing as an undergraduate (there’s not enough “there there”) but instead to actually learn as much about the world and how humans work as possible. In grad school you are supposed to advance the state of the art (or at least this used to be the case), so you are in a better position with regard to an unformed field.

Meanwhile, since CS is about systems, you need to start learning about systems, and not to restrict yourself just those on computers. Take a look at biology, cities, economics, etc just to get started.

Finally, at some age you need to take responsibility for your own education. This should happen in high school or earlier (but is rare). However, you should not try to get through college via some degree factory’s notion of “certification” without having formed a personal basis for criticizing and choosing. Find out what real education is, and start working your way through it.

When I’ve looked over other material written by Kay, and what has been written about his work, I’ve seen him talk about a “literate” computing science. This ties into his long-held view that computing is a new medium, and that the purpose of a medium is to allow discussion of ideas. I’ve had a bit of a notion of what he’s meant by this for a while, but the above made it very clear in my mind that a powerful purpose of the medium is to discuss systems, and the nature of them, through the fashioning of computing models that incorporate developed notions of one or more specific kinds of systems. The same applies to the fashioning of simulations. In fact, simulations are used for the design of said computer systems, and their operating systems and programming languages, before they’re manufactured. That’s been the case for decades. This regime can include the study of computer systems, but it must not be restricted to them. This is where CS currently falls flat. What this requires, though, is some content about systems, and Kay points in some directions for where to look for it.

Computing is a means for us to produce system models, and to represent them, and thereby to develop notions of what systems are, and how they work, or don’t work. This includes regarding human beings as systems. When considering how people will interact with computers, the same perspective can apply, looking at it as how two systems will interact. I’m sure this sounds dehumanizing, but from what I’ve read of how the researchers at Xerox PARC approached developing Smalltalk and the graphical user interface as a learning environment, this is how they looked at it, though they talked about getting two “participants” (the human and the computer) to interact/communicate well with each other (See discussion of “Figure 1” in “Design Principles Behind Smalltalk”). It involved scientific knowledge of people and of computer system design. It seems this is how they achieved what they did.

— Mark Miller, https://tekkie.wordpress.com