This came out of a question about what prompted him to invent object-oriented programming, but he gets into some goals that have not been implemented yet, which sound like good ideas.
The earliest programming was in the forms of the earliest computers: to find resources in memory — usually numbers, or numbers standing for something (like a text character) — and do something with them: often changing them, or making something new and putting the results in memory. Control was done by simple instructions that could test and compare, and branch to one part of code or another: often to a part of code that had already been executed, thus creating a loop. An instruction not in the hardware could be simulated if there was a way to branch and capture where the branch originated, thus producing the idea of a “subroutine” (first used in full glory, with a “library”, on arguably the first working programmable computer, the EDSAC, built by Maurice Wilkes at Cambridge in the late 40s).
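The “branch and capture where the branch originated” trick can be sketched with a toy interpreter (not any real machine — the instruction names and single return register here are purely illustrative):

```python
# A toy machine illustrating the mechanism behind the subroutine idea:
# "call" captures where it came from before branching, and "return"
# branches back to that captured address.

def run(program):
    pc = 0                        # program counter
    regs = {"ret": None, "acc": 0}
    while pc is not None:
        op, *args = program[pc]
        if op == "call":          # capture origin, then branch
            regs["ret"] = pc + 1
            pc = args[0]
        elif op == "return":      # branch back to the captured origin
            pc = regs["ret"]
        elif op == "add":
            regs["acc"] += args[0]
            pc += 1
        elif op == "halt":
            pc = None
    return regs["acc"]

prog = [
    ("call", 3),   # 0: jump to the subroutine at address 3
    ("add", 1),    # 1: execution resumes here afterwards
    ("halt",),     # 2
    ("add", 5),    # 3: subroutine body
    ("return",),   # 4
]
print(run(prog))   # 6
```

Note that a single return register cannot handle a subroutine calling another subroutine (let alone itself) — historically this is part of what pushed designs toward return-address stacks.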
Beginning programming was and is most often taught in this style, and it has been noted that the first programming language and style one learns tends to manifest most deeply throughout the rest of a career. Not a lot has changed 70 years later, partly because many languages started off with this style in mind, and thus the new languages were attempts to make this style more convenient to use (Lisp and APL were different early interesting exceptions).
Another way to look at this is to note that (1) the degrees of freedom of a computer, and of the possible problems to be solved, coupled with the limitations of the human mind, means that anticipating all the tools needed will be essentially impossible. This means that *how to define new things* becomes more and more important, and can start to dominate the “do this, do that” style.
Along with this (2) soon came *systems* — dynamic relationships “larger” than simple programs. Programs are simple systems, but the idea doesn’t scale up very well to deal with qualitatively new properties that arise. Historically, this never quite subsumed “programming” (and the teaching of “programming”). It gave rise to a different group of computerists and did not affect “ordinary programming” very much.
I think it is fair to say today that the majority of programmers reflect this history: most do not regard *definition* as a central part of their job, and most do not exhibit “systems consciousness” in their designs and results.
I think quite a bit of this has to do with the ways programming is taught today (more about this gets even more off topic).
Looking at this, the earliest real “computer scientists” could see that e.g. subroutines were an extension mechanism, but they were weak — for example, to make a new kind of “data structure” was fragile and could not be made a real extension to the language. This led to a search for “extensible languages”.
Other computer scientists could see that “data structures” were not a great idea: e.g. sending a data structure somewhere required the receiving programmer to know many details, and the structure itself might not fit well on a different kind of computer. A vanilla data structure was vulnerable to having a field changed by an assignment statement “somewhere” in the code by “somebody”. And so forth.
Most programmers were used to the idea of commanding changes to “data”, and so some of the fixes were mechanisms that allowed data structures to be invented and defined: one of the major styles today is “abstract data types”.
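The fix can be sketched in a few lines (the `Account` example is mine, not from the text — and Python only protects the field by convention, unlike languages that enforce the abstraction):

```python
# A sketch of the "abstract data type" idea: the stored field is reached
# only through the operations the type defines, so no assignment
# "somewhere" by "somebody" can smash it into an invalid state.

class Account:
    def __init__(self, balance=0):
        self._balance = balance    # internal; not part of the interface

    def deposit(self, amount):
        # the operation can enforce invariants the raw field cannot
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

a = Account()
a.deposit(10)
print(a.balance())   # 10
```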
Along with all this came several ideas for dealing with the simple smashing of variables (and the essential “variable” that is a data field). These were scattershot and reinvented in different ways. The most prominent way in strong use today is for very large structures: “data bases” that are controlled by the intermediaries of “atomic transactions” and “versioning”, which effectively wrap the state with many procedures to ensure that a valid history is kept and that relationships between parts of the data base are not violated. Eventually, it was realized that “data” didn’t capture all the important questions that could be asked — for example: “date of birth” could be “data”, but “age of” had to be computed on the fly. This was originally done externally; for some data bases, procedures could be included. (This required a “data base” to eventually be able to do what a whole computer could do — maybe “data” is not the operative idea here, and instead “dynamic relationships relative to time” works better. If so, then the current implementations of “data bases” are poor.)
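The “date of birth” vs. “age of” distinction is easy to make concrete (a small sketch; the function name is mine):

```python
from datetime import date

# "date of birth" can be stored as data, but "age of" is a dynamic
# relationship relative to time -- it must be computed on the fly,
# and its answer changes even though no stored field changes.

def age_of(born, today=None):
    today = today or date.today()
    # subtract one if this year's birthday hasn't happened yet
    before_birthday = (today.month, today.day) < (born.month, born.day)
    return today.year - born.year - before_birthday

print(age_of(date(1940, 5, 17), today=date(2020, 5, 16)))  # 79
print(age_of(date(1940, 5, 17), today=date(2020, 5, 17)))  # 80
```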
In computer terms, modern “data bases” are subsets of the idea of a “server”.
Another line of thought — which goes back to before there were workable computers — is that (3) certain easy-to-build computers can simulate any kind of mechanism/computer that can be thought of. This partly led to several landmark early systems, such as Sketchpad and the language Simula.
If you take in the above and carry it to the extreme, it’s worth noting that only one abstract idea is needed to make anything and everything else: the notion of “computer” itself. Every other kind of thing can — by definition — be represented in terms of a virtual computer. These entities (I’m sorry I used the term “objects” for them) are used like servers, and mimic the behaviors of (literally) any kind of thing that is desired.
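The “used like servers” point can be sketched as pure message-sending (all names here are illustrative — the object interprets messages however it chooses, and a caller cannot tell stored answers from computed ones):

```python
# A minimal sketch of an "object" as a little virtual computer / server:
# the only interaction is sending it a message; what machinery answers
# the message is entirely the object's own business.

class TemperatureServer:
    def __init__(self):
        self._celsius = 0.0            # hidden internal machinery

    def receive(self, message, *args):
        if message == "set-celsius":
            self._celsius = args[0]
        elif message == "celsius":
            return self._celsius
        elif message == "fahrenheit":
            # computed on the fly; the sender cannot tell the difference
            return self._celsius * 9 / 5 + 32
        else:
            raise ValueError(f"message not understood: {message}")

t = TemperatureServer()
t.receive("set-celsius", 100.0)
print(t.receive("fahrenheit"))   # 212.0
```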
A key point here is that just having practical means for creating objects doesn’t indicate what should be simulated with them. And here is where the actual history has been, and continues to be, unfortunate. The most common use of the idea — still today — has been to simulate old familiar ideas such as procedures and data structures, complete with imperative destructive commands to bash state. This again goes back partly to the way programming is still taught, and to the rather high percentage of programmers today who are uncomfortable with design and with “meta”.
For example, since “an object” is a completely encapsulated virtual computer, it could be set up to act like a transactional versioned data base. Or something much better and more useful than that.
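A toy version of such an object (my own sketch, far short of a real data base, but enough to show the shape — changes land atomically and every valid state stays queryable):

```python
import copy

# Sketch: an encapsulated object behaving like a tiny transactional,
# versioned data base. Each transaction produces a new complete state;
# old states are kept, so a valid history is always available.

class VersionedStore:
    def __init__(self):
        self._history = [{}]           # version 0: the empty state

    def transact(self, updates):
        """Apply a dict of updates all-or-nothing, as a new version."""
        new_state = copy.deepcopy(self._history[-1])
        new_state.update(updates)
        self._history.append(new_state)

    def get(self, key, version=-1):
        # read from the latest state, or from any past version
        return self._history[version][key]

    def versions(self):
        return len(self._history)

s = VersionedStore()
s.transact({"x": 1})
s.transact({"x": 2, "y": 3})
print(s.get("x"), s.get("x", version=1), s.versions())   # 2 1 3
```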
Note that most interesting representations of things do “change over time” so something has to be done to deal with this problem. So-called “Functional Programming” has to add features — e.g. “monads” — to allow state to advance “in a more functional way”. This might not be the nicest way to deal with this problem, but something does have to be done.
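One way to see what those features are packaging up — each step is a pure function from an old state to a result plus a new state, and the states are threaded through rather than mutated (a sketch of the idea a State monad formalizes, not a full monad; the names are mine):

```python
# State advancing "in a more functional way": no step mutates anything;
# each returns (result, new_state), and a runner threads the state
# through the sequence of steps.

def deposit(amount):
    def step(balance):
        return (None, balance + amount)   # no result, new state
    return step

def check():
    def step(balance):
        return (balance, balance)         # report state, leave it alone
    return step

def run_steps(steps, initial):
    state, results = initial, []
    for step in steps:
        result, state = step(state)       # thread the state through
        results.append(result)
    return results, state

results, final = run_steps([deposit(10), deposit(5), check()], 0)
print(final)   # 15
```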
And note that if you have gotten religious about “FP”, then it is really easy to make a pure FP system and language by using the universal definitional properties of “real objects” (being able to define what you want is the deep main idea!). But before you do, it will be good to ponder in larger terms.
As Bob Barton once remarked “Good ideas don’t often scale” — and neither do most simple programming paradigms. This means that another of the new things that can be built with “objects” — but have to be invented first — are less fragile ways to organize systems.
Along the Barton “qualitative changes” line of hints, one could start contemplating a kind of “request for goals” kind of organization where the semantics of the worlds being dealt with are more richly human and the main center of discourse is about the “whats that are needed” rather than the “hows” that the system ultimately uses.
This was one of the impulses behind some of the HLLs in the 50s and 60s, but the field gave up too early. The original idea behind a “compiler” was to take a “what” and do the work necessary to find and synthesize the “hows” to accomplish the “what”. 60 years ago the “whats” were limited enough to allow compilers to find the “hows”. But the field decided to sit on these and not uplift the “whats” that would require the compilers to do much more work and use more knowledge to synthesize the “hows”. This is another way to miss out on the changes of scaling.
In a “real object language” — with “universal objects” — it should be possible to define new ways to program and define and design any new ideas in computing — I think this is necessary, and that it has to be done “as a language” in order to be graceful enough to be learnable and usable.
Historically and psychologically, *tools* have had a somewhat separate status from what is made with tools (and the people who make tools, and make tools to help make tools, etc., are also somewhat separate from the average maker). But a computer is always also a tool-making shop and factory: you don’t have to go to the hardware store to buy a hammer, etc. This requires a change in mindset in order to really do computing.
At Xerox Parc in the 70s, we made a “real object language” to walk both sides of the street: (a) we wanted to invent and make a whole graphical personal computing system, and (b) we wanted to be able to easily remake the tools we used for this as we learned more. I.e. we wanted to “co-evolve” our work in both areas to reflect our increased understanding. We were motivated both by “beauty” and by the fact that we had to go super high level in order to fit our big ideas into the tiny Alto.
This process resulted in five languages, one every two years (thanks to the amazing Dan Ingalls and his colleagues), with one deep qualitative change between the 2nd and 3rd languages. That these languages could be useful “right away” was due to the way they were made (and partly because the languages contained considerable facilities for improving and changing themselves). To make progress on the personal computing parts, the constructs made in the languages had to be extremely high level so that the system could be rewritten and reformulated every few years.
The 5th version of this process was released to the public in the 80s, and to our enormous surprise it was not qualitatively improved again, despite the fact that it included the tools and the reflective capabilities to do this. The general programmers used the language as though it came “tight” from a vendor, and chose not to delve into the even higher level semantics that could help with the new problems brought by the new scalings of Moore’s Law. (This was critical because there were some things we didn’t do at Parc because of the scalings of the time, things that needed to be done to deal with the “10 years later” scalings, etc.)
To answer the current question after the “long wind” here: there are usually enough things “not right enough” in computing to need new inventions to help. Most people try to patch their favorite ways of doing things. A few will try to raise the outlook and come up with new ways to look at things. The deep “object” idea, being one of “universal definition”, can be used for both purposes. Using it for the former tends to just put off real trouble a little bit. I think programming is in real trouble, and needs another round of deep rethinking and reinventing. Good results from this will be relatively easy to model using “real objects”.