A collection of (Quora) answers on object-orientation

In my time on Quora.com, I’ve answered a bunch of questions on object-oriented programming (OOP). In giving my answers, I’ve tried my best to hew as closely as I can to what I’ve understood of Alan Kay’s older notion of OOP from Smalltalk. The theme of most of them is, “What you think is OOP is not OOP.” I did this partly because it’s what’s most familiar to me, and partly because writing about it helped me think more clearly about this older notion. I hoped that other Quora readers would be jostled out of their sense of complacency about OO architecture, and it seems I succeeded in doing that with a few of them. I’ve had the thought recently that I should share some of my answers on this topic with readers here, since they are more specific descriptions than what I’ve shared previously. I’ve linked to these answers below (they are titled as the questions to which I responded).

What is Alan Kay’s definition of object-oriented? (This points to a bunch of answers, because a few of them are good. Alan Kay responded to this one himself, as well.)

A big thing I realized while writing this answer is that Kay’s notion of OOP doesn’t really have to do with a specific programming language, or a programming “paradigm.” As I said in my answer, it’s a method of system organization. One can use an object-oriented methodology in setting up a server farm. It’s just that Kay has used the same idea in isolating and localizing code, and setting up a system of communication within an operating system.

Another idea that had been creeping into my brain as I answered questions on OOP is that his notion of interfaces was really an abstraction. Interfaces were the true types in his message passing notion of OOP, and interfaces can and should span classes. So, types were supposed to span classes as well! The image that came to mind is that interfaces can be thought to sit between communicating objects. Messages, in a sense, pass through them, to dispatch logic in the receiving object, which then determines what actual functionality is executed as a result. Even the concept of behavior is an abstraction, a step removed from classes, because the whole idea of interfaces is that you can change the class that carries out a behavior with a completely new implementation (supporting the same interface), and the expected behavior for it will be exactly the same. In one of my answers I said that objects in OOP “emulate” behavior. That word choice was deliberate.

This seemed to finally make more sense for me than it ever had before about why Kay said that OOP is about what goes on between objects, not what goes on with the objects themselves (which are just endpoints for messages). The abstraction is interstitial, in the messaging (which is passed between objects), and the interfaces.

This is a conceptual description. The way interfaces were implemented in Smalltalk was as collections of methods in classes. They were strictly a “gentleman’s agreement,” both in terms of the messages to which they matched, and their behavior. They did not have any sort of type identifiers, except for programmers recognizing a collection of method headers (and their concomitant behaviors) as an interface.
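To make the “gentleman’s agreement” concrete, here is a minimal sketch in Python (an illustration only; the class and method names are invented, and Smalltalk’s protocols are richer than this). Two unrelated classes simply agree to answer the same messages, and a caller depends on that informal interface, never on a declared type:

```python
# A "gentleman's agreement" interface: nothing in the language declares it.
# Circle and Square are unrelated classes that agree to answer the same
# messages (area, scaled), and that shared vocabulary is the real type.

class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2
    def scaled(self, factor):
        return Circle(self.r * factor)

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2
    def scaled(self, factor):
        return Square(self.side * factor)

def total_area(shapes):
    # Depends only on the informal "shape" protocol,
    # never on any particular class.
    return sum(s.area() for s in shapes)

print(total_area([Circle(1), Square(2)]))  # 7.14159
```

Swapping in a new class that honors the same messages changes nothing for `total_area` — which is the point: the behavior is expected at the interface, and the implementing class is replaceable.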

Are utility classes good OO design?

Object-Oriented Programming: In layman’s terms, what is the difference between abstraction, encapsulation and information hiding?

What is the philosophical genesis of object-oriented programming methodology?

Why are C# and Java not considered as Object Oriented Language compared to the original concept by Alan Kay? (Quora user Andrea Ferro also gave a good answer.)

The above answer also applies to C++.

What does Alan Kay mean when he said: “OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP”?

What does Darwin’s theory of evolution bring to object oriented programming methodology?

This last answer gets to what I think are some really interesting thoughts that one can have about OOP. I posted a video in my answer from a presentation by philosopher Daniel Dennett on “Free Will Determinism and Evolution.” Dennett takes an interesting approach to philosophy. He seems almost like a scientist, but not quite. In this presentation, he uses Conway’s game of “life” to illustrate a point about free will in complex, very, very, very large scale systems. He proposes a theory that determinism is necessary for free will (not contrary to it), and that as systems survive, evolve, and grow, free will becomes more and more possible, whereby multiple systems that are given the same inputs will make different decisions (once they reach a structured complexity that makes it possible for them to make what could be called “decisions”). I got the distinct impression after watching this that this idea has implications for where object-orientation can go in the direction of artificial intelligence. It’s hard for me to say what this would require of objects, though.

What resources does Alan Kay recommend for learning real object oriented programming?

Edit 11/7/2019: A commenter calling themselves “Quang” referenced this video by Alan Kay, from about 1987.

I think it is a great primer for looking at the STEPS project at Viewpoints Research, because it explains in some more detail the object model they used for it than the STEPS documentation laid out. I remember Kay told me back in about 2012 that he came up with this concept (a “pull”, or subscription model for objects) in the late ’70s. I highly recommend this video for anyone wanting to marinate in better ideas for OOP architecture.

Related post:

The necessary ingredients for computer science – If you really want to understand OOP–what it was in answer to, and what it can be, you must understand this perspective!

—Mark Miller, https://tekkie.wordpress.com

9 thoughts on “A collection of (Quora) answers on object-orientation”

  1. Hi Mark,

    Great post (and perfect timing, since I’ve been in research mode lately). I’ve just had time to get into the first question, but two things caught my attention. That theme of data as code, and code as data, plays out a lot in programming, but in life as well. It’s essentially verbs and nouns, or 3D space and time. Possibly static vs. dynamic as well. It shows up so often that it is clearly some form of physical characteristic of our universe, or at least a product of the way we observe it. Way back when Java was first introduced they seemed to be headed in a direction where you could instantiate an object, then send it to someone else to use. If they had continued down that remote-object path it seems they might have gotten really close to what Alan visualized. Another near example was the OODBs like ObjectStore. You just inherit from Persistence, and wrap the interaction in a transaction. Then the object is quietly persisted automatically. A while ago I managed to get that same paradigm to play out in Hibernate, but the wrapper code was a bit tricky.

    The weakness in any code-as-data scheme has been the volatility of the code. It can and does change rapidly, causing upgrade issues. Data, however, often contains implicit dependencies embedded into it, such that it is not fully usable without the associated code. The two are tightly bound, yet somewhat independent. As for data-as-code examples, regular expressions come to mind, maybe. Any intermediate data structure or table-driven code goes the other way.

    The last part of your post on SICP ends with you saying:

    “I think the reason for this complexity is, as Alan Kay has said, we haven’t figured out what computing is yet.”

    That’s exactly where I have been stuck lately. I see computation as a means of stepping through the construction of complex relationships between symbolic objects at various different meta-levels of abstraction, but I can’t seem to reconcile that perspective with a view of how things are modelled as formal systems with respect to time. By way of analogy, it’s like I know how to build stuff in a higher-level language like Java, but can’t imagine how it would run (or how to recreate it) in a lower-level one like assembler. Each perspective is entirely independent of the others, although I know they are all built on, and explaining, the same underlying entities.


  2. Hi, Paul.

    It shows up so often that it is clearly some form of physical characteristic of our universe, or at least a product of the way we observe it.

    I’d go more with the latter. We can know something about the physical characteristics of the Universe (within some bounds), and how we observe it, but we will never know with certainty what the actual physical characteristics are.

    Re. code as data

    I guess what you’re saying is that the concept of data is significantly tied to relationships between symbolic entities, and that those relationships can become tenuous, or break, as the code that represents those entities is modified.

    It seems to me, though, that the reason you’re seeing that problem is that the organizations implementing the scheme are not understanding the idea I described with object orientation, which is that to outside entities the implementation shouldn’t matter. What matters is the expected behavior. If a relationship breaks, it’s because whatever was exhibiting the expected behavior before is no longer doing that.

    Other qualities that get in the way are our use of type tags to talk about associations between data and code, and our use of symbols as means of referencing functionality (most languages have this issue). Even if nothing about the essential qualities of a function changes, just changing its name, or changing the parameters it accepts, or changing the type it returns, causes problems for everything else that uses it. This is part of the reason Smalltalk was designed the way it was. The only thing that’s recognized as a type in that sphere is an interface.

    Re. data as code

    “Code as data” and “data as code” were notions that I’d heard about since I was in college, and it wasn’t until I got to that section in SICP that I felt like I finally understood what “code as data” meant.

    Every time I think about “data as code,” I think about code generation. It was a minor revelation to do some 6502 assembly programming a couple of years ago, to take a look at the generated machine code, and to fully realize, “It’s all just numbers.” I mean, I knew for many years that machine code was “just numbers,” but finally seeing the mapping of the assembly code to the numbers, going opcode by opcode, data element by data element, really convinced me that all the computer was dealing with was numbers (and even that’s an abstraction of a closer-to-reality description that it’s all patterns of voltages), and that some numbers are not “data” to the computer, but “signals” to execute internal functions of the processor, which then processes what it considers data. Nevertheless, I could think of all of it as “data,” and the processor could “think” of it as “code.” What seems to help (for me, anyway) in relating to the concept is understanding that the processor makes different decisions about what gets computed, what computations are done, and where the result gets stored, depending on what internal state it’s in. Another realization I made is that the processor does a lot of what we think of as non-computational stuff (like pushing elements on a stack (moving the stack pointer), or branching to an address (changing the program counter)) by using its Arithmetic Logic Unit.
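    The toy machine below makes the point runnable (a hypothetical sketch in Python, not a real 6502; the opcodes are invented): the very same list of numbers is “code” when a program counter walks it, and plain “data” when we just sum it.

```python
# A toy machine: whether the numbers in memory are "code" or "data"
# depends entirely on how they are interpreted.
def run(mem):
    acc, pc = 0, 0            # accumulator and program counter
    while True:
        op = mem[pc]
        if op == 1:           # LOAD immediate: next number -> accumulator
            acc = mem[pc + 1]; pc += 2
        elif op == 2:         # ADD immediate: add next number to accumulator
            acc += mem[pc + 1]; pc += 2
        elif op == 0:         # HALT: return the accumulator
            return acc

program = [1, 40, 2, 2, 0]    # LOAD 40; ADD 2; HALT
print(run(program))           # interpreted as code: 42
print(sum(program))           # the same numbers as mere data: 45
```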

    Likewise, if you program in Lisp, or some variant, the point about “data as code” comes across quickly once you get into exploring eval and apply, or even creating your own macros.
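    A tiny sketch of that in Python (illustrative only; nested lists stand in for s-expressions, and the operator set is invented): an ordinary data structure is handed to an evaluator and thereby becomes code.

```python
# Minimal eval over list structure: "data as code".
def evaluate(expr):
    if isinstance(expr, (int, float)):   # atoms evaluate to themselves
        return expr
    op, *args = expr                     # (operator arg1 arg2 ...)
    vals = [evaluate(a) for a in args]   # evaluate arguments recursively
    if op == '+':
        return sum(vals)
    if op == '*':
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

expr = ['+', 1, ['*', 2, 3]]   # just a nested list -- plain data
print(evaluate(expr))          # treated as code: 7
```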

    but I can’t seem to reconcile that perspective with a view of how things are modelled as formal systems with respect to time.

    I don’t know if this would help, but what you say here brings to mind John McCarthy’s situation calculus. From what I remember, it’s about modeling processes temporally.

  3. My late night odd thought about code vs. data is that if you remove (or freeze) time from code you have data. Going the other way then if you introduce some time element into the data, you should get back to code. Freeze the computer and dump the code as data. Read it back, then start executing it as code again. The one caveat is that code itself is a serialization of the states and complex relationships expressible at runtime, so it would need to be pumped up from a single dimension to multiple ones.

    Situation calculus does help a bit. It’s like turning my intuitive data-structure relationship on its side so that progressions of actions are just nouns instead of verbs. I’ll have to chew on that for a bit 🙂

    Getting back to Alan’s vision of OO, another late night thought I was having was to model objects as single independent threads. That is, each object acts as a stand-alone TM, and can only have its methods called serially (to avoid concurrency). In that sense it is rather like mixing FBP with OO, but driving the paradigm from the object side as opposed to pipes. Messages would then be passed back and forth on, say, a message queue, but it wouldn’t be as resource intensive as it sounds since most messages would just be references to other objects in the same computational space (but possibly distributed on different hardware). For fun, make each object have a unique immutable key and use instantiation as the means of software deployment (to persistence first, then into runtime). Thus objects would be live serialized actors trading other object references back and forth. Still, upgrades would remain a problem given that the system and data models will evolve over time, so there would need to be some external automated means of instantiating an object with an existing key that would effectively seek out its predecessor, absorb its current data and then eliminate it (or some less violent, but equally effective approach). To support the distributed references the keys would be tied to an interface spec, not an absolute symbolic name. That would push programmers to wrap older stuff if they wanted to change the spec quickly. But also, given that the spec is the general behavioural identifier, it would allow intrinsic polymorphism (leaving inheritance as really a compile-time issue, not a runtime one). This would also encapsulate any data/code dependencies since at runtime, on any machine, the two would always move together. The other benefit is that programmers would no longer be able to do half-baked OO, since they really only ever deploy one object at a time. The most used design pattern would then probably switch to flyweights (to avoid over-consuming resources).
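    A rough Python sketch of the “one thread per object” idea above (names and message vocabulary are invented, and it omits the keys, persistence, and upgrade machinery the comment describes): each actor owns a queue and processes messages strictly serially, so its internal state needs no locks.

```python
import queue
import threading

# Each object runs as its own thread, draining a message queue serially.
class Counter:
    def __init__(self):
        self.count = 0                      # state touched by one thread only
        self.inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            msg, reply = self.inbox.get()   # messages handled one at a time
            if msg == 'inc':
                self.count += 1
            elif msg == 'get':
                reply.put(self.count)       # answer via a reply queue
            elif msg == 'stop':
                return

    def send(self, msg):
        reply = queue.Queue()
        self.inbox.put((msg, reply))
        return reply

c = Counter()
for _ in range(3):
    c.send('inc')
print(c.send('get').get())   # 3 -- the queue serialized every message
c.send('stop')
```

Because the inbox is FIFO, the `get` message cannot overtake the three `inc` messages, which is the serialization guarantee the scheme relies on.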


  4. Hi Mark,

    Thanks for these links. I’ve been trying to learn about OO at its roots for some time now, and you’ve been able to point me in the right direction many times. There is one concept that I’ve been struggling with, however, and I’m curious to hear your perspective.

    The concept I struggle with is the idea that objects can draw themselves. If I’m writing an application that displays a user’s profile, does the User object know how to display itself on the screen?

    It is at this point where I become confused because I’m not sure if the object knows too much. What if I wanted to change how the UI looks? What if I wanted to change the position of this object’s view representation on the screen? Is this proper ontology, or does this break the bones of my domain instead of finding its joints? Questions like these come to mind.

    Coming from an iOS background, this is something I’m not used to since we follow MVC, but I’m hoping this is something I can still apply and share with my colleagues.



  5. @eatsleepios:

    The main thing to keep in mind is that if you want objects that can display themselves, they should be able to specify themselves in quantifiable, relational terms. Ideally, the display itself should be represented by objects to the rest of the system, which implies that it has its own semantics–its own logic, and that it governs itself. The objects that need to be displayed should not be driving the display. Let the display’s semantics decide how and when the objects should be displayed, how big the objects’ representation should be, and where to display them. The part of your application that deals with user input will also likely provide some direction to the display.

    The point is that the objects should be able to provide a kind of quantifiable rendering of themselves through their relationships with other objects–think of them as producing a kind of “signal” through a system of semantics, which does “signal processing,” but the objects should not be able to decide who’s rendering them, or how. They need to provide an interface for this so that other objects who will want to receive their “processed signal” can get it, but that’s really all they should have to do. Perhaps a sufficient interface already exists, and your objects can just adopt it, and implement it.

    Reading your comment, this video by Abelson came to mind:

    Even though he used a functional language called Scheme, the same principle applies to OOP, and is an important idea to get for it. He talked about building up a “picture language” that can draw any figure using nothing but procedures. The big idea in this is that the figure itself is represented by nothing but procedures. What’s commonly called “data” (another way of thinking of it is “state” or “information”) is involved, but it’s embedded inside of procedures, not data structures. In the case of OOP, it’s embedded inside of objects, which interact semantically with other objects, not by getters and setters.

    Most of what he talked about would apply to the display, and only a little bit to the objects that need to be rendered.

    An important principle that applies directly to OOP is the idea of the “universal object.” He said in the picture language he talked about there is only one primitive (i.e. universal object), called “Picture.” Every operation in the language resolves to a Picture entity, which allows more and more operations to be easily applied to previous operator combinations.
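    A sketch of that universal-object idea in Python (my own toy, not Abelson’s actual picture language; all names are invented): every picture is just a procedure from a frame to marks, so every combinator consumes pictures and yields another picture, and combinations compose without limit.

```python
# "Picture" as the one universal object: a procedure from a frame
# (here just an (x, y) origin) to a list of marks to draw.
def blank():
    return lambda frame: []

def mark(x, y):
    # a picture containing a single mark, relative to its frame
    return lambda frame: [(frame[0] + x, frame[1] + y)]

def beside(p, q, offset):
    # a combinator: closing over two pictures yields a third picture,
    # drawing q in a frame shifted right by `offset`
    return lambda frame: p(frame) + q((frame[0] + offset, frame[1]))

pic = beside(mark(0, 0), mark(0, 0), 10)
print(pic((0, 0)))   # [(0, 0), (10, 0)]
```

Note there is no data structure for figures anywhere: the “state” of each picture is embedded entirely inside procedures, which is the point being made about objects.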

    I learned a little Objective-C, because I figured since I own a Mac, I should learn to program it, though I haven’t done anything serious with that. (You may be using Swift.) I don’t know if there’s any way around this, but I didn’t like some of the example code I saw in the instructional material, because it weakened the notion of objects, treating them sort of like data structures, where in order to get something to happen, you had to pull values out of one object, inside of something like a “view,” to pass to an object representing a context or “surface” that was going to render whatever data you passed to it. In OOP terms, that’s not how it should work. Rather, objects should be allowed to exchange information between each other on their own terms, using interfaces. Relationships can be established inside something like a view, and actions can be initiated there to get the interaction going, but otherwise, all of the action should be in the realm of objects communicating with each other, keeping their implementation and state hidden from that which initiates their interaction.

    I sometimes use this as an example. In Smalltalk, the result from “2 + 2” is the result of the interaction of the two numeric objects. A message “+ 2” is passed to the first “2”, which then does a little analysis of the second “2”, to determine how it can operate with it (what interface it should use), and then does the addition, returning the result. You don’t do: “(2 getValue) + (2 getValue)”! This is sort of a bad example, since Objective-C does not allow you to treat numbers as objects, but the idea I’m trying to get across is that with any user objects, you should strive for the former kind of interaction, both with each other, and with objects from the Cocoa runtime.
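    Here is roughly that interaction sketched in Python (illustrative only; Python’s `__add__` hook stands in for receiving the “+ 2” message, and the `Num` class is invented): the receiver keeps its value hidden, analyzes its argument, and decides itself how to combine with it. No caller ever pulls raw values out.

```python
# The receiver of "+" decides how to operate with its argument,
# instead of a third party extracting values from both objects.
class Num:
    def __init__(self, value):
        self._value = value              # hidden state; no getValue anywhere

    def __add__(self, other):            # "+ other" arrives as a message
        if isinstance(other, Num):       # receiver analyzes its argument...
            return Num(self._value + other._value)
        return Num(self._value + other)  # ...and coerces plain numbers

    def __repr__(self):
        return f"Num({self._value})"

print(Num(2) + Num(2))   # Num(4)
print(Num(2) + 3)        # Num(5)
```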

  6. If you look at this video, Alan explained that objects interact and form a network. But he said it gets slippery when scaling up to a lot of objects. This, to me, is the key point of OOP. He suggested a solution in which objects do not fire events…
    This is exactly the problem with the recent rebellion against OOP.
    To me there are two problems:
    1. We are trying to build perfect objects (SOLID, patterns, reuse…) so that we can plug and play them to form any network. An app is a giant network, if you can visualize it. In reality it is just hard to visualize that entire network.
    2. We build the above network from classes, not even objects. This happens even in Smalltalk, which makes Smalltalk no better than other OOP languages.
    How to fix this, I don’t know. The most recent effort from Trygve (of MVC) is to focus on objects and break the app into smaller, contextual networks of objects. I have regained my joy of programming with this paradigm so far.

  7. @Quang:

    I’ll look at the video, and perhaps add to this, but one thing I’ll note quickly is I talked with Alan some years back about what they were up to at Viewpoints Research, and he said they were trying out an idea he’d had back in the late 1970s, but didn’t get a chance to try at Xerox. I think he said he mentioned it in his Scientific American article on Smalltalk. It was a “pull” model of OOP, rather than “push” (sending messages), and the way he characterized it was as a “subscriber” model, where rather than having objects send messages to accomplish tasks, object interfaces are made up of two parts: a description of the services they provide, and a description of the services they need. Both are “transmitted” to the operating environment, and the operating environment matches objects with each other, based on what each can provide, and what they need, to carry out tasks. What happens after objects are matched, I have no idea. Maybe there’s still message passing at that point, but I’m not sure. He said it was analogous to the way spreadsheet cells carry out their calculations, drawing values from other cells.

    Early in the Viewpoints project, he said the goal was to reduce the number of lines of code required to create a complete operating system, with “applications” that would enable people to do something useful, by a factor of 10. This would seem to match with what you were talking about. He said the baseline they used for measuring their progress was the size of Squeak, which at the time had about 200,000 lines of code. They wanted to write their complete system in 20,000 lines of code. Tracking his progress, he said they did not successfully get it to under 20,000 lines of code, but they did get something that worked as described, and at least got it under 200,000 lines of code. It was called “Frank.” You can read about it in the NSF reports on the STEPS project Viewpoints posted on their site at http://vpri.org/writings.php. They were posted from 2007 to 2012.

  8. @Quang:

    Thanks for posting Kay’s video. It filled in some gaps in my understanding of the object model they used at Viewpoints Research for the STEPS project.

    What he described was what I laid out in my previous comment: Objects come into the system describing what they need, and what they supply. The environment is capable of doing some figuring for needs. I read him describing on Quora about objects that can be described as “near” something else, for example (See: Is there an OOP approach that uses a “milieu”, not an object graph, to send messages, similar to how my voice is carried through the air and hits people who happen to be in range, rather than being addressed to a specific person?). Vladislav Zorov posted some example code in the answers on Quora, in CLOS (the Common Lisp Object System).

    From the video, it looks like Viewpoints would’ve done this via predicates.

    In the object model Kay described, the environment matches needs with suppliers, but it doesn’t link them up. Instead, when a supplier has the desired result, the environment takes a copy of that, and sends it as a message to the object that needed it. So, “objects only receive messages.” Thinking about the details, though, I think this description is a bit deceptive, because objects are sending messages, but they’re only sending them to the environment, to register their needs, and what they can supply.
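    A guess at how that matching could look, sketched in Python (purely speculative; the real STEPS model surely differs, and every name here is invented): objects only tell the environment what they supply and what they need, and the environment pulls results and delivers copies as messages.

```python
# Speculative sketch of a "pull"/subscriber object model: objects never
# address each other directly; the environment matches needs to suppliers.
class Environment:
    def __init__(self):
        self.suppliers = {}   # service name -> producer function
        self.needs = []       # (service name, consumer object)

    def supplies(self, name, producer):
        self.suppliers[name] = producer

    def needs_service(self, name, consumer):
        self.needs.append((name, consumer))

    def run(self):
        for name, consumer in self.needs:
            if name in self.suppliers:
                value = self.suppliers[name]()   # pull the result...
                consumer.receive(name, value)    # ...and deliver it as a message

class Display:
    def __init__(self):
        self.seen = {}
    def receive(self, name, value):    # "objects only receive messages"
        self.seen[name] = value

env = Environment()
env.supplies('temperature', lambda: 21.5)
display = Display()
env.needs_service('temperature', display)
env.run()
print(display.seen)   # {'temperature': 21.5}
```

The supplier and the consumer never hold references to each other, which is the spreadsheet-like quality described above: cells draw values without knowing who produced them.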

    This is a great video on OOP, expanding one’s concept of what that means. I updated my post to include it.

    In terms of a better object system, that’s more manageable, I don’t have the expertise yet to answer that. I found Kay’s commentary on Smalltalk in this video matched my experience with it. He said the only way to figure out an interface/protocol is to look at how it’s used, which means you have to look at other people’s code to learn the interface. He said people find this frustrating. I agree. Another thing he said was that Smalltalk wasn’t set up to encourage people to develop “idioms” (my term) for using objects in the system. I forget what he said about how many objects were in a Smalltalk system, but it was in the thousands. He also said there were something like hundreds of interfaces, rather than something in the thirties, which would be more manageable for people. He seemed to indicate that the interfaces in Smalltalk could be simplified to something in the thirties range, but the problem is that when programmers have deadlines to meet, their first reaction is to come up with a new method signature, and just use that, rather than try to find and use a mathematically equivalent interface that already exists. I had the thought that what would help with that is a tool that would help programmers find existing interfaces that would match what they want to create. Again, Smalltalk, as it is, doesn’t make this easy, but it seems to me it might have enough meta-information in the system that such a tool would be feasible.

    He excused this, saying that Smalltalk wasn’t originally designed for professional development. He originally wrote it for kids to use. This hinted to me that what’s needed is a better constructed development system that helps the programmer comprehend what’s there, and not complicate it more than necessary.

    I remember years ago, reading him saying that after PARC came out with its last version of Smalltalk, the researchers anticipated a follow-on generation of kids would improve on it, maybe adding in “policy-oriented” features, I think was the term he used. However, that never happened. As I listened to him talk about the problems with Smalltalk, I had the thought that maybe the problem with the proliferation of interfaces, and understanding them, was along the lines of what he was thinking when they were talking about that.

  9. @Quang:

    I meant to add that I’ve heard about Cuis Smalltalk. On the webpage for it, it has some kudos from people like Ingalls and Kay, complimenting its implementor on a clean, simple design. I haven’t looked at it yet, but I thought I’d suggest it, as perhaps you may get some inspiration from it. I get the impression from these comments that the designer cut down on the clutter.
