As I’ve been on this journey of investigating and using high-functioning dynamic languages, I’ve been making discoveries about technology that never occurred to me before. The most rewarding discovery for me is that programming can be like writing. I am a writer at heart. I am also a problem solver. Just about any time I’m presented with a puzzle, I work incessantly on it. I don’t always solve it successfully, but my mind becomes focused and committed to solving it.
Some of my most rewarding experiences have been the articles I’ve written that have been published so that hundreds of people can read and benefit from something I’ve learned or realized. This blog has been an extension of that.
When I say that “coding can be like writing”, I’m not saying that one can program in English using Lisp or Smalltalk, though you can get close to that using embedded DSLs in these languages. What I’m talking about is mixing declarative code with programming logic written in expressions that can be evaluated.
My first encounter with this was in Lisp. In the languages I used about 7 years ago, C and C++, when I wanted to create a data structure, such as the Composite pattern, I had to create classes for the composite and leaf objects, then dynamically allocate memory for these objects, put data in them, and then traverse them. Say I had a collection of assemblies and parts for a machine. Assemblies can contain other assemblies and parts, but parts are atomic. In Lisp, if I want to instantly create a composite data structure of assemblies and parts, I can just do this:
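The structure is just nested lists of tagged items. Here is a sketch of what it might look like (the assembly and part names are assumed for illustration; the tags and shape match the traversal code below):

```lisp
;; Each assembly is (:assembly NAME (ITEM ...)); each atomic part is (:part NAME).
(defvar *machine*
  '(:assembly A ((:part bolt)
                 (:assembly B ((:part gear)
                               (:part axle)))
                 (:part bracket))))
```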
I simultaneously create the structure and populate it with data, declaratively. In fact, this is not unlike what one does in XML, and indeed this is what’s been resorted to in the popular languages in the last several years. The data is declared in a syntactic structure, which is then parsed by an XML parser, and loaded into a DOM, where it can be traversed and modified. In Lisp much the same thing happens, though the structure you have to write is less verbose.
I can filter (based on assembly name) and traverse the structure using this code:
(defun display-assembly (assembly-name assembly)
  (cond ((null assembly) nil)
        ((and (eq (first assembly) ':assembly)
              (eq (second assembly) assembly-name))
         (format t "~%assembly ~S" assembly-name)
         (display-parts-list (third assembly))
         (format t "~%End assembly ~S" assembly-name))
        (t (find-assembly-in-parts-list assembly-name (third assembly)))))

(defun display-parts-list (parts-list)
  (dolist (part parts-list)
    (if (eq (first part) ':part)
        (print (second part))
        (display-assembly (second part) part))))

(defun find-assembly-in-parts-list (assembly-name parts-list)
  (dolist (part parts-list)
    (when (eq (first part) ':assembly)
      (if (eq (second part) assembly-name)
          (return (display-assembly assembly-name part))
          (find-assembly-in-parts-list assembly-name (third part))))))
Using recursion, this code attempts to find the assembly I am referring to (for example (display-assembly 'B *machine*)), and outputs its parts list. I’ve only used Lisp for a few months. I’m no expert. Maybe this could’ve been written better.
Some explanation for those unfamiliar with Lisp. You’ll notice that all of the code is in prefix form. There are no infix expressions. In Lisp there are lists and atoms. Lists are the things in parentheses. Atoms are character or integer constants, or symbols. Every instruction and piece of data exists inside lists (the more technical name for them is S-expressions). Each instructional list starts with a function name. Each function returns a value, even if it’s just nil. A quoted symbol or list of data elements starts with an apostrophe. A symbol can be one or more characters in length. What are strings? It turns out they’re vectors of characters rather than lists, though both can be treated as sequences.
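A few quick illustrations of these forms, as one might type them at a Lisp prompt (my own examples, not from the code here):

```lisp
'foo              ; a quoted symbol
'(a b c)          ; a quoted list of three symbols
(first '(a b c))  ; a function call; evaluates to the symbol A
"hello"           ; a string -- in Common Lisp, a vector of characters
(+ 1 2)           ; prefix form: the function + applied to 1 and 2, giving 3
```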
The code here is a series of 3 functions: display-assembly, display-parts-list, and find-assembly-in-parts-list. Each function definition starts with “defun” (which is actually a macro, but you can just think of it as a keyword for now).
The macro “cond” is like a series of if-then-else statements in the popular languages. Each term in the “cond” list starts with a conditional test. If the test is true, it evaluates the expressions associated with it. If it’s false, it moves on to the next term. The last term’s test, in this case, is “t” for “true”. In effect it is the default term, executed if all other terms are false. You don’t have to do this. It just worked for me here.
Macros are code-generating functions. The closest analogy in the popular languages is macros in C and C++, though Lisp macros operate on parsed expressions rather than raw text, which makes them far more capable.
The “null” function tests whether its argument is “nil” (think of this as “no value”). Every proper list is terminated with “nil”. So this is my base case for display-assembly’s recursion: if “null” is true, I return “nil”. The “eq” function tests equality between symbols.
For outputting data, sometimes I use the “print” function, and other times I use “format”. It just depends on whether I want to format the output.
The “if” form is like an if-then-else statement in the popular languages. It takes 3 arguments: a conditional expression, an expression to evaluate if it’s true, and another one if it’s false. The “when” form is like an if statement without the else.
The “return” macro breaks out of the enclosing loop construct (such as “dolist”) and yields its value. To return from a function defined with “defun”, you’d use “return-from” with the function’s name.
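To make “cond”, “when”, and “return” concrete, here’s a small sketch of my own (not from the assembly example):

```lisp
;; cond tries each clause in turn; the final t clause acts as the default.
(defun sign-word (n)
  (cond ((null n) 'no-value)
        ((< n 0) 'negative)
        ((= n 0) 'zero)
        (t 'positive)))

;; when is an if without an else; return breaks out of the enclosing dolist.
(defun first-negative (numbers)
  (dolist (n numbers)
    (when (< n 0)
      (return n))))

(format t "~%~S" (sign-word -3))                ; prints NEGATIVE
(format t "~%~S" (first-negative '(4 7 -2 9)))  ; prints -2
```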
Another example I’ll give is from a Squeak project I’ve worked on from time to time. As an exercise in refreshing my memory of Smalltalk, and to eventually learn the Seaside framework, I decided to redo a project I had originally done in ASP.Net 1.0. As part of the project I used a screen-scraping library and regular expressions to automatically get employment data from the Bureau of Labor Statistics, parse out the data, load it into a database, and selectively display the comparative employment data that a user wanted to see. I decided to try my hand at creating a simple DSL in Smalltalk to do the screen scraping. I wanted to query the web server in a similar fashion to a SQL query. It looks something like this (from actual code):
| query page |
query := (WebQuery
            where: #format is: 'block';
            where: #html_tables is: 'no';
            where: #catalog is: 'no';
            where: #delimiter is: 'tab';
            where: #print_line_length is: '400';
            where: #level is: '1';
            where: #series_id is: 'CES0000000001';
            where: #year is: '2001-2006';
            yourself).
query addParametersWithNullValue: #(#lines_per_page
            #row_stub_key #date #net_change #start).
page := query retrievePage.
Some explanation. In Smalltalk you have objects that receive messages. The object is always on the left-hand side of an expression, followed by the messages sent to it. There are three types of messages: keyword messages, which take parameters delimited by colons; unary messages, which take none; and binary messages (operators), which take a parameter but use no colon. In the first assignment (query := WebQuery …) I use what’s called a “cascade” to send several messages to the WebQuery object all in one statement. Each message is separated by a semicolon. In a fashion similar to Lisp, the result of an expression is the result of the last sub-expression evaluated. Since I want to ensure that “query” gets the WebQuery object itself, I end the cascade with a message called “yourself”, which returns the receiver. The array that’s passed with the addParametersWithNullValue: message starts with “#(”. This happens to be the same way Lisp vector literals are declared. The values inside the array are symbols (each starting with #).
The server process that serves the page expects to get 16 parameters. Since I didn’t want to fill them all in, I added some with the null value. This code will POST the query data to the BLS server and retrieve the corresponding page into the “page” variable. The data parsing comes later.
Once I tried this, it was great. This is what I meant in past posts when I’ve said that I’m expressing my intent in the language.
The nature of software
I understood after working in this business for about 4 or 5 years that just about every project I worked on up to that point was “hand made”. I said this once to a fellow developer, and he joked that I was saying we were like the furniture craftsmen of old, who worked with primitive tools, making our software out of “Corinthian leather”. I didn’t quite mean it that way, but in hindsight, in a way, he was right. By “hand made” I meant that the logic of the program and a large part of the structure have to be invented each time. There’s no existing structure or documentation that adequately lays out the solution for us. We have to come up with that ourselves.
I read “The Art of Lisp & Writing” by Richard Gabriel a week ago (h/t to José A. Ortega-Ruiz), and it captures the essence of what I’ve felt for a long time about programming. It looks like it was written in the 1990s.
Gabriel makes the point in his article that mature engineering disciplines work on what’s been standardized. There’s not very much that’s invented from project to project. We know how to build bridges, and we understand how to build them well. You can lay out a set of specifications for one, and from that determine what materials, structure, shape, and land space are needed. Software is usually not like this. Have you heard of a bridge being built “iteratively” in your lifetime? Can you imagine a bridge being built in our day and age where the builders just “take a stab at it”, have inspectors review it, have the engineers significantly “revise” it, and see this process repeat multiple times before the project is done? Not on your life! This isn’t done, because it doesn’t need to be done. Further, with physical materials this can weaken structural integrity. It’s better to get it right the first time with concrete.
Yes, bridges are inspected, but they are inspected for structural integrity, and approval is either given or denied based on whether the structure is up to standards. Relatively small modifications are made to improve their integrity. You don’t see them being significantly “revised” unless there was incompetence in the first place. There have been bridges with design flaws in them, even as recently as several years ago, but they are rare occurrences. Design flaws and ad hoc iterative development in software are commonplace–too commonplace for software development to be considered a mature engineering discipline.
Gabriel makes the point that we humans have been building bridges for thousands of years. We’ve only been writing software for fifty. Despite the belief that software can be built in a standardized way, it doesn’t work that way. Not yet. We all know this (speaking for us developers, at least).
Programming is a creative process–a craft. Software is crafted in a highly interactive process, hopefully with the people who will eventually use it. It is molded, shaped, and refined, until it gets to a point where it’s judged to be useable. Even then, it’s not done. More gets added to it later with enhancements, or the users decide they don’t want a part to work this way, but another way.
Gabriel starts by talking about the creative process of writing:
Writing a beautiful text takes sitting before a medium that one can revise easily, and allowing a combination of flow and revision to take place as the outlines and then the details of the piece come into view. Changes are made as features of the writing emerge—first the noise, then the sense, the force, and then the meaning—and the work of the writer is to continually improve the work until it represents the best words in the best order. Great writing is never accomplished through planning followed by implementation in words, because the nature of the word choices, phrasings, sentence structures, paragraph structures, and narrative structures interact in ways that depend [on] each other, and the only possible plan that can be made with which a writer can “reason” about the piece is the piece itself.
This is true of all creative making—flow and revision are the essences. Flow is the preferred state of mind for the writer doing discovery. Flow means that there are no barriers between the movement of the mind and the placement of words on the page. When in flow, the writer feels as if he or she is in a vivid and continuous dream in which the relevant parts of the dream flow onto the page.
Revision includes reading material already written and perfecting it as presentation, but as the name suggests, it is more. Discovery is always mixed in with perfecting, because, as [Richard] Hugo says, “one does not know what will be on the page until it is there.” Moreover, revision—when performed courageously—involves determining what to leave out, just as the mapmaker determines what in the real world can remain unmentioned and unrepresented for a map made for a particular purpose.
For many, this will not sound as if I am describing programming—or as some like to put it: software design and implementation. To satisfy the biases of such folks, I will use untypical words and phrases for concepts that have acquired too strict a meaning.
So instead he uses the term “programming medium”. He goes on to describe the creative process of programming:
The difference between Lisp and Java, as Paul Graham has pointed out, is that Lisp is for working with computational ideas and expression, whereas Java is for expressing completed programs. As James [Gosling] says, Java requires you to pin down decisions early on. And once pinned down, the system which is the set of type declarations, the compiler, and the runtime system make it as hard as it can for you to change those assumptions, on the assumption that all such changes are mistakes you’re inadvertently making.
There are, of course, many situations when making change more difficult is the best thing to do: Once a program is perfected, for example, or when it is put into light-maintenance mode. But when we are exploring what to create given a trigger or other impetus—when we are in flow—we need to change things frequently, even while we want the system to be robust in the face of such changes. In fact, most design problems we face in creating software can be resolved only through experimentation with a partially running system. Engineering is and always has been fundamentally such an enterprise, no matter how much we would like it to be more like science than like art. And the reason is that the requirements for a system come not only from the outside in the form of descriptions of behavior useful for the people using it, but also from within the system as it has been constructed, from the interactions of its parts and the interactions of its parts separately with the outside world. That is, requirements emerge from the constructed system which can affect how the system is put together and also what the system does. Furthermore, once a system is working and becomes observable, it becomes a trigger for subsequent improvement. The Wright Brothers’ first flying machine likely satisfied all the requirements they placed on it, but they were unwilling to settle for such modest ambitions—and neither was the world—and even today we see the requirements for manned flight expanding as scientific, engineering, and artistic advances are made.
This is the reason the Waterfall Model, as it’s usually implemented, is such a problem. The typical implementation assumes that one can go from requirements, to design, to implementation, to testing, and to delivery, without ever needing to go back and significantly revise something. The Waterfall Model was originally conceived as having feedback loops, but most organizations don’t implement it that way.
James Gosling in talking about Java acknowledges that at least the object-oriented notion of late or dynamic binding is important to support shipping new (though constrained) versions of libraries and to make derived classes, but he seems to forget that the malleability of the medium while programming is part of the act of discovery that goes into understanding all the requirements and forces—internal or not—that a system must be designed around.
As a system is implemented and subjected to human use, all sorts of adjustments are required. What some call the “user interface” can undergo radical reformulation when designers watch the system in use. Activities that are possible using the system sometimes require significant redesign to make them convenient or unconscious. As people learn what a system does do, they start to imagine what further it could do. This is the ongoing nature of design.
Gabriel begins to merge the concepts of creative writing and creative programming:
The acts of discovery and perfecting merge for the writer, and for this to happen it is essential that the medium in which the creation is built be the medium in which it is delivered. No writer could even begin to conceive of using one language for drafting a novel, say, and another to make its final form.
Like many types of artist, the writer is manipulating during the discovery and perfecting stages the very material that represents the work when complete. In the old days, the manuscript would be retyped or typeset, but not so much anymore. The word “manuscript” itself refers to the (hand written) copy of the work before printing and publication, which the writer constantly revises. But the manuscript—which is constantly worked over—differs from a published work only by finally pinning down decisions like fonts, page breaks, illustrations, and design elements.
What’s apparent to me is that Gabriel sees the value of having a stricter programming language for when a work product is mature, but while a product is in its early and mid-development stages, greater flexibility in a “programming medium” is the ideal.
I get the image that what he wishes more of software development was like is using a word processor. When you’re using one you don’t have to plan what you’re going to write. You can just start. Once you have some things in the document, you can shift things around. You can begin to shape it. Contrast this with the punch cards, teletype machines, or the line editors we used to use for programming (I remember them). If you made a mistake, it was a real pain to correct it. It became more productive to try to plan what you were going to write, because mistakes were costly. So it would seem this is what most of us are doing today with software development. And the thing that’s weird about it is that we have something like “word processor” technology for software here with us today. It’s been there for decades. We’re just not using it, almost out of willful ignorance.
What’s emerging in my mind as Gabriel describes his experience with developing a product is the Groovy language, which is based on the JVM, and allows both dynamic and static typing. You can use straight Java with it. You can mix static and dynamic typing together, or make your code completely dynamic. Getting away from Groovy for a moment, when I read what Gabriel is talking about, the image that emerges for me is that the developer starts out with strictly dynamic types in a late-bound system. It is written, executed, and delivered this way. At about Version 3, say, static types and restrictive scoping start to get introduced (like declaring a class “final”, and certain parts “private”). As time passes more of this gets introduced into the system as it matures, and the development team becomes certain that they don’t want certain parts to change. This is like typesetting when a manuscript is deemed ready to go to print, after many drafts have been reviewed.
Continuing, Gabriel gets more concrete about the problems with the contradictions in purposes that developers are faced with today:
The distinction some make between prototyping and software development has vexed me my whole career. I have never seen a version 1.0 of a system that was acceptable. I find that every software group messes with the software up until the very time it’s deployed or delivered. At the outset the developers want to get something running as soon as possible—they are interested in leaving things out that are not necessary to the purpose of understanding the internal forces and external reactions affecting what has been built. When prototyping was considered part of the development process, people spoke of prototyping languages or languages good for prototyping as an ideal. Though these times are long past, software developers still need to go through the same steps as prototypers did and for the same reasons, but what these developers are using are delivery languages or languages designed to describe tightly a finished program.
The idea behind a description language is to create the most complete description of a runnable program, primarily for the benefit of a compiler and computer to create an executable version of it that is as efficient as it can be, but sometimes also for readers who will perhaps be enlightened by knowing all the details of the program without having to do a lot of thinking.
I like to think of a program written in one of these languages as a “measured drawing.” A measured drawing is a set of plans made by a craftsman of a piece of furniture, for example, which enables an amateur to make an exact copy. Whereas a master craftsman will be able to take a rough drawing or an idea of such a piece and by virtue of his or her expertise make a perfect specimen, an amateur may not be able to make all the joints perfectly and in the perfect positions without having all the dimensions and distances spelled out. Often, a measured drawing requires the craftsman to build a prototype to understand the piece and then a second to make one perfect enough to provide the measurements.
The problem with the idea of using a programming language designed to describe finished programs is that programs are just like poems. Paul Valéry wrote, “a poem is never finished, only abandoned.” Developers begin work on a project and deliver it when it seems to work well enough—they abandon it, at least temporarily. And hence the series of released versions: 1.0, 1.1, 1.5, 1.6, 2.0, 2.05, etc. Trying to write a program in manuscript using a finished-program programming language would be like trying to write a poem using a Linotype machine. Perhaps using a Linotype machine and thereby producing a beautiful printed book of poetry is good for readers, critics, and therefore literature, but it doesn’t help poets and would make their writing lives intolerable with lousy poetry the result.
The screwed-up way we approach software development is because of what we have done with programming languages. With some exceptions, we have opted for optimizing the description of programs for compilers, computers, and casual human readers over providing programming media. Early on, computers were insanely slow, and so performance was the key characteristic of a system, and performance was easiest to achieve when the language required you to “pin down decisions early on.” Because of this, much work was poured into defining such languages and producing excellent compilers for them. Eventually, some of these languages won out over ones more suitable for drafting and exploration both in the commercial world and in academia, and as a result it is politically difficult if not impossible to use these so-called “very dynamic” languages for serious programming and system building.
Interestingly, the results of this optimization are well-described by Hoare’s Dictum, often mistakenly attributed to Donald Knuth, which states:
Premature optimization is the root of all evil in programming.
–C. A. R. Hoare
This is interesting to me, because it brings into relief that our practices and beliefs about software construction arise in an ad hoc fashion. We accept this, but for some reason (he attributes it to fear of size and complexity of projects) we don’t accept the ad hoc nature of software construction itself, in the current state of the art. Despite the fact that we accept that “all software has bugs”, for the most part we don’t use technologies that allow us to fix those bugs easily. Instead we use technologies that restrict what we can do in an effort to prevent bugs…but bugs appear anyway. If anything we’ve tried to make up for the inflexibility of our development technologies with Information Systems infrastructure. One of the answers to software inflexibility we as an industry came up with was “put it on the web”. My guess is politics had something to do with this. I guess the belief is if we weren’t using inflexible software development technologies, then bugs would multiply out of control. Interesting how the reaction was to bring software over to an information systems infrastructure that’s less stateful, less restrictive, and more flexible…
Gabriel’s message is we need to give up our fear, because it’s irrational. Rather than restricting developers, a “programming medium” should be used to free them to make frequent and rapid changes to software in the early stages of development. You will still have bugs, but they will be less consequential, because the technology will allow them to be fixed quickly.
Alan Kay made a similar point to what Gabriel said, at his ETech presentation on Squeak and Croquet in 2003:
The key idea here, which is still not the main idea, is what to do until software engineering gets invented. Do late binding. Virtually every piece of software done today is done as early binding, through manipulating text files, compiling. Java still has a hard time adding code to itself while it’s running, for chrissakes. So, if we go back to Peter Deutsch’s Lisp of 40 years ago it did not have that problem.
You should be able to do everything dynamically. The system should be able to safely maintain its own environment while you’re debugging its environment. You should be able to make every change in less than 1/4 of a second, and so forth. And the reason is we just don’t know how to do software yet. So basically, when you have the disaster, you finally realize, ‘Oh, I should’ve done this back here.’ If you’ve got a late-binding system you can go back and change it. If you’ve got a typical C-based–C++, Java, whatever other systems people use–you just can’t do it.
Kay was being imprecise here. What he’s getting at is that software engineering is not yet a mature engineering discipline, as is bridge building, for example. Even Kay would acknowledge that the software we build today is built using “engineering of a sort”. In the second paragraph it sounds to me like he’s talking about systems which have the “five nines” requirement, that they be up and running all the time. There’s not really time for downtime. In that case he has a point. Even if you decompose the system into shared modules (like DLLs in Windows), there’s still the possibility that trying to change one of them will force you to take the system down. The web technologies of late have managed to alleviate this issue some. For example, in ASP.Net it’s possible to update an app without shutting it down first. But if you haven’t structured your modules ahead of time such that multi-step user interactions are atomic, I suspect an update would be disruptive. That’s kind of the point. In a late-bound system you don’t have to worry about structuring modules for application maintenance. Since there’s no distinction between what’s “live” and what isn’t in a dynamic system, you can just change it on the fly.
Gabriel, in my last quote here, gets into what makes great software–creativity:
[Lisp] is a good vehicle for understanding how programming and software development really take place. Because programming Lisp is more like writing than like describing algorithms, it fits with how people work better than the alternatives. The problem, of course, is that writing is considered a “creative” activity, involving “self-expression,” and creativity is a gift while self-expression is hokey. Not so:
The conventional wisdom here is that while “craft” can be taught, “art” remains a magical gift bestowed only by the gods. Not so. In large measure becoming an artist consists of learning to accept yourself, which makes your work personal, and in following your own voice, which makes your work distinctive. Clearly these qualities can be nurtured by others…. Even talent is rarely distinguishable, over the long run, from perseverance and lots of hard work.
–David Bayles & Ted Orland, Art & Fear
Writing and programming are creative acts, yet we’ve tried to label programming as engineering (as if engineering weren’t creative). Instead of fearing creativity, we need to embrace it.
My emphasis in bold/italics. Couldn’t have said it better. 🙂
On Richard Gabriel’s site I also found an interview with him in 2002 titled, “The Poetry of Programming” in which he talked about his idea for creating a degree program called a “Master of Fine Arts in Software”. It would’ve focused on just what he’s been talking about. It seemed for a bit like the University of Illinois was going to take up the idea, but I can’t find references to such a degree program being implemented anywhere. I guess the idea died.
My publishing past
In case you were wondering, I mentioned earlier that I had written a few published articles in the past. My first one appeared in the Dec. ’92/Jan. ’93 issue of Current Notes magazine, called “Atari’s Future: The User Comfortability Factor”. My second was in the February and April, 1993 issues (a two-part series) in Atari Classics magazine, called “Advanced C Programming on the 8-bit”. Nothing big, but it didn’t matter. I really enjoyed seeing what I wrote in print. 🙂 My third article was published online in May, 2002 on DevX.com, called “Create Adaptable Dialog Boxes in MFC” (note: The inset graphics figures for the article are distorted for some reason. If you click on them they show up nicer).