Java: Let it be

Joshua Bloch, Chief Java Architect at Google, gave a talk entitled “The Closures Controversy” (PowerPoint slides) at Javapolis in December 2007. I found it online through reddit, and it intrigued me, because I think it illustrates a disconnect between what we as an industry are doing and the goals we have. Bloch also makes what I think is a reasonable argument for limiting Java.

Those of you who’ve read what I’ve written for a while might be surprised by that statement. I have been critical of the IT industry for not thinking in more powerful terms about programming/computing. I am not endorsing Java. After watching his talk I was prepared to lay into Bloch for not having higher ambitions for the language, but as I thought about it I realized that it’s unreasonable to expect a computing environment to be more than what it was designed for. It helps me see that Java has a shelf life. To expect more would mean trying to change it into something it’s not.

The appeal of Java

Bloch began his talk using Gosling’s ideas from his article The Feel of Java, published in IEEE Computer in 1997, as a touchstone for what Java ought to be like in the future:

It is a language for a job. Doing language research was an anti-goal [in the development of Java] . . . It is practical, not theoretical. It’s driven by what people need. Theory provides rigour, cleanliness, and cohesiveness. [Java had] no new ideas. It took ideas from existing languages and put them together. It maybe added a few new things, but that wasn’t the goal. And this is perhaps the most important slide in the whole talk. He said, ‘Don’t fix it until it chafes’, to keep it simple. Don’t put anything in the language just because it’s nice. Require several real instances before including any feature. In other words, ‘Just Say No, until threatened with bodily harm.’ He said Java feels hyped, playful, flexible, deterministic, non-threatening, and rich, and like I can just code.

Bloch continued from the article:

Java is a blue collar language. It’s not a PhD thesis. It’s a language for a job. Java feels familiar to many different programmers, because we preferred tried and tested things.

When Gosling said Java is “not theoretical”, he was comparing it to languages/VMs that are based on computer science theory, like Lisp, which is based on lambda calculus. By “practical” he meant the instrumentalist approach to design: the feature set and the architecture the language/VM supports should handle current problems. Nothing more, nothing less. In my view this is short-sighted. It means that after acquiring and using the language to solve current problems you will run into problems the language wasn’t designed to handle, because new challenges arise. You will have to improvise within a structure that is ill-suited to the task, until whoever is in charge of the language adds new structures that handle the problem effectively. Wash, rinse, repeat. I have a feeling that this approach is a defense against languages whose designers add features willy-nilly without considering how they fit into the overall goal of the platform. Bloch points out that “theoretical” languages don’t have this problem, but he passes by it quickly.

Java was a swing in the other direction from languages like C/C++ that were said to “give you more than enough rope to hang yourself with”. I guess programmers “hung themselves” more often than not. C and C++ are powerful in the sense of allowing you to access the hardware directly inside of a language that gives you high-level structures. If you’re interested in doing anything more than building kernels, device drivers, or VMs/compilers, they’re pretty weak. Java rode a wave that was happening at the time it was introduced. C and C++ were the dominant languages. All sorts of projects were done in them, including applications. Java provided a productivity boost to project groups with this orientation by getting rid of many of the pitfalls of using C/C++ (buffer overruns, memory leaks, null or uninitialized pointer references, etc.), but in terms of building more abstract software like applications/environments with the power the language and the VM gave you, it wasn’t that different from C/C++. So in essence it played to the tastes of programmers of the time. It went for familiarity, but without having a particularly good technical reason for it. Java doesn’t allow direct access to the hardware. So why did the language designers feel it necessary to play “follow the leader” with C/C++ in terms of conventions and power? It was an easier sell to programmers. They like familiarity.

Java meets the programming culture where it is. As long as the culture is ignorant of the fact that the architecture it’s using is ill-suited to the task at hand, then technologists can convince themselves and others that problems which arise from using a limited architecture for large projects just go with the territory, and there’s nothing anyone can do about it. It’s about time we became more enlightened, but it’s going to take us admitting that despite our masterful technical skills we don’t know very much about why it is we do the things we do with computers, beyond the joy of solving puzzles. According to Alan Kay, and I think he’s right about this, it will take learning about and exposing ourselves to other things besides computers.

Quoting Kay, from The Early History of Smalltalk:

Should we even try to teach programming? I have met hundreds of programmers in the last 30 years and can see no discernible influence of programming on their general ability to think well or to take an enlightened stance on human knowledge. If anything, the opposite is true. Expert knowledge often remains rooted in the environments in which it was first learned–and most metaphorical extensions result in misleading analogies. A remarkable number of artists, scientists, philosophers are quite dull outside of their specialty (and one suspects within it as well). The first siren’s song we need to be wary of is the one that promises a connection between an interesting pursuit and interesting thoughts. The music is not in the piano, and it is possible to graduate Juilliard without finding or feeling it.

I have also met a few people for whom computing provides an important new metaphor for thinking about human knowledge and reach. But something else was needed besides computing for enlightenment to happen.

Tools provide a path, a context, and almost an excuse for developing enlightenment, but no tool ever contained it or can dispense it.

The state of Java today

Bloch’s introduction felt like cheerleading for Gosling’s vision for Java, but then he got into what’s painful about it now. He talked about the Java community’s involvement in the design of Java, and the wisdom of its actions. “How are we doing today?”, Bloch asked the audience. He showed several examples of messy Java generics. It didn’t look that different from C++. The audience laughed. He said, “Not so well, unfortunately. This is how Java looks now.”

The way he described things made it sound like Sun “jumped the shark” with generics. He was more polite than to say this. He said generics per se were not a bad addition to the language, but their implementation allowed so much freedom that Java code is becoming incomprehensible even to people who have been working with the language for years.
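To make the point concrete, here is the kind of signature he was talking about. The wrapper class below is my own sketch; the type declaration it reproduces is the actual signature of java.util.Collections.max, often cited as an example of how far Java generics have drifted from “I can just code”:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

public class GenericsMess {
    // This reproduces the real signature of java.util.Collections.max.
    // Parsing "<T extends Object & Comparable<? super T>>" is the kind of
    // exercise Bloch's slides poked fun at.
    static <T extends Object & Comparable<? super T>> T max(Collection<? extends T> coll) {
        return Collections.max(coll);
    }

    public static void main(String[] args) {
        System.out.println(max(Arrays.asList(1, 3, 2))); // prints 3
    }
}
```

The call site, at least, stays simple; it's the declaration side that has become a specialist's language.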

Generics: An escape hatch

In my view generics are just an attempt to get out from under the limitations of Java’s strict type system. The same goes for generics in .Net.

Generics are the “managed code” world’s version of C++ templates. They are also called “parameterized types”. They give you some of the programming flexibility of a dynamic language. Statically typed versions of generic code get generated depending on what types are called for in statically typed code. What they don’t give you is the “get to the point” feel of programming in a powerful dynamic language, because you have to tell the compiler about how many types the generic code will accept, and in what form it will accept them, where type information needs to go in the generic code, and what form type information will take when data is returned from the routines you write. In other words, you have to give some metadata for your code so that the static type system can deal with it. I call it “overhead”, but that’s just me. What static type systems force you to do is talk about the data flow of your code. This isn’t so that humans can understand it (though with some discipline it’s possible to create readable code this way), but so that the compiler and runtime don’t have to figure out what the data flow will be when your code is executed. It’s really a form of optimization.
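As a small illustration of that “overhead” (the class and method names here are mine, a minimal sketch): every angle-bracketed annotation below exists to tell the compiler about the data flow, not to change what the code does.

```java
import java.util.Arrays;
import java.util.List;

public class GenericsOverhead {
    // The <T extends Comparable<T>> declaration, the List<T> parameter, and
    // the T return type are all metadata for the static type system: they
    // describe what goes in and what comes out before anything runs.
    static <T extends Comparable<T>> T max(List<T> items) {
        T best = items.get(0);
        for (T item : items) {
            if (item.compareTo(best) > 0) {
                best = item;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(Arrays.asList(3, 7, 5))); // prints 7
    }
}
```

In a dynamic language the same routine would carry none of that annotation; the trade is made so the compiler never has to discover the types at runtime.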

Bloch explored where the complexity that now bedevils Java comes from. He got into talking about feature creep. It’s not the number of features, it’s the permutations of feature interactions. “When you add a feature to an already rich language, you vastly increase its complexity,” he said.

The “quagmire” of closures

First, let me get a definition out of the way. A closure can be thought of as an anonymous function, carried around together with the environment it was created in. It is not officially a method in any class. In dynamic languages you can create closures inline, on the spot, and use them immediately. You can pass them around from method to method, and they never lose their integrity. In the context of OOP a closure is more aptly described as an anonymous class that contains a single anonymous function, because in OOP languages closures are objects with methods of their own.
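For comparison, the closest thing current Java has is an anonymous inner class implementing a one-method interface. A minimal sketch (the Fn interface and makeAdder are hypothetical names); note that Java only lets the anonymous class capture locals that are declared final, a restriction true closures don’t have:

```java
public class CaptureDemo {
    interface Fn {
        int invoke(int x);
    }

    // The returned object "encloses" n: it keeps a reference to the
    // captured value after makeAdder has returned. Java insists that
    // n be final for this to compile.
    static Fn makeAdder(final int n) {
        return new Fn() {
            public int invoke(int x) {
                return x + n;
            }
        };
    }

    public static void main(String[] args) {
        Fn add5 = makeAdder(5);
        System.out.println(add5.invoke(3)); // prints 8
    }
}
```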

Bloch gave a brief explanation, which I think is correct: closures “enclose their environment”. They create references to whatever in-scope variables are used within them, at the time they are created. This is called “lexical scoping”. They also automatically return the value of the last expression evaluated within them. Bloch gave two reasons why closures are being considered for Java:

  • They facilitate fine-grained concurrency in fork-join frameworks. You can use anonymous classes with these frameworks, but they’re a pain right now. He said it’s important that programmers use fork-join frameworks because, “we have these multi-core machines, and in order to actually make use of these cores, we’re going to have to start using these parallel programming frameworks.”
  • Resource management – Always using try-finally blocks for resource management is painful and causes resource leaks, because programmers misapply the construct. Closures would help with this.
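The fork-join framework Bloch refers to hadn’t shipped in the JDK at the time of the talk, but the “pain” of the anonymous-class route shows up with plain java.util.concurrent just as well. A sketch (class and method names are mine):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AnonymousClassPain {
    // Submitting one trivial task to a thread pool. The anonymous
    // Callable is the ceremony that a closure syntax would replace
    // with a single expression.
    static int compute() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> result = pool.submit(new Callable<Integer>() {
                public Integer call() {
                    return 6 * 7;
                }
            });
            return result.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(compute()); // prints 42
    }
}
```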

Even though he conceded these points, he clearly has disdain for closures. He used BGGA closures as his example. With the way they’ve been implemented, I’d have disdain for them, too. The hoops you have to jump through to make them work makes me glad I’m not using this implementation.

He said closures encourage an “exotic form of programming”, which is the use of higher-order functions: functions that take functions as parameters, and/or return functions. He jabs, “If you give someone a new toy, then they’ll play with it.” Ah, so higher-order functions are a “toy”. Mm-hmm… I’m tempted to say something impolite, but I won’t.

Bloch talked about closure semantics next, and how they’re different from normal Java semantics. He expressed concern throughout the rest of his talk that this would confuse Java programmers, and thereby create bugs. He has a point. Still, I don’t mind the idea of mixing computing models in a single environment (VM). I think there are advantages to this approach.

Some history: Where modern computing comes from

Code blocks (closures) in Squeak/Smalltalk work differently than normal Smalltalk code as well.

I’d venture to say that the reason closures have been put in Ruby, and are now being considered for Java, is that Smalltalk was the first OO language to use them. Even though I’m sure it’s largely forgotten now, there are some significant artifacts of computing that have come into common use because of the Smalltalk-80 system. The GUI you’re using now, “hibernation” on your PC, bit-blitting, and the MVC design pattern are a few other examples. Language and system designers have been mining Smalltalk features for years, and incorporating them into modern, though less powerful technologies.

In retrospect people within the Smalltalk community have pointed to some design flaws in it. They didn’t ruin it by any means, but it’s recognized that there’s room for improvement. In a conversation between Alan Kay and Bill Kerr (a blogger I’ve mentioned previously) that I was in on, Kay expressed muted regret about the inclusion of closures in Smalltalk, saying they broke the semantic simplicity of the language. I can see where he’s coming from. Closures introduce an element of functional programming into OOP. He desired a system that was purely OO, mainly for consistency so the language would be easier for beginners to learn. I haven’t gotten far enough along in my research to see what Kay would’ve suggested instead of closures. My understanding is he was heavily involved with the development of Smalltalk up until Smalltalk-76. Due to a schism that developed in the Smalltalk team, he gave up most of his involvement in it by the time Smalltalk-80 was in development. This is the version that ultimately was released to the public. “The professionals”, as Kay put it, took over and so some of the vision was lost. They turned the Smalltalk language into something that was friendly to them, not beginners.

Closures are something that anyone new to a dynamic OO language has had to learn about. They’re pretty painless to use in Smalltalk, in my opinion. They’re easy to create. Contrast this with closures in Java. Bloch referenced a post written by blogger Mark Mahieu, where he got some of the examples. Here are a couple of them, with equivalents in Smalltalk for clarity.

Java example #1:                          

static <A1, A2, R> {A1 => {A2 => R}} curry({A1, A2 => R} fn) {
   return {A1 a1 => {A2 a2 => fn.invoke(a1, a2)}};
}                          

Smalltalk equivalent:                          

curry := [:fn | [:a1 | [:a2 | fn value: a1 value: a2]]]
                           

Java example #2:                          

static <A1, A2, A3, R> {A2, A3 => R}
      partial({A1, A2, A3 => R} fn, A1 a1) {
   return {A2 a2, A3 a3 => fn.invoke(a1, a2, a3)};
}                          

Smalltalk equivalent:                          

partial := [:fn :a1 |
            [:a2 :a3 |
             fn value: a1 value: a2 value: a3]]

This is what I meant about “hoops”. The type information you have to supply for the Java closures detracts from the clarity of the code. It’s clunky. That’s for sure.

I’m not going to say anything about these structures (“curry” and “partial”) right now. If readers would like, I can write a post just on them, explaining how they work, and how they can be useful.

Bloch’s proposal

Bloch put forward a proposal: rather than adding closures to the language, at least in the near future, Java’s designers should focus on creating concise structures (i.e. add features) that implicitly create anonymous classes, to eliminate the verbosity (I guess this was his answer to the challenge of concurrency), handle resource allocation, and protect critical sections for threading. Basically it sounded like he favored the stylistic approaches that Microsoft has taken with .Net, with some differences.

He rejected out of hand the idea of adding library-defined control structures to the language, something made possible by closures, because it would encourage dialects. Squeak has a lot of library-defined control structures. I think they work quite well. They don’t change the fundamental rules of the language. Instead they expand its “vocabulary”. That is, they expand upon what you can do using the language’s existing structures. I understand this is hard to imagine in the context of Java, because most programmers are used to the idea that control structures are invariants in the language. Bloch would probably feel uncomfortable with this concept, but ask yourself: which language is getting more cumbersome and complicated as time passes?

I think there is a fundamental contradiction in goals between what Bloch says he prefers about Java, and the growing problems of computing. He says that adding features to a language increases its complexity, and he makes a good case for that. Yet, how is Java going to solve new problems in the future? His answer is to add more features to the language. Granted they’re focused, and not open-ended to allow further expansion, but they’re still feature additions all the same. As is illustrated by Squeak, library-defined structures would actually help reduce the number of features in the language. Even though Squeak can do more things concisely than Java can, the feature set of the language is very small. As a point of comparison, Lisp’s language feature set is even smaller, yet it’s more capable of handling complex problems than Java. They accomplish this by taking what Java does as a built-in feature and instead using message-passing (Squeak), closures (Squeak, Lisp, others), an extensive meta system (Squeak, Lisp, others), named functions (Lisp, Scheme), and macros (Lisp).
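To illustrate what a library-defined control structure looks like, here is a sketch in current Java (the names are mine): times is just an ordinary static method, yet at the call site it behaves like a new kind of loop. The anonymous-class ceremony around the body is exactly what closures would strip away.

```java
public class LibraryControl {
    interface Block {
        void run();
    }

    // A "control structure" defined as ordinary library code: no change
    // to the language, just an expansion of its vocabulary.
    static void times(int n, Block body) {
        for (int i = 0; i < n; i++) {
            body.run();
        }
    }

    public static void main(String[] args) {
        final StringBuilder out = new StringBuilder();
        times(3, new Block() {
            public void run() {
                out.append('*');
            }
        });
        System.out.println(out); // prints ***
    }
}
```

In Smalltalk even ifTrue:/whileTrue: are defined this way, as messages taking blocks, which is why its core language can stay so small.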

Where Java sits

I agree with Bloch that if the intention of Java was to be a language that is “practical” (i.e. designed from the standpoint of instrumental reasoning), easy enough to use so “you can just code”, and doesn’t strain the ability of Java programmers to understand their code, then generics should’ve been scaled back, closures shouldn’t be added, and his suggestions for expansion should be considered. Unfortunately this also means that it’s going to get harder and harder to work with Java as time passes, because the demands placed on it will exceed its ability to deal with them. Maybe at that point people will pursue another “escape hatch”.

Struggling to transcend Java

I compliment the Java community on its efforts to build something better out of a limited platform. What some are trying out is the idea of a VM of VMs. That is, building a VM on top of the JVM. This creates a common computing environment that enables some interoperability between different computing models, while at the same time transcending some of the limitations of the JVM. From what I hear, it sounds like it’s been a struggle. Personally I think it would be wiser for the industry to start with something better–a “theoretical” language/VM–and then try to build on top of that. Our computing culture has a lot of resistance to this idea. Even the efforts of groups to build better language/VM architectures on top of the JVM seem to be regarded as curiosities by most practitioners. They get talked about, and get used by a relative few who recognize their power for real projects, but they’re ultimately dismissed by most developers as “impractical” for one reason or another.

It’s funny. Several years ago I talked with a friend about how slow Java was. His solution then was to “throw more hardware at it”. Some months back I told him about Groovy. He read up on it and dismissed it as too slow. Why not just “throw more hardware at it”? I didn’t hear that recommendation from him. I didn’t press the case, because I didn’t have a dog in the fight. I’m not part of the Java community. I figured if he didn’t want to pursue it, he either had a good reason, or it was just his loss. My inclination is to believe the latter. I guess by and large there’s not enough pain yet to motivate people to reach for something better.

—Mark Miller, https://tekkie.wordpress.com


31 thoughts on “Java: Let it be”

  1. While I can understand the reluctance to bring more features into a workhorse language, I still can’t fathom why they would avoid making it easier to work in the language.

    Java is just riddled with boilerplate requirements that make it a chore to work with. Take, for example, proper resource handling:

    FileReader reader = null;
    try
    {
        reader = new FileReader(someFile);
        BufferedReader bufferedReader = new BufferedReader(reader);
        String line;
        while ( null != (line = bufferedReader.readLine()) )
        {
            // Do something
        }
    }
    finally
    {
        if ( null != reader )
        {
            try
            {
                reader.close();
            }
            catch ( Exception e )
            {
                // Ignore
            }
        }
    }

    It’s insane that I have to deal with all of this cruft just to read a file! I should be able to simply do:

    BufferedReader bufferedReader = new BufferedReader(new FileReader(someFile));
    bufferedReader.setCloseOnScopeLoss(true);
    String line;
    while ( null != (line = bufferedReader.readLine()) )
    {
        // Do something
    }

    There are two sources of cruft here: checked exceptions, and the try-finally, which is simply being used to close the file handle when leaving scope.

    Then there’s the action listener:

    myControl.addActionListener(new ActionListener()
    {
        private static final long serialVersionUID = 1L;

        public void actionPerformed(ActionEvent event)
        {
            // Do something
        }
    });

    I’d much rather write:

    myControl.addActionListener(ActionEvent event)
    {
        // Do something
    };

    Any time a language forces you to write stuff that doesn’t add any semantic meaning, it clutters the code, making it harder to write and harder to read.

  2. @Karl:

    Bloch talked about this in his presentation, using the same example you use, under the section on resource management. I agree it’s atrocious, and that it detracts from the meaning of the code. It seems the answer that a lot of Java coders would give for your complaint is “find a library that will do this for you.” I went over this with the same friend I talked about in my post. I told him about how Groovy allows you to open, read, and close a file in something like 1-3 lines of code. He just laughed and said, “You can find a Java library that’ll let you do it in one line of code.” He wasn’t impressed. Apparently in the Java world there’s an API for everything, and there are more being released all the time. I’ve heard the complaint that there’s a new framework coming out for it every couple weeks.

    I ran into similar issues when I used to program in C++/MFC on Windows. There were times when I had to worry about catching an exception when closing a resource after having already caught an exception on the resource itself. It seemed insane to me then, too. I would’ve appreciated an API call called something like file.CloseIgnoreException(), in addition to the normal Close() call. Then I could’ve just been done with it.

    Bloch covered .Net’s use of the Dispose pattern, and proposed that Java designers create an equivalent, using the existing Closeable interface, that looks something like:

    try (/* insert code here to open file */)
    {
        /* do stuff with file */
    }

    without the finally block. The Java compiler will just generate code to use Closeable’s methods, in this case on the file you have open, to automatically close the file when you leave the try block. I’m not sure how it would handle exceptions. This is like how the “using” construct works in C# with the IDisposable interface.
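    For illustration, here’s a hand-written sketch of roughly what such a construct might expand to. The class and method names are mine, and the expansion is my guess at the mechanics, not anything from Bloch’s proposal:

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class AutoCloseSketch {
    // Roughly what the proposed try-block might desugar to: the compiler
    // would insert the finally that calls close() through the Closeable
    // interface, so the programmer never writes it by hand.
    static String firstLine(File f) {
        try {
            BufferedReader reader = new BufferedReader(new FileReader(f));
            try {
                return reader.readLine();
            } finally {
                reader.close(); // inserted automatically under the proposal
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```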

    I was confused a bit by Bloch’s example, where you have to worry about an exception on close(). The only thing I could figure out was this would come into play if you were writing to a file, and were just within “inches” of filling the disk, and the close() call flushed the write buffer, and it overflowed the disk. But he was trapping exceptions on close() when reading a file. I don’t understand that. I can understand an exception on opening the file, but not when closing it when you’re just reading data.

    I think where Java acolytes would object to Bloch’s proposal of concise structures is that it implicitly introduces hard-wired dependencies in the language to Java library code. It’s the same way in .Net. I remember I ran into a little trouble with the foreach construct, which also uses the Dispose pattern, in C# several years ago when I was first learning how to use it in conjunction with a class where I was implementing IDisposable myself. I eventually got it. It was just a matter of intimately understanding how foreach worked; what methods it called implicitly, and when.

    Anyway, I feel for your plight. I agree things should be easier. It’s just a matter of how you go about it I guess. Like I said, in the meantime I guess try looking for an open source/free library that’ll handle file manipulation for you to make your life easier until the Java community figures this stuff out.

  3. Feels like open season on Java on the ’nets lately. What has gotten into people? Or maybe this is just another cycle of what the ’nets are known for: mostly just marketing. It is unfortunate that so many try to advance their language/runtime/pet project at the expense of another’s. Getting along, it seems, may not be in store in the future of the ’nets; people will continue to peddle the latest technology in which they have or are beginning to invest, and meanwhile they’ll take every chance available to them to stonewall or outright bash any competitor. I suppose that’s natural. It’s unfortunate, nonetheless.
    Java is and I sorely hope will continue to be a mainstream language for the foreseeable future, and if that implies modifications to the tune of BGGA closures or any other advanced features, I hope that detractors (those that mean well or otherwise) don’t get too much influence. Either that, or perhaps it’s about time, should we just kill Java? To hell with people and their careers, we have new toys in store.

  4. I just want to say I really enjoyed your article, and I just wanted to comment with my personal experience with Java.

    To give you some history, I learned Java in college (they had just switched to it from C++ my freshman year). I initially didn’t hold a high opinion of it, but soon grew to really respect and love it. I liked that it removed the possibility of certain kinds of bugs from C/C++, and I always felt the basic API provides a rich set of classes that is very well documented (and at least for me, easy to traverse), was fairly complete, and for the most part very consistent.

    4 years down the road, I got my first job as a professional, but now working with C#, which I have come to think of as a Java wannabe, with some nice additions, but other things that I really didn’t like… being restricted to the Microsoft platform was one problem, the documentation never seemed as well done as Java’s, and I have found multiple holes in the API that I never had with Java (inconsistent documentation, and other Windows Forms things that just couldn’t be done in C# without dropping to unmanaged code).

    3 years later and I finally decided to try Ruby to see if all the hype was well founded. Having used it for about 3 months now, Ruby instantly became my new favorite language. It fails compared to Java and other “workhorse” languages when doing things like cpu bound computations, but just the syntax alone gives that good feeling I never experienced in Java (and had read about prior to trying Ruby). And then of course the hugely shrunken code to do the same thing in Java is a big plus.

    With that history in mind, I think I can safely say that, while I would like to use clean closures in Java to gain the kind of expressiveness that you can have in Ruby and other dynamic languages, I think trying to “fix” Java is an effort in futility. You just can’t gain the kind of expressiveness of dynamic languages like Ruby in a strongly statically typed language. I think you were on track with the VM within a VM alternative… I am very enthusiastic about JRuby, which either can now or plans to soon support compiling Ruby to Java bytecode (with some constructs dropping down to the JRuby interpreter when the JVM doesn’t support it). I haven’t fully looked into that yet, but have actually tried JRuby strictly as an interpreter allowing me to use the rich API of the base Java language, along with the expressive power of Ruby.

    For the action listener example, I can use singleton methods to very easily create my action listener (and probably open classes to change the base swing components to support blocks). As an example:

    listener = ActionListener.new
    def listener.actionPerformed(e)
      # do stuff
    end
    button.addActionListener listener

    Or with open classes fix it to work correctly with blocks from the beginning (I haven’t actually tried this way of doing it, but I don’t see why it wouldn’t work):

    class JButton
      def action_performed(&block)
        listener = ActionListener.new
        # use define_method on the singleton class so that `block` stays
        # in scope (a plain def starts a new scope and would lose it)
        singleton = class << listener; self; end
        singleton.send(:define_method, :actionPerformed) { |e| block.call e }
        addActionListener listener
      end
    end

    button.action_performed { |e|
      # do stuff
    }

    I think our future will be of languages higher level than Java, so the sooner we dust it off our boots the better. Just look at the history of programming… machine code turned into assembly language, which turned into procedural languages, which evolved into C, which got replaced by C++, further supplanted by Java and C#… the next step seems to be dynamic languages, hopefully with better compilers and/or VMs plus faster hardware that can remove speed as an issue, like Java did.

    In the meantime, I will stick with JRuby and be thankful that I can have the best of both worlds.

  5. Pingback: roScripts - Webmaster resources and websites

  6. The thing is, all the static languages strive for ugliness.
    Now, “dynamic” languages such as perl strive for ugliness as well.

    The human factor is one where you think design is coherent, logical, beautiful.

    It is how I felt in theory about the .NET approach with System.xxxx and similar Namespaces. Unfortunately C# will also strive for ugliness in the long run.
    It happened in the past and will happen again: languages evolve in ways that make them harder and harder to use. “Natural”-looking languages with a clean syntax are a lot harder to find.

    Fortran example for a loop:

    DO i = 1, iFunc(4711)
    ENDDO

    Does this look elegant? I think it looks ugly. In C it would still be ugly, in Java too… let’s jump quickly to Java

    try
    {
        object.method();
    }

    The object.method (message) is rather elegant; the ; should be optional, though, because there is a newline on this line already. What I don’t like is that you need to specify that this is a special condition in the try {}

    I think this is an ad-hoc solution to another problem, the problem that messages to objects could somehow fail – but this should never be the case. It should not be possible. The default for the object should be behavioural, like in Prototype objects. But the thing is, once a language has evolved in a certain direction, you can’t just tell it “hey buddy, move back 10 years, we went down a wrong road”…

    Ultimately I think the language that will win out is one that is:

    – very flexible (so you can use different thinking patterns in it easily)
    – strives to never become too complicated/unmaintainable (does anyone maintain huge complex Perl apps that are crucial, usable and important in a system? I don’t think so)
    – allows one to solve tasks easily and with as few “instructions” as possible (in bash, you do “mkdir” to create a directory. In a language it should be as simple too) etc..
    – Also, I think it would be a good way to structure the language in a way that using it makes sense from a HUMAN standpoint. I like the System.xxx approach of .NET; I think it might be worth pursuing stuff like
    Network.ping ‘www.google.de’
    Audio.mp3(‘fooobar.mp3’).cut ‘0..20%’ # from the beginning to 20%

    I think you know what I mean; it would all make logical sense to write in such a pseudo-language, and it would always be flexible.
    The backends could be whatever the programmer chooses to have btw, so he could use Java – but the “interface” to it must be human readable (and afterwards, having a GUI to it would be nice)

  7. Regarding generics, you say:

    > They give you some of the programming flexibility of a dynamic language.

    I would say that they give you *some* of the expressivity of a language with a more powerful type system. You can get *all* the flexibility of an untyped (“dynamic”) language by using Object in Java. A container of Objects in Java is pretty much equivalent to a container in an untyped language like Smalltalk or Ruby. In both cases, you can put whatever you like in there, but there’s a dynamic check before you use it. The difference is in Java you have to write the downcast, while in Smalltalk the check is implicit.
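    To make the comparison concrete, here is a minimal Java sketch of the two styles rieux describes; the class name ContainerDemo is invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ContainerDemo {
    public static void main(String[] args) {
        // Untyped style: a container of Objects, as in a Smalltalk collection.
        List<Object> raw = new ArrayList<>();
        raw.add("hello");
        // The programmer writes the downcast; the check happens at run time.
        String s = (String) raw.get(0);

        // Generic style: the element type is part of the container's type.
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        // No cast needed; the compiler has already checked the element type.
        String t = typed.get(0);

        System.out.println(s + " " + t);
    }
}
```

    In both cases the same value comes out; the only difference is whether the programmer or the compiler vouches for its type.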

    I consider generics a small step toward making Java almost usable, but as you point out, the boilerplate can be overwhelming. This isn’t the fault of types, because there are languages with much richer type systems than Java’s where you don’t have to explicitly write any types. They’re all inferred by the compiler. I suspect that a good portion of Java’s type annotations can be inferred as well, but I haven’t worked it out in detail.

    > What static type systems force you to do is talk about the data flow of your code. This isn’t so that humans can understand it (though with some discipline it’s possible to create readable code this way), but so that the compiler and runtime don’t have to figure out what the data flow will be when your code is executed.

    I write down types primarily for humans, not the machine, because that kind of documentation makes programs easier to understand. In languages with type inference, the compiler will figure out the most general type for each variable, so you needn’t write any types at all. It’s worth writing the types, though, because that provides useful documentation about how to use a function and (often) what it does. The experience of many programmers is that a type, such as ('a -> 'b) -> 'a list -> 'b list (in ML notation), is enough to know what a function does.
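    The ML type quoted above is the type of map. A rough Java analogue of that signature, as a sketch with invented names (MapDemo, map), shows how much the type alone conveys:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class MapDemo {
    // Java analogue of the ML type ('a -> 'b) -> 'a list -> 'b list.
    // The generic signature already tells a reader most of the story:
    // apply a function to each element, producing a list of results.
    static <A, B> List<B> map(Function<A, B> f, List<A> xs) {
        List<B> result = new ArrayList<>();
        for (A x : xs) {
            result.add(f.apply(x));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> lengths = map(String::length, List.of("one", "three"));
        System.out.println(lengths); // prints [3, 5]
    }
}
```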

    Even in untyped languages such as Smalltalk or Scheme, it’s possible for a clever compiler to use representation inference or dataflow analysis to figure out what the types of many objects will be, and then it can monomorphize operations on those objects. Types aren’t necessary for optimization, because compilers can make programs run really fast with these other strategies. Types are for expressiveness, for humans.

  8. Pingback: links for 2008-03-07 « that dismal science

  9. @Mike:

    I’ve written about this previously, that the feel of the power you get with Ruby, for example, is due to the computing architecture that is implemented by the runtime of the language. It’s like having a bigger lever for handling complexity. You don’t have to work as hard, because the language/runtime is doing some of the work you used to have to do “manually”. The style of the code is the icing on the cake, though it helps. If you like, Ruby is an example of what I’m talking about here, that this is the sort of thing we need to strive for if we’re working in the domain of applications/environments. We now have the speed to run it. My personal preference is for Smalltalk and Lisp, the main examples I used, but that’s just me.

    I didn’t get into type inferencing in relation to Java, and the reason why is I felt that it had a certain architecture it was built with, one that didn’t have dynamism in it, and it had a certain audience in mind, one that valued a level of consistency to the language. Somehow I had the feeling that the Java community wouldn’t embrace type inferencing, because people would worry about type safety and code readability, and they’re dependent on type information for that. I mean, it took them years of debate before they included enumerations in the language. My memory may be flawed, but as far as I know Java still doesn’t have operator overloading because of community objections.

    @she:

    I like the way you think on the last part. Let me work backwards. As for messages never failing, that’s an interesting idea. I haven’t seen a language like that yet. When you say “Prototype”, I assume you’re talking about a type of language, not a language called “Prototype”. I’ve heard of this language category (primarily Self), but haven’t investigated it in any detail.

    When you say that it should be behavioral, I assume you mean that you can change how the system behaves in response to an error condition. I suppose the method could always return nil in that case, or something similar, rather than throw an exception, but that doesn’t tell you much. Does it mean the object you sent the message to doesn’t receive that message (doesn’t have a method for it)? Does it mean that something the method tried to do failed?

    From what I’ve seen, the reason exceptions are implemented in Smalltalk is it gives the programmer the opportunity to intervene and debug the problem at the point where it occurs. It also gives an opportunity for context to appear. The exception can contain a message talking about what the problem is. The cool thing is it allows you to intervene “while the patient is still alive”. The code is still active, just in a state of suspended animation. The programmer can go in and fix the code, and then try to continue execution from that point.

    As for all languages tending towards ugliness, I disagree. There are some that have maintained a sense of elegance, but they’re not the popular ones. There is something to that, and it’s something I struggle with, because I would like to see these languages get more popular. I think they could help people, but a consideration I often ignore is “What could they do to the language?” A fear that’s been expressed in the Ruby community is that as the language gets more popular, particularly because of Rails, the code used in projects will get uglier, because the newer people coming in who aren’t as interested in elegance, the ones who say “it’s just a job”, will create messy solutions to problems. Their influence may even make the language uglier as the community takes input on features. I’ve heard this about Perl, that one reason it got uglier is that programmers of all sorts made suggestions and Wall obliged them. It sounds like there was no underlying philosophy to it, and that’s what the community that grew up around it liked. Unfortunately it led to the language getting ugly.

    An idea I tried to get across in my post is a significant part of the problem is the programming culture itself–us developers. From what I hear most programmers are not sophisticated in how they think about their jobs. They’re not that educated about what they do, and hardly anyone’s encouraging them to become more aware, to understand higher level computing concepts. In fact computer science curricula by and large today are training “software laborers”.

    What I’m also saying is that our own ignorance is a significant reason why it’s hard to do our job. We’re holding ourselves back. As Alan Kay has said, the way we build software today is like how the ancient Egyptians built pyramids. That was the way the ancient world built skyscrapers. It worked, but it was hard, dangerous work, and expensive. Not that software development is dangerous, but it doesn’t have to be such a slog.

    @rieux:

    I don’t agree that you get all the flexibility of a dynamic language by using Object. Let me get something out of the way first, with all of the languages I’ve worked with, they’ve all cared about “types” to a certain extent. For example in Smalltalk it cares about the class of the object you’re holding on to. If you pass it a message it doesn’t understand, it’s going to balk.

    Object in Java is a base class with few methods. As you say, if you get an object out of a container this way you have to type cast it before using it, which complicates things, because it’s possible to cast wrongly. I don’t consider that the same level of flexibility, particularly in the context of an early-bound compiled language. Even if I substitute one object for another that uses the same interface, I still have to worry about casting to the right type.

    “I write down types primarily for humans, not the machine, because that kind of documentation makes programs easier to understand. In languages with type inference, the compiler will figure out the most general type for each variable, so you needn’t write any types at all. It’s worth writing the types, though, because that provides useful documentation about how to use a function and (often) what it does. The experience of many programmers is that a type, such as ('a -> 'b) -> 'a list -> 'b list (in ML notation), is enough to know what a function does.”

    Like I said, you can use the type system as a form of documentation–with discipline. The type system doesn’t demand that types make any sense to a human. My understanding from when I took compilers in college was that static typing was an optimization technique. My memory could be wrong. My understanding is it enables the compiler to compute the size of the stack frame at any point in time with precision. Another example is rather than having to interrogate an object about what messages it understands, it can just “jump” to the method at run time, because the type has already been determined at compile time, in most cases (there are exceptions).

    I have not run into a situation yet with languages I’ve used where the types have been able to tell me what a function does. Why do programmers find well done comments to be so wonderful if the code is so self-documenting?

    This is a bad example, because a lot of people know it, but assuming you knew nothing about C except for its syntax, what does this function do?:

    int printf(const char *format, ...);

    You can tell from the “format” parameter (not a type specifier) that the function takes a formatting specification, but it doesn’t tell you how it uses it. It doesn’t tell you that you can put a string in for “format” that prints out the desired message. It doesn’t tell you how the format specification relates to the variable number of parameters that come after it. In other words, it tells you very little about what the function does. All it really tells me is “put a character array here”.

    Even if I were using objects, there’s more context, but I wouldn’t say that types tell me what a function does. If anything it’s the non-type labels that tell me:

    DB.FindDataFor(String name);

    Again, what does “String” tell me about what the function does? It’s the labels that give me needed information.

    Types tell me about the data flow. They tell me “I put these things in, and I can expect to get this out.” It helps for understanding code, generally, though it’s not a guarantee. I’ve seen plenty of code in statically typed languages that made my head hurt. I pleaded for inline comments.

  10. Karl Said:

    “It’s insane that I have to deal with all of this cruft just to read a file! I should be able to simply do:

    BufferedReader bufferedReader = new BufferedReader(new FileReader(someFile));
    bufferedReader.setCloseOnScopeLoss(true);
    String line;
    while ( null != (line = bufferedReader.readLine()) )
    {
    // Do something
    }”

    Apologies to Karl and others in advance – but I think our collective expectations about programming (and to some degree – the experience of computing in general) have been muted over the last three decades.

    First a disclosure: I did Smalltalk for 13 years and have been doing Java full time now for 4 years.

    Reflecting on both experiences – I find myself wishing there was more Smalltalk work to be had (or Ruby). You do Java long enough (a couple years) and your threshold for pain skyrockets.

    Yet – like a forgotten melody (ok that’s pretty sappy) I remember doing things like…

    ‘My File.txt’ asFile linesDo:[:aLine | …do something].

    I know Alan Kay was hoping we’d move past Smalltalk onto something better. Sadly, it hasn’t happened. There appears to be a fair amount of excitement in the industry about “dynamic languages”. And more than a few keep trying to bring Smalltalk features into the fold. Why not just modernize Smalltalk and be done with it?

    RP

  11. @Ronald:

    “I know Alan Kay was hoping we’d move past Smalltalk onto something better. Sadly, it hasn’t happened. There appears to be a fair amount of excitement in the industry about “dynamic languages”. And more than a few keep trying to bring Smalltalk features into the fold. Why not just modernize Smalltalk and be done with it?”

    I agree with you. I guess the question is how to do that. I think with any talk of modernization there’s the temptation to make it into something that works with legacy stuff more fluidly. There are some in the community who are all for that, since they feel that will make Smalltalk more relevant. This is just my read on it, but I think there are others in the community who reject that idea, because they see Smalltalk as a revolutionary technology that forges a new path. I think the reticence comes around the “culture” issue. There is a sense of not being welcoming to outsiders, because they want people to join in who are of like mind, who recognize what Smalltalk represents, and want to help develop it in that spirit. In short they don’t want to see a “mass horde” come into it and “dumb it down”. As Kay can attest to with OOP, “There’s no concept so simple that you can’t get thousands of people to misunderstand it.” I can say from my own experience that it’s quite easy to misunderstand what Kay has talked about, and what Smalltalk represents. They are both very open to interpretation, but I think this is because Kay doesn’t have much patience with uncultured people, yet he has a way of using common terminology that a lot of people can understand. He doesn’t want to have to try to explain himself at every turn, plus the ideas he puts forward are quite advanced. Smalltalk has the same problem. It looks like something familiar, and yet it behaves in a way that isn’t. It really confuses people. Most just throw up their hands and say “I can’t deal with this. It wants me to only use its editor, and it’s not file based. It’s a programming language/system that wants to be all about itself. Let me out.”

    As the quote I use from Kay says, it’s more important to spread interesting ideas than to spread technology. Technology helps those ideas be realized more fully if people already have them, but the technology is not going to bring about those ideas. It’s very different from the expectations of the tech industry. I think Ruby is a promising step to something better. At least people are getting interested in it, and recognizing its power. It helps people take another step away from machine-centered programming, and more towards a symbolic, abstract form of programming, which is what we need in our field. It’s also different enough that it almost forces programmers to contemplate some new ideas–that is if they program in it at all. I’ve read articles about shops using RoR to set up basic websites, with the developers doing hardly any programming at all.

    So in short, I think in order for Smalltalk to become modernized, developers are going to have to catch up to it first, in terms of ideas. I predict that, in all likelihood, Smalltalk will simply be bypassed. The ideas from it will be implemented, but in a different system. Smalltalk had its moment in the sun, but its time passed as far as most people are concerned. I wish I had a more hopeful message. The best I can offer is that until that time comes, those of us who do recognize it for what it is can enjoy the future now. 🙂

  12. Mark –

    Great post, thanks!

    I see two sides of this, particularly enhanced by the last year where I was managing developers more than developing. Developers fall into two major groups right now (not saying it can’t change), the “business coders” and the “academic coders” (just two shortcut labels). The BC’s are better off with static typed languages that require a lot of explicitness and verbosity in the code, as well as the organizational structure that languages like C#, Java, and VB.Net impose on the code. Let’s be real, splitting the project into 1 file for each class, and making each class usually nothing more than a full OO equivalent of a C struct, makes it really easy for 15 people to collaborate through a version control system. I can’t vouch for Lisp or Smalltalk, but Perl sure does not encourage putting your project in more than one really large file. The AC’s, on the other hand, figure out how to be really productive with dynamic and/or functional languages. They are the folks who do things like wrap dangerous code in a closure to essentially jail it, or to pass it around to fork/join mechanisms to preserve data integrity, and so on.

    I know you know this, and I bet your other commenters here know it too, since they all seem like smart folks who have been around the industry for a while, but one of these top programmers is so efficient compared to a standard BC that it makes sense to replace 20 BCs with 3 ACs, get the project done faster, cheaper, and better. So why don’t we do this?

    Simple. AC’s are nearly impossible to recruit, and relatively rare. I can count on 2 fingers the number of resumes I have seen that contained Lisp or Smalltalk experience (relatively mainstream AC-type languages). Make that one hand if I add in Ruby. I did actually see a resume list APL once. It’s a self reinforcing issue, a lack of AC’s makes it impossible for companies to have AC’s on teams mentoring or leading, or calling the shots, which makes it impossible for anyone to become an AC. Since most schools are now using Java, C#, or VB.Net to teach, forget there being a percentage of students graduating college with a love of Haskell like there used to be.

    The upshot? I think Java, C#, and VB.Net can add these features, risk free. The simple fact is, the BCs won’t notice or touch them, and in general, the AC’s won’t bother. .Net generics *are* clumsy, but they are still better than the alternative (collections of objects and constantly casting, or inheriting a collection simply to layer in type enforcement). Likewise, .Net closures are fairly miserable, but they are still better than the alternatives, particularly delegates (bleh). What this means is that the ACs forced to write business code get a few toys, crippled as they may be, and the BCs are not forced to use a whole new language that is beyond their ken just because Paul Graham wrote a really good travel ticket system in Lisp 15 years ago. 🙂

    In all seriousness, I think that there is an AC hidden inside 80% (or more) of BC’s, but in business, growing employees must take second place to getting projects done with available talent & skillsets, regardless of how inefficient they may be. Trying to train staff in a whole new system contains too much risk for the average manager’s stomach, including my own. When the Big Boss is breathing fire over a late project, being the person who decided, “hey, let’s try spending 2 months training the team to use Ruby and since that should save 3 months, we’ll be ahead of the game” is typically not a move that keeps you employed. 😦

    J.Ja

  13. I came to realize just today that a bunch of comments to my posts were getting caught in the spam filter. I went through and approved them. Sorry if you thought I deleted them, especially to you, Justin James. There were about 3 of your comments in there! I hope I didn’t miss any. Periodically the anti-spam system just clears out the queue of messages it considers spam, unless I happen to check it first and “save” some of them. I haven’t looked at it for a while. I’ve been too busy.

    Now, to address some comments.

    @let_it_be:

    I think you missed the point of my post. Yes, I criticized Java, but more importantly I criticized the culture that created it and surrounds it. Where I pointed to an end to Java, I was really making a prediction. Nothing more. I did not advocate for Java to be “killed off”. I just said I think it does not have an indefinite future, because its architecture, and the expectations of the community around it were set from the beginning, and it’s too limited.

    I did bring Smalltalk into the discussion, but only as an example to contrast against. I tried to shy away from anything that implied that I was advocating the use of Smalltalk. I guess I failed, because I guess you thought I was doing that. I apologize. That wasn’t my intent. I admit that I am partial to Smalltalk, but that’s just a preference. I’m not trying to sell anything. Really what I’m trying to do with this blog is challenge people to ask themselves, “Is this really the best we can do?” I also hope to expose people to a different perspective that will open some eyes, for those who are willing to hear the message, to a wider array of possibilities in this field.

    re: people’s careers

    I think whether people’s careers are at stake has more to do with whether the people in this business can tackle computing problems effectively, no matter what tool/language they’re using.

    @Justin James:

    I know of what you’re talking about. I wrote a post about it back in November, though I don’t put you in the same category as the guy I criticized, Steve Jones. I believe he’s an enterprise architect and he was saying that he had no use for dynamic languages, because there were only a rare few people who knew how to use them well. He said a lot of the people he hires are “muppets” who don’t even know how to read API docs. He said that IT development, “is a discipline for the masses and as such the IT technologies and standards that we adopt should recognise that stopping the majority [from] doing something stupid is the goal, because the smart guys can always cope.” Where I criticized him is he recognized that the methodology he was using wasn’t that enlightened, but he didn’t care. He basically endorsed the status quo. I don’t mind pragmatism, dealing with the hand you’re dealt, but I’d like to see some introspection, with people asking the question, “Okay, this is the way it is now, but couldn’t it be better?”

    Mark Guzdial has been saying that increasingly computer science curricula are geared towards turning out “software laborers”. Apparently software architects like it this way. They get to take care of all the design details, spec it out, and the programmers are expected to just translate the spec into code. I think I read in one of his posts a software architect saying, kind of tongue in cheek, but being quite serious, “We’re trying to take the fun out of programming.”

    I don’t hold much against the “software labor” that’s getting hired. They just think they’re doing what they’re supposed to. I think a lot more people could be skilled developers if they got the right education. It’s just that the schools aren’t teaching this stuff. So this is what we get. In addition, from what I read, it sounds like the majority of programmers now are not CS graduates. So they’ve read up on how to program in a particular language, and they get hired that way, none the wiser.

    I think you’re right about the ratio of AC’s that could be hired compared to BC’s, and its potential cost effectiveness. It seems to me that if businesses understood this they would demand this arrangement, but it may be that they just find the current scheme of things more cost effective from the standpoint of being able to hire “cheaper per head”, so to speak. Since a lot of companies don’t understand IT real well, morale is probably not that high among the staff. Another reason, related to this, may be that they don’t want to have indispensable programmers. They have turnover, so it’s probably just better in their eyes to have a large pool of people who can understand a relatively simple set of skills. They probably think this lessens their risk. If people quit, or they have to lay people off, the impact is lessened. Plus, it’s easier to outsource their jobs if it comes to that. If this is the thinking, I still think it’s foolish for reasons I’ve already talked about.

    Plus, in my experience, anytime you lose a programmer, even if they’re using a commodity language, you lose knowledge. There are lots of bits of knowledge that the programmer who leaves had about the system being developed, about the customers, about the customers’ business, that’s gone. The new people will have to learn all that stuff from scratch. I’ve never worked for a place where all I did was translate a spec into code. I’ve had specs I’ve followed, but it was basically just a set of requirements, like, “This transaction must do X, Y, and Z.” It was up to me to implement the transaction logic. It wasn’t spelled out for me down to the littlest details. I also had contact with customers, so I became somewhat familiar with their specific needs.

    I’ve said this before, but I think a part of the problem is the dynamic language community itself. It’s a bit like some in the open source community (Unix, Linux in particular). They make learning this stuff a “black art” by basically providing zero help to beginning programmers who are interested in learning about it. I’d call Ruby an exception to that. They have tried to reach out and bring in newbies. The Smalltalk community is OK on this score. There are some who are welcoming, and others who choose to exclude. The Lisp community has had this problem for years, though that’s changing a bit.

    Part of it is that once some people in the community have acquired the knowledge, they want to keep it for themselves. They don’t want to share it. They’ll actually say, “It’s my competitive advantage, and I don’t want to give it up.” Another aspect I hear about every once in a while is they don’t give much back to the community either, even though there are open source avenues. They keep a lot close to the vest, again, because they don’t want to give up their advantage.

    I think what you see in the DL community is primarily what I’ll call “entrepreneurial programmers”, like Paul Graham. They have their own small business, or they have the connections to cash they need so that if they want to do something, they have that, and they can implement their software however they want. I think it’s sad that it’s only the entrepreneurs who can find the opportunities to use the best technology around. Like you’re saying in your business, it’s too much trouble and risk to try to get talent that can write software that way. So anyone who wants to pursue this in the private sector almost certainly HAS to go it alone.

  14. @Mark

    > Let me get something out of the way first, with all of the languages I’ve worked with, they’ve all cared about “types” to a certain extent. For example in Smalltalk it cares about the class of the object you’re holding on to.

    I was being careful with my use of the word “type.” Informally, Smalltalk classes or Perl datatypes might be called “types,” but formally, types are static properties of expressions, not dynamic properties of values. I think it’s an important distinction, but others may disagree.

    > Object in Java is a base class with few methods. As you say, if you get an object out of a container this way you have to type cast it before using it, which complicates things, because it’s possible to cast wrongly.

    I am speaking in a very particular sense, which is that if you write Java in a style in which every type is Object — that is, you declare every variable as Object, every method takes Objects and returns Object, etc. — then you can macro-express an untyped (aka “dynamically typed”) language like Smalltalk. Yeah, it’s a pain to write the casts. You can mitigate that somewhat by declaring *every* method in its own interface of the same name, and then every class with that method needs to implement the interface. Then, when you want to call a method on an object, you downcast to the interface for that method, and call the method. Do you see how a Smalltalk program could be translated (possibly mechanically) into a Java program written in this silly style? When someone says that Object gets you dynamic typing, this is essentially what they mean.
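    A minimal sketch of that encoding might look like this; the names Greets, Greeting, and EncodingDemo are invented purely for illustration:

```java
// One interface per method name, as in the encoding described above.
interface Greets {
    String greet();
}

// Any class that understands the "greet" message implements its interface.
class Greeting implements Greets {
    public String greet() { return "hello"; }
}

public class EncodingDemo {
    public static void main(String[] args) {
        // In the untyped style, every variable is declared as Object.
        Object o = new Greeting();
        // "Sending a message" means downcasting to the method's interface
        // and then calling it; a ClassCastException here plays the role of
        // Smalltalk's doesNotUnderstand.
        String result = ((Greets) o).greet();
        System.out.println(result); // prints "hello"
    }
}
```

    The run-time cast check is exactly the dynamic check a Smalltalk message send performs, just written out by hand.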

    > My understanding from when I took compilers in college was that static typing was an optimization technique.

    This is probably how some *compiler writers* think of static typing, but it’s likely not how most programming language theorists think of types. Yes, the optimizations you mention are done more easily in a typed setting, but a good compiler will be able to do them fairly well even in an untyped language. For example, the Scheme compiler that I’ve worked on in the past does something called “representation inference,” where it tries to figure out if certain variables might contain only particular forms of data. For example, if it can determine that two variables m and n will contain only fixnums, then it can compile the expression (+ m n) to a fast inlined addition, rather than jump to a subroutine that checks whether they are fixnums, bignums, rationals, or floats, performs conversions, etc. As a result, this Scheme compiler produces object code that generally runs significantly faster than Java, approaching C.

    Types were invented by Bertrand Russell about a century ago, and they certainly had nothing to do with optimizing compilation back then. Types as we use them in the modern sense were mostly formulated by logicians such as Alonzo Church back in the ’30s, and again these didn’t have much to do with optimization. The first FORTRAN compilers used types for optimization, but types in modern languages like Haskell are more like Alonzo Church’s types than like FORTRAN types.

    > I have not run into a situation yet with languages I’ve used where the types have been able to tell me what a function does.

    This is possibly because you haven’t used a language with a sufficiently expressive type system. Of course no one would have this experience with Java types, which are fairly inexpressive (my initial complaint), nor with C types, which are more like suggestions than theorems. Languages like Standard ML, Ocaml, Haskell, Mercury, and Scala have types that are expressive enough to give you a pretty good clue.

    In particular, some of these languages have a property called “parametricity,” which means that each type generates a theorem about any function having that type. For example, consider the Haskell type “[a] -> [a]”, which means a function that takes a list of values of *any* type, and returns a list of values of the same type. Parametricity then tells us that while such a function may freely permute, duplicate, or drop elements of the list, the resulting order absolutely cannot depend on their values. Some functions with this type include tail, which drops the first element of the list, and reverse, which reverses the list. Both of these functions operate without looking at any of the list elements, but only at the list’s structure. A function like sort cannot (in Haskell) have this type, because sort requires inspecting and comparing the elements, and the free theorem for the type [a] -> [a] says that the elements cannot be inspected. So in the case of the type [a] -> [a], it doesn’t tell us exactly what the function does — we still need a comment — but it tells us an awful lot about what the function doesn’t do, which is useful for writing correct code.
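    A rough Java analogue of the [a] -> [a] idea, with the invented name ParametricityDemo (note that Java's generics are weaker than Haskell's: instanceof checks, casts, and reflection can all break parametricity, so this is only an approximation of the free theorem):

```java
import java.util.List;

public class ParametricityDemo {
    // Java analogue of the Haskell type [a] -> [a]. Because the type
    // parameter A is fully abstract here, the body can rearrange, drop,
    // or duplicate elements, but it cannot inspect or compare their
    // values, so the output order cannot depend on them.
    static <A> List<A> tail(List<A> xs) {
        return xs.subList(1, xs.size());
    }

    public static void main(String[] args) {
        System.out.println(tail(List.of(1, 2, 3))); // prints [2, 3]
    }
}
```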

    > Why do programmers find well done comments to be so wonderful if the code is so self-documenting?

    Haskell and ML programmers often find comments to be helpful, but much less essential than Perl, Smalltalk, and Java programmers. One nice thing about types is that unlike comments, the types can’t get out of sync with the code, since they are checked by the machine.

    > I’ve seen plenty of code in statically typed languages that made my head hurt. I pleaded for inline comments.

    If you mean C or Java, then I’m not surprised, though they tend to make heads hurt in profoundly different ways.

    If you want to see what types can do for you (not for the compiler), I’d suggest learning a richly typed language such as Haskell or Ocaml. You might be surprised by how much types can say, and by how little they get in your way. To a veteran Haskell programmer, types are a tool to make interfaces more precise and to make the compiler check the properties we want; if they help the code run faster, too, we don’t complain, but that’s not the point.

  15. It’s interesting how many people come back to Smalltalk as a comparison to how OO languages should be. Well, I am (or was) a Smalltalker, too, back at university and the early years in business. Since Java got hyped mostly by marketing people, I had to move, too, so Java became my second “mother tongue”.

    I understand that people are anxious about changes in the Java language. Most arguments against such language changes are a bit bogus, though. While people mostly argue to keep Java simple, I appreciate the fact that it became safer before rewritten code goes into production. Smalltalk is a clear and open language, but error-prone if the developer does not take care about what he or she does. And from what I see coming from “business developers”, as you called them, I doubt Smalltalk would be a good tool for big commercial projects. Smalltalk urges a developer to think harder about APIs, much more than Java, where types etc. enforce specific usage.

    Regarding closures in Java, where, unfortunately, people often only discuss BGGA: as a co-author of First-class Methods, I usually say that there may and should be different flavours of closures taken into consideration for Java. The main reason CICE does not work for me is that it is yet another additional feature bolted onto Java. Once introduced, it will soon be misused (e.g., implementing Closeable only to be able to use ARM) and calls for additional features will rise, as other cases are not covered properly. A good indication is the introduction of for-each, which is restricted to classes implementing Iterable, and many developers would like to have a similar construct for Maps.
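
    For instance, iterating a Map today means going through entrySet() (plain Java 5 here, nothing proposal-specific; the method name render is just for illustration):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class MapLoop {
        // for-each only accepts arrays and Iterable, so a Map must be adapted
        // through its entry set; there is no direct `for (k, v : map)` form.
        static String render(Map<String, Integer> map) {
            StringBuilder sb = new StringBuilder();
            for (Map.Entry<String, Integer> e : map.entrySet()) {
                sb.append(e.getKey()).append("=").append(e.getValue()).append(";");
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            Map<String, Integer> ages = new LinkedHashMap<String, Integer>();
            ages.put("alice", 30);
            ages.put("bob", 25);
            System.out.println(render(ages)); // alice=30;bob=25;
        }
    }
    ```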

    One of the strengths of Smalltalk is its very small set of language constructs paired with enabling API-driven constructs by multi-piece method names and blocks. Of course, in Smalltalk, a developer gets used to this programming idiom, so it seems natural for him or her, while a Java developer is used to a rather imperative programming style having some constructs for decisions and loops. One surely could replace all constructs in Java by changing the language in some aspects, without changing the look of the language at all. Opening these changes to developers might change the future look, though, being more API driven.

    Another argument driving me crazy is “if Java does not suffice, pick a language you like better”. I am working for a company developing a product for big enterprises. There is no chance in about 10 or more years to be able to change the programming language. And I doubt it is that much different in a lot of other software development companies. It was hard enough to step up from Java 1.4 to Java5, which will force all our customers to at least move to Java5, too. As long as there is no multi-language eco-system around and no money to employ multi-language developers (as people tend to change jobs), I cannot see an option here but to improve the tools we’re currently using.

  16. @rieux:

    I appreciate your comments a lot. They help clarify what you were saying. What you call “types” look like a mixture of what I call “types”, and the concept of contracts. I will have to reconsider my notions of types. Thanks!

    @Stefan Schulz:

    As for Smalltalk being the example of how OO languages should be, well it was one of the originals. I like it for its expressiveness and power, which I don’t believe is matched by any other OO language, with the possible exception of Ruby. I agree with you about it forcing developers to think harder about APIs. That’s still a challenge I’m trying to tackle with it.

    As for Smalltalk being error-prone, I think that was the reason unit testing as it’s now practiced in TDD was developed. The development and practice of TDD came out of the Smalltalk community. When I first got into Smalltalk, I listened to a podcast on Ruby and one of the things they emphasized as essential was unit testing. It’s needed much more in a dynamically typed language than with a language like Java, though it’s still useful in that environment.

    As for closures in Java, I see what you’re saying about using them to modify the language, which is something that only advanced developers would get into, but Bloch was also talking about using them in conjunction with concurrent processing and resource management. They would come in more handy for concurrent processing. Isn’t this something that the average business developer is going to have to touch on in the course of doing their job?

    I appreciate pragmatism in the sense of recognizing what you have to deal with. I would not argue for creating greater disruption in a system, either, at least not without being confident (backed by past experience) that short term disruption would lead to long term stability and an increase in productivity. Sometimes painful decisions have to be made to that effect. Most of all, though, if such a change is being considered, the “programming culture” has to be ready for it in the sense of being mentally prepared to use the new model. Just throwing technology at a group that doesn’t understand it is going to create more trouble than it’s worth.

    The only thing I was arguing is I see the situation with Java as growing worse, not better, in terms of being able to tackle more difficult problems that are coming down the pike. It doesn’t have to do with the fact that Java got picked over Smalltalk or something else. I didn’t want to make this about Language A vs. Language B. It really has to do with the fact that with rare exceptions, we don’t evaluate architecture in our field at all, and it bites us in the end every time. It’s about time we learned this lesson.

    The reason you’re feeling trepidation about any sort of change (though I’ll take your word for it that it comes out of a sense of reality) is that all of the knowledge you’re automating in your system is encoded in one system, which I contend doesn’t scale very well in terms of adapting to new problems. I mean, Bill Joy argued for including closures, continuations, and parameterized types from the beginning, back when Java was just an in-house project called Oak, but he was ignored (see history in “The Long Strange Trip to Java”). My understanding from what Alan Kay has said about it is they were even going to make it a late-binding system, but it was forced out the door before that could be developed. I think it’s more a case of “You’ve made your bed, and now you have to lie in it,” and make do the best way you can. I guess that’s what you’re saying, but I suspect the introduction of closures in Java could end up being more disruptive than you think. It’s just a hunch. I think that unless type inferencing is introduced along with it, it’s going to force programmers to create code that’s going to be a struggle to comprehend.

    When you said “It was hard enough to step up from Java 1.4 to Java5”, were you talking about retraining your employees? I was going to ask, why not use another language that embodies higher level concepts, which is also compatible with the JVM ecosystem? But maybe this was just what you were talking about. If so, wouldn’t closures present similar challenges?

  17. Surely, concurrent processing also is an important topic for closures. As we ship a client-server application, it goes hand in hand with communications issues. I was just writing about language changes as a way to catch up with Smalltalk’s capability of doing without built-in language constructs.

    The change from Java 1.4 to Java5 was not that difficult wrt. retraining. The main reason is that while Java5 introduced new features, the existing code base for rewrite (not reimplementation) continued to work with no change. Developers as well as the code continually adapted to those new features, so the change in code and the learning curve ran quite parallel (of course, having some ACs who helped in learning). Challenges will be similar for any future language change but are not that different from introducing new libraries including their API (which often seems common but usually is domain-specific, too).

    The problem wrt. the eco-system is not that languages providing the handy functionality don’t exist, but that there is no tooling environment that helps working with multiple languages in a larger software project. And, of course, it seems easier to teach developers new features of a language than to teach them one or more new languages, so one does not lose very important knowledge when those who know a specific language decide to leave.

    Thus, the idea of First-class Methods is not only to introduce closures, but to stay with the “look and feel” of Java and limit it to a level of functionality we think is necessary to solve most of the problems on the table. While it may be difficult to write proper APIs and applications making use of closures, it should be easy to apply, read, and understand the usage part, which is what most developers will have to deal with, in my opinion.

  18. @Mark:

    I wonder if Java would have been less accepted initially if it had closures, parameterized types, and first-class continuations from the beginning. It seems to me like much of its appeal was in its simplicity. From a corporate perspective, having a language in which the programmer is only responsible for translating a spec, and there aren’t many degrees of freedom, probably seems like a good thing.

    Have you read Guy Steele’s talk “Growing a Language”? Here: http://www.brics.dk/~hosc/local/HOSC-12-3-pp221-236.pdf — toward the end, he says:

    > The Java programming language has done as well as it has up to now because it started small. It was not hard to learn and it was not hard to port. It has grown quite a bit since then. If the design of the Java programming language as it is now had been put forth three years ago, it would have failed—of that I am sure. Programmers would have cried, “Too big! Too much hair! I can’t deal with all that!”

    My usage of “types”, for what it’s worth, is similar to what you would find in papers from PL conferences such as POPL or ICFP. It’s not the common usage that you would see on Reddit. The reason I avoid that usage is that most languages have some amount of (static) typing going on, and all languages allow various amounts of dynamic “typing,” with various levels of convenience — so I think it’s worth not thinking of the two as exclusive alternatives.

    Contracts mean different things to different people. I’m most familiar with the contracts in PLT Scheme (since the lab I work in is where PLT Scheme is developed), but I don’t know how similar those are to what other people mean by contracts. Do you mean Eiffel contracts? Part of the goal in PLT Scheme is to allow you to express interesting invariants in the contract language, and then when a contract is violated, it can pinpoint the module that is to blame. There has been some work on using automatic theorem proving techniques to remove runtime contract checks when possible, or to report definite contract violations statically, but none of this work, as far as I know, has made it into a general purpose implementation.

  19. Regarding the initial functionality — wouldn’t this work as well while being less crufty?

    public String readFile(File someFile) throws IOException {
        BufferedReader bufferedReader = new BufferedReader(new FileReader(someFile));
        try {
            StringBuffer buffer = new StringBuffer();
            String line;
            while (null != (line = bufferedReader.readLine())) {
                buffer.append(line);
            }
            return buffer.toString();
        } finally {
            bufferedReader.close();
        }
    }

  20. @rieux:

    “I wonder if Java would have been less accepted initially if it had closures, parameterized types, and first-class continuations from the beginning. It seems to me like much of its appeal was in its simplicity. From a corporate perspective, having a language in which the programmer is only responsible for translating a spec, and there aren’t many degrees of freedom, probably seems like a good thing.”

    I think continuations could’ve been included in Java and no one would’ve noticed, because things like a call stack and functions that return values to callers, not to mention the semantics of non-local returns in closures, are just special cases of continuations. The infrastructure for continuations could’ve been included, but just used for the special cases of a stack and value-returning functions at first, for the language programmers would’ve seen. I don’t think the look or feel of Java would’ve needed to be different than what it was. Later as computing problems required more sophistication, raw continuations could’ve been surfaced more.
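
    The “special case” point can be made concrete even in plain Java (my own illustration, not from the comment; the Cont interface is made up for the example):

    ```java
    public class Cps {
        interface Cont {
            void resume(int value);
        }

        // Direct style: `return` implicitly hands the value to the caller's
        // hidden continuation, which the runtime manages via the call stack.
        static int doubleIt(int n) {
            return n * 2;
        }

        // Continuation-passing style: the continuation is an explicit argument,
        // so ordinary call/return is just the special case where the runtime
        // supplies k for you.
        static void doubleIt(int n, Cont k) {
            k.resume(n * 2);
        }

        public static void main(String[] args) {
            doubleIt(21, new Cont() {
                public void resume(int value) {
                    System.out.println(value); // prints 42
                }
            });
        }
    }
    ```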

    I think Java programmers would’ve taken to parameterized types just fine. C++ had them at the time, and those were the developers Sun was courting with Java. I remember a few years after Java came out complaining that it didn’t have them. I think it was a matter of market timing. They said as much in the article I referenced in my previous comment. Generics had been discussed in the Java and .Net communities for a few years before they finally appeared. It didn’t sound like it was a matter of whether they would screw up programmers. When I read about them at the conceptual stage, it sounded like it took so long because it was a challenge to implement them well. Sun jumped the gun and hacked generics into the language to get this feature out quickly. Maybe the implementation has improved since then. From what I’ve heard .Net has a complete implementation for them, but this feature didn’t come out until 2005.

    I agree closures would’ve been weird for programmers when they first saw Java. I felt like I gave both sides of the issue in my post. I personally think they’re alright in Smalltalk, but Alan Kay took a different view, saying that they made Smalltalk more difficult to learn for beginners. Interesting you’re making the same argument for closures in Java. Kay was probably right, though I didn’t find them that hard to learn, at least from the standpoint of applying them.

    There’s been discussion on the Squeak developers mailing list about how to implement concurrent OOP. There have been some interesting solutions proposed. It’s tempting to think that since Smalltalk already implements message passing, that all it would take is a way to pass messages in an asynchronous fashion. This doesn’t take synchronization into account though (essentially a “join” action). The solutions that have been proposed are more along the lines of the fork-join pattern, but they feel like a natural part of the existing language (with synchronous and delayed message passing).
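
    In Java terms the fork and the join are two separate, explicit steps (a minimal sketch with raw threads; Smalltalk processes would play the analogous role):

    ```java
    public class ForkJoinSketch {
        // Fork two asynchronous "message sends", then join on both replies.
        // The join is what plain asynchronous message passing alone doesn't
        // give you for free.
        static int sumInParallel() throws InterruptedException {
            final int[] results = new int[2];
            Thread left = new Thread(new Runnable() {
                public void run() { results[0] = 21; }
            });
            Thread right = new Thread(new Runnable() {
                public void run() { results[1] = 21; }
            });
            left.start();  // fork
            right.start();
            left.join();   // synchronize: block until both replies arrive
            right.join();
            return results[0] + results[1];
        }

        public static void main(String[] args) throws InterruptedException {
            System.out.println(sumInParallel()); // prints 42
        }
    }
    ```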

    re: contracts

    I was thinking of them in the Eiffel sense.

    @Stefan Schulz:

    “The problem wrt. the eco-system is not that languages exist providing the functionality, which would come in handy, but that there is no tooling environment that helps working with multiple languages in a larger software project.”

    Hmm. I thought Eclipse had modules available for JRuby and Scala, so it would support those languages. I haven’t heard of one for Groovy, but I’m sure there’s one now. I’m not that hot on tool-oriented development anyway. IDEs are fine, but once the tool starts trying to design the app for me, that’s where I draw the line.

    “Thus, the idea of First-class Method is not only to introduce closures, but to stay with the “look and feel” of Java and limit to a level of functionality, we think is necessary to solve most of the problems on the table. While it may be difficult to write proper APIs and applications making use of closures, it should be easy to apply, read and understand the usage part, which is what most developers will have to deal with, in my opinion.”

    Maybe I misunderstand you here. How do you separate writing applications with closures, which you say is hard, and “apply, read and understand the usage part”, which you say is easy? Assuming you have developers using closures, aren’t they going to at least be applying closures in their application code, if for no other reason than to make API calls?

    I haven’t looked at First-Class Methods. You say they preserve the “look and feel” of Java. Do they have the same semantics as regular Java code? That was one reason I changed my position on Bloch’s presentation. I thought the change in semantics would be too much for most Java developers to handle, plus the idea of lexical scoping, and “passing a function around”.

    Before I got into all this stuff, I heard that anonymous functions were coming in .Net 2.0. I read some articles on them and thought, “Okay…what good is that?” It wasn’t until I got back into Smalltalk after being away from it for years that I finally realized, “Ooooooh! So THAT’S what you can do with them.” The reason being that while I had used closures when I had Smalltalk in college (briefly), I didn’t realize that’s what I was using. I thought of them as similar to code blocks, like you’d find in Pascal or C, with some different characteristics (they could receive parameters). Linq’s lambda expressions (closures) made sense to me when I first saw them, but that was only because I had learned about lambda expressions in Lisp a couple weeks before then. I think getting exposed to other languages, plus some of the knowledge from the community that supported them, explained features such as this in a full, conceptual way that really helped me grasp them better than if I had just tried to understand them from the perspective of MSDN documentation.

    These language features also seemed to go well with the language that contained them. They felt more natural. Whereas anonymous functions in .Net went against the grain of the dominant language and API design that existed in it.

  21. Mark –

    Yeah, I am not against dynamic languages in the slightest; in fact, I personally *love* them. The problem is the other people on any given team. Unless you have a “rock star” team, or a lot of people with exposure to them (for example, people who went to a traditional CS school) and/or functional languages, it will be a mistake for business coding. After all, giving Ruby or Python or Perl or Lisp or whatever to someone used to and trained in statically typed languages is simply going to result in code that looks like Java or C# with a slightly different syntax.

    That is actually why I have no problem with things like Linq, closures, anonymous methods/functions, etc. within a language where it does not feel appropriate. The folks who don’t want to (or don’t know how to do so effectively) can ignore them, and the folks who want them, like me & you, can use them. 🙂

    That being said, I really want to dig into IronRuby. I have personally stuck with .Net stuff for the last few years, because I like Visual Studio as an environment (so long as I avoid the data binding and other “we write your code for you” type junk), and because I like that the .Net Framework encapsulates so much of the drudge-work coding for me. It is really tough for me to sit there with a blank copy of Notepad in one window and a command line in the other with “interpreter name_of_file_I_am_editing” in the history and a bunch of println() calls to “debug”. For me, anything like Ruby, Python, Spec#, Lisp, Smalltalk, etc. is a non-work function (sadly), and since it is competing with so many things, I need to find ways to efficiently learn them. I miss being single and childless sometimes. 😦

    J.Ja

  22. With Eclipse, yes, there are plugins. But I haven’t read about them playing together nicely. To me, it seems difficult enough already to handle a project that is both Java and Web-App. I think IntelliJ integrates those better, composing a project of modules that have flavours. The main problem is to make different languages work together as seamlessly as using one language does nowadays, including using APIs from other languages, testing frameworks, etc.

    "Maybe I misunderstand you here. How do you separate writing applications with closures, which you say is hard, and “apply, read and understand the usage part”, which you say is easy?"
    To my understanding, most developers will only use APIs that have method type parameters. Writing a closure in most cases will be to wrap your listener code, simple threadable code, or such. But maybe I am too naive here.

    "I haven’t looked at First-Class Methods. You say they preserve the “look and feel” of Java. Do they have the same semantics as regular Java code?"
    Well, we try to make it as much Java as possible. FCM proposes “inner methods” that close over their environment; return returns from the inner method; no break or continue stuff, but local variable access. Quite like inner classes, but without boilerplate and interface binding.
    We have a separate proposal on control abstractions, which allow API-based control structures. In contrast to inner methods, they have to follow the control flow and allow for break, continue, and return from the hosting method.
    (FCM proposal)
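
    For contrast, here is the anonymous-inner-class ceremony that inner methods aim to strip away (plain Java 5; the IntTransform interface is made up for the example):

    ```java
    public class Boilerplate {
        interface IntTransform {
            int apply(int x);
        }

        static int applyTwice(IntTransform f, int x) {
            return f.apply(f.apply(x));
        }

        public static void main(String[] args) {
            // Several lines of ceremony (new, interface name, method header,
            // body, braces) to pass a single expression: this is the interface
            // binding and boilerplate an inner method would drop.
            IntTransform addThree = new IntTransform() {
                public int apply(int x) {
                    return x + 3;
                }
            };
            System.out.println(applyTwice(addThree, 0)); // prints 6
        }
    }
    ```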

  23. @Justin James:

    I understand what you mean. I think the workplace is not the place to “fight out” this issue. Business has more important things to worry about, though I think what would help is if business interests would put pressure on schools to teach this stuff. IMO businesses need it. I think if this were done it would grow the pool of skilled developers. There is the danger of alienating developers with concepts that are too advanced. I wonder how that barrier could be surmounted. It seems to me that one reason why these concepts feel too hard to most is that computer scientists, presenting theory, are the only ones talking about them. They can be explained in an easier-to-understand way. I’ve seen it done.

    Like others have said, even if you have a “rock star” team, what if some or all of them leave? Then you have a bunch of average coders who can’t understand the stuff too well.

    The base of highly skilled programmers needs to grow. I think this sort of thing needs to be fought out in the “training centers”–schools. People who write some of the technical books follow their lead. If they want to teach vocational skills, fine, but I’d like to see some “equal time” given to the more advanced stuff, just as they require with math–even though people will claim, “I’ll never use this.” What would be great is if they used books like “Practical Common Lisp”, for example. Theory’s fine, but show how this stuff can apply to stuff they’ll run into in the real world, because it works just fine with it. What I’d like to see is more cooperation between the business schools and computer science disciplines on this. That’s just a pie in the sky wish at this point. I’m not expecting it to happen spontaneously.

    I had the thought, “What if there were more books in the bookstores on the conceptual underpinnings of these advanced concepts?” I imagine that wouldn’t do a thing, except maybe for Paul Graham’s books, because he can actually talk about something real he did. Most people are instrumental reasoners, so if a book doesn’t address features that are already in the language they want to learn they won’t find it that valuable.

    I can understand a little what you’re saying about “wishing to be childless and single” sometimes. That’s the life I’ve been living. It does have its benefits, but I think those who have a family life live a fuller life than I do. I’ve heard the “childless and single” wish from other fathers my age, but I don’t think they’d give it up for anything. It’s just one of those things in life of wanting to have your cake and eat it too, sometimes. 🙂

    I remember listening to a .Net Rocks! episode a while ago where the host talked a little about how raising kids is a little like building a system, with significant differences. He said when you’re building a system, you’re doing all the work of “growing” it yourself. With kids, you take care of them, but they develop and grow by themselves. It’s interesting to watch that process, more so than trying to figure out how to grow everything yourself.

  24. @Stefan Schulz:

    Like I said, I can’t speak to the tool support, but I looked at Groovy a bit when it first came out more than a year ago. The unique thing about Groovy is its integration with the JVM environment. It can not only run as an interpreted language (so you can try out code without going through the edit-compile-test cycle), you can also compile Groovy code into a JAR file that can be run alongside normally compiled Java code. In addition you can program in straight Java in Groovy, with strict types and all. You can mix strict typing and dynamic typing in Groovy code. So it allows a migration path. People can gradually learn to use it. It doesn’t force you to abandon Java conventions cold turkey.

    When it first came out there were some limitations. It didn’t work with Struts. That’s all I remember. It did work with Spring and Hibernate.

    Like I said there have been concerns expressed about Groovy’s slowness. What I don’t understand (but I haven’t looked into it in any detail) is why people couldn’t just use the same “throw more hardware at it” strategy that’s been used with Java for years.

    The most impressive thing about it, given that it runs on a stock JVM environment, is that it’s a late-binding system. I’ve heard that you can update the Groovy VM while it’s running! I know of an early adopter of Groovy who’s used it to run his website, and he’s updated the Groovy VM about 4 times without taking down his site once. I wrote a post on Groovy here.

  25. @Karl: your exception-handling code seems to be verbose on purpose; why don’t you just write
    public void readFile(File someFile) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(someFile));
        try {
            String line;
            while (null != (line = reader.readLine())) {
                /* Do something */
            }
        } finally {
            reader.close();
        }
    }

    That’s as short as you wished, and as a bonus, you also have a clean way to tell the caller that something went wrong (namely by throwing the IOException).

  26. @Hermel:

    The example that Josh Bloch used was:

    static String readFirstLineFromFile(String path) throws IOException {
        BufferedReader r = null;
        String s;
        try {
            r = new BufferedReader(new FileReader(path));
            s = r.readLine();
        } finally {
            if (r != null)
                r.close();
        }
        return s;
    }

    so the instantiation of the BufferedReader and FileReader objects were inside the try block. My guess is this was in case anything went wrong with opening the file. The exception would be caught. Maybe there’s no difference.

    I should clarify something I said earlier. I mentioned Bloch’s example where he had to check for an exception on close(). I looked at this part of Bloch’s presentation again, and he only did this when he wrote code to copy data from one file into another. I’m still not sure why that’s necessary, but there was a different activity going on. He used it as an example of having to handle multiple resources. With the above example he’s just reading in the first line of a file.
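
    For the curious, the pre-closures shape of that two-resource copy is roughly this (my own reconstruction, not Bloch’s slide): one try/finally per resource, nested so that a failure closing one stream can’t prevent the other from being closed.

    ```java
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class FileCopy {
        static void copy(File src, File dst) throws IOException {
            InputStream in = new FileInputStream(src);
            try {
                OutputStream out = new FileOutputStream(dst);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    // close() declares IOException itself, which is why the
                    // multi-resource case needs the extra exception handling
                    // Bloch showed.
                    out.close();
                }
            } finally {
                in.close();
            }
        }
    }
    ```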

  27. “Business has more important things to worry about, though I think what would help is if business interests would put pressure on schools to teach this stuff. IMO businesses need it.” … “Theory’s fine, but show how this stuff can apply to stuff they’ll run into in the real world, because it works just fine with it. What I’d like to see is more cooperation between the business schools and computer science disciplines on this.”

    I agree; it’s back to the concept of risk aversion. The MBA schools teach that risk is bad. It opens you to loss, it opens you to lawsuits. A lack of profits is not viewed as being a loss, despite being logically and mathematically equivalent to an actual loss.

    As such, businesses are not going to fight this battle, but they should be putting some pressure on academia to produce graduates with the advanced skills *in a format that is useful to businesses*. For example, don’t teach closures as some academic junk, teach it as a way to separate design time and run time considerations, and how that relates to project planning.

    “I can understand a little what you’re saying about “wishing to be childless and single” sometimes. That’s the life I’ve been living. It does have its benefits, but I think those who have a family life live a fuller life than I do. I’ve heard the “childless and single” wish from other fathers my age, but I don’t think they’d give it up for anything. It’s just one of those things in life of wanting to have your cake and eat it too, sometimes.”

    Don’t ever think that you don’t have a full and/or complete life due to a lack of children and/or mate. The reality is, you do, it’s just full and complete in different ways. Look at the learning you’ve been doing, and then the writing too. You are learning more and growing so quickly in ways I am quite jealous of; even when I had the time, I rarely spent it in such a meaningful way.

    When I was single and childless, I could spend my time enriching my life through the pursuit of knowledge, personal growth, getting to know lots of people, etc. Now, my enrichment is through raising a child and trying to be a good father and husband (we’re not married, but who’s counting?). It’s simply focusing my efforts outwards instead of inwards. I worked with a guy who never was married or had children, but he did something like a “Boys Club” activity, which blew me away. He put more work into helping the children of strangers than most people put into their own kids. If you feel like you might be missing out, give that (or something like it) a try; who knows, you might help interest the next Paul Graham or Charles Simonyi in computers! For me, what I miss most is reading. I read “Shogun” in less than a week when I was childless and single. It took me months to get through “Whirlwind” (same author, therefore same level of difficulty, comparable length). I have half a bookshelf devoted to “books I intend to read” when most of my life, that was a pile of 1 book at most. The flip side is, for the first time in my life, I get to be completely selfless. I sacrifice my own interests for my son’s. I spent the day at the zoo with him, Chelsea, and my mother today. I’ve seen everything at that zoo, but I still had a great time because I spent it with him.

    “I remember listening to a .Net Rocks! episode a while ago where the host talked a little about how raising kids is a little like building a system, with significant differences. He said when you’re building a system, you’re doing all the work of “growing” it yourself. With kids, you take care of them, but they develop and grow by themselves. It’s interesting to watch that process, more so than trying to figure out how to grow everything yourself.”

    What is fascinating to me is having full direct and indirect control over the environment, but not the child himself. It’s like playing SimCity. Seeing his attempts to alter his environment (like training the dogs to meet his wants) is hilarious.

    J.Ja

  28. @Justin:

    Re: The MBA attitude is risk is bad

    Hmm. I have a friend who has an MBA. I don’t know how he manages things, but I did hear him quote someone once saying something like, “In today’s world the biggest risk is not taking a risk.” He did go to a good school though, so maybe he got a different education. I have heard of MBAs ruining perfectly good technology companies because they don’t understand the spirit of innovation. So what you say has some merit.

    Re: teach computing where it’s useful to business

    I think the “academic junk” is fine so long as that’s not all you teach. What you say brings back memories of what it was like transitioning from school to work. It took about a year for me to adjust to the fact that I couldn’t stick by principles. I had deadlines to meet, and the priority was to get the darn thing to work, no matter what. There were deadlines in school, of course, and an emphasis on it working (you didn’t get many points for effort), but there was also an emphasis on good design and readability. That’s what I saw lacking in the work world. I understand that’s a reality, though I wish it weren’t dominant in business all the time. I think it’s a big reason why systems become hellacious to manage after years of just throwing more code or another system on the “pile”.

    Re: leading a full life

    Thanks for that. I hope to gain perspective from what you say. Indeed this has been meaningful to me. For the first time in a long time I feel like I’m doing what is most comfortable to me. This has always been a part of me. I just pushed it aside because it wasn’t practical to pursue it, and I didn’t feel I had avenues for it either. The internet has helped A LOT. Some people have put forward the idea that one could actually get a decent CS education just from using Google to find educational material. There’s so much to find now that’s really good stuff, which I wouldn’t even know existed unless it was online. I would’ve had to have gone to a better university than the one I attended to find it the old way.

    Re: shelf with books “to read”

    Yep. Even with the time I have available to me, I have that same thing going on.

    “I spent the day at the zoo with him, Chelsea, and my mother today. I’ve seen everything at that zoo, but I still had a great time because I spent it with him.”

    That’s what I was talking about. Sometimes I’ve felt thankful that I’ve been single and childless, because if I had those responsibilities now, I’m pretty sure I wouldn’t be pursuing this stuff. I wouldn’t have the time or energy. It’s definitely a trade-off. Even so, maybe it’ll happen one day.

    I don’t know for sure yet, but I think what’s happening with me is I’m realizing that while I can do what I did before, I don’t know that I want to do that in the future. So this could mean a career change for me, not in the sense of leaving the tech field, but doing something different with it. That’s what feels like an uphill climb. Not only is it a rarified strata in the field (because most don’t look at this stuff), but it’s something totally new that takes some hard work to understand and get into.

