The future is not now

I found this post, by Steve Yegge, through reddit. In a lot of ways he’s saying what I’ve read Alan Kay say, only more bluntly: the current “state of the art” in the tech industry sucks. We build software like the Egyptians built pyramids. There are better hardware designs out there that have been ignored for decades, and the programming languages most of us developers use suck, but they’re the only ones that run efficiently on the current hardware. Maybe that’s not entirely true, but he’s in the ballpark. He even said the languages some of us like so much, like Lisp, Scheme, and Smalltalk, are just “less crappy” than the others. I don’t think Alan Kay would be that blunt, but I think he’d agree with the idea. Yegge’s main point is that “Moore’s Law is Crap.”

Maybe he’s been reading Kay, or maybe he came to the same conclusions on his own. I’ve quoted this article before, but it bears repeating. Here’s what Kay (AK) said in an ACM Queue interview with Stuart Feldman (SF) from 2005:

(AK) If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.

The languages of Niklaus Wirth have spread wildly and widely because he has been one of the most conscientious documenters of languages and one of the earlier ones to do algorithmic languages using p-codes (pseudocodes)—the same kinds of things that we use. The idea of using those things has a common origin in the hardware of a machine called the Burroughs B5000 from the early 1960s, which the establishment hated.

SF Partly because there wasn’t any public information on most of it.

AK Let me beg to differ. I was there, and Burroughs actually hired college graduates to explain that machine to data-processing managers. There was an immense amount of information available. The problem was that the DP managers didn’t want to learn new ways of computing, or even how to compute. IBM realized that and Burroughs didn’t.

The reason that line lived on—even though the establishment didn’t like it—was precisely because it was almost impossible to crash it, and so the banking industry kept on buying this line of machines, starting with the B5000.

Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea.

Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.
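
To spell out the arithmetic Kay is doing here (these are his figures, not mine, taking roughly the middle of his 40,000 to 60,000 range):

    50,000 / 50 ≈ 1,000

which is where the “factor of 1,000” lost to bad CPU architectures comes from.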

Finally, Kay said:

…so both Lisp and Smalltalk can do their things and are viable today. But both of them are quite obsolete, of course.

In a different interview, done two years earlier with O’Reilly at OpenP2P.com, he said of Smalltalk:

“Twenty years ago at PARC . . . I thought we would be way beyond where we are now. I was dissatisfied with what we did there. The irony is that today it looks pretty good.”

Something else Yegge identified that I’ve just begun to see is that there’s a consensus of complacency about all this, even today. Most people don’t care, and want nothing to change. They’re comfortable where they are, and they don’t want to have to learn anything hard.

Yegge focuses on parallelism as a problem that could be solved if our hardware and languages advanced. Right now I’d settle for a processor that could efficiently and natively run bytecode, even if it only did it well serially at first. That would be progress.

Anyway, I really liked Yegge’s post, even though it’s just a rant. He expresses in strong terms how I feel about our industry. He says he’s “nauseous” about it. I’m not that bad, but I know enough to get a bit depressed about it sometimes. He says a lot of what I think to be true.

An article that’s had an influence on me since I read it is Paul Murphy’s “IT Commandment: Thou shalt honor and empower thy (Unix) sysadmins”. In it he gives a history lesson on how the “data processing mindset” and science-based computing developed, and how even though they both use computers, one uses computers as an extension of bureaucratic control, and the other is actually focused on computing. He has a bias against Windows, so he can’t help but put a dig in against it, but the article made an impression on me as far as the overall concept.

At the root of all this is a fundamental confusion: data processing is not computing.

Data processing started with mechanical tabulators in the nineteenth century, developed into a professional discipline during the 1920s, and became a cornerstone of business operations during the late nineteen twenties and early thirties. That was the period, too, in which fundamental organisational structures (like stringent role separation), fundamental controls (like the service level agreement), and fundamental functional assumptions (like the emphasis on reporting, the focus on processing efficiency, the reliance on expectations management, and tendency to justify costs on layoffs) all became more or less cast in stone.

When computers entered this picture in the late 1940s they weren’t used to replace tabulators, they were used to control tabulators – and cost justified on layoffs among the people who had previously controlled batch processes. Thus the first assemblers were physically just that: controls enabling the automated assembly of jobs from card decks and the transfer of information from the output of one batch run to the inputs of the next one.

Science based computing had nothing to do with any of this and focused, from its origins in the mid to late thirties, on problem solving and the extension, rather than the replacement, of human ability. Thus when Atanasoff and Zuse dreamt of solving computational problems, Shannon applied computing to communications, or Newman used a Colossus in raid planning, none of them had the slightest interest in financial reporting or other commercial tasks.

That science community focus continues with Unix today – particularly in the BSD and openSolaris communities – but it’s been there from the beginning. Thus when Thompson and his colleagues first worked on Unix, they talked about forming communities.

Another way of looking at the different mindsets is that the data processing mindset has looked at automation as a way of freeing workers and making existing business processes more efficient. Rather than having people focused on boring, repetitive tasks, let the machines do them. This allows people to disassociate themselves from those tasks and focus on more creative activities. That’s a nice way of putting it, and I’m sure many people have realized that benefit. The thing is, it also causes distress for many, at least for a time. They can end up feeling worthless, because the job they worked so hard at can be done better by a machine. What’s also implied is that this more creative work probably won’t involve using a computer that much. There’s a clear separation of work functions: there’s machine work, and there’s work people do.

Science-based computing has had a different vision. Rather than automating existing business functions, computers should extend workers’ abilities. I think someone from this point of view would say that if you have workers that are doing boring, repetitive processes, there’s something wrong with your business process. That’s where the inefficiency is, not in the workers. It sees computers as being a part of enabling people to do and understand more, to become more active participants in the business process. It sees the potential of workers to grow in their roles in the company, to not be limited by a job title. They are not seen as computer operators. They are knowledge workers. They gain and work with knowledge as a part of their job, leveraging it for the benefit of the business. It’s not a matter of taking an existing process and automating it, but rather transforming the business process itself in light of what computers can do for the company. It sees the computer as a new medium, and as with any new medium, information is conveyed, managed, and manipulated in a different way. I think this is the reason Alan Kay has railed against the mindset of the “paperless office”, because it implies automating what used to be done on paper–an old medium. Computers are not a new form of paper! Though they can be used as such if that’s what we choose to do with them.

Looking at things the way they are now, what I’ve discovered as I look back over my career is that computer science is not in the driver’s seat when it comes to business computing, and business computing is where most of the computing jobs in the want ads are. In my experience, business computing is really about computer engineering and an Information Systems approach of process automation: connecting systems together and automating pre-established business functions, with computer science as a support function. It doesn’t matter if the company asked for someone with a CS degree in the want ad. In these environments CS comes in handy for those unusual situations when two pieces don’t quite fit together and you need to make them work with some “chewing gum and duct tape”, or when you need to create a technology function your company needs but no one has invented yet. Actually, the latter is the fun stuff…er, usually. Otherwise, all you’re doing is going through the motions of connecting “this” to “that”, making it look nice to the user, and making it flow nicely. That’s not to say these aren’t noble goals, or that they’re easy, but the science and technique involved in making this happen have nothing to do with CS as far as you’re concerned. The technology you’re using was created using CS; you’re using the product of that effort to do your job. I think the truth of the matter is that most people trained in CS don’t use most of the skills they were trained in. Up until recently, that was true for me as well. CS helped me do my job better, but my job was to support the existing technology.

Where I’m headed, I don’t know. All I know is I’m happy where I am. I’m learning about Smalltalk. I feel drawn towards Scheme as a means to update my computer science knowledge. I was trained in computer science, but I’ve come from a business computing background in my professional life. That’s what I’ve been focused on for 7 years. I’d like to bring whatever new knowledge I learn back into that realm. Just because these technologies have tended not to be used there doesn’t mean it always has to be that way. I think they have something to contribute to business. It’s just a feeling at this point. I need to try something more advanced to crystallize in my mind where they can be used.

Edit 5/18/07: Following up on my commentary about typical IT business practice, I used to go along with some of the practices of the “data processing mindset”, just accepting them as the way things were. An idea that’s been around for a while is that programmers are replaceable. I think managers have gotten into this mindset with the advent of de facto programming language standards within companies. The idea is that a programming language becomes popular, and a lot of companies use it in their IT/software shops, which is supposed to make the workforce more fluid. Programmers can be laid off, or they can quit and move on to other jobs, with no disruption to the projects that need to get done. If the company needs to replace them, it can hire from a pool of developers who have the same skills as those who left. All it needs to do is spec out what skills it wants. In my experience this is a better idea in theory than in practice. What’s often neglected is the knowledge the employee has about how the company’s IT works, and/or how its customers’ IT works. The new people may have 5 years of experience with C# or Java, but that says nothing about how well they know your business, which is likely not at all. Usually they’re going to have to learn that completely on the job, and business knowledge is critical. After all, what’s being modeled in the software? Not C# or Java.

Another thing a lot of companies never seem to acknowledge is that a lot of technical knowledge is gained on the job. That’s just how it goes. Most of the time no new programmer is going to know everything they need to know to get every project done that they will ever do for the company. A lot of that knowledge will have to be learned along the way. So the idea of “replaceable programmers” is, in my mind, a myth. Yes, companies can and do try to implement this philosophy all the time, and managers and employees try to compensate for its fallacies in various ways, but it’s still disruptive, because the people with the knowledge are either pushed out or walking out the door. With IT having been around for as long as it has, you would think most companies would’ve figured out by now that this isn’t working as planned. Instead it sounds like it’s been getting worse, with more companies expecting new workers to have “X years of Y”, as if they’re hiring secretaries, assembly line workers, electricians, or plumbers.

The right questions to ask someone you might hire involve, among other things, gauging whether they can write software well, whether they’re familiar with the language and architectural paradigm you’re using, and whether they have what it takes to learn new skills effectively. These are all open-ended criteria, but they’re more realistic about the actual goals than asking whether someone has 4 years of C# or Java, HTML/CSS, Javascript, XML, and (the kitchen sink) experience.

4 thoughts on “The future is not now”

  1. Good post. I’ve been talking to people at work about this all day. There’s a usual pattern here: we start studying CS because we like the challenge and the abstract nature of it. We keep doing it because everyone says “that’s great you’re doing CS, computer jobs pay well.” Then we get into the workplace and find out it’s nothing like the reason we started doing it. I’m still looking for a solution myself.

  2. @Aaron:

    Thanks for the comments. One of my first jobs out of college used my CS knowledge well, but this was only while we were innovating. Eventually we got to a point where we weren’t really innovating anymore, but just plugging technologies together. This still took quite a bit of work, but it took the fun out of it, because it was very repetitive.

    I think that CS is geared towards innovation, not building the same thing over and over again. The times when I’ve really enjoyed what I’m doing are when I’m basically inventing something new, or participating in that process. At that job I mentioned, my first project was to work on a script interpreter. Very challenging, and a pain to deal with (because of all the pointers, and working in DOS, ugh!), but the project itself was still cool. I learned some lessons and got a lot out of it. I had the opportunity to work on a few other projects like that there. I got that opportunity again a couple of years ago, working on a couple of projects for a different company, and I found out that prior to those projects I had become a “tool user”, not so much a programmer. I was leaning on the tool to do the design thinking for me, and sometimes it was leading me down a design path that led to problems.

    Re: “that’s great you’re doing CS, computer jobs pay well.”

    Huh. The last time I heard that working in this field paid well was in the booming 90s. I wouldn’t recommend it as a road to riches. It has its ups, and it has its downs, and the downs are a b__ch.

  3. Pingback: Work like an Egyptian… « Tekkie

  4. Pingback: Goals for software engineering | Tekkie
