The continuing research on the Antikythera Mechanism

[Image: Antikythera Mechanism, from Wikipedia.org]

I heard about this mechanism some years ago. At first the speculation was that it was an “ancient computer,” but no description of it was given. That was fantastic enough. “The ancient Greeks created a computer? Come on!” In the last several years it’s been described definitively as an astronomical computer. Pat_S over at tammybruce.com wrote a post on Christmas Eve on the latest research results. (Note: That site normally devotes itself to political topics. Just so you know.) There are a few great videos you can watch from there. The video over at Nature Magazine’s website is pretty interesting (there are links to it from Pat_S’s article).

The mechanism was discovered in a Roman-era shipwreck off the island of Antikythera in the Mediterranean. The ship was carrying treasures from the Greek world. It’s estimated the mechanism was made around 140 BC. At first it appeared to be just a lump of rock (the mechanism was encased in it), and it was pretty much ignored. Then one day the material split, and someone was able to see what looked like gears inside. Recently researchers started using sophisticated X-ray imaging on it, and they’ve been able to get a very detailed view of how the mechanism worked. They’ve even gotten detailed views of the writing that was inscribed on the parts.

The mechanism has a sophistication that has excited researchers. It’s way beyond what historians thought the Greeks were capable of. It has a compactness of design that was not matched by later, similar mechanisms, which only appeared an estimated 1,400 years after this device was made.

This is an exciting find. I would rank it up there with the discovery of the Archimedes Palimpsest, which revealed some things about Archimedes’s mathematical knowledge that were also quite stunning.

I recently wrote a post on the computers and designs that Charles Babbage created, and I said that when I was in junior high school he was the earliest creator of mechanical computers I had come across. What I did not say (though I talked about it in an earlier post) is that when I was in high school, I found out about an earlier mechanism, the Pascaline, invented by Blaise Pascal in 1642, though it was much simpler: it was an adding machine. In terms of the sophistication of computing devices, the Antikythera Mechanism, as far as we know, bests everything before Babbage, who began his work on automatic computers in 1821.

The Middle Ages have been an interesting period in history for me, particularly in how they contrast with the ancient Greek and Roman civilizations. The lesson we can learn from looking at this history is that progress is not necessarily linear and inevitable. Archimedes, it turns out, discovered some principles of Calculus 1,900 years before Newton, but that knowledge was eventually forgotten sometime in the Middle Ages. A story I heard about why this knowledge died out in the Western world was that Archimedes’s work was considered sacrosanct. He was “the great master” of mathematics, and no one was allowed to question his work, or try to improve on it. So his knowledge was just handed down from generation to generation, for centuries. It became dead, because it wasn’t presented as a “live” subject, something that could grow and evolve. Eventually it became devalued and forgotten. Newton rediscovered this knowledge of Calculus through his own work with mathematics about 500 years after it passed out of people’s knowledge base.

If you read about the works of Hero (or “Heron”) of Alexandria (10-70 AD), you’ll find that he discovered some basic principles of steam power and jet propulsion. Again, knowledge that was lost (or kept, but ignored) for 1,700 years, and then reacquired and advanced. I understand that the reason this particular area of knowledge was not advanced in society when it was discovered, at least by modern analysis, is that the Roman society Heron lived in (though Heron was Greek) had no social use for advanced steam power. Not that advanced technology was ever developed; there was simply no incentive to take what existed to the next level. For one thing they had slaves to do the work, and this was an institution that was useful to the Romans for maintaining their dominance over other cultures. It would take a modern, reformed understanding of political power and economics to make advanced steam power something that was useful and accepted by society.

It’s a humbling thing to realize that knowledge that could be advanced, even today, can be lost for hundreds of years, even if it is documented, and available for others to learn. It’s a fragile thing. It’s also kind of discouraging that knowledge is not always applied to societal progress, because the social/cultural environment is not compatible with it. It has to wait for a later age, and perhaps be forgotten and rediscovered.

4 thoughts on “The continuing research on the Antikythera Mechanism”

  1. “Not that advanced technology was ever developed; there was simply no incentive to take what existed to the next level. For one thing they had slaves to do the work, and this was an institution that was useful to the Romans for maintaining their dominance over other cultures. It would take a modern, reformed understanding of political power and economics to make advanced steam power something that was useful and accepted by society.”

    Cheap labor is the single biggest impediment to technical innovation. “Necessity is the mother of invention” cuts both ways! A computer is always competing with a person… the question is, “can you build the computer to be as accurate and fast as the person at a lower price than it takes to train the person?” Here in the South, the answer to that is “no” much more often than in other parts of the country, thanks to dirt cheap labor caused by high unemployment rates (even in good times), a lack of unions to keep labor prices artificially high, low education levels, lower expectations, and cheap land prices (which mean cheap housing). When I moved to SC, I was stunned by the way employers treated their workers; it was very much a plantation mentality, and they got away with it because workers had no alternatives… and this was during the “boom years”.

    Think of the advances in robotics; do you think they would have been possible without the cost of auto labor being so high, between the relatively high wages, pensions, and lifelong benefits? Probably not. We’re starting to see this in the IT industry too. Jobs that require large amounts of labor are either being offshored (make the labor cheaper) or automated, because even though both are expensive and risky changes, even low-end IT workers make enough money to justify the expense.

    One of the biggest changes we’re going to see is that “software development” will fracture into two distinct groups over time: the people building development tools, and the people using them. 150 years ago, construction only had one category of worker: laborer. Now, construction has two categories: the engineers who design and build machines like dump trucks, backhoes, bulldozers, table saws, nail guns, etc., and the people who use them. Alan Kay’s talked about this a lot, and you see people working in this direction… Charles Simonyi with Intentional Software, OutSystems with the Agile Platform product that I like so much… where some REALLY insanely smart people put together tools that do things the right way, so that it takes a lot less work to get things done.

    The approach by mainstream systems (.NET, Java, etc.) is doomed to failure. They let developers continue to do what they’ve always done (generate machine instructions represented textually) in a more efficient fashion (code generators, drag/drop IDEs, more abstract languages, libraries). But they don’t address the workflow. Web apps have security issues with things like SQL injection, cross-site scripting, etc. because the systems assume you know what you are doing. The paradigms still require you to be painfully aware of the underlying protocols (as we discussed recently, the REST system is moronic… do I write my desktop apps based on what functions the file system supports?). Instead, we need to be moving away from this. Why make it easier for the developer to work with a system that still requires them to be painfully aware of the intricacies of HTTP, HTML, etc.?
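
    To make the injection point concrete, here is a minimal C# sketch (the table, column, and method names are all made up for illustration). The first command happily splices user input into the SQL text; the second passes it as a parameter, so the input can never rewrite the query:

        using System.Data.SqlClient;   // classic ADO.NET provider

        class InjectionSketch
        {
            static void Lookup(SqlConnection connection, string name)
            {
                // Vulnerable: the user's input is pasted straight into the SQL
                // text, so input like  ' OR '1'='1  changes the query's meaning.
                // Nothing in the framework stops you from writing this.
                var bad = new SqlCommand(
                    "SELECT * FROM Customers WHERE Name = '" + name + "'", connection);

                // Safer: the value travels as a parameter, never as SQL text.
                var good = new SqlCommand(
                    "SELECT * FROM Customers WHERE Name = @name", connection);
                good.Parameters.AddWithValue("@name", name);
            }
        }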

    All of these developers focused on these issues are doomed in the long term, because they are essentially Neanderthals who are thrilled that they sharpened a stone to cut meat with, while their neighbors are learning about metal.

    J.Ja

  2. Hi Justin.

    I understand what you’re saying, that people who own businesses will always consider the cost of labor vs. the cost of automation, but automation is not always the best option when you’re considering the most effective way to get work done. I’m not saying it would be better to use a worker instead of a machine, but my feeling is that a lot of productivity is wasted by not understanding how humans can leverage their use of a computer to be more effective.

    I’ve heard consumers complain quite a bit over the years about automated systems, and they wish they could deal with a real person. So there’s a market disconnect there, but perhaps it’s informational. If consumers knew that by paying more for a product they could deal with a real person if they have a problem, rather than a machine, they might do it.

    Slave labor never was that efficient. A worker who is motivated to do the job well (i.e., wants the job and is paid to do it) will always be more efficient than a worker who is forced to do it. So in that sense slave labor is not cheap. It goes back to what I was saying earlier on your blog, that “if inefficiency is shared, no one is the wiser.” It was pretty much the case in the agricultural South that “everybody was doing it.” I think the real question is whether the cost of paid labor is too high, and I think the way to measure that is to compare the market price of the product with the market price of the labor that produces it.

    If we look at history, we can see that sometimes technological innovation created the opposite effect of what you say. Slavery was already seen as inefficient in the early 19th century, and was on the wane, but then the cotton gin came along and suddenly made slavery a lot more cost-effective. It was a matter of what plantation owners typically had slaves do. If slaves could just be used to pick cotton, rather than pick it and also separate the fibers from the seeds, then plantation owners liked that bargain, as gruesome as it was.

    In America, I’ve heard historians say that the real reason slavery became institutionalized was because of a labor shortage. So in that sense slavery was “cheap labor,” but once the “supply/demand problem” was solved, slavery was expensive. I’ve heard it said that by the time the Civil War happened, slavery had already become inefficient again, and was on its way out “anyway.” What this suggests is that even if the Civil War hadn’t happened, slavery would’ve eventually died out because of market forces. It just would’ve taken longer to happen. Thankfully it died out as soon as we as a country could get up the gumption to end the institution. I hear analysis occasionally today that we still have people treated as slaves, people who are literally forced to work without pay. It’s just illegal. We also have the underground economy where people are paid less than a legal wage, are thankful to get it, and don’t say a word about abuse in the workplace lest the people who pay them call the immigration service. I wouldn’t call that slavery, but it’s pretty close to it, mainly because of the intimidation/fear factor, and the fact that immigrants who are here illegally have no legal protection for these sorts of issues.

    Re. developers and tools

    I think I know what you’re saying about more sophisticated tools that are more focused on getting the job done for you as opposed to making you focus on the internals. I said the following in a conversation I had with someone at the Rebooting Computing Summit in January 2009:

    “My impression is that undergraduate CS has long traded on the idea of training people to bring ideas about algorithms and optimization into the field of IT. This used to work out. Now the landscape has changed and what CS teaches is no longer as much in demand by industry. CS used to be more relevant, because its principles were applicable so long as the dynamic was about software running on hardware. Now this isn’t so much the focus. More and more software runs on VMs, knowledge of algorithms and data structures is no longer as important (though it’s still important from an evaluative standpoint), and there are significant issues around system configuration and tuning. Programming is still important, but now, because many of the important algorithms that IT uses have been abstracted away in OO frameworks, it’s much more important in an IT setting to know the framework than it is to know how to program. It’s still necessary to understand the structure of a program, but the industry can get by without a programmer knowing the finer points of optimization (though it’s helpful). ** It’s not impossible to imagine that we may very well see a time in the not too distant future when the structure of a program itself is abstracted, and so even knowing that won’t be so valuable. ** I think that the patterns and practices that CS has taught, as they apply to IT circumstances, are gradually being abstracted and automated. I do not think that the academic field can continue down the road it’s on with its head in the sand about this.

    I’ve said earlier that what I see is that CS departments are already adjusting to the reality in IT by setting up alternative ‘applied computing’ majors for undergrads that focus more on the frameworks, and system configuration/tuning issues. I think a renewed focus on CS research would allow it to reassert itself as a valuable field of study that would attract investment, and interested students. I think this could ultimately lead to new, and I dare say disruptive forms of computing appearing in the commercial marketplace that will increase interest in computing and CS again. I know it’s a lot of hand waving. I’m probably being Pollyannaish. This is the way I see CS recovering though.”

    I added some emphasis with the “**” marks.

    From what I read, even Intentional Software’s goal was not to put coders out of business. It’s just that they would do less infrastructure coding and less of what is essentially design work. Instead, their main responsibility would be to code the business rules, using the “DSL” that the Intentional tool creates (the “DSL” being the class hierarchy and method names it generates, in say, C#), based on the domain expert’s specification. I imagine it does a lot of the “plumbing” work, too.
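
    Just to illustrate the idea, here is a purely hypothetical C# sketch of what I imagine such generated code might look like (this is my guess, not Intentional’s actual output): the tool emits the vocabulary from the domain expert’s spec, and the programmer’s job shrinks to filling in the business rule:

        // Hypothetical sketch only -- not actual Intentional Software output.
        // The class and property names stand in for whatever vocabulary the
        // domain expert's specification would produce.

        public class Policy                      // generated from the spec
        {
            public decimal InsuredValue { get; set; }
            public int DriverAge { get; set; }
        }

        public class PremiumCalculator           // generated "plumbing"
        {
            // The hand-written business rule, expressed in the generated vocabulary.
            public decimal Premium(Policy p)
            {
                decimal rate = p.DriverAge < 25 ? 0.05m : 0.02m;
                return p.InsuredValue * rate;
            }
        }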

  3. Mark –

    The biggest reason, I think, that slavery stuck around as long as it did was the capital investment involved. Imagine owning a machine that costs you a lot to purchase, that you can’t use at all for years, and that takes 15 or so years of ownership before it reaches its full productivity potential. Oh, and the machine has upkeep costs, regardless of whether it’s being used or not. Slavery, as a form of capital investment, is a fairly risky move… one bad illness going around, and you are left without the means to produce, for example. Even worse, when thought of as a labor pool, it is incredibly inflexible. If your slaves are uneducated, weak, unmotivated, etc., you can’t easily fire them and replace them, or hire the right talent. I am also sure that a good number of slaves played dumb or even deliberately worked slowly as a form of rebellion. Needless to say, slavery is pretty inefficient.

    “From what I read, even Intentional Software’s goal was not to put coders out of business. It’s just that they would do less infrastructure coding and less of what is essentially design work. Instead, their main responsibility would be to code the business rules, using the “DSL” that the Intentional tool creates (the “DSL” being the class hierarchy and method names it generates, in say, C#), based on the domain expert’s specification. I imagine it does a lot of the “plumbing” work, too.”

    This is my understanding as well. I don’t think anyone is trying to “put coders out of work”, but I do think we’re going to see a distinct schism in the field as things like Intentional, the modern versions of 4GLs (I hate calling them 4GLs because of the stigma, but that’s what they are), and other systems take the abstraction to a higher level. The biggest issue with the mainstream stuff is that they keep trying to make the old stuff still work. We need that next conceptual leap to occur. The .NET ecosystem is just great at the language (C#) and CLR level; when writing algorithmic code, I like it plenty. If you know where to look, it can even express some fairly complex ideas rather elegantly (not quite LISP or Smalltalk, but a lot better than anything else in the mainstream). Using interop with F# gives you even more options.

    It’s as soon as you try writing an *application* that things fall apart, and quickly. It’s not really the Framework’s fault. It’s more the fault of modern app development. There are simply too many moving parts. Things like ASP.NET MVC bask in the glory of exposing things that really need to be hidden. It’s been TWENTY YEARS since HTML and HTTP first hit the scene… why does my app need to be painfully aware of them? Were devs still writing code in assembly to put letters on the screen and accept keyboard input twenty years after the invention of the monitor and keyboard? I don’t think so… and if they did, they certainly weren’t proud of it. But this is EXACTLY what people are doing! Why are we architecting these REST services based on the HTTP verbs, of all things? Why not base our architecture on our actual NEEDS?!?!
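
    To put that in code, here is a rough C# sketch (the endpoint, the payload, and the IOrderService interface are all invented for illustration) of how much the application has to know in each style:

        using System.Net;   // WebClient

        interface IOrderService { void Approve(int orderId); }   // the "needs"-level contract

        class RestVsNeeds
        {
            static void ApproveOrder(IOrderService orderService)
            {
                // Verb-centric: the app code has to know the URL layout, the
                // HTTP method, and the wire format.
                using (var web = new WebClient())
                {
                    web.UploadString("https://example.com/orders/123/status",
                                     "PUT", "{\"status\":\"approved\"}");
                }

                // Need-centric: the app states its intent, and a lower layer
                // (ideally generated by a better tool) owns the protocol details.
                orderService.Approve(123);
            }
        }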

    I’ve come to the conclusion that the people trying to influence mainstream development are, for the most part, total idiots in love with technology that’s “clever” with no regard for whether or not it’s “useful”. With that mindset, is it any wonder that no one likes the apps we churn out?

    J.Ja

  4. @Justin:

    If your slaves are uneducated, weak, unmotivated, etc., you can’t easily fire them and replace them, or hire the right talent.

    This was an argument that slave owners used to justify the institution at the time: “Common laborers are ‘rented.’ Their employers don’t care for them. The moment they’re not productive enough, they become unemployed. We make a commitment to our slaves to care for them. We house and feed them for as long as they are loyal.”

    I can’t relate to such an argument, but this was one of the mentalities about it at the time.

    Re. the architecture of app development

    To answer your question, yes, I think people were still programming in assembly to get keyboard input and put characters on the screen 20 years after the invention of the monitor and keyboard, but it was getting rare by then. The C language was starting to take over from assembly. As to whether they were proud of it, I’m not sure. I remember listening to a recording of a Don Box presentation on .NET several years ago where he complained a bit about programmers being “traditionalists.” He said when Microsoft introduced its first multi-tasking operating system (which I guess was Windows 3.0), programmers complained about relative addressing. He joked, “They wanted to be able to open the box while their program was running and see that little bit they set, wriggling right at the edge of the memory card, where they said it should be.”

    What you said about architecture and “cleverness” reminds me of something Alan Kay said in my post called “Does computer science have a future?”:

    What outlook does is give you a stronger way of looking at things, by changing your point of view. And that point of view informs every part of you. It tells you what kind of knowledge to get. And it also makes you appear to be much smarter.

    Knowledge is ‘silver’, but outlook is ‘gold’. I dare say [most] universities and most graduate schools attempt to teach knowledge rather than outlook. And yet we live in a world that has been changing out from under us. And it’s outlook that we need to deal with that. And in contrast to these two, IQ is just a big lump of lead. It’s one of the worst things in our field that we have clever people in it, because like Leonardo [da Vinci] none of us is clever enough to deal with the scaling problems that we’re dealing with. So we need to be less clever and be able to look at things from better points of view.

    In addition, I recently tried to probe deeper into his idea of what computer science practice (as a real science) is like. He said:

    I often use an analogy to “bridges” when trying to explain how STEM relates to computing. (I used to use an analogy to math, but not enough people were familiar enough with what it is actually about.)

    We can discover the idea and possibility of bridges in many ways, including using a log to solve a problem of how to get from A to B with something inconvenient in between. We now have phenomena to contemplate. We can improve our bridging schemes without any deep understanding of why they work (via semi ad hoc experiments) and choose the ones that don’t fall down. This is a kind of artificial evolution, somewhat like dog breeding.

    I took the above to relate to what is typical in our field, standardizing on architecture that “doesn’t fall down,” even though it’s difficult to work with in many circumstances. He went on to say:

    We can also try to understand *bridging* by making models of bridges consisting of relationships of parts to each other that try to give rise to “what bridges do”, in the form of similar phenomena to those made in the physical world. Here what we permute are the models. And then we can use these models to try things in the physical world to see how things work out.

    I think what he was talking about here is a method for developing new forms of architecture. I had to think on this for a while before I realized what he meant. One of the images that came to mind as a result was:


    [Image: the Key Bridge in Washington, D.C.]

    Notice the repeated use of the arch, an architectural feature that had originally been designed to create openings and provide structural stability inside brick buildings, like a palace, or the Colosseum in Rome. Here it was used to build not a building people would live in, or conduct some activity in, but a bridge that spans the Potomac River. I guess it’s not quite a novel idea, since the Romans had used arches in their aqueducts, but it seems to me it took some thought for someone to say, “Let’s use arches as support structure for our bridge.” The Key Bridge is not the first to do this (I don’t think), but it’s a good illustration of the concept.
