
Posts Tagged ‘History’

See Part 1, Part 2, and Part 3

It’s been a while since I’ve worked on this. I used up a lot of my passion for working on this series in the first three parts. Working on this part was like eating my spinach, not something I was enthused about, but it was a good piece of history to learn.

Nevertheless, I am dedicating this to Bob Taylor, who passed away on April 13. He got the ball rolling on building the Arpanet, when a lot of his colleagues in the Information Processing Techniques Office at ARPA didn’t want to do it. This was the predecessor to the internet. When he started up the Xerox Palo Alto Research Center (PARC), he brought in people who ended up being important contributors to the development of the internet. His major project at PARC, though, was the development of the networked personal computer in the 1970s, which I covered in Part 3 of this series. He left Xerox and founded the Systems Research Center at Digital Equipment Corp. in the mid-80s. In the mid-90s, while at DEC, he developed the internet search engine AltaVista.

He and J.C.R. Licklider were both ARPA/IPTO directors, and both were psychologists by training. Among all the people I’ve covered in this series, both of them had the greatest impact on the digital world we know today, in terms of providing vision and support to the engineers who did the hard work of turning that vision into testable ideas that could then inspire entrepreneurs who brought us what we are now using.

Part 2 covers the history of how packet switching was devised, and how the Arpanet got going. You may wish to look that over before reading this article, since I assume that background knowledge.

My primary source for this story, as it’s been for this series, is “The Dream Machine,” by M. Mitchell Waldrop, which I’ll refer to occasionally as “TDM.”

Ethernet: The first internetworking protocol

Bob Metcalfe,
from University of Texas at Austin

In 1972, Bob Metcalfe was a full-time staffer on Project MAC, building out the Arpanet and using his work as the basis for his Ph.D. in applied mathematics from Harvard. While he was setting up the Arpanet node at Xerox PARC, he stumbled upon the ALOHAnet system, an ARPA-funded project created by Norman Abramson at the University of Hawaii. It was a radio-based packet-switching network. The Hawaiian islands created a natural environment for this solution to arise, since telephone connections across the islands were unreliable and expensive.

The Arpanet's protocol waited for a gap in traffic before sending packets. On ALOHAnet, this would have been impractical: since radio waves were prone to interference, a sending terminal might not even hear a gap. So, terminals sent packets immediately to a transceiver switch. (A network switch is a computer that receives packets and forwards them to their destination; in this case, it retransmitted the sender's signal to other nodes on the network.) If the sending terminal received an acknowledgement from the switch, it knew its packets had reached their destination successfully. If not, it assumed a collision with another sending terminal had garbled the packet signal, so it waited a random period of time before resending the lost packets, on the theory that the other terminals would not be sending at that later moment. The scheme had a weakness, though: as Abramson designed it, the network could only operate at about 17% of capacity, because above that the random-wait scheme became overloaded with collisions. If use kept increasing toward that limit, the system would eventually grind to a halt.

Metcalfe thought this could be improved upon, and he ended up collaborating with Abramson, incorporating his improvements into his Ph.D. thesis. He came to work at PARC in 1972 to use what he'd learned on ALOHAnet to develop a networking scheme for the Xerox Alto. Instead of radio signals, he used coaxial cable, and found he could transmit data at a much higher speed that way. He called what he developed Ethernet.
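To make the retransmission idea concrete, here is a minimal sketch in Python of the ALOHA-style "send immediately, back off randomly on failure" loop described above. The `transmit` and `ack_received` callbacks are hypothetical stand-ins for the radio hardware; this illustrates the idea, not ALOHAnet's actual code.

```python
import random
import time

def aloha_send(packet, transmit, ack_received, max_attempts=8):
    """Send a packet ALOHA-style: don't wait for a gap in traffic, just send.
    If no acknowledgement comes back, assume a collision garbled the packet,
    wait a random interval, and try again."""
    for _ in range(max_attempts):
        transmit(packet)                       # send immediately to the transceiver switch
        if ack_received(packet, timeout=1.0):  # the switch echoed it back: success
            return True
        time.sleep(random.uniform(0.0, 0.1))   # random wait, hoping the other sender
                                               # picks a different moment to retransmit
    return False                               # gave up after repeated collisions
```

Ethernet kept this basic loop but added refinements, such as listening to the cable before sending and backing off more aggressively after repeated collisions, which is part of how Metcalfe got past the capacity ceiling described above.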

The idea behind it was to create local area networks, or LANs, using coaxial cable, which would allow the same kind of networking one could get on the Arpanet, but in an office environment, rather than between machines spread across the country. Rather than having every machine on the main network, there would be subnetworks that would connect to a main network at junction points. Along with that, Metcalfe and David Boggs developed the PUP (PARC Universal Packet) protocol. This allowed Ethernet to connect to other packet switching networks that were otherwise incompatible with each other. The developers of TCP/IP, the protocol used on the internet, would pursue these same goals a little later, beginning their own research on how to accomplish them.

In 1973, Bob Taylor set a goal in the Computer Science Lab at Xerox PARC to make the Alto computer networked. He said he wanted not only personal computing, but distributed personal computing, at the outset.

The creation of the Internet

Bob Kahn,
from the ACM

Another man who enters the picture at this point is Bob Kahn. He joined Larry Roberts at DARPA in 1972, originally to work on a large project exploring computerized manufacturing, but that project was cancelled by Congress the moment he got there. Kahn had been working on the Arpanet at BBN, and he wanted to get away from it. Well…Roberts informed him that the Arpanet was all they were going to be working on in the IPTO for the next several years, but he persuaded Kahn to stay on, and work on new advancements to the Arpanet, such as mobile satellite networking, and packet radio networking, promising him that he wouldn’t have to work on maintaining and expanding the existing system. DARPA had signed a contract to expand the Arpanet into Europe, and networking by satellite was the route that they hoped to take. Expanding the packet radio idea of ALOHAnet was another avenue in which they were interested. The idea was to look into the possibility of allowing mobile military forces to communicate in the field using these wireless technologies.

Kahn got satellite networking going using the Intelsat IV satellite. He planned out new ideas for network security and digital voice transmission over the Arpanet, and he planned out mobile terminals that the military could use, using packet radio. He had the thought, though: how are all of these different modes of communication going to communicate with each other?

As you'll see in this story, Bob Kahn and Vint Cerf on the one hand, and Bob Metcalfe on the other, in a sense ended up duplicating each other's efforts: Kahn working at DARPA, Cerf at Stanford, and Metcalfe at Xerox in Palo Alto. They came upon the same basic concepts for how to create internetworking independently. Metcalfe just came upon them a bit earlier than everyone else.

Within the DARPA community, Kahn thought it would seem easy enough to make the different networks he had in mind communicate seamlessly: just make minor modifications to the Arpanet protocol for each system. He also wondered about future expansion of the network with new modes of communication that weren't on the front burner yet, but probably would be one day. Making one-off modifications to the Arpanet protocol for each new communications method would eventually make network maintenance a mess. It would make the network more and more unwieldy as it grew, which would limit its size. He understood from the outset that this method of network augmentation was a bad management and engineering approach.

Instead of integrating the networks together, he thought a better method was to design the network's expansion modularly, where each network would be designed and managed separately, with its own hardware, software, and network protocol. Gateways would be created via hybrid routers that had Arpanet hardware and software on one side, and the foreign network's hardware and software on the other side, with a computer in the middle translating between the two.

Kahn recognized how this structure could be criticized. The network would be less efficient, since each packet from a foreign network to the Arpanet would have to go through three stages of processing instead of one. It would be less reliable, since there would be three computers that could break instead of one. It would be more complex, and expensive. The advantage would be that something like satellite communication could be managed completely independently from the Arpanet. So long as both the Arpanet and the foreign network adhered to the gateway interface standard, it would work. You could just connect them up, and the two networks could start sending and receiving packets. The key design idea is that neither network would have to know anything about the internal details of the other. Instead of a closed system, it would be open: a network of networks that could accommodate any specialized network. This same scheme is used on the internet today.

Kahn needed help in defining this open concept. So, in 1973, he started working with Vint Cerf, who had also worked on the Arpanet. Cerf had just started work at the computer science department of Stanford University.

Vint Cerf,
from Wikipedia

Cerf came up with an idea for transporting packets between networks. Firstly, a universal transmission protocol would be recognized across the network. Each sending network would encode its packets with this universal protocol, and “wrap” them in an “envelope” that used “markings” that the sending network understood in its protocol. The gateway computer would know about the “envelopes” that the sending and receiving networks understood, along with the universal protocol encoding they contained. When the gateway received a packet, it would strip off the sending “envelope,” and interpret the universal protocol enclosed within it. Using that, it would wrap the packet in a new “envelope” for the destination network to use for sending the packet through itself. Waldrop had a nice analogy for how it worked. He said it would be as if you were sending a card from the U.S. to Japan. In the U.S., the card would be placed in an envelope that had your address and the destination address written in English. At the point when it left the U.S. border, the U.S. envelope would be stripped off, and the card would be placed in a new envelope, with the source and destination addresses translated to Kanji, so it could be understood when it reached Japan. This way, each network would “think” it was transporting packets using its own protocol.
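Waldrop's postcard analogy can be sketched in a few lines of Python. The dictionary "envelopes" and network names below are invented for illustration; the point is only that the gateway strips off the sender's wrapper, leaves the universally-encoded contents alone, and re-wraps them for the destination network.

```python
def wrap(universal_packet, network):
    """Put a universally-encoded packet inside a network-specific 'envelope'."""
    return {"network": network, "contents": universal_packet}

def gateway(envelope, destination_network):
    """Strip the sender's envelope and re-wrap the contents for the next network.
    Neither network needs to know anything about the other's internals."""
    universal_packet = envelope["contents"]
    return wrap(universal_packet, destination_network)

# A packet leaves an Arpanet-style host, crosses a gateway, and enters a satellite net:
outgoing = wrap({"src": "host-a", "dst": "host-z", "data": "hello"}, "arpanet")
relayed = gateway(outgoing, "satnet")
print(relayed)   # same contents, new envelope
```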

Kahn and Cerf also understood the lesson from ALOHAnet. In their new protocol, senders wouldn't look for gaps in transmission, but send packets immediately, and hope for the best. If any packets were missed at the other end, due to collisions with other packets, they would be re-sent. This portion of the protocol was called TCP (Transmission Control Protocol). A separate protocol was created to manage how to address packets to foreign networks, since the Arpanet didn't understand how to do this. This was called IP (Internet Protocol). Together, the new internetworking protocol was called TCP/IP. Kahn and Cerf published their initial design for the internet protocol in 1974, in a paper titled, "A Protocol for Packet Network Intercommunication."
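As a rough picture of that division of labor, here is a hedged Python sketch: the IP-like layer only says which network and host a packet is for, while the TCP-like layer keeps retransmitting until it hears an acknowledgement. The field names and the `deliver`/`wait_for_ack` hooks are made up for illustration.

```python
def ip_encapsulate(segment, src, dst):
    """IP's job: address the packet to a (network, host) pair; nothing more."""
    return {"src": src, "dst": dst, "segment": segment}

def tcp_send(data, deliver, wait_for_ack, retries=5):
    """TCP's job: send immediately, and keep resending until acknowledged."""
    segment = {"seq": 0, "data": data}
    for _ in range(retries):
        deliver(ip_encapsulate(segment,
                               src=("net-1", "host-a"),
                               dst=("net-2", "host-z")))
        if wait_for_ack(segment["seq"]):   # the far end got it intact
            return True
    return False                           # still lost after several tries
```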

Cerf started up his internetworking seminars at Stanford in 1973, in an effort to create the first implementation of TCP/IP. Cerf described his seminars as being as much about consensus as technology. He was trying to get everyone to agree to a universal protocol. People came from all over the world, in staggered fashion. It was a drawn out process, because each time new attendees showed up, the discussion had to start over again. By the end of 1974, the first detailed design was drawn up and published on the Arpanet as Request for Comments (RFC) 675. Kahn chose three contractors to create the first implementations: Cerf and his students at Stanford, Peter Kirstein and his students at University College London, and BBN, under Ray Tomlinson. All three implementations were completed by the end of 1975.

Bob Metcalfe, and a colleague named John Shoch from Xerox PARC, eagerly joined in with these seminars, but Metcalfe felt frustrated, because his own work on Ethernet, and the universal protocol developed to interface Ethernet with the Arpanet and Data General's network, called PUP (PARC Universal Packet), was proprietary. He was able to contribute to TCP/IP, but couldn't contribute much overtly, or else he would jeopardize Xerox's patent application on Ethernet. He and Shoch were able to make some covert contributions by asking leading questions, such as, "Have you thought about this?" and "Have you considered that?" Cerf picked up on what was going on, and finally asked them, in what I can only assume was a humorous moment, "You've done this before, haven't you?" Also, Cerf's emphasis on consensus was taking a long time, and Metcalfe eventually decided to part with the seminars, because he wanted to get back to work on PUP at Xerox. In retrospect, he had second thoughts about that decision. He and David Boggs at Xerox got PUP up and running well before TCP/IP was finished, but it was a proprietary protocol. TCP/IP was not, and by making his decision, he had cut himself off from influencing the direction of the internet into what it eventually became.

PARC's own view of Ethernet was that it would be used by millions of other networks. The initial vision of the TCP/IP group was that it would be an extension of the Arpanet, and as such, would only need to accommodate the needs that DARPA had for it. Remember, Kahn set out for his networking scheme to accommodate satellite and military communications, and some other uses that he hadn't thought of yet. Metcalfe saw that TCP/IP would only accommodate 256 other networks. He said, "It could only work with one or two nets per country!" He couldn't talk to them about the possibility of millions of networks using it, because that would have tipped others off to proprietary technology that Xerox had for local area networks (LANs). Waldrop doesn't address this technical point of how TCP/IP was expanded to accommodate more than 256 networks. Somehow it was, because obviously it's expanded way beyond that. (The limit came from the 8-bit network number in the original IP address format; classful A/B/C addressing in the early 1980s, and later CIDR, removed it.)

Several years later, people on the project started using the term “Internet,” which was just a shortened version of the term “internetworking.” It took many years before TCP/IP was considered stable enough to port the Arpanet over to it completely. The Department of Defense adopted TCP/IP as its official standard in 1980, and the Arpanet was converted to TCP/IP by January 1, 1983, thus creating the Internet. This made it much easier to expand the network than was the case with the Arpanet’s old protocol, because TCP/IP incorporates protocol translation into the network infrastructure. So it doesn’t matter if a new network comes along with its own protocol. It can be brought into the internet, with a junction point that translates its protocol.

We should keep in mind as well that at the point in time that the internet officially came online, nothing had changed with respect to the communications hardware (which I covered in Part 3). 56 Kbps was still the maximum speed of the network, as all network communications were still conducted over modems and dedicated long-distance phone lines. This became an issue as the network expanded rapidly.

The TCP/IP working group worked on porting the protocol (called the Kahn-Cerf internetworking protocol in the early days) to many operating systems, and migrating subprotocols from the Arpanet to the Kahn-Cerf network, from the mid-1970s into 1980.

The next generation of the internet

While the founding of the internet would seem to herald an age of networking harmony, this was not the case. Competing network standards proliferated before, during, and after TCP/IP was developed, and the networking world was just as fragmented when TCP/IP got going as before. People who got on one network could not transfer information, or use their network to interact with people and computers on other networks. The main reason for this was that TCP/IP was still a defense program.

The National Science Foundation (NSF) would unexpectedly play the critical role of expanding the internet beyond the DoD. The NSF was not known for funding risky, large-scale, enterprising projects. The people involved with computing research never had much respect for it, because it was so risk-averse, and politicized. Waldrop said,

Unlike their ARPA counterparts, NSF funding officers had to submit every proposal for "peer-review" by a panel of working scientists, in a tedious decision-by-committee process that (allegedly) quashed anything but the most conventional ideas. Furthermore, the NSF had a reputation for spreading its funding around to small research groups all over the country, a practice that (again allegedly) kept Congress happy but most definitely made it harder to build up a Project MAC-style critical mass of talent in any one place.

Still, the NSF was the only funding agency chartered to support the research community as a whole. Moreover, it had a long tradition of undertaking … big infrastructure efforts that served large segments of the community in common. And on those rare occasions when the auspices were good, the right people were in place, and the planets were lined up just so, it was an agency where great things could happen. — TDM, p. 458

In the late 1970s, researchers at “have not” universities began to complain that while researchers at the premier universities had access to the Arpanet, they didn’t. They wanted equal access. So, the NSF was petitioned by a couple schools in 1979 to solve this, and in 1981, the NSF set up CSnet (Computer Science Network), which linked smaller universities into the Arpanet using TCP/IP. This was the first time that TCP/IP was used outside of the Defense Department.

Steve Wolff,
from Internet2

The next impetus for expanding the internet came from physicists, who thought of creating a network of supercomputing centers for their research, since using pencil and paper was becoming untenable, and they were having to beg for time on supercomputers at Los Alamos and Lawrence Livermore that were purchased by the Department of Energy for nuclear weapons development. The NSF set up such centers at the University of Illinois, Cornell University, Princeton, Carnegie Mellon, and UC San Diego in 1985. With that, the internet became a national network, but as I said, this was only the impetus. Steve Wolff, who would become a key figure in the development of the internet at the NSF, said that as soon as the idea for these centers was pitched to the NSF, it became instantly apparent to them that this network could be used as a means for communication among scientists about their work. That last part was something the NSF just “penciled in,” though. It didn’t make plans for how big this goal could get. From the beginning, the NSF network was designed to allow communication among the scholarly community at large. The NSF tried to expand the network outward from these centers to K-12 schools, museums, “anything having to do with science education.” We should keep in mind how unusual this was for the NSF. Waldrop paraphrased Wolff, saying,

[The] creation of such a network exceeded the foundation’s mandate by several light-years. But since nobody actually said no—well, they just did it. — TDM, p. 459

The NSF declared TCP/IP as its official standard in 1985 for all digital network projects that it would sponsor, thenceforth. The basic network that the NSF set up was enough to force other government agencies to adopt TCP/IP as their network standard. It forced other computer manufacturers, like IBM and DEC, to support TCP/IP as well as their own networking protocols that they tried to push, because the government was too big of a customer to ignore.

The expanded network, known as “NSFnet,” came online in 1986.

And the immediate result was an explosion of on-campus networking, with growth rates that were like nothing anyone had imagined. Across the country, those colleges, universities, and research laboratories that hadn’t already taken the plunge began to install local-area networks on a massive scale, almost all of them compatible with (and having connections to) the long-distance NSFnet. And for the first time, large numbers of researchers outside the computer-science departments began to experience the addictive joys of electronic mail, remote file transfer and data access in general. — TDM, p. 460

That same year, a summit of network representatives came together to address the problem of naming computers on the internet, and assigning network addresses. From the beginnings of the Arpanet, a paper directory of network names and addresses had been updated and distributed to the different computer sites on the network, so that everyone would know how to reach each other's computers on the network. There were so few computers on it, they didn't worry about naming conflicts. By 1986 this system was breaking down. There were so many computers on the network that assigning names to them started to be a problem. As Waldrop said, "everyone wanted to claim 'Frodo'." The representatives came to the conclusion that they needed to automate the process of creating the directory for the internet. They called it the Domain Name System (DNS). (I remember getting into a bit of an argument with Peter Denning in 2009 on the issue of the lack of computer innovation after the 1970s. He brought up DNS as a counter-example. I assumed DNS had come in with e-mail in the 1970s. I can see now I was mistaken on that point.)
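DNS still plays that "automated directory" role today. As a small, modern illustration (this uses Python's standard library, and isn't specific to the 1986 design), resolving a name asks the directory for the numeric address that packets are actually sent to:

```python
import socket

# Ask the Domain Name System to translate a human-friendly host name into
# the numeric address the network routes on.
address = socket.gethostbyname("example.com")
print(address)   # prints whatever address the directory returns today
```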

Then there was the problem of speed,

“When the original NSFnet went up, in nineteen eighty-six, it could carry fifty-six kilobits per second, like the Arpanet,” says Wolff. “By nineteen eighty-seven it had collapsed from congestion.” A scant two years earlier there had been maybe a thousand host computers on the whole Internet, Arpanet included. Now the number was more like ten thousand and climbing rapidly. — TDM, p. 460

This was a dire situation. Wolff and his colleagues quickly came up with a plan to increase the internet’s capacity by a factor of 30, increasing its backbone bandwidth to 1.5 Mbps, using T1 lines. They didn’t really have the authority to do this. They just did it. The upgrade took effect in July 1988, and usage exploded again. In 1991, Wolff and his colleagues boosted the network’s capacity by another factor of 30, going to T3 lines, at 45 Mbps.

About Al Gore

Before I continue, I thought this would be a good spot to cover this subject, because it relates to what happened next in the internet’s evolution. Bob Taylor said something about this in the Q&A section in his 2010 talk at UT Austin. (You can watch the video at the end of Part 3.) I’ll add some more to what he said here.

Gore became rather famous for supposedly saying he “invented” the internet. Since politics is necessarily a part of discussing government R&D, I feel it’s necessary to address this issue, because Gore was a key figure in the development of the internet, but the way he described his involvement was confusing.

First of all, he did not say the word “invent,” but I think it’s understandable that people would get the impression that he said he invented the internet, in so many words. This story originated during Gore’s run for the presidency in 1999, when he was Clinton’s vice president. What I quote below is from an interview with Gore on CNN’s Late Edition with Wolf Blitzer:

During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth, environmental protection, improvements in our educational system, during a quarter century of public service, including most of it coming before my current job. I have worked to try to improve the quality of life in our country, and our world. And what I’ve seen during that experience is an emerging future that’s very exciting, about which I’m very optimistic …

The video segment I cite cut off after this, but you get the idea of where he was going with it. He used the phrase, “I took the initiative in creating the Internet,” several times in interviews with different people. So it was not just a slip of the tongue. It was a talking point he used in his campaign. There are some who say that he obviously didn’t say that he invented the internet. Well, you take a look at that sentence and try to see how one could not infer that he said he had a hand in making the internet come into existence, and that it would not have come into existence but for his involvement. In my mind, if anybody “took the initiative in creating the Internet,” it was the people involved with ARPA/IPTO, as I’ve documented above, not Al Gore! You see, to me, the internet was really an evolutionary step made out of the Arpanet. So the internet really began with the Arpanet in 1969, at the time that Gore had enlisted in the military, just after graduating from Harvard.

The term “invented” was used derisively against him by his political opponents to say, “Look how delusional he is. He thinks he created it.” Everyone knew his statement could not be taken at face value. His statement was, in my own judgment, a grandiose and self-serving claim. At the time I heard about it, I was incredulous, and it got under my skin, because I knew something about the history of the early days of the Arpanet, and I knew it didn’t include the level of involvement his claim stated, when taken at face value. It felt like he was taking credit where it wasn’t due. However, as you’ll see in statements below, some of the people who were instrumental in starting the Arpanet, and then the internet, gave Gore a lot of slack, and I can kind of see why. Though Gore’s involvement with the internet didn’t come into view until the mid-1980s, apparently he was doing what he could to build political support for it inside the government all the way back in the 1970s, something that has not been visible to most people.

Gore's approach to explaining his role, though, blew up in his face, and that was the tragedy in it, because it obscured his real accomplishments. A more precise way for him to talk about his involvement would've been to say, "I took initiatives that helped create the Internet as we know it today." The truth of the matter is he deserves credit for providing and fostering political, intellectual, and financial support for a next-generation internet that would support the transmission of high-bandwidth content, what he called the "information superhighway." Gore's father, Al Gore, Sr., sponsored the bill in the U.S. Senate that created the Interstate highway system, and I suspect that Al Gore, Jr., or those in his campaign, wanted him to be placed right up there with his father in the pantheon of leaders of great government infrastructure projects that have produced huge dividends for this country. That would've been nice, but seeing him overstate his role took him down a peg, rather than raising him up in public stature.

Here are some quotes from supporters of Gore’s efforts, taken from a good Wikipedia article on this subject:

As far back as the 1970s Congressman Gore promoted the idea of high-speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship […] the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises.

— a joint statement by Vint Cerf and Bob Kahn

The sense I get from the above statement is that Gore was a political and intellectual cheerleader for the internet in the 1970s, spreading a vision about how it could be used in the future, to help build political support for its future funding and development. The initiative Gore talked about I think is expressed well in this statement:

A second development occurred around this time, namely, then-Senator Al Gore, a strong and knowledgeable proponent of the Internet, promoted legislation that resulted in President George H.W. Bush signing the High Performance Computing and Communication Act of 1991. This Act allocated $600 million for high performance computing and for the creation of the National Research and Education Network. The NREN brought together industry, academia and government in a joint effort to accelerate the development and deployment of gigabit/sec networking.

— Len Kleinrock

This is basically accurate, but more of a summary. Here is some more detail from TDM.

In 1986, Democratic Senator Al Gore, who had a keen interest in the internet and in the development of the networked supercomputing centers created in 1985, asked for a study on the feasibility of getting gigabit network speeds using fiber optic lines to connect up the existing computing centers. This created a flurry of interest from a few quarters. DARPA, NASA, and the Energy Department saw opportunities in it for their own projects. DARPA wanted to include it in its Strategic Computing Initiative, begun by Bob Kahn in the early 1980s. DEC saw a technological opportunity to expand upon ideas it had been developing.

To Gore's surprise, he received a multiagency report in 1987 advocating a government-wide "assault" on computer technology. It recommended that Congress fund a billion-dollar research initiative in high-performance computing, with the goal of creating computers in several years whose performance would be an order of magnitude greater than the fastest computers available at the time. It also recommended starting the National Research and Education Network (NREN), to create a network that would send data at gigabits per second, an order of magnitude greater than what the internet had just been upgraded to.

In 1988, after getting more acquainted with these ideas, Gore introduced a bill in Congress to fund both initiatives, called the Gore Bill. It put DARPA in charge of developing the gigabit network technology, officially authorized and funded the NSF's expansion of NSFnet, and gave the NSF the explicit mission of connecting the whole federal government, and the university system of the United States, to the internet. It ran into a roadblock, though, from the Reagan Administration, which argued that these initiatives for faster computers and faster networking should be left to the private sector.

This stance causes me to wonder if perhaps there was a mood to privatize the internet back then, but it just wasn’t accomplished until several years later. I talked with a friend a few years ago about the internet’s history. He had been a tech innovator for many years, and he said he was lobbying for the network to be privatized in the ’80s. He said he and other entrepreneurs he knew were chomping at the bit to develop it further. Perhaps there wasn’t a mood for that in Congress.

The funding for high-performance computing research in Gore’s original bill was passed in 1991, under the George H.W. Bush Administration. The second half, on funding the gigabit network architecture, was passed in 1993, as the High-Performance Computing and Communications Act (HPCCA), under President Clinton.

Waldrop says, though, that this bump in the road hardly mattered, because the agencies were quietly setting up a national network on their own. Gordon Bell at the NSF said that their network was getting bigger and bigger. Program directors from the NSF, NASA, DARPA, and the Energy Department created their own ad hoc shadow agency, called the Federal Research Internet Coordinating Committee (FRICC). If they agreed on a good idea, they found money in one of their agencies' budgets to do it. This committee would later be reconstituted officially as the Federal Networking Council. These agencies also started standardizing their networks around TCP/IP. By the beginning of the 1990s, the de facto national research network was officially in place.

Marc Andreessen said that the development of Mosaic, the first widely used graphical web browser and the prototype for Netscape's browser, couldn't have happened when it did without funding from the HPCCA. "If it had been left to private industry, it wouldn't have happened. At least, not until years later," he said. Mosaic was developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois in 1993. Netscape was founded the following year.

The 2nd generation network takes shape

Recognizing in the late '80s that the megabit speeds of the internet were making the Arpanet a dinosaur, DARPA started decommissioning their Interface Message Processors (IMPs), and by 1990, the Arpanet was officially offline.

By 1988, there were 60,000 computers on the internet. By 1991, there were 600,000.

CERN in Switzerland had joined the internet in the late 1980s. In 1990, an English physicist working at CERN, named Tim Berners-Lee, created his first system for hyperlinking documents on the internet, what we would now know as a web server and a web browser. Berners-Lee had actually been experimenting with data linking long before this, long before he’d heard of Vannevar Bush, Doug Engelbart, or Ted Nelson. In 1980, he linked files together on a single computer, to form a kind of database, and he had repeated this metaphor for years in other systems he created. He had written his World Wide Web tools and protocol to run on the NeXT computer. It would be more than a year before others would implement his system on other platforms.
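The exchange at the heart of Berners-Lee's system was, and still is, very simple: a browser asks a server for a document by name, and the server sends back text that can contain links to other documents. Here is a minimal, modern illustration using Python's standard library (not his original 1990 code):

```python
from http.client import HTTPConnection

# The core web transaction: connect to a server, request a document by its
# path, and read back the hypertext it returns.
conn = HTTPConnection("example.com", 80)
conn.request("GET", "/")
response = conn.getresponse()
document = response.read().decode("utf-8", errors="replace")
print(response.status)    # e.g. 200
print(document[:200])     # the beginning of the returned hypertext
conn.close()
```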

How the internet was privatized

Wolff, at NSF, was beginning to try to get the internet privatized in 1990. He said,

"I pushed the Internet as hard as I did because I thought it was capable of becoming a vital part of the social fabric of the country—and the world," he says. "But it was also clear to me that having the government provide the network indefinitely wasn't going to fly. A network isn't something you can just buy; it's a long-term, continuing expense. And government doesn't do that well. Government runs by fad and fashion. Sooner or later funding for NSFnet was going to dry up, just as funding for the Arpanet was drying up. So from the time I got here, I grappled with how to get the network out of the government and instead make it part of the telecommunications business." — TDM, p. 462

The telecommunications companies, though, didn’t have an interest in taking on building out the network. From their point of view, every electronic transaction was a telephone call that didn’t get made. Wolff said,

“As late as nineteen eighty-nine or ‘ninety, I had people from AT&T come in to me and say—apologetically—’Steve, we’ve done the business plan, and we just can’t see us making any money.'” — TDM, p. 463

Wolff had planned from the beginning of NSFnet to privatize it. After the network got going, he decentralized its management into a three-tiered structure. At the lowest level were campus-scale networks operated by research laboratories, colleges, and universities. In the middle level were regional networks connecting the local networks. At the highest level was the “backbone” that connected all of the regional networks together, operated directly by the NSF. This scheme didn’t get fully implemented until they did the T1 upgrade in 1988.

Wolff said,

“Starting with the inauguration of the NSFnet program in nineteen eighty-five,” he explains, “we had the hope that it would grow to include every college and university in the country. But the notion of trying to administer a three-thousand-node network from Washington—well, there wasn’t that much hubris inside the Beltway.” — TDM, p. 463

It’s interesting to hear this now, given that HealthCare.gov is estimated to be between 5 and 15 million lines of code (no official figures are available to my knowledge. This is just what’s been disclosed on the internet by an apparent insider), and it is managed by the federal government, seemingly with no plans to privatize it. For comparison, that’s between the size of Windows NT 3.1 and the flight control software of the Boeing 787.

Wolff also structured each of the regional service providers as non-profits. Wolff told them up front that they would eventually have to find other customers besides serving the research community. “We don’t have enough money to support the regionals forever,” he said. Eventually, the non-profits found commercial customers—before the internet was privatized. Wolff said,

“We tried to implement an NSF Acceptable Use Policy to ensure that the regionals kept their books straight and to make sure that the taxpayers weren’t directly subsidizing commercial activities. But out of necessity, we forced the regionals to become general-purpose network providers.” — TDM, p. 463

This structure became key to how the internet we have today unfolded. Waldrop notes that around 1990, a number of independent service providers came into existence to provide internet access to anyone who wanted it, without restrictions.

In 1991, a dispute erupted between one of the regional networks and the NSF over who should run the top-level network (the NSF had awarded the contract to a company called ANS, a consortium of private companies). After a series of investigations, which found no wrongdoing on the NSF's part, Congress passed a bill in 1992 that allowed for-profit Internet Service Providers to access the top-level network. Over the next few years, the subsidy the NSF provided to the regional networks tapered off, until on April 30, 1995, NSFnet ceased to exist, and the internet was self-sustaining. The NSF continued operating a much smaller, high-speed network, connecting its supercomputing centers at 155 Mbps.

Where are they now?

Bob Metcalfe left PARC in 1975. He returned to Xerox in 1978, working as a consultant to DEC and Intel, to try to hammer out an open standard agreement with Xerox for Ethernet. He started 3Com in 1979 to sell Ethernet technology. Ethernet became an open standard in 1982. He left 3Com in 1990. He then became a publisher, and wrote a column for InfoWorld Magazine for ten years. He became a venture capitalist in 2001, and is now a partner with Polaris Venture Partners.

Bob Kahn founded the Corporation for National Research Initiatives, a non-profit organization, in 1986, after leaving DARPA. Its mission is "to provide leadership and funding for research and development of the National Information Infrastructure." He has served on the State Department's Advisory Committee on International Communications and Information Policy, the President's Information Technology Advisory Committee, the Board of Regents of the National Library of Medicine, and the President's Advisory Council on the National Information Infrastructure. Kahn is currently working on a digital object architecture for the National Information Infrastructure, as a way of connecting different information systems. He is a co-inventor of Knowbot programs, mobile software agents in the network environment. He is a member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), a Fellow of the ACM, and a Fellow of the Computer History Museum. (Source: The Corporation for National Research Initiatives)

Vint Cerf joined DARPA in 1976. He left to work at MCI in 1982, where he stayed until 1986. He created MCI Mail, the first commercial e-mail service that was connected to the internet. He joined Bob Kahn at the Corporation for National Research Initiatives, as Vice President. In 1992, he and Kahn, among others, founded the Internet Society (ISOC). He re-joined MCI in 1994 as Senior Vice President of Technology Strategy. The Wikipedia page on him is unclear on this detail, saying that at some point, “he served as MCI’s senior vice president of Architecture and Technology, leading a team of architects and engineers to design advanced networking frameworks, including Internet-based solutions for delivering a combination of data, information, voice and video services for business and consumer use.” There are a lot of accomplishments listed on his Wikipedia page, more than I want to list here, but I’ll highlight a few:

Cerf joined the board of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1999, and served until the end of 2007. He has been a Vice President at Google since 2005. He is working on the Interplanetary Internet, together with NASA’s Jet Propulsion Laboratory. It will be a new standard to communicate from planet to planet, using radio/laser communications that are tolerant of signal degradation. He currently serves on the board of advisors of Scientists and Engineers for America, an organization focused on promoting sound science in American government. He became president of the Association for Computing Machinery (ACM) in 2012 (for 2 years). In 2013, he joined the Council on Cybersecurity’s Board of Advisors.

Steve Wolff left the NSF in 1994, and joined Cisco Systems as a business development manager for their Academic Research and Technology Initiative. He is a life member of the Institute of Electrical and Electronic Engineers (IEEE), and is a member of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), and the Internet Society (ISOC) (sources: Wikipedia, and Internet2)

— Mark Miller, https://tekkie.wordpress.com


Hat tip to Christina Engelbart at Collective IQ

I was wondering when this was going to happen. The Difference Engine exhibit is coming to an end. I saw it in January 2009, when I was attending the Rebooting Computing Summit at the Computer History Museum, in Mountain View, CA. The people running the exhibit said that it was commissioned by Nathan Myhrvold, who had temporarily loaned it to the CHM and was soon going to reclaim it, putting it in his house. They thought that might happen in April of that year. A couple years ago, I checked with Tom R. Halfhill, who has worked as a volunteer at the museum, and he told me it was still there.

The last day for the exhibit is January 31, 2016. So if you have the opportunity to see it, take it now. I highly recommend it.

There is another replica of this same difference engine on permanent display at the London Science Museum in England. After this, that will be the only place where you can see it.

Here is a video they play at the museum explaining its significance. The exhibit is a working replica of Babbage’s Difference Engine #2, a redesign he did of his original idea.

Related posts:

Realizing Babbage

The ultimate: Swade and Co. are building Babbage’s Analytical Engine–*complete* this time!


I found a version of Smalltalk-72 online through Gilad Bracha on Google+. It’s being written by Dan Ingalls, the original implementor of Smalltalk, on his relatively new web authoring platform called Lively Kernel (or Lively Web). I don’t know what Ingalls has planned for it, whether this will be up for long. I’m just writing about it because I found it an interesting and enlightening experience, and I wanted to share about it with others so that hopefully you’ll gain similar value from trying it out. You can use ST-72 here. I have some words of warning about it, as I’ll gradually describe in this post.

First of all, I’ve noticed as of this writing that as it’s loading up it puts up a scary “error occurred” message, and a bunch of “loading” messages get spewed all over the screen. However, if you are patient, all will be well. The error screen clears after a bit, and you can start using ST-72. This looks to be a work in progress, and some of it works.

What you’ll see is a white vertical rectangle surrounded by a yellowish box that contains some buttons. The white rectangle represents the screen area you work in. At the bottom of that screen area is a command line, which starts off with “Welcome to SMALLTALK.”

This environment was created using an original saved image of a running Smalltalk-72 system from 40 years ago. Even though the title says “ALTO Smalltalk-72,” it’s apparently running on a Nova emulator. The Nova was a minicomputer from Data General, which was the original platform Smalltalk was written on at Xerox PARC in the early 1970s. I’m unclear on what the connection is to the Alto’s hardware. You can even look at a Nova assembly monitor display by clicking on the “Show Nova” button, which will show you a disassembly of the machine instructions that the emulator is executing as you use Smalltalk. (To get back to Smalltalk, click on “Show Smalltalk.”) The screen display you see is as it was originally.

Smalltalk-72 is not the first version of Smalltalk, but it comes pretty close. According to Alan Kay’s retelling in “The Early History of Smalltalk,” there was an earlier version (which was the result of a bet he made with his PARC colleagues), but it was rudimentary.

This version does not run like the versions of Smalltalk most people familiar with it have used, which were based on Smalltalk-80. There are no workspace windows (yet), other than the command line interface at the bottom of the screen area, though there is a class editor (which as of this writing I’d say is non-functional). It doesn’t look like there’s a debugger. Anytime an error occurs, a subwindow with an error message pops up (which can be dismissed by pressing ctrl-D). You can still get some things done with the environment. To really get an idea of how to use it you need to click on “Open ST-72 Manual.” This will bring up a PDF of the original Smalltalk-72 documentation.

What you should keep in mind, though (again, as of this writing), is that some features will not work at all. File functions do not work. There's a keystroke the manual calls "'s", which is supposed to recall a specified object attribute, such as, for example, "title's string" to access a variable called "string" out of an object called "title." ("title" receives "'s string" as separate messages ("'s" and "string"), but "'s" is supposed to be received as a single character, not two, as shown.) I have not been able to find this keystroke on the keyboard, though you can substitute a different message for it (whatever you want to use) inside a class.

The biggest thing missing at this point is that while it can read the mouse pointer's position, this ST-72 environment does not detect any mouse button presses (the original mouse that was used with it had several mouse buttons). This disables any ability to use the built-in class editor, which makes using ST-72 a chore for anything more than "hello world"-type stuff, because besides adding a method to a class (which is easy to do), it seems the only way to change code inside a class is to erase the class (assign nil to its name), and then completely rewrite it. I also found this made later code examples in the documentation (which got more into sophisticated graphics) impossible to try out, as much as I tried to use keyboard actions to substitute for mouse clicks.

What I liked about seeing this version was it helped explain what Kay was talking about when he described the first versions of Smalltalk as “iconic” programming languages, and what he described with Smalltalk’s parsing technique, in “The Early History of Smalltalk” (which I will call “TEHS” hereafter).

To use this version, you’ll need to get familiar with how to use your keyboard to type out the right symbols. It’s not cryptic like APL, but it’s impossible to do much without using some symbolic characters in the code.

The most fascinating aspect to me is that parsing was a major part of how objects operated in this environment. If you're familiar with Lisp, ST-72 will seem a bit Lisp-like. In fact, Kay says in TEHS that one of his objectives was to improve on Lisp's use of "special forms" by making them unnecessary. He did this by making parsing an integral part of how objects operate, and by making what appear to be syntactic elements (what would be reserved words or characters in other languages) their own classes. Those classes receive messages, which are just other elements of a Smalltalk expression in your code, so the parsing action goes pretty deep into how the whole system operates.

Objects in ST-72 exist in what I’d call a “nether world” between the way that Lisp functions and macros behave (without the code generation part). Rather than substitute bound values and evaluate, like Lisp does, it parses tokens and evaluates (though it has bound values in class, instance, and temporary variables).

It's possible to create a class which has no methods (or just an implicit anonymous one). Done this way, it looks almost like a lambda (an anonymous function) in Lisp with no parameters. However, to pass parameters to an object, the preferred method is to execute some parsing actions inside the class's code. ST-72 makes this pretty easy. Once you start doing it, using the "eyeball" character, classes start to look a bit like Lisp functions with a COND expression in them.

Distinguishing between using a class and an instance of it can be confusing in this environment. If you reference a class and pass it a message, that’s executing a class’s code using its class variable(s). (In ST-80 this would be called using “class-side code,” though in ST-72 the only distinction between class and instance is what variables inside the class specification get used. The code that gets executed is exactly the same in both cases.) In order to use an object instance (using its instance variable(s)), you have to assign the class to a variable first, and then pass your message to that variable. Assigning a class to a variable automatically creates an instance of the class. There is no “new” operator. The semantics of assignment look similar to that of SETQ in Lisp.

A major difference I notice between ST-72 and ST-80 is that in ST-80 you generally set up a method with a signature that lays out what message it will receive, and its parameter values, and this is distinct from the functionality that the method executes. In ST-72 you create a method by setting up a parsing expression (using the “eyeball” character) within the body of the class code, followed by an “implies” (conditional) expression, which then may contain additional parsing/implies expressions within it to get the message’s parameters, and somewhere in the chain of these expressions you execute an action once the message is fully parsed. I imagine this could get messy with sophisticated interactions, but at the same time I appreciated it, because it can allow a very straightforward programming style when using the class.
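To give a feel for the difference, here is a loose Python analogy (deliberately not Smalltalk-72 syntax): instead of a fixed method signature, the receiving object looks at the incoming token stream itself, and an "eyeball"-like check followed by an "implies"-like branch decides what to do and how many more tokens to fetch. The `Box` class and its messages are invented for illustration, loosely echoing the box examples in the ST-72 manual.

```python
class Box:
    """Rough analogy to an ST-72 object: the receiver parses its own message."""
    def __init__(self):
        self.size = 50

    def receive(self, tokens):
        stream = iter(tokens)              # the rest of the message, as a token stream
        token = next(stream, None)         # 'eyeball': look at the next token
        if token == "grow":                # 'implies': if it matches, keep parsing
            amount = next(stream, 0)       #   fetch the message's parameter ourselves
            self.size += amount
        elif token == "erase":
            self.size = 0
        else:
            print("don't understand:", token)

b = Box()
b.receive(["grow", 10])   # the object itself decides that "grow" takes one more token
print(b.size)             # 60
```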

Creating a class is easy. You use the keyword “to” (“to” is itself a class, though I won’t get into that). If anyone has used the Logo language, this will look familiar. One of Kay’s inspirations for Smalltalk was Logo. After the “to”, you give the class a name, and enter any temporary, instance, and/or class variables. After that, you start a vector (a list) which contains your code for the class, and then you close off the vector to complete the class specification.

All graphics in ST-72 are done using a "turtle" (though it is possible to use x-y coordinates with it to position it, and to direct it to draw lines). So it's easy to do turtle graphics in this version of Smalltalk. If you go all the way through the documentation, you'll see that Kay and Adele Goldberg, who co-wrote the documentation, do some sophisticated things with the graphics capabilities, such as creating windows and menus, though since mouse button presses are non-functional at this point, I found this part difficult, if not impossible, to deal with.

I have at times found that I’ve wanted to just start the emulator over and erase what I’ve done. You can do this by hitting the “Show Nova” button, hitting the “Restart” button, and then hitting “Show Smalltalk.”

Here is the best I could do for an ST-72 key map. Some of this is documented. Some of it I found through trial and error. You’ll generally get the gist of what these keys do as you go through the ST-72 documentation.

| function | keystroke | description |
| --- | --- | --- |
| ! ("do it") | \ (backslash) | evaluates expression entered on the command line, and executes it |
| hand | shift-' | quotes a literal (equivalent to "quote" in Lisp) |
| eyeball | shift-5 | "look for" – for parsing tokens |
| open colon | ctrl-C | fetch the next token, but do not evaluate (looks like two small circles, one on top of the other) |
| keyhole | ctrl-K | peek at the input stream, but do not fetch token from it |
| implies | shift-/ | used to create if…then…else… expressions |
| return | shift-1 | returns value |
| smiley/turtle | shift-2 | for turtle graphics |
| open square | shift-7 | bitwise operation, followed by: a * (number) = a AND (number); a + (number) = a OR (number); a – (number) = a XOR (number); a / (number) = a left-shift by (number) |
| ? | shift-~ (tilde) | question mark character |
| 's | (unknown) | |
| done | ctrl-D | exit subwindow |
| unary minus | | |
| less-than-or-equal | ctrl-A | |
| greater-than-or-equal | ctrl-Z | |
| not equal | ctrl-N | |
| % | ctrl-V | |
| @ | ctrl-F | |
| ! | ctrl-Q | exclamation point character |
| " | ctrl-O | double quote character |
| zero(?) | shift-4 | |
| up-arrow | shift-6 | |
| assignment | shift-_ | displays a left-arrow, equivalent to "=" in Algol-derived languages |
| temp/instance/class variable separator | : (colon) | (this character looks like '/' in the ST-72 documentation, but you should use ":" instead) |

When entering Smalltalk code, the Return key does not enter the code into the system. It just does a line-feed so you can enter more code. To execute code you must “do it,” by pressing ‘\’.

You’ll notice some of the keystrokes are just for typing out a literal character, like ‘?’. This is due to the emulator re-mapping some of the keystrokes on your keyboard to do something else, so it was necessary to also re-map the substituted characters to something else so they could still be used.

Using ST-72 feels like playing around in a sandbox. It’s fun seeing what you can find out. Enjoy.


The death of Neil Armstrong, the first man to walk on the Moon, on August 25 got me reflecting on what was accomplished by NASA during his time. I found a YouTube channel called “The Conquest of Space,” and it’s been wonderful getting acquainted with the history I didn’t know.

I knew about the Apollo program from the time I was a kid in the 1970s. I was born two months after Apollo 11, so I only remember it in hindsight. By the time I was old enough to be conscious of the Apollo program’s existence, it had been mothballed for four or five years. I could not be ignorant of its existence. It was talked about often on TV, and in the society around me. I lived in Virginia, near Washington, D.C., in my early childhood. I remember I used to be taken regularly to the Smithsonian Air and Space Museum. Of all of their museums, it was my favorite. There, I saw video of one of the moon walks, the space suits used for the missions (as mannequins), the Command Module, and Lunar Module at full scale, artifacts of a time that had come and gone. There was hope that someday we would go back to the Moon, and go beyond it to the planets. The Air and Space Museum had an IMAX movie that was played continuously, called “To Fly.” From what I’ve read, they still show it. It was produced for the museum in 1976. I remember watching it a bunch of times. It was beautifully done, though looking back on it, it had the feel of a “demo” movie, showing off what could be done with the IMAX format. It dramatizes the history of flight, from hot air balloons in the 19th century, to the jet age, to rockets to the Moon. A cool thing about it is it talked about the change in perspective that flight offered, a “new eye.” At the end it predicted that we would have manned space missions to the planets.

Why wouldn’t we have manned missions that venture to the planets, and ultimately, perhaps a hundred years off, to other star systems? It would just be an extension of the advancements in flight we had made on earth. The idea that we would keep pushing the boundaries of our reach seemed like a given, that this technological pace we had experienced would just keep going. That’s what everything that was science-oriented was telling me. Our future was in space.

In the late 1970s Carl Sagan produced a landmark series on science called “Cosmos.” He talked about the history of space exploration, mostly from the ground, and how our destiny was to travel into space. He said, introducing the series,

The surface of the earth is the shore of the cosmic ocean. On this shore we’ve learned most of what we know. Recently we’ve waded a little way out, maybe ankle deep, and the water seems inviting. Some part of our being knows this is where we came from. We long to return, and we can…

Winding down?

As I got into my twenties, in the 1990s, I started to worry about NASA’s robustness as a space program. It started to look like a one-trick pony that only knew how to launch astronauts into low-earth orbit. “When are we going to return to the Moon,” I’d ask myself. NASA sent probes out to Jupiter, Mars, and then Saturn, following in the footsteps of Voyager 1 and 2. Surely similar questions were being asked of NASA, because I’d often hear them say that the probes were forerunners to future manned space flight, that they were gathering information that we needed to know in advance for manned missions, holding out that hope that someday we’d venture out again.

The Space Shuttle was our longest running space program, from 1981 to 2011, 30 years. Back around the year 2000 I remember Vice President Al Gore announcing the winner of the contract to build the next generation space shuttle, which would take the place of the older models, but it never came to be. Under the administration of George W. Bush the Constellation program started in 2005, with the idea of further developing the International Space Station, returning astronauts to the Moon, establishing a base there for the first time, and then launching manned missions to Mars. This program was cancelled in 2010 in the Obama Administration, and there has been nothing to replace it. I heard some criticism of Constellation, saying that it was ill-defined, and an expensive boondoggle, though it was defended by Neil Armstrong and Gene Cernan, two Apollo astronauts. Perhaps it was ill-defined, and a waste of money, but it felt sad to see the Space Shuttle program end, and to see that NASA didn’t have a way to get into low-earth orbit, or to the International Space Station. The original idea was to have the first stage of the Constellation program follow, after the space shuttles were retired. Now NASA has nothing but rockets to send out space probes and robotic rovers to bodies in space. Even the Curiosity rover mission, now on Mars, was largely developed during the Bush Administration, so I hear.

I have to remember at times that even in the 1970s, during my childhood, there was a lull in the manned space program. The Apollo program was ended in the Nixon Administration, before it was finished. There was a planned flight, with a rocket ready to go, to continue the program after Apollo 17, but it never left the ground. There’s a Saturn V rocket that was meant for one of the later missions that lies today as a display model on the grounds of the Kennedy Space Center. I have to remember as well that then, as now, the program was ended during a long, drawn-out war. Then, it was in Vietnam. Now, it’s in the Middle East.

Manned space flight ended for a time after the SL-4 mission to the Skylab space station in 1974. It didn’t begin again for another 7 years, with the first launch of the Space Shuttle. The difference is the Shuttle was first conceptualized towards the end of the Apollo program. It was there as a goal. Perhaps we are experiencing the same gap in manned flight now, though I don’t have a sense that NASA has a “next mission” in mind. As best I can tell the Obama Administration has tasked NASA with supporting private space flight. There is good reason to believe that private space flight companies will be able to send astronauts into low-earth orbit soon. That’s a consolation. The thing is that’s likely all they’re going to do in the future–launch to low-earth orbit. They’re at the stage that the Mercury program was more than 50 years ago.

What I ask is: do we have anything beyond this in mind? Do we have a sense of building on the gains in knowledge that have been made, to venture out beyond what we now know? I grew up being told that “humans want to explore, to push the boundaries of what we know.” I guess we still have that impulse, but maybe we’re directing it in new ways here on earth, rather than into space. I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence. Astrophysicist and vocal advocate for NASA Neil deGrasse Tyson has worried about this, too.

I realized a few years ago, to my dismay, that what really drove the creation of the space program, and our flights to the Moon, was not an ambition to push our frontiers of knowledge just for the sake of gaining knowledge. There was a major political aspect to it: beating the Soviets in “the space race” of the 1960s, establishing higher ground for ourselves, in a military sense. Yes, some very valuable scientific and engineering work was done in the process, but as Tyson would say, “science hitched a ride on another agenda.” That’s what it’s often done in human history. Many non-military benefits to our society flowed from what NASA once did, none of which are widely recognized today. Most people think that our technological development came from innovators in the private sector alone. The private sector did a lot, but they also drew from a tremendous resource in our space and defense research and development programs, as I’ve documented in earlier posts.

I’ll close with this great quote. It echoes what Tyson has said, though it’s fleshed out in an ethical sense, too, which I think is impressive.

The great enemy of the human race is ignorance. It’s what we don’t know that limits our progress. And everything that we learn, everything that we come to know, no matter how esoteric it seems, no matter how ivory tower-ish, will fit into the general picture a block in its proper place that in the end will make it possible for mankind to increase and grow; become more cosmic, if you wish; become more than a species on Earth, but become a species in the Universe, with capacities and abilities we can’t imagine now. Nor do I mean greater and greater consumption of energy, or more and more massive cities.

It’s so difficult to predict, because the most important advances are exactly in the directions that we now can’t conceive, but everything we now do, every advance in knowledge we now make, contributes to that. And just because I can’t see it, and I’m an expert at this, … doesn’t mean it isn’t there. And if we refuse to take those steps, because we don’t see what the future holds, all we’re making certain of is that the future won’t exist, and that we will stagnate forever. And this is a dreadful thought. And I am very tired when people ask me, “What’s the good of it,” because the proper answer is, “You may never know, but your grandchildren will.”

— Isaac Asimov, 1973, from the NASA film “Small Steps, Giant Strides”

Then as now, this is the lament of the scientist, I think. Scientists must ask society’s permission to explore, because they usually need funds from others to do their work, and there is no immediate payback to be had from it. For this reason, justifying the funding of that work is tough; scientific work goes outside the normal set of expectations people have about what is of value. If the benefits can’t be seen here and now, many wonder, “What’s the point?” What Asimov pointed out is that the pursuit of knowledge is its own reward, but to really gain its benefits you must be future-oriented. You have to think about and value the world in which your children and grandchildren will live, not just your own. If your focus is on the here and now, you will not value the future, and so potential future benefits of scientific research will not seem valuable, and therefore will not seem worthy of pursuit. It is a cultural mindset that is at issue.

Edit 12-10-2012: Going through some old articles I’d saved, I came upon this essay about humanity’s capacity for intellectual thought, called “Why is there Anti-Intellectualism?”, by Steven Dutch at the University of Wisconsin-Green Bay. It provides some reasonable counter-notions to my own that seem to confirm what I’ve seen, but will still take some contemplation on my part.

There’s no science in the article. In terms of quality, at best, I’d call this an “executive summary.” Maybe there’s more detailed research behind it, but I haven’t found it yet. Dutch uses heuristics to provide his points of comparison, and uses a notion of evidence to provide some meat to the bones. He asks some reasonable questions that are worth contemplating, challenging the notion that “humans are naturally curious, and strive to explore.” He then makes observations that seem to come from his own experience. Overall, he provides a reasonable basis for answering a statement I made in this article: “I wonder sometimes whether the scientific community fooled itself into believing this to justify its existence.” He comes down on the side of saying, in his opinion (paraphrasing), “Yes, some in the scientific community have fooled themselves on this issue.” He discusses the notion that “humans are naturally curious,” due to the behavior exhibited by children. He concludes by saying that children naturally display a shallow curiosity, which he calls “tinkering.” The harder task of creative, deep thought does not come naturally. It’s something that needs to be cultivated to take root. Hence the need for schools. The question I think we as citizens should be asking is whether our schools are actually doing this, or something else.

Read Full Post »

Jack Tramiel died on April 8, at the age of 83. This isn’t going to rank real high on the radar of too many people, but it was notable to me, because I remember a bit about Jack.

I don’t know much about Jack’s history, and the history of Commodore. What I remember is that Jack was a Polish immigrant. He founded Commodore Business Machines in the 1950s, as a typewriter parts company. It eventually got into selling electronic calculators. It got into the computer market in the late 1970s. I think its first computer was the Commodore PET. Jack later said that he didn’t get into the computer business because he particularly loved the concept. He just did it to make money.

The Commodore 64, from Wikipedia

While Apple Computer pioneered high-end personal computing, Commodore pioneered the low end of that market. Jack was, I think, the first to have the concept of profiting by selling computers in volume, at prices that consumers could afford. The company’s first popular, low-priced computer was the VIC-20. Its Commodore 64 computer was wildly popular. It was sold in toy and department stores, for what was then a bargain-basement price of about $550. It was the most widely sold computer of its era.

Tramiel was said to be ruthless, wanting to crush all his competitors. He largely succeeded at it. When I say this, you have to understand that back in the late 70s, up to the mid-80s, the computer market was really separated into the two strata of high-end and low-end. While there were people who bought high-end computers to use at home, most of them were bought by schools and businesses. At that time, computers like Commodore’s were mainly bought for use at home, and it mainly competed against other computer manufacturers in the home market. Commodore began to make a foray into the high-end market with its Amiga computer, which came out in 1985, but its influence was not as widespread in that market as was technology from IBM, Microsoft, and Apple.

The consumer division of Atari (which was owned by Warner Communications) and Commodore were fierce rivals in the low-end market. In a surprising move, Jack left Commodore in 1984 and bought Atari from Warner. He made a go of it with Atari for another 12 years, first coming out with the Atari ST computer, its first 16-bit model, and then other models like the TT030 and the Falcon 030, the last computer they made. Atari also made a foray into the high-end market with its 16- and 32-bit line, but it had a similar profile to Commodore’s Amiga. It was accepted as a niche machine.

Here’s a British interview I found on YouTube with Jack Tramiel from around 1984/85, introducing Atari’s new line of 8- and 16-bit machines.

When Jack bought Atari, there was some credence given to the idea that he would do for Atari what he had done for Commodore, making it a dominant player, crushing all its rivals. It didn’t even get close to that, at least in the U.S. Atari did very well for several years in Europe, becoming one of the dominant computer manufacturers there, but the U.S. market was already changing. By the time Tramiel bought the company, consumers were beginning to “standardize” on the IBM PC, and later PC clones, facilitated by Microsoft’s operating system, MS-DOS. Atari admitted defeat in the computer market in 1993, but continued to make a go of it in the consumer video game business, with the Atari Lynx color portable game system, and the Jaguar 64-bit console.

Commodore went into bankruptcy in 1994. Its intellectual property has since been acquired and used by a couple companies.

Atari was on its last legs in 1996. It had been whittled down to nothing, just a few employees. Atari’s intellectual property was sold to a disk drive manufacturer, JTS, that year. It was bought and sold a couple times after that. It eventually “landed” with a company called Infogrames around the year 2000. They changed their name to “Atari” in 2003, and continue to sell video games under the Atari label.

Tramiel went into retirement after selling Atari. He later joked, in a self-effacing way, “I wanted to destroy Atari, and I did!”

Well, anyway, I enjoyed Atari’s computers. I still have a 130XE and a Mega STe (both models from the Tramiel era) that I’ve kept in storage. Maybe one of these days I’ll donate my Mega STe to some computer museum that wants it. I’ve promised myself that one of these days I’m going to drag out the 130XE and transfer all my old Atari disks so I can run the old stuff on an emulator when I want to reminisce. I did that with my STe stuff about 10 years ago. Ah memories…

Related posts: Reminiscing, Part 4; A history of the Atari ST and Commodore Amiga

Read Full Post »

When President Obama gave his State of the Union address on January 25, 2011, I noticed he was criticized specifically for his so-called “Sputnik moment,” his call for “investment” in certain areas involving science and engineering. Here is some of what he said on that:

Half a century ago, when the Soviets beat us into space with the launch of a satellite called Sputnik, we had no idea how we would beat them to the moon. The science wasn’t even there yet. NASA didn’t exist. But after investing in better research and education, we didn’t just surpass the Soviets; we unleashed a wave of innovation that created new industries and millions of new jobs.

This is our generation’s Sputnik moment. Two years ago, I said that we needed to reach a level of research and development we haven’t seen since the height of the Space Race. And in a few weeks, I will be sending a budget to Congress that helps us meet that goal. We’ll invest in biomedical research, information technology, and especially clean energy technology, an investment that will strengthen our security, protect our planet, and create countless new jobs for our people.

People within the computer science, science, and engineering communities have been pining for a “Sputnik moment” for several years now, hoping that something, anything, will inspire our society to make math, science, engineering, and computer science a higher priority, and promote more funding for those fields in academia. It’s not just academics who are worried about this. The industries that count on having a supply of good scientists, engineers, and mathematicians to draw from in the U.S. are worried about the decline in interest among American students as well.

The hope for this “moment” is misguided, in my opinion, because what they’re really hoping for is something contrived. It also speaks to a lack of faith in America, that we cannot be inspired to pursue these fields without some foreign enemy that we feel we have to compete against. Obama kind of tried to create this “Sputnik moment” in his speech, talking about how the Chinese and South Koreans have better technology than we do. This is true. In the case of South Korea and Singapore, it’s been true for almost ten years. Back in 2002 I was hearing about how in Singapore people were reserving movie tickets for specific movies at specific theaters, and carrying out cash and credit card transactions at retail outlets, with their cell phones. All they needed to enter on their phone was their PIN. You can watch a Computer Chronicles episode where they talk about this. A couple years later I heard about how South Koreans had higher speed internet service than we did, back when the fastest broadband service to which most American consumers had access was probably 1.5 megabits per second. Koreans were already doing HD video conferencing as a matter of course. It’s come to this country just recently. I’m not convinced, though, that Americans are going to commit themselves to spending years studying technical subjects just to beat the Chinese and Koreans over issues like this.

Sputnik prompted a cultural realization in 1957 about the implications of the Soviets, our Cold War adversary, getting ahead of the U.S. in space technology–a realization driven primarily by anti-communist sentiment and concern for national security. The government didn’t have to tell people to be worried about this. In fact it was the citizenry that was banging on the doors of the government, demanding, “Why didn’t you see this coming,” and, “Why weren’t we first?” The Soviets had developed their own nuclear weapon in 1949, and now they had the ability to launch things into space. People made the connection that “they could drop nuclear bombs on us from space,” a strategic high ground that we were not even close to occupying at the time. Our own test rockets were blowing up on the launch pad on a regular basis. The feeling was the Soviets had beaten us in a game of one-upmanship, when we least expected it, and there was an alarming sense that we needed to “catch up.”

According to what we know from history, the so-called “missile gap” was a political myth. Nevertheless, something good was able to come out of this. Math and science, which our culture had largely ignored and neglected, became a higher priority. We produced more scientists and engineers. Some of them went into university research. Others went into private industry. A couple of by-products came out of this. One was a heightened interest in getting more women into computer science in the 1970s–an effort that actually succeeded at the time, but only temporarily. Another was the explosion in commercial personal computers in the late 1970s, which continued into the 1990s. There was a cultural understanding which said that learning about computers at a technical level, and how to use them, was important for our future.

That culture has changed. We value computer technology, because now it’s everywhere, and it’s hard not to learn about it, but now it’s all about learning the technology’s interface, not how to build it, or manipulate its internals. There is much less interest in what makes our technology possible, and in learning what’s necessary to drive it forward.

The criticism of Obama’s speech where he talks about our “Sputnik moment” has bothered me somewhat, because the critics missed the history of government research funding. We really shouldn’t dismiss that history, and let ourselves be ignorant of it, because I think this is one of the relatively few areas where government has done a good job, even with the occasional misguided notions that have been promoted through it under the label of “science.”

Here is some of what was said against Obama’s speech.

From Investors.com, published by Investor’s Business Daily, “Obama’s Tribute to Big Government”:

Are you impressed with the Internet? With your iPad? With that gadget on your car’s dashboard that gets you back on the Interstate after you get lost in a strange city?

Thank Washington, because according to the president Washington “planted the seeds for the Internet. That’s what helped make possible things like computer chips and GPS. Just think of all the good jobs — from manufacturing to retail — that have come from these breakthroughs.”

If you don’t remember Presidents Carter or Reagan or Clinton bragging about the initiatives of their administrations that would one day bear the fruit of global social networks like Facebook, handheld communication devices like BlackBerrys, and the mobility revolution of Wi-Fi, it’s because they didn’t.

Government didn’t bring us any of those things; the private sector did. Does anyone really believe the federal government — which can’t find, let alone police, the 13 million-plus illegal aliens within our borders — can find and finance entrepreneurs like Bill Gates and Steve Jobs while they are in the embryonic stages of their careers and make them successes?

The part of Obama’s speech that was quoted makes it sound like he was making a bit of a non sequitur. Here is the full quote:

Our free enterprise system is what drives innovation. But because it’s not always profitable for companies to invest in basic research, throughout our history, our government has provided cutting-edge scientists and inventors with the support that they need. That’s what planted the seeds for the Internet. That’s what helped make possible things like computer chips and GPS. Just think of all the good jobs — from manufacturing to retail — that have come from these breakthroughs.

On these points, Obama was right, but it takes an understanding of the history of this technology to get that. Defense spending was an important part of establishing the internet (through ARPA). It should be noted that the internet I refer to here is not the web, but rather the basic hardware and software infrastructure that the web uses in order to work. NASA provided R&D funding for the development of computer chips around the time they were invented, and without defense spending there would not be a GPS satellite system, which is essential for GPS technology to work here on the ground. It should be noted that the technology for GPS was initially used exclusively by the military, the technology for which was kept secret, and I think was only made available for consumer use within the last 10+ years. These have been essential technology platforms for the consumer products and services we use.

Just a side note: The example of the level of illegal immigration exhibiting government incompetence is a bad one. It’s not that our government can’t police and regulate this better. It’s that it doesn’t want to, and there are interests in this country who like it that way. Anyway…

There was also this from Glenn Beck shortly after the speech:

The President also announced he would be “investing” in biomedical research. Can anyone find that one in the Constitution? Oh, and information technology, especially clean energy technology. Alright. Let’s take these apart one at a time, shall we?

Biomedical: Maybe it’s just me as an American citizen, but I don’t want the government now dabbling with creating our drugs. Period.

Information technology? I dunno. I thought we were doing pretty good with Steve Jobs. Bill Gates [is] doing good, you know. Newcomers [are] coming into that field all the time. I think we’re good! Have you seen the iPhone…next to the Post Office? I’m going to go with Apple on that one.

After seeing these comebacks from conservatives, I shook my head a bit in disgust. To me, these people clearly didn’t understand what they were talking about with regard to how the products we see today came to be. Well, I hope to remedy that a bit in this series.

Obama wasn’t talking about starting a government-owned biomedical or IT company, or the government becoming a venture capitalist that would fund startups (though, as you’ll see later in this series, the government did that in a round-about way during the “middle stage” of the internet’s development), but rather that the government would provide funding for basic scientific research of the same type that developed the technology platforms which Facebook, Twitter, GPS devices, smartphones, and PCs were able to build upon, or were able to use. It’s not a stretch at all to say that most of these platforms would not exist today, or would’ve come later, had government-financed research not delved into creating them.

I will use several sources for this series. The primary one is “The Dream Machine,” by M. Mitchell Waldrop. He tells the story of where the basis–the ideas and infrastructure–for a lot of the technology we use today came from. Other sources I will use are Wikipedia, historical videos I’ve found on the internet, and a documentary called “The Machine That Changed The World.” Occasionally I will cite anecdotes I have heard from people who were either involved in this work, or who have been close to the people who were.

“The Dream Machine” follows the career of one man, J.C.R. Licklider, and several other computing luminaries, who made great contributions to the field of computer science, and particularly to the goal of creating interactive, networked computing, the kind of computing we do every day when we use technology.

What follows is a series of posts on this subject. I’ve broken it up into parts, because it’s extensive. The goal of what I’m writing here is not to be an authoritative source, but to provide a primer which you, the reader, can use to further your own research of this topic, if you feel so inclined. I will not cover the full history of computer technology (though there is still a lot here). What I will cover is the history of government-funded research into certain computer technologies, and what flowed from that research. I’ve made a point to try as much as I can to include the contributions of private companies that were created from government projects, or which worked with government agencies, or were beneficiaries of ideas generated by government-sponsored research, and which were critical in bringing us the technology we use today.

Getting computing off the ground

The first technological breakthrough described in “The Dream Machine” was ENIAC (the Electronic Numerical Integrator And Computer), the first general-purpose computer. There were other computers invented by other creators before this, but they were designed for a specific purpose. It was either impossible, or very difficult, to program them for other purposes. This computer could be programmed to use a variety of calculation methods.

J. Presper Eckert, from the Computer History Museum

ENIAC was designed by J. Presper Eckert and John Mauchly (pronounced “mock-lee”). It was a digital computer, and its electronics were made up of vacuum tubes, also known as “valves.” It was a government project, financed by the U.S. Army, at the Moore School of Electrical Engineering at the University of Pennsylvania, during WW II. It was completed in 1946, and took up a whole room. It was also the first high-profile project where women were important to the operation of a computer. They were called “Rosies,” and they were the programmers of ENIAC. This was no mean feat for anyone to do, since no programming language existed. It was programmed via panels of “switches” (though the controls were actually dials), and the only code the programmers had to read was wiring diagrams. Before this, the “Rosies” had worked as human computers, figuring firing tables for artillery shells for the war effort by hand, with the help of mechanical calculators. Human computation was a problem, though. Errors were inevitable, and they might go undetected. It took a long time for people to compute tables. The sense of the time was they needed the firing tables yesterday. They couldn’t produce them fast enough. So building an electronic computer to do this work was seen as essential. As was the case in all sorts of male-dominated fields of the day, since most of the young men in the country were off fighting the war, women were brought in to do both kinds of work (computation and programming), and they showed themselves to be up to the task, though their contribution went unrecognized by society for many years.

John Mauchly, from the Computer History Museum

There’s a new documentary out on DVD now called “Top Secret Rosies: The Female Computers of World War II” (h/t to Mark Guzdial) for anyone who’s interested in learning more about this. There’s also a segment on this history in the documentary, “The Machine That Changed The World,” in its first episode, called “Great Brains.”

The following video of ENIAC, and the people who operated it, is National Archives footage digitized to the web. Note: There is no sound in this video. Eckert and Mauchly appear in the video at 4:55.

The blinking light display on the ENIAC was set up solely for the computer’s public unveiling. The display is a panel of lights covered with ping pong balls, with numbers painted on them. This panel was not used for normal operation. Nevertheless, it established the precedent of computers having panels of blinking lights.

There were two limitations with ENIAC. One was, while it could be programmed, it could not store its program. Programming was a matter of “wiring” the program into the machine, which was accomplished by adjusting the aforementioned panels of dials. Second, it was not interactive. The computer loaded data into itself, processed it, and produced printed tables of calculations. That was it. This would more or less become the template for most of the computers that would be produced and used for the next 25 years.

Taking what they had learned from building ENIAC, Eckert and Mauchly formed the world’s first computer company, the Electronic Control Company, later called the Eckert-Mauchly Computer Corporation, in 1946. They developed the Univac (the Universal Automatic Computer). The following ad for Univac was produced after the Eckert and Mauchly company was bought by Remington Rand in 1950.

Univac was made famous when it was featured on a CBS News broadcast, giving the first statistical computer prediction for an election, in the 1952 presidential race. The prediction was based on a sample of actual vote tallies on election night, and was very close to the actual result, though CBS did not air the prediction, because it differed significantly from polls taken before the election.

IBM decided to get into the computer business after hearing requests for computers from the U.S. military during the Korean War. There was resistance within IBM to getting into this new market. Nevertheless, it produced its first computer model, the 701, in 1953. Its design was based on Dr. John von Neumann’s paper, “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument,” which documented his work in creating a computer that could store its program, at the Institute for Advanced Study, in Princeton.

IBM came to dominate the field with its superior marketing, and backward compatibility with the punch card system of its older mechanical tabulation machines.

The first glimmers of interactive computing

What I covered above is one path that computers took, which came to dominate computer use for a long time. Another path that was very significant, though it was not enjoyed by the public at the time, was interactive computing. For many years, interactive computing would only be known in the military. Eventually it would come to be known by the whole world.

Jay Forrester, from Wikipedia

Going back in time to before ENIAC was created, the Navy wanted to build a flight simulator for pilot training. They funded a project at MIT called Whirlwind, starting in 1944. It was designed by Jay Forrester. The system he ultimately built was the first real-time digital computer. Like ENIAC, it used vacuum tubes. What was different was it gave constant feedback to an operator (the pilot trainee) as it received input (movements of a joystick by the trainee), and it gave the operator something to look at, a simple graphical display, generated using an oscilloscope.

Whirlwind was never really “finished,” though it was put into military service in 1951. The scientists and engineers who worked on it were constantly tinkering with it, and changing its purpose. Under “Project Claude,” Whirlwind was tasked with displaying the positions of a real drone and a real aircraft flying in the sky, taking radar data as input in real time, allowing the operator to use the joystick that was part of the computer’s setup to remotely control the drone. The goal was to see if the person sitting at the computer could use the equipment to pilot the drone to intercept the aircraft. In tests it worked. This is not unlike the way military drones are piloted today.

A government-deputized committee that was formed shortly after the Soviets tested their first nuclear bomb in 1949, headed by MIT physicist George Valley, saw Whirlwind in action in 1950. It was a decisive proof-of-concept for them about what computers were capable of, and how they could be used for air defense. The Valley committee was tasked with assessing our existing defenses in the face of a Soviet nuclear threat, and making recommendations about any changes that were needed. They found that our defenses were woefully inadequate for defending against a bomber raid, the only major threat that was imagined at the time.

The committee’s report was delivered to the Pentagon in 1950. It recommended that the government install more radar stations to fill in the gaps in the radar net, and that an automated radar tracking and response system–a computer system–was needed. Seeing Whirlwind in action gave them confidence that this was possible. The U.S. government was very nervous about the situation with the Soviets. The thinking was the cold war could turn into a hot war quickly, and so they accepted the Valley committee’s recommendations without question, even though this was a radical, new idea at the time, only supported by experimental technology.

A man by the name of J.C.R. Licklider comes into the story here. He was a psychologist with a background in mathematics and physics, and he was very interested in computers. He is a major figure in the history that follows.

Licklider came to MIT in 1950 to continue his work in psychology. While there, he developed custom-built analog computers to simulate isolated neurological activity in the human brain, as part of his research. He also contributed to the SAGE (Semi-Automatic Ground Environment) project in 1951, working on human factors research for the displays that operators would be working with. He helped determine when and how much information should be displayed to the operators, and what the intensity of the displays should be, so that the operators would be comfortable, and would not be overwhelmed with information, so they could make clear decisions.

SAGE was, in essence, the system that was recommended in the Valley Committee report. It was designed to be a real-time system, like Whirlwind, which would monitor our defensive airspace for Soviet bombers, and our own air force flights. It was developed at MIT, with several corporations contributing to its construction and planning, including IBM, System Development Corporation (which was spun off from RAND Corp., and which managed all of the programmers for the SAGE project), Burroughs, and Western Electric.

The programming task for SAGE was very complex. It was estimated they needed 2,000 programmers. There weren’t enough in the whole country to work on it. So the SAGE project undertook an education program to recruit and train new programmers. All comers were invited, men and women, from all walks of life. It was not easy to tell who the best programmers would be. Scientists, engineers, mathematicians, the typical candidates one would think were ideal, were not sure bets. Music teachers tended to be the best candidates. It was found that women were better than men at paying attention to minute details while simultaneously not losing track of the big picture. One of the best programming groups in the project was 80% female.

The Navy lost interest in Whirlwind in 1954, and stopped work on it. The Air Force began construction of SAGE installations that same year. The SAGE system was brought online in 1958. Each installation was huge, due to the fact that, like ENIAC, its electronics were made up of large arrays of vacuum tubes. It took up a few floors of a building at each installation. Each one had two computers, so that one could always be taken down for maintenance if need be. It was networked so that information gathered about detected objects in the sky could be relayed to other stations. The first phone modems were created during this project, for this information transfer.

SAGE was fully built by 1963. The thing was, it was obsolete by this point. ICBMs (Inter-Continental Ballistic Missiles) had been developed, and the nuclear threat situation had become more a matter of nuclear deterrence than strategic defense. SAGE could not deal with ICBMs at all. Still, Whirlwind and SAGE would have a huge impact on the future of computing in the minds of the scientists and engineers who worked on them. SAGE was the first big proof-of-concept that interactive, networked computing was a viable concept. Still, it would take a lot of convincing to get more players in the computer industry to believe in it. Computers were still big and expensive, and computer time was expensive and precious. The idea of interactive computing was seen as pie in the sky by many–a waste of time, and SAGE was the exception that proved the rule that computers were only for data processing, not for interaction.

What came of it

One of MIT’s working groups on the SAGE project was spun off as the MITRE Corporation in 1958. Its purpose was to integrate new weapons systems into SAGE. In addition, in the coming years MITRE would develop the National Airspace System for air traffic control, which is still in use today, and AWACS (the Airborne Warning and Control System) (sources: Wikipedia, The MITRE Digest).

The SAGE project enhanced IBM’s ability to deliver computer systems that could handle big, real-time data processing projects, because their staff had learned how to do it through the experience of working on this project, and by bringing experienced people in from places like MIT. IBM developed SABRE (the Semi-Automated Business Research Environment) for American Airlines. It was a simplified commercial version of SAGE, but was designed for making airline reservations. The first experimental SABRE system went online in 1960. It was the largest real-time commercial data processing network in the world at the time. It was ultimately linked by phone lines to 1,200 teletype terminals across the country.

American made SABRE available to independent travel agents in 1976, and in the year 2000 spun it off as SABRE Holdings, which exists today. It’s divided into four business units: Travelocity (the e-commerce site for people to make their own travel reservations), Sabre Travel Network (a global distribution system providing travel information to agencies, corporations, and travelers), Sabre Airline Solutions (providing airline reservations systems and financial management), and Sabre Hospitality Solutions (providing technology solutions to hotels).

The SAGE system continued to be used by the military until 1984. Fortunately it was never tested in combat.

J. Presper Eckert, a co-inventor of the ENIAC, became an executive with Remington Rand when the Eckert-Mauchly Computer Corporation was purchased by Rand in 1950. He stayed on through the company’s many transitions. Remington Rand was purchased by Sperry in 1955, and went by the name of Sperry Rand, and then Sperry, until the company merged with Burroughs to form Unisys in 1986. It is still in existence today. Eckert retired from Unisys in 1989. He died on June 3, 1995.

John Mauchly, co-inventor of the ENIAC, became a founding member, and president, of the Association for Computing Machinery (ACM), and helped found the Society for Industrial and Applied Mathematics (SIAM). He stayed on with Remington Rand for 10 years after it bought his and Eckert’s company, leaving in 1959 to form the consulting firm Mauchly Associates. He later formed another consulting firm, Dynatrend, in 1967. He died on January 8, 1980. (sources: Wikipedia, the Association for Computing Machinery, Ohio History Central)

Jay Forrester, creator of Whirlwind, moved to the Sloan School of Management at MIT in 1956, leaving the development of digital computing for good. He created a new field of research called “system dynamics,” which looks at the interactions of objects in dynamic systems, with an emphasis on simulation of those systems. He developed ideas which have led to modern notions of supply chain management. (Update 11/21/2016: He died on November 16, 2016. Source: New York Times)

In Part 2, I describe the efforts that began in the 1960s to bring interactive computing to the masses.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

I heard earlier this month that Google Wave was going to be shut down at the end of the year, because it was not a hit with academia (h/t to Mark Guzdial). I was surprised. What was interesting and rather shocking is one of the reasons it failed was that academics couldn’t figure out how to use it. They said it was too complex. In other words, the product required some training. I’ve used Wave and it’s about as easy to use as an e-mail application, though there are some extra concepts that need to be acquired to use it. I saw it as kind of a cross between e-mail and teleconferencing. Maybe if people had been told this they would’ve been able to pick up the concepts more easily. It’s hard for me to say.

I thought for sure Wave would be a hit somewhere. I assumed business and bloggers would adopt it. The local Lisp user group I’ve been attending has been using Wave for several months now, and we’ve made it an integral part of our meetings. I think we’ll get along without it, but we were sorry to hear that it’s going away.

As I’ve reflected on it I’ve realized that Wave has some significant shortcomings, though it was a promising technology. I first heard about it in a comment that was left on my blog a year ago in response to my post “Does computer science have a future?” Come to think of it, Google Wave’s fate is an interesting commentary on that topic. Anyway, I watched Google’s feature video where it was debuted. It reminded me a bit of Engelbart’s NLS from 1968, though it really didn’t hold a candle to NLS. It had a few similarities I could recognize, but its vision was much more limited. As the source article for this story reveals, the Google Wave team used the idea of extending the concept of e-mail as their inspiration, adding idioms of instant messaging to the metaphor. That explains some things. What I think is a shame is they could’ve done so much more with it. One thing I could think of right off the bat was having the ability to link between waves, and entries in different waves, so that people could refer back and forth to previous/subsequent conversations on related topics. I mean, how hard would that have been to add? In business terms wouldn’t this have “added value” to the product concept? What about bringing over an innovation from NLS and adding video conferencing? Maybe businesses would’ve found Wave appealing if it had some of these things. It’s hard to say, but apparently the Wave team didn’t think of them.

At our Lisp user group meeting last week we got to talking about the problems with Wave after we learned that it was going away. A topic that came up was that Google realized Wave was going to raise costs in computing resources, given the way it was designed, as a client/server model. All Waves originated on Google’s servers. An example I heard thrown out was that if one person had 10,000 friends all signed up on one wave, and they were all on there at the same time, every single keystroke made by one person would cause Google’s server to have to send the update to every one of the people signed up for the wave–10,000 people. There was something about how they couldn’t monetize the service effectively as well, especially for that kind of load.
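
To put a rough number on that kind of load (my own back-of-envelope figures, not Google’s), here is a quick sketch of how a central relay’s outbound traffic grows with the number of participants on a wave:

# Back-of-envelope sketch of fan-out in a centrally relayed shared document.
# Every keystroke goes up to the server once, and the server pushes it back
# out to every participant on the wave. The numbers here are made up.

participants = 10_000        # people signed up on one wave
keystrokes_per_minute = 200  # one reasonably fast typist

outbound_updates_per_minute = keystrokes_per_minute * participants
print(outbound_updates_per_minute)  # 2,000,000 updates the server must push out

Even at a modest typing rate, the relay’s work grows with every additional participant, which fits with the monetization worry.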

As they talked about this it reminded me of what Alan Kay said about Engelbart’s NLS system at ETech 2003. NLS had a similar client/server system design to Google Wave. I remember he said that the scaling factor with NLS was 2N, because of its system architecture. This clearly was not going to scale well. It was perhaps because of this that several members of Engelbart’s team left to go work on the concept of personal computing at Xerox PARC. What they were after at PARC was a network scaling factor of N², and the idea was they would accomplish this using a peer-to-peer architecture. I speculated that this tied in with the invention of Ethernet at PARC, which is a peer-to-peer networking scheme (protocols like TCP/IP typically run on top of it today). I remember Kay and a fellow engineer showed off the concept with Croquet at ETech. It was quite relevant to what I talk about here, actually, because the demo showed how updates of multiple graphical objects on one Squeak instance (Croquet was written in Squeak) were propagated very quickly to another instance over the network, without a mediating central server. And the scheme scaled to multiple computers on the same network.
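
Here is how I read the 2N-versus-N² comparison (my own gloss, not Kay’s numbers): connections through a central hub grow linearly with the number of users, while the potential direct connections among peers grow quadratically.

# My gloss on the scaling comparison: links through a central hub grow
# roughly linearly with users (say, one up and one down per user), while
# potential peer-to-peer connections grow as N * (N - 1) / 2 distinct pairs.

for n in (10, 100, 1000):
    hub_links = 2 * n
    peer_pairs = n * (n - 1) // 2
    print(n, hub_links, peer_pairs)
# prints: 10 20 45, then 100 200 4950, then 1000 2000 499500

That, as I understand it, was the appeal of the peer-to-peer approach Croquet demonstrated.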

Google Wave’s story is a sorry tale, a clear case where the industry’s collective ignorance of computer history has hurt technological progress. In this case it’s possible that one reason Wave did not go well is that the design team did not learn lessons that had already been learned some 38 years earlier.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »

In my guest post on Paul Murphy’s blog called “The PC vision was lost from the get go” I spoke to the concept, which Alan Kay had going back to the 1970s, that the personal computer is a new medium, like the book at the time the technology for the printing press was brought to Europe, around 1439 (I also spoke some about this in “Reminiscing, Part 6”). Kay made this realization upon witnessing Seymour Papert’s Logo system being used with children. More recently, Kay has spoken with 20/20 hindsight about how, as with the book historically, people have been missing what’s powerful about computing: like the early users of the printing press, we’ve been automating and reproducing old media onto the new medium. We’re even automating old processes with it that were meant for an era that’s gone.

Kay spoke about the evolution of thought about the power of the printing press in one or two of his speeches entitled The Computer Revolution Hasn’t Happened Yet. In them he said that after Gutenberg brought the technology of the printing press to Europe, the first use found for it was to automate the process of copying books. Before the printing press books were copied by hand. It was a laborious process, and it made books expensive. Only the wealthy could afford them. In a documentary mini-series that came out around 1992 called “The Machine That Changed The World,” I remember an episode called “The Paperback Computer.” It said that there were such things as libraries, going back hundreds of years, but that all of the books were chained to their shelves. Books were made available to the public, but people had to read the books at the library. They could not check them out as we do now, because they were too valuable. Likewise today, with some exceptions to promote mobility, we “chain” computers to desks or some other anchored surface to secure them, because they’re too valuable.

Kay has said in his recent speeches that there were a few rare people during the early years of the printing press who saw its potential as a new emerging medium. Most of the people who knew about it at the time did not see this. They only saw it as, “Oh good! Remember how we used to have to copy the Bible by hand? Now we can print hundreds of them for a fraction of the cost.” They didn’t see it as an avenue for thinking new ideas. They saw it as a labor saving device for doing what they had been doing for hundreds of years. This view of the printing press predominated for more than 100 years still. Eventually a generation grew up not knowing the old toils of copying books by hand. They saw that with the printing press’s ability to disseminate information and narratives widely, it could be a powerful new tool for sharing ideas and arguments. Once literacy began to spread, what flowed from that was the revolution of democracy. People literally changed how they thought. Kay said that before this time people appealed to authority figures to find out what was true and what they should do, whether they be the king, the pope, etc. When the power of the printing press was realized, people began appealing instead to rational argument as the authority. It was this crucial step that made democracy possible. This alone did not do the trick. There were other factors at play as well, but this I think was a fundamental first step.

Kay has believed for years that the computer is a powerful new medium, but in order for its power to be realized we have to perceive it in a way that enables it to be powerful to us. If we see it only as a way to automate old media (text, graphics, animation, audio, video) and old processes (data processing, filing, etc.), then we aren’t getting it. Yes, automating old media and processes enables powerful things to happen in our society via efficiency. It further democratizes old media and modes of thought, but it’s like addressing just the tip of the iceberg. This brings the title of Alan Kay’s speeches into clear focus: The computer revolution hasn’t happened yet.

Below is a talk Alan Kay gave at TED (Technology, Entertainment, Design) in 2007, which I think gives some good background on what he would like to see this new medium address:

“A man must learn on this principle, that he is far removed from the truth” – Democritus

Squeak in and of itself will not automatically get you smarter students. Technology does not really change minds. The power of EToys comes from an educational approach that promotes exploration, called constructivism. Squeak/EToys creates a “medium to think with.” What the documentary “Squeakers” makes clear is that EToys is a tool, like a lever, that makes this approach more powerful, because it enables math and science to be taught better using this technique. (Update 10/12/08: I should add that whenever the nature of Squeak is brought up in discussion, Alan Kay says that it’s more like an instrument, one with which you can “mess around” and “play,” or produce serious art. I wrote about this discussion, which took place a couple of years ago, and said that we often don’t associate “power” with instruments, because we think of them as elegant but fragile. Perhaps I just don’t understand at this point. I see Squeak as powerful, but I still don’t think of an instrument as “powerful.” Hence the reason I used the term “tool” in this context.)

From what I’ve read in the past, constructivism has gotten a bad reputation, I think primarily because it’s fallen prey to ideologies. The goal of constructivism as Kay has used it is not total discovery-based learning, where you just tell the kids, with no guidance, “Okay, go do something and see what you find out.” What this video shows is that teachers who use this method lead students to certain subjects, give them some things to work with within the subject domain, things they can explore, and then set them loose to discover something about them. The idea is that by the act of discovery through experimentation (i.e., play) the child learns concepts better than if they are spoon-fed the information. There is guidance from the teacher, but the teacher does not lead them down the garden path to the answer. The children do some of the work to discover the answers themselves, once a focus has been established. And the answer is not just “the right answer” as is often called for in traditional education, but what the student learned and how the student thought in order to get it.

Learning to learn; learning to think; learning the critical concepts that have gotten us to this point in our civilization is what education should be about. Understanding is just as important as the result that flows from it. I know this is all easier said than done with the current state of affairs, but it helps to have ideals that are held up as goals. Otherwise what will motivate us to improve?

What Kay thinks, and is convinced by the results he’s seen, is that the computer can enable children of young ages to grasp concepts that would be impossible for them to get otherwise. This keys right into a philosophy of computing that J.C.R. Licklider pioneered in the 1960s: human-computer symbiosis (“man-computer symbiosis,” as he called it). Through a “coupling” of humans and computers, the human mind can think about ideas it had heretofore not been able to think. The philosophers of symbiosis see our world becoming ever more complex, so much so that we are at risk of it becoming incomprehensible and getting away from us. I personally have seen evidence of that in the last several years, particularly because of the spread of computers in our society and around the world. The linchpin of this philosophy is, as Kay has said recently, “The human mind does not scale.” Computers have the power to make this complexity comprehensible. Kay has said that the reason the computer has this power is it’s the first technology humans have developed that is like the human mind.

Expanding the idea

Kay has been focused on using this idea to “amp up” education, to help children understand math and science concepts sooner than they would in the traditional education system. But this concept is not limited to children and education. This is a concept that I think needs to spread to computing for teenagers and adults. I believe it should expand beyond the borders of education, to business computing, and the wider society. Kay is doing the work of trying to “incubate” this kind of culture in young students, which is the right place to start.

In the business computing realm, if this is going to happen we are going to have to view business in the presence of computers differently. I believe for this to happen we are going to have to literally think of our computers as simulators of “business models.” I don’t think the current definition of “business model” (a business plan) really fits what I’m talking about. I don’t want to confuse people. I’m thinking along the lines of schema and entities, forming relationships which are dynamic and therefore late-bound, but with an allowance for policy to govern what can change and how, with the end goal of helping business be more fluid and adaptive. Tying it all together, I would like to see a computing system that enables the business to form its own computing language and terminology for specifying these structures so that as the business grows it can develop “literature” about itself, which can be used both by people who are steeped in the company’s history and current practices, and those who are new to the company and trying to learn about it.
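
To make that a little more concrete, here is a tiny, hypothetical sketch of what I mean by late-bound entities and relationships governed by policy. Every name in it is invented for illustration; it is not a design.

# Hypothetical sketch: entities and relationships are plain data created at
# runtime (late-bound), and a policy hook decides whether a proposed change
# to the "business model" is allowed. All names are made up for illustration.

class Entity:
    def __init__(self, kind, **attrs):
        self.kind = kind
        self.attrs = dict(attrs)   # attributes are just data, added at runtime
        self.relations = {}        # relation name -> list of related entities

def allow_change(entity, relation_name):
    # Stand-in for business policy: say invoices may only gain "payment"
    # relationships, while other kinds of entities can change freely.
    return not (entity.kind == "invoice" and relation_name != "payment")

def relate(a, name, b):
    if allow_change(a, name):
        a.relations.setdefault(name, []).append(b)

customer = Entity("customer", name="Acme Co.")
order = Entity("order", total=120.0)
relate(customer, "orders", order)              # allowed by the policy above
print(customer.relations["orders"][0].attrs)   # {'total': 120.0}

The point is not this particular code; it’s that the shape of the model is data that can change while the system runs, and the policy layer is where business judgment lives.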

What this requires is computing (some would say “informatics”) literacy on the part of the participants. We are a far cry from that today. There are millions of people who know how to program at some level, but the vast majority of people still do not. We are in the “Middle Ages” of IT. Alan Kay said that Smalltalk, when it was invented in the 1970s, was akin to Gothic architecture. As old as that sounds, it’s more advanced than what a lot of us are using today. We programmers, in some cases, are like the ancient pyramid builders. In others, we’re like the scribes of old.

This powerful idea of computing, that it is a medium, should come to be the norm for the majority of our society. I don’t know how yet, but if Kay is right that the computer is truly a new medium, then it should one day become as universal and influential as books, magazines, and newspapers have been historically.

In the “Reminiscing” post I referred to above, I talked about the fact that even though we appeal more now to rational argument than we did hundreds of years ago, we still get the information we trust from authorities (called experts). I said that what I think Kay would like to see happen is that people will use this powerful medium to take information about some phenomenon that’s happening, form a model of it, and, by watching it play out, inform themselves about it. Rather than appealing to experts, they can understand what the experts see, but see it for themselves. By this I mean that they can manipulate the model to play out other scenarios that they see as relevant. This could be done in a collaborative environment, so that models could be checked against each other and, most importantly, against the real world. What I said, though, is that this would require a different concept of what it means to be literate, and a different model of education and research.
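As a tiny illustration of what I mean by “form a model of it and watch it play out” (this is a toy of my own, not something from Kay or the Squeak/EToys work; the function run_epidemic and its numbers are made up), here is a sketch where changing one assumption lets you explore a scenario yourself instead of only taking an expert’s chart on faith:

```python
# A toy SIR-style epidemic model: a minimal, hypothetical example of "playing
# out a scenario" with a model rather than relying solely on a summary.

def run_epidemic(population=1000, infected=1, contact_rate=0.3,
                 recovery_rate=0.1, days=120):
    susceptible = population - infected
    recovered = 0
    history = []
    for _ in range(days):
        newly_infected = contact_rate * susceptible * infected / population
        newly_recovered = recovery_rate * infected
        susceptible -= newly_infected
        infected += newly_infected - newly_recovered
        recovered += newly_recovered
        history.append(round(infected))
    return history

if __name__ == "__main__":
    # Two scenarios: change one assumption and watch how the peak moves.
    for rate in (0.3, 0.2):
        curve = run_epidemic(contact_rate=rate)
        peak = max(curve)
        print(f"contact_rate={rate}: peak of {peak} infected on day {curve.index(peak) + 1}")
```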

This is all years down the road, probably decades. The evolution of computing moves slowly in our society, and our methods of education haven’t changed much in 100 years. The truth is that the future, up to a certain point, has already been invented, and continues to be invented, but most people are not perceptive enough to recognize that, and “old ways die hard,” as the saying goes. Alan Kay once told me that “the greatest ideas can be written in the sky” and people still won’t understand or adopt them. It’s only the poor ideas that get copied readily.

I recently read that the Squeakland site has been updated (it looks beautiful!), and that a new version of the Squeakland version of Squeak has been released on it. They are now just calling it “EToys,” and they’ve dropped the Squeak name. Squeak.org is still up and running, and they are still making their own releases of Squeak. As I’ve said earlier, the Squeakland version is configured for educational purposes. The squeak.org version is primarily used by professional Smalltalk developers. Last I checked it still has a version of EToys on it, too.

Edit: As I was writing this post I went searching for material for my “programmers” and “scribes” reference. I came upon one of Chris Crawford’s essays. I skimmed it when I wrote this post, but I reread it later, and it’s amazing! (Update 11/15/2012: I had a link to it, but it’s broken, and I can’t find the essay anymore.) It caused me to reconsider my statement that we are in the “Middle Ages” of IT. Perhaps we’re at a more primitive point than that. It adds another dimension to what I say here about the computer as medium, but it also expounds on what programming brings to the table culturally.

Here is an excerpt from Crawford’s essay. It’s powerful because it surveys the whole scene:

So here we have in programming a new language, a new form of writing, that supports a new way of thinking. We should therefore expect it to enable a dramatic new view of the universe. But before we get carried away with wild notions of a new Western civilization, a latter-day Athens with modern Platos and Aristotles, we need to recognize that we lack one of the crucial factors in the original Greek efflorescence: an alphabet. Remember, writing was invented long before the Greeks, but it was so difficult to learn that its use was restricted to an elite class of scribes who had nothing interesting to say. And we have exactly the same situation today. Programming is confined to an elite class of programmers. Just like the scribes, they are highly paid. Just like the scribes, they exercise great control over all the ancillary uses of their craft. Just like the scribes, they are the object of some disdain — after all, if programming were really that noble, would you admit to being unable to program? And just like the scribes, they don’t have a damn thing to say to the world — they want only to piddle around with their medium and make it do cute things.

My analogy runs deep. I have always been disturbed by the realization that the Egyptian scribes practiced their art for several thousand years without ever writing down anything really interesting. Amid all the mountains of hieroglyphics we have retrieved from that era, with literally gigabytes of information about gods, goddesses, pharaohs, conquests, taxes, and so forth, there is almost nothing of personal interest from the scribes themselves. No gripes about the lousy pay, no office jokes, no mentions of family or loved ones — and certainly no discussions of philosophy, mathematics, art, drama, or any of the other things that the Greeks blathered away about endlessly. Compare the hieroglyphics of the Egyptians with the writings of the Greeks and the difference that leaps out at you is humanity.

You can see the same thing in the output of the current generation of programmers, especially in the field of computer games. It’s lifeless. Sure, their stuff is technically very good, but it’s like the Egyptian statuary: technically very impressive, but the faces stare blankly, whereas Greek statuary ripples with the power of life.

What we need is a means of democratizing programming, of taking it out of the soulless hands of the programmers and putting it into the hands of a wider range of talents.

—Mark Miller, https://tekkie.wordpress.com


Paul Murphy saw fit to give me another guest spot on his blog, this one called “The tattered history of OOP,” talking about the history of OOP practice, where the idea came from, and how industry has implemented it. If you’ve been reading my blog this will probably be review. I’m just spreading the message a little wider.

Paul has an interesting take on the subject. He thinks OOP is a failure in practice because, the way it’s been implemented, it’s just another way to make procedure calls. I agree with him for the most part. He’s informed me that he’s going to put up another post soon that gets further into why he thinks OOP is a failure. I’ll update this post when that’s available.

In short, where I’m coming from is that OOP, in the original vision that was created at Xerox PARC, still has promise. The current implementation that most developers use has architectural problems that the PARC version did not, and it still promotes a mode of thinking that’s compatible with procedural programming.
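To illustrate the distinction I’m drawing, here is purely my own sketch (not code from my guest post or from Murphy’s blog; Python stands in for Smalltalk, and MessageReceiver and message_not_understood are hypothetical names): compare a method call whose meaning is fixed by the class definition with an object that receives arbitrary messages and decides at runtime how to respond, in the spirit of Smalltalk’s doesNotUnderstand:.

```python
# My own illustration of "procedure call in disguise" vs. an object that
# receives messages and decides for itself how to respond.

class Account:
    """Conventional style: obj.deposit(x) is effectively a procedure call
    whose target is fixed by the class definition."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


class MessageReceiver:
    """Closer to the Smalltalk idea: the object intercepts any message sent
    to it and chooses how to handle it at runtime (late binding)."""
    def __init__(self):
        self._handlers = {}

    def on(self, selector, handler):
        self._handlers[selector] = handler

    def __getattr__(self, selector):
        # Called only when no ordinary attribute exists: the receiver,
        # not the caller, decides what the message means.
        def send(*args):
            handler = self._handlers.get(selector)
            if handler is None:
                return self.message_not_understood(selector, args)
            return handler(*args)
        return send

    def message_not_understood(self, selector, args):
        # Analogous in spirit to Smalltalk's doesNotUnderstand:
        print(f"no handler for #{selector} with arguments {args}")


if __name__ == "__main__":
    acct = MessageReceiver()
    acct.balance = 0
    acct.on("deposit", lambda amt: setattr(acct, "balance", acct.balance + amt))
    acct.deposit(100)    # resolved at send time, by the receiver
    acct.withdraw(25)    # falls through to message_not_understood
    print(acct.balance)  # 100
```

The mechanism isn’t the point; the point is that in the second style the receiver owns the interpretation of the message, which is closer to the PARC conception than the procedure-call-with-different-syntax style that mainstream OOP encourages.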

Update 6/3/08: Paul Murphy’s response to my article is now up, called “Oddball thinking about OOP”. He argues that OOP is a failure because it’s an idea that was incompatible with digital computing to begin with, and is better suited to analog computing. I disagree that this is the reason for its failure, but to each their own.

Update 8/1/09: I realized later that I may have misattributed a quote to Albert Einstein. Paul Murphy talked about this in a post sometime after “The tattered history of OOP” was published. I had said that insanity is “doing the same thing over and over again and expecting a different result.” Murphy said he realized that this was misattributed to Einstein. I did a little research myself, and it seems there’s confusion about it. I’ve found quotation sites that attribute the saying to Einstein. On Wikipedia, though, it’s attributed to Rita Mae Brown, a novelist, who wrote it in her 1983 book, Sudden Death. I don’t know. I had always heard it attributed to Einstein, though I agree with the naysayers that no one has produced a citation that would prove he said it.


A while back I wrote my “Reminiscing” series of posts talking about the history of the machines I used growing up, as I remembered it. Over a period of about a month I came upon a few materials on reddit that give more authoritative histories of the Atari ST and Amiga. They’re really neat to look at. Atari and Commodore were fierce competitors from the time they both got into the computer market in the late 1970s until they both stopped computer production in 1993/94. During those years there were the same flamewars between devoted camps about which computer was better as there are now about which operating system, or which approach to software development (open source vs. closed source), is better. That competitive, beat-your-opponent-to-a-pulp spirit still lives on.

The Atari ST

The Atari ST, from Wikipedia.org

Landon Dyer has a blog called DadHacker. He’s written a two-part series on the building of the Atari ST, from his perspective as an on-the-ground engineer. He gives a kind of blow-by-blow of what he could recall of the events of the time, and of the issues they dealt with in developing the machine in a very short time period. Here are part 1 and part 2.

When I was getting my undergraduate CS degree at Colorado State University one of my dormmates, Darryl May, had worked at Atari under the Tramiel “regime” before coming to school. As I recall he had done customer support, among other things. He was not high up on the totem pole, and so was not privy to a lot of the stuff that was going on, but he told me a few stories.

The higher-ups in the company hierarchy were not entirely sealed off from everyone else. He said there were a few times when he just happened to bump into Jack Tramiel in the men’s bathroom.

Another story was that there were framed pictures hung in odd places on some of the walls. He said they covered spots where Jack Tramiel had put a “dent” in the wall by punching it with his fist, just releasing stress…

I used to really like Atari computers, 8-bit and ST, and one of my aspirations was to maybe work at Atari one day. After hearing this, and maybe a few other tales I can’t remember, I lost interest. I didn’t hold this against the machines themselves. I still liked them, and I have fond memories of them to this day. It’s just that the company culture didn’t sound so hot, and I began to see why there was such a love/hate relationship with Atari back then among the devoted.

In 1993 I had the opportunity to attend a Falcon 030 presentation given by David Small, the inventor of the Magic Sac and Spectre GCR/128 Macintosh emulators for the ST. The Falcon had started to ship a short while before. As I recall, and my memory’s fuzzy on this, he told a tale of how the Falcon development team was treated. One of the things I remember him saying is that after promising to pay the engineers for their efforts, Atari gave them company stock rather than cash, and at the time the stock was probably trading at penny-stock levels. I remember it hovered around $1 a share, often going under that amount, with really no hope of it getting any better. Small winced as he delivered the punch line, and drew audible sounds of disgust from the audience. We all knew what the situation was with Atari.

The impression I’ve gotten from listening to first-hand accounts is that Jack Tramiel was a “penny wise, pound foolish” hard-ass. I don’t get a sense that he had a creative spirit. His strategy, as it was when he was at the helm at Commodore, was to sell machines to a mass market. For whatever reason, it worked out for several years in Europe, where Atari was one of the dominant sellers of computers. In the U.S. it didn’t work out. I remember asking Darryl about this, and in his view Atari was just a tax write-off for the Tramiels. Jack Tramiel was set to retire, and just didn’t have the motivation to really make Atari do well. That was his theory, anyway.

Giving credit where it’s due, in an interview I saw on a British computer TV show from the 1980s, Jack Tramiel revealed that one of his goals when he ran Commodore was to keep the Japanese out of the U.S. PC market. He did this by undercutting them on price. Maybe he succeeded, since I remember there was talk in the 80s about the Japanese working on low-end MSX machines to sell to the U.S. market. Somehow that never got off the ground. Incidentally, MSX was Microsoft’s attempt to do for low-end 8-bit computers what it did for IBM PCs and clones: create a standard OS. Apparently Tramiel was someone who fought to keep computer production in the U.S. We can thank him for that, though in hindsight he just delayed the inevitable. From what I remember, Atari shifted computer production to Asia, even under Tramiel’s management. Today a lot of the PC production lines are in Asia, though I’m sure there are still some here.

As I’ve mentioned before, Atari stopped producing computers in late 1993. The company continued on, trying to compete in the video game market, but dying a slow death, until it was finally “retired” in its sale to JTS (a disk drive manufacturer) in 1996.

The Amiga

The Commodore Amiga, from Wikipedia.org

I found a couple of videos that cover the history of the Amiga. What’s intriguing to me is that there are some juicy tidbits about what went down between Amiga Corp. and Atari. This was something I had read about, but never completely understood.

Amiga started out as its own company, developing video game peripherals. As I recall, and parts of this may be wrong, at some point Atari started giving funding to Amiga, back when Atari was owned by Warner Communications (now called Time Warner), and expected to get first dibs on some technology that was being developed (maybe the chips). There came a point where Amiga was running low on cash and put itself up for sale. Amiga wasn’t satisfied with the way they were being treated by Atari. Commodore appeared to be interested. Amiga pursued a deal with Commodore, and eventually reached an amicable price per share with them. I’m not clear on whether the Tramiels were in on the negotiations with Amiga. From what I read, it sounded like the deal with Commodore was either near completion when Jack Tramiel bought Atari, or already over with. They tell the story in the video (below), so I’ll leave the details to them, though they don’t talk about who was in charge at Atari.

Commodore bought Amiga, and Atari’s working relationship with them went out the window. Atari sued, citing their agreement, but ultimately lost. The Amiga computer went to Commodore. As I recall the Tramiels already had some ideas about a 16-bit machine they wanted to develop by the time they bought Atari, but they had to work fast to get it out in time to compete with the Amiga.

(Update 4-5-2010: The above was an account from memory of the relationship I had read about between Atari and Amiga. Apparently it was in the ballpark, but not precise. You can read more about the Atari-Amiga contract here under the subheading “Amiga Contract”. Martin Goldberg, who apparently worked for Atari at the time, left a comment on this post (see below) about what went down between Atari and Amiga. Martin strongly disputes RJ’s account (given in the video below) of the Atari-Amiga-Commodore negotiations.)

From what I read, one of the main players at Amiga was Jay Miner, though according to the video below he was the head guy. He used to work at Atari, and developed the graphics chips for their 8-bit computers in the late ’70s. Maybe he did more than that, but I don’t recall. He brought the same know-how to the Amiga computer project, creating the famous graphics (and sound?) co-processors that gave the Amiga real pizzazz.

The video below is the story of the founding of Amiga Corp., and the purchase of Amiga by Commodore, as told by the original guys and gals who made it all happen.

Another neat thing about this video is that they show the CES mock-up of the Amiga Lorraine (its code name) that I remember reading about in Compute! Magazine back in 1984. I remember they told the story of the Amiga’s appearance at CES with a great sense of anticipation: the machine looked revolutionary, and if it became anything more than vaporware it would represent a next-generation leap in computing. This is interesting in retrospect, because Compute! did not greet the Apple Macintosh, which was a finished product ready for sale at the time, with the same sense of excitement. They covered the unveiling of the first Mac, but then they were like “on to the next subject…”.

You have to remember that back then what was considered “standard” was a computer that, when you booted it up, would greet you with a command-line interface, and the only machines most people knew about that had high-resolution graphics combined with a “large” color palette were the IBM PC and the Apple II.

I would say that all three, the Apple Mac, the Atari ST, and the Commodore Amiga, represented the next generation of popular computing at the time. Each came at it from a different angle, and none of them turned out the way their visionary creators anticipated. All three pursued the business computing market, thinking it would help establish the machines with a healthy customer base, but none of them made large inroads into it. Instead, each found a following in creative businesses. The Mac was adopted for professional (paper) publishing. The Atari ST was adopted by musicians for its MIDI hardware and software. The Amiga was adopted by video production studios, since its hardware capabilities fit in well with what they needed. Video production software followed suit.

A show I heard about during my last year in college (’92-’93) that used Amigas with Video Toaster cards for its special effects shots was Babylon 5. It’s the reason I got interested in the show in the first place. But wow, the writing in the series held my attention. It told a tale of epic proportions. They only used Amigas for the first season, though. I heard that Amigas with Video Toasters were used for the effects shots in the SeaQuest DSV series as well.

I can’t help it…Diversion into trivia: Two actors in the Babylon 5 TV series also starred in the movie Tron. What were the actors’ names, and what characters did they play in the movie? They’re both in this fan video. See if you can find them. One of them is pretty easy to pick out.

Here’s a video of an Amiga production plant being shut down in 1994, when Commodore shut its doors for good. Not much to see here. It’s pretty boring, but it gives you the sense that things were coming to an end.

Update 5/30/08: I found this video of a 25th anniversary commemoration of the Commodore 64, presented by the Computer History Museum. Finally the C-64 gets its due. It was really kind of a get-together of some of the people who defined personal computing in the 1980s. At first it was just a one-on-one chat with Jack Tramiel. Later they brought up Steve Wozniak (representing Apple), a guy who used to work for IBM during the IBM PC days, and another guy who worked for Commodore. They reflect a bit on what happened at Apple and IBM at the time as well. Interesting discussion.

On a lighter note, a reminder of how far we’ve come since then. 😐
