Posts Tagged ‘computer history’

I haven’t written about this in a while, but one of my first exposures to computing was from using Atari 8-bit computers, beginning in 1981. I’ve written about this in a few posts: Reminiscing, Part 1, Reminiscing, Part 3, and My journey, Part 1.

I found out about Antic–The Atari 8-bit podcast through a vintage computer group on Google+. They have some great interviews on there. At first blush, the format sounds amateurish: the guys running the show sound like your stereotypical nerds, and the audio is sometimes pretty bad, since guest responses come in over Skype, or perhaps cell phones. But if you're interested in computer history like I am, you'll get information on the history of Atari, and on the cottage industry that was early personal computing and video gaming, that you're not likely to find anywhere else. The people doing these podcasts are also active contributors to archive.org, preserving these interviews for posterity.

What comes through is how informal the business was. It was still possible for one or two people to write a game, an application, or a complete operating system. Those days are gone.

Most of the interviews I've selected cover the research and development Atari did from 1977 until 1983, when Warner Communications (now known as Time-Warner) owned the company, because that's the part of the history I find the most fascinating. So much of what's described here is technology that was only hinted at in old archived articles I used to read. It sounded like opportunity that was lost, and indeed it was, because even though much of it was ready to be produced, the higher-ups at Atari didn't want to put it into production. The details make that much more apparent. Several of these interviews cover the innovative work of the researchers who worked under Alan Kay at Atari. Kay left Xerox PARC and worked at Atari as its chief scientist from 1981 to 1984.

In 1983, Atari and the rest of the video game industry went through a market crash. Atari's consumer electronics division was sold to Jack Tramiel in 1984, and its arcade game business was spun off into its own company. Both used the name "Atari" for many years afterward. For those who remember this time, or know their history, Jack Tramiel founded Commodore Business Machines in the 1950s as a typewriter parts company. Commodore got into producing computers in the late 1970s. Tramiel had a falling out with Commodore's executives over the future direction of the company in 1984, so he left, taking many of Commodore's engineers with him, and bought Atari the same year. As I've read the history of this period, it sounds as if the two companies did a "switch-a-roo": Amiga's engineers (who were former Atari employees) came to work at Commodore, and many of Commodore's engineers came to work at Atari. 1984 onward is known as the "Tramiel era" at Atari. I only have one interview from that era, the one with Bob Brodie. That time period hasn't been as interesting to talk about.

I’ll keep adding more podcasts to this post as I find more interesting stuff. I cover in text a lot of what was interesting to me in these interviews. I’ve provided links to each of the podcasts as sources for my comments.

Show #4 – An interview with Chris Crawford. Crawford is a legend in the gaming community, along with others such as Sid Meier (who worked at MicroProse). Crawford is known for his strategy/simulation games. In this interview, he talked about what motivated him to get into computer games, how he came to work at Atari, his work at Atari's research lab, working with Alan Kay, and the games he developed through his career. One of the games he mentioned that I found particularly fascinating is one Atari didn't release, called "Gossip." He said that Alan encouraged him to think about what games would be like 30 years in the future, and this was one product of that thinking. I found a video of "Gossip."

It's not just a game where you fool around; there is actual scoring in it. The point of the game is to become the most popular, or most "liked," character, and this is scored by what the other characters in the game think of you, and of each other. This is determined by a mathematical formula that Crawford came up with. AI is employed in the game, because the other characters are played by the computer, and each is competing against you to become the most popular. You influence the other players by gossiping about them, to them. The other characters try to influence you by sharing what they think other characters are thinking about you or someone else. Since Atari computers had limited resources, there is no actual talking, though they added the effect of hearing something that resembles a human voice as each computer character gossips. You gossip via a "facial expression vocabulary." It's not easy to follow what's going on in the game. You call someone. You tell them, "This is how I feel about so-and-so." They may tell you, "This is what X feels about Y," using brackets and a pulsing arrow to denote who feels what about whom. The character with brackets around them shows a facial expression to convey what the character on the other end of the phone is telling you.
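Crawford didn't go into the details of his formula, so just to make the mechanic concrete, here's a toy sketch of the kind of affinity model the description suggests. The names, numbers, and update rule are entirely my own invention, not anything from the actual game:

    # A toy model of the "Gossip" mechanic described above -- NOT Crawford's
    # actual formula, just an illustration of how opinion scores might work.
    import random

    NAMES = ["You", "Amy", "Bob", "Cal", "Dee"]

    # affinity[a][b] = how much character a likes character b (-1.0 to 1.0)
    affinity = {a: {b: random.uniform(-0.5, 0.5) for b in NAMES if b != a}
                for a in NAMES}

    def gossip(speaker, listener, subject, feeling, weight=0.3):
        """speaker tells listener how they feel about subject. The listener
        shifts their own opinion of the subject toward the speaker's stated
        feeling, scaled by how much they like (trust) the speaker."""
        if subject in (speaker, listener):
            return
        trust = max(0.0, affinity[listener][speaker])  # distrusted speakers are ignored
        old = affinity[listener][subject]
        affinity[listener][subject] = old + weight * trust * (feeling - old)

    def popularity(character):
        """Score = sum of what everyone else thinks of this character."""
        return sum(affinity[other][character] for other in NAMES if other != character)

    # Example: you badmouth Bob to Amy, then check the standings.
    gossip("You", "Amy", "Bob", feeling=-1.0)
    for name in NAMES:
        print(f"{name:>4}: {popularity(name):+.2f}")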

Interview #7 with Bill Wilkinson, founder of Optimized Systems Software (OSS), and author of the "Insight: Atari" column in Compute! Magazine. He talked about how he became involved with the projects to create Atari Basic and Atari DOS, working as a contractor with Atari through a company called Shepardson Microsystems. He also talked about the business of running OSS. There were some surprises. One was that Wilkinson said Atari Basic compiled programs down to bytecode, line by line, and ran them through a virtual machine, all within an 8K cartridge. I had gotten a hint of this in 2015 by learning how to explore the memory of the Atari (running as an emulator on my Mac) while running Basic, and seeing the bytecodes it generated. After I heard what he said, it made sense. This runs counter to what I'd heard for years. In the 1980s, Atari Basic had always been described as an interpreter, and the descriptions of what it did were somewhat off-base, saying that it "translated Basic into machine code line by line, and executed it." It could be that "bytecode" and "virtual machine" were not part of most programmers' vocabularies back then, even though such systems had been around for years, so the only way they could describe it was as an "interpreter."
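To make the distinction concrete, here's a minimal sketch of the "tokenize to bytecode once, then run it on a virtual machine" idea. The opcodes and encoding here are invented for illustration; Atari Basic's real token set is documented elsewhere and is quite different:

    # A minimal sketch of the "tokenize to bytecode, run on a VM" idea --
    # the opcodes and encoding are invented, not Atari Basic's real ones.

    OPS = {"+": "ADD", "*": "MUL"}

    def tokenize(line):
        """Turn 'PRINT 2 + 3 * 4' into bytecode once, at program-entry time."""
        words = line.split()
        assert words[0] == "PRINT"
        # naive left-to-right compile, no operator precedence (keeps the sketch short)
        code = [("PUSH", int(words[1]))]
        rest = words[2:]
        for op, num in zip(rest[0::2], rest[1::2]):
            code.append(("PUSH", int(num)))
            code.append((OPS[op], None))
        code.append(("PRINT", None))
        return code

    def run(code):
        """The 'virtual machine': a simple stack interpreter for the bytecode."""
        stack = []
        for op, arg in code:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack.pop())

    bytecode = tokenize("PRINT 2 + 3 * 4")   # stored once, like a tokenized Basic line
    run(bytecode)                            # executed by the VM each time the line runs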

Another surprise was that Wilkinson did not paint a rosy picture of running OSS. He said of himself that in 20/20 hindsight, he shouldn’t have been in business in the first place. He was not a businessman at heart. All I take from that is that he should have had someone else running the business, not that OSS shouldn’t have existed. It was an important part of the developer community around Atari computers at the time. Its Mac/65 assembler, in particular, was revered as a “must have” for any serious Atari developer. Its Action! language compiler (an Atari 8-bit equivalent of the C language) was also highly praised, though it came out late in the life of the Atari 8-bit computer line. Incidentally, Antic has an interview with Action!‘s creator, Clinton Parker, and with Stephen Lawrow, who created Mac/65.

Wilkinson died on November 10, 2015.

Interview #11 with David Small, columnist for Creative Computing, and Current Notes. He was the creator of the Magic Sac, Spectre 128, and Spectre GCR Macintosh emulators for the Atari ST series computers. It was great hearing this interview, just to reminisce. I had read Small’s columns in Current Notes years ago, and usually enjoyed them. I had a chance to see him speak in person at a Front Range Atari User Group meeting while I was attending Colorado State University, around 1993. I had no idea he had lived in Colorado for years, and had operated a business here! He had a long and storied history with Atari, and with the microcomputer industry.

Interview #30 with Jerry Jessop. He worked at Atari from 1977 to 1984 (the description says he worked there until 1985, but in the interview he says he left soon after it was announced the Tramiels bought the company, which was in ’84). He worked on Atari’s 8-bit computers. There were some inside baseball stories in this interview about the development of the Atari 1400XL and 1450XLD, which were never sold. The most fascinating part is where he described working on a prototype called “Mickey” (also known as the 1850XLD) which might have become Atari’s version of the Amiga computer had things worked out differently. It was supposed to be a Motorola 68000 computer that used a chipset that Amiga Inc. was supposed to develop for Atari, in exchange for money Atari had loaned to Amiga for product development. There is also history which says that Atari structured the loan in a predatory manner, thinking that Amiga would never be able to pay it off, in which case Atari would’ve been able to take over Amiga for free. Commodore intervened, paid off the loan, got the technology, and the rest is history. This clarified some history for me, because I had this vague memory for years that Atari planned to make use of Amiga’s technology as part of a deal they had made with them. The reason the Commodore purchase occurred is Amiga ran through its money, and needed more. It appears the deal Atari had with Amiga fell through when the video game crash happened in 1983. Atari was pretty well out of money then, too.

Interview #37 with David Fox, creator of Rescue on Fractalus and Ballblazer at Lucasfilm Games. An interesting factoid he talked about is that they wrote these games on a DEC VAX, using a cross-assembler written in Lisp. I read elsewhere that the assembler had the programmer write code using a Lisp-like syntax. They'd assemble the game, then download the object code through a serial port on the Atari for testing. Fox also said that when they were developing Ballblazer, they sometimes used an Evans & Sutherland machine with vector graphics to simulate it.

Interview #40 with Doug Carlston, co-founder and CEO of Brøderbund Software. Brøderbund produced a series of hit titles, both applications and games, for microcomputers. To people like me who were focused on the industry at the time, their software titles were very well known. Carlston talked about the history of the company, and what led to its rise and fall. He also had something very interesting to say about software piracy. He was the first president of the Software Publishers Association, and he felt that it took too harsh a stand on piracy, because from his analysis of sales in the software industry, piracy didn't have the negative effect that most people thought it did. He saw piracy as a form of marketing: it increased the exposure of software to people who would otherwise not have known about it, or thought to buy it. He compared the growth of software sales in this era to the growth during the early CD era, when recording CDs was too expensive for most consumers and the volume of data on a CD would have taken up a very large chunk of a user's hard drive space, if not exceeded it (so piracy was nearly impossible), and he said he didn't notice that big a difference between the sales growth curves. Nevertheless, some developers at Brøderbund were determined to foil pirates, and a story he told about one developer's efforts was pretty funny. 🙂 Though, he thought it went "too far," so they promptly took out the developer's piracy-thwarting code.

Interview #49 with Curt Vendel and Marty Goldberg. They spoke in detail about the history of how Atari's video game and computer products developed while Atari was owned by Warner Communications. A surprising factoid from this interview is the claim that modern USB technology had its beginnings in the development of the Atari 8-bit's Serial Input/Output (SIO) port. This port was used to daisy-chain peripherals, such as disk drives, serial/parallel ports, and printers, to the Atari computer. The point is that it was an intelligent port architecture that could identify what was hooked up to the computer without the user having to configure hardware or software. A driver could be downloaded from each device at bootup (though not all SIO devices had built-in drivers). Each device was ready to go the moment the user plugged it into either the computer or one of the other SIO-compatible peripherals, and turned on the computer.
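Just to illustrate the daisy-chain addressing idea in code (this is my own simplified sketch, not the actual SIO specification; the field layout, device IDs, and checksum here are only meant to convey the concept): every device on the chain sees every command frame, but only the device the frame is addressed to responds.

    # A toy illustration of an SIO-style daisy-chained bus. Simplified and
    # illustrative only -- not the real SIO frame format or command set.

    def checksum(data):
        """Simple 8-bit additive checksum over the frame bytes."""
        return sum(data) & 0xFF

    def make_frame(device_id, command, aux1=0, aux2=0):
        body = [device_id, command, aux1, aux2]
        return bytes(body + [checksum(body)])

    class Peripheral:
        def __init__(self, device_id, name):
            self.device_id, self.name = device_id, name

        def handle(self, frame):
            body, chk = list(frame[:-1]), frame[-1]
            if chk != checksum(body) or body[0] != self.device_id:
                return None                      # not addressed to me, stay quiet
            return f"{self.name}: ACK command {body[1]:#04x}"

    # Everything hangs off one chain; the computer just broadcasts frames.
    chain = [Peripheral(0x31, "Disk drive 1"), Peripheral(0x40, "Printer")]
    frame = make_frame(0x31, 0x52)               # e.g. a "read" aimed at the disk drive
    for device in chain:
        reply = device.handle(frame)
        if reply:
            print(reply)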

Interview #58 with Jess Jessop, who worked in Atari’s research lab. In one project, called E.R.I.C., he worked on getting the Atari 800 to interact with a laserdisc player, so the player could run video segments that Atari had produced to market the 800, and allow people to interact with the video through the Atari computer. The laserdisc player would run vignettes on different ways people could use the computer, and then have the viewer press a key on the keyboard, which directed the player to go to a specific segment. He had a really funny story about developing that for the Consumer Electronics Show. I won’t spoil it. You’ll have to listen to the interview. I thought it was hilarious! 😀 If you listen to some of the other podcasts I have here, Kevin Savetz, who conducted most of these interviews, tried to confirm this story from other sources. All of them seem to have denied it, or questioned it (questioned whether it was possible). So, this story, I think, is apocryphal.

Here’s a bit of the laserdisc footage.

Jessop also mentioned Grass Valley Lab (Cyan Engineering), a separate research facility that worked on a variety of technology projects for Atari, including “pointer” technology, using things like tablet/stylus interfaces, and advanced telephony. Kevin Savetz interviewed a few other people from Grass Valley Lab in other podcasts I have here.

Jessop said the best part of his job was Alan Kay’s brown bag lunch sessions, where all Alan would do was muse about the future, what products would be relevant 30 years down the road, and he’d encourage the researchers who worked with him to do the same.

The really interesting part was listening to Jessop talk about how the research lab had worked on a "laptop" project in the early 1980s with Alan Kay. He said they got a prototype going that had four processors in it. Two of them were a Motorola 68000 and an Intel processor. He said the 68000 was the "back-end processor" that ran Unix, and the Intel was a "front-end processor" for running MS-DOS, which he said was for "interacting with the outside world" (i.e., running the software everyone else was using). He said the Unix "back end" ran a graphical user interface, "because Alan Kay," and the "front-end processor" ran DOS in the GUI. He said the prototype wasn't going to end up being a "laptop," since it got pretty heavy, ran hot with all the components in it, and had loud cooling fans. It also had a rather large footprint. The prototype was never put into production, according to Jessop, because of the market crash in 1983.

Jessop expressed how fun it was to work at Atari back when Warner Communications was running it. That’s something I’ve heard in a few of these interviews with engineers who worked there. They said it spoiled people, because most other places have not allowed the free-wheeling work style that Atari had. Former Atari engineers have since tried to recreate that, or find someplace that has the same work environment, with little success.

Interview #65 with Steve Mayer, who helped create Grass Valley Labs, and helped design the hardware for the Atari 2600 video game system, and the Atari 400 and 800 personal computers. The most interesting part of the interview for me was where he talked about Atari approaching Microsoft about writing the OS for the 400 and 800. They were released in 1979, and Mayer was talking about a time before that, when both computers were still in alpha development. He said Microsoft had a "light version of DOS," which I can only take as a reference to some other operating system they might have had in their possession, since PC-DOS didn't even exist yet. I remember Bill Gates saying in the documentary "Triumph of the Nerds" that Microsoft had a version of CP/M, licensed from Digital Research, running on some piece of hardware (the term "softcard" comes to mind) by the time IBM came to them for an OS for their new PC, in about 1980. Maybe this is what Mayer was talking about.

Another amazing factoid he talked about is that Atari considered having the computers boot into a graphical user interface, though he stressed that they didn't envision using a mouse. Instead, they looked at having users access features on the screen using a joystick. What they decided on in the end was to have the computer either boot into a "dumb editor" called "Memo Pad," where the user could type in some text but couldn't do anything with it, or, if a Basic cartridge was inserted, boot into a command line interface (the Basic language).

Mayer said that toward the end of his time at Atari, they were working on a 32-bit prototype. My guess is this is either the machine that Jess Jessop talked about above, or the one Tim McGuinness talked about in a podcast below. From documentation I’ve looked up, it seems there were two or three different 32-bit prototypes being developed at the time.

Interview #73 with Ron Milner, engineer at Cyan Engineering from 1973-1984. Milner described Cyan as a "think tank" that was funded by Atari. You can see Milner in the "Splendor in the Grass" video above. He talked about co-developing the Atari 2600 VCS for the first 15 minutes of the podcast. For the rest, he talked about experimental technologies. He mentioned working on printer technology for the personal computer line, since printers didn't exist for microcomputers when the Atari 400/800 were in development. He worked on various pointer control devices under Alan Kay, and mentioned working on a touch-screen interface. He described a "micro tablet" design that they were thinking of building into the keyboard housing (sounding rather like the trackpads we have on laptops). Perhaps this is connected to what Tim McGuinness talked about (below). He also described a walnut-sized optical mouse he worked on (perhaps also connected with what McGuinness discussed). These latter two technologies were intended for the 8-bit computers, before Apple introduced their GUI/mouse interface in 1983 (with the Lisa), though Atari never produced them.

The most striking thing he talked about was a project he worked on shortly after leaving Atari, I gather under a company he founded called Applied Design Laboratories (ADL). He called it a "space pen." His setup was a frame that would fit around a TV set, with what he called "ultrasonic transducers" on it, which could detect a pen's position in 3D space. A customer soon followed who had invested in virtual reality technology and used his invention for 3D sculpting. This reminded me a lot of the VR/3D sculpting technology we've seen demonstrated in recent years.

Interview #75 with Steve Davis, Director of Advanced Research for Atari, who worked at Cyan Engineering under Alan Kay. He worked on controlling a laserdisc player from an Atari 800, which has been talked about above (E.R.I.C.), a LAN controller for the Atari 800 (called ALAN-K, which officially stood for “Atari Local Area Network, Model-K,” but it was obviously a play on Alan Kay’s name. Tim McGuinness talked about this project in a podcast below), and “wireless video game cartridges.” Davis said these were all developed around 1980, but at least some of this had to have been started in 1981 or later, because Kay didn’t come to Atari until then. Davis mentioned working on some artificial intelligence projects, but as I recall, didn’t describe any of them.

He said the LAN technology was completed as a prototype. They were able to share files, and exchange messages with it. It was test-deployed at a resort in Mexico, but it was never put into production.

The "wireless video game cartridge" concept also reached the stage of a completed prototype. It was only developed for the Atari 2600. From Davis's description, you'd put a special cartridge in the 2600, choose a game from a menu, pay for it, and the game would be downloaded to the cartridge wirelessly; you could then play it on your 2600. It sounded almost as if you rented the game, downloading it into RAM on the cartridge, because he said, "There was no such thing as Flash memory," suggesting that it was not stored permanently on the cartridge. The internet didn't exist at the time, and even if it had, it wouldn't have allowed this sort of activity, since all commercial activity was banned on the early internet. He said Atari did not use cable technology to transmit the games, but cooperated with a few local radio stations to act as servers, transmitting the games over FM subcarrier. He didn't describe how the system handled payment. In any case, it was not something Atari put on the broad market.

Kevin Savetz, who's done most of these interviews, has described APX, the Atari Program eXchange, as perhaps the first "app store," in the sense that it was a catalog of programs sponsored by Atari, exclusively for Atari computer users, with software written by Atari employees and by non-Atari (independent) authors, who were paid for their contributions based on sales. Looking back on it now, it seems that Atari pioneered some of the consumer technologies and distribution models that are now used on modern hardware platforms, using distribution technology that was cutting edge at the time (though it's ancient by today's standards).

Davis said that Warner Communications treated Atari the way it treated a major movie production. This was interesting, because it helps explain a lot about why Atari made the decisions it did. With movies, they'd spend millions, sometimes tens of millions of dollars, make whatever money they made on it, and move on to the next one. He said Warner made a billion dollars on Atari, and once the profit was made, they sold it and moved on. Warner was not a technology company trying to advance the state of the art. It was an entertainment company, and that's how it approached everything. The reason Atari was able to spend money on R&D was that it was making money hand over fist. A few of its R&D projects were turned into products by Atari, but most were not. A couple I've heard about were eventually turned into products at other companies, after Atari was sold to Jack Tramiel. From how others have described the company, nobody was really minding the store. During its time with Warner, it spent money on all sorts of things that never earned a return on investment. One reason for that was that it had well-formed ideas that could have been turned into products, but they were in markets that Atari and Warner were not interested in pursuing, because they didn't apply to media or entertainment. I've heard several stories through these podcasts where people inside the company came up with advanced technology for business use cases, technology that was cutting edge for its time, but the executives at Atari didn't want to pursue it.

Interview #76 with Tim McGuinness, who worked as a hardware engineer at Atari from 1980 to 1982. He co-developed the Atari 1200XL and the 1400XL line, along with their peripherals. He was also the designer of the first version of the Amiga computer (which started development at Atari). This was the next best interview after the one with Ted Kahn (below).

McGuinness described an incident where he went to show Alan Kay the Atari 800, to discuss putting a graphical user interface on it, and when he brought it back, his boss freaked out, and threatened him with arrest, just for transporting it from one Atari office to another, to show it to someone else inside the company! McGuinness said there were fiefdoms inside the company, and this was one of them.

He talked about the relationship between Atari and Microsoft. Atari had contracted with a third party company, Shepardson Microsystems, to create Atari Basic. He said they needed a better version of Basic. Atari licensed Bill Gates’s and Paul Allen’s version of Basic for their computer line for $5 million ($14.7 million in today’s money). According to McGuinness, this was a godsend to Gates’s and Allen’s company, because at the time it was going through a rough patch, and was teetering on going belly-up. This would’ve probably been in 1980. He said Gates and Allen were so hard up for money that Atari had to fly them out to New York just so they could cash their check! McGuinness boldly declared, “Atari begat Microsoft.” He claims Gates’s and Allen’s company wasn’t called Microsoft at the time. It was called Personal Software. “Microsoft” was the name of one of their products (I guess their Basic language?). Once they licensed Microsoft Basic to Atari, Gates and Allen changed the name of their company to Microsoft.

McGuinness talked about some products he worked on at Atari that were not put into production. There was "Puffer," a modification of the Battlezone arcade game that hooked an exercise bike up to it. You'd manipulate the handlebars to turn the tank, and pedal forward or backward depending on whether you wanted to advance or back up as the bad guys came after you.

This isn’t “Puffer,” just Battlezone with its standard controls, but you can get an idea of what he was talking about.

He worked on developing the ALAN-K network interface (described above), which was a token ring network. He worked on a tablet device that could be used to trace and digitize figures on paper; the software developed for it also allowed editing the digitized drawing. He worked on what he called an "optical mouse" for the Atari 8-bit computers, but what he described was a ball mouse where the ball had dots on it, so its movement could be tracked by an optical sensor inside the mouse. Another project was a high-volume rewritable laser optical storage device, which he said was ready for production in 1981. It wasn't a disk, but a card format: the machine would draw in an optical card, and as the card moved, a laser would read, write, and rewrite information on it. The media could be rewritten 10 times before it started failing. Blue Cross-Blue Shield considered it a possibility for storing basic medical data in 1983. Another project he worked on was a 32-bit, 4 MHz dual-processor computer, using General Instrument processors, that he thought could be Atari's next-generation machine after the 8-bit series. He said the Amiga computer was the result of Ray Kassar, the CEO of Atari, deep-sixing this 32-bit project and opting for a 16-bit design, which was code-named the 1800 at Atari. McGuinness designed a transparent touchpad into it, similar to the touchpads used on modern laptops, with an LCD display beneath it, so you could use it as a reprogrammable keypad. It also had a stylus that could be used as a pointing/drawing device on the touchpad.

He got into talking about something that I think is truly incredible. He said that Microsoft used some of the code from the version of Atari DOS that came out for the Atari 1200XL in its operating systems, including Windows! He claimed that if you have a 5.25″ floppy drive hooked up to a Windows machine, you can use it to read Atari DOS diskettes without emulation. He explained the reason is Atari shared their DOS code with Microsoft, since Atari wanted them to do some “tweaking” of it for the 1400XL series they were thinking of producing. Microsoft completed the modifications, but Atari never used it, since they decided not to produce the computer series. Microsoft, however, kept the Atari DOS code, and used its formatting scheme in their operating systems. So, Windows understands Atari 8-bit diskettes! I am amazed by this news, and I am half tempted to give it a try!

He helped explain in a succinct, understandable way why Atari went bust under the management of Warner Communications. He said that Ray Kassar, the CEO who was brought in to lead Atari when Warner bought the company, didn't understand the tech business. However, Atari desperately needed one skill he had, which was the ability to manage a large business. Atari had 17,000 employees, and owned 120 buildings in Silicon Valley alone. Kassar was originally the CEO of a sock manufacturer. He didn't understand that in order for Atari to continue to exist, it needed to innovate with new products. From what McGuinness described, Kassar saw the video game/computer business as a commodity business. You come up with a product, you optimize the process of making it, and you keep selling it indefinitely. There's no point in innovating, because it's a commodity.

McGuinness recounted the ripple effects of Atari’s collapse. The video game industry collapsed in a matter of 6 months in 1983. Tens of millions of dollars of inventory was in the supply chain to retailers when this happened. Ninety percent of video game developers lost their jobs. The collapse put Toys ‘R Us into bankruptcy, and nearly destroyed the Sears retail chain.

McGuinness left Atari in frustration, and founded Romox in 1982. They invented a reprogrammable cartridge, and software kiosks, which would sell and download software onto these cartridges. They also invented the idea of the “shopping cart” and online advertising for e-commerce. The kiosks not only sold software, but also dynamically downloaded ads that marketers would pay Romox for placement on these kiosks. The retailers that rented the kiosks could also buy ads on them for their own products and services, and the turnaround time was very quick, within several hours, since it was all digital. He said if 7-Eleven, for example, had a special one day on something they sold, they could pay for the ad for the special that morning, and it would be on their kiosks that afternoon. All of what he described was in production, and was being used by customers. Once the video game market went bust, he transferred Romox’s technology to a company in the UK. He and his associates founded Bloc Development Company, a software publishing company, in 1985. They created “FormTool,” which is a software product that’s still published today, though from what he described, it’s no longer published by his company. He said, “It’s been through several hands.” FormTool helps the user design paper forms. Bloc also produced several other top-selling software titles. The company ultimately became TigerDirect, which still exists today.

Interview #77 with Tandy Trower, who worked with Microsoft to develop a licensed version of Microsoft Basic for the Atari computers. Based on the interview with McGuinness (above), this probably would have been in 1980. Trower said that Atari’s executives were so impressed with Microsoft that they took a trip up to Seattle to talk to Bill Gates about purchasing the company! Keep in mind that at the time, this was entirely plausible. Atari was one of the fastest growing companies in the country. It had turned into a billion-dollar company due to the explosion of the video game industry, which it led. It had more money than it knew what to do with. Microsoft was a small company, whose only major product was programming languages, particularly its Basic language. Gates flatly refused the offer, but it’s amazing to contemplate how history would have been different if he had accepted!

Trower told another story of how he left to go work at Microsoft, where he worked on the Atari version of Microsoft Basic from the other end. He said that before he left, an executive at Atari called him into his office and tried to talk him out of it, saying that Trower was risking his career working for “that little company in Seattle.” The executive said that Atari was backed by Warner Communications, a mega-corporation, and that he would have a more promising future if he stayed with the company. As we now know, Trower was onto something, and the Atari executive was full of it. I’m sure at the time it would’ve looked to most people like the executive was being sensible. Listening to stories like this, with the hindsight we now have, is really interesting, because it just goes to show that when you’re trying to negotiate your future, things in the present are not what they seem.

Interview #132 with Jerry Jewel, a co-founder of Sirius Software. Their software was not usually my cup of tea, though I liked “Repton,” which was a bit like Defender. This interview was really interesting to me, though, as it provided a window into what the computer industry was like at the time. Sirius had a distribution relationship with Apple Computer, and what he said about them was kind of shocking, given the history of Apple that I know. He also tells a shocking story about Jack Tramiel, who at the time was still the CEO of Commodore.

The ending to Sirius was tragic. I don’t know what it was, but it seemed like the company attracted not only some talented people, but also some shady characters. The company entered into business relationships with a series of the latter, which killed the company.

Interview #201 with Atari User Group Coordinator, Bob Brodie. He was hired by the company in about 1989. This was during the Tramiel era at Atari. I didn’t find the first half of the interview that interesting, but he got my attention in the second half when he talked about some details on how the Atari ST was created, and why Atari got out of the computer business in late 1993. I was really struck by this information, because I’ve never heard anyone talk about this before.

Soon after Jack Tramiel bought Atari in 1984, he and his associates met with Bill Gates. They actually talked to Gates about porting Windows to the Motorola 68000 architecture for the Atari ST! Brodie said that Windows 1.0 on Intel was completely done by that point. Gates said Microsoft was willing to do the port, but it would take 1-1/2 years to complete. That was too long a time horizon for their release of the ST, so the Tramiels passed. They went to Digital Research instead, and used their GEM interface for the GUI and a port of CP/M for the 68000, which Atari called "TOS" ("The Operating System"). From what I know of the history, Atari was trying to beat Commodore to market, since they were coming out with the Amiga. This decision to go with GEM, instead of Windows, would be kind of a fateful one.

Brodie said that the Tramiels ran Atari like a family business. Jack Tramiel's three sons ran it. He didn't say when the following happened, but I'm thinking this would've been about '92 or '93. When Sam Tramiel's daughter was preparing to go off to college, the university she was accepted at said she needed a computer, and it needed to be a Windows machine. Sam got her a Windows laptop, and he was so impressed with what it could do, he got one for himself as well. It was more capable, and its screen presentation looked better, than what Atari had developed with their STacy and ST Book portable models. Antonio Salerno, Atari's VP of Application Development, had already quit. Before heading out the door, he told them that "they'd lost the home computer war, and that Windows had already won." After working with his Windows computer for a while, Sam probably realized that Salerno was right. Brodie said, "I think that's what killed the computer business," for Atari. "Sam went out shopping for his daughter, and for the first time, really got a good look at what the competition was." I was flabbergasted to hear this! I had just assumed all those years ago that Atari was keeping its eye on its competition, but just didn't have the engineering talent to keep up. From what Brodie said, they were asleep at the switch!

Atari had a couple next-generation “pizza box” computer models (referring to the form factor of their cases, similar to NeXT’s second-generation model) that were in the process of being developed, using the Motorola 68040, but the company cancelled those projects. There was no point in trying to develop them, it seems, because it was clear Atari was already behind. What I heard at the time was that Atari’s computer sales were flat as well, and that they were losing customers. After ’93, Atari focused on its portable and console video game business.

Interview #185 with Ted Kahn, who created the Atari Institute for Education Action Research while Warner owned Atari. This was one of Antic's best interviews. Kahn's job with Atari was basically to make connections with non-profits, such as foundations, museums, and schools, across the country and internationally, to see if they could make good use of Atari computers. He looked for non-profits with missions compatible with Atari's goals, and donated computers to them, for the purpose of getting Atari computers in front of people, as a way of marketing them to the masses. It wasn't the same as Apple's program to get Apple II computers into all the schools, but it was in a similar vein.

What really grabbed my attention is I couldn’t help but get the sneaking suspicion that I was touched by his work as a pre-teen. In 1981, two Atari computers were donated to my local library by some foundation whose name I can’t remember. Whether it had any connection to the Atari Institute, I have no idea, but the timing is a bit conspicuous, since it was at the same time that Kahn was doing his work for Atari.

He said that one of the institutions he worked with in 1981 was the Children’s Museum in Washington, D.C. My mom played a small part in helping to set up this museum in the late 1970s, while we lived in VA, when I was about 7 or 8 years old. She moved us out to CO in 1979, but I remember we went back to Washington, D.C. for some reason in the early 1980s, and we made a brief return visit to the museum. I saw a classroom there that had children sitting at Atari computers, working away on some graphics (that’s all I remember). I wondered how I could get in there to do what they were doing, since it looked pretty interesting, but you had to sign up for the class ahead of time, and we were only there for maybe a half-hour. This was the same classroom that Kahn worked with the museum to set up. It was really gratifying to hear the other end of that story!

Interview #207 with Tom R. Halfhill, former editor with Compute! Magazine. I had the pleasure of meeting Tom in 2009, when I happened to be in the neighborhood, at the Computer History Museum in Mountain View, CA, for a summit on computer science education. I was intensely curious to get the inside story of what went on behind the scenes at the magazine, since it was my favorite when I was growing up in the 1980s. We sat down for lunch and hashed over old times. This interview with Kevin Savetz covers some of the same ground that Tom and I discussed, but he also goes into some other background that we didn't. You can read more about Compute! at Reminiscing, Part 3. You may be interested to read the comments after the article as well, since Tom and some other Compute! alumni left messages there.

A great story in this interview is how Charles Brannon, one of the editors at the magazine, wrote a word processor in machine language called SpeedScript, which was ported to all of the magazine's supported platforms and published in its pages. An advertiser in the magazine called Quick Brown Fox, which sold a word processor by the same name for Commodore computers, complained that the magazine was in effect publishing a competing product, and selling it for a much lower price (the price of a single issue), undercutting their sales. Tom said he listened to their harangue and told them off, saying that SpeedScript was written in a couple of months by an untrained programmer, and if people liked it better than a commercial product developed by professionals, then they should be out of business! Quick Brown Fox stopped advertising in Compute! after that, but I thought Tom handled it correctly!

— Mark Miller, https://tekkie.wordpress.com


Various computer historians, including Charles Babbage’s son, built pieces of Babbage’s Analytical Engine, but none built the whole thing. Doron Swade and colleagues are planning on finally building the whole thing for the London Science Museum! They have a blog for their project, called Plan 28, where you can read about their progress.

Here is John Graham-Cumming in a TEDx presentation from 2012, talking about some of the history of the Analytical Engine project that Babbage and Ada Lovelace pursued in the mid-1800s, and announcing that he and others are going to build it. Very exciting! I got the opportunity to see the second replica that Swade and his team built of Babbage's Difference Engine #2 at the Computer History Museum in Mountain View, CA, back in 2009. I'd love to see this thing go!

Related post: Realizing Babbage


See Part 1

This turned into a much larger research project than I anticipated. I am publishing this part around the 1-year anniversary of when I started on this whole series! I published the first part of it in late June 2011, and then put the rest “on the shelf” for a while, because it was crowding out other areas of study I wanted to get into.

To recap, the purpose of this series is to educate the reader on government-funded research that led to the computer technology we use today. I was inspired to write it because of gross misconceptions I was seeing on the part of some who thought the government has had nothing to do with our technological advancement, though I think technologists will be interested in this history as well. This history is not meant to be read as a definitive guide to all computer history. I purposely have focused it on a specific series of events. I may have made mistakes, and I welcome corrections. The first part of this series covers government-funded research in computing during the 1940s and 50s. In this article I continue coverage of that research through the late 1950s and the 1960s, and the innovations it inspired.

Most of what I will talk about below is sourced from the book “The Dream Machine,” by M. Mitchell Waldrop. Some of it is sourced from a New York Times opinion blog article called, “Did My Brother Invent E-mail with Tom Van Vleck?” A great article, by the way, on the development of time-sharing systems; a major topic I cover here. There’s a video produced by DARPA at the end of this post that I use as a source, and I’ll just refer to as “the DARPA video.” Quite a bit of material is sourced from Wikipedia as well. Knowing Wikipedia’s reputation for getting facts wrong, I have tried as much as possible to look at other sources, to verify it. Occasionally I’ll relay some small anecdotes I’ve heard from people who were in the know about these subjects over the years.

Revving up interactive, networked computing

Wes Clark in 2002, from Wikipedia.org

Ken Olsen and Wes Clark, both electrical engineers, and Harlan Anderson, a physicist, all graduates of the Massachusetts Institute of Technology (MIT) and veterans of Project Whirlwind, formed the Advanced Computer Research Group at MIT in 1955. Since all of the interactive computers that had been made up to that point were committed to air defense work, due to the Cold War, they wanted to make their own interactive computer that would be like what they had experienced with Whirlwind. It would be a system they would control. The first computer they built was called the TX-0, completed in 1956. It was about the size of a room. It was one of the first computers to be built with transistors (invented at AT&T Bell Labs in 1947). It had an oscilloscope for a display, a speaker for making sound, and a light pen for manipulating and drawing things on the display. They started work on a larger, more ambitious computer system they called the TX-2 in 1956. It was completed in 1958, and took up an entire building. It would come to play an important role in computer history when an electrical engineer named Ivan Sutherland did his Sketchpad project on it in 1963 for his doctoral thesis (PDF). Sketchpad is considered to be the prototype for CAD (Computer-Aided Design) systems. It was also in my view the first proof-of-concept for the graphical user interface, particularly since one used gestures to manipulate objects on the screen with it.

Even though the idea behind building these computers was to build an interactive machine that these guys could control, away from the military, I’m not so sure they succeeded at that, at least for long. From what Alan Kay has said in his presentations on this history, Sutherland had to work on his Sketchpad project in the middle of the night, because the TX-2 was used for air defense during the day…

In the midst of the TX-2’s construction, Ken Olsen, his brother Stanley, and Harlan Anderson founded Digital Equipment Corporation (DEC) in 1957 out of frustration with the academic and business world. They had tried to tell their colleagues, and people in the computer industry, that interactive computing had a promising future, but they got nowhere. Hardly anybody believed them. And besides, the idea of interactive computing was considered immoral. Computers were expensive, and were for serious work, not for playing around. That was the thinking of the time. Olsen and the founders of DEC figured that to make interactive computing something people would pay attention to, they had to sell it.

A pivotal moment happened for J.C.R. Licklider, a man I introduced in Part 1, in 1957, one that would have a major impact on the future of computing as we have come to know it. Wes Clark, seemingly by chance, walked into Licklider's office and invited him to try out the TX-2. It was an event that would change Licklider's life. He had an epiphany from using it: that humans and computers could form an interactive system together, a system made up of the person using it and the computer, which he called "symbiosis." He had experienced interactive computing from working on the SAGE (Semi-Automated Ground Environment) project at MIT in the 1950s (covered in Part 1 of this series), but this is the moment when he really got what its significance was, and what its societal impact would be.

The difference between the TX-2 and SAGE was that with SAGE you couldn’t create anything with it. Information was created for the person using it from signals that were sent to the system from radar stations and weapons systems. The person could only respond to the information they were given, and send out commands, which were then acted upon by other people or other systems. It was a communication/decision system. With the TX-2, the system could respond to the person using it, and allow them to build on those responses to create something more than where the person and the computer had started. This is elementary to many computer users now, but back then it was very new, and a revelation. It’s a foundational concept of interactive computing.

Licklider left his work in psychology, and MIT, to work at a consulting firm called Bolt Beranek and Newman (BBN) to pursue his new dream of realizing human-computer symbiosis with interactive computing. He wrote his seminal work, “Man-Computer Symbiosis,” in 1960, while working at BBN. DEC was an important part of helping Licklider understand his “symbiosis” concept. The first computer they produced was a commercialized version of the TX-0, called the PDP-1 (Programmed Data Processor). BBN bought the very first PDP-1 DEC produced, for Licklider to use in his research. With the right components, it was able to function as an interactive computer, and it was a lot less expensive than the mainframes that dominated the industry.

John McCarthy, from History of Computers

Another person comes into the story at this point, a mathematician at MIT named John McCarthy, who had developed a new field of computing studies at MIT, which he called “Artificial Intelligence.” He had developed a theoretical computing language, or notation, that he called Lisp (for “List Processing language”), which he wanted to use as an environment in which he and others could explore ideas in this new area of study. He thought that ideas would need to be tried in an experimental setting, and that researchers would need to be allowed to quickly revise their ideas and try again. He needed an interactive computer.

A typical run-of-the-mill computer in the 1950s was not interactive at all. The Whirlwind and SAGE systems I talked about in Part 1 were the exceptions, not the rule. First off, there were very few computers around. They were designed to be used by an administrator, who would receive batches of punched cards from people, with program code on them. The cards would be fed through the computer in batches, one at a time. The data to be processed was often also stored on punched cards, or on magnetic tape. The government was heavily invested in systems like this. The IRS and the Census Bureau used this type of system to handle their considerable record-keeping and computation needs. Any business which owned a computer at the time had a system like this.

Each person who submitted a card batch would receive a printout of what their program did hours, or even days later. That was the only feedback they’d get. There were no keyboards, no mice, no input devices of any kind that an ordinary person could use, and no screens to look at. Typical computation was like a combine that’s used on the farm. Raw material (programs and data) would go in, and grain (processed data) would come out the other end. This was called batch processing. McCarthy said this would not do. Researchers couldn’t be expected to wait hours or days for results from a single run of a Lisp program.

McCarthy developed a concept called "time sharing" in 1957. It was a strategy of taking a computer designed for batch processing and adapting it so that several ordinary people could use it at once. The system would use the pauses that occurred within itself, where it was just waiting for something to happen, as opportunities to grant computer time to any one of the people using it, in a sort of round-robin fashion. The switching would typically happen fast enough that each person would notice little pauses here and there, but otherwise operation would appear continuous to them. McCarthy figured that this way computer time could be used more efficiently, and he and his colleagues could do the kind of work they wanted to do, killing two birds with one stone.

The way time sharing worked was several people would sign on to a single computer using teletype terminals, and use their account on the computer to create files on a shared system. Each person would issue commands to the system through their terminal to tell it what they wanted it to do. They could run their own programs on the system as if the machine was their own, shall we say, “personal computer” (though the term didn’t exist at the time), since they wouldn’t need to think about what anyone else on the system was doing. The equivalent today would be like working on a Unix or Linux system from what’s called a “dumb terminal,” with several people logged in to it at the same time. The way in which our computers now can have several people using them at once, and doing a bunch of things seemingly simultaneously, has its roots in this time sharing strategy.
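To make the round-robin idea concrete, here's a minimal sketch that uses Python generators as stand-ins for user jobs. It's only an illustration of the scheduling concept; real time-sharing systems like CTSS relied on clock interrupts and far more sophisticated scheduling than this:

    # A minimal sketch of round-robin time sharing: each "user job" is a
    # generator that yields whenever it would pause (e.g. waiting on a slow
    # teletype), and the scheduler cycles through the jobs, giving each a
    # turn. Real systems preempted jobs with clock interrupts instead of
    # relying on cooperative yields.
    from collections import deque

    def user_job(name, steps):
        for i in range(steps):
            print(f"{name}: step {i + 1}")
            yield                      # give up the machine; wait for the next turn

    def scheduler(jobs):
        queue = deque(jobs)
        while queue:
            job = queue.popleft()      # next user's turn
            try:
                next(job)              # run until it pauses
                queue.append(job)      # back to the end of the line
            except StopIteration:
                pass                   # this user's program finished

    scheduler([user_job("alice", 3), user_job("bob", 2), user_job("carol", 4)])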

McCarthy, and a team he had assembled, created a jury-rigged prototype of a time-sharing system out of MIT’s IBM 704 mainframe. They showed that it could work, in a demonstration to some top brass at MIT in 1961. (source: “The MAC System, A Progress Report,” by Robert Fano (PDF)) This was the first public demonstration of a working Lisp system as well.

Fernando Corbató (also known as "Corby"), who was then the deputy director of MIT's Computation Center, undertook a project to build on McCarthy's work. This project was funded initially by the Navy. He managed the retrofitting of the university's then-new IBM 709 computer to create the Compatible Time-Sharing System (CTSS) in 1961. It was named "Compatible" for its compatibility with the mainframe's existing operating system. This would come to be a very influential system in the years to come.

McCarthy set out a vision at the MIT centennial lecture series, also in 1961, that a computing utility industry would develop in the future that would allow public access to computer resources, and it would be based on his time sharing idea.

IBM was asked if it had any interest in time sharing for the masses, because it was the one company that had the ability at the time to build computers powerful enough to do it. It was unwilling to invest in the idea, calling it impractical. Time sharing wouldn't have to wait long, however. Other researchers started work on their own time-sharing projects at places such as Carnegie Tech (which would become Carnegie-Mellon University), Dartmouth, and RAND Corp.

The “Big Mo” in time sharing, and in many other “big bang” concepts, came when ARPA invited Licklider to lead their efforts in computer research.

ARPA

Republican President Dwight Eisenhower created ARPA, the Advanced Research Projects Agency, under the Department of Defense, in 1958, in response to the Soviet Sputnik launch. I covered some background on the significance of Sputnik in Part 1. The idea was that the Soviets had surprised us technologically, and it was ARPA’s job to make sure we were not surprised again. A lot of ARPA’s initial work was on research for spacecraft, but this was moved to the National Aeronautics and Space Administration (NASA) and the National Reconnaissance Office (NRO) (source: “The DARPA video”). NASA was created the same year as ARPA. ARPA was left with the projects nobody else wanted, and it was in search of a mission.

Jack Ruina was ARPA's third director, appointed in 1961. He set the pace for the future of the agency, encouraging the people under him to seek out innovative, cutting-edge scientific research projects that needed funding, whether they applied to military goals or not, whether they were in government labs, university labs, or in private industry. He knew that the people in ARPA needed to understand the science and the technology thoroughly. He wanted them to seek out people with ideas that were cutting edge, to fund them generously, and to take risks with the funding. He made it clear that researchers who received ARPA funding would need to have the freedom to pursue their own goals, without micromanagement. The overall goal was to "assault the technological frontiers."

By my reading of the history, ARPA was forced into researching computing by necessity, in 1961. The way Waldrop characterized it, they "backed into it." Old but expensive leftover hardware from the SAGE project needed to be used; it was considered too valuable to junk. There was also the looming problem of command and control of military operations in the nuclear age, where military decisions and actions needed to happen quickly, and be based on reliable information. It had already been established via projects like SAGE that computers were the best way anyone could think of to accomplish these goals. So ARPA would take on the task of computer research.

J.C.R. “Lick” Licklider, from Wikipedia.org

J.C.R. Licklider (he preferred to be called “Lick”) was recruited by Ruina to be the director of research in developing command and control systems for the military. Ruina had known of Licklider through his work on SAGE, his career as a researcher in psychology, and the consulting he did on various military projects. It helped a lot that Lick was such an affable fellow. He made a lot of friends in the military ranks. They also needed a director of behavioral science research, which he was uniquely qualified to handle. Licklider was hesitant to accept this position, because he enjoyed his work at BBN, but he became convinced that he could pursue his vision for human-computer symbiosis more comprehensively at ARPA, since they had no problem with him having a dual mission of developing technology for military and civilian use, and they could give him much larger budgets. He started work at ARPA in 1962, in an office of the Pentagon that would come to be named the Information Processing Techniques Office (IPTO).

True to the military side of his mission, Lick created plans to computerize the military's data gathering and information storage infrastructure, including digital communication plans for information sharing. He would gradually come to realize the urgency of helping the military modernize. He became aware of some harrowing "near misses" our military experienced, where our nuclear forces were put on high alert, and were nearly ready to launch, all based on a misleading signal.

I’ll be focusing on the civilian side of Lick’s research, though all of the research that was funded through the IPTO was always intended to have military applications.

Lick had met John McCarthy while at BBN, and was excited about his idea of time sharing. He wanted to use his position at ARPA to create a movement around this idea, and other innovative computer research. He began by working his existing contacts. He gathered together a prestigious group of computing academics (Marvin Minsky, John McCarthy, Alan Perlis, Ben Gurley, and Fernando Corbató) to convince the engineers at System Development Corporation (SDC) to start developing time-sharing technology. It was a hard sell at first, but they came around once these heavy hitters showed how the technology would really work.

He solicited project proposals from all quarters: in government, at universities, and at private companies. He particularly targeted individuals who he thought had the right stuff to do innovative computer research, and sent out funding for various research projects that he thought had merit.

In 1962 he sent $300,000 (more than $2 million in today’s money) to Allen Newell, Herbert Simon, and Alan Perlis at Carnegie Tech to fund whatever they wanted to do. He knew them, and the kinds of minds and motivations they had. In the years that followed, ARPA would send $1.3 million (more than $9 million in today’s money) per year to this team at Carnegie Tech, because they were recognized for having an expansive view of computer science: “The study of all phenomena surrounding computers,” including the impact of computing on human life in general. Lick had confidence that whatever knowledge, program, or technology they came up with would be great.

He was very interested in what RAND Corp. was doing. In 1962 they had developed the first computer with a "pen" (stylus/pad) interface, which was linked to a visual display. This jibed very much with what Lick envisioned as "symbiosis."

John McCarthy left MIT for Stanford in 1962, where he created their Artificial Intelligence Laboratory. Lick funded McCarthy’s work at Stanford for the rest of his time at the IPTO.

Lick began funding an obscure researcher named Doug Engelbart, working at the Stanford Research Institute (SRI), in 1962. Engelbart had what was, at the time, a radical idea: to create an interactive computer system with graphical video displays that groups could use to deal with complex problems together.

In 1963 Lick funded both Project MAC, a time sharing project at MIT, overseen by Bob Fano, and Project Genie, a time sharing project at Berkeley, overseen by Harry Huskey. Fano started developing a curriculum for computer science in the Electrical Engineering Department at MIT at this time as well. Fano, by his own admission, had almost no technical knowledge about computers!

Under Project MAC, ARPA funded a project to create a new time-sharing system called Multics in 1963. This sub-project was overseen by Fernando Corbató.

In 1963, Lick envisioned a network of time-sharing systems, which he dubbed the “Intergalactic Network” to his working groups. The idea was to link computers together in a network, and he wanted people to think big about this; not just a few hundred computers together, but billions!

It was all coming together. Lick realized that he could not juggle all of these projects and continue to work at BBN, so he resigned from BBN in 1963.

Project MAC

Robert Fano came up with the name “Project MAC.” It stood for “Multi-Access Computer,” though people came up with other meanings for it as well, like “Machine-Aided Cognition.” Some artificial intelligence work at MIT, led by Marvin Minsky, was funded through it.

The working group used the existing CTSS project as its starting point, and the goal was to improve on it. CTSS was moved to an IBM 7094 mainframe in 1964, and the first information utility was made available to people at MIT around the clock. The 7094 ran at about the speed of an IBM PC (about 5 MHz). So if more than a dozen people were logged into it at the same time, it ran very slowly. Eventually they added the ability for members of the MIT Computation Center staff to connect to the time-sharing system using remote teletype terminals from their homes.

In the following video, Bob Fano lays out his vision for the research that was taking place with computers at MIT, and gives a demonstration of their time-sharing system. It’s thought that this film was made in 1964.

CTSS became a knowledge repository, as people were able to share information and programs with each other on it. If enough people found some programs useful, they were added to the standard library of programs in the system, so that everyone could use them. This method of growing the features of the system by adding software to it is seen in all modern systems, particularly Unix and Linux. It created the first software ecosystem, where software could use other software. This is also seen in the more commonly used commercial systems, like on Windows PCs, and Apple Macs. As a couple examples, Tom Van Vleck created a MAIL command for the system that allowed people to send text messages to each other. It was probably the first implementation of e-mail. Allan Scher created ARCHIVE, which allowed people to take a bunch of files and compress and package them into one file.

This was not an attempt to create the internet, but they ended up creating something like it on a smaller scale, under a different configuration. Machines were not networked together, but people were, through the workings of a single computer system.

Things that we take for granted today were discovered as the project went along. For example, they used a hierarchical file system that enabled people to organize their files into directories (think folders) and subdirectories (subfolders). The system didn't have a system clock at first. They attached an electronic clock to the computer's printer port, and software was added to take advantage of it. Now the system could time-stamp files, and processes could be run at specific times.

It was noticed that people came to depend on the system. They became agitated and irate when it went down. This was something that the researchers were looking for. They meant for it to become a computing utility, and people were coming to depend on it as if it was one. They took this as a positive sign that they were on the right track. It also motivated them to make the system as reliable as they could make it.

The CTSS team developed fault-tolerant features for it, like secure memory, so that programs that crashed didn’t interfere with other running programs, or the rest of the system. They developed an automated backup system, so that when a disk failed (all information was stored on disk), people’s work wasn’t lost. A backup to tape of newly created files ran every half-hour. It was also the first system to enforce passwords for access. This created the first instance of user revolt. Already the idea of “information wants to be free” had developed among people at MIT, and they resented the idea that access to the system could be restricted. Hackers committed various acts of “civil disobedience” (pranks in the system) to protest this, but the Project MAC staff insisted password protection was going to stay!

Time sharing comes of age

The vision of Project MAC was to promote “utility computing,” where private companies would rent computer time to businesses, and ordinary people, like a utility company charges for gas and electricity, so that they could experience interactive computing, and make some use of it. This business model is used today in systems that make applications available on the web on a subscription basis, except now the concept is not so much renting computer time, as renting application time, and data storage space–the whole idea of SaaS (Software as a Service).

Unlike today, the sense of the time was that computers needed to be large, and computing power needed to be centralized, because it was so expensive. It was thought that this was the only way to get this power out to as many people as possible.

In 1963 the leaders of Project MAC wanted to create a model system. Most of the time-sharing systems that had been successfully created up to this point began as mainframes that were only meant for batch processing, and had been modified to do time sharing. They wanted a system that was designed for time sharing from the ground up. It was also decided that an operating system was needed which would be designed for a large scale time-sharing service. CTSS was good as an internal, experimental system, but it was still unstable, because it was an adaptation of a system that was not designed for time sharing. No time was taken to make it a clean design. It had lots of things added onto it as the team learned more about what features it needed. It was a prototype. It wasn’t good enough for the vision Project MAC had in mind of utility computing for the masses.

Project MAC partnered with General Electric to develop the hardware, and AT&T Bell Labs to develop the operating system. Fernando Corbató managed the development team at MIT for the new system they called “Multics,” for Multiplexed Information and Computing Service. The plan was to have it run on a multi-processor machine that had built-in redundancy, so that if one processor burned out, it would be able to continue with the good processors it had left, without missing a beat. Like CTSS, it would have sophisticated memory management so that processes and user data wouldn’t clobber each other while people were using it at the same time. It would also have a hierarchical file system. System security was a top priority. It was noted when I took computer science in college that Multics was the most secure operating system that had ever been created. It may still carry that distinction.

Licklider left ARPA in 1964 to work at IBM. According to Waldrop, it seemed he had a mission to “bring the gospel” of interactive computing to the “high temple of batch processing.” Ivan Sutherland replaced Lick as IPTO director.

The research at Project MAC was having payoffs out in the marketplace. The publicity from the project, and the deal with GE for Multics, inspired companies to develop their own commercial time-sharing systems. The first was from DEC, the PDP-6, which came out in 1965. General Electric and IBM came out with time-sharing service offerings, which usually provided a single programming language environment as their sole service. It was rather like the experience of using the early microcomputers in the late 1970s, come to think of it. When you turned them on, up would come the Basic programming language, and all you'd have was a prompt of some sort where you could type commands to make something happen, or write a program for the computer to run.

By 1968 there were 20 separate firms offering time-sharing services. GE’s service was operating in 20 cities. The University Computing Company was operating a time-sharing service in 30 states, and a dozen countries.

In 1968, Bill Gates, who was in the 8th grade, got his first chance to use the Basic programming language on a GE Mark II time-sharing system, using a teletype, through his school, Lakeside School, in Seattle, Washington. Basic was invented at Dartmouth as part of its own time-sharing system. The main reason I mention this is that Basic would play a major role in the formation of Microsoft in the 1970s. The first product Gates and Paul Allen created that led to the founding of the company was a version of Basic for the Altair computer. Microsoft Basic went on to become the company's first big-selling software product, before IBM came calling.

The first release of Multics came out in 1969. AT&T left Project MAC the same year. GE sold its computer division to Honeywell in 1970. Along with GE’s hardware came the Multics software. Other time-sharing systems outsold Multics, and it never caught up as a commercial product. This was mainly because it was 3 years late. Multics had been beset with delays. The complexity of it overwhelmed the software development team at MIT. ARPA considered ending the project. An evaluation team was created to look at its viability. They ultimately decided it should continue, because of the new technologies that were being developed out of the project, and the knowledge that was being generated as a result of working on it. ARPA continued funding development on Multics into the early 1970s.

Honeywell didn’t want to sell Multics to customers, and only did so after considerable prompting by Larry Roberts, who was IPTO director at this time. A major barrier for it was that it was basically off-loaded onto Honeywell. Most of the people in the company, particularly its sales staff, didn’t know what it was! This was a chronic problem for the life of the product. Honeywell sold 77 Multics systems, in all, to customers over the next 20+ years. They stopped selling it in the early 1990s. Wikipedia claims the last Multics system was retired at the Canadian Department of National Defence in the year 2000.

Despite Multics being a dud in the commercial market, much was learned from it. It was the inspiration for Unix (which was originally named “Unics”, a play on “Multics”). Ken Thompson and Dennis Ritchie, the creators of Unix, had worked on Multics at AT&T Bell Labs. They missed the kind of interaction they were able to have with the Multics system, though they disliked the operating system’s bigness, and bureaucratic restrictions, which were key to its security. They brought into Unix what they thought were the best ideas from Multics and Project Genie (which I describe below).

I once had a conversation with a computer science professor while I was in college regarding the design of Unix. From what he described, and what I know of the design of early time-sharing systems, my sense is the early design of Unix was not that different from a time-sharing system. The main difference was that it shared computer time between multiple programs running at the same time, as well as between people using the system. This scheme of operation was called “multitasking.”
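
To make the time-slicing idea concrete, here is a minimal sketch in Python (my own illustration, not CTSS or Unix code) of round-robin scheduling, where one processor is shared by giving each job a short slice of time in turn. Whether the "jobs" are people at terminals or programs in memory, the mechanism is the same:

    from collections import deque

    def round_robin(jobs, quantum=3):
        # Share one processor among several jobs by giving each a fixed
        # time slice (quantum) in turn -- the core idea behind time sharing
        # (between users) and multitasking (between programs).
        queue = deque(jobs)               # each job is (name, work_remaining)
        while queue:
            name, remaining = queue.popleft()
            ran = min(quantum, remaining)
            remaining -= ran
            print(f"{name} ran for {ran} time units")
            if remaining > 0:
                queue.append((name, remaining))   # go to the back of the line
            else:
                print(f"{name} finished")

    round_robin([("editor", 5), ("compiler", 8), ("mail", 2)])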

Unix would spread into the academic community in the years that followed. The main reason was that, under an antitrust agreement with the government, AT&T was not allowed at the time to get into businesses other than telecommunications. So they retained ownership of the rights to Unix, and the trademark to its name, but gave away copies of it to customers under a free license. They made the binary distribution, as well as all source code, available to those who wanted it. This made it very attractive to universities. They liked getting the source code, because it was something that computer science students could investigate and use. It contributed to the academic environment.

Unix continues to be used today in IT environments, and in Apple products. It inspired the development of Linux, beginning in the early 1990s.

The Multics project taught the Project MAC team members valuable lessons in software engineering. These people went on to work at Digital Equipment Corp. (which was bought by Compaq in 1998, which was bought by Hewlett-Packard in 2002), Data General (which was bought by EMC in 1999), Prime (which was bought by Parametric Technology Corp. in 1998), and Apollo (which was bought by HP in 1989). (source: Wikipedia) A few of these companies created their own “lite” versions of Multics for commercial sale, under other product names.

When microcomputers/personal computers started being sold commercially in the late 1970s, the idea of the information utility was pared back as a widespread system strategy. It’s tempting to think that it died out, because computers got smaller and faster, and the idea of a central computing utility became old-hat. That didn’t really happen, though. It continued on in services like CompuServe, BIX, GEnie (offered by GE, no relation to Project Genie), and America Online of the late ’80s, and into the ’90s. The idea of large, centralized computing services has recently had a resurgence in such areas as software-as-a-service, and cloud computing. Individually owned computers, in all their forms, are mainly seen now as portals, a way to consume a service offered by much larger, centralized computer systems.

Project Genie

The information on this project comes from the Wikipedia page on it, and its list of references. The section on Evans and Sutherland is sourced from Waldrop’s book and Wikipedia. I do not have much of the details of this project, nor of what Dave Evans and Ivan Sutherland did at the University of Utah. This is best read as a couple anecdotes about the kind of positive influence that ARPA had on institutions and students of computer science at the time.

Project Genie began in 1963 at Berkeley. Harry Huskey, who had been in charge of it, left the project in 1964, leaving Ed Feigenbaum, an associate professor with a background in electrical engineering, and Dave Evans, a junior professor with a background in physics and electrical engineering, to manage it. Both were dubious about their chances of accomplishing much of anything. I think it's safe to say that what actually happened exceeded their expectations.

Feigenbaum wouldn’t stay with the project for long, though they had their system almost operational by the time he left. He joined John McCarthy’s Artificial Intelligence Laboratory at Stanford in 1964.

A team of graduate students who joined this project, which included Peter Deutsch, Butler Lampson, Chuck Thacker, and Ken Thompson, modified a Scientific Data Systems 930 minicomputer to do time sharing as their post-graduate project. This project was perhaps unique in its goal, since other time-sharing efforts had been done on mainframes, because only they were thought to have enough computing power to share. Licklider wanted to fund time-sharing projects of various scales, presumably to see how each would work out.

Over the next 3 years the team wrote their own operating system for the 930, which came to be called the Berkeley Time Sharing System. The main technological advancements I can pick out of it were paged virtual memory, a particular design for multitasking (the idea of sharing computer time between programs loaded into memory, as well as people), and state-restoring crash recovery.

Paged virtual memory is a scheme where the system can “swap” information in memory out to hard disk in standard-sized segments, and bring it back in again, at the system’s discretion. This is done to allow the system to operate on a collection of programs and data that, put all together, are greater in size than the computer’s memory. This scheme is used on all modern personal computer systems, and on Unix and Linux.
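
As a rough illustration of the bookkeeping involved, here is a toy sketch in Python (my own, with an assumed page size and a simple first-in-first-out eviction policy; real systems use hardware support and smarter replacement algorithms):

    from collections import OrderedDict

    PAGE_SIZE = 4096     # bytes per page (an assumed, typical modern size)
    RAM_FRAMES = 4       # pretend physical memory only holds 4 pages

    class PagedMemory:
        # Toy model of paged virtual memory: pages that don't fit in RAM
        # are swapped out to "disk" and brought back in on demand.
        def __init__(self):
            self.ram = OrderedDict()    # page number -> contents, in load order
            self.disk = {}              # pages that have been swapped out

        def access(self, page):
            if page in self.ram:
                return self.ram[page]                  # already resident
            print(f"page fault on page {page}")
            if len(self.ram) >= RAM_FRAMES:            # RAM is full: evict oldest
                victim, data = self.ram.popitem(last=False)
                self.disk[victim] = data
                print(f"  swapped page {victim} out to disk")
            # bring the page back in from disk, or allocate it fresh
            self.ram[page] = self.disk.pop(page, bytearray(PAGE_SIZE))
            return self.ram[page]

    mem = PagedMemory()
    for p in [0, 1, 2, 3, 0, 4, 1]:    # touching page 4 forces an eviction
        mem.access(p)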

State-restoring crash recovery is something we have yet to see on modern computer systems. It allows a system to restore itself to an operational state after a crash has occurred, picking up where it left off, without rebooting. It should be noted that this project did not invent this idea. The earliest version of it I’ve heard about is in Bob Barton’s Burroughs B5000 computer, which was created in 1961.

Not a lot has been said about this project in the literature I’ve researched, and I think it’s because the innovations that were developed were technical. No social study took place during the development of this system that I can tell, as took place with Project MAC at MIT, where many things about necessary time-sharing features were learned. Nevertheless, the technical talent that was developed during this project is legendary in the field of computing.

Scientific Data Systems eventually chose to market the Berkeley Time Sharing System as the SDS 940 computer, with some encouragement by Bob Taylor, who was IPTO director at the time. This system would go on to be used by Doug Engelbart for his NLS system, in the late 1960s (which I describe below).

Edit 8/5/2012: I added in “The trail to Xerox” as it provides background for the discussion of Xerox PARC in Part 3.

The trail to Xerox

Scientific Data Systems wasn’t enthusiastic about the 940, though. They considered it a one-off machine, and they had no intention of developing it further. Butler Lampson and others in Project Genie got impatient with SDS. They left Berkeley, and founded their own company, the Berkeley Computer Corporation (BCC), in 1969. Bob Taylor left ARPA the same year.

SDS was bought by Xerox in 1969, and its name was changed to Xerox Data Systems (XDS).

BCC didn’t last long. It was going broke in 1970. Taylor had founded Xerox Palo Alto Research Center (PARC) that year, and recruited a bunch of people from BCC to create their Computer Science Laboratory.

XDS lost so much money that Xerox shut it down in 1975.

Evans and Sutherland, and the University of Utah

David C. Evans

Dave Evans left Berkeley to chair a brand new computer science department at the University of Utah, his alma mater, in 1966. The department was one of ARPA/IPTO’s projects, and was fully funded by the IPTO.

Back then, the term “computer science” was new, and was considered an aspirational title for what the department was doing. It was expected that computation would become a science someday, but it was clearly acknowledged in Evans’s department that it was not a science then. I can safely say that it still isn’t, anywhere.

Ivan Sutherland, who had left ARPA to teach at Harvard, joined the computer science department at the University of Utah in 1968, serving as a professor. He came on the condition that Evans would join him in starting a new company. The new company was founded that year, called Evans & Sutherland. It was, and still is, a maker of computer graphics hardware.


Ivan Sutherland

Evans and Sutherland made a deliberate decision to stick to the philosophy of “doing one thing well,” and so the department specialized in computer graphics.

It should be noted that at the time their computer science department only took post-graduate students. There was no undergraduate computer science program. Dave Evans and Ivan Sutherland would have a who's who of computing pass through their department. They had students participate in IPTO projects, and the IPTO fully funded many of their tuitions. Some of the most notable of their students (not all were shared between Evans and Sutherland) were Alan Kay, Nolan Bushnell, John Warnock, Edwin Catmull, Jim Clark, and Alan Ashton. (sources: Wikipedia, and utahstories.com)

I get more into the work of Alan Kay, Chuck Thacker, and Butler Lampson in Part 3.

Contributions of the students

Peter Deutsch, who had been with Project Genie at Berkeley, went on to work at Xerox PARC, and then Sun Microsystems. He is best known for his work in programming languages. In 1994 he became a Fellow at the Association for Computing Machinery (ACM). He developed Ghostscript, an open source PostScript and PDF interpreter. More recently he's taken up music composition. (Sources: Wikipedia, and Coders at Work)

Nolan Bushnell and Ted Dabney founded the video game company Atari in 1972. They produced two arcade video games that were big hits, Pong and Breakout. Steve Jobs briefly worked for Atari during this period, and he enlisted Steve Wozniak to help develop Breakout. When Wozniak developed his first computer, which would become the Apple I (built with parts that some Atari employees had “liberated” from Atari), they showed it to Bushnell to see if he was interested in marketing it. He turned them down. The rest, as they say, is history… Bushnell created the Pizza Time (eventually named “Chuck E. Cheese”) chain of restaurants as a way of marketing Atari arcade games in 1977. Atari was sold to Warner Communications that same year. Bushnell founded Catalyst Technologies, a business incubator, in the early 1980s. Bushnell resigned from Chuck E. Cheese in 1984. (Source: Atari history timelines) More recently Bushnell has been part of a business venture called uWink that’s been trying to set up futuristic, technology-laden restaurants. In 2010, the third iteration of Atari (the most recent acquirer of the rights to the name and intellectual property of Atari is a company formerly known as Infogrames) announced that Bushnell had joined their board of directors. There are a bunch of other entrepreneurial ventures he was involved in that I will not list here. Check out his Wikipedia page for more details.

John Warnock worked for a time at Evans & Sutherland, and then at Xerox PARC, where he and a colleague, Charles Geschke, developed InterPress, a computer language to control laser printers. Warnock couldn’t convince Xerox management to market it, and so he and Geschke founded Adobe in 1982, where they marketed a newer laser printing control language called PostScript. In 1991, Warnock created a preliminary concept called “Camelot” that led to the Portable Document Format, otherwise known as “PDF.”

Edwin Catmull developed some fundamental concepts used in 3D graphics today, and went on to become Vice President of the computer graphics division at Lucasfilm in 1979. This division was eventually sold to Steve Jobs, and became Pixar, where Catmull became Chief Technical Officer. He was a key developer of Pixar’s RenderMan software, used in such films as Toy Story and Finding Nemo. He has received 4 Academy awards for his technical work in cinematic computer graphics. He is today President of Walt Disney and Pixar Animation Studios.

Jim Clark worked for a time at Evans & Sutherland. He founded Silicon Graphics in 1982, Netscape in 1994, Healtheon in 1996 (which became WebMD), and myCFO in 1999 (an asset management company for Silicon Valley entrepreneurs, which is now called “Harris myCFO”). More recently, he co-produced the Academy award-winning documentary The Cove.

Alan Ashton co-founded Satellite Software International with one of his students, Bruce Bastian, in 1979. The company created the WordPerfect word processor. SSI was eventually renamed WordPerfect Corporation. Throughout the 1980s, WordPerfect was the leading word processing product sold on PCs. The company was bought by Novell in 1994. Ashton stayed on with Novell until 1996. He founded ASH Capital, a venture capital firm, in 1999.

For those who are interested, you can watch a 2004 presentation with Ivan Sutherland and his brother Bert, talking about their lives and work, here. The guest host for this presentation is Ivan’s long-time friend and business associate, Bob Sproull.

Dave Evans went on to help design the Arpanet, which I describe below.

Doug Engelbart and NLS

If, in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsive to every action you had, how much value could you derive from that?

— Doug Engelbart, introducing NLS at Menlo Park in 1968

“Doug and staff using NLS to support 1967 meeting with sponsors — probably the first computer-supported conference. The facility was rigged for a meeting with representatives of the ARC’s research sponsors NASA, Air Force, and ARPA. A U-shaped table accommodated setup CRT displays positioned at the right height and angle. Each participant had a mouse for pointing. Engelbart could display his hypermedia agenda and briefing materials, as well as the documents in his laboratory’s knowledge base.”
— caption and photo from Doug Engelbart Institute

It’s a bit striking to me that after the interactive computing that was done in the 1950s, which I covered in Part 1, a lot of effort in the ARPA work was put into making interactive systems that operated only from a command line, rather than from a visual, graphical display. At one point Licklider asked the Project MAC team if it might be possible for everybody on the CTSS time-sharing system to have a visual display, but the answer was no. It was too expensive. In the scheme of projects that ARPA funded, NLS was in a class by itself.

In 1962, an electrical engineer named Doug Engelbart was having trouble getting his computer research project funded. Funders in his proximity couldn’t see the point in his idea. Most of the people he talked to were openly hostile to it. He wanted to work on interactive computing, but they’d say that computers were only meant to do accounting and the like, using batch processing. Engelbart’s vision was larger, and a little more disciplined than Licklider’s. He wanted to create a system that was specifically designed to help groups work together more effectively to solve complex problems.

Lick at ARPA/IPTO allocated funding for his project that lasted several years. Engelbart also got funding from NASA (via Bob Taylor), and the Air Force’s Rome Air Development Center. With this, he established the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI).

Engelbart envisioned word processing at a time when it had just been invented, and even then only on an experimental basis. He spent a lot of time exploring how to enhance human capability with information systems. In the mid-1960s, he envisioned the potential for online research libraries, online address books, online technical support, online discussion forums, online transcripts of discussions, etc., all cross-linked with each other. He drew inspiration from Vannevar Bush’s Atlantic Monthly article called, “As We May Think,” which was published in 1945 (Vannevar Bush is no relation to the Bush political family), and Ivan Sutherland’s Sketchpad project. Engelbart said that he came up with the idea of hyperlinked documents on his own, though he later recalled reading Bush’s article, which focused on a theoretical concept of a stored library of linked documents that Bush called a “Memex.”

The system that was ultimately created out of the ARC’s work was built on an SDS 940 time-sharing system, the system software for which was developed by Project Genie. Engelbart’s software ran in 192 kilobytes of memory, and 8 MB of hard disk space, if I recall correctly. He called the end result NLS, for “oN-Line System.” He demonstrated the system at the Fall Joint Computer Conference at Menlo Park, CA in 1968. It was all recorded. The full-length annotated video is here. It’s interesting, but gets very technical.

What Engelbart and his team accomplished was so far ahead of its time, there were people in the audience who didn’t think it was real, because, you see…computers couldn’t do this kind of stuff back then! It was real alright. What Engelbart had created was the world’s first computer mouse (that’s what people in the computing field give him credit for today, but there was so much more), one of the first systems to implement rudimentary windows on a screen, the first system to integrate graphics and text together on a screen, the first system to implement hyperlinking (these were like links on a web page, though much more sophisticated), and the first collaborative computing platform that was networked, all in one system. He demonstrated live for the audience how he and a worker in another office, in a separate building, could collaboratively work on the same system, at the same time, on a shared workspace that appeared on both of their screens. Each person was able to use their own mouse and keyboard to issue commands to the system, and edit documents. They could see each other on their terminals, since there were cameras set up in both locations to capture their images. They each had headsets with microphones, and a speaker system set up, so that they could hear each other.

The whole thing was a prototype, and a lot of infrastructure had to be set up just for the demo, to make it all work. Not all of it was digital. In the collaborative computing part of the demo, the video display of the remote worker, and the audio system, were analog. Everything else was digital. Even so, it was a revolutionary event in the history of computers. This was 16 years before the first Apple Macintosh, which couldn’t do most of this! NLS cost a lot more, too, more than a million dollars (more than $6 million in today’s money)! There is still technology Engelbart developed (which I haven’t discussed here) that has not made it into computer products yet, nor the typical web environment most of us use.

After the demo, Bob Taylor, who was the director at IPTO at this time, agreed to continue funding Engelbart’s work. However, other government priorities reduced the funding available to him, mainly due to the cost of the Vietnam War.

Considering the magnitude of this achievement, one would think there would be nothing but better and better things coming out of this project, but that was not to be. A lot of Engelbart’s technical talent left his project to work at Xerox PARC, pursuing the idea of personal computing, in the 1970s. Innovation with NLS slowed, and the system became more bloated and unwieldy with each update. ARPA could see this directly, since they ended up using NLS to do their own work. The IPTO decided to cut off Engelbart’s funding in 1975, since their goals had diverged.

NLS was transferred out of SRI to a company called Tymshare in 1978. They attempted to make it into a commercial product called “Augment.” Tymshare was bought by McDonnell Douglas in 1984. They authorized the transfer of Augment to the Bootstrap Alliance, founded by Doug Engelbart, which is now known as the Doug Engelbart Institute. NLS/Augment was ported to a modern computer system, thanks to funding from ARPA in the mid-1990s, and continues to be used at the Doug Engelbart Institute today! (source: article on NLS/Augment at The Doug Engelbart Institute)

His efforts to get his ideas out into the world would be somewhat vindicated in the research and development that was carried out by Xerox PARC, and ultimately in the computers we use today, but this is only a pale shadow compared to what he had in mind. Still, we owe Engelbart a great debt of gratitude. It is not a stretch to say that the interactive computers we’ve been using for the past 26 years would’ve come into existence quite a bit later in our history had it not been for his work. The Apple Lisa and the Apple Macintosh wouldn’t have come out in 1983 and 1984, respectively. Something like Microsoft Windows would’ve been a more recent invention. It’s possible most of us would still be using command-line interfaces today, using a text display, rather than a graphical one, were it not for him. It also should be noted that the IPTO’s funding (by Licklider) of Engelbart’s ARC project at SRI was crucial to NLS happening at all. It was this funding that lent it credibility so that Engelbart could pursue funding from the other sources I mentioned. Were it not for Licklider, Engelbart’s dream would have remained just a dream.

To really drive home the significance of Engelbart’s accomplishment, I’m including this video, created by Smeagol Studios, which demonstrates some of the technology Engelbart invented, but was eventually brought to us by others.

The Arpanet

In terms of the technology we use today, all of the other projects that ARPA funded generated ideas that we see in what we use. They were inspirations to others. The Arpanet is the one ARPA project that directly resulted in a technology we use today, in the sense that you can trace, in terms of technology and management, the internet we’re using now directly back to ARPA initiatives.

The basis for the Arpanet, and later the internet, is a concept called packet switching, which divides up information into standard sized pieces, called “packets,” that have a “from” and “to” address attached to them, so that they can be routed on a network. It turned out ARPA did not invent this concept. It had gone through a couple false starts before ARPA implemented it.
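
A minimal sketch in Python of the core idea (my own illustration, not any historical implementation): a message is chopped into fixed-size pieces, each carrying "from" and "to" addresses and a sequence number, so the pieces can travel independently and be reassembled at the destination:

    from dataclasses import dataclass

    PACKET_SIZE = 8   # bytes of payload per packet (tiny, for illustration)

    @dataclass
    class Packet:
        src: str        # "from" address
        dst: str        # "to" address
        seq: int        # sequence number, so the message can be reassembled
        payload: bytes

    def packetize(src, dst, message):
        # Split a message into fixed-size, individually addressed packets.
        data = message.encode()
        count = (len(data) + PACKET_SIZE - 1) // PACKET_SIZE
        return [Packet(src, dst, i, data[i * PACKET_SIZE:(i + 1) * PACKET_SIZE])
                for i in range(count)]

    def reassemble(packets):
        # At the destination, put the payloads back together in order.
        return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq)).decode()

    packets = packetize("ucla", "sri", "LOGIN attempt over the network")
    print(len(packets), "packets:", reassemble(packets))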

The earliest known version of this idea was created by Paul Baran (pronounced “Barren”) at RAND Corporation in 1964. He had done an amazing amount of work designing a packet switching network for a digital phone system (think voice over IP, as an analogy!) for the U.S. government. The big difference between his idea and what we eventually got was that he designed his system only for transmitting digitized sound information over a network, not generic computer data. The amount of design documentation he produced filled 11 volumes. This is where the idea came from to design a network that could survive a nuclear attack. Baran’s goals were dovish. He wanted to avoid all-out nuclear war with the Soviets. His rationale was that if the government’s conventional communication system was wiped out in a first strike, our government would have no choice but to launch all its nuclear missiles, because it would be deaf and blind, unable to communicate effectively with its forces, or with the Soviet government. In his scheme, the communication network would route around the damage, and continue to allow the leaders of both countries to communicate, hopefully allowing them to decelerate the conflict. It didn’t get anywhere. Baran tried working through the Defense Communications Agency to get it built, but the engineers there, and at AT&T (Baran’s system was designed to work over the existing long-distance phone network), didn’t understand the concept. Baran even proposed that the U.S. government offer his system to the Soviets, so that if the roles were reversed, they could still communicate with the U.S. That didn’t convince anybody to go ahead with it, either. It was shelved and forgotten, until it was discovered by Donald Davies, a researcher at the National Physical Laboratory (NPL) in Teddington, UK.

Davies’s idea to develop a packet switching network (Davies is the one who invented the term) was inspired by a conversation he had with Larry Roberts and Licklider at a conference on time sharing Davies hosted in 1965. Davies found Baran’s project later while researching how to build his own system. Davies had come up with the concept of using separate message processing computers to route traffic on the network, what we call “routers” today. He finished his design in 1966. He got one node going, where a bunch of terminals communicated with each other through a single message processor, demonstrating that the idea could work. Like with Baran’s system, his idea was to use the UK’s phone network. He ran into a similar problem: The British Postal Service, which at the time was in charge of the country’s phone network, didn’t see the point in it, didn’t want to fund it, and so it died. Just imagine if things had turned out differently. The Brits might’ve taken all the credit!

Bob Taylor, from Stanford University

ARPA eventually got going with its packet switching network, but it took some searching, and leadership, on the part of a psychologist named Bob Taylor. Taylor moved from NASA to the IPTO in 1965. He served under Ivan Sutherland, who had replaced Licklider as IPTO director. Taylor and Licklider had worked in the same area of psychological research, called psychoacoustics, in the 1950s, and they were both passionate about developing interactive computing.

One of the first things Taylor noticed was that the IPTO had several teletype terminals set up to communicate with their different working groups, because each working group’s computer system had its own way of communicating. He thought this was something that needed to be remedied. The thought occurred to him that there should be just one network to connect these groups.

Wes Clark once again enters the story, and introduces a new idea into this mix. I’ll give a little background.

In 1961, at MIT, Clark had come up with what was then a “backwater” concept: individual computing, the idea that computers should be small and something that individuals could afford and own. Clark had developed what was considered a “small” computer at the time. It had a portion that fit on a desk (though it took up almost the entire desk), which contained a bunch of components fitted together. It had a keyboard and a 6″ screen. The part on the desk was hooked up to a single cabinet of electronics that stood on its own. It might’ve been the world’s first minicomputer. Clark dubbed his creation the LINC, after the place where he invented it, Lincoln Lab, which is part of MIT.

It was a highly technical box. It wasn’t something that most people would want to sit down and use for the stuff we use computers to do today, but it was the beginning of a concept that would come to define the first 25 years of personal computing. The LINC came to be manufactured and sold, and was used in scientific laboratories.

Clark obtained funding from the IPTO in 1965 to continue his work in developing his individual computing technology, at Washington University in St. Louis. From what I’ve read of his history, Clark did not end up playing a role in the development of the personal computer as most people would come to know it, though he nevertheless contributed to the vision of the networked world we have today, in more ways than one.

Sutherland and Taylor met with Clark in 1965. Taylor tried out the LINC, typing out characters and seeing them appear on its small screen. As he used it, he and Clark talked about networking. The idea of the “Intergalactic Network” that Lick had talked about a couple years earlier came into Taylor’s mind. Clark regarded time sharing as a mistake. To him, the future was in networking small machines like his. Taylor could see the validity of Clark’s argument. Clark hadn’t incorporated the idea of networking into the LINC, but he thought that was something that would come along later. Taylor observed that what made time sharing a valid concept was the community it created. He asked himself questions like, “Why not expand that community into a cross-country network,” and “Why couldn’t the network serve the same function as the time-sharing system in forming communities?” Clark agreed with this idea, and urged Taylor to go for it. Clark didn’t like the idea of linking time-sharing systems together, but link thousands, or millions of individually owned computers together? Yes!

Licklider encouraged Taylor to pursue the idea of a network as well. Lick said the only reason he didn’t try to implement his idea while at ARPA was the technology wasn’t ready for it. They went on to discuss ideas about how the network might be used. They thought about online collaboration, digital libraries, electronic commerce, etc. Electronic commerce is now well established, but the other areas they talked about are still being developed on the internet today.

When Taylor got back together with the IPTO’s working groups, he discussed the idea of creating a nationwide network, with them involved. Most of them didn’t like the idea at all. The only one who was excited about it was Doug Engelbart. It was one thing to talk about a network, as they had in the past. It was another to actually want to do it, primarily because most of them didn’t want to share computer time with outsiders. They wanted to keep it within their own communities, because they were having enough trouble keeping computer performance up with the users they had. Plus, they didn’t want to contribute funding to the project. All projects before this had come from the bottom up. Researchers put forward proposals, and ARPA would pick which to fund. This was a project coming from ARPA, top-down.

Taylor talked with Charles Herzfeld, who was the director of ARPA at the time, about the idea of building the network. He liked it, and they agreed on an initial funding amount of $1 million (about $7 million in today’s money). This allayed some fears so that investigations for how to build the network could move forward. Taylor assured the working groups that if they needed additional computing power to take on the load of being a part of the network, ARPA would find a way to provide it.

Ivan Sutherland left ARPA to become a professor of electrical engineering at Harvard in 1966, and Taylor became IPTO director. Taylor would come to play a role in the development of the Arpanet, the ancestor to the internet, and would later lead the effort to develop modern personal computers at Xerox PARC.

Larry Roberts, from his home page at http://www.packet.cc/

Larry Roberts, an electrical engineer from MIT, was brought into the Arpanet project, after some arm twisting by Taylor and Herzfeld. He was made a chief scientist at ARPA, to lead the technical team that would design and build the network. Roberts, Len Kleinrock, and Dave Evans developed a preliminary design for the Arpanet, independently from Donald Davies and Paul Baran, in 1967. Roberts noticed that in addition to the systems associated with the IPTO, all of the time-sharing systems that were sprouting up around the country had no way to communicate with each other. People who used one time-sharing system could communicate with each other, but they couldn’t easily communicate information, programs, and data with others who used a different system. Roberts wanted to help them all communicate with each other, furthering the goals of Project MAC.

Like the other ideas that came before it, the Arpanet was designed to work over AT&T’s long-distance phone network. The idea was to open dedicated phone connections and never hang up, so that data could be sent as soon as possible at all times. Roberts ran into hostile resistance at the Defense Communications Agency over this idea from some of the same people who told Baran his idea wouldn’t work. The difference this time was the man who wanted to implement it worked for the Pentagon, and ARPA was fully behind it. Part of ARPA’s mission was to cut through the bureaucratic red tape to get things done, and that’s what happened.

The original idea Roberts had was to have each time-sharing system on the network devote time to routing data packets on it. One of the objections raised by the ARPA working groups was that most or all of their computer time would be taken up routing packets. Unbeknownst to them at the time, this problem had already been solved by Davies over in England. Roberts changed his mind about this arrangement after he had a conversation with Wes Clark. Clark suggested having dedicated “interchange” computers, so that the computers used by people wouldn’t have to handle routing traffic–again, routers. Clark got this idea by thinking about the network as a highway system. He observed that they had specialized ramps to get from one highway to another when they’d intersect–interchanges. Thus was born Arpanet’s Interface Message Processors (IMPs).
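
Here is a small sketch of that “interchange computer” idea, reusing the Packet/packetize sketch from the packet-switching example above (again my own illustration; the real IMPs also handled error checking, retransmission, and dynamic routing):

    class Router:
        # Toy "interchange" computer: it looks at each packet's destination
        # and forwards it to the next hop, so the host computers don't have
        # to spend their own time routing traffic.
        def __init__(self, name, forwarding_table):
            self.name = name
            self.table = forwarding_table    # destination host -> next hop

        def forward(self, packet):
            next_hop = self.table.get(packet.dst)
            if next_hop is None:
                print(f"{self.name}: no route to {packet.dst}, dropping packet")
                return
            print(f"{self.name}: packet {packet.seq} for {packet.dst} -> {next_hop}")

    imp_ucla = Router("IMP-UCLA", {"sri": "IMP-SRI", "utah": "IMP-UTAH"})
    for pkt in packetize("ucla", "sri", "HELLO"):
        imp_ucla.forward(pkt)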

Roberts’s first inclination was to make the data rate on the Arpanet 9.6Kbps. He thought that would be plenty fast for what they would need. He reconsidered, though, upon meeting with Roger Scantlebury and his colleagues from the UK’s National Physical Laboratory. Scantlebury had worked with Donald Davies. They suggested a faster data rate. Roberts saw that a speed of 56Kbps was achievable, and decided that should be the data rate for the network. This is the same speed that’s used when people now use a dial-up connection to get on the internet, though this was the network’s speed, not just the speed at which the people who used it connected with it.

Three groups worked on the implementation details for the Arpanet: The ARPA group led by Roberts, a second group at BBN led by Robert Kahn, a mathematician and electrical engineer, and a third group made up of graduate students from many different universities (really whoever wanted to participate), who were part of what became the Network Working Group, which was coordinated by Steve Crocker at UCLA. For the techies reading this, Crocker was the one who invented the term “RFC” for the internet’s design documentation, which stands for “Request For Comments.” He was looking for something to call the design that the graduate students had done, without coming across as superior to the people at ARPA leading the effort.

BBN won the contract from ARPA in a competitive bidding process to manufacture and install the IMPs. BBN chose the Honeywell 516 computer as the basis for them. It was customized to do the job. Each IMP was the size of a refrigerator. Graduate students and employees at the sites where these IMPs were installed were expected to come up with their own software (their own network stack), based on a specification that had been hammered out by the aforementioned working groups, to send and receive packets to and from the IMPs.

Crocker and the Network Working Group came up with several basic protocols for how computers would interact with each other over the Arpanet, including “telnet” and “file transfer protocol” (FTP). BBN created the system and protocol that enabled e-mail over the Arpanet in 1972. It became the most popular service on the network. These protocols would become the foundation for the protocols on the internet, as it developed several years later.

Along the way, the Arpanet’s designers had an idea that we would recognize as cloud computing today, where computers would have distributed, dedicated functionality, so that it would not have to be duplicated from machine to machine. A computer application that needed functionality could access it through the network interface. You could say that the idea of “the network is the computer” was starting to form, though from what I can tell, no one has pursued this idea with vigor until recently.

Also of note, ARPA had contracts with BBN in 1969 to research how medical information could be used and accessed over a network, to see what doctors could do with online access to medical records. Perhaps this research would be relevant today.

The first node on the Arpanet was set up at UCLA in September 1969. Bob Taylor left his position as IPTO director soon after. He was replaced by Larry Roberts. Another five sites were set up at SRI, UC Santa Barbara, the University of Utah, BBN, and MIT, by the end of 1970. The Arpanet became fully operational in 1975, and its management was transferred from ARPA to, ironically, the Defense Communications Agency.

Here’s a good synopsis on the history of the internet I’ve just covered, done by a couple of students at Vanderbilt University.

People often look askance at nostalgic notions that there was ever such a thing as a “golden age” of anything, but Bob Taylor said of this period of computer research, “Goddammit there was a Golden Age!” It was a special time when great leaps were made in computing. A lot of it had to do with changing the dominant paradigm in how they could be used.

What happened to the people involved?

Edit 11-23-2013: As I hear more news on these individuals, since the writing of this post, I will update this section.

Ken Olsen, who I introduced at the beginning of this post, retired from Digital Equipment Corp. in 1992. As noted earlier, DEC was purchased by Compaq in 1998, and Compaq was purchased by Hewlett-Packard in 2002. Olsen died on February 6, 2011.

Harlan Anderson was fired from DEC by Olsen in 1966 over a disagreement on what to do about the company’s growth. He founded the Anderson Investment Company in 1969. He was a trustee of financing for Rensselaer Polytechnic Institute for 16 years. He served as a trustee of the King School of Stamford, Connecticut, and the Norwalk Community Technical College Foundation. He is currently a member of the Board of Advisors for the College of Engineering at the University of Illinois. He is also a trustee of the Boston Symphony Orchestra, and the Harlan E. Anderson Foundation. He has written a book called, “Learn, Earn & Return: My Life as a Computer Pioneer.” (Sources: Wikipedia, the Boston Globe, and Rensselaer Board of Trustees)

Wes Clark left Washington University in St. Louis in 1972, and founded Clark, Rockoff, and Associates, a consulting firm in Brooklyn, New York, with his wife, Maxine Rockoff. His oldest son, Douglas Clark, is a professor of computer science at Princeton University. (Source: IEEE Computer Society) Wes Clark died on February 22, 2016.

John McCarthy worked at Stanford in the field he invented, artificial intelligence research, from 1962 until he retired in 2000. He remained a professor emeritus at Stanford until his death on October 24, 2011.

Fernando Corbató became a professor at the Department of Electrical Engineering and Computer Science at MIT in 1965. (see my note about Robert Fano below re. this department) He was an Associate Department Head of Computer Science and Engineering at MIT from 1974-1978, and 1983-1993. He retired in 1996. (Source: MIT Computer Science and Artificial Intelligence Laboratory)

Jack Ruina served as a professor of electrical engineering at MIT from 1963 to 1997, and is currently professor emeritus. During a two-year leave of absence from MIT he served as president of the Institute for Defense Analysis. In addition, he served as deputy for research to the assistant secretary of research and engineering of the U.S. Air Force, and Assistant Director of Defense Research and Engineering for the Office of the Secretary of Defense. He also served on a couple presidential appointments to the General Advisory Committee, from 1969 to 1977, and as senior consultant to the White House Office of Science and Technology Policy from 1977 to 1980.

Ivan Sutherland left the University of Utah in 1976 to help create the computer science department at the California Institute of Technology (CalTech). He served as professor of computer science there until 1980. In that year he founded a consulting firm with one of his students, Bob Sproull, called Sutherland, Sproull and Associates. Sproull is the son of Dr. Robert Sproull, who was one of ARPA’s directors. (Sources: BookRags.com and “the DARPA video”) The firm was purchased by Sun Microsystems in 1990, becoming Sun Labs. Sutherland became a Fellow and Vice President at Sun Microsystems. Sun was purchased by Oracle in 2010, and Sun Labs was renamed Oracle Labs. (Source: Wikipedia) Sutherland, and his wife, Marly Roncken, are currently involved with computer science research at Portland State University. (Sources: Wikipedia and digenvt.com)

Bob Fano, who supervised Project MAC, stayed with the project until 1968. He became an associate department head for computer science in MIT’s Department of Electrical Engineering in 1971. Soon after, the Electrical Engineering department was renamed the Department of Electrical Engineering and Computer Science, and Project MAC was renamed the Laboratory for Computer Science (LCS). He was a member of the National Academy of Sciences, and the National Academy of Engineering. He was a fellow with the American Academy of Arts and Sciences, and the Institute of Electrical and Electronics Engineers (IEEE). (Sources: Wikipedia.org and “Did My Brother Invent E-Mail With Tom Van Vleck?” from the New York Times opinion blog) He died on July 13, 2016. (source: “Robert Fano, computing pioneer and founder of CSAIL, dies at 98”)

Ken Thompson was elected to the National Academy of Engineering in 1980 for his work on Unix. Bell Labs was spun off into Lucent in 1996. He retired from Lucent in the year 2000. He joined Entrisphere, Inc. as a fellow, and worked there until 2006. He now works at Google as a Distinguished Engineer.

Dennis Ritchie – I feel I would be remiss if I did not note that Ritchie developed a computer programming language called “C” as a part of his work on Unix, in 1972. It came to be known and used by professional software developers, and became pervasive in the software industry in the 1990s, and beyond. Most of the software people have used on computers for the past 20+ years was at some level written in C. This language continues to be used to this day, though it’s gone through some revisions. Its influence is felt in the use of many different programming languages used by professional developers. At some level, some of the software you are using to view this web page was likely written in C, or some derivative of it.

Dennis Ritchie worked in the Computing Science Research Center at Bell Labs throughout his career. Bell Labs became Lucent in 1996. Lucent and Alcatel merged in 2006. Ritchie retired from Alcatel-Lucent in 2007. He died on October 12, 2011. (sources: Wikipedia.org and Dennis Ritchie’s home page)

Dave Evans – Aside from his computer science work at the University of Utah, he was a devout member of the Church of Jesus Christ of Latter-day Saints for 27 years. I do not have documentation on when he left the University of Utah. All I’ve found is that he retired from Evans & Sutherland in 1994, and that he died on October 3, 1998.

Doug Engelbart went into management consulting after Tymshare was sold. He tried to spread his information technology vision in large organizations that he thought would benefit from having their knowledge generation and storage/retrieval processes improved. He continued his work at the Doug Engelbart Institute at SRI until his death on July 2, 2013. There are many more details about his professional career at his Wikipedia page.

Len Kleinrock joined the UCLA faculty in the mid-1960s, and never left. He served as Chairman of its Computer Science Department from 1991 to 1995. He was President and Co-founder of Linkabit Corporation, the co-founder of Nomadix, Inc., and Founder and Chairman of TTI/Vanguard, an advanced technology forum organization. He is today a Distinguished Professor of Computer Science. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, an IEEE fellow, an ACM fellow (Association for Computing Machinery), an INFORMS fellow (INstitute For Operations Research and the Management Sciences), an IEC fellow (International Electrotechnical Commission), a Guggenheim fellow (a fellowship awarded by the John Simon Guggenheim Memorial Foundation), and a founding member of the Computer Science and Telecommunications Board of the National Research Council. (Source: Len Kleinrock’s home page)

Steve Crocker has worked in the internet community since its inception. Early in his career he served as the first area director of security for the Internet Engineering Task Force (IETF). He also served on the Internet Architecture Board (IAB), the IETF Administrative Support Activity Oversight Committee (IAOC), the Board of the Internet Society, and the Board of The Studio Theatre in Washington, DC. Other items on his resume are that he worked in research management at the Information Sciences Institute at the University of Southern California, and at The Aerospace Corporation. He was once vice-president of Trusted Information Systems, and co-founded CyberCash, Inc. and Longitude Systems. He was Chair of ICANN’s Security and Stability Advisory Committee (SSAC) from its inception in 2002 until December 2010. He is currently chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN). He is also CEO and co-founder of Shinkuro, Inc., a start-up focusing on dynamic sharing of information on the internet, and on improved security protocols for the internet. (Source: ICANN Biographical Data)

I’ll close this post with a video that gives a wider view of what ARPA was doing during the late ’50s, when it was founded, into the mid-70s. Note that “Defense” was added to ARPA’s name in 1972, so it became “DARPA.” This is “the DARPA video” I’ve referred to a couple times. There is a brief segment with Licklider in it.

In Part 3 I cover the later events and operating prerogatives at DARPA that are mentioned in this video. I will talk a little more about Larry Roberts, Ed Feigenbaum, and Bob Kahn; more about Licklider and the IPTO; and about Robert Taylor’s work at Xerox PARC with Alan Kay, Chuck Thacker, and Butler Lampson on the invention of personal computing.

—Mark Miller, https://tekkie.wordpress.com


See Part 1, Part 2, Part 3

The real world

Each year while I was in school I looked for summer internships, but had no luck. The economy sucked. In my final year of school I started looking for permanent work, and I felt almost totally lost. I asked CS grads about it. They told me “You’ll never find an entry level programming job.” They had all landed software testing jobs as their entree into corporate software production. Something inside me said this would never do. I wanted to start with programming. I had the feeling I would die inside if I took a job where all I did was test software. About a year after I graduated I was proved right when I took up test duties at my first job. My brain became numb with boredom. Fortunately that’s not all I did there, but I digress.

In my final year of college I interviewed with some major employers who came to my school: Federal Express, Tandem, Microsoft, NCR. I wasn’t clear on what I wanted to do, and that realization was a bit earth-shattering. I had gone into CS because I wanted to program computers for my career, but I didn’t face the “what” (what specifically did I want to do with this skill?) until I was about ready to graduate. I had so many interests. When I entered school I wanted to do application development. That seemed to be my strength. But since I had gone through the CS program, and found some things about it interesting, I wasn’t sure anymore. I told my interviewer from Microsoft, for example, that I was interested in operating systems. What was I thinking? I had taken a course on linguistics, and found it pretty interesting. I had taken a course called Programming Languages the previous year, and had a similar level of interest in it. I had gone through the trouble of preparing for a graduate level course on language compilers, which I was taking at the time of the interview. It just didn’t occur to me to mention any of that.

None of my interviews panned out. Looking back on it, that was a good thing. Most of them didn’t really suit my interests. The problem was, who did?

Once I graduated with my Bachelor’s in CS in 1993, and had an opportunity to relax, some thoughts settled in my mind. I really enjoyed the Programming Languages course I had taken in my fourth year. We covered Smalltalk for two weeks. I thoroughly enjoyed it. At the time I had seen many want ads for Smalltalk, but they were looking for people with years of experience. I looked for Smalltalk want ads after I graduated. They had entirely disappeared. Okay. Scratch that one off the list. The next thought was, “Compilers. I think I’d like working on language compilers.” I enjoyed the class and I reflected on the fact that I enjoyed studying and using language. Maybe there was something to that. But who was working on language compilers at the time? Microsoft? They had rejected me from my first interview with them. Who else was there that I knew of? Borland. Okay, there’s one. I didn’t know of anyone else. I got the sense very quickly that while there used to be many companies working on this stuff, it was a shrinking market. It didn’t look promising at the time.

I tried other leads, and thought about other interests I might have. There was a company nearby called XVT that had developed a multi-platform GUI application framework (for an analogy, think wxWindows), which I was very enthusiastic about. While I was in college I talked with some fellow computer enthusiasts on the internet, and we wished there was such a thing, so that we didn’t have to worry about what platform to write software for. I interviewed with them, but that didn’t go anywhere.

For whatever reason it never occurred to me to continue with school and get a master’s degree. I was glad to be done with school, for one thing. I didn’t see a reason to go back. My undergrad advisor subtly chided me once for not wanting to advance my education. He said, “Unfortunately most people can find work in the field without a master’s,” but he didn’t talk with me in depth about why I might want to pursue that. I had this vision that I would get my Bachelor’s degree, and then it was just a given that I was going to go out into private industry. It was just my image of how things were supposed to go.

Ultimately, I went to work in what seemed like the one industry that would hire me, IT software development. My first big job came in 1995. At first it felt like my CS knowledge was very relevant, because I started out working on product development at a small company. I worked on adding features to, and refactoring, a reporting tool that used scripts for report specification (what data to get and what formatting was required). Okay, so I was working on an interpreter instead of a compiler. It was still a language project. That’s what mattered. Despite having to develop it on MS-DOS (UGH!), I was thrilled to work on it.

It was very complex compared to what I had worked on before. It was written in C. It created more than 20 linked lists, and some of them linked with other lists via pointers! Yikes! It was very unstable. Any time I made a change to it I could predict that it was going to crash, freezing my PC and forcing me to reboot. And we think now that Windows 95 was bad about this… I got so frustrated with this that I spent weeks trying to build some robustness into it. I finally hit on a way to make it fail gracefully: a macro that checked every single pointer before it was dereferenced.
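
The idea was something like this minimal sketch (the macro name and the exit-on-failure behavior here are just illustrative, not the original code): validate a pointer before using it, and bail out with a message instead of crashing the machine.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative pointer guard: if the pointer is NULL, report where it
   happened and exit cleanly instead of letting the program (and the PC)
   die on a bad dereference. */
#define CHECK_PTR(p)                                                \
    do {                                                            \
        if ((p) == NULL) {                                          \
            fprintf(stderr, "NULL pointer '%s' at %s:%d\n",         \
                    #p, __FILE__, __LINE__);                        \
            exit(EXIT_FAILURE);                                     \
        }                                                           \
    } while (0)

struct node {
    struct node *next;
    int value;
};

int main(void)
{
    struct node tail = { NULL, 2 };
    struct node head = { &tail, 1 };
    struct node *p = &head;

    /* Guard the pointer before it gets used. */
    CHECK_PTR(p);
    while (p != NULL) {
        printf("%d\n", p->value);
        p = p->next;
    }
    return 0;
}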

I worked on other software that required a knowledge of software architecture, and the ability to handle complexity. It felt good. As in school, I was goal-oriented. Give me a problem to solve, and I’d do my best to do so. I liked elegance, so I’d usually try to come up with what I thought was a good architecture. I also made an effort to comment well to make code clear. My efforts at elegance usually didn’t work out. Either it was impractical or we didn’t have time for it.

Fairly quickly my work evolved away from product development. The company I worked for ended up discarding a whole system they’d spent two years developing. The reporting tool I worked on was part of that. We decided to go with commodity technologies, and I got more into the regular patterns of IT software production.

I got a taste for programming for Windows, and I was surprised: I liked it! I had already developed a bias against Microsoft software at the time, because my compatriots in the field had nothing but bad things to say about their stuff. I liked developing for an interactive system, though, and Windows had a large API that seemed to handle everything I needed, without me having to invent much of anything to make a GUI app work. This was in contrast to GEM on my Atari STe, which was the only GUI API I knew about before this.

My foray into Windows programming was short lived. My employer found that I was more proficient in programming for Unix, and so pigeon-holed me into that role, working on servers and occasionally writing a utility. This was okay for a while, but I got bored of it within a couple years.

Triumph of the Nerds

Around 1996 PBS showed another mini-series, on the history of the microcomputer industry, focusing on Apple, Microsoft, and IBM. It was called Triumph of the Nerds, by Robert X. Cringely. This one was much easier for me to understand than The Machine That Changed The World. It talked about a history that I was much more familiar with, and it described things in terms of geeky fascination with technology, and battles for market dominance. This was the only world I really knew. There weren’t any deep concepts in the series about what the computer represented, though Steve Jobs added some philosophical flavor to it.

My favorite part was where Cringely talked about the development of the GUI at Xerox PARC, and then at Apple. Robert Taylor, Larry Tesler, Adele Goldberg, John Warnock, and Steve Jobs were interviewed. The show talked mostly about the work environment at Xerox (how the researchers worked together, and how the executives “just didn’t get it”), and the Xerox Alto computer. There was a brief clip of the GUI they had developed (Smalltalk), and Adele Goldberg briefly mentioned the Smalltalk system in relation to the demo Steve Jobs saw, though you’d have to know the history better to really get what was said about it. Superficially one could take away from it that Xerox had developed the GUI, and Apple used it as inspiration for the Mac, but there was more to the story than that.

Triumph of the Nerds was also the first time I had seen footage of the unveiling of the first Macintosh in 1984. I had read about it shortly after it happened, but I had seen no pictures and no video. It was really neat to see. Cringely managed to give a feel for the significance of that moment.

Part 5


See Part 1, Part 2

College

I went to Colorado State University in 1988. As I went through college I forgot about my fantasies of computers changing society. I was focused on writing programs that were more sophisticated than I had ever written before, appreciating architectural features of software and emulating them in my own projects, and learning computer science theory.

At the time, I thought my best classes were some basic hardware class I took, Data Structures, Foundations of Computer Architecture, Linguistics (a non-CS course), a half-semester course on the C language, Programming Languages, and a graduate level course on compilers. Out of all of them the last two felt the most rewarding. I had the same professor for both. Maybe that wasn’t a coincidence.

In my second year I took a course called Comparative Programming Languages, where we surveyed Icon, Prolog, Lisp, and C. My professor for the class was a terrible teacher. There didn’t appear to be much point to the course besides exposing us to these languages. To make things interesting (for my professor, I think), he assigned problems that were inordinately hard when compared to my other CS courses. I got through Icon and C fine. Prolog gave me a few problems, but I was able to get the gist of it. I was taking the half-semester C course at the same time, which was fortunate for me. Otherwise, I doubt I would’ve gotten through my C assignments.

Lisp was the worst! I had never encountered a language I couldn’t tackle before, but Lisp confounded me. We got some supplemental material on it in class, but it didn’t do much to help me relate to it. What made it even harder was that our professor insisted we use it in the functional style: no set, setq, etc., and nothing that used them was allowed. All loops had to be recursive. We had two assignments in Lisp and I didn’t complete either one. I felt utterly defeated by it and I vowed never to look at it again.

My C class was fun. Our teacher had a loose curriculum, and the focus was on just getting us familiar with the basics. In a few assignments he would say “experiment with this construct.” There was no hard goal in mind. He just wanted to see that we had used it in some way and had learned about it. I loved this! I came to like C’s elegance.

I took Programming Languages in my fourth year. My professor was great. He described a few different types of programming languages, and he discussed some runtime operating models. He described how functional languages worked. Lisp made more sense to me after that. We looked at Icon, SML, and Smalltalk, doing a couple assignments in each. He gave us a description of the Smalltalk system that stuck with me for years. He said that in its original implementation it wasn’t just a language. It literally was the operating system of the computer it ran on. It had a graphical interface, and the system could be modified while it was running. This was a real brain twister for me. How could the user modify it while it was running?? I had never seen such a thing. The thought of it intrigued me though. I wanted to know more about it, but couldn’t find any resources on it.

I fell in love with Smalltalk. It was my very first object-oriented language. We only got to use the language, not the system. We used GNU Smalltalk in its scripting mode. We’d edit our code in vi, and then run it through GNU Smalltalk on the command line. Any error messages or “transcript” output would go to the console.

I learned what I think I would call a “Smalltalk style” of programming, of creating object instances (nodes) that have references to each other, each doing very simple tasks, working cooperatively to accomplish a larger goal. I had the experience in one Smalltalk assignment of feeling like I was creating my own declarative programming language of sorts. Nowadays we’d say I had created a DSL (Domain-Specific Language). Just the experience of doing this was great! I had no idea programming could be this expressive.

I took compilers in my fifth year. Here, CS started to take on the feel of math. Compiler design was expressed mathematically. We used the red “Dragon book”, Compilers: Principles, Techniques, and Tools, by Aho, Sethi, and Ullman. The book impressed me right away with this introductory acknowledgement:

This book was phototypeset by the authors using the excellent software available on the UNIX system. The typesetting command read:

pic files | tbl | eqn | troff -ms

pic is Brian Kernighan’s language for typesetting figures; we owe Brian a special debt of gratitude for accommodating our special and extensive figure-drawing needs so cheerfully. tbl is Mike Lesk’s language for laying out tables. eqn is Brian Kernighan and Lorinda Cherry’s language for typesetting mathematics. troff is Joe Ossanna’s program for formatting text for a phototypesetter, which in our case was a Mergenthaler Linotron 202/N. The ms package of troff macros was written by Mike Lesk. In addition, we managed the text using make due to Stu Feldman. Cross references within the text were maintained using awk created by Al Aho, Brian Kernighan, and Peter Weinberger [“awk” was named after the initials of Aho, Weinberger, and Kernighan — Mark], and sed created by Lee McMahon.

I thought this was really cool, because it felt like they were “eating their own dog food.”

We learned at the time about the concepts of bootstrapping, cross-compilers for system development, LR and LALR parsers, bottom-up and top-down parsers, parse trees, pattern recognizers (lexers), stack machines, etc.

For our semester project we had to implement a compiler for a Pascal-like language, and it had to be capable of handling recursion. Rather than generate assembly or machine code, we were allowed to generate C code, but it had to be generated as if it were 3-address code. We were allowed to use a couple of C constructs, but by and large it had to read like an assembly program. A couple of other rules were that we had to build our own symbol table (in the compiler) and our own call stack (in the compiled program).
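
To give an idea of what that looks like, here is a small hand-written illustration of C generated in a 3-address style for a statement like a := b + c * d (the variable names and the temporaries t1 and t2 are hypothetical, not output from the actual project):

#include <stdio.h>

/* "C generated as if it were 3-address code": each generated statement
   has at most one operator, with compiler-introduced temporaries holding
   the intermediate results. */
int main(void)
{
    int a, b = 2, c = 3, d = 4;
    int t1, t2;          /* compiler-generated temporaries */

    t1 = c * d;          /* one operator per generated statement */
    t2 = b + t1;
    a = t2;

    printf("a = %d\n", a);   /* prints: a = 14 */
    return 0;
}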

We worked on our projects in pairs. We were taught some basics about how to use lex and yacc, but we weren’t told the whole story… My partner and I ended up using yacc as a driver for our own parse-tree-building routines. We wrote all of our code in C. We made the thing so complicated. We invented stacks for various things, like handling order of operations for mathematical expressions. We went through all this trouble, and then one day I happened to chat with one of my other classmates and he told me, “Oh, you don’t have to do all that. Yacc will do that for you.” I was dumbfounded. How come nobody told us this before?? Oh well, it was too late. It was near the end of the semester, and we had to turn in test results. My memory is that even though it was an ad hoc design, our compiler got 4 out of 5 tests correct. The 5th one, the one that did recursion, failed. Anyway, I did okay in the course, and that felt like an accomplishment.

I wanted to do it up right, so I took the time after I graduated to rewrite the compiler, fully using yacc’s abilities. At the time I didn’t have the necessary tools available on my Atari STe to do the project, so I used Nyx, a free, publicly available Unix system that I could access via a modem over a straight serial connection (PPP hadn’t come into common use yet). It was just like calling up a BBS, except I had shell access.

I structured everything cleanly in the compiler, and I got the bugs worked out so it could handle recursion.

A more sophisticated perspective

Close to the time I graduated, a mini-series came out on PBS called “The Machine That Changed The World.” What interested me about it was its focus on computer history. It filled in more of the story beyond what I had researched in Jr. high and high school.

My favorite episode was “The Paperback Computer,” which focused on the research efforts that went into creating the personal computer, and the commercial products (primarily the Apple Macintosh) that came from them.

It gave me my first glimpse ever of the work done by Douglas Engelbart, though it only showed a small slice: the invention of the mouse. Mitch Kapor, one of the people interviewed for this episode, pointed out that most people had never heard of Engelbart, yet he is the most important figure in computing when you consider what we are using today. This episode also gave me my first glimpse of the research done at Xerox PARC on GUIs, though there was no mention of the Smalltalk system (even though that’s the graphics display you see in that segment).

I liked the history lessons and the artifacts it showed. The deeper ideas lost me. By the time I saw this series, I had already heard of the idea that the computer was a new medium. It was mentioned sometimes in computer magazines I read. I was unclear on what this really meant, though.

I had already experienced some aspects of this idea without realizing it, especially when I used 8-bit computers with Basic or Logo, which gave me a feeling of interactivity. The responsiveness towards the programmer was pretty good for the limited capabilities they had. It felt like a machine I could mold and change into anything I wanted via programming. It was what I liked most about using a computer. Being unfamiliar, though, with the concept of what a medium really was, I thought that when digital video and audio came along, along with the predictions about digital TV through the “Information Superhighway,” that was what it was all about. I had fallen into the mindset a lot of people had at the time: the computer was meant to automate old media.

Part 4


See Part 1

A sense of history

I had never seen a machine before that I could change so easily (relative to other machines). I had this sense that these computers represented something big. I didn’t know what it was. It was just a general feeling I got from reading computer magazines. They reported on what was going on in the industry. All I knew was that I really liked dinking around with them, and I knew there were some others who did, too, but they were at most ten years older than me.

The culture of computing and programming was all over the place as well, though the computer was always portrayed as this “wonder machine” that had magical powers, and programming it was a mysterious dark art which made neat things happen after typing furiously tappy-tappy on the keyboard for ten seconds. Still, it was all encouragement to get involved.

I got a sense early on that it was something that divided the generations. Most adults I ran into knew hardly anything about computers, much less how to program them, like I did. They had no interest in learning about them either. They felt they were too complicated, threatening, mysterious. They didn’t have much of an idea about their potential, just that “it’s the future”, and they encouraged me to learn more about them, because I was going to need that knowledge, though their idea of “learn more about them” meant “how to use one”, not program it. If I told them I knew how to program them they’d say, “Wow! You’re really on top of it then.” They were kind of amazed that a child could have such advanced knowledge that seemed so beyond their reach. They couldn’t imagine it.

A few adults, including my mom, asked me why I was so fascinated by computers. Why were they important? I would tell them about the creative process I went through. I’d get a vision in my mind of something I thought would be interesting, or useful to myself and others. I’d get excited enough about it to try to create it. The hard part was translating what I saw in my mind, which was already finished, into the computer’s language, to get it to reproduce what I wanted. When I was successful it was the biggest high for me. Seeing my vision play out in front of me on a computer screen was electrifying. It made my day. I characterized computers as “creation machines”. What I also liked about them is they weren’t messy. Creating with a computer wasn’t like writing or typing on paper, or painting, where if you made a mistake you had to live with it, work around it, or start over somewhere to get rid of the mistake. I could always fix my mistakes on a computer, leaving no trace behind. The difference was with paper it was always easy to find my mistakes. On the computer I had to figure out where the mistakes were!

The few adults I knew at the time who knew how to use a computer tended to not be so impressed with programming ability, not because they knew better, but because they were satisfied being users. They couldn’t imagine needing to program the computer to do anything. Anything they needed to do could be satisfied by a commercial package they could buy at a computer store. They regarded programming as an interesting hobby of kids like myself, but irrelevant.

As I became familiar with the wider world of what represented computing at the time (primarily companies and the computers they sold), I got a sense that this creation was historic. Sometimes in school I would get an open-ended research assignment. Each chance I got I’d do a paper on the history of the computer, each time trying to deepen my knowledge of it.

The earliest computer I’d found in my research materials was a mechanical adding machine that Blaise Pascal created in 1645, called the Pascaline. There used to be a modern equivalent of it as late as 25-30 years ago that you could buy cheap at the store. It was shaped like a ruler, and it contained a set of dials you could stick a pencil or pen point into. All dials started out displaying zeros. Each dial represented a power of ten (starting at 10^0, the ones place). If you wanted to add 5 + 5, you would “dial” 5 in the ones place, and then “dial” 5 in the same place again. Each “dial” action added to the quantity in each place. The dials had some hidden pegs in them to handle carries. So when you did this, the ones place dial would return to “0”, and the tens place dial would automatically shift to “1”, producing the result “10”.
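
To make the dial-and-carry mechanism concrete, here is a toy simulation of it in C (a sketch; the number of dials and the routine names are purely illustrative, not a description of the real device):

#include <stdio.h>

#define NUM_DIALS 4

/* "Dial" an amount into one decimal place; the hidden peg carries into
   the next dial whenever a dial passes 9. dials[0] is the ones place. */
static void dial(int dials[], int place, int amount)
{
    dials[place] += amount;
    while (place < NUM_DIALS - 1 && dials[place] > 9) {
        dials[place] -= 10;
        dials[place + 1] += 1;
        place++;
    }
}

static void show(const int dials[])
{
    for (int i = NUM_DIALS - 1; i >= 0; i--)
        printf("%d", dials[i]);
    printf("\n");
}

int main(void)
{
    int dials[NUM_DIALS] = { 0, 0, 0, 0 };   /* all dials start at zero */

    dial(dials, 0, 5);   /* "dial" 5 in the ones place */
    dial(dials, 0, 5);   /* "dial" 5 again: ones returns to 0, tens shifts to 1 */

    show(dials);         /* prints 0010 */
    return 0;
}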

The research materials I was able to find at the time only went up to the late 1960s. They gave me the impression that there was a dividing line. Up to about 1950, all they talked about were the research efforts to create mechanical, electric, and finally electronic computers. The exception was Herman Hollerith, who I think was the first to find a commercial application for electric tabulating machines, using them to calculate the 1890 census, and who built the company that would ultimately come to be known as IBM. The last computers they talked about being created by research institutions were the Harvard Mark I, ENIAC, and EDSAC. By the 1950s (in the timeline) the materials veered off from scientific/research efforts, for the most part, and instead talked about commercial machines produced by Remington Rand (the Univac) and IBM. Probably the last things they talked about were minicomputer models from the 1960s. The research I did matched well with the ethos of computing at the time: it’s all about the hardware.

High school

When I got into high school I joined a computer club there. Every year club members participated in the American Computer Science League (ACSL). We had study materials that focused on computer science topics. From time to time we had programming problems to solve, and written quizzes. We earned individual scores for these things, as well as a combined score for the team.

We would have a week to come up with our programming solutions on paper (computer use wasn’t allowed). On the testing day we would have an hour to type in our programs, test and debug them, at the end of which the computer teacher would come around and administer the official test for a score.

What was neat was we could use whatever programming language we wanted. A couple of the students had parents who worked at the University of Colorado, and had access to Unix systems. They wrote their programs in C on the university’s computers. Most of us wrote ours in Basic on the Apples we had at school.

Just as an aside, I was alarmed to read Imran’s article in 2007 about using FizzBuzz to interview programmers, because the problem he posed was so simple, yet he said “most computer science graduates can’t [solve it in a couple minutes]”. It reminded me of an ACSL programming problem we had to solve called “Buzz”. I wrote my solution in Basic, just using iteration. Here is the algorithm we had to implement:

input 5 numbers
if any input number contains the digit "9", print "Buzz"
if any input number is evenly divisible by 8, print "Buzz"
if the sum of the digits of any input number is divisible by 4,
   print "Buzz"

for every input number
A: sum its digits (we'll use "sum" as a variable in this loop for the sum of digits)
   if sum >= 10
      get the digits for sum and go back to Step A
   if sum equals 7
      print "Buzz"
   if sum equals any digit in the original input number
      print "Buzz"
end loop

test input   output
198          Buzz Buzz
36
144          Buzz
88           Buzz Buzz Buzz
10           Buzz
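
For comparison with FizzBuzz, here is a minimal C sketch of that “Buzz” algorithm (not the original Basic solution; it applies the checks to each input number in turn, using the test inputs from the table above):

#include <stdio.h>

/* Return 1 if n contains the decimal digit d, 0 otherwise. */
static int has_digit(int n, int d)
{
    do {
        if (n % 10 == d)
            return 1;
        n /= 10;
    } while (n > 0);
    return 0;
}

/* Sum of the decimal digits of n. */
static int digit_sum(int n)
{
    int s = 0;
    do {
        s += n % 10;
        n /= 10;
    } while (n > 0);
    return s;
}

int main(void)
{
    int inputs[5] = { 198, 36, 144, 88, 10 };   /* the test inputs above */

    for (int i = 0; i < 5; i++) {
        int n = inputs[i];
        int s;

        printf("%-13d", n);

        if (has_digit(n, 9))        printf("Buzz ");  /* contains the digit 9 */
        if (n % 8 == 0)             printf("Buzz ");  /* evenly divisible by 8 */
        if (digit_sum(n) % 4 == 0)  printf("Buzz ");  /* digit sum divisible by 4 */

        /* Step A: keep summing digits until a single digit remains. */
        s = digit_sum(n);
        while (s >= 10)
            s = digit_sum(s);

        if (s == 7)                 printf("Buzz ");  /* reduced sum equals 7 */
        if (has_digit(n, s))        printf("Buzz ");  /* reduced sum matches a digit of n */

        printf("\n");
    }
    return 0;
}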

In my junior and senior year, based on our regional scores, we qualified to go to the national finals. The first year we went we bombed, scoring in last place. Ironically we got kudos for this. A complimentary blurb in the local paper was written about us. We got a congratulatory letter from the superintendent. We got awards at a general awards assembly (others got awards for other accomplishments, too). A bit much. I think this was all because it was the first time our club had been invited to the nationals. The next year we went we scored in the middle of the pack, and heard not a peep from anybody!

(Update 1-18-2010: I’ve had some other recollections about this time period, and I’ve included them in the following seven paragraphs.)

By this point I was starting to feel like an “experienced programmer”. I felt very comfortable doing it, though the ACSL challenges were at times too much for me.

I talk about a couple of software programs I wrote below. You can see video of them in operation here.

I started to have the desire to write programs in a more sophisticated way. I began to see that there were routines that I would write over and over again for different projects, and I wished there was a way for me to write code in what would now be called “components” or DLLs, so that it could be generalized and reused. I also wanted to be able to write programs where the parts were isolated from each other, so that I could make major revisions without having to change parts of the program which should not be concerned with how the part I changed was implemented.

I even started thinking of things in terms of systems a little. A project I had been working on since Jr. high was a set of programs I used to write, edit, and play Mad Libs on a computer. I noticed that it had a major emphasis on text, and I wondered if there was a way to generalize some of the code I had written for it into a “text system”, so that people could not only play this specific game, but they could also do other things with text. I didn’t have the slightest idea how to do that, but the idea was there.

By my senior year of high school I had started work on my biggest project yet, an app I had been wanting for a while, called “Week-In-Advance”. It was a weekly scheduler, but I wanted it to be user friendly, with a windowing interface. I was inspired by an Apple Lisa demo I had seen a couple years earlier. I spent months working on it. The code base was getting so big that I realized I had to break it up into subroutines, rather than one big long program, to make it manageable. I wrote it in Basic, back when all it had for branching was the Goto, Gosub, and Return commands. I used Gosub and Return to implement the subroutines.

I learned some advanced techniques in this project. One feature I spent a lot of time on was how to create expanding and closing windows on the screen. I tried a bunch of different animation techniques. Most of them were too slow to be useable. I finally figured out one day that a box on the screen was really just a set of four lines, two vertical, and two horizontal, and that I could make it expand efficiently by just taking the useable area of the screen, and dividing it horizontally and vertically by the number of times I would allow the window to expand, until it reached its full size. The horizontal lines were bounded by the vertical lines, and the vertical lines were bounded by the horizontals. It worked beautifully. I had generalized it enough so that I could close a window the same way, with some animated shrinking boxes. I just ran the “window expanding” routine in reverse by negating the parameters. This showed me the power of “systematizing” something, rather than using code written in a narrow-minded fashion to make one thing happen, without considering how else the code might be used.
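
Here is roughly that scheme as a C sketch (written after the fact; the screen dimensions, step count, and draw_box routine are stand-ins for whatever the real Basic graphics calls were):

#include <stdio.h>

#define SCREEN_W 320   /* illustrative work-area dimensions */
#define SCREEN_H 192
#define STEPS    8     /* number of animation frames allowed */

/* Stand-in for drawing a box as four lines (two vertical, two horizontal),
   each pair bounded by the other.  Here it just prints the corners. */
static void draw_box(int cx, int cy, int halfw, int halfh)
{
    printf("box: (%d,%d)-(%d,%d)\n",
           cx - halfw, cy - halfh, cx + halfw, cy + halfh);
}

int main(void)
{
    int cx = SCREEN_W / 2, cy = SCREEN_H / 2;   /* expand outward from the center */
    int stepw = (SCREEN_W / 2) / STEPS;         /* horizontal growth per frame */
    int steph = (SCREEN_H / 2) / STEPS;         /* vertical growth per frame */
    int i;

    /* Opening animation: the box grows by one step each frame. */
    for (i = 1; i <= STEPS; i++)
        draw_box(cx, cy, i * stepw, i * steph);

    /* Closing animation: the same routine run "in reverse". */
    for (i = STEPS; i >= 1; i--)
        draw_box(cx, cy, i * stepw, i * steph);

    return 0;
}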

The experience I had with using subroutines felt good. It kept my code under control. Soon after I wanted to learn Pascal, so I devoted time to doing that. It had procedures and functions as built-in constructs. It felt great to use them compared to my experience with Basic. The Basic I used had no scoping rules whatsoever. Pascal had them, and programming felt so much more manageable.

Getting a sense of the future

As I worked with computers more and talked to people about them in my teen years I got a wider sense that they were going to change our society, I thought in beneficial ways. I’ve tried to remember why I thought this. I remember my mom told me this in one of the conversations I had with her about them. Besides this, I think I was influenced by futurist literature, artwork, and science fiction. I had this vague idea that somehow in the future computers would help us understand our world better, and to become better thinkers. We would be smarter, better educated. We would be a better society for it. I had no idea how this would happen. I had a magical image of the whole thing that was deterministic and technology-centered.

It was a foregone conclusion that I would go to college. Both my mom and my maternal grandparents (the only ones I knew) wanted me to do it. My grandparents even offered to pay full expenses so long as I went to a liberal arts college.

In my senior year I was trying to decide what my major would be. I knew I wanted to get into a career that involved computer programming. I tried looking at my past experience, what kinds of projects I liked to work on. My interest seemed to be in application programming. I figured I was more business-oriented, not a hacker. The first major I researched was CIS (Computer Information Science/Systems). I looked at their curriculum and was uninspired. I felt uncertain about what to do. I had heard a little about the computer science curriculum at a few in-state universities, but it felt too theoretical for my taste. Finally, I consulted with my high school’s computer teacher, and we had a heart to heart talk. She (yes, the computer teacher was a woman) had known me all through my time there, because she ran the computer club. She advised me to go into computer science, because I would learn about how computers worked, and this would be valuable in my chosen career.

She had a computer science degree that she earned in the 1960s. She had an interesting life story. I remember she said she raised her family on a farm. At some point, I don’t know if it was before or after she had kids, she went to college, and was the only woman in the whole CS program. She told me stories about that time. She experienced blatant sexism. Male students would come up to her and say, “What are you doing here? Women don’t take computer science,” and, “You don’t belong here.” She told me about writing programs on punch cards (rubber bands were her friends 🙂 ), taking a program in to be run on the school’s mainframe, and waiting until 3am for it to run and to get her printout. I couldn’t imagine it.

I took her advice that I should take CS, kind of on faith that she was right.

Part 3.
