Archive for November, 2007

Going away for a while

Due to circumstances going on in my life right now I’m going to have to take a time-out from this blog. Seeing as how I won’t be able to monitor it well, and I’d like to keep spam comments from showing up, I’m turning on administrative approval for comments. So if you post a comment you won’t see it show up until I’ve had time to come around and approve it. Don’t worry if you posted something and you don’t see it for a while. I’ll get it out eventually.

Update 12/1/07: Having said this, I do intend to try to keep blogging here in the near future. It’s just that I’m going to be pretty busy with other things, and I may not have readily available internet access. It’s just something where I’ll have to see how it goes.

I am being vague because the situation is personal and private. I’m not in trouble or anything. It’s just life.

…playing off the name of an old Bangles tune…

I ran across a case in point for this, thanks to James Robertson’s blog. Steve Jones is talking about the current state of the art in the organization of IT software development:

So why do I choose to have strict contracts, static languages, early validation of everything and extremely rigorous standards applied to code, build and test? The answer was nicely highlighted by Bill Roth on his post around JavaOne Day 1, there are SIX MILLION Java developers out there. This is a clearly staggering number and means a few things.

  1. There are lots of jobs out there in Java
  2. Lots of those people are going to be rubbish or at least below average

But I think this Java groups (sic) is a good cross section of IT, it ranges from the truly talented at the stratosphere of IT down to the muppets who can’t even read the APIs so write their own functions to do something like “isEmpty” on a string.

The quality of dynamic language people out there takes a similar profile (as a quick trawl of the groups will show) and here in lies the problem with IT as art. If you have Da Vinci, Monet or even someone half-way decent who trained at the Royal College of Art then the art is pretty damned fine. But would you put a paintbrush in the hand of someone who you have to tell which way round it goes and still expect the same results?

IT as art works for the talented, which means it works as long as the talented are involved. As soon as the average developer comes in it quickly degrades and turns into a hype cycle with no progress and a huge amount of bugs. The top talented people are extremely rare in application support, that is where the average people live and if you are lucky a couple of the “alright” ones.

This is why the engineering and “I don’t trust you” approach to IT works well when you consider the full lifecycle of a solution or service. I want enforced contracts because I expect people to do something dumb maybe not on my project but certainly when it goes into support. I want static languages with extra-enforcement because I want to catch the stupidity as quickly as possible.

He concludes with:

This is the reason I am against dynamic languages and all of the fan-boy parts of IT. IT is no-longer a hackers paradise populated only by people with a background in Computer Science, it is a discipline for the masses and as such the IT technologies and standards that we adopt should recognise that stopping the majority doing something stupid is the goal, because the smart guys can always cope.

IT as art can be beautiful, but IT as engineering will last.

I like that Jones stresses testing. I realize that he is being pragmatic; he has to play the hand he’s been dealt. He is sadly mistaken, though, when he calls his approach “engineering” or a “discipline” in any modern sense of those words. Engineering today is done by skilled people, not “muppets”. The phrase “IT as art” is misleading, because it implies that the software was made the way it was to satisfy a developer’s vanity. Good architecture is nothing to sneer at (it’s part of engineering). His use of the phrase in this context shows how useless he thinks good software architecture is, because most people don’t understand it. Well, that sure is a good rationale, isn’t it? How would we like it if that sort of consideration were given to all of our buildings?

And yes, we can “cope”, but it’s insulting to our intelligence. It’s challenging in the sense that trying to build a house (a real one, not a play one) out of children’s building blocks is challenging.

I guess what I’m saying is, I understand what Jones is dealing with. My problem with what he said is that he seems to endorse it.

In a way this post is a sequel to a post I wrote a while ago, called “The future is not now”. Quoting Alan Kay from an interview he had with Stuart Feldman, published in ACM Queue in 2005 (this article keeps coming up for me):

If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. 

Egyptologists have found that the ancient Egyptians were able to build their majestic pyramids through their mastery of bureaucratic organization. They had a few skilled engineers, foremen, and artisans (skilled craftsmen). Most of the workforce was made up of laborers who could be trained to make mud bricks, to quarry and chisel stone into rectangular blocks, and to haul huge stones for miles. They could not be trusted to construct anything more sophisticated than a hut to live in by themselves, however. It took tens of thousands of laborers (though estimates vary wildly on the magnitude) 10-20 years to build the Great Pyramid at Giza. The process was inefficient, the building materials were difficult to work with (it took careful planning just to maneuver them), it used a LOT of material, and it was very expensive.

I don’t know about you, but I think this matches the description of Steve Jones’s organization, and many others like it. Not that it takes as many “laborers” or as long. What I mean is it’s bureaucratic and inefficient.

Alan Kay has pointed to what should be the ultimate goal of software development, in the story of the planning and building of the Empire State Building. I believe the reason he uses this example is that this structure is more than twice as tall as the ancient pyramids, uses a lot less building material, has very good structural integrity, used around 4,000 laborers, and took a little more than 1 year to build.

He sees that even our most advanced languages and software development skills today are not up to this level. When he rated where Smalltalk sits in this historical analogy, he said it’s at the level of the Gothic architecture of about 1,000 years ago. The problem is that the way software development is organized today is still at the level of ancient Egyptian pyramid architecture, which is about 4,500 years old.

I recently talked a bit with Alan Kay about his conception of programming in an e-mail conversation. I expected him to elaborate on his conception of OOP, but his response was “Well, I don’t think I have a real conception of programming”. His perspective is that “most of programming is really architectural design”. Even though he invented OOP, he said he doesn’t think strictly in terms of objects, though the model is useful. What this clarified for me is that OOP, functional, imperative, types, etc. are not what’s important (though late binding is, given our lack of engineering development). What’s important is architecture. Programming languages are just a means of expressing it.
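
As a small aside, here’s a toy illustration of why late binding helps keep a design malleable. This is my own sketch in Python, not anything from the e-mail exchange, and the logger classes in it are made up:

# A toy example of late binding (my own illustration): the caller names a
# message, and which code answers it can change while the system runs.
class FileLogger:
    def log(self, msg):
        print(f"file: {msg}")

class NetworkLogger:
    def log(self, msg):
        print(f"network: {msg}")

def run_job(logger):
    # run_job is bound only to the message name "log", not to a particular
    # class, so the implementation can be swapped without touching or
    # recompiling this code.
    logger.log("job started")
    logger.log("job finished")

run_job(FileLogger())
run_job(NetworkLogger())   # same caller, different behavior, no rebuild

The caller commits only to the name of a message, not to a particular implementation, so the thing that answers the message can be replaced while the system is running. Given that we haven’t figured out computing yet, that kind of malleability matters more than which paradigm the language advertises.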

As I mentioned in my last Reminiscing post, programming is a description of our models. It’s how we construct them. Buildings are built using appropriate architecture. We should try to do the same with our software.

What Kay was getting at is our languages should reflect a computing model that is well designed. The computing model should be about something coherent. The language should be easy to use by those who understand the computing model, and enable everything that the computing model is designed for. Frameworks created in the language should share the same characteristics. What this also implies is that since we have not figured out computing yet, as a whole, we still have to make choices. We can either choose an existing computing model if it fits the problem domain well, or create our own. That last part will be intimidating to a lot of people. I have to admit I have no experience in this yet, but I can tell this is the direction I’m heading in.

The difference between the pyramids and the Empire State Building is architecture, with all of the complementary knowledge that goes with it, and building materials.

The problem is when it comes to training programmers, computer science has ignored this important aspect for a long time.

Discussing CS today is kind of beside the point, though. As Jones indicates, most programmers these days have probably not studied computer science. The traditional CS curriculum is on the ropes. It abandoned the ideas of systems architecture a long time ago. Efforts are being made to revive some watered-down version of CS via specialization. So in a way it’s starting all over again.

So what do the books you can buy at the bookstore teach us? Languages and frameworks. We program inside of architecture, though as Kay said in his response to me, “most of what is wrong with programming today is that it is not at all about architectural design but just about tinkering a few effectors to add on to an already disastrous mess”. One could call it “architecture of a sort”…

In our situation ignorance is not bliss. It makes our jobs harder, it causes lots of frustration, it’s wasteful, and looked at objectively, it’s…well, “stupid is as stupid does”.

Did you ever wonder how Ruby on Rails achieves such elegant solutions, and how it saves developers a lot of time for a certain class of problems? How does Seaside achieve high levels of productivity, with its almost miraculous ability to let developers build web applications without having to worry about session state? The answer is architectural design, both in the computing/language system that’s used, and in the framework that’s implemented in it.
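
To make that a little more concrete, here’s a rough sketch of the continuation idea Seaside is built around. This is only an analogy in Python, not Seaside’s actual Smalltalk implementation (Seaside captures real continuations); the checkout flow, the sessions table, and every name in it are made up for illustration:

# A toy continuation-style flow (an analogy, not Seaside): each yield
# "renders a page" and suspends the flow; the framework stores the
# suspended generator and resumes it when the user answers.
import uuid

sessions = {}  # token -> suspended generator (the "continuation")

def checkout_flow():
    # A multi-page flow written as ordinary sequential code.
    name = yield "Page 1: What is your name?"
    qty = yield f"Page 2: How many widgets, {name}?"
    yield f"Page 3: Confirmed {qty} widget(s) for {name}. Thanks!"

def start(flow):
    gen = flow()
    token = str(uuid.uuid4())
    sessions[token] = gen
    return token, next(gen)      # render the first page

def respond(token, answer):
    gen = sessions[token]        # look up the suspended flow
    return gen.send(answer)      # resume it right where it left off

# The "user" walks through three requests; the flow code itself never
# reads or writes any explicit session variables.
token, page = start(checkout_flow)
print(page)                      # Page 1
print(respond(token, "Ada"))     # Page 2
print(respond(token, 3))         # Page 3

The flow reads top to bottom like ordinary sequential code, even though it spans three separate requests. The framework, not the application programmer, keeps track of where each user is. That’s an architectural property of the system, not a feature bolted on afterward.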

The problem of mass ignorance 

I had a conversation a while back with a long-time friend and former co-worker along the lines of what Jones was talking about. He told me that he thought it was a waste of time to learn advanced material like I’ve been doing, because in the long run it isn’t going to do anyone any good. You can build your “masterful creation” wherever you go, but when you leave–and you WILL leave–whoever they bring in next is not going to be as skilled as you, won’t understand how you did what you did, and they’re probably going to end up rewriting everything you created in some form that’s more understandable to themselves and most other programmers. I had to admit that in most circumstances he’s probably right, in terms of the labor dynamic. He does like to learn, but he’s pragmatic. Nevertheless this is a sad commentary on our times: Don’t try to be too smart. Don’t try to be educated. Your artifacts will be misunderstood and misused.

I’m not trying to be smarter than anyone else. I’m genuinely interested in this stuff. If that somehow makes me “smarter” than most programmers out there (which I wish it didn’t), so be it.

History shows that eventually this got worked out in Western civilization. Eventually the knowledge necessary to build efficient, elegant, strong structures was learned by more than a few isolated individuals. It got adopted and taught in schools, and architecture, design, and building techniques have improved as a result, improving life for more people. The same thing needs to happen for software.

I think what’s needed is a better CS curriculum, one that recognizes the importance of architectural design, that treats computing systems as abstractions (models that are universal), and, most importantly, that gets back to seeing CS as “a work in progress”. What distresses me, among other things, is hearing that some people believe CS is now an engineering discipline, as in “stick a fork in it, and call it done.” That is so far from the truth it’s not even funny.

Secondly, and this is equally important, the way business manages software development has got to change. Everything depends on this. Even if the CS curriculum were modified to what I describe, if business management doesn’t change, then what I’m talking about here is just academic. The current management mentality, which has existed for decades, puts the emphasis on code, not design. What they’re missing is that design is where the real bang for the buck comes from.

—Mark Miller, https://tekkie.wordpress.com

If you are offended easily, it would be best just to skip the following link. If you want to see an indication of what the pop culture in computing is centered around today, this is pretty good (h/t to James Robertson). There’s a little rough language. It’s a Web 2.0 “catchy-phrase” generator. Amazing how a computer can come up with stuff that sounds just like sales material, isn’t it?

The pop culture

This is my last post in this series. In the other parts I’ve written positively about my experience growing up in the computing culture that our society has created. Now, bringing this full circle, I will be looking at it with a critical eye, because I’ve come to realize that some things have been missing. If you compare what I’ve been talking about with the work of Engelbart and Kay, from the 1960s and 70s, I think you’ll see that not much progress has been made since then. In fact, this business began as a retrogression from what had been developed. Some of what had been invented and advocated for has been misunderstood. Some of it was adopted for a time but pushed aside later. Some has remained in a watered-down form. I’ve drawn analogies before to the amazing things that were invented more than 30 years ago, and I’m going to make another one: they were like Hero of Alexandria’s steam engine. The culture was not ready to understand them, and so they didn’t go much of anywhere, despite their tremendous potential.

Technological progress has been made pretty much in a brute force, slog-it-out fashion. Eventually things get refined, and worked out, but mainly by “reinventing the wheel”.

Quoting from an interview with Alan Kay in ACM Queue published in 2005:

In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.

So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

It was a different culture in the ’60s and ’70s; the ARPA (Advanced Research Projects Agency) and PARC culture was basically a mathematical/scientific kind of culture and was interested in scaling, and of course, the Internet was an exercise in scaling. There are just two different worlds, and I don’t think it’s even that helpful for people from one world to complain about the other world—like people from a literary culture complaining about the majority of the world that doesn’t read for ideas. It’s futile.

I don’t spend time complaining about this stuff, because what happened in the last 20 years is quite normal, even though it was unfortunate. Once you have something that grows faster than education grows, you’re always going to get a pop culture.

The way he defined this pop culture is “accidents of history” and expedient gap-filling. He said, for example, that BASIC became popular because it was implemented on a GE time-sharing system, which GE decided to make accessible to many people. That made BASIC very popular just by virtue of being widely available, not because it had any particular merit. Another, more modern example he gave was Perl, which has been used to fill short-term needs, but doesn’t scale. In a speech (video) he gave 10 years ago he called the web browser a terrible idea, because it locks everything down to a standardized protocol, rather than having data come with a translator, which would make the protocol more flexible. It got adopted anyway, because it was available and increased access to information. Interestingly, I get the impression he doesn’t agree much with the open source movement’s mantra of “standards”. Not to say he dislikes open source; I think he’s very much in favor of it. My sense is, with a few exceptions, he thinks we’re not ready for standards, because we don’t even have a good idea of what computing is good for yet. So things need to remain malleable.

I think part of what he’s getting at with his comments on the “pop culture” is a focus on features like availability, licensing, and functionality and/or expressiveness. Features excite people because they solve immediate problems, but since by and large we don’t look at architectures critically, this leads to problems down the road.

We tend to not think in terms of structures that can expand and grow with time, as requirements change, though the web has been forcing some developers to think about scalability issues, in concert with Information Systems. Unfortunately all too often we’re using technologies that make these goals difficult to accomplish, because we’ve ignored ideas that would make it easier. Instead of looking at them we’ve tried to adapt architectures that were meant for other purposes to new purposes, because the old model is more familiar. Adding on to an old architecture is seen as cheaper than trying to invent something more appropriate. Up front costs are all that count.

I’m going to use a broader definition of “pop culture” here. I think it’s a combination of things:

  • As Kay implied, an emphasis on “features”. It gets people thinking in terms of parts of computing, not the whole experience.
  • An emphasis on using the machine to accomplish tasks through applications, rather than using it to run simulations on your ideas, though there have been some applications that have brought out this idea in domain-specific ways, like word processing and spreadsheets.
  • It’s about an image: “this technology will define you.” Of course, this is an integral part of advertising. It’s part of how companies get you interested in products.
  • A technology is imbued with qualities it really doesn’t have. Another name for this is “hype”.

The problem with the pop culture is it misdirects people from what’s really powerful about computing. I don’t see this as a deliberate attempt to hide these properties. I think most vendors believed the messages they were selling. They just didn’t see what’s truly powerful about the idea of computing. Customers related to these messages readily.

The pop culture has defined what computing is for years. It has infected the whole culture around it. According to Kay, there’s literally been a dumbing down of computer science as a result. This has continued on into the age of the internet. You hear some people complain about it with “Web 2.0” now. Personally I think LOLCode is a repetition of this cycle. Why do I point this out? Microsoft is porting it to .Net…Great. Just what we need. Another version of BASIC for people to glom onto.

The truth is it’s not about defining who you are. It’s not about this or that feature. That’s what I’m gradually waking up to myself. It’s about what computers and software really can be–an extension of the mind.

The “early years”

I found a bunch of vintage ads, and a few that are recent, which in some ways document, but also demonstrate, this pop culture as it’s evolved. I’ve also tried to convey the multiplicity of computer platforms that used to exist. I couldn’t find ads for every brand of computer that was out there at the time. Out of these, only a couple give a glimpse of the power of computing. As I looked at them I was reminded that computer companies often hired celebrities to pitch their machines. A lot of the ads focused on how low-priced their computers were. The exceptions were usually IBM and Apple.

The pitch at that time was to sell people “a computer”, no matter what it was. You’ll notice that they had different target audiences: some for adults, some for businesspeople, and some for kids. There was no sense until the mid-1980s that software compatibility mattered.

I went kind of wild with the videos here. If a video doesn’t seem to play properly, refresh the page and try again. I’ve tried to put these in a semi-chronological order of when they came out, just so you can kind of see what was going on when.

Dick Cavett pitches the Apple II

Well, at least she’s using the machine to think thoughts she probably didn’t before. That’s a positive message.

William Shatner sells the Commodore Vic-20

Beam into the future…

The Tandy/Radio Shack TRS-80 Color Computer

The Tandy TRS-80 Model 4 computer

Tandy/Radio Shack (where the “TRS” moniker came from) had (I think) 4 “model” business computers. This was #4. The Model I looked more like the Color Computer in configuration, but all the other ones looked like the Model 4 shown here. I think they just had different configurations in the case, like number of disk drives, and the amount of memory they came with.

The IBM PC

The one that started it all, so to speak. Want to know who your Windows PC’s granddaddy was? This was it. Want to know how Microsoft made its big break? This was it.

These ads didn’t feature a celebrity, per se, but an actor playing Charlie Chaplin.

The Kaypro computer

As I recall, the Kaypro ran Digital Research’s CP/M operating system.

This reminds me of something that was very real at the time. A lot of places tried to lure you in by advertising the price of the computer alone, with nothing else included. The monitor was extra, for example. So as the ad says, a lot of times you’d end up paying more than the list price for a complete system.

The Commodore 64

What’s funny is I actually wrote a BASIC program on the Apple II once that did just what this ad says. I wrote a computer comparison program. It would ask for specifications on each machine, weight them based on importance (subjective on my part), and make a recommendation about which was the better one. I don’t remember if I was inspired by this ad or not.

Bill Cosby pitches the TI-99/4a with a $100 rebate

Kevin Costner appears for the Apple Lisa

This ad is from 1983.

“I’ll be home for breakfast.” Okay, is he coming back in later, or did he just come to his desk for a minute and call it a day?

The Apple Macintosh

This ad is from 1984. I remember seeing it a lot. I loved the way they dropped the IBM manuals on the table and you could literally see the IBM monitor bounce a little from the THUD. 😀

“1984”

Of course, who could forget this ad. Ridley Scott’s now-famous Super Bowl ad for the Macintosh.

Even though it’s a historical piece, this pop culture push for the Macintosh was unfortunate, because it made the Mac into some sort of “liberating” social cause (in this case, against IBM). It’s a piece that’s more useful for historical commentary (like this, I guess…).

Ah yes, this is more like it:

This ad gave a glimpse of the true potential of computing to the audience of this era–create, simulate whatever you want. Unfortunately it’s a little hard to see the details.

Alan Alda sells the Atari XL 8-bit series

Alan Alda did a whole series of ads like this for Atari in 1984. I gotta say, this ad’s clever. I especially liked the ending. “What’s a typewriter?” the girl asks. Indeed!

The Coleco Adam

This is the only ad video I could find for it. It features 4 ads for the Colecovision game console and the Coleco Adam computer. Yep, the video game companies were getting in on the fun, too. Well, what am I saying? Atari got into this a long time before. I remember that the Adam came in two configurations. You could get it as a stand-alone system, or you could get it as an add-on to the Colecovision game console. The console had a front expansion slot you could plug the “add-on” computer into.

What I remember is the tape you see the kid loading into the machine was a special kind of cassette, called a “streaming tape”. Even though data access was sequential, supposedly it could access data randomly on the tape about as fast as a disk drive could access data on a floppy disk. Pretty fast.

According to a little research I did, the Adam ran Digital Research’s CP/M operating system.

Microsoft Windows 1.0 with Steve Ballmer

This ad is from 1986. Steve Ballmer back when he had (some) hair. 🙂 This is the only software ad of any significance during this era. I don’t remember seeing this on TV. Maybe it was an early promo at Microsoft.

Atari Mega ST

I never saw TV ads for the Atari STs. I’ve only seen them online. In classic “Commodore 64” fashion, this one compares the Atari Mega ST to the Mac SE on features and price. Jack Tramiel’s influence, for sure.

The Commodore Amiga 500

A TON of celebrities for this one!

Pop culture: The next generation

As with the above set, there are more I wish I could show here, but I couldn’t find them. These are ads from the 1990s and beyond.

“Industrial Revelation”, by Apple

I like the speech. It’s inspiring, but…what does this have to do with computing?? It sounds more like a promo ad for a new management technique, and a little for globalization. This was a time when Apple started doing a lot of ads like this. It was getting ridiculous. I remember an ad they put on when the movie Independence Day came out, because the Mac Powerbook was featured in it: “You can save the world, if you have the right computer”. A total pop culture distraction.

IBM OS/2 Warp

Windows 95

The one that “started” it all–“Start me up!” What they didn’t tell us was “then reboot!”

“I Am” ad for Lotus Domino, from IBM

Wow. So Lotus Domino makes you important, huh? Neat ad, but I think people who actually used Domino wouldn’t have said it turned them into “superman”. What was nice about it was that you could store pertinent information in a reasonably easy-to-access database that other people could get to, rather than using an e-mail system as your information “database”. Unfortunately the ad never communicated this. Instead it tried to pass the product off as some “empowerment” tool. Domino was okay for what it did. I think it’s a concept that could be improved upon.

IBM Linux ad

So…Linux is artificially intelligent? I realize they’re talking about properties of open source software, but this is clearly hype-inducing.

The following isn’t an ad, but it’s pop culture. It’s touting features, and it claims that technology defines us. It doesn’t really talk about computing as a whole, but it claims to.

The Machine is Us/ing Us
by Michael Wesch, Assistant Professor of Cultural Anthropology,
Kansas State University

I think it’s less “the machine” than “Google is using us”. Tagging is a neat cataloging technique, but each link “teaches the machine a new idea”? Gimme a break!

So what are you saying?

In short, for one, I’m saying, “Don’t believe the hype!”

Tron is one of my favorite movies of all time. I listened to an interview with its creator, Steven Lisberger, that was done several years ago. He said now that the public had discovered the internet, there was a “stepping back” from it, to look at things from a broader view. He said people were saying, “Now that we have this great technology, what do we do with it?” He said he thought it was a good time for artists to revisit this subject (hinting at a Tron sequel, which so far hasn’t shown up). This is what I loved. He said of the internet, “It’s got to be something more than a glorified phone system.”

For those who have the time to ponder such things, like I’ve had lately, I think we need to be asking ourselves what we’re creating. Are we truly advancing the state of the art? Are we reinventing what’s already been invented, as Alan Kay says often? Are we regressing, even? Are we automating what used to exist in a different medium? Think about that one. Kay has pointed this out often, too.

What is a static web page, a word processing document, or a text file? It’s the automation of paper, an old medium. What’s new with it is hypertext, though as I mention below, Douglas Engelbart had already done something similar decades before. Form fields and buttons are very old. The IBM 3270 terminal had these features for decades (there was a key for the “submit” function on the terminal keyboard).

What is a web browser? It’s like a 3270 terminal, a technology created by IBM in the 1970s, adapted to the GUI world, with the addition of componentized client-side plug-ins.

What is blogging? It’s a special case of a static web site. It’s a diary done in public, or at best an automation of an old-fashioned bulletin board (I’m talking about the cork kind), or newsletter.

What is an e-commerce site? It’s the automation of mail order, which has existed since the 19th century.

What about search? It’s a little new, because it’s distributed, but Engelbart already invented a searchable system that used hypertext in the late 1960s.

What is content management? It’s the automation of copy editing. Magazines and newspapers have been doing it for eons, even when you take internationalization into account. It takes it from an analog medium, and brings it to the internet where everything is updated dynamically. I imagine it adds to this the idea of content for certain groups of viewers (subscribers vs. non-subscribers, for example). A little something new.

What about collaborative computing over a network, with video? Nope. Douglas Engelbart invented that in the late 1960s, too.

What about online video? It’s the automation of the distribution of home movies. The fact that you can embed videos into other media is fairly new (a special case), but the idea that you could bring various media together into a new form existed in the Smalltalk system in the 1970s. It’s just been a matter of processing power that’s enabled digital video to be more widespread. Network streaming is new, though, to the best of my memory.

What about VOIP? It’s just another version of a phone system, or a PBX.

What about digital photography and video (like from a camcorder)? I don’t really have to explain this one, because you already know what I’ll say about it. The part that’s really new is you don’t have to wait to get your photos developed, and you can see pretty much how the video will look while you’re taking it. I’ll put these under the “something new” column, because it really facilitates experimentation, something that was difficult with film.

What about digital music? Same thing. The only things new about it are it’s easier to manipulate (if it’s not inhibited by DRM), and your player doesn’t have to have any moving parts. The main bonus with it is since it’s not analog its quality doesn’t degrade with time.

What about photo and video editing? Professionals have done this in analog media for years.

And there’s more.

Are you getting the impression that this is like that episode of South Park called “The Simpsons Already Did It”?

South Park: Blocking out the Sun

There’s not a whole lot new going on here. It just seems like it because all of the old stuff is being brought into a new medium, and/or the hardware has gotten powerful and cheap enough to have finally brought old inventions to a level that most people can access.

The pop culture’s mantra: accessibility

The impression I get is a lot of what people have bought into with computing is accessibility, but to what? When you talk to a lot of people about open source software they think it’s about this as well, along with community.

Here’s what’s been going on: Take what used to be exclusive to professionals and open it up to a lot more people through technology and economies of scale. It’s as if all people have cared about is porting the old thing to the new cheaper, more compact thing just so we can pat ourselves on the back for democratizing it and say more people can do the old thing. Accessibility and community are wonderful, but they’re not the whole enchilada! They are just slices.

What’s missing?

From what I’ve heard Alan Kay say, what’s not being invested in so much is creating systems where people can simulate their ideas, experiment with them, and see them play out; or look at a “thought model” from multiple perspectives. There are some isolated areas where this has been with us for a long time: word processing, spreadsheets, desktop publishing, and later photo and video editing. Some of these have been attempts to take something old and make it better, but the inspiration has always been some old medium.

One guess as to why Kay’s vision hasn’t become widespread is that there hasn’t been demand for it. Why? Because our educational system, and aspects of our society, have not encouraged us to think in terms of modeling our ideas, except in the arts, and perhaps the world of finance. Making this more broad-based will require innovation in thought as well as technology.

The following are my own impressions about what Kay is after. It seems to me what he wants to do is make modes of thought more accessible to the masses. This way we won’t have to depend on professional analysts to make our conclusions for us about some complex subject that affects us. We can create our own models, our own analyses, and make decisions based on them. This requires a different kind of education though, a different view of learning, research, and decision-making, because if we’re going to make our own models, we have to have enough of a basis in the facts so that our models have some grounding in reality. Otherwise we’re just deceiving ourselves.

What Kay has been after is a safe computing environment that works like we think. He sees computers as a medium, one that is interactive and dynamic, where you can see the effects of what you’re trying to do immediately; a medium where what it synthesizes is under our control.

He’s never liked applications, because they create stovepipes of data and functionality. In most cases they can’t be expanded by the user. He envisions a system where data can be adapted to multiple uses, and where needed functionality can be added as the user wants. Data maintains some control over how it is accessed, to free the user from having to interpret its format/structure, but other than that, the user is in control of how it’s used.

It seems like the lodestar he’s been after is the moment when people find a way to communicate that is only possible with a computer. He’s not talking about blogging, e-mail, the web, etc. As I said above, those are basically automations of old media. He’s thinking about using the computer as a truly new medium for communication.

He believes that all of these qualities of computing should be brought to programmers. In fact everyone should know how to program, in a way that’s understandable to us, because programming is a description of our models. A program is a simulation of a model. The environment it runs in should have the same adaptive qualities as described above.

What’s hindered progress in computing is how we think about it. We shouldn’t think of it as a “new old thing”. It’s truly a new thing. In Engelbart’s and Kay’s view it’s an extension of our minds. We should try to treat it as such. When we succeed at it, we will realize what is truly powerful about it.

A caveat that Kay and others have put forward is that we don’t understand computing yet. The work that Engelbart and Kay have done has been a series of steps along the way towards realizing this, but we’re not at the endpoint. Not by a long shot. There’s lots of work to be done.

—Mark Miller, https://tekkie.wordpress.com

Reminiscing, Part 5

Going Retro

Here are some modern “retro” videos I found that harken back to the 8- and 16-bit era. I just thought they were neat.

“Move Your Feet,” by Junior Senior

A friend introduced me to this music video a few years ago. It harkens back to the olden days of blocky but colorful 8-bit graphics. I’m sure none of these graphics were generated on an actual 8-bit computer. Anyone know what they used?

“Atari Robot Demo,” by Alistair Bowness

This is a clever piece, splicing together video from a bunch of 8-bit games using professional video editing.

In the very beginning you see a demonstration of what Atari users actually had to do to load a binary program. You had to go to a DOS (Disk Operating System) menu, which you loaded from disk, select “Binary Load”, and then type the name of the machine language program you wanted to run, at a prompt. The rapid-fire, high-pitched “beeps” you hear are the actual sound you’d hear out the TV or monitor speaker as the program loaded into memory from floppy disk.

It starts out with video of a real Atari 8-bit demo (video link) that Atari used at trade shows to show off the Atari XE series. Bowness took the music from the real demo and created a remix of it with a modern bass and drum track.

Atari and Commodore 8-bit games are shown, along with some coin-op games. There’s a little Commodore Amiga stuff in there, too.

It surprised me that the video included an Atari 8-bit game called “Dropzone.” It flies by rather quickly. It’s the screen before Dig Dug shows up. I was surprised because my memory is this was a little known game. I tried it out in college. The graphics performance on it was amazing. It was fast–too fast. The movement of graphics around the screen was very smooth. I remember I got killed all the time because I had a difficult time calibrating my movements on the joystick with what was going on onscreen. I’d make dumb moves, crash into stuff and die. Some of the graphics in it were a dead-ringer for the coin-op “Defender” game.

“Video Computer System,” by Golden Shower

I read how they made this song and video. “Video Computer System” is an actual music title you can get from Golden Shower. What they did was record real 8-bit game sounds, digitize them into their music system, and use them in the song, along with some 8-bit-like synth sounds they created themselves. The graphics for the video were created entirely in Photoshop, and maybe some other modern graphics packages. The 8-bit-like graphics look an awful lot like the real thing, but none of it was generated on an 8-bit machine.

This is a video of a small personal computer museum in Canada. Just about all of the computers that came out in the 1970s and 1980s can be seen, and used by patrons. You see some kids playing some of the old video games.

As you can see, a LOT of different personal computers were created in this time period. It shows you what I meant when I said earlier that everybody and his brother was coming out with their own personal computer in the early to mid-1980s.

I said earlier in Part 4 that this would be my last post. I lied…I’ve got one more coming up.

Atari ST

While in college I got an Atari Mega STe, around 1992. It had a 16-bit Motorola 68000 CPU, and was soft-switchable between 8 Mhz and 16 Mhz. I got it with 1 MB of RAM. I eventually upgraded it to 4 MB, and added an internal 40 MB hard drive. That was kind of the average size at the time.

Some history

Jack Tramiel, the founder of Commodore Business Machines (the maker of the Commodore 8-bit and 16-bit computers), dramatically left his company and bought Atari from Warner Communications in 1984. Commodore and Atari were fierce competitors, so he was now competing against his own handiwork. After he took over, Atari produced new models: the XE 8-bit series, and the ST series. The first ST models came out in 1985, just a year after the Apple Macintosh was introduced. They ran on the Motorola 68000 CPU at 8 Mhz, using Digital Research’s Graphical Environment Manager (GEM) for the desktop interface. The ST and the Commodore Amiga came out around the same time. Both featured GUIs with a similar “look and feel” to the Macintosh, the main difference being that they had color displays, whereas the Mac ran in monochrome. An exciting feature of the Amiga was its pre-emptive multitasking; it was the first personal computer I knew of to have that, though I recently discovered that the Apple Lisa, which had been released a couple years earlier, could multitask as well.

Here’s a Computer Chronicles episode from 1985 that profiled the Atari ST:

(Update 11/17/2012: I originally had a different episode of Computer Chronicles here, showing both the Atari ST and the Commodore Amiga. You can see it here.)

The ST developed a niche with musicians. I wasn’t a musician, so I didn’t buy one for that reason, but I used to hear about some famous musical artists who used STs in the recording studio and in live concerts. The most prominent of them was Tangerine Dream. The ST also had a 3-voice Yamaha synthesizer chip. From what I understand it was similar to the sound chip used in the TI-99/4A personal computer that was sold in the early 1980s.

The STe series had 8-bit stereo PCM sound capability as well, similar to what was available on the Commodore Amiga, though the Amiga had more sound channels.

Productivity

At first I used my STe mainly for connecting up to the school’s computers. I finally got decent 80-column ANSI terminal emulation. I also used it to write a few papers, using what was called a “document processor” at the time, named “Word Up.” My main accomplishment on it was writing an article for a magazine called Current Notes.

Programming

After I got my bachelor’s degree I did some C coding on it, using Sozobon C. Of all the computers I had used up to that point, I did the least amount of programming on it. I always intended to get into programming GEM, but I never got around to it.

I started into something that really interested me though. A few years earlier I had been introduced to the idea that the solid inner planets in the solar system had been formed by asteroid collisions. I was curious about exploring this idea, so I created a “gravity simulator” in C, using Newtonian physics to see if I could recreate this effect, or just see what would happen. I got some basic mechanics going, creating orbits with dots on the screen, but it wasn’t very exciting. For the first time the speed limitations of the hardware were holding me back. It was able to handle maybe 10 or 20 gravitational objects on the screen at once, and after that it really started to bog down. I continued this exploration in C++ when I got my first Windows machine some years later, to better effect.
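
For anyone curious, here’s a rough sketch of the kind of Newtonian loop I mean. It’s not my original C code, just a small Python version of the same idea, with made-up constants:

# A bare-bones 2D Newtonian gravity step (an illustration, not the
# original program). Every body pulls on every other body.
import math

G = 1.0      # gravitational constant in toy units
DT = 0.01    # time step

class Body:
    def __init__(self, x, y, vx, vy, mass):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        self.mass = mass

def step(bodies):
    # Accumulate the acceleration on each body from all the others.
    acc = [(0.0, 0.0)] * len(bodies)
    for i, a in enumerate(bodies):
        ax = ay = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx, dy = b.x - a.x, b.y - a.y
            r2 = dx * dx + dy * dy + 1e-9   # soften to avoid divide-by-zero
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            ax += G * b.mass * dx * inv_r3
            ay += G * b.mass * dy * inv_r3
        acc[i] = (ax, ay)
    # Simple Euler integration: update velocities, then positions.
    for (ax, ay), body in zip(acc, bodies):
        body.vx += ax * DT
        body.vy += ay * DT
        body.x += body.vx * DT
        body.y += body.vy * DT

# A heavy central body and a small one in a rough orbit.
bodies = [Body(0, 0, 0, 0, 1000.0), Body(10, 0, 0, 10.0, 1.0)]
for _ in range(1000):
    step(bodies)
print(bodies[1].x, bodies[1].y)

Every body has to be checked against every other body on every step, so the work grows with the square of the number of objects. That’s why a couple dozen dots were enough to bog down an 8 or 16 Mhz 68000.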

MiNTing an Atari

In the meantime I had found out about an open source adjunct kernel for the ST called MiNT (a recursive acronym for “MiNT is Not TOS”), written by Eric Smith. The term “open source” didn’t exist back then. There was just GNU, and “free software” with source code available. “TOS” was the name of the ST’s operating system. It literally stood for “The Operating System.” Original, huh? MiNT offered a Unix work-alike environment. It had a complete C/Unix API. It came with a lot of the amenities, too, like Bourne shell, GCC, UUCP, NNTP, rn, mail, man pages, X/Windows, etc. A lot of the GNU stuff had ports that worked on the ST. I remember taking the semester project I had written in a graduate-level course on compilers, and compiling it on MiNT using GCC, Flex, and Bison.

It implemented pre-emptive multitasking. This only worked inside the MiNT environment, and it only went so far. I quickly realized that even though I could run multiple simultaneous processes, if I ran too many (more than two or three) it would grind down the machine. Since it was an adjunct kernel, I could still run a GEM program at the same time. GEM only offered a single-tasking environment. MiNT didn’t help there.

GEM had a feature for a special kind of program called a “desk accessory.” The Macintosh had them, too. You could call one up from the “Atari”/”Fuji” menu. A desk accessory called “TOSWin” was my friend. It enabled me to run a MiNT and GEM session simultaneously. For some reason GEM allowed any desk accessory to run side by side, simultaneously with a GEM application, as if it was multitasking, but it didn’t allow you to run more than one GEM application at a time. I remember one of the things I did with TOSWin was I’d sometimes call a BBS that was really busy. Instead of running my GEM terminal program and letting it redial continuously, making it impossible for me to do anything else, I wrote a shell script in MiNT that would redial the number for me until I got a connection. In the meantime I could run another task in GEM while I waited.

The only thing MiNT really lacked was virtual memory. It was not able to page to disk, so I was limited to a maximum of 4 MB in which processes could run. For what I did with it I don’t think I ever hit a memory ceiling. It did relocate programs, in the sense that it ran them using relative addressing. It also had a unique way of handling multiple instances of the same program in memory: it created a new data area and a new program counter for each instance, and each program counter iterated over the same in-memory program image. That saved the memory that redundant copies of the code would have taken up.

It eventually got to the point where I was working in MiNT most of the time, and I was using GEM hardly at all.

Soon after I got into this stuff I realized that MiNT was not unique on the 68000. A Unix-like OS called OS-9 had been around for years on the Radio Shack Color Computer, which used Motorola’s 8-bit 6809 CPU. There was also a version of OS-9 for the 68000. MiNT was a lot like it.

Just a note of trivia. Eric Smith came to work at Atari around 1991/92 and worked on the operating system for the Falcon 030. The end result was MultiTOS. MiNT was the new kernel (it was renamed “MiNT is Now TOS”). A new GUI layer was written on top of it, called XAES, which enabled users to run multiple GEM programs at once. The OS was loaded from disk instead of ROMs.

A bit about the Falcon

The Falcon was a pinnacle achievement for Atari. It had some things going for it. Like the TT, their first 32-bit model, it produced VGA-compatible video, though it had 16-bit color. The TT had 8-bit color. One of its features that attracted a lot of developer interest initially was its digital signal processing (DSP) chip; the same one that came in NeXT computers. The problem was it was too little, too late. The market for Atari computers was dying by the time it was released. Atari underpowered the Falcon, forcing its 68030 CPU to run at 16 Mhz so it wouldn’t compete with the TT, which ran at 32 Mhz. Atari made the Falcon for about a year. They stopped producing computers altogether in late 1993.

Games

These are the games I can remember playing a lot on my STe.

Llamatron, by Jeff Minter, Llamasoft

This video was put up by someone else. This game was a version of Robotron 2084, except you’re a…um, a llama. The game was freeware, though Minter took donations. It was ported to a few platforms. I know it ran on the Amiga, and PC as well. I thought it was incredibly creative. Each level was something new. A great feature of it is Minter knew you’d get beat by it. So he put in several “Do you want to continue?” prompts whenever you lost your last life, so you could get a new set of lives and continue. The point was to have fun with the game, not to lose and get discouraged and have to start all over.

MIDI Maze, by Xanth Software F/X and Hybrid Arts

I just had to mention this one, even though I only played it once. Oh my god! This game was SO MUCH FUN! 🙂 It was the first 3D multiplayer networked first-person shooter I ever played. It was released in 1987. Every player was a round smiley face of a different color. The only opportunity I got to play it was at a computer show at a mall. I can’t remember who set it up, maybe a user group. A bunch of STs were set up, all networked together via their MIDI ports (hence the name of the game). The early STs had no built-in networking capability, but someone figured out how to use the ST’s MIDI ports as network ports. No special adapters were required. Just plain ol’ MIDI cables were used to daisy-chain a bunch of STs together.

We had to find each other in the maze and shoot at each other. All the mazes were single-level. Scoring was done on the musical staff you see in the upper-right corner of the screen. Once you got up to the G note, you won. 🙂

Fire and Ice, by Graftgold

I only had a demo of this from a disk I got out of ST Format magazine. I played it often anyway. You play a character named “Cool Coyote.” It was an entrancing platform game. The map I played is the one you see in the video above. It kind of felt like you could take your time. Even the music was calming. You’re moving in kind of slow motion through the water, freezing and then smashing sea creatures to get pieces of a key you need to escape the map. Along the way you pick up score items like gems, stars, and dog bones. The audio is a bit out of sync with the video. I couldn’t help that, unfortunately.

F-15 Strike Eagle II

F-15 Strike Eagle II, by Microprose

This was my flight simulator. It was a port of the version from the PC, so it wasn’t great. Everything was represented by non-shaded, non-texture-mapped polygons, which was how most games did 3D on the ST series. You went on bombing missions of enemy targets, taking off and landing from various locations. Along the way you had to evade SAMs and enemy jet fighters trying to shoot you down. The most “exciting” location you could land at was an aircraft carrier. I hated it. I could land the plane fine, but unless I hit the deck right where the landing strip began, I risked going all the way across it as I slowed to a stop, thinking I had made a successful landing, and then tipping over the opposite edge “into the drink.” I could have a completely successful mission, and then “fail” on the landing like this. I couldn’t tell if this was a bug, or if they made it this way on purpose.

The ones that got away

I always admired Starglider and Starglider II. I saw others play them. For some reason I never got them for myself. Maybe they had gone out of print by the time I got my STe.

Demos

Like I was saying in Part 1, I really liked the demos that were works of art. They were usually created at “demo meets” where coders would get together for annual conventions, and they’d either prepare the demos beforehand, or make them on-site. They were released for free on the internet. I used to download tons of them. They’d usually come compressed into disk images, which I’d have to decompress to one or more floppy disks, and then I’d try to run them. That was the thing. Sometimes they’d use tricks that were so specific to the machine they coded them on I had to be using the exact same model to get it to work.

The demos were about showing mastery of the hardware technology, showing just how much computing power you could squeeze out of the machine. I don’t know what went into creating them, because that was never my thing, but I’m sure there was a lot of assembly coding involved, maybe some C, and a lot of math. To the best of my knowledge there were no multimedia construction sets around for the ST, so each demo was a unique creation from hours of coding. Of course, they were all from Europe.

These are a few of the best demo examples I could either find on YouTube, or put up myself. “High Fidelity Dreams” and the “Synergy megademo” ran on most STs.

High Fidelity Dreams, by Aura, 1992

Edit 5-11-2017: There was a demo I’d mentioned before that I couldn’t get a screencast of because my computer was too slow. Someone else posted a video of it playing on an Atari STe, and I’ve added it here. Fair warning if you are easily offended, there is a little mature language in this video towards the end when the credits roll.

Grotesque demo, by Omega, 1992

You wouldn’t know it by watching this, but in my opinion this was the best Atari ST/STe demo I’d seen for many years, in terms of technical accomplishment. I remember it sounded really cool if I ran the audio through an amplifier! There are some strobe effects in the demo (probably not good to watch if you have epilepsy!).

When the credits roll, the author gets into some of the technical specs for the demo. Right towards the end the author put up a 3D cube that obscures some of the text. So here it is, misspellings and all:

It took me three weeks for the first version. One weeks extra for the final. I made an extra version because I found some bugs in the demo. And to make a new version more interesting, I added the vector objekts under this text. Just to show the speed of my polygon routine. Even I need to boast…

I have even made up some ideas for future demos. But I would be surprised if Omega released more demos on the Atari… Sadly to say, but the new Atari computers doesn’t live up to my ideals. And even more sad is, that my ideals are not unrealistic high… So for the moment I think that the Archimedes is the best choise. It only have 4096 colors, eight 8 bit sound channels, but the Archimedes 5000 model runs on 20 mips! It can move a half meg in one VBL. The Falcon moves about 56K in the same time…

But time will show us what computers (or media…) Omega will turn up next time.

Bye. Micael (TFE) Hildenborg


Brain Damage demo on the Atari STe, by Aggression and Kruz, 1993

Megademo intro., by Synergy, 1993

All of these demos came out in the early 1990s. I think they’re exceptional for the time period. The soundtracks rock! But that’s just my opinion. 🙂

I always enjoyed the Synergy Megademo intro. As the members of the group introduced themselves, they flashed words on the screen that were on their minds. I thought this was unique and intriguing. It told you a little about them, but left you guessing. Some of them always gave me a smile, because of the sequencing, like: “hairy,” “Sharon,” “Stone.” Yech! What an image! 😛

I can explain some of the stuff in it:

“No Precalc”: A common trick in demos was that coders would put precalculated tables or coordinates into the graphical parts, so it would look impressive, but the computer wasn’t working very hard to show it to you, just rendering data. Coders got more respect if they had their demo calculate the graphical effects, because it showed true optimizing cred. (I’ve put a little sketch of the difference after this list.)

“MULS”: A 68000 opcode for “signed multiply” of 16-bit integers.

“NO CARRIER”: This was a phrase computer users of the time were intimately familiar with, but that you barely see anymore. It’s the message you used to get in a terminal program from an aborted phone call when using a modem.

“YM 2149”: This was the model number of the ST’s Yamaha sound chip.

“DTP”: This is an acronym that could’ve stood for many things. My guess is “desktop publishing.” There were a couple professional DTP packages for the ST.

“Autoexec”: There was a folder you could set up on an Atari disk called Autoexec where you could put executables, and the computer would automatically run them when you booted the machine.

“GFA”: This stood for GFA Basic, a programming language.

“133 BPM”: I thought at first this meant “bits per minute,” but it really means “beats per minute” for music.

“Front 242”: A lot of ST demos put the name of the industrial music group “Front 242” somewhere in their demo, usually in the credits. You see it in this intro. I tried listening to some Front 242 music to see what was so inspiring about it, but I couldn’t get into it. The music in these demos was usually real good, and I kept thinking it was from the group.

There’s also a graphic from an “Art of Noise” album cover, a group that was popular in the 1980s.
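
Since I brought up the “no precalc” bragging point above, here’s a tiny sketch of the difference. It’s my own illustration in Python (the real demos did this kind of thing in 68000 assembly with integer math), and the wave effect in it is made up:

# The "precalc" trick, illustrated: a demo can compute an effect live on
# every frame, or just play back a table that was calculated beforehand.
import math

FRAMES = 256

# "No precalc": do the trig work while the demo runs.
def live_wave(frame):
    return [int(40 * math.sin((frame + x) * 0.1)) for x in range(320)]

# Precalc: bake the values into a lookup table ahead of time, so at run
# time the demo only copies numbers to the screen.
TABLE = [[int(40 * math.sin((f + x) * 0.1)) for x in range(320)]
         for f in range(FRAMES)]

def played_back_wave(frame):
    return TABLE[frame % FRAMES]

# For the first FRAMES frames the two produce exactly the same picture.
assert live_wave(5) == played_back_wave(5)

Both versions draw the same thing; the difference is whether the CPU is doing the math while you watch, or just playing back numbers it worked out ahead of time. That’s why demo coders bragged when there was no precalc.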

Edit 5-22-2015:

 

Summer Delights on the Atari STe, by Dead Hackers Society, 2011

An interesting thing about this one is it combines use of the STe’s Yamaha and PCM sound chips.

Edit 5/11/2017:

Sweetness Follows on the Atari STe, by KÜA, 2015

All the audio in this demo is coming from the computer.

Edit 3-29-2010: I’ve looked for Falcon 030 material to put here, but a lot of it has been lackluster. I just found the following demos, which were ported to a modified Falcon from the Amiga 4000 a few years ago. These demos play on a Falcon 030 with an accelerator card, containing a Motorola 68060 running at 66 Mhz. They’re some of the best I’ve seen on an Atari computer. I said earlier that in my opinion the best graphics & sound demos are works of art. These take what had been my concept of computerized art up to another level. They were created by a group called “The Black Lotus” (TBL). According to the descriptions, they contain some features unique to the Falcon versions. All the music you hear is coming from the Falcon as the demo runs. Enjoy.

Silkcut

Starstruck

Ocean Machine

Edit 5-22-2015: The Black Lotus does it again. In January 2015 they came out with a new demo for the Amiga 4000, and once again ported it to an enhanced Atari Falcon 030 (with a 66 MHz 68060 accelerator board).

Rift

Edit 5/11/2017: I found yet another good enhanced Atari Falcon demo (run with a 68060 accelerator). Once again, it was originally written for the Amiga in 2012, and was ported by the same group for the Falcon, in 2013.

Kioea, by Mad Wizards

I finally found a couple good stock (unmodified) Atari Falcon demos.

In2ition, by Mystic Bytes, 2013

Electric Night, by Dune and SMFX, 2016

(The part at the beginning that looks like it’s coming from a VCR is part of the demo.)

Atari popular in Europe

Atari’s computers found their success in Europe, not in the U.S. The ST created some buzz in the computer industry during the first couple of years after its introduction, but after that its influence gradually faded away.

I went to the UK in 1999. While in Salisbury I visited an internet cafe. As I entered I noticed a shelf unit running along a wall. Piled on the shelves were old Atari STs and Commodore Amigas. I’m not sure why they were just sitting there. I talked to the guy running the shop, saying I was glad to see them. They brought back memories. I think he said people in the neighborhood dropped them off because they were getting rid of them. I can’t remember if the guy said he was giving them away, or what. Anyway, I’d venture to guess this would be an unusual sight in the U.S. The Atari ST didn’t make it big here. It might have gotten to 1-2% market share at most, and even that’s a rough guess. A lot of the people who bought them here were programmers and musicians.

The end of an era

There were 4 generations of Atari 16- and 32-bit computers before Atari (the consumer electronics company) faded away: 1) the original ST line, released in 1985; 2) the Mega ST line, which came out in the late 1980s; 3) the STe line, and the TT (their first 32-bit model), which came out in 1990/91; and 4) the Falcon 030 (the 2nd and last 32-bit model), which came out in 1992.

Atari had a complex history, but the company named Atari that produced the home game consoles and the computer lines stopped making computers in 1993 and tried to focus exclusively on its video game systems. Atari faded away to nearly nothing. It was bought by a disk drive manufacturer, JTS, in 1996, a purchase that was really a way to sell off Atari’s intellectual property. For all intents and purposes the company ceased to exist: there were no employees left to speak of, and the Tramiels did not come over to JTS. A couple of years later the IP was bought by Hasbro, which brought out updated versions of old Atari classics like Pong and Frogger. A few years after that the IP was bought by Infogrames, which officially changed its name to “Atari” in 2003. In name, Atari still exists, but it’s not the same company. The only legacy that lives on from its former existence is its video games.

In my opinion Atari computers were a casualty of the internet, though I could be wrong. The PC was finally surpassing them in capability as well. When the internet began to get popular, Atari was caught flat-footed. You could always hook up a modem to an Atari, but it had no TCP/IP stack at the time, and no web browser or e-mail applications. This was true even on the Falcon. The Mega STe and TT had LAN ports, but those were used for peer-to-peer connections, if anything. I remember hearing in the mid-1990s that someone had developed a free TCP/IP stack as a desk accessory; I believe it was called “STiK.” I don’t recall hearing about a web browser at that time, though. The only browser that may have existed for it was probably Lynx, an open-source, text-only web browser.

I kept using my STe until 1997, when I got my first Windows PC.

The passing of Atari and Commodore (Commodore declared bankruptcy in 1994) from the computer scene marked the end of an era.

The first generation of personal computers

Beginning in the late 1970s, many companies got in on the “personal computer craze.” By the early 1980s everybody and his brother was in on it, though this mass proliferation of hardware designs flamed out within a few years. When the shakeout was over, the only companies still in the business in the U.S. were Commodore, Atari, Apple, and IBM. Commodore and Atari made low-end and mid-range consumer machines, while Apple and IBM made high-end business machines. At some point in the late 1980s, IBM began retreating from the personal computing business, largely because PC clone manufacturers undercut it on price. It didn’t completely leave the business until a few years ago, when Lenovo bought its PC manufacturing operations.

The era was characterized by an emphasis on what the hardware could do; the operating system was incidental, and different computers had different capabilities. This began to change with the introduction of GUIs, and then multitasking, to personal computers. The operating system became more visible, and you communicated with it more, rather than with some programming interface. On the early machines it was always possible to access the hardware directly. In the 1990s this became less possible, and the operating system became your “machine.”

Edit 11/17/2012: I used to like this idea of the operating system becoming your “machine,” but I now realize it offered nothing resembling the computing environment the older machines provided. You still had an “instruction set” of sorts, but it wasn’t as nice.

A common theme in the 1980s was that each system was incompatible with the others. You couldn’t exchange software or data, because each computer stored things in a different format, and each used a different CPU and chipset. Attempts at compatibility were made through emulation, but it was rare to see an emulator that ran fast enough to be truly useful; computers were not powerful enough then for software emulators to provide a satisfying experience. The best emulators were hardware-based, containing some of the original chip technology, such as an Intel CPU.
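
To give a rough sense of why software emulation was so slow on the machines of the day, here’s a toy interpreter loop in C. It’s a made-up three-instruction machine, not any real chip; the point is just that every emulated instruction has to be fetched, decoded, and dispatched in software, costing many host instructions apiece, and a real emulator has to do that for an entire CPU plus the surrounding chipset.

#include <stdint.h>
#include <stdio.h>

/* A made-up three-instruction machine, interpreted in software. Every guest
   instruction costs a fetch, a decode (the switch), and an execute on the
   host -- dozens of host instructions to carry out one emulated one. */
enum { OP_LOAD = 0, OP_ADD = 1, OP_HALT = 2 };

int main(void)
{
    uint8_t program[] = { OP_LOAD, 5,   /* acc = 5  */
                          OP_ADD,  7,   /* acc += 7 */
                          OP_HALT };
    int acc = 0;
    int pc  = 0;

    for (;;) {
        uint8_t op = program[pc++];         /* fetch */
        switch (op) {                       /* decode + dispatch */
        case OP_LOAD: acc  = program[pc++]; break;
        case OP_ADD:  acc += program[pc++]; break;
        case OP_HALT: printf("acc = %d\n", acc); return 0;
        default:      return 1;             /* unknown opcode */
        }
    }
}

That per-instruction overhead is why putting real silicon, like an Intel CPU, on an add-in board was the practical route to compatibility back then.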

Eventually standardization set in. IBM became the standard business machine, with Microsoft becoming the standard-bearer of the operating system that ran on it and its clones. The Apple II, and then the Macintosh, became the standard computer in schools and in creative businesses such as publishing. The Commodore 64 became the “standard” computer most people bought for the home, though the PC eventually took over that role once it came down in price. The same happened with the schools, with PCs taking over there as well.

Returning to what I said earlier, what really changed the game in the 1990s was the internet. The computing platforms that have survived are the ones that best adapted to it. The ones we use today are a product of that era, though the GUIs we use are the lasting legacy of the 1980s.

I’ve got “one more part in me” for this series, coming up.

—Mark Miller, https://tekkie.wordpress.com

Read Full Post »