.Net Framework 3.0 final is available

Just saw today, from Mary Jo Foley, that .Net Framework 3.0 is now available. You can get it here [Update 6/9/07 – I’ve updated this link. It goes to the download page for the .Net Framework 3.0 SDK. It says it’s “for Vista”, but it works with Windows XP and Windows Server as well. It also provides a link that XP and Windows Server users will need to follow to download the runtime components so that everything works].

As I noted earlier, .Net Framework 3.0 is what Microsoft was previously calling “WinFX”. All they did was change the name. It runs on the .Net 2.0 runtime (CLR), so if you previously got the .Net 2.0 redistributable, or the .Net 2.0 Framework (either as an SDK or through a version of Visual Studio 2005), you do not need to upgrade the runtime. .Net Framework 3.0 is a library upgrade on top of the .Net 2.0 package.

It will be included with Windows Vista when it’s released, and is now available for download for Windows XP SP2, and Windows Server 2003 SP1. If you want to work with the IDE tools for .Net Framework 3.0, called “extensions” at this point, you must use Visual Studio 2005 Professional Edition. It doesn’t look like these extensions work with VS 2005 Express.

.Net Framework 3.0 includes Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), and Windows Workflow Foundation (WF).

WPF is the technology that includes the XAML markup scheme, where you can define what is essentially a thick-client UI in XML, have the user download it through a URL, and have it compile and run on the user’s machine. You get the ease of web deployment with the responsiveness of a thick client. A nice technology. It also solidifies the notion of separating presentation from application logic in .Net. All XAML handles is the UI that the user sees and interacts with. An application’s internal logic, which determines what happens next, is decoupled and done somewhere else. This logic can be in a web service or remote component, accessed via WCF (I get into this below). It can be in a managed DLL that’s downloaded along with the XAML. It can also be handled via WF (I also get into this below).
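For a concrete picture, here’s a tiny hypothetical XAML fragment. The element names and namespace declarations are standard WPF markup, but the window class, control names, and the SubmitOrder handler are made up for illustration. The markup only declares what the user sees; SubmitOrder would be implemented elsewhere, in code-behind, a downloaded managed DLL, or a workflow:

```xml
<!-- Illustrative sketch: the UI is declared here; the SubmitOrder logic
     lives elsewhere (code-behind, a managed DLL, or a workflow). -->
<Window x:Class="DemoApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <StackPanel>
    <TextBox x:Name="OrderIdBox" />
    <Button Content="Submit" Click="SubmitOrder" />
  </StackPanel>
</Window>
```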

WCF is what Microsoft at one point called “Indigo”. It’s the new web services API, and it replaces a method of explicit RPC that existed in .Net 1.x, called “remoting”. As I understand it, it supports remotable components, though some remoting capabilities (like customizing the RPC) that used to exist are no longer supported. From what I gather, it takes care of the plumbing work of determining what it’s communicating with, so the developer doesn’t have to worry about it. Whether it’s a web service, a component accessed via remoting, or even a COM+ or DCOM component, it’ll do the work of detecting what’s at the other end and using it.

WF is a neat addition, but I imagine it’ll mostly be used on the server end (Windows Server 2003 for now). WF allows you to stitch together a bunch of loosely coupled components to create a program flow. It’s essentially the glue for what’s called Service-Oriented Architecture (SOA).

Where you used to stitch a bunch of web pages together to form an application (in code, through links in each page, or through CGI), now you can define the web pages as their own separate components, each not knowing about the others, and use WF to pass data and control flow between them. Don’t get me wrong, it’s not restricted to traditional web applications. WF can be used with WPF as well.

What this enables is applications that can change without recompilation. To add new features to an application, you either add them to existing components, or create new ones, and if necessary add them to the workflow. If you want to change the order of execution of components, or take some components out, just change the workflow. No recompilation is necessary. The workflow does the decision making about what happens next. It can handle conditional execution of components, and loops. BizTalk Server has had this capability for years, but now you can get it for free.
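To sketch the idea, here is a minimal illustration in Python. This is not WF’s actual API, and the component and field names are made up; the point is just that the components know nothing about each other, and the “workflow” is plain data that an engine interprets, so reordering or removing steps means editing data, not recompiling components:

```python
# Loosely coupled "components": each takes a context dict and returns it.
# None of them knows about the others.
def validate(ctx):
    ctx["valid"] = ctx.get("amount", 0) > 0
    return ctx

def approve(ctx):
    ctx["approved"] = ctx["amount"] < 1000
    return ctx

def notify(ctx):
    ctx["notified"] = True
    return ctx

COMPONENTS = {"validate": validate, "approve": approve, "notify": notify}

def run_workflow(steps, ctx):
    """Execute components in the order the workflow data dictates.
    Steps may carry conditions; the engine handles conditional execution,
    so changing the flow means editing the step list, not recompiling."""
    for step in steps:
        cond = step.get("when")
        if cond is None or cond(ctx):
            ctx = COMPONENTS[step["run"]](ctx)
    return ctx

# The "workflow" itself is plain data. Swap, reorder, or drop steps freely.
workflow = [
    {"run": "validate"},
    {"run": "approve", "when": lambda ctx: ctx["valid"]},
    {"run": "notify",  "when": lambda ctx: ctx.get("approved", False)},
]

result = run_workflow(workflow, {"amount": 250})
```

A loop would just be another construct the engine interprets; the components stay untouched either way.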

From everyone I’ve heard talk about .Net Framework 3.0, it is a HUGE upgrade to the .Net 2.0 Framework class library, hence the bump in version number to 3.0. As I discussed earlier, I’m sure it makes sense from Microsoft’s perspective, but it’s going to be confusing to technology buyers, because people who use .Net are familiar with the versioning scheme: Framework version number = CLR version number. With .Net Framework 3.0 this is no longer the case. I’ve been wondering if this is going to continue.

For example, the Linq Project CTPs currently run on the .Net 2.0 CLR, even though Linq is scheduled for release with the “Orcas” version of .Net, further down the road. Linq is a promising technology being developed at Microsoft that will benefit all of us developers who work with databases. As a techie, I find the exciting features to be its support for dynamic types (known as “duck typing” nowadays) and lambda expressions (analogous to closures). I’ve seen a couple of demonstrations of it, and it’s very exciting. For those who don’t know about it, I’ll just say that with what’s coming, someday you’ll be able to create collections of objects and query them as though you were querying a database. Someday you won’t have to define SQL query strings to pass to a database. You’ll be able to inline your database query in your code as naturally as though it were part of the programming language, and get back a result set of objects. That’s as much of Linq as I’ve seen. The next stage of it is completing the C/R/U/D operations: inserts, updates, and deletes. I’m looking forward to it.
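Since Linq itself isn’t out yet, here is a rough analogue of the idea in Python, with made-up sample data: querying an in-memory collection of objects, with a filter, a sort, and a projection, as though it were a database table. The eventual C# syntax will look different; this just sketches the concept:

```python
# Query a plain in-memory collection the way you'd query a database table.
# (Illustrative analogue of the Linq idea, not Linq itself.)
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    city: str
    balance: float

customers = [
    Customer("Ada", "Denver", 120.0),
    Customer("Ben", "Boulder", 40.0),
    Customer("Cat", "Boulder", 300.0),
]

# Roughly: from c in customers where c.city == "Boulder"
#          orderby c.balance select c.name
result = [c.name for c in sorted(
    (c for c in customers if c.city == "Boulder"),
    key=lambda c: c.balance)]
# result is ["Ben", "Cat"]
```

The query is ordinary code operating on ordinary objects; no SQL string ever gets built.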

Giving technology a bad name

I read a story today from the Rocky Mountain News. It talks about the total debacle of the Denver voting system on Tuesday. I was hearing about this on election night: there were people waiting in line to vote for anywhere from 3 to 5 hours. Part of the blame has been placed on the fact that they had “open precinct” voting, where people from anywhere in the Denver metro area could vote at voting centers away from their geographical precincts. This was supposed to increase voter participation, because people wouldn’t have to rush home to their precincts from their jobs in Denver.

It doesn’t sound to me like the real glitch was in the software, as the article’s headline claims, but rather in the network infrastructure. This sounds like IT mismanagement to me. No one measured their network capacity. They said the system had been used in a primary before this election, and it worked fine; 12,000 people voted then. When tens of thousands of people showed up to vote (far more than 12,000), the network got overloaded. The application that was moving at a crawl was a “poll book” app, which verified people’s voter registration before they went in to vote. The voting systems themselves seemed to be working fine. From reading the article, it sounds like they assumed that since the system worked fine for the primary, it would work for the general election. It also sounds like they used a central database and a web server for their “electronic poll book” app. The application was browser-based.

Some voters stuck around for the few hours it took to get their registration verified. Others just left. As of this writing, the vote counting on statewide issues is still not done either. There’s been talk of firing people in the Denver County Clerk & Recorders office.

What’s kind of interesting about the article is that it profiles an alternative, cobbled-together voter registration system that Larimer County has used for a few years, and which has worked fine. They said it used Microsoft Access and an Oracle database. The app was home-grown, and it cost very little to put together. Where Denver’s system took 20 minutes to respond to one request, the Larimer County system takes 30 seconds. Larimer County has offered to share its system with other counties.

Anyway, voting went without a hitch in Boulder on Tuesday. I can remember the real debacle we had here in 2004, when we went to a new fill-in-the-circle paper ballot system. Counting the votes took days, because the vote counting machines turned out to be incompatible with the paper ballots! They all had to be hand counted. It’s funny how these things don’t get tested before they’re used. It shows you how much we truly “care” about our elections. The glitches have since been fixed here, even as electronic voting machines were introduced for the first time. So it was a pleasant experience for us Boulderites, except for the long ballot. We had a ton of ballot issues to vote on this year.

People in Boulder have been highly skeptical of the electronic voting machines, so they were slow to arrive here. I’ve become skeptical of them of late, because of a demonstration I saw where a guy slipped a flash memory card into a voting machine, and changed the way it counted votes. That creeped me out. I went with the paper fill-in-the-circle ballot. I feel more comfortable with a paper trail, thanks.

A recurring thought I’ve had around this issue is: why don’t we use computerized equipment that records our votes with a touch screen and then prints out a filled-in paper ballot for us to stick in the ballot box, rather than a voting system that keeps the votes in its memory? The reason electronic voting machines came into use was the 2000 presidential election, where in some parts of the country it was difficult to tell which way people had voted because their ballots were not marked or punched out clearly. What was getting messed up was the recording of the vote. So why not use a machine to make marking a ballot consistent? The computer can print out a paper ballot that can then be read by a vote counting machine. After the person finishes voting, they get the printout, and they can review it to make sure it’s marked the way they intended before turning it in. If the printer is running low on ink, that will show up, and the voter can complain and get another printout after the problem has been fixed.

The simpler you make the technology the more reliable it is.

Edit 11/10/06: I remembered today that on the afternoon of the election, it was announced that more laptops were being sent to the voting centers in Denver, due to the lines of people waiting to vote. It was later that evening, probably on the 10 o’clock news, that I learned server capacity was being blamed for the slow voting process. So the extra laptops didn’t help; they would’ve only slowed the process down more. Server capacity was probably a more likely culprit than my network capacity explanation.

Election officials have claimed that there was no way for them to test the system, that the elections were the test. I think this position is irresponsible and false. Sites like Amazon and eBay have tested their systems to make sure they can handle the traffic. Their ability to stay in business partly depends on that, and I can assure you they get far more traffic than the electronic poll book system did. I’m a web developer, and I’ve seen web site profiling tools used to test response time under loads of thousands of simultaneous requests. These tests can be scaled up to tens of thousands, hundreds of thousands, and millions of requests; it’s just a matter of the computing capacity of the machine conducting the test. If the City of Denver didn’t have a machine that could conduct the test, there are services it could’ve contracted with that would’ve done it. Maybe it was a matter of funds not being available, but in that case I think paper poll books would’ve been more efficient.
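To make the kind of test I’m describing concrete, here is a minimal sketch in Python. It is not one of the commercial profiling tools, just the idea: stand up a throwaway local web server, fire 200 requests from 20 concurrent workers, and record each response time. A real load test would point at the actual poll book server and scale the request counts up by orders of magnitude:

```python
# Minimal load-test sketch: many concurrent requests, timed individually.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # keep the console quiet during the run
        pass

# Throwaway local server standing in for the system under test.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# 200 requests from 20 concurrent workers; the worst time is what matters
# to the voter standing in line.
with ThreadPoolExecutor(max_workers=20) as pool:
    times = list(pool.map(timed_request, range(200)))
worst = max(times)

server.shutdown()
```

Scaling the same loop across multiple test machines is how the larger numbers get simulated.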

There’s been talk of firing the election staff. This would seem like the thing to do, but I would hope they handle it smartly when it comes to the IT staff. True, they didn’t handle this in the best way, but they have now seen the system’s weaknesses, and have probably diagnosed the problems. Getting rid of them and bringing in a whole new staff would probably not help the situation, because a new staff won’t understand the voting system at all coming in. It’s better to have an experienced staff that’s made some mistakes and is willing to improve than to have one that knows nothing about the system.

More Squeak information sources

The Squeak community set up a Squeak wiki a while back. I’ve run into it a few times when I’ve looked up material on the project. The other day I finally came upon a link to the main page at wiki.squeak.org. It tells you how to set up your own wiki pages on it, and it has a bunch of resources for learning more about Squeak and eToys. Some of the links there are ones I’ve already talked about in earlier posts, but there are others I have not.

I also happened upon a site that appears to document all of the classes in a standard Squeak distribution. [Update 6/9/07 – This wasn’t what I thought it was. It has class documentation but I suspect it’s out of date.]

Anyway, I thought I’d point these out.

What to do about eToys?

I first heard about this through The Weekly Squeak blog. There’s been quite a debate going on in the squeak-dev mailing list about what to do with eToys. It’s been a part of the main Squeak distribution for a long time, and is in the version of Squeak you can get from squeak.org, and at squeakland.org. By my own reading of the debate, it started with someone suggesting that eToys be removed from the future 3.10 release of Squeak on squeak.org (squeakland.org has an earlier release of Squeak and is not directly affected by this discussion). I was strongly opposed to this idea, for reasons I’ve expressed in earlier posts, and I voiced my opposition. I learned later that eToys has been a boondoggle in Squeak for a while, and the Squeak development community has struggled with what to do about it. On the one hand, there are developers who see no value in eToys and would just as soon remove it. On the other, there are users/developers who use eToys and want it left in. There was also the concern that removing eToys would confuse new users who have seen the online Squeak demos, which use eToys, and will expect it to be in Squeak when they download it from squeak.org. I agreed with this concern.

(Update 5-4-2011: I decided to cross out the last part of the paragraph below after giving this a lot of thought. I recall seeing Alan Kay say in an online conversation, quite a while after I wrote this post, that eToys was not a “hack,” and this caused me to reconsider my interpretation of the “prototype” term he used. I deleted the paragraph that followed the one below, which went into my own experiences with software prototypes, working in IT, since I no longer think it has relevance to this discussion.)

The story, as I’ve been told, is that the reason for this struggle is that eToys was not implemented well (that’s the interpretation of some people, anyway). When eToys was written, a partial implementation of a graphics manipulation package called “Morphic” and eToys were combined into one mass, so to speak. According to developers who have looked at it, it’s really hard to tell where Morphic ends and eToys begins, and vice versa (though this is a characteristic of all of Smalltalk). eToys was implemented when Disney sponsored work on Squeak, probably in the mid-1990s. Alan Kay was a Disney Fellow then. In an earlier conversation about possibly porting eToys to Python, Kay acknowledged that eToys was implemented in Squeak as a prototype, whose purpose was to demonstrate the capabilities of Squeak. It was not production-quality code. I don’t know yet why this was the case, and in fact it was surprising to me. I figured that with Kay being such a perfectionist, and Smalltalk being so elegant, eToys would share those same qualities. As far as I’m concerned it runs great for being written so badly.

According to what I’ve heard from others, eToys has created a logjam in Squeak user interface development. Anytime someone comes along to try to improve it, it breaks. In a way eToys is Squeak’s greatest strength, and its greatest weakness. It’s what tends to attract people to Squeak, but it also prevents Squeak from advancing technologically as a GUI environment.

The Squeak development community is all volunteers. No one is paid to work on it at this point. Many times contributions to Squeak come as a result of work that researchers and/or consultants have been paid to do for private projects. They donate portions of what they created back to the community. I’m sure there’s altruistic development going on. I’m just not sure about the proportions.

There are some who would like an eToys-free version of Squeak to do their own development work with, and a few others who would like to clean up the whole mess, start fresh, and then build a new version of Morphic onto the Squeak “developer’s image”. The hope is that a reworked version of eToys can follow, one that is written better and is more efficient.

I’m a little uncertain about this, but my reading of the conversation in squeak-dev was that eventually a plan was announced to try to satisfy everyone. The plan was that whenever the next release (3.10) is made, either a) a single “full” Squeak image will be released, which contains everything including Morphic and eToys, but will contain an option a developer can exercise which will remove eToys from their copy of the Squeak image and leave Morphic in; or b) two images will be released: one “full” image containing Morphic and eToys, and a stripped down “developer’s image” with eToys removed (and Morphic left in). This way people will have a choice. They can either get Squeak with eToys, or without it. Squeak users will most likely want the version with eToys. Even new developers like myself will probably want it. Some other more experienced Squeak developers will want the non-eToys version. Both can be satisfied. That was the proposed plan, anyway.

A Squeak developer named Juan Vuletich has spearheaded an effort to build Morphic Version 3.0 on an eToys-free image. He would love some help if anyone is willing to give it. I would gladly get involved myself, but I’m still learning about Squeak. It’s best to recognize one’s limitations and know when to stay out of the way. If work is still ongoing by the time I’ve learned some more, I’ll take a look at joining in. I’d like to see this improvement effort succeed.