Did some “housecleaning”

I occasionally go in and fix past blog posts as a matter of course. I decided to go through all of my past blog posts today and fix any links that have gotten out of date or don’t work anymore. In a few cases I deleted links because the sites they refer to have disappeared. In one case I got rid of a link because it turned out to be misleading.

A while back I got a complaint from a reader that old posts were showing up as “new” in his RSS reader. I don’t know what’s causing that, but it may be my occasional edits in old posts. Just a guess. Sometimes I’ll make frequent edits to posts I’ve just put up. I try to proofread my posts before putting them up, but in the long ones something will usually slip by me, or I’ll realize a spot doesn’t say what I intended. Hopefully it’s not annoying. Just trying to keep this blog informative and useful, and occasionally a little fun. 🙂

Corrections to C# and Smalltalk case study #1

I’ve uploaded some corrections to both the C# and Smalltalk versions of my first case study of these two languages. I’ve put up a new C# version. The old one is still there (and can be referenced from the post I just linked to). The link to the new version is below. I’ve replaced the Smalltalk code with a new version.

Edit 10/18/2017: I’ve hosted the code samples on DropBox. DropBox will say that the files cannot be previewed, but it will display a blue “Download” button. Click that to download each zip file.

Case study code

C# EmployeeTest

Smalltalk Employees-Test package

I was inspired to make these changes by a blogger’s scathing post on my case study (Update 5-24-2013: This blog has apparently disappeared). By the way, if you want to see what this same app. would look like using C#/.Net 2.0, read his code. My code is in C#/.Net 1.x. Leaving aside his pompous language, he found a bug in my C# exception code in the Employee class, which I’ve fixed, and he found that I was using overly complex code to convert some of the data to strings. As I said in my defense, “My C#-fu is not what it used to be.” I’ve simplified those places. I also changed the exception’s message so it’s more descriptive.

He criticized the way I named things. He had some good suggestions for changing the names of things, which I have not followed here. I figured I should fix the bug and the needlessly complex code, but I wasn’t going to rewrite the app. I want to move on. I’ll try to be more conscious of how I name things in code in the future.

The only change I made to the Smalltalk code was making the exception message (for the same reason as in the C# code) more descriptive.

The end result, in terms of lines of code, was that the C# and Smalltalk versions came out virtually the same size. The Smalltalk code increased to 71 LOC (from 70), and the C# code decreased from 75 LOC to 73. The difference in size is insignificant now. I haven’t bothered to look, but maybe C#/.Net 2.0 would’ve required fewer LOC than the Smalltalk solution in this case. As I said in my first post, this was not a straight line-of-code count. I used certain criteria, which I discussed in my first post on this (at the first link in this post).

I think what this exercise brought into relief is that there are certain problems that a dynamic language and a statically typed language are equally good at solving. What I’ll be exploring the next time I feel inspired to do this is whether there are problems Smalltalk solves better than C# can.

Edit 9/29/2015: Since writing this, I lost interest in doing such comparisons. I think the reason is I realized I’m not skilled at doing this sort of thing. I had the idea at the time that I wanted to somehow apply Smalltalk to business cases. I’m less interested in doing such things now. I’ve been applying myself toward studying language/VM architecture. Perhaps at some point I’ll revisit issues like this.

C# (1.x) and Smalltalk case study #1

As promised, here is the first case study I did comparing the “stylings” of C# and Smalltalk. I used C# 1.0 code on the .Net CLR 1.0/1.1. I set up a programming problem for myself, creating a business object, and looked at how well each language handled it. I was inspired to do this by Hacknot’s “Invasion of the Dynamic Language Weenies,” (it has to be viewed as a series of slides) where he said there was no discernible difference in lines of code used in a static vs. a dynamic language, nor were there any advantages in programmer productivity (my response to his article is here). I figured I’d give it my own shot and see if he was right. Up to that point I hadn’t bothered to question my impressions.

What follows is the programming problem I set up for myself:

Create a program that records name, home address, work phone, home phone, salary, department, and employment class of employees. The program must store this information in some fashion (like a data structure).

On input, it must validate the department and employment class of employees when their information is entered. The departments are as follows:

– Manufacturing
– Sales
– Accounting
– Board

The employment classes are as follows:

– Employee
– MidLevel <- Management designation
– Executive <- Management designation

Workers with “Employee” or “MidLevel” class designations can be placed in Manufacturing, Sales, or Accounting departments. They cannot be placed in the Board department. Executives can only be placed in the Board department.

The program will restrict access to the employee information by the following criteria. “all access” means all employment classes may see the information:

name – all access
home address – MidLevel and Executive only
work phone – all access
home phone – MidLevel and Executive only
salary – Executive only
department – all access
class – all access

No elaborate keys are necessary. For the purposes of this exercise using the labels “Employee”, “MidLevel”, and “Executive” as “access tokens” will be sufficient evidence for the information store that the user is of that employment class, and can have commensurate access to the data.
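The rules above are simple enough to sketch in a few lines. Here is a minimal illustration in Python (the actual case study versions are in C# and Smalltalk, and all the names below are my own, not from either version):

```python
# Sketch of the validation and access rules from the problem statement.
# Illustrative only; the case study itself was written in C# and Smalltalk.

DEPARTMENTS = {"Manufacturing", "Sales", "Accounting", "Board"}
CLASSES = {"Employee", "MidLevel", "Executive"}

# Which fields each employment class ("access token") may see.
VISIBLE_FIELDS = {
    "Employee":  {"name", "work_phone", "department", "class"},
    "MidLevel":  {"name", "work_phone", "department", "class",
                  "home_address", "home_phone"},
    "Executive": {"name", "work_phone", "department", "class",
                  "home_address", "home_phone", "salary"},
}

def validate(department, emp_class):
    """Reject bad input on entry, per the placement rules."""
    if department not in DEPARTMENTS:
        raise ValueError("Unknown department: " + department)
    if emp_class not in CLASSES:
        raise ValueError("Unknown employment class: " + emp_class)
    # Executives go only on the Board; everyone else anywhere but the Board.
    if emp_class == "Executive" and department != "Board":
        raise ValueError("Executives can only be placed in the Board department")
    if emp_class != "Executive" and department == "Board":
        raise ValueError("Only Executives can be placed in the Board department")

def can_view(viewer_token, field):
    """True if a viewer with this employment class may see this field."""
    return field in VISIBLE_FIELDS[viewer_token]
```

The point of the exercise was to see how much ceremony each language requires to express exactly these rules.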


Edit 10/18/2017: Here are the two versions of the project. I use DropBox to host the files. DropBox will say that the files cannot be previewed, but will display a blue “Download” button. Press that to download each file:

C#/.Net 1.x Employee test

Smalltalk Employee test

The C# zip file contains the test project I used.

The Smalltalk zip file contains the “file out” of the Employees-Test package I created, the test code (as a separate file), and the test output I got. It contains two sets of these files. The files in the main directory have been converted to use CR-LF for easy reading (by you) on PCs. They have the suffix “.txt”. The Squeak folder contains the files in their original format, which use LF as the end-of-line character. These will be human readable on Unix-compatible systems, and within the Squeak environment. They have the suffix “.text” or “.st”.

For those not familiar with Squeak, the “file out” operation causes Squeak to create a single text file that contains the source code for a package, plus its metadata. A package is a collection of classes. The only code that a programmer would normally see in the Squeak environment consists of the class definitions, class comments, the method names and parameters, and the method code and comments. There’s some other code and comments that Squeak puts out when it does its file out operation. This is done so that when the code is read by Squeak (or some other Smalltalk system) in what’s called a “file in” operation, it can reconstitute the classes and their methods as they were intended. Normally code is accessed by the programmer strictly via. a “code browser”, which is the equivalent of a developer IDE within the Squeak environment. All classes in the whole system, including this package, are accessible via. a code browser. There’s no need to look up code via. directories and filenames. Code is accessed via. package name, category, class, and method.

The code that carries out the test for the Smalltalk version is included in a separate file in the zip, called “Employee test data.text” in the Squeak folder (“Employee test data.txt” if you want to display it on the screen on a PC). The test code for the C# version is included in main.cs in the C# project zip file.

I timed myself on each version, evaluated how many lines of code (LOC) it took to accomplish the task in each one, and I looked at code readability. I’ll comment on the other factors, but I’ll leave it up to you to judge which is more readable.

As I said earlier, the Smalltalk version came out slightly ahead in using fewer lines of code, despite the fact that a little more code had to be written for it to accomplish the effect you get with enumerations in C# (validating that a symbol–rough equivalent to an enumeration–is in the set of legal symbols). To keep things fair between them I did not count comments, or lines of code taken by curly (C#) or square (Smalltalk) braces. I also did not count test code, just the code for the business object and helper classes. Within these parameters I counted every line that was broken by an end of line character.
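For the curious, the counting criteria can be expressed as a short script. This is my own reconstruction in Python of the criteria just described, applied to C#-style source (it is not a tool from the case study):

```python
# Count lines of code per the criteria above: skip blank lines, comment
# lines, and lines containing only braces/punctuation. Everything else
# broken by an end-of-line character counts as one line.
# C#-style comments ("//") are assumed; Smalltalk comments would need
# their own rule.

def count_loc(source):
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                      # blank line
        if stripped.startswith("//"):
            continue                      # comment line
        if set(stripped) <= set("{}();"):
            continue                      # line is only braces/punctuation
        count += 1
    return count
```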

I wrote the C# version first. It came to 75 LOC, and took me 125 minutes to code, debug, and test.

The Smalltalk version came to 70 LOC, and took me 85 minutes to code, debug, and test. Another way of looking at it is 93% the LOC of the C# version, but I wrote it in 68% of the time.

A possible factor in the shorter time it took with the Smalltalk version is I wrote the solution first in C#. I can’t deny that doing it once already probably factored into the shorter time. I made a point, however, of not looking at the C# code at all while I was writing the Smalltalk version, and I wrote them on different days. Though I didn’t break it down, my guess is that debugging the code may have taken longer in the C# version. Since I haven’t worked in .Net for a while I’m a bit rusty. Despite having worked out the basic scheme of how to do it the first time, I was writing it in a different language. So I don’t chalk it up to me just repeating what I had already written.

This is my first crack at doing this. I hope to present more test cases in the future. Since I consider Smalltalk worth evaluating for real world uses, I think it should be evaluated seriously, which means taking measurements like this. Though this isn’t scientific, I’m doing this for me, to check my own perceptions. I just thought I’d share it with the reading audience as well. Something I’ll try in the future is writing a test case in Smalltalk first, and then in C#, and I’ll see if that has an impact on how much time it takes me with each. What I may also do is do future C# test cases in C#/.Net 2.0, just to be more current.

If you’d like to try out my sample projects:

I explain below how to run the Smalltalk and C# tests. Smalltalk first.

I wrote and tested my Smalltalk code using Squeak 3.8.1, from Ramon Leon’s site, onsmalltalk.com. I’ve tested it on the standard Squeak 3.8 version as well (which I got from squeak.org), and it works. Ramon has updated his public version of Squeak to 3.9, which still works with my Employee test package. So you can use any of these versions.

The best thing to do would be to unzip the Employee test files into the same directory as where you installed Squeak. My instructions below will assume you have done this.

Here’s a little Squeak primer. For those who are new, you’ll need this to try out the Employee test. These instructions will work on PCs and Unix/Linux machines. For Mac owners, I’m not sure how this would work for you. My guess is clicking the single mouse button would be equivalent to left-clicking.

When you run Squeak you will see a new desktop interface appear. It’s inside an application window frame supplied by your operating system. As long as your mouse stays inside this window, and the window has focus, most of your mouse and keyboard actions will be directed at objects and activities in the Squeak desktop space.

What follows are instructions for helper applications which you’ll be asked to open in the Employee test instructions (below), to load the necessary files, and execute the test.

File list – This is kind of like Windows Explorer. You access it by left-clicking on the Squeak desktop. This brings up what’s called the “World menu”. Within this menu select “Open…”. A second menu will appear. From it, select “file list”. You will see a new window open, which is the File list app. It has a top row which contains a text box on the left where filter expressions go, and then there are a series of buttons. Below this are 3 panes. The middle-left pane is the directory tree of the drive you’re running Squeak on. The middle-right pane is a listing of all the files in the currently selected directory. The lower, 3rd pane contains the text of whatever file is selected in the middle-right pane.

Workspace window – This gives you a text workspace to work in. You can work with code, and assign variable values within it, and it will keep them persistent for you (the variables won’t disappear when you’re done executing a task), for as long as the window is kept on the desktop. You bring up a Workspace by clicking on the right tab on the Squeak desktop (called “Tools”). You should then find the Workspace icon, and click and drag it onto the Squeak desktop.

Transcript window – This is like an active log viewer. It’s basically output-only. You carry out your tasks in the Workspace window, but the output (for this test) goes to the Transcript window. You bring up a Transcript window by clicking on the “Tools” tab. You should then find the Transcript icon, and click and drag it onto the Squeak desktop.

DoIt (action) – This is how you interactively evaluate an expression in Squeak. It’s a universal command. Anywhere, in any window on the Squeak desktop, where you have legal Smalltalk code, you can select it with the mouse, just as you would with text in your native operating system, and hit Alt-D. This will evaluate the expression, carrying out the instruction(s) you selected with the mouse.

Running the Smalltalk Employee test

Run Squeak, and open the File list. Go to the middle-left pane and click on the small triangle to the left of the currently selected directory’s name (this is the Squeak directory). This will expand its directory tree. Select the Squeak sub-folder (this came from the zip file), and select “Employees-Test.st” in the middle-right pane. You will see a button appear in the top row of the window that says “filein”. Click on this button. This will load the Employees-Test package into Squeak.

Click on “Employee test data.text” in the middle-right pane of File list. Click on the lower pane, where the text of this file is displayed, then press Alt-A, and then press Alt-C. Open a Workspace window from the “Tools” tab, and then press Alt-V. (Note that many of the keyboard commands are the same as on Windows, but they use Alt rather than Ctrl).

You should also open a Transcript window (from the “Tools” tab).

Select the Workspace window. Select and evaluate every line of code in the Workspace. You can do this by pressing Alt-A, and then pressing Alt-D. This will “select all” of the text in the window, and evaluate it, which will initialize the collection of Employee objects, and run the test. Look at the Transcript window. That’s where the output will be.

The employee class symbol (in the last line of the Workspace) is set to #Executive. This will output all of the input data to the Transcript window. You can change the employee class symbol (to #Employee or #MidLevel) to get different results. All you need to do hereafter is select and evaluate the very last line of code in the Workspace to rerun the test.

Running the C# Employee test

Unzip the project files into a directory, load the project into Visual Studio.Net (2002 or 2003), build the project, and run it. The output will show up in a console window. The test code is initialized to use the Employee.zClass.EXECUTIVE enumeration. This will output all input data. You can change the employee class enumeration passed in to emp.PrintUsingViewerToken() on line 178 of main.cs (to Employee.zClass.EMPLOYEE or Employee.zClass.MIDLEVEL) to get different results.

Edit 5/16/07: I’ve made corrections to the C# version. See my post here.

Microsoft patches Visual Studio 2005 for Vista. Confused? Answers here

Late last year Microsoft revealed that Visual Studio 2005 would not work on Windows Vista when it was released. About a month after Vista’s release, Mary Jo Foley reported that Microsoft had released a patch for VS 2005 that would make it compatible with Vista. It’s available here. They’re calling it “Visual Studio 2005 Service Pack 1 Update for Windows Vista”. This is not SP 1. In fact the download page says in the system requirements that you should only apply this patch to Windows Vista with VS 2005 SP 1 installed on it. It is an update to SP 1, only for Vista. Windows XP users do not need this patch.

Service Pack 1 for VS 2005 is available here. Despite the name you see in the title: “Microsoft Visual Studio 2005 Team Suite Service Pack 1”, this is SP 1 for all commercial editions of VS 2005.

They’ve made a version of Service Pack 1 for the Express editions of VS 2005 as well, here.

Here’s the information page on VS 2005 SP 1 where information on all editions is covered.

Microsoft releases ASP.Net AJAX, and an assessment of WPF

I saw this on Mary Jo Foley’s blog today: ASP.Net AJAX has been released. This is the add-on to ASP.Net 2.0 which was formerly known as “Atlas”. It’s a combination of .Net code (on the server) and Javascript code (on the client) that allows you to set things up in your .Net code, and not have to write a line of Javascript. It may have some add-ons for the Visual Studio designer as well, but I’m not sure. I haven’t looked at it.


AJAX has had a long history, going back about a decade. It’s my understanding that it started with Internet Explorer in the late 1990s, when Microsoft introduced XMLHttpRequest (originally as an ActiveX object, callable from Javascript). This is what made AJAX possible. One of the earliest AJAX applications, before Google Maps, was a web add-on for Microsoft Exchange Server, called Outlook Web Access, or OWA. It reproduced the look and feel of Microsoft Outlook (pretty much) in a web browser.

Whenever a traditional web application needs to update a page, it does what is called a “round trip” to the server. The browser sends its request to the server, and then a whole new page is sent back. You will often see the whole page refresh in the web browser screen. This typically happens when you click on a link or a button in the browser window. This is not a very efficient use of network bandwidth, particularly if the change is small. Depending on the speed of the network and the server the browser is communicating with, this process can also be slow. It’s reminiscent of the way old mainframe terminals used to work, except that they were completely text-based.

AJAX is a method by which a client web page can make an asynchronous request to a server, and receive a response back, without having to use the “round trip” method. This is done using Javascript on the browser. The Javascript code takes the response and uses it to update just the part of the web page that needs to be refreshed.

It’s a hack. Some might even call it a kludge. For those not familiar with this term, I believe it goes back a long way. A “kludge” is when you use something in a way that is contrary to its design or intended purpose, but it works anyway. The XMLHttpRequest() call enabled AJAX, but the Javascript language and the browser were not adapted to this new way of doing things. Instead the Javascript call was added, some code was added to the browser to make the call work, but nothing more was added to the browser that would’ve made AJAX make more sense.

I’ve seen some descriptions of how AJAX typically works, and it’s loopy. Anyone who believes in software engineering as a discipline recoils in horror at it (okay, I exaggerate a little). It’s very messy. Making AJAX cross-platform is even more of a challenge, because not all Javascript calls work on all browsers. This is the reason that ASP.Net AJAX and other such frameworks are nice, because you can work with a more familiar language, like C# or VB.Net, work with a nicely constructed framework, set things up the way you want them on the server end, and the framework takes care of sending the right Javascript to the browser at the right time to get the job done so you don’t have to worry about it. You can get the benefits without having to “get your hands dirty”.

Where things stand now 

Right now AJAX and Flash (and technologies that work with it) are the future of web development. WPF from Microsoft was released earlier, but in the eyes of one expert who has done application development for many years it’s not ready for prime time yet. I went to a presentation on WPF last night by Rocky Lhotka, and the sense I got of it is it’s nice for creating web-like applications that have a multimedia focus, but it’s not ready for anything serious in the realm of business applications yet. It would be alright to use for simple applications that need only simple form input. Anything more sophisticated than that, and you’re looking at a headache. The way WPF works is not that much different than the way a web browser works at this point. I’d put it at the level of IE 1.0 or 2.0. It allows some logic to be used on the client side, but it doesn’t allow the flexibility of Javascript in a browser. Rocky said it might be another 2-4 years before WPF is up to the level of Windows Forms. He predicted that it’s the future of rich client development, but it’s not that great to work with yet.

The future of programming

For some months I’ve taken time to study dynamic languages, particularly Lisp and Smalltalk. I’ve kind of wondered why though. There is a certain elegance that attracts me to them. I also like the philosophy that Dr. Edsger Dijkstra espoused, which is that the computer should enable the programmer to express his/her idea to it, and let it figure out how to execute it. That idea sounded great to me. I didn’t quite know what he meant, but I knew it’s something I desired. I knew I liked Smalltalk’s elegance, because I remembered admiring it years ago.

Some time back I read a post by Ramon Leon on domain-specific languages (DSLs), and how Smalltalk fit into that picture. He linked to an online video of a presentation by Martin Fowler, talking about this. It was very interesting. Ramon continued talking about DSLs in subsequent posts, each time clarifying the idea further in my mind. Over time I’ve come to realize this is the answer to why I’ve been so interested in these languages.

In languages like Lisp, Smalltalk, and Ruby it’s possible to “embed” a domain-specific language within the language. Lisp programmers (“Lispers”) are famous for this (insofar as they are known at all). It’s often been said that Lisp is a language for writing other languages. I wasn’t sure what that meant, but I think I’m beginning to understand. Fowler explained that in a way it’s possible to create DSLs in mainstream languages like Java as well.

Fowler discussed the different options for how to implement a DSL. One way is to create a language native to the problem domain. This is expensive, because you have to craft your own parser and your own virtual machine/interpreter, or code generator. He mostly discussed the technique of embedding a DSL in a general-purpose language.

Using the embedding technique, a DSL is a collection of objects and methods, or functions and variables (in the case of a non-OO language like Lisp), that enable a developer to totally focus on the problem domain, and avoid language primitives and API calls that are built into the general-purpose language platform, unless they directly relate to the domain. It forms a kind of language that is specific to the problem. Depending on the general-purpose language used, you can see the DSL coming through. In Java and C#, it’s more difficult to see, since they require you to use type specifiers and a method call notation that clearly delineates what is considered code and what is considered data. And it’s difficult to add methods to standard classes. Being able to do that can come in handy when writing DSLs, because a DSL is all about terminology. Sometimes a term that would be most intuitive is already taken by a standard class, and you’d like to add a few more operations that apply to your problem.

Some might ask, “Why would you want to add methods to standard types? Isn’t this dangerous?” When using the DSL technique, you are only redefining the language for the problem at hand. You are not redefining the standard types to that problem domain from then on for all other projects. Typically you would start with the standard types as they were when the next project comes along.

In Lisp, Smalltalk, and Ruby (in that order) the native language syntax is minimalist, and you can add methods to the standard types as easily as falling off a log (in Lisp you can redefine standard functions). So the DSL has an opportunity to “shine through”.

If nothing else, adopting the DSL mindset will have you creating code that on its face makes more sense, because it directly relates to the problem at hand. Using the technique Fowler talks about, depending on the language, it may seem more like sleight of hand, because you are still using the same syntax and semantics of the general-purpose language. You can think of it as “painting over” parts of the general-purpose language and putting new labels on things, new nouns and new verbs. You can get more creative, replacing what should be a noun with an adjective, and replacing a verb with a noun, so it makes more sense. There’s an example of this at a link below, when I mention a package called Albatross.

Fowler in his presentation showed an example of a simple DSL for parsing a text file with fixed-length fields, and how it can be manifested in a general-purpose programming language. He showed two examples, one in Java and one in Ruby.
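To give a flavor of the idea, here is my own rendition in Python (not Fowler’s Java or Ruby code): the record layout is declared as data, so a reader sees the fields, not the parsing machinery. All names are my invention.

```python
# A tiny embedded DSL for parsing fixed-length-field records.
# The field declaration list IS the "language"; the parsing mechanics
# stay hidden inside the class.

class RecordParser:
    def __init__(self, fields):
        # fields: list of (name, width) pairs
        self.fields = fields

    def parse(self, line):
        record, pos = {}, 0
        for name, width in self.fields:
            record[name] = line[pos:pos + width].strip()
            pos += width
        return record

# The domain expert's view: a record layout, readable at a glance.
employee_record = RecordParser([
    ("name",       10),
    ("department", 12),
    ("phone",      8),
])
```

The declaration at the bottom is the part that “shines through” as a little language; everything above it is plumbing you write once.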

Ramon Leon came up with his own version of a DSL in Smalltalk for the problem of parsing the same text file that Fowler showed in his presentation. You can see it at the link above.

You can see another example of a Smalltalk DSL, created for a web app. testing package called Albatross at this post on Ramon’s blog, On Smalltalk. Maybe I’m going overboard here, but this last example is a thing of beauty, in my opinion. What it’s doing is so obvious. It’s a pleasure to read. This is kind of the point. It not only makes code more understandable, it’s also easier to construct solutions to problems.

The future of software development

Microsoft is adding features to .Net that will enable easier renditions of DSLs in that platform. The Linq Project team is adding new features to C# and VB.Net that make them more like dynamic languages. They will introduce dynamic types and lambda expressions, plus what they call “extension methods”, which will allow methods to be added to any type, including a standard type, that is used within a certain scope. Lisp, Smalltalk, and Ruby have, in effect, had these features for years. In any case it will save programmers time, and make their code more understandable.

A few years ago I heard that Charles Simonyi, a former Microsoft developer, had started his own company, Intentional Software. He said he was developing a tool for .Net that he hoped would solve a glaring problem in the software industry. I personally haven’t run into this problem on projects, but from what I hear it’s common for a customer or corporate manager to discuss a project with a development team, or pass specs to them, and receive a product months later that is nothing like what they asked for. Something got “lost in translation” between the domain expert (the customer or manager) and the implementation from the dev. team.

Simonyi said that in his experience, large applications like Microsoft Word eventually develop into their own sort of language, or API. Simonyi said he was working on a system whereby the domain expert would be able to graphically specify what they want, on their terms, and that it would translate the specification into a domain-specific model in .Net that the developers could work with. They could then fill it with the logic that would make the application work. He didn’t say it, probably because it would confuse most readers (I think it was in BusinessWeek, something like that), but what he was talking about was creating a tool for creating DSLs.

I have a feeling this concept will become the future of software development. It’s been done in dynamic languages for years. In terms of mainstream technologies it’s not here yet, but it’s coming.

.Net Framework 3.0 final is available

Just saw today, from Mary Jo Foley, that .Net Framework 3.0 is now available. You can get it here [Update 6/9/07 – I’ve updated this link. It goes to the download page for the .Net Framework 3.0 SDK. It says it’s “for Vista”, but it will work with Windows XP or Windows Server. It also provides a link XP and Windows Server users will need to follow to download the runtime components so that everything works].

As I noted earlier, .Net Framework 3.0 is what Microsoft was calling earlier “WinFX”. All they did was change the name. It runs on the .Net 2.0 runtime (CLR). So if you previously got the .Net 2.0 redistributable, or the .Net 2.0 Framework (either as an SDK or through a version of Visual Studio 2005), you do not need to upgrade the runtime. .Net Framework 3.0 is a library upgrade to the .Net 2.0 package.

It will be included with Windows Vista when it’s released, and is now available for download for Windows XP SP2, and Windows Server 2003 SP1. If you want to work with the IDE tools for .Net Framework 3.0, called “extensions” at this point, you must use Visual Studio 2005 Professional Edition. It doesn’t look like these extensions work with VS 2005 Express.

.Net Framework 3.0 includes Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), and Workflow Foundation (WF).

WPF is the technology that includes the XAML markup scheme where you can define what is essentially a thick client UI in XML, have the user download it through a URL, and it compiles and runs on the user’s machine. You get the ease of web deployment with the responsiveness to the user of a thick client. A nice technology. It also solidifies the notion of separating presentation from application logic in .Net. All XAML handles is the UI that the user sees and interacts with. Internal logic to an application, which determines what happens next, is decoupled, and is done somewhere else. This logic can be in a web service or remote component, accessed via. WCF (I get into this below). It can be in a managed DLL that’s downloaded along with the XAML. It can also be handled via. WF (I also get into this below).

WCF is what at one point Microsoft called “Indigo”. It’s the new web services API, and it replaces a method of explicit RPC that existed in .Net 1.x, called “remoting”. As I understand it, it supports remotable components, though some remoting capabilities (like customizing the RPC) that used to exist are not supported any longer. From what I gather it takes care of the plumbing work to determine what it’s communicating with, so the developer doesn’t have to worry about it. If it’s a web service, a component accessed via. remoting, or even a COM+ or DCOM component, it’ll do the work of detecting what’s at the other end, and using it.

WF is a neat addition, but I imagine it’ll mostly be used on the server end (Windows Server 2003 for now). WF allows you to stitch together a bunch of loosely coupled components to create a program flow. It’s essentially the glue for what’s called Service-Oriented Architecture (SOA).

Where you used to stitch a bunch of web pages together into an application in code, through links in each page, or through CGI, now you can define the web pages as their own separate components, each not knowing about the others, and use WF to pass data and control flow between them. Don’t get me wrong, it’s not restricted to traditional web applications. WF can be used with WPF as well.

What this enables is applications that can change without recompilation. To add new features to an application, you either add them to existing components, or create new ones, and if necessary add them to the workflow. If you want to change the order of execution of components, or take some components out, just change the workflow. No recompilation is necessary. The workflow does the decision making about what happens next. It can handle conditional execution of components, and loops. BizTalk Server has had this capability for years, but now you can get it for free.
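The idea can be illustrated without the WF API itself. Here’s a hand-rolled sketch in plain C# (all names are mine, not WF’s) of loosely coupled steps that share state but don’t know about each other, stitched together by a workflow object that owns the control flow:

```csharp
using System;
using System.Collections.Generic;

// A hand-rolled illustration of the workflow idea (not the actual
// WF API): each step is a loosely coupled component that reads and
// writes shared state, but knows nothing about the other steps.
public delegate void Step(Dictionary<string, object> state);

public class SimpleWorkflow
{
    private readonly List<Step> steps = new List<Step>();

    // Adding, removing, or reordering steps changes the application's
    // flow without recompiling the components themselves.
    public void Add(Step step) { steps.Add(step); }

    public void Run(Dictionary<string, object> state)
    {
        foreach (Step step in steps)
            step(state);
    }
}
```

A step can make decisions based on the shared state (conditional execution), and since the workflow is just data, changing the order of steps means editing the workflow definition, not the components.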

From everything I’ve heard said about .Net Framework 3.0, it is a HUGE upgrade to the .Net 2.0 Framework class library, hence the reason they upped the version number to 3.0. As I discussed earlier, I’m sure it makes sense from Microsoft’s perspective, but to technology buyers it’s going to be confusing, because people who use .Net are familiar with the versioning scheme: Framework version number = CLR version number. With .Net Framework 3.0 this is no longer the case. I’ve been wondering whether this is going to continue.

For example, the Linq Project CTPs currently run on the .Net 2.0 CLR, even though Linq’s scheduled release is in the Orcas version of .Net, further down the road. Linq is a promising technology being developed at Microsoft that will benefit all of us developers who work with databases. As a techie myself, the exciting features to me are its support for dynamic types (known as “duck typing” nowadays) and lambda expressions (analogous to closures). I’ve seen a couple demonstrations of it, and it’s very exciting. For those who don’t know about it, I’ll just say that with what’s coming, someday you’ll be able to create collections of objects and query them as though you were querying a database. Someday you won’t have to define SQL query strings to pass to a database. You’ll be able to inline your database query in your code as naturally as though it were part of the programming language, and get back a result set of objects. That’s as much of Linq as I’ve seen. The next stage of it is completing the rest of the C/R/U/D operations: inserts, updates, and deletes. I’m looking forward to it.
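To give a taste of what querying a collection inline looks like, here’s a sketch using the query syntax as it later shipped in C# 3.0 (the CTP syntax this post describes may differ in details); the `Employee` class here is made up:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A sketch of querying an in-memory collection with Linq query
// syntax (as it shipped in C# 3.0; the CTP may differ). The
// Employee class is invented for illustration.
public class Employee
{
    public string Name;
    public int Salary;
}

public class LinqSketch
{
    public static List<string> HighEarners(List<Employee> employees)
    {
        // Reads like SQL, but it's checked by the compiler and
        // returns ordinary objects -- no query string to build.
        var names = from e in employees
                    where e.Salary > 50000
                    orderby e.Name
                    select e.Name;
        return names.ToList();
    }
}
```

The same query shape is meant to work whether the source is an in-memory collection or, eventually, a database table.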

Microsoft’s first step in WPF/E?

I saw this today over on ZDNet: “Hints of support for WPF/E on Mac”. One of Microsoft’s developers, Nathan Herring, has announced he’s moving from the CoreCLR team to work on what’s called the miniCLR for the Mac. Supposedly it’ll be just enough of the CLR to run WPF/E. Hey, maybe they’ll come out with a miniCLR for Windows clients as well. I’m sure a lot of people don’t like having to download and install 20 MB for the complete .Net Framework just to run a GUI application.

You can see some video of WPF/E here.

I look forward to what Microsoft will do with this. It will be nice to be able to run .Net rich clients on another platform besides Windows. My guess is that Microsoft will only port WPF/E to the Mac. In my opinion they don’t really have an incentive yet to port it to any other platform.

Microsoft is confusing people again

I wish Microsoft would find people to assign good names to things and stick with those names. They have a habit of changing names to the point that they confuse people. The latest example is Microsoft’s recent announcement that they’re changing the name “WinFX” to “.Net Framework 3.0”. You would think this name isn’t confusing, because it appears to succeed .Net Framework 2.0, but in fact it is. Up to this point, Microsoft has been saying that WinFX will be a layer on top of .Net. It will be an add-on. It is still a layer on top of .Net 2.0, that is, the 2.0 CLR, the 2.0 compilers, and the 2.0 Framework Class Library. But when WPF, WCF, and WF are released, they’re going to bundle them with the 2.0 CLR, compilers, and FCL, and call the whole package “.Net Framework 3.0”. In addition, they’re adding some tools to Visual Studio.

This is going to confuse people, because people are going to think now that Microsoft is foisting a whole new version of their programming languages, C# and VB.Net, a new version of the CLR, and a whole new version of the Framework Class Library on the development community, a little more than a year after they released .Net Framework 2.0. People are going to start saying things like, “Wait a minute! We just got done training our developers for .Net 2.0, and now they’re releasing 3.0?? You’ve got to be kidding me. Why did we train them for 2.0 when we should’ve waited to train them for 3.0?” or, “Are our 2.0 applications going to work in the 3.0 CLR?” These are all misconceptions, but I can guarantee you, there are going to be some people who have them, and Microsoft is going to have to do damage control to clear up the confusion.

One thing this does, once and for all, is make it so you can’t correlate the .Net Framework version number with the version of the CLR, the compilers, or the Framework Class Library. Great. It was so simple before.

I’ll tell you what Microsoft should’ve done. If they wanted to rename WinFX to something that would’ve communicated continuity, they should’ve stuck with the 2.0 major version number, and called it something like .Net Framework 2.0 R2 (for Revision 2), or .Net Framework 2.0.5. This way, people could still identify the major version number with the version of the underlying technologies: CLR, compilers, and FCL.