More Creative Destruction on Tap

I am an unabashed capitalist. It's not that I believe that free markets make perfect decisions on the allocation and use of resources, but rather that on average they make better decisions than individuals. People have a collection of biases that influence economic decisions and often cause them to make poor choices concerning careers, spending, and life in general. The free market is not always efficient, but it has fewer biases.

It was with that thought in mind that I read that Intel was laying off around ten percent of its workforce, or about 10,500 people, over the next nine months. As layoffs go, it is not the biggest we've seen in the industry by any means, but it is significant. And it is especially significant given that Intel has been a model of both technical innovation and good management for the last two decades. Has there been a slip?

I'm nowhere near close enough to the operation of Intel to make even an educated guess on that question. But it seems that every time a technology company gains an advantage in its industry, hiring and spending decisions are made with just a bit less focus. In economic terms, when capital becomes less scarce, it is often put to uses that do not provide a good return.

But ultimately, it is not a reflection on Intel's management, but rather a consequence of our fast-paced industry. Competitive advantage can be measured in weeks, with every high-level decision having consequences that begin immediately and continue for years. In Intel's case, for example, Itanium was not the logical successor to the x86 architecture, though for years after that became apparent, the company continued to spend as though it were.

Compare it to, say, the US automotive industry, which is undergoing a significant transformation of its own. While the two industries are arguably at a similar level of complexity, the strategic decisions made in automobiles tend to play out much more slowly. Decades may pass before the consequences of poor decisions are felt.

All of that is in retrospect, of course. The decisions are much clearer when you can make them in the rearview mirror. That's why management, especially the part of management responsible for allocating capital for strategic uses, gets paid the big bucks.

None of the economic speculation makes it easier on the employees affected, however. Intel will likely provide a reasonable severance package, but anything offered will pale in comparison to the severance offered by the automotive industry. Part of that is no doubt driven by the power of unions, but it is probably mostly driven by the speed of change in technology.

We need less of a cushion, because we know (or at least hope) that we can turn our fortunes around more quickly than the industrial workers of the last generation could. Our attitudes are also more flexible; few of us believe that the job we currently hold will be the one from which we retire. While that doesn't make a transition any easier, at least we can look to the future with some optimism.

Posted by Peter Varhol on 09/07/2006 0 comments


It’s a Wonder We Get Any Work Done

It was with some trepidation that I started a new ASP.NET project in Visual Studio 2005. I had already spent three hours earlier that morning cleaning errant viruses and tracking cookies off my system, and now I was going to start IIS and open a port through my firewall and router to the outside world.

But I had been meaning to investigate development using Microsoft Virtual Earth since I sat in on a lunchtime session on the topic at TechEd in June. And I convinced colleague Patrick Meader that an article on the technology would be a great contribution to Visual Studio Magazine. So I pressed on, making remarkable progress and writing a couple of simple map applications in less than a day.

I suspect I'm like most people when it comes to system protection. I have a subscription to a comprehensive virus checker, and I mostly curse it for getting in the way. For example, my McAfee software seems either to block outgoing mail or to take an inordinate amount of time scanning it, so I have to turn off that particular feature to send any mail at all. But mostly I deal with the hassles, including the all-too-frequent upgrade and special-deal pop-ups, because I have a vague sense that it is good for me.

I use Windows XP Firewall, and also depend on my router to deflect any attack based on my IP address. I don't visit questionable sites and don't (voluntarily) download software that I don't know. In other words, I'm not particularly attentive, but I don't do stupid things.

Yet still I had to spend several hours cleaning my system. Of course, to work on a Virtual Earth application using the Virtual Earth Map Control and ASP.NET, I had to turn on IIS and open a port to the Internet. I regret to say that by the end of the day I once again started getting suspicious messages from McAfee, indicating still more viruses and tracking cookies. These messages will almost certainly lead to another round of cleaning over the weekend.

Enterprise developers often have to operate under a lot of restrictions. In some cases, they cannot even be local system administrators for their own development systems. In all cases, they have to perform regular virus scans and comply with enterprise and IT restrictions on Internet access.

These restrictions make it difficult to write the kind of service-based applications that Virtual Earth otherwise makes so easy to build. You have to be as diligent at keeping your system clean and satisfying corporate mandates as you are at writing quality code.

Some developers are able to approach system security with the same mindset that they do writing code, and it becomes a part of the normal work routine. Even then, it is a drain on software development productivity.

I don't code enough to have that approach. For me, and I'm sure for others, it is just a time-consuming chore undertaken only when necessary. But the cost of both illegitimate and technically-legal-but-shady intrusions (like Microsoft's Windows Genuine Advantage) takes an enormous toll on our ability to do our jobs.

Posted by Peter Varhol on 08/26/2006 2 comments


Back to Basics

When you spend a good part of your day reading or listening to yet another announcement of a product that has a slightly different take than others on the hot problems of today, you tend to forget that there are real issues in computer science that are the subject of serious research and discussion. I took the opportunity over the last week or so to get back to some of the intellectual curiosities that attracted me to computer science in the first place.

My most recent encounter was with a company called 1060 Research (http://www.ftponline.com/channels/business/2006_08/companyfocus/1060research/), which describes a way to abstract calls and data passing into resources identified by URIs. While most of us use URIs to specify Web locations or make database calls (using XQuery), 1060 Research's NetKernel uses them for everything, including function calls and storing return values. CEO Peter Rodgers pointed out to me that a surprising amount of application code does nothing more than make sure messages and data get from one place to another; because NetKernel abstracts that plumbing away, developing applications with it is rapid and efficient.
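To make the idea concrete, here is a toy sketch in C of my own devising (it is not NetKernel's actual API) of what it means to put a URI in front of everything: a single resolve() call is the only interface, whether the URI names a stored value or a computation, and computed results are simply cached under the URI that produced them.

```c
/* Toy illustration (not NetKernel's API): every value and every
   computation is reached through a URI, and resolve() is the only
   interface a caller ever sees. */
#include <stdio.h>
#include <string.h>

#define MAX_RESOURCES 16

static struct { char uri[64]; char value[64]; } cache[MAX_RESOURCES];
static int ncached = 0;

static const char *lookup(const char *uri)
{
    for (int i = 0; i < ncached; i++)
        if (strcmp(cache[i].uri, uri) == 0)
            return cache[i].value;
    return NULL;
}

static void store(const char *uri, const char *value)
{
    if (ncached < MAX_RESOURCES) {
        snprintf(cache[ncached].uri, sizeof cache[ncached].uri, "%s", uri);
        snprintf(cache[ncached].value, sizeof cache[ncached].value, "%s", value);
        ncached++;
    }
}

/* The whole programming model: ask for a URI, get a representation back. */
static const char *resolve(const char *uri)
{
    const char *hit = lookup(uri);
    if (hit)
        return hit;                       /* already computed: no code runs */
    if (strcmp(uri, "toy:greeting") == 0) {
        store(uri, "Hello, world");       /* a "function call" as a resource */
        return lookup(uri);
    }
    return "(no such resource)";
}

int main(void)
{
    printf("%s\n", resolve("toy:greeting"));   /* computed on first request */
    printf("%s\n", resolve("toy:greeting"));   /* served from the cache     */
    return 0;
}
```

The real product does a great deal more, but the sketch hints at why so much of the usual plumbing code disappears.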

Canvassing the blogs provides other opportunities to look for some of the fundamental issues of computer science. Joel Spolsky notes that it can be useful for a programming language to be able to treat constructs in the same way in code, even if those constructs turn out to be very different things (http://www.joelonsoftware.com/items/2006/08/01.html). This is especially true if you don't know in advance what the construct will be. He uses the Lisp programming language as an example of how a term can be a variable, function, or instruction, depending on how it is ultimately used, yet manipulated in exactly the same way up until that point. This has special significance to me, as Lisp was my predominant language when I was doing graduate research in artificial intelligence in the 1980s.

Eric Sink ponders the question of concurrency in application execution (http://software.ericsink.com/entries/LarryO.html). Concurrency was once a real problem for scientific programmers writing Fortran code for a Connection Machine from Thinking Machines Corporation or another significant multiprocessor system, and a much more abstract problem for the rest of us. That is changing. Dual-core desktop systems are becoming common, and he notes that it is only a matter of time before we have 16-core systems at our personal disposal.

Eric presents the Erlang programming language as a possible solution (yet incongruously bemoans its lack of a C-style syntax, even though Erlang itself predates the ANSI C standard by a couple of years). Erlang provides for concurrency through the use of threads that don't share memory space. Instead, threads (actually very lightweight processes) communicate through message passing, a mechanism similar to that used by 1060 Research's NetKernel.
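For readers who, like Eric, think in C, here is a rough analogue of that style built on POSIX threads: the worker never touches the main thread's data directly, it only receives values through a small mailbox. This is only a sketch of the shape of the model; Erlang provides it natively and far more cheaply, and its processes share no heap at all.

```c
#include <pthread.h>
#include <stdio.h>

#define DONE (-1)

/* A one-slot mailbox guarded by a mutex and a condition variable. */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int mailbox;
static int has_message = 0;

static void send(int value)
{
    pthread_mutex_lock(&lock);
    while (has_message)                 /* wait until the slot is free */
        pthread_cond_wait(&ready, &lock);
    mailbox = value;
    has_message = 1;
    pthread_cond_broadcast(&ready);
    pthread_mutex_unlock(&lock);
}

static int receive(void)
{
    pthread_mutex_lock(&lock);
    while (!has_message)                /* wait until a message arrives */
        pthread_cond_wait(&ready, &lock);
    int value = mailbox;
    has_message = 0;
    pthread_cond_broadcast(&ready);
    pthread_mutex_unlock(&lock);
    return value;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int msg = receive();            /* the only way data reaches us */
        if (msg == DONE)
            break;
        printf("worker got %d\n", msg);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    for (int i = 1; i <= 3; i++)
        send(i);
    send(DONE);
    pthread_join(t, NULL);
    return 0;
}
```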

The key for all of these problems is abstraction and modeling; that is, creating a model of the underlying execution engine (both hardware and operating software), and abstracting it to something useful for your particular purpose. It turns out that the better models we have, the better the abstraction can be. Of course, it is unlikely that there exists such a thing as a universal model of execution, so there will be no universal programming language to abstract to. This is perhaps the most cogent argument for the need to have several programming languages in your personal set of skills.

You might say that there is nothing new or significant here; in Joel's example, C lets you use pointers both to reference variables and to call functions. Likewise, you can do threading and reliable concurrency in C by setting up critical sections and resource locking in your code.

But the fact that you can use pointers to reference (and dereference) both variables and functions in C doesn't mean that the language treats them the same. For example, when you dereference a pointer to a variable, you are finding the value associated with that variable. When you dereference a pointer to a function, you are finding the entry point of that function. And while you can do concurrency in C, writing your own critical sections and locking resources manually, it is difficult, time-consuming and error-prone.
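A few lines of C make the distinction plain; the variable and function names here are mine, but the behavior is exactly what the language defines.

```c
#include <stdio.h>

/* A variable and a function to point at. */
static int counter = 42;

static int add_one(int x)
{
    return x + 1;
}

int main(void)
{
    int *var = &counter;        /* pointer to a variable's storage     */
    int (*fn)(int) = add_one;   /* pointer to a function's entry point */

    /* Dereferencing the variable pointer yields the value stored there. */
    printf("*var = %d\n", *var);

    /* "Dereferencing" the function pointer yields the function itself,
       which you then call through its entry point. The syntax looks
       similar, but the language is doing two very different things. */
    printf("(*fn)(*var) = %d\n", (*fn)(*var));

    return 0;
}
```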

There was a time when we didn't think very much about concurrency and other problems, but that is changing. We require new thinking, and new approaches to these problems, because these problems are starting to touch mainstream computing.

These are all fundamental problems in computer science. Solutions here are not going to convince a venture capitalist to fund a start-up company, or let you go public or sell out to a larger company in two years. In many cases, the only reward for solving fundamental problems of computing will be intellectual satisfaction.

The fundamental problems can seem abstract when you're in crunch mode for your latest product delivery, or in my case reading yet another press release for a point release to an established product. But they will have a far greater influence on your work, and on the direction of software in the future, than any product you may be shipping today.

Posted by Peter Varhol on 08/20/2006 0 comments


One Man’s Trash is Another Man’s Treasure

That was my thought after picking up a press release from Sun Microsystems in support of its NetBeans IDE. Sun announced a free program to migrate Borland JBuilder users to NetBeans (http://developers.sun.com/prodtech/javatools/borlanddevs/index.html), citing Borland's upcoming divestment of its development tools and noting that Borland has discontinued investing in its tools business. It is especially ironic, as Borland has given FTPOnline reason to believe that both it and the development tools successor company will continue to make strategic investments in those tools.

Still, there is nothing inherently incorrect in Sun's statements. Once the divestment occurs, Borland will not be investing in development tools; the successor company will. Sun is merely spreading FUD (Fear, Uncertainty, and Doubt) about a competitor's products. And it is not the only one; BEA has announced a similar program.

This practice is nothing more than marketing; it might even be good marketing, as the costs of putting such a program in place are minimal, and the return could be substantial. I have used FUD during my time as a software product manager, albeit with little success.

But a developer shouldn't see something like this and immediately jump to one conclusion or another. Everyone's situation is unique, and unless the tools in question are problematic today, there is little reason to panic.

Vendors announce shifts in strategic direction and product retirements all the time. Almost every technical professional faces this situation at some point in time. In the example cited above, migrating from JBuilder to NetBeans might make sense, but not as a universal truth. Based on my own experience as a FUD creator, here are some steps that you should take when you get word that a product you depend upon is undergoing upheaval:

1. Take a deep breath, and don't do anything right away. All too often emotion rules our decision-making process, and this is one time where logic must come to the forefront.

2. Collect information. Often the best information won't be available right away, so taking your time in making a decision takes on added importance. Attend conferences and talk to others in the same boat. There may be more information out there than you think.

3. Assess your needs, today and in the future. Today your needs are well understood, but the future is not so clear, so you have to do some guessing. The best way is to lay out three separate scenarios, and determine your tools needs under each of them.

4. Talk to different vendors about your actual and projected needs, and see how they translate into purchases, training, and changes in ways of working. The feedback you get will be biased, so make sure you evaluate your tradeoffs logically.

5. Make a decision. The decision can range anywhere from making an immediate change to milking the current solutions until they are no longer viable. Don't let FUD scare you into spending money or taking a risk if you don't have to.

If it sounds like the above steps might take a year or two, that is exactly the idea. Making a decision right away almost always results in a poor choice and a hurried implementation. If you do, then the FUD wins.

Posted by Peter Varhol on 08/11/2006 0 comments


Not a Day at the Beach

As the heat index in New England once again gallops past a hundred degrees, I am safely ensconced in my basement, where the primary source of heat is my own agitation at yet another round of software upgrades. In this case, it was the Adobe Acrobat Reader, but it could just as easily have been any of the several dozen applications I have on my system.

When I launched the Acrobat Reader, I got the notification that there was a new version available, and thought, What the heck, I have a few extra minutes this morning. Well, three reboots, one hung system, and one hour later, I finally had the latest version of the Acrobat Reader on my system. And somewhere in the interim, I rather forgot what I was going to do with it.

The disingenuous thing was that Adobe called two of the updates critical security releases. Sound familiar? (If not, I still have the Windows Genuine Advantage sitting in my Updates cache as a critical update). How could you not install that?

Now, I am not intentionally singling out Adobe; other software vendors engage in similar practices. And Adobe may well respond that the hour I spent upgrading is a trivial amount of time, especially when balanced against a more secure system. And within that company's microcosm, it is correct.

But multiply this experience by the thirty-seven applications I have on my system (I counted), and it has the potential to become a full-time occupation, or at least a significant drag on productivity. Yet it makes perfect sense from the standpoint of individual software companies, because taken individually, the effort is minor.

In 1968, Garrett Hardin published an essay in Science called "The Tragedy of the Commons" (http://en.wikipedia.org/wiki/Tragedy_of_the_commons), in which he postulated the then-radical notion that when resources were finite, each person pursuing their own self-interest in maximizing their individual return would in fact not even come close to maximizing the use of that resource for all. Applied to my situation, it stands to reason that software companies are engendering no good will by working to their own individual advantage.

My system is a Windows system, and my applications Windows (or Web) applications. I have limited recent experience with Linux and open source software. I wonder if the open source model is better at addressing the issue of finite end user resources. Any thoughts?

Posted by Peter Varhol on 08/02/2006 0 comments


Open SOA Announces New Members, New Specifications

On July 26, 2006, the Open Service Oriented Architecture (SOA) group announced an expanded membership, a new Web site that presents the group's goals as well as specifications in different stages of development, and a roadmap of future activities. This announcement represents a significant step forward in the ability of designers of Web services and SOAs to work more effectively with enterprise data.

The Open SOA group, consisting of BEA, IBM, IONA, Oracle, SAP AG, Sybase, Xcalia and Zend, has been working to create the Service Component Architecture (SCA) and Service Data Objects (SDO) specifications. The SCA specifications are designed to help simplify the creation and composition of business services, while the SDO specifications focus on uniform access to data residing in multiple locations and formats.

The group announced new members of the consortium, including Cape Clear, Interface21, Primeton Technologies, Progress Software (formerly Sonic Software), Red Hat, Rogue Wave Software, Software AG, Sun Microsystems and TIBCO Software. The consortium's aim is to define a language-neutral programming model that meets the needs of enterprise developers who are building software that exploits the characteristics and benefits of Service Oriented Architecture.

The group's work has resulted in the development of new draft SCA specifications for a declarative policy framework, improved description of connectivity with bindings specifications for JMS, JCA and Web Services, and new BPEL and PHP authoring models. In addition, draft specifications for Service Assembly; Java and C++ service authoring; and SDO have been updated.

The SCA and SDO specifications can help organizations more easily create new IT assets and transform existing ones, enabling reusable services that can be rapidly assembled to meet changing business requirements. These specifications can greatly reduce the complexity associated with developing applications by providing a way to unify services regardless of programming language and deployment platform.

The group has also launched a Web site that provides information on its goals and specifications. You can read more at http://www.osoa.org.

Posted by Peter Varhol on 07/27/2006 0 comments


An Expression of Frustration

Last week Microsoft announced that the Expression development tools were probably not going to be available for at least another year. To many, that didn't even produce a blip on the radar. The Windows Presentation Foundation (WPF), upon which Expression is built, is scheduled to be an integral part of the Windows Vista operating system. To developers, WPF is a new model for graphics programming, meaning that it is also a new model for user interfaces. To administrators, it represents a new model of computing to support, one that requires new hardware to fully appreciate.

Both groups are likely to be happy that they don't have to deal with applications using these new technologies until later. It may be a happiness born out of deferred pain, but it means another year before they have to worry about dramatic changes in application user interfaces and hardware requirements.

But it should be a big deal to Microsoft. With Windows Vista shipping around the end of the year, the development tools for new technologies should be available today so that developers can start coding, and administrators can begin planning for higher levels of graphic performance.

Expression is specifically targeted toward building user interfaces on WPF using the Aero user interface. Expression consists of a set of three tools – Graphic Designer, Interactive Designer, and Web Designer. The Graphic Designer is a powerful drawing tool intended for graphic designers to create high-quality graphics for user interfaces. Interactive Designer is a UI designer's tool, while Web Designer is a Web UI building tool. Both the Interactive and Web Designers let you visually create Aero user interfaces for their respective platforms.

The big problem that Microsoft faces is that no one is quite sure why they want a new graphics model. Granted, there are some intriguing new features in WPF and the Aero user interface. For example, it is possible to add perspective, shading, and transparency to graphic objects, and graphic regions can be overlaid on top of one another (which incidentally seems to render the GotFocus event more or less obsolete for graphical regions). And it does look impressive, although some viewers of early versions of Windows Vista find it disconcerting.

But except for gamers and a few people doing high-end data visualization, better graphics are pretty far down the list of new features people want. This may in fact be a case of new technology looking for a broad use. Back in 2002, managed code also fell into that category, and it is moving into extensive use only today. And managed code was pushed both by Java and by the need to reduce memory errors. Aero applications may also be shunned by administrators, who don't want to upgrade hardware unnecessarily.

But there is no doubt that, once the Aero genie is out of the bottle, people's expectations of UIs will change in its favor. It simply does Microsoft no good to delay when that occurs.

It's also intriguing to note that while Microsoft intends for developers to create and deploy applications and UIs with managed code, the upcoming Office 2007 (now tracking to the beginning of next year) uses the Aero look and feel with purely unmanaged code. A contact at Infragistics told me that WPF provides an unmanaged API as well as the managed one, although Microsoft is certainly not publicizing the unmanaged one. The Office group at Microsoft certainly has to target its users, and this is a clear indication that at least one large application group at the company isn't expecting a fast uptake of Vista and Aero.

So what should you do? If you're a developer with responsibility for user interfaces, start looking at WPF and Aero now. You may not be developing applications with these technologies for two or three years, but you should understand how UI development will change, and how to best plan for it. When the Expression tools become generally available, you'll be in a position to determine if they meet the needs of your applications going forward.

If you're a system administrator, look at the graphics hardware requirements for Vista and Aero today. After Vista ships, start buying systems, especially client systems, that meet these requirements. It might mean budgeting a bit more today, but when the expectations become set, the transition will be swift.

Posted by Peter Varhol on 07/25/2006 0 comments


Testing Web Services

As more enterprises embark on the path of turning enterprise applications into Web services, development teams are finding it increasingly difficult to assure the quality of those services. Building and maintaining Web services is a very different prospect than it is for traditional applications. Certainly the code inside individual services is developed and tested in similar ways, but Web services are more about the interactions among code components than about what goes on inside those components.

This requires new rules, and new tools to make and enforce those rules. I found both in ample evidence at the offices of Mindreef (www.mindreef.com), a Web services testing vendor exploring new ways to define quality in the lifecycle of Web services. Mindreef started with a product called SOAPScope, a single-user debugger for SOAP packets. It was largely Java-based and was a natural fit for basic debugging of communication between Java Web services.

Today, the company has significantly expanded its offering, delivering SOAPScope Server (try saying that three times fast), for team-based Web service quality. SOAPScope Server provides a Web-based interface for teams from business analysts to developers to partners. SOAPScope Server is a portal that is more agnostic about the nature of Web services, focusing on testing the WSDL and underlying supporting schema.

There are two ways to use SOAPScope Server. One is as a decidedly nontechnical user, who can perform basic testing of interfaces by entering data into an abstraction of the WSDL and ensuring that the proper values are returned. The product also serves as a type of collaboration platform, where users can exchange notes based on testing results and fixes put into place.

Developers have a higher level of access to schema, SOAP packets, and other technical details of the services. If necessary, they can examine both WSDL and schema to perform detailed analysis of interfaces. They can also create stubs of services to behave in an expected way in order to test other services. In other words, you can design an interface to give a certain output based on a given input, and without writing any code. This by itself can significantly improve the ability to test services in an SOA.
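SOAPScope Server builds such stubs through its Web interface rather than in code, but conceptually a stub is nothing more than what the sketch below shows: a stand-in service that returns canned responses for known inputs, so that the services which call it can be tested in isolation. The operation and messages here are invented for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Canned request/response pairs; a real stub would map WSDL operations
   and SOAP messages, but the principle is the same. */
static const struct { const char *request; const char *response; } canned[] = {
    { "getQuote:MSFT", "<quote symbol=\"MSFT\" price=\"27.10\"/>" },
    { "getQuote:IBM",  "<quote symbol=\"IBM\" price=\"76.45\"/>"  },
};

/* Stand-in for the real service: no business logic, just expected outputs. */
static const char *stub_service(const char *request)
{
    for (unsigned i = 0; i < sizeof canned / sizeof canned[0]; i++)
        if (strcmp(canned[i].request, request) == 0)
            return canned[i].response;
    return "<fault>unknown request</fault>";
}

int main(void)
{
    printf("%s\n", stub_service("getQuote:MSFT"));
    printf("%s\n", stub_service("getQuote:SUNW"));
    return 0;
}
```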

There are several challenges in moving ahead with a server-based platform, according to founder and president Frank Grossman. Probably the biggest is how to integrate into an enterprise environment that may involve multiple project groups, users, QA groups, and even teams from different business partners. These challenges involve both getting the collaboration model right, and getting the technology right so that all of these disparate participants can get access to the information they need. Once that occurs, however, SOAPScope Server is very well positioned to offer Web service quality features in a Software as a Service package.

Most enterprises that have made a strong commitment to Web services and SOA realize that the problems of quality are very real, but they are also very different than in the past. Traditional testing tools might be able to help assure the quality of individual Web services, but they don't get you very far in looking at a collection of interacting services. And it is that collection that constitutes your application. SOAPScope Server is a first attempt at true testing of an SOA in a collaborative environment.

Posted by Peter Varhol on 07/14/2006 0 comments


Just Because You're Paranoid

As a follow-up from my post of a month ago ("When Data Theft Hits Home"), it is worthwhile noting that the computer stolen from the Veterans Administration employee has been recovered, and there did not appear to be any questionable access to its data. My personal information (name, Social Security number, birth date) does not seem to have been compromised. I, and 26 million other veterans, collectively breathed a sigh of relief.

But the point of my previous post remains valid. Entities that hold our personal data are many, and few protect that data in a way that lets us sleep well at night. These entities claim that they are the victims when data is stolen, but that is not quite correct. We are the victims, and they are the intermediaries that enabled it to happen. We entrust our data to them, whether knowingly or not, and occasionally that trust is abused.

I am presumably safe this time. But I'm sure it won't be the last time my personal data is misappropriated.

Posted by Peter Varhol on 07/06/2006 0 comments


Time for a REST

What I tell you three times is true.

- The Bellman, from The Hunting of the Snark

Although Lewis Carroll the mathematician certainly knew that proof by repetition offered no proof at all, it is equally true that, in real life, ideas heard again and again over time probably have some staying power. One of them is REST, the concept of Representational State Transfer. The term originated in Roy Fielding's 2000 doctoral dissertation about the Web, and refers to an architectural style that emphasizes certain tenets of practice.

It turns out that the principles behind REST are useful in the design of Web-based applications. REST received a lot of attention at the Gartner Enterprise Architecture Summit in San Diego last week. Gartner defined the REST style as applying the following principles:

Use of uniform resource identifiers (URIs) to abstract resources

Manipulation of resources through representations

Self-descriptive messages and a uniform intermediary processing model

Hypermedia as the engine of application state

One point of these principles is that we have to start thinking about data in different ways than we have in the past. We have traditionally used the Web as a way of communicating data that is stored in a specific, fixed location. REST implies that data should be an integral part of the Web, distributed and always available at a location that is abstracted away from any physical one. Further, data should be completely accessible through a small, fundamental set of operations; in the case of HTTP: POST, GET, PUT, and DELETE.
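Here is a minimal sketch, in C and with hypothetical URIs and payloads, of what that uniform interface buys you: one entry point, four verbs, and every resource in the system handled the same way. A real RESTful service sits behind HTTP rather than a local function call, but the shape is the same.

```c
#include <stdio.h>
#include <string.h>

#define MAX 8

/* A toy resource store: every resource is just a URI and a representation. */
static struct { char uri[64]; char body[128]; int used; } store[MAX];

static int find(const char *uri)
{
    for (int i = 0; i < MAX; i++)
        if (store[i].used && strcmp(store[i].uri, uri) == 0)
            return i;
    return -1;
}

/* One entry point, four verbs: the whole interface to every resource. */
static const char *handle(const char *verb, const char *uri, const char *body)
{
    int i = find(uri);
    if (strcmp(verb, "GET") == 0)
        return i >= 0 ? store[i].body : "404 Not Found";
    if (strcmp(verb, "PUT") == 0 || strcmp(verb, "POST") == 0) {
        if (i < 0)
            for (i = 0; i < MAX && store[i].used; i++)
                ;                              /* find a free slot */
        if (i == MAX)
            return "507 Insufficient Storage";
        store[i].used = 1;
        snprintf(store[i].uri, sizeof store[i].uri, "%s", uri);
        snprintf(store[i].body, sizeof store[i].body, "%s", body ? body : "");
        return "200 OK";
    }
    if (strcmp(verb, "DELETE") == 0) {
        if (i >= 0)
            store[i].used = 0;
        return "200 OK";
    }
    return "405 Method Not Allowed";
}

int main(void)
{
    printf("%s\n", handle("PUT",    "/orders/17", "status=shipped"));
    printf("%s\n", handle("GET",    "/orders/17", NULL));
    printf("%s\n", handle("DELETE", "/orders/17", NULL));
    printf("%s\n", handle("GET",    "/orders/17", NULL));
    return 0;
}
```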

REST provides the tenets of the so-called Web 2.0 (with both credit and blame to Tim O'Reilly). While the term is wildly over-hyped, the concepts are worthwhile. Simplicity is better than complexity, and abstraction is better than physical reality.

If you're not looking at the REST architectural style to build your Web-enabled applications, you're making more work for yourself and building applications that are difficult to maintain and enhance. You can find more on REST on Wikipedia at http://en.wikipedia.org/wiki/Representational_State_Transfer.

Posted by Peter Varhol on 06/26/2006 0 comments


Where in the World?

I'm at Microsoft TechEd in Boston, a cornucopia of computers, applications of all types, networks, and technology in general. Not since the heyday of COMDEX has there been a larger and more eclectic collection of technology under one roof. Thanks to the fact that the new Boston Convention Center is within commuting distance of my home, I am able to spend some time enjoying the energy of 13,000 people focused on this technology.

I've always had an interest in maps and geography. Despite being digitally connected to the world, I have a lifelong fascination for knowing where I am. It goes well beyond that, in fact. In my youth my family never traveled, so I used maps to travel in my imagination around the world. Today, I am somewhat better traveled, but still spend downtime browsing maps as others browse the news headlines.

So I loved Google Earth when it first became available. I could see not only the layout of all of the places I had visited and wanted to visit, but also the physical buildings (at least the roofs), the streets, and even the traffic on the street. With few exceptions, it didn't even bother me that the photos were in many cases several years old (they show only a construction site where the completed Boston Convention Center now resides in South Boston, for example).

I sat in on a lunchtime session today on Microsoft Virtual Earth, a part of the Windows Live hosted application and service site (local.live.com). When Microsoft first announced Virtual Earth, I couldn't help but feel that the company was simply copying an idea conceived of by Google. And doing so in a second-rate fashion.

The former may be true, but Microsoft delivers on its reputation of taking existing ideas and improving upon them. Virtual Earth combines both traditional online maps with aerial and satellite photos to provide an experience that takes you seamlessly from maps to photos as you zoom in on a location. And Virtual Earth seems to have more high-resolution photos than its Google counterpart.

The Virtual Earth user interface is a bit clunky compared to Google Earth, but it is delivered entirely within the browser, so that's to be expected. It represents a difference in philosophy rather than quality.

Where Microsoft has excelled is in enabling developers to easily build applications that make use of Virtual Earth services. The API is well defined and easy to use from Visual Studio. It includes tools that make it possible to add a great deal of value to a geographic application. For example, you can take existing custom maps, such as subway route maps, and overlay them onto Virtual Earth maps; in about 20 minutes, with no programming skills, you can present a Virtual Earth view that incorporates that custom data. I can't wait to try this with, for example, a Boston T (the city subway system) route map.

In short, expect to see me talk in the future about some of my own applications that use Virtual Earth services. For information on the future of Virtual Earth, the presenters provided a link that lets anyone sample services currently under development. Visit preview.local.live.com.

In a larger sense, I can only conclude that Google needs a conference that is the equivalent of TechEd. You learn how to develop geographic applications using Google Earth by looking for online resources. With Virtual Earth, you can attend a TechEd session and pick up enough to get started. Google gets plenty of exposure already, but this would be a still further boost to its technology.

Posted by Peter Varhol on 06/13/2006 0 comments


Windows Workflow Foundation and Business Process Modeling

I have an ongoing professional interest in business process modeling (BPM). I did an afternoon workshop on the topic at the FTP Enterprise Architect Summit last month, playing to a packed and very engaged room full of professionals. That's why I attended the Microsoft TechEd session on using the upcoming Windows Workflow Foundation (WWF – now in beta version 2.2, if you can believe that numbering scheme) to take a state-based approach to building workflow applications.

I gave a brief demonstration of WWF in my workshop at the Enterprise Architect Summit. My sense at the time was that it was useful in tying together both modeling and the actual code that implements the model, but I was disappointed that it didn't follow one of the already-established standards. Both the Business Process Modeling Language (and the accompanying Business Process Modeling Notation) and the Business Process Execution Language (BPEL) have significantly more industry backing and tool support than WWF. The new technology, which will be in beta until at least the release of Windows Vista, seeks to establish yet another standard, and one that is tied closely to Vista and Visual Studio.

The principal advantage of WWF is that it provides a seamless interface between the workflow model and the code that implements that model. In contrast, BPMN is a notation language only, and while BPML provides the execution language for the notation, there is not the tight integration that Microsoft has created between model and code. A WWF state-machine workflow can probably be modeled with a state-based notation such as BPMN or Petri nets, but doing so loses the link between model and code.
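For anyone who hasn't worked with state-based workflow, the underlying idea is small enough to sketch: a set of named states, a set of events, and a transition function. The order-approval states and events below are invented for illustration; WWF wraps this model in designers and activities, and BPMN or Petri nets draw it as a diagram, but the core is the same.

```c
#include <stdio.h>

typedef enum { SUBMITTED, UNDER_REVIEW, APPROVED, REJECTED } State;
typedef enum { REVIEW, ACCEPT, DECLINE } Event;

/* The workflow model: given a state and an event, return the next state. */
static State next(State s, Event e)
{
    switch (s) {
    case SUBMITTED:    return e == REVIEW  ? UNDER_REVIEW : s;
    case UNDER_REVIEW: return e == ACCEPT  ? APPROVED
                            : e == DECLINE ? REJECTED : s;
    default:           return s;   /* APPROVED and REJECTED are terminal */
    }
}

int main(void)
{
    static const char *names[] = { "Submitted", "UnderReview", "Approved", "Rejected" };
    State s = SUBMITTED;
    Event script[] = { REVIEW, ACCEPT };   /* a sample run through the workflow */

    for (unsigned i = 0; i < sizeof script / sizeof script[0]; i++) {
        s = next(s, script[i]);
        printf("-> %s\n", names[s]);
    }
    return 0;
}
```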

While I came away impressed with the power and simplicity of WWF, I still can't help but think that it confuses rather than solidifies the BPM world. It is a good approach for those who rely on Windows and a Microsoft language for their workflow applications, but few enterprises fall entirely into this category. Instead, I would like to see Microsoft support one of the established standards while providing a similar link to Visual Studio and .NET code.

Posted by Peter Varhol on 06/13/2006 0 comments

