When Data Theft Hits Home

Today I received the letter that I had been expecting for the last couple of weeks. It was from the Veterans Administration, informing me that I was among the 26.5 million veterans and active duty military personnel whose data had been on a laptop computer stolen earlier this spring. Despite the many reports of accidental or intentional loss of personal data, it was the first time I had ever received official word that I was one of the victims. The letter was inevitable once the scale of the loss became known, since it covers just about everyone who has served or is serving in the last thirty years.

The letter is long on rhetoric and short on apologies or solutions. It tells me to check my credit history and watch out for fraudulent activity in my accounts. If I expected the VA to take any action on my behalf beyond sending me that letter, I was sadly mistaken.

While neither I nor any of the other victims have experienced identity theft as a result of this loss, there are many lessons to be learned from this experience. First, though my military service is long since behind me, it seems that I still carry its baggage in the form of my electronic personnel records. There is no telling where else my personal data may be residing, waiting to become part of yet another news story.

Second, as Sergey Brin, president and co-founder of Google, said recently (http://money.cnn.com/2006/06/06/technology/google_congress.reut/index.htm) during a trip to Washington DC, Internet users have expectations about privacy that are not in accordance with the direction the Net is going. The Veterans Administration theft was not an Internet problem, of course, but the principle is similar. We have expectations that our credit and accounts should not be misused, and are unpleasantly surprised when they are. The problem is as much with our perceptions as it is with others' systems.

Last, it seems foolish to worry about the NSA having records of our land-line telephone calls when any GS-9 in the VA or another Federal agency can wreak more havoc by accident than the NSA possibly can by design. I know there are those who would disagree with that position, but it remains clear that our privacy and identities are more at risk from accidents or theft than from specific government or commercial activity.

I began to systematically monitor my credit history and look at account activity only last year. Because each of the three credit agencies provides a free credit report once a year, every four months I download a report from one of them in succession. I feel fortunate that the VA doesn't have my current address (they used IRS records to coordinate the mailing, which is also troubling, albeit in a different way), but they certainly have my full name, date of birth, and Social Security number.

Don't get me wrong – I'm furious that the Veterans Administration has failed to take responsibility for this theft, and is only warning me to watch out rather than taking action to protect me against its own blunder. I think that laws against data theft, intentional or accidental, should be strengthened to hold the keeper of that data accountable, because properly protecting such data requires nothing more than commitment.

But all of us have a responsibility here, too. We are the ultimate watchdogs of our financial and personal data. Playing dumb with our own data was never a good idea, and today can have drastic consequences.

Posted by Peter Varhol on 06/08/2006 | 0 comments


We Can’t Live with Embedded Bugs

The Tuesday, May 30, 2006 Wall Street Journal (www.wsj.com; paid subscription required) offers an unusually candid look at how flaws in avionics software can introduce new elements of risk into flying. It cites several documented examples where software bugs provided wrong or conflicting information to flight control systems, which responded by making the wrong decisions, putting crew and passengers in danger.

Avionics and other safety-critical systems are built differently from business software, and any comparison between the two is unrealistic. Aviation software is regulated by a standards body called RTCA (http://www.rtca.org); the applicable standard is DO-178B, Software Considerations in Airborne Systems and Equipment Certification. Aviation system software costs a great deal more to develop, and is commensurately more reliable.

But aviation software is also getting much more complex. The WSJ article notes that the average airliner today has about five million lines of code, as opposed to about one million on older planes. As complexity grows, so does the potential for bugs. And future airliners (the Airbus A380 and Boeing 787) are integrating avionics systems so that the same software will perform many related tasks, rather than using separate programs for specific and narrow operations. This could well further increase complexity.

How can we resolve the conflict between complexity and reliability when lives hang in the balance? Here are two broad suggestions.

1. Establish development and testing procedures that maximize the likelihood of producing high quality software. Experience tells us that perfection cannot be achieved, even at prohibitive cost. But if there were ever a need for defined and enforced processes for development and test, this is it. Rigorous adherence to software engineering principles during development, and the use of comprehensive test cases and analysis during test, can bring incremental improvements to software quality.

2. Fail safe. Besides being the name of a classic movie, it also refers to the concept that a system should fail in the safest possible way. In the case of aviation systems, a system failure or data conflict should result in control being turned back over to the pilots in a way that enables them to seamlessly take over. The design of such software will be a challenge, but it is achievable.
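To make that concept a little more concrete, here is a deliberately tiny sketch in Python of the fail-safe idea. It is not avionics code, and the sensor values and tolerance are invented for illustration: when redundant readings disagree, the automation declines to act and hands control back to the pilots.

```python
def select_airspeed(readings, tolerance=5.0):
    """Return an airspeed value the automation may act on, or None to signal
    that control should be handed back to the pilots.  Purely illustrative."""
    if len(readings) < 2:
        return None                        # too little data: fail safe, disengage
    if max(readings) - min(readings) > tolerance:
        return None                        # sensors disagree: fail safe, disengage
    return sum(readings) / len(readings)   # consistent data: safe to automate

# Hypothetical readings from three redundant airspeed sensors, in knots.
reading = select_airspeed([251.0, 250.4, 250.9])
if reading is None:
    print("Autopilot disengaged; pilots have control")
else:
    print("Autopilot using airspeed %.1f" % reading)
```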

There is no panacea for producing high-quality software for safety-critical systems. The challenge will only grow as avionics become more complex and interrelated. But as the WSJ article points out, software-enhanced avionics systems have actually helped make flying significantly safer over the last two decades. We shouldn't give up these benefits, but the need to build reliable safety-critical systems is growing rapidly.

Posted by Peter Varhol on 05/30/2006 | 0 comments


Should Java Be Open Source?

Be careful what you ask for, because you might get it.

Get ready for a wild ride. At JavaOne, Sun Microsystems announced that it would release Java under an as-yet-to-be-determined open source license. Various parties have demanded this for years, and at some point in the near future, it will happen.

Now what?

I think this was driven by the fact that Sun never had a viable business model for Java. Despite a significant investment in defining, implementing, and evangelizing the platform, it is not clear that Sun realized a net profit. That's a shame, because many other companies that got on the Java bandwagon did. Java may have sold more Sun servers than would have been sold otherwise, but I suspect that it still doesn't cover the company's costs of development.

So Sun needed yet another Java strategy. Open sourcing the platform will no doubt please many in the community. Vendors such as IBM and BEA, looking for an edge, will build proprietary extensions that benefit their respective platforms and the users of those platforms. Enthusiasts will feel more comfortable experimenting with extensions to the language, which will ultimately result in innovations that can be folded back into the official version, wherever that may reside.

There is certainly a downside. Java will likely become less compatible across platforms and implementations. Some vendors will resort to lock-in rather than innovation to hold onto customers.

What does Sun get out of it? That is a good question. It is likely that there is some favorable public relations to be had from such a significant open source gesture. But will it sell more servers? Perhaps, but not enough by itself to make a difference to Sun's future.

Perhaps Sun hopes for a groundswell of support and innovation comparable to what IBM achieved through the Eclipse Foundation. Then it can add value to the product that comes from such an industry collaboration, as IBM did with Rational Application Developer. That could ultimately benefit Sun, but it has to be prepared to invest a great deal, and be patient for results.

And then comes the difficult part. When those results arrive, Sun has to learn how to sell value-add software and services. As a company, it has never demonstrated these skills before. For the sake of a platform and language that so many depend on, I hope it succeeds.

Posted by Peter Varhol on 05/19/2006 | 0 comments


Architecture on the Edge

Normally in IT and software, when we think of architecture on the edge, we think of how to make applications accessible beyond the desktop, to the factory floor, or to handheld and mobile devices. But Michael Platt, a web architect at Microsoft and keynote speaker at the Enterprise Architect Summit, had other ideas.

Michael noted that many of the innovations that enterprises are just starting to become aware of originate in the consumer space. He cites, in particular, lightweight technologies that make consumer web sites more agile, notably the REST architectural style. REST stands for Representational State Transfer, an architecture that emphasizes revealing data gradually in a lightweight way using POST and GET rather than heavier interfaces (see http://www.xfront.com/REST-Web-Services.html for a good summary description). Consumer sites such as MySpace are using these approaches to build sites that easily scale into the millions of users.
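For readers who haven't seen the REST style up close, here is a minimal sketch of what such a request looks like in Python. The URL and resource are hypothetical, so the request won't actually return data; the point is simply that a plain HTTP GET against a resource's own address, with no heavyweight envelope, returns a representation of that resource.

```python
import urllib.request

# Hypothetical resource address; in the REST style each entity lives at its
# own URL and is fetched with a plain HTTP GET rather than a heavier envelope.
url = "http://example.com/parts/00345"

request = urllib.request.Request(url, headers={"Accept": "application/xml"})
with urllib.request.urlopen(request) as response:
    body = response.read().decode("utf-8")   # a representation of the resource
    print(response.getcode(), body)
```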

Michael outlines several sets of architectural approaches that have been explored in the consumer space and are worth looking into by enterprises seeking to radically change how they interact within and outside of their organizations. These are:

  • Relationship management. Enterprises seeking to build better bridges to their customers must reassess how they interact, using technologies to open more nontraditional lines of communication.
  • User-generated rich content. Often customers know more about products than the manufacturer does. Tapping into this knowledge through media such as wikis and online product reviews can make the enterprise far more responsive than it is today.
  • Collaboration. Enterprises need to continue breaking down silos so that information can more easily flow between different parts of the organization.

Techniques for achieving these goals have already been tested on many consumer-oriented sites. It is time to bring some of those techniques into the enterprise.

Posted by Peter Varhol on 05/16/2006 | 0 comments


Modeling Business Processes Has Nothing to Do with Standards

I gave a workshop at the Enterprise Architect Summit Monday entitled IT Drivers for Business Process Modeling. When it came time for me to explain the plethora of language stacks and standards that comprise this field, my only option was to present an unreadable two-slide PowerPoint list of the languages, their standards body, and their purpose.

Then the fun began. Why can't BPMN and BPEL play together if they are both standards? What is the difference between a notation and an execution language? Isn't there an execution language that goes along with BPMN? Why aren't we using that one? What is the Workflow Management Coalition?

The fundamental problem in Business Process Modeling (BPM) is that there are competing standards that in many cases would be better off complementing each other. The Business Process Execution Language (BPEL), an OASIS standard, actually competes with the Business Process Modeling Language (BPML), a standard of the Business Process Management Initiative (BPMI), now under the auspices of the Object Management Group (OMG). Yet the industry has adopted the notation language from this group, BPMN, but also BPEL from OASIS. See?

It gets even more complicated, but the point is that Business Process Modeling as a field is still pretty immature. The standards bodies (OASIS, OMG, and WfMC) have to work together rather than compete, and devise a single language stack that practitioners can learn and vendors can build tools for. There is a great deal of potential in BPM to bring together disparate applications that until now have operated in silos in the enterprise. SOA isn't sufficient unless you have a plan for what business activities it will address, and that's where BPM comes in. But it can't happen until we have a single language stack that everyone can work toward.

Posted by Peter Varhol on 05/16/2006 | 0 comments


Microsoft and Software Engineering

Last fall, at the invitation of Visual Studio Magazine editor Patrick Meader, I wrote an editorial note on the subject of whether Microsoft products were enterprise-ready. After some thought, I concluded in the negative, not because of the products themselves, but at least partially because of the poor reputation of Microsoft's own software development practices. "If Microsoft development teams use state of the art software engineering processes and tools, measure and manage risk, and adhere to their schedules," I said at the time, "the company needs to make this known, and make these processes and tools widely available as best practices."

Someone at Microsoft must have heard, because Russ Ryan, Product Unit Manager with the Visual Studio team, spoke in a VSLive keynote entitled 'How We Make the Sausage: Lessons from "the Factory Floor" on How Microsoft Does Software Engineering.' Because of my previous statements, I felt obligated to attend. That Microsoft was willing to discuss this topic in front of an audience of fifteen hundred software engineers was encouraging.

After listening to Russ, my impression is that Microsoft's software engineering practices are no better than average, though trending better. Testing and fixing bugs were the big stories, but there were no state-of-the-art software engineering practices, no research results cited, and no experimentation that could move the industry as a whole forward. If this is representative of all Microsoft product groups, it is disappointing that the company is not pushing the boundary of software engineering research and practice. If Microsoft cannot advance its own software engineering practices, what hope does the software industry have? I expect leadership from the leading software company, and it is clearly not there.

One positive note out of the session was the statement from Russ that "Superman doesn't scale." This, of course, refers to the stories of marathon coding and debugging stretches and rapid burnout among Microsoft coders. Acknowledging you have a problem is the first step to fixing it. But fixing a big problem isn't the same thing as being a leader.

Posted by Peter Varhol on 02/05/2006 | 0 comments


Architecting for Scalability

I had heard Pat Helland speak several times over the last couple of years, when he was at Microsoft, and enjoyed his Metropolis talk. That presentation compared an enterprise architecture to the evolving infrastructure of a city, putting the architect in the role of city planner and government official.

Well, Pat moved on to Amazon last year, but his talk this week on Architecting for Scalability was as entertaining and thought-provoking as any I had heard from his Microsoft days. One thought in particular that sticks in my mind was his assertion that it might be time to rethink some of the accepted wisdom concerning data management.

For example, he notes that data normalization is not necessarily a good thing. Normalization, which manifests itself (in one way) as the requirement that data should only be changed in one location in the database, makes sense only if you're planning on changing that data, or, as Pat put it, executing UPDATE WHERE . . .

In many cases, that is a normal database activity. But he points out that this is changing. First, offline storage (especially nonvolatile storage) is cheap, enabling people to more easily save whole copies of databases for archive. Second, more businesses have to save database information as it is, rather than continually update it. Humorously, he notes that not only are businesses required not to update certain databases, but in the Sarbanes-Oxley era, doing so can be a felony (not so humorous, I suppose, if you are the one declared a felon).
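A small sketch may make the contrast clearer. The schema and values below are invented for illustration: the first style changes a row in place with UPDATE ... WHERE, while the append-only style never rewrites history and simply adds a new dated row.

```python
import sqlite3

# Throwaway in-memory database with an invented schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customer_address (
                    customer_id INTEGER,
                    address     TEXT,
                    valid_from  TEXT)""")

# Update-in-place style: one current row per customer, changed with UPDATE ... WHERE.
conn.execute("INSERT INTO customer_address VALUES (42, '1 Main St', '2005-01-01')")
conn.execute("UPDATE customer_address SET address = ? WHERE customer_id = ?",
             ("9 Elm St", 42))

# Append-only style: history is never rewritten; each change is a new dated row,
# which suits archival and audit requirements because no UPDATE is ever issued.
conn.execute("INSERT INTO customer_address VALUES (42, '9 Elm St', '2006-02-01')")

for row in conn.execute("SELECT * FROM customer_address ORDER BY valid_from"):
    print(row)
```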

But the point is that data management requirements are changing dramatically, and both architects and DBAs have to understand the implications for their jobs. In the case of architects, this affects how you go about architecting an application. Scalability has a different meaning when you have to treat data differently. In Pat's case, he notes that it becomes more important not to apply data across different business entities. It doesn't necessarily matter what database that data comes from; rather, it depends greatly on how it is used.

I don't normally get excited about data management, but I am in this case for two reasons. First, I work for a company (Progress) that produces a non-normalized database (although it can be validated for most of the behavior of the third normal form), so it was interesting to hear conventional wisdom being questioned in that regard. Second, it is always interesting to watch business requirements change and observe how both technology and skills adapt. We're still in the early stages here, so keep your eyes open to see how database technologies and best practices shift over the coming years to adapt to new business and legal standards.

Posted by Peter Varhol on 02/01/2006 | 0 comments


Building Modern Software

I'm at the VSLive conference, speaking at the co-located Software Architect Summit (SAS). I'll provide a couple of posts discussing specific sessions over the next few days. My first is the Tuesday morning keynote.

David Chappell (Chappell and Associates) gave his usual rousing presentation of how the future of software is in services, and the role of Microsoft solutions in that future. Microsoft is in the process of taking several of its object and services technologies and abstracting them into a single approach to developing services-based applications. This is the technology that has been the focus of the last couple of major Microsoft conferences (including the last San Francisco VSLive) -- Indigo, or the Windows Communication Foundation.

It's a difficult pitch to make to coders, many of whom don't think far beyond the immediate impact of their code or their specific project. It's doubly difficult when you fit in a product pitch for Microsoft's next generation of tools and infrastructure (although David is such an accomplished presenter that it flowed smoothly).

At the beginning of the talk, David described the connections (none but the same name) between himself, the comedian Dave Chappelle (not present), and David Chappell (Sonic Software, and a colleague of mine), who was in the audience. The amazing thing was that David (Sonic) did not get up in the middle of David's presentation and shout, "It's an ESB!" The combination of BizTalk Server 2006 and the Windows Workflow Foundation contains pretty much what the non-Microsoft world thinks of as the Enterprise Service Bus. It would be a great debate if we could get the two of them in a room talking about that (although it would be very hard to moderate – "Your turn, David, er, Mr. Chappell, oh, forget it!").

David also compared the role of the software developer to that of a liquor salesman, describing his own days as a musician in his youth. "Whether or not I got hired back depended entirely on how many drinks they sold that night." Ultimately, software developers have day jobs because someone is willing to pay for the results of their collective endeavors (don't tell Richard Stallman that, though).

But it is clear that developers, whether working with Microsoft, Java, or another platform and set of technologies, must make their code more open and accessible to both automated and human interaction.

Posted by Peter Varhol on 01/31/2006 | 0 comments


Regional Advantage Revisited

I had the dubious pleasure of spending several days last week in Florida at a conference while my New England home was getting hit with a foot or more of snow. A pleasure for all of the obvious reasons; dubious because I had to shovel out my car at the airport upon my return, then shovel out my driveway when I got home.

But the exercise made me once again think of Regional Advantage, AnnaLee Saxenian's uneven tome of a dozen years ago that compared the cultures of Silicon Valley and the Boston technology corridor. Her thesis was that the culture of Silicon Valley was more open and freewheeling than that of Boston, and that this resulted in a more dynamic and flexible high technology economy. It made a kind of intuitive sense, although from the standpoint of science it was impossible to prove. And it wasn't entirely clear that the Boston high tech economy was any less dynamic than that of Silicon Valley.

Most relevant to my current circumstances, what was it that brought technology to Silicon Valley but entertainment to Orlando? Was it culture? Unlikely; unlike Boston or New York, both regions boast little more than a single generation of economic success. Perhaps something happened in that last generation or two to make these areas more appealing and, well, advantageous.

I thought about that as I endured the onslaught of theme parks and tourists in Orlando. Before the advent of inexpensive air conditioning, the area around me was brush land and swamp. Before inexpensive air travel, it was a vacation luxury that few could afford. It was clear that climate disadvantages could be overcome, and cultures built from scratch over a short period of time.

So it seems to me that circumstances well beyond a purported culture determine the relative economic success of one area over another. Weather and other natural characteristics, for example, may be more important. Silicon Valley has a wonderful climate and great scenery. Orlando has a reasonable winter climate and lots of sunshine. Both have a high potential for natural disasters (earthquakes and hurricanes), but that doesn't seem to be a significant factor in innovation and economic success.

It might be possible to draw some broad generalizations on regional advantages. It seems to help to have natural transportation (waterways) or natural beauty (mountains). Good weather is a bonus, or at least warm weather, as long as it can be mitigated with air conditioning. Domination by a single company or industry looks like a negative.

Today, Regional Advantage seems like a quaint anachronism, missing the point entirely about the advantage of various regions based on a shared culture or climate. There is probably much less of a cultural difference between Boston and Silicon Valley than there is between Orlando and Hyderabad, for example. Yet all of them share a common future.

Serendipitously, perhaps, Saxenian is coming out with a new tome next spring, entitled The New Argonauts: Regional Advantage in a Global Economy. I will withhold final judgment until I have read it, but it feels like the same old thesis being recast to meet changing world dynamics.

After all, if it were that easy, everyone would do it.

Posted by Peter Varhol on 12/18/2005 | 0 comments


Predicting the Future

Why do some technologies and products take off and become ubiquitous, while others die a quiet and ignoble death? In my two decades in computing, I've seen many instances of both, and, at least on the surface, it seems almost impossible to tell them apart.

I can offer some examples. In the early 1990s, the industry analyst community was confidently predicting the rise of IBM's OS/2 operating system. All of the adoption curves showed that it was destined for dominance, and for each year that those predictions didn't happen, they simply pushed those same curves farther into the future. Until, of course, they simply stopped drawing the curves altogether.

The same was true with the OSI (Open Systems Interconnection) network protocols. When I was working on my doctorate in the early 1990s, several of my professors confidently predicted the day when OSI would supplant TCP/IP and associated protocols as the networking standard. By then I had become cynical enough about pronouncements of future technology adoption that I didn't believe any of it (and paid for that cynicism with lower grades).

We can explain both in retrospect. OS/2, while no doubt the technically superior product at the time, was undercut by Microsoft in favor of its nascent Windows franchise, while poor, clueless IBM had no idea of either the value of its product or the depths of Microsoft's deceit. And OSI, designed by committee, was technically correct but complex and expensive; the birth of the bohemian Internet locked in the less complex and less expensive TCP/IP, which took over long-haul data transport.

But was there any way of predicting which technologies would win or lose? Is there a pattern? Perhaps. The first thing we have to do is distinguish between the idea and its implementation. Good ideas usually get broad acceptance, and most people can agree on what constitutes a good idea, even if they disagree on its implementation.

Second, we should seek out those ideas whose implementation is being driven by formal standards drafted by standards bodies with wide representation. Specifically, we should seek out these implementations and disregard them. This conclusion harkens back to my experiences with OSI, which had all of the support from a broad range of standards committees and their members, but failed in the market. While it sounds egalitarian to participate in and support standards bodies, the conflicting goals of their members and the glacial pace at which they make progress almost guarantee their strategic irrelevance.

Cost is also a barrier to acceptance of new technologies. Even the most elegant implementation won't be broadly accepted if it costs too much. A good example is the early PC development environments, promoted by Microsoft and IBM, which typically cost over two thousand dollars. PC development leaped ahead only with Borland's low-cost Turbo development tools, which almost took over the market before Microsoft lowered its own prices and improved its tool set.

With those thoughts in mind, it seems to me that an idea that everyone agrees is innovative is worth observing. Those implementations that reach the market quickly, with pricing and distribution to reach a large number of users, are off to a good start.

One more characteristic of a winning implementation is the willingness of the vendor to rapidly assimilate feedback and make changes in response. Far too many companies fall in love with their own technology, and are unwilling to adapt it to customer needs. Anytime you hear a spokesperson say that the market had to catch up to their solution, run away as fast as you can.

I can't say that these observations are foolproof in identifying a winning technology, but they represent my impressions from observing technology over a long period of time. There are certainly other factors involved, but I'll offer these as making a real difference.

Posted by Peter Varhol on 11/13/2005 | 0 comments


Delivering on Big Thoughts

There are far too few people who are capable of both thinking big thoughts and doing the grunt work necessary to help make those thoughts happen. While I'm sure he would disagree, my friend Jon Udell (http://www.infoworld.com/article/05/10/19/43OPstrategic_1.html) is one of those people.

The rain in New Hampshire over the last couple of weeks has caused some significant hardship – not on the scale of New Orleans and its environs, but large for a small state unaccustomed to natural disaster. Jon lives in a region hard hit by flooding and mudslides (not far from me as the crow flies, but I live on top of a hill in more gentle terrain, so I didn't experience the problems widely reported in the news). At a time when the outcome was uncertain for his own home, Jon went out on his bicycle to take photos of the area, then wrote a rudimentary application on top of Google Maps to relate those photos to geographic location.

Jon noted that a local television station (WMUR out of Manchester, at http://www.thewmurchannel.com/index.html) enabled viewers to send in photos of damage, which it posted on its web site. He decried the inability to search those photos based on postal address or other geographic key, and noted that we were very close to enabling web applications that could do just that.
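As a rough illustration of what searching photos by geographic key could look like (the photo names and coordinates below are invented), a geotagged index plus a simple distance calculation is enough to answer the question "what photos were taken near this point?"

```python
import math

# Hypothetical geotagged photo index: filename -> (latitude, longitude).
photos = {
    "flooded_bridge.jpg":  (42.934, -72.278),
    "washed_out_road.jpg": (42.951, -72.301),
    "mudslide.jpg":        (42.910, -72.265),
}

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, a + b)
    h = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def photos_near(point, radius_km):
    """Return the photos taken within radius_km of the given (lat, lon) point."""
    return [name for name, loc in photos.items()
            if distance_km(point, loc) <= radius_km]

print(photos_near((42.93, -72.28), 3.0))
```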

The activities that Jon calls citizen journalism go far beyond letting people click on Google Maps to find out if their own house or property has suffered damage. One of the big failings of emergency response professionals in the New Orleans disaster was the lack of information that reached decision-makers on the severity of the crisis. The participation of citizens in gathering data that could be available to officials with the click of a mouse button could turn a slow and uncoordinated response into an effective and efficient operation.

In what we call an information society, the one thing we seem to lack is adequate information on which to make intelligent decisions. This is no different than in past eras, except that today we expect to know enough, and are unpleasantly surprised when we are proven wrong. Only when we can employ data from all of the available resources, including people who are also fighting for their lives and their property, will we have the information we need to deliver the expected results. Jon's attempt to build a useful application out of the limited data he collected was not only a brilliant implementation of a big thought, but heroic given the constraints and very real dangers he faced personally.

Posted by Peter Varhol on 10/22/2005 | 0 comments


Eclipse Conference Brings Together People, Processes

I had the opportunity to represent my employer, Progress Software, at the Eclipse Members Conference in Chicago last week (Fawcette Technical Publications is also an associate member). I was able to attend only the first day of the two-day meeting, which included a quick start meeting for new members, followed by an afternoon-long marketing symposium.

Few of us will get to attend a set of meetings like this; Eclipse members are generally commercial software companies, although there is also a category of member known as a strategic consumer, which is a user organization that pays the membership fees and commits to a certain amount of development effort to support the Eclipse Foundation. And, of course, individual developers can also become active in Eclipse, as individual members or as committers.

But most users of Eclipse don't fall into that category, and may lack the larger perspective of the platform and its goals and direction. The number of major projects being undertaken by Eclipse members is pretty impressive, and includes the various Eclipse Platform projects, the Web Tools project, the Test and Performance Platform, and the Data Tools project.

My sense is that the Eclipse Foundation is remarkably focused on moving Eclipse forward. There seems to be little in the way of ego within the ongoing discussions between the Foundation and its members. Certainly the members have their own agendas for working on Eclipse projects, but the Eclipse approach seems to preclude the worst abuses of vendor lock-in and upgrade hell.

What does this mean for Java developers? First, Eclipse continues to drive down the cost of development tools. The freely available Eclipse tools provide the majority of what developers need to get their jobs done. Free or low-cost plug-ins can provide much of the rest.

Second, Eclipse is keeping the tools vendors honest. There is no way that any one vendor can enforce a feature set or pricing structure that forces developers to pay through the nose. While you have to wonder if any vendor can make this model work for them over the long haul, it is great for developers looking for the best tools at the lowest cost today.

Posted by Peter Varhol on 09/25/2005 | 0 comments

