I'm currently at EclipseCon (www.eclipsecon.org), the conference surrounding the adoption and use of the Eclipse framework and its collateral projects. I call it a framework because not even the most jaded observer can call it simply an IDE any more, even though most of those who have downloaded and used Eclipse have done so in order to develop software applications, mostly Java-related ones.
But there is no doubt that Eclipse is targeting much more than just another IDE. This can be seen by examining the growing list of projects that are being established by the Eclipse membership. These include the Web Tools project, the Business Intelligence and Reporting Tools project, and the Test and Performance project. With over a thousand attendees, and dozens of technical and business sessions, this conference has to be about more than yet another IDE.
The immediate goal for Eclipse is beyond that of an IDE. With the establishment of these and other projects, Eclipse has moved past the traditional IDE and has made great strides toward becoming a true application lifecycle platform. It hosts tools for the software developer, tester, integrator, and application administrator. There were also a number of embedded software vendors (QNX Software Systems and Wind River were prominently represented) hosting C/C++ compilers and tools for building deeply embedded systems. So expect that other languages will be hosted in Eclipse, and for purposes well beyond building enterprise applications.
But perhaps the most interesting revelation, at least on my part, was the Rich Client Platform (RCP), a version of the Eclipse framework without any of the development plug-ins. Some bright person had the idea that this might make a dandy host for individual applications. A representative from Scapa Technologies pointed out that it had many of the characteristics of a client operating system, but I think that stretched credulity a little far. Even so, the thought of using the RCP as a foundation for application development is very appealing.
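For those who haven't looked at it, a minimal RCP application comes down to an entry point class that starts the workbench and hands it a perspective. The sketch below is hypothetical: the class and perspective ID are made up, and it uses the IApplication interface from later Eclipse releases (the releases current at the time used IPlatformRunnable for the same job).

    import org.eclipse.equinox.app.IApplication;
    import org.eclipse.equinox.app.IApplicationContext;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.ui.PlatformUI;
    import org.eclipse.ui.application.WorkbenchAdvisor;

    // Hypothetical RCP entry point: the platform supplies windowing, menus,
    // help, and update; the application supplies only its own views and logic.
    public class MailClientApplication implements IApplication {
        public Object start(IApplicationContext context) throws Exception {
            Display display = PlatformUI.createDisplay();
            try {
                int code = PlatformUI.createAndRunWorkbench(display, new WorkbenchAdvisor() {
                    public String getInitialWindowPerspectiveId() {
                        return "com.example.mail.perspective"; // made-up perspective ID
                    }
                });
                return (code == PlatformUI.RETURN_RESTART)
                        ? IApplication.EXIT_RESTART : IApplication.EXIT_OK;
            } finally {
                display.dispose();
            }
        }

        public void stop() {
            // Nothing to clean up in this sketch.
        }
    }

Everything else, the window management, menus, preferences, and update mechanism, comes from the platform rather than from the application, which is exactly what makes the RCP appealing as a foundation.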
There are a number of conclusions that can be drawn from the events of the past couple of days. First, it's pretty clear that anyone looking for vitality in a Java IDE should look toward Eclipse. With the multiple projects, contributions by a number of commercial vendors, and a wealth of interest and energy, most future advances are likely to find their way to Eclipse first.
Second, any application lifecycle vendor who doesn't recognize this vitality and hasn't yet figured out a business model for making money from Eclipse is going to get left in the dust. It won't be much longer until customers demand Eclipse, because they are already familiar with it, or because they can customize it for their own unique uses, or because they realize that they don't have to pay for the framework.
There are other interesting things being said here, and other conclusions to be drawn, and I'll touch upon some of them in future entries.
Posted by Peter Varhol on 03/02/2005
No, Valentine's Day is past for this year, so I am not referring to the feeling of affection toward another human being. Rather, I'm referring to the specter that haunts all thinking software developers – the ghost of product liability. In the Feb. 24th Wall Street Journal (www.wsj.com; I would provide a direct link, but the site requires a paid subscription), a feature entitled "Companies Seek to Hold Software Makers Liable for Flaws" describes efforts by some customers of enterprise software to make software vendors reimburse them for lost time, revenue, or labor due to bugs.
This should come as no surprise to any software veteran. The standard software license ("makes no representation of the utility of the software for any purpose") patently flies in the face of just about any product liability law or precedent. While courts are still holding that the software license represents a contract to which both parties must adhere, it is only a matter of time (and likely not very much time) before product liability statutes prevail.
The straw that broke this camel's back seems to be security. Applications and operating systems that make it possible for attackers and thieves to break in, disrupt operations, and steal data are costing companies both money and reputation. Legal and regulatory pressures on enterprises are compounding this problem, because software customers are being held liable for breaches by their own customers. The loss of personal data leading to identity theft of a growing number of people makes us all potential victims of security holes in the software we write.
Of course, Microsoft is coming in for its own share of criticism, due to the numerous security holes in Windows and Internet Explorer. One company cited in the Wall Street Journal feature wanted to charge Microsoft for the labor needed to install the litany of monthly patches across its thousands of PCs.
You might think that this is a fool's errand by software users. The problem is more than buggy software, although that is problem enough. There is simply no way of determining all of the potential conflicts between software that might be running on an enterprise network, or even on the same server. Even if a software vendor could compatibility test the universe of commercial applications and operating systems, there is no possibility of testing either with applications that do not yet exist, or that have been developed internal to the enterprise.
Yet critics claim that it is not only possible, but already being accomplished today by embedded system developers, whose code is typically an integral part of a cell phone, router, or other electronic device. The argument is that these developers know their code can't be patched, and simply do a more meticulous job writing and debugging their code. There is some truth to this assertion. While we occasionally hear of a bug or security vulnerability in a cell phone or other device (Paris Hilton comes to mind), these devices seem to work much more reliably than our enterprise applications.
But that is false analogy. It is certainly true that many embedded operating systems can put Windows to shame in reliability, and applications do tend to work as advertised, but both no doubt have their issues. Embedded systems work in a closed environment, with a relatively known platform and set of interactions. As devices become more Internet-friendly, it is likely they will encounter some of the same problems enterprise applications have today.
This recognition might open the door to an equitable compromise. There is no question that commercial software developers can do better testing, including better compatibility testing with other products. The standard software license should include a warranty covering those tested environments and interactions. However, platforms and applications that haven't been tested against can't reasonably be guaranteed. And this type of warranty might encourage enterprises to use applications on the platforms for which they were intended, and not to expect that applications will continue to work correctly on new platforms and for new uses.
The software industry must take charge of defining the extent of its liability, and the concept of no liability simply won't work any more. If we don't define something that is both reasonable and legally defensible, those who don't understand the technology or the business will do so for us.
Posted by Peter Varhol on 02/27/2005
Over the years, I've observed a number of shifts in the technology landscape. DOS gave way to Windows, client/server moved aside for N-tier distributed applications, and other sea changes have made the software industry dynamic and exciting, but each shift has come at a cost. One of those costs has been borne by the professionals skilled in a specific technology who were unable to make the shift to the Next Big Thing.
While that proposition sounds dubious, I call your attention to the thousands of certified Novell Netware engineers of the late 1980s who were skilled at getting IPX/SPX networks to operate, but were lost when TCP/IP became the dominant set of networking protocols. When I was in academia, I had my own lost cause – a student highly skilled in Borland Turbo C++ and DOS applications. While he was no doubt the master of this domain, he was completely unprepared to attack any other language, platform, or even IDE. He was out of computing entirely within five years, not even certain of why he had no staying power.
Others seem to pick up new languages and architectures quickly and easily. I'd like to think that I am one of them. While I dabble in direct software development, most of my professional activities today involve everything but actually putting code into a file and compiling it. My formal education in software development is approaching the two-decade mark, and today's most popular languages didn't exist when that education concluded. Even during my time as a college professor, in the early to mid-1990s, I looked long and hard at C++, the mainstream language of the day, and gave it a pass, convinced that it was simply too difficult for what you got out of it.
So what enabled me to pick up first Visual Basic, then Java, then C# as the need arose? I would argue that my foundation in computer software was laid in my first data structures course, now twenty years in the past. It was in Pascal, a language of simplicity and elegance that is almost nonexistent today. But in that course I learned how to build structures that are pretty much the same today as they were at the dawn of structured languages, perhaps even as far back as Algol.
You might argue that objects are different from the field-based data structures found in Pascal or C. I also had a background in objects, having worked with Smalltalk and Lisp as a computer science graduate student. But there is a fundamental similarity between the two. Inheritance and polymorphism, for example, are implicit in objects, but they can be implemented with fields.
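A small, hypothetical Java sketch makes the parallel concrete: the same dispatch can live in a type-tag field and a switch statement, much as it would in a Pascal variant record or a C struct, or it can live implicitly in a class hierarchy.

    // Field-based: a type tag plus a switch, in the style of a Pascal variant record.
    class ShapeRecord {
        static final int CIRCLE = 0, SQUARE = 1;
        int kind;      // the "inheritance" lives in this field
        double size;   // radius or side length, depending on kind

        double area() {
            switch (kind) {        // the "polymorphism" lives in this switch
                case CIRCLE: return Math.PI * size * size;
                case SQUARE: return size * size;
                default:     throw new IllegalStateException("unknown kind " + kind);
            }
        }
    }

    // Object-based: the same dispatch, now implicit in the hierarchy.
    abstract class Shape {
        abstract double area();
    }

    class Circle extends Shape {
        double radius;
        Circle(double r) { radius = r; }
        double area() { return Math.PI * radius * radius; }
    }

    class Square extends Shape {
        double side;
        Square(double s) { side = s; }
        double area() { return side * side; }
    }

A developer who understands the first form has already done most of the conceptual work needed to understand the second.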
If I am correct, there are some implications that are broadly useful to enterprise IT and to individual software developers. Over the last two or three years, there has been a significant effort in many large enterprise IT groups to retrain veteran Cobol developers in Java. I seem to remember seeing a Gartner study from a couple of years ago that (correct me if I'm wrong) claimed that such retraining took an average of a year, and roughly sixty percent of experienced Cobol developers failed at making the transition.
I wonder if the length of time, and the high failure rate, is because Cobol is neither structured nor object-oriented, and the retraining fails to effectively teach these concepts so that they become inherent in the developer's mindset. If that's the case, one approach might be to spend little time initially on Java syntax, and leverage that limited syntax knowledge into a more comprehensive study of data structures.
As for individuals, even if you're not a language expert, you can be an effective developer if you comprehend how to manipulate data. In an era of smart editors and debuggers, knowing the details of the language may be less important than understanding what to do with it.
Posted by Peter Varhol on 02/21/2005
Monday night at VSLive! (I was speaking at the accompanying Software Architecture Summit), I went out to dinner in San Francisco's Chinatown with a former colleague and longtime friend who for the purposes of this missive will remain nameless. It was an early night because, as my friend explained, he wanted to log onto Disney's Toontown Online in order to engage with his friends.
Now, an explanation is in order. Disney Toontown is a child-oriented adventure game and chat facility where visitors take on a persona from a list provided by Disney, and interact with one another using a limited number of words and statements, also provided by Disney. Personas team up to accomplish goals within the game that can't be accomplished individually. It's all very child-friendly and not at all like many open or topical chat rooms, or multiplayer adventure games, on the Internet in general.
Well, I came to find out that parents sign up their children for Toontown accounts, then, in the course of ensuring the appropriateness of their children's interactions, become hooked themselves. They acquire a persona, and find other parents to team with and chat with. That's the key point here. My friend is not a pedophile, but a perfectly normal family man, and I would imagine that most other adult participants are similarly upstanding citizens.
To some extent, it is the challenge that is interesting. There are apparently a limited number of approved phrases available, so identifying other parents and establishing meaningful communications can be a difficult puzzle. The phrases available are child-oriented and patently innocuous, so the adults make use of double entendres, complex combinations of phrases, and other techniques. There are other ways of communicating, too, through actions, for example. And if you can learn another persona's secret code, you can communicate with them in free text.
But there's more than just the challenge. It's about connecting with other people in a safe environment, something that's all too difficult to find on the Internet today. My friend claims to have spent hundreds of hours finding other adults, interacting with them, and teaming together to play the game.
There are two lessons here for software developers. First, if you build software that is robust and interesting to others, you can find a much larger audience than you would have anticipated. Did Disney really believe that its children's interactive adventure game would be popular with adults? I doubt it, but the company created software that crossed a wide age and experience gap.
Second, even with the march of technology that lessens our dependence on our friends and neighbors, people still have a need to interact socially. And software offers the potential for connections that weren't possible even a few years ago. Even at the simplest level, e-mail and instant messaging let me stay in close touch with distant friends that I would have lost touch with years earlier had I had to depend on my letter-writing skills. And technology has also enabled me to make friends that I could never have had earlier in my life.
In our own software architecture and development practices, remembering these lessons will help us build software that not only meets requirements, but also produces willing and enthusiastic users. It doesn't have to be a game, but it does have to be engaging. And if it can provide a gateway to other users so that experiences can be shared, you might have the makings of a classic.
Posted by Peter Varhol on 02/13/2005
I've always thought Microsoft keynote speeches had to have been assembled using BizTalk, and Eric Rudder's Indigo talk at VSLive! was no different. BizTalk, of course, orchestrates application components into fulfilling a business process, and Microsoft's keynotes are always well-orchestrated affairs, with multiple speakers, supported by special hardware and software setups and comprehensive slideshows.
Eric outlined the benefits of and developer path to Indigo. The benefits were stated as productivity, interoperability, and Web services orientation. The Web services orientation was a given, so the most impressive thing about Indigo was productivity. A part of that was in net lines of code to implement specific Web service features. For example, Eric showed how Microsoft reduced the manual effort of implementing Web services security, reliable messaging, and transactions from more than 56,000 lines of code to just three.
But I was most impressed with how Microsoft product manager Ari Bixhorn came on stage and assembled the services behind his demo. I've been in software development in one capacity or another since the 1980s, and the visual assembly of applications has always been a much sought-after goal. In the last 20 years, the graveyards have been littered with companies that promised the ability to develop applications visually, without coding. While Visual Basic was one of the early successes in at least assembling user interfaces visually, Microsoft has generally not taken a leadership role here.
But Ari took some existing Web services, displayed as blocks on a form, and connected them by drawing lines to assemble an application. He even added a Java Web service running on BEA WebLogic, simply by selecting its WSDL to make it visible to his assembly form. He then drew a line from another Web service to it, to complete the application.
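Indigo's designer and plumbing are Microsoft's own, but the underlying step, pointing a client at a WSDL and letting the tooling do the wiring, translates to the Java world as well. Here is a rough sketch using the later JAX-WS API; the service name, WSDL location, and interface are all hypothetical.

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.Service;

    // Hypothetical service endpoint interface matching a WSDL published by,
    // say, a Java Web service running on BEA WebLogic.
    @WebService
    interface QuoteService {
        double getQuote(String symbol);
    }

    public class QuoteClient {
        public static void main(String[] args) throws Exception {
            // The WSDL location and qualified name are placeholders.
            URL wsdl = new URL("http://example.com/quotes?wsdl");
            QName name = new QName("http://example.com/quotes", "QuoteService");

            Service service = Service.create(wsdl, name);
            QuoteService quotes = service.getPort(QuoteService.class);

            System.out.println("IBM: " + quotes.getQuote("IBM"));
        }
    }

The drawing surface in the demo is doing roughly this on the developer's behalf, which is where the productivity claim comes from.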
The combination of the ability to assemble Web services by diagram and the dramatic reduction in code needed to implement Web services features forms a message that resonates well with me. Both of these enable developers to produce higher-quality applications more quickly.
There is a downside, of course (see my previous blog post entitled "TANSTAAFL"). When the details are abstracted away from the implementation, developers lack an understanding of the underlying mechanisms at work. That understanding is necessary when it comes time to enhance applications over their lifecycle, and to debug problems as they arise. While I cheer the work Microsoft is doing, I fear that we are creating a class of developers who lack even a superficial knowledge of the applications they create.
Posted by Peter Varhol on 02/08/2005
At VSLive! and its affiliated conferences (I am speaking at the Software Architecture Summit), Microsoft will be talking more about its big developer release of the year, Visual Studio 2005 and the Visual Studio Team System. While I normally write about Java topics, or at least remain technology-neutral, I have some insight on these new products that might be helpful to those looking at them for the first time.
I was one of a select group that had detailed information on Visual Studio 2005 and the Visual Studio Team System well before its public announcement at Tech•Ed in 2004. I worked at Compuware's NuMega Lab at the time, and Microsoft briefed us and other development tool partners on its plans under NDA (nondisclosure agreement). For my colleagues, it meant that Microsoft was giving fair warning that it was getting ready to encroach on core features that we had relied on for many years. Our response was to conceive of and build new products, while still hoping that the maturity of the existing ones was sufficient to add value to developers. That theory will be put to the test in the next 12 months.
Visual Studio 2005 remains pretty much Visual Studio. The Team System adds most of the new features to the development environment. These include performance and memory profiling, code coverage, static source-code analysis, and automated load testing. These are supported by a repository and source-code control system (not SourceSafe, as Microsoft is meticulous about pointing out).
But that's just the starting point. The source-code control is rules-based, so you can establish prerequisites for checking in code. You can define rules for meeting certain milestones during the development process, and assess your quality over time by using standard or custom measures.
The Team System is pure Microsoft. It takes many established concepts, but combines them in unique and interesting ways. In doing so, it defines an application development lifecycle that reaches the pinnacle of automation possible with today's technology.
For instance, the application development lifecycle starts with the infrastructure of the production environment: what servers are inside the DMZ vs. outside it, what ports are turned on, what account privileges are permitted, and things like that. This is expressed diagrammatically, with metadata, so that a diagram of the application architecture can be compared against it for compatibility. If there is a mismatch, the architect must change that part of the application, or negotiate with the holders of the production environment to make the necessary changes.
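Team System expresses that compatibility check through diagrams and metadata rather than code you write, but the shape of the check is easy to illustrate. The sketch below is purely illustrative, not Team System's mechanism: it compares the ports an application claims to need against the ports a datacenter description allows.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.TreeSet;

    public class DeploymentCheck {
        public static void main(String[] args) {
            // Ports the production environment permits (from the datacenter description).
            Set<Integer> allowed = new HashSet<Integer>(Arrays.asList(80, 443));
            // Ports the application architecture says it needs.
            Set<Integer> required = new HashSet<Integer>(Arrays.asList(80, 443, 8080));

            Set<Integer> mismatches = new TreeSet<Integer>(required);
            mismatches.removeAll(allowed);

            if (mismatches.isEmpty()) {
                System.out.println("The application fits the target environment.");
            } else {
                // The architect changes the application, or negotiates a change to the environment.
                System.out.println("Ports required but not permitted: " + mismatches);
            }
        }
    }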
One of the primary reasons an application makes it all the way into production and then fails to work, or fails to perform, is that the developers were not sufficiently cognizant of the deployment environment or user patterns. Microsoft's approach with Team System is to enable the development team to work from within existing constraints from the beginning, to greatly reduce the likelihood of failure at the end.
As you might expect, there are several caveats to that bright vision. First, the bulk of these features work only with code developed in Visual Studio 2005, so taking full advantage of them means that all developers must work in that version, and on code developed in or fully ported to it. For most teams, it will take at least a couple of years to get to that point.
Second, as a first release, the Team System will lack the reliability and breadth of features that would make it truly useful. It's only through years of actual use, feedback, and improvement that such a system can become a backbone of the development lifecycle.
There is a larger issue, at least with most of the development with which I am familiar. Many development teams simply don't work this way. It has less to do with tools availability than organizational and cultural issues, which will be much more resistant to change. Automating the development workflow can lead to some incremental efficiencies, but won't reconcile the conflicting goals and priorities of those who build software, those who run systems, and those who use software.
And there is no longer any question that Microsoft is focusing the bulk of its vast resources on custom software development at the enterprise level. This is great news for those who have been struggling to employ Microsoft technologies for enterprise-scalable applications, but less relevant to the legions of ISVs, startups, and individual developers who were key in making Microsoft the development force that it is. Most industry observers believe that enterprise custom software development is where the growth and the money reside for the foreseeable future. Microsoft, like many others, is chasing that engine. I hope that it doesn't leave the rest of its community behind in the process.
Posted by Peter Varhol on 01/31/2005
Whenever everyone buys into a particular idea, I start to view that idea with some suspicion. Everyone simply can't be right about it. That's how it is with SOA today. Virtually all software vendors (including my own employer) are delivering on one or more pieces of an SOA solution, while enterprise IT departments are running around frantically trying to get the latest Web service to work, in the belief that it will make their operations more efficient.
And in truth, it is simply more difficult for me to get excited about the Next Big Thing in computing. I've been a part of this industry since MS-DOS was the top operating system, and DOS extenders were the Next Big Thing for making more system memory available for application use. Every few years, there is a technology or even a design concept that promises to revolutionize some aspect of application development.
So when the SOA is hailed as the solution to all application problems, I remember that I've heard that claim before. I'd like to think that I can view the implications of an SOA dispassionately, rather than rely on the hype that it's yet another way of making our lives easier.
And because I have the history I do, I look to that history for lessons. As we've asked our applications to do more over the years, they have become more complex. That seems like an obvious observation, but we've hidden it by enabling developers to work at higher levels of abstraction. In a DOS application, we had to make direct calls to a graphics library to produce the equivalent of windows, buttons, and menus. Today, we select them from a palette and draw them on the screen. In the mid-1990s, I was doing networking virtually by hand, by calling low-level functions from a sockets library. Now all I have to do is write an HTML page, and the operating system manages the rest.
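The Java networking libraries alone make the point. This sketch, with a placeholder host name, fetches a page twice: once by opening a socket and speaking HTTP by hand, roughly the mid-1990s way, and once by letting the URL class manage the protocol.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.net.URL;

    public class AbstractionLevels {
        public static void main(String[] args) throws Exception {
            // The low-level way: open a socket and speak the protocol yourself.
            Socket socket = new Socket("www.example.com", 80);
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            out.print("GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n");
            out.flush();
            BufferedReader raw = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            System.out.println(raw.readLine());   // the status line, e.g. "HTTP/1.0 200 OK"
            socket.close();

            // The higher-level way: let the platform manage the protocol.
            BufferedReader page = new BufferedReader(
                    new InputStreamReader(new URL("http://www.example.com/").openStream()));
            System.out.println(page.readLine());
            page.close();
        }
    }

The second form is easier to write, and it hides exactly the kind of detail the first form forces you to understand.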
Programming has gotten easier, but our applications have gotten more complex. We are running in place on a treadmill, the figurative Red Queen's race from Through the Looking-Glass.
What does this have to do with SOA? An SOA is possibly the most complex software construct that application programmers can create today (I'm excluding things like operating systems and optimizing compilers, which are truly black magic, from that equation). Despite all the advances we've made in software technology over the last 20 years, and the ability to work at higher levels of abstraction, this is hard stuff.
It's hard because the problems that we're trying to solve are harder. This is yet another attempt to make software more responsive to actual needs, when those needs have become highly complex. Further, an SOA has greater dependencies on components outside the control of individual development teams. Not a programmatic dependency, but rather an assumption that other services will be available to complete the operation of the application.
All of this makes developing and maintaining an SOA both difficult and complex. Are we all resigned to dealing with these complexities? I don't think so. An SOA might make sense in a large or even perhaps medium-sized business (say a couple thousand employees or more), where business logic is diffused throughout the organization, and data takes a number of different forms and is logically located in separate places.
Many of us don't work in such places. And job growth has typically been in smaller businesses, so it is possible that we might never work in a place where an SOA is worth the investment. Web N-tier, client/server, or even standalone applications that access local stores serve the needs of many, and will continue to do so.
Commercial services might also make sense, in cases where the service delivers discrete information that is widely needed or of value. Even small businesses might find a need to subscribe to one or more specific services, but those can feed into classic applications in the office. But this is not an SOA, at least not in the common definition.
But remember that Gartner Group, a prime promoter of the SOA concept, sells its services mostly to large enterprises. And vendors who advertise that their products and services are perfect for an SOA are also selling into that demographic.
So don't be concerned if you think you're the only one on the planet not actively building an SOA. While that's no excuse to ignore good programming practices, such as separating data, logic, and presentation, your application doesn't have to be an interacting collection of services. Choose the architecture that makes sense for your business and application, and don't get on the bandwagon if you don't have to.
Posted by Peter Varhol on 01/20/2005
As I mentioned last week, this is the completion of my own list of events that should happen in 2005, even though they probably won't. If you missed the first five items, you can read them here. If you're not interested, you can at least be relieved that this is the only list I have compiled to date.
6. The Linux controversy is resolved, providing a means for open source software in its many forms to continue moving forward. Given the speed at which the U.S. legal system advances, this one might be a real stretch, but enterprises might finish the year with some guidance on their use of open source in general and Linux in particular. This will be essential for anyone seeking to make further investments in such software as a part of an overall strategic direction.
I am a layman with regard to the legal arguments involved, and many more learned opinions than mine are readily available (http://www.groklaw.net/). But it seems to me as though open source software is bound tightly and perhaps irrevocably to software development efforts in thousands of companies and other organizations, and an adverse ruling has the potential to create havoc. And I don't think that anyone can safely predict how the U.S. legal system will respond.
7. On another legal note, a disaster or critical system failure will bring the issue of application quality to legal, regulatory, and legislative authorities. Those of us who have been intimately involved with the software industry for a number of years can only marvel at the resiliency of our most critical systems. While the systems themselves might fail, they either have backup systems or they tend to fail in a safe mode (as characterized by counterexample in the 1964 movie Fail-Safe).
But it might not appear that way to those responsible for ensuring public safety and integrity. So once a software failure is shown to cause significant harm or expense, it will be the beginning of a trend to mandate higher software quality and greater system reliability. This might ultimately manifest itself in additional certifications of software beyond what we have today (such as Software Considerations in Airborne Systems and Equipment Certification, found at http://www.rtca.org/), or even certification of software engineering skills.
8. Again on a related note, platform alternatives to Microsoft and Java start to emerge onto the market. Drawing on more than two decades of observing the software industry, I note that while most software consumers crave stability and standardization, entrepreneurs see movement in that direction as a signal to innovate and offer alternatives.
We appear to be in one such period of platform consolidation, and might be ripe for an explosion of new technologies. Linux certainly qualifies as an alternative platform, but its widespread acceptance will be dependent on the outcome of its legal case. Even if it prevails in the courts, it might have too much of a geek reputation to be broad-based. But I have no doubt that there are others that are cooking and might start to reach the market in 2005. Innovation abhors stability.
9. Application integration becomes standardized. The collection of technologies that fall under the umbrella of application integration is one of the most important issues that businesses face today. You might wonder why I get so excited about this. Organizations today have discrete software systems managing just about every phase of the business, and there are few efficiencies still to be wrung from individual business processes.
But there are still substantial efficiencies that can be obtained once data from processes is combined, analyzed, and acted upon in the aggregate. That's because those processes don't act in a vacuum; they influence, and are influenced by, other things that happen in the organization. And common ways of making this happen mean that the combination won't be haphazard and work for some applications but not others.
10. In a similar vein, connections in general will become more important than applications in 2005. Two things have presaged this. First, we've wanted our various mobile gadgets to talk to one another, and to our computers. Second, networking, especially wireless networking, has become so easy that many otherwise technically disinclined people are running Wi-Fi, Bluetooth, and other networks to exchange data between devices.
This heralds a new direction for applications. New applications will be based on the availability of data from several individual applications. Application developers who focus on the availability of services from multiple devices and their applications are likely to find receptive markets.
Posted by Peter Varhol on 01/10/2005
I've never done a list before, so that is both my motivation and my excuse. I apologize in advance to those who have had more than enough of lists during this season. If it turns out miserably, I promise I'll never do it again. I'll title this list, "10 Things That Should Happen in Technology in 2005, Even Though They Probably Won't." This list features my ideas for stimulating innovation in technology. While we don't necessarily have to start another technology boom (followed by another technology bust), innovation is our lifeblood; without it, technology would be just like any other business. Innovation brings excitement and growth, both of which drew me to technology in the first place.
And because I'm a wordy SOB, I have to say something about each of them. To keep this to a length that doesn't scare away most readers, I'll include five items on my list today, and five next week.
1. The Java platform embarks on a simplification effort.
Even the most experienced Java developers think J2EE using EJBs is too hard, especially for applications that don't need enterprise scalability. There are several free/open source and commercial frameworks that attempt to address this complexity, but few if any use J2EE in any approved fashion.
Sun and the JCP should take the hint. Sure, Java needed to be enterprise-capable in order to survive and thrive, and early attempts at rich Java clients weren't particularly promising. But managed code execution technology has improved enormously since then, making rich clients an alternative that should be developed further. And even with Web-based multitier applications, there should be something between the JSP/servlet and the EJB; a minimal sketch of that middle ground follows.
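As a hypothetical sketch of what that middle ground can look like: a plain Java class carrying the business logic, called directly from a servlet, with no home interfaces or container contracts beyond the servlet's own. The class names are made up.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Plain business logic: no remote or home interface, no deployment descriptor entries.
    class OrderTotaler {
        double totalWithTax(double subtotal, double taxRate) {
            return subtotal * (1.0 + taxRate);
        }
    }

    // The servlet uses the plain class directly, skipping the EJB machinery
    // when enterprise scalability isn't needed.
    public class OrderServlet extends HttpServlet {
        private final OrderTotaler totaler = new OrderTotaler();

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            double subtotal = Double.parseDouble(request.getParameter("subtotal"));
            response.setContentType("text/plain");
            PrintWriter out = response.getWriter();
            out.println("Total: " + totaler.totalWithTax(subtotal, 0.05));
        }
    }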
2. Microsoft announces plans to break up into three separate and competing companies.
Microsoft claims to be an innovator, but everyone knows that the products it releases are simply its own versions of products that have already been proven in the market by smaller competitors. There is a reason for this. While the company has among the best R&D efforts of any technology enterprise, it simply can't bring anything to market if it won't result in revenues of at least a hundred million dollars.
However, a Microsoft split into three equally competitive parts wouldn't need a hundred-million-dollar business in order to bring exciting new technologies out of its labs. And because each of those parts would be instilled with the Microsoft competitive spirit, they would compete fiercely with one another, innovate unabashedly, and bring many more of those innovations to market.
3. Gadgets once again become simple, elegant, and useful.
I have a midrange cell phone less than a year old, and it makes calls pretty well. And it's also an address book, game machine, timepiece, and even Web browser. Somewhere in there is also a GPS receiver. It probably has features I've not yet discovered. It doesn't have a camera, or Bluetooth. It certainly doesn't double as a PDA. I don't think it does SMS, but I can't say for sure.
Adding established features to existing devices doesn't qualify as innovation in my book. Cell phone manufacturers and wireless service providers are catering to an ever-smaller percentage of people who use most of those features. Granted, the cost of the features is trivial, but those features remain inaccessible and unused.
Instead, let's have some real innovation in mobile devices. I don't know what that is, but it must be something more than heaping still more features onto ever-smaller form factors.
4. Bandwidth becomes universal, and universally inexpensive.
Whether it is delivered by cable, wi-fi, digital cellular, or other means, bandwidth is the next key to innovation in technology. It is comparable to the interstate highway system built across America in the 1950s; better, actually, because while the highways connected much of the country, they left other places isolated, and ubiquitous bandwidth need not leave anyone out.
As for the universally inexpensive part, just keep it away from the telcos. If you haven't already, I encourage you to read David Isenberg's essay, "The Rise of the Stupid Network." The gist of David's thesis is that bandwidth is inherently inexpensive, and that the value is in the intelligent endpoints (this is contrary to his experience with the telcos, which attempt to add intelligence to the network).
5. Commercial Web services businesses start to take off.
This builds on the previous item. We haven't seen too much of commercially available Web services yet, except for some specialty services such as realtime stock tickers supplying cell phone users. But there is a lot of potential here to deliver some really good data and services: flight schedules and bookings, directions to wherever we want to go, even entire storefronts with goods and services ready to be bought.
The business models, including nominal fees for subscription services, don't look that bad, and we have both the client technology and the bandwidth to deliver some really kickin' stuff. And I'm not talking about music or video; the only reason we're so excited about those is that the Internet has managed to break the stranglehold of content distributors.
Posted by Peter Varhol on 01/03/2005
I mentioned a few weeks ago that I was getting ready to change jobs, and I wanted to say a little more about that now. I now toil by day for Progress Software, a developer of a fine database management system and business development language, along with lifecycle tools supporting application development and management using that database.
When I was considering the job, I queried several acquaintances about Progress. The answer almost invariably came back, "Oh, yes, the database that doesn't require administration, and doesn't go down." That must have been a difficult reputation to achieve, and I'm looking forward to finding out how we did it and continuing the tradition.
Even more so, I'm very much looking forward to helping navigate a relatively minor but still growing platform through a minefield of Java and Microsoft applications and standards. Progress has the distinction among public technology companies of exhibiting double-digit revenue growth for each of the last eight fiscal quarters. That fact played a significant role in my joining the company. Continued career growth is difficult to come by these days, but the chances of increasing responsibility and new challenges are immeasurably better at a growing company.
The primary buyer of Progress database and application development technology is the "application partner," typically a small software company that targets a vertical industry, such as manufacturing, retail, or health care. These vendors, who might not always be on technology's cutting edge but do understand their customers' needs very well, build applications that target these industries.
But those end-user customers are increasingly questioning the need to include yet another application platform into their IT mix. What, after all, does Progress do that can't be done in Java or .NET? That is where some of the distinctive competencies of my new employer fit in. Many of the customers who use Progress applications don't have a DBA or even an IT department, and a database that doesn't require administration is much more important than using one of the more established platforms.
I'm not writing any of this to promote my new employer. Rather, it's instructive to me to see how a third platform can possibly fit into an enterprise dominated by one or both of the others. We use both Java and .NET in our own development efforts, and with a maturing service-oriented architecture strategy manage to play well with components built using either. One of my tasks is to define and help drive a strategy for enabling developers to use Progress technology in conjunction with a variety of different approaches to building user interfaces, a topic I will speak on at the upcoming Software Architecture Summit in February.
And it is a bonus for me to be able to join fellow industry thought leaders Dave Chappell of Sonic Software and Chris Keene of Persistence. Sonic is wholly owned by Progress, while Persistence was recently acquired by ObjectStore, the descendant of two of the object-oriented database pioneers and now a division of Progress. While my professional focus is changing from developer quality and performance tools to application development lifecycle across a single platform, I anticipate continued technical challenges and further career growth over the coming years.
Posted by Peter Varhol on 12/26/2004
I've been reading a bit recently about how software companies continue to "stick it" to customers in the course of doing business. Even if you don't experience such things personally, you can read examples daily on advocate sites such as Ed Foster's Gripe Line at http://www.gripe2ed.com/scoop/ (and most recently, http://www.gripe2ed.com/scoop/story/2004/12/6/8182/06280).
Well, I'm ostensibly a journalist, as well as a consumer and buyer of software. But I'm also a software corporate guy, a product manager for software bought by businesses. If anyone can bridge the gap between what goes on inside a corporate decision-making process and how it comes out to the buyer, it should be someone like me.
To protect the innocent and perhaps not-so-innocent, let's consider a hypothetical situation. Changes in licensing policy are one of the common areas of aggravation for many software buyers. Let us say, for the sake of discussion, that something along the lines that Ed Foster reports has occurred. You have made a software purchase, and its role is important for your development process; for example, defect tracking or source code control. Your purchase price was $50,000. This software functions quite well for two months, until Microsoft releases the next service pack of the Windows operating system, and then stops working.
Your vendor has released an upgrade to that software that addresses the problem. However, because you chose not to buy the 25 percent maintenance or subscription or whatever they call it (roughly $12,500 a year on that $50,000 purchase), you have to pay an additional $35,000 to "upgrade" to the new version. The end result is that you paid $85,000 in order to use the software for longer than two months.
On the surface, this looks like evidence that there is a concerted effort to extract more money from you while providing little or no additional value. But let's take a look at what likely happened inside the software vendor that led to this outcome.
It's entirely possible that no one knew that the Windows service pack would break the software. Even if it was known, that knowledge was confined to the engineering staff, which is far more interested in finding and fixing the problem than in how the fix will be sold. The engineers fixed the problem in the current code base, which also includes new features, and determined that it was too expensive to retrofit the fix into the older version. But now you've paid for something that you didn't plan on.
So why isn't the inequity fixed once it is uncovered? That has more to do with corporate inertia than malicious intent. Let's say that you start complaining. Who do you complain to? The goal of the sales team is to maximize revenue, making it unlikely that your account manager will give you much more than sympathy. And in his defense, you've received a new version, with new features, for your money. If you are a sufficiently large and active customer, perhaps you can get the attention of an executive, which might gain you nothing, or perhaps a discount on a future release.
I'm not defending any of these practices. But it is possible, even likely, that no one is attempting to stick it to the software buyer. Rather, the real culprit is one or more broken business processes within the software vendor.
And what this really means is that the way we buy and use software is also broken. Part of the problem is our own expectations as software buyers. We want to own the software, and have control over its use. But there's a price to pay for ownership. To pay for the non-recurring engineering costs, most commercial vendors have to release new versions at least annually. New versions add features; after a few years, more and more of those features add value to fewer and fewer users. And it still costs a lot of money to keep up with these versions, even as they add less and less value.
The answer might be software rental, if we can get around our ownership hang-up—and if software vendors can reduce their dependence on the big revenue pop that comes with releasing the next great version. But software purchase (or "license to use," as some vendors call it) is simply a recipe for dissatisfaction for the buyer, and a source of financial risk for the vendor.
Posted by Peter Varhol on 12/18/2004
This is the holiday season for most major religions, as well as for popular culture. During this time, even the most devoted techies among us should enjoy the company of family and friends, and be cognizant of our individual good fortune.
I have a friend and former colleague; let's call him Bob, because that is his name. Over the last year, Bob has suffered debilitating illnesses that have left him disabled and with a poor quality of life. After months of pain and blood clots, his primary ailment was diagnosed this fall as Reflex Sympathetic Dystrophy Syndrome. According to the Web site www.rsdhope.org, RSDS is a progressive disease of the autonomic nervous system that can follow a trauma. Its symptoms include chronic burning pain, inflammation, and spasms in blood vessels and muscles of the extremities. More recently, Bob has been diagnosed with a growth in his brain that is giving him almost constant migraine headaches. Bob is in his forties, with a wife and three children dependent upon him for support.
It is not my intent to be morose and depressing this holiday season. Here's what Bob has to say about his situation:
"I am at peace with dealing with managing my pain. It is just the way it is going to be for a long time, and we should set realistic expectations. So we have, and things are OK, really."
I'd like to ask each of you, over the next several weeks, to perform a kind and considerate act for someone in your life. There are plenty of people who both need and deserve it. Thank you.
Posted by Peter Varhol on 12/14/2004