Navigating Between Security and Privacy

Much has been made over the last several years of the interrelated topics of personal security and personal privacy. A lot of the discussion around security has surfaced in response to the terrorist attacks of 2001, while the privacy discussion was born of the growing masses of databases that contain the aggregate of our lives in their tables. "No one's life, liberty, and pursuit of happiness is safe while Congress is in session," goes a line often attributed to Mark Twain, and in the intervening 100 years or so the threat has become not only Congress, but also any e-commerce vendor with a credit card database.

The problem is that security and privacy are at opposite ends of the same continuum. While it is unlikely we can have absolute security from those with nefarious motives, we can get close, but only if we are willing to surrender all information about ourselves. That information will enable those who are responsible for our security to know where we are, what kind of threat we might be under, and how it might be mitigated.

It is that same information that interferes with our privacy. Knowing where we are, what actions we are taking, and who we are taking them with makes it possible to better protect us, but at the cost of that privacy. And the biggest problem is that we cannot individually choose where we want to camp out on this continuum. The choice has to be made as a society, because anything less than full participation results in incomplete information and a security hole.

So as a society we choose to sacrifice some privacy to obtain better, but still imperfect, security. My airline reports my commercial flight movements to the federal government, for example. This typically occurs well after the fact of travel, so it means only that the government possesses a record of my past travel, rather than real-time tracking of my movements, which would be much more invasive. But that information can be used in other ways.

"Who will watch the watchers?" is a reasonable question. In the past, we were able to rely on government inefficiencies to protect our privacy. But government is becoming less inefficient; in fact, many of us are helping that process through our application development, deployment, and management efforts. So my airline travel record over the last year can be matched up with the old arrest warrant for draft dodging, for which I never bothered to apply for a pardon. (Before you drop a dime on me, this is only hypothetical; I was too young forVietnam service, and in fact served in a later era.)

We could rely on inefficiencies and gaps in data to compartmentalize our information and prevent correlations such as this, but those times, if they still exist today, will not for much longer. Our lives are more open to the various government agencies, not necessarily because they collect much more data, but because software makes it possible to share it.

I am comfortable with compromises (see my previous posting, "TANSTAAFL"), but individual preferences don't count here. I have to participate in the same systems that everyone else does. Although I know the statistical probabilities are small, I accept that my next air trip could coincide with a terrorist incident. And I might be willing to give up something to lessen those odds still more. But what would I be willing to trade for that extra protection? I might also be willing to increase those odds to protect my identity and credit history. But how much? That's the part none of us is sure of.

Where do you stand on the continuum between security and privacy, and why?

Posted by Peter Varhol on 12/06/2004


TANSTAAFL

Science fiction writer Robert Heinlein's truism, which applied to life on the moon in his classic novel The Moon is a Harsh Mistress, is also a key piece of advice for those engaged in application development and the processes surrounding the building and management of enterprise applications. The acronym stands for "There Ain't No Such Thing As a Free Lunch," and in my mind has always referred to the tradeoffs that an application developer makes in writing code.

It was a classic that I used extensively while teaching computer science. There is no single best way of doing anything in software development, I explained to my introductory classes. Rather, different approaches offered different benefits and limitations. It was part of their job as fledgling developers to determine what characteristics of an application were important to them, and to make choices that optimized those characteristics while minimizing the limitations.

It is a concept that doesn't sit well with some. Those seeking a cookbook approach to writing code, or those without the ability to judge and balance the relative merits of decisions, tended to leave early in the degree program to attempt other pursuits. On the other hand, those who intuitively accepted that they made tradeoffs with every line of code tended to comprehend the other aspects of application development easily.

Observing this behavior is what led me to believe that the ability to understand tradeoffs and instinctively make the right choice in writing code was the mark of a good developer. You can see the result of the choices they make in the running application—fewer bugs, better performance, and even less code overall.
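To make the tradeoff concrete, here is a small, purely hypothetical Java sketch. Caching a computed price buys speed on repeated lookups, but at the cost of memory and of extra code that can return stale answers or grow without bound. Neither the cached nor the uncached version is the single best one; the right choice depends on which characteristics matter for the application at hand.

    import java.util.HashMap;
    import java.util.Map;

    public class Pricing {

        private final Map cache = new HashMap(); // memory traded for lookup speed

        public double quote(String productId) {
            Double cached = (Double) cache.get(productId);
            if (cached != null) {
                return cached.doubleValue(); // fast path: no recomputation
            }
            double price = computeFromRules(productId); // slow path
            cache.put(productId, new Double(price));    // and the memory cost
            return price;
        }

        private double computeFromRules(String productId) {
            // Stand-in for an expensive calculation or a database round trip.
            return productId.length() * 9.99;
        }
    }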

But the concept of TANSTAAFL also goes beyond coding. The fact of the matter is that no IT solution, whether it is a server operating system, a local area network, or an application, is the single best solution for all, or even for a few. Any solution, no matter how desirable, will have unwanted characteristics that make it less than optimal.

While most of us understand this without even thinking about it, it's important to realize that those we work with, especially those with only peripheral responsibility for application development, might not have the same appreciation for the compromises that must be made to build and deploy an application.

For example, one of the advantages of commercial enterprise applications, such as SAP, is that they can (and in most cases must) be customized to meet the unique needs of an organization. This makes it possible to adapt the software to unique processes and specialized data used in the conduct of business. The downside, of course, is that customization takes time, skill, and money. And one of the byproducts of the flexibility that arises from being customizable is complexity.

But many of our masters believe that an investment of several million dollars and a year or more of custom software development means that all needs have been addressed in the most effective and convenient manner. The fact of the matter is that compromises were most likely made during the entire process, resulting in an application that works for the most part, but has limitations that make it useful but flawed. This is likely not the fault of the designers or developers, but rather the result of tradeoffs they all made to get the best out of the limitations they faced.

All of us advise the decision-makers in our organization. Some of us might even get to make decisions of some importance. Whatever the specific content, the best advice we can deliver is TANSTAAFL. Our applications will never turn out ideal, because there ain't no such thing.

Posted by Peter Varhol on 11/28/2004


Got Your Back

That's what my regrettably soon-to-be former public relations manager Kayla White says to me after covering for yet another one of my miscues, lost documents, and senior moments in communicating with the outside world on products, progress, and plans. What she means, of course, is that she is watching out for just such inadvertent mistakes on my part, and making sure they don't harm the image of my products or create havoc.

It occurs to me that this is the same reason we have built up a long and elaborate application lifecycle to deliver a working application that actually has value to those using it. Because application frameworks do a lot of the heavy lifting today, many applications can be successfully built and deployed by a small team, or even a single individual. We rarely if ever do it that way because there are simply too many things that can go wrong.

And so we divide the roles and responsibilities across the application lifecycle. While it can take many possible forms, the lifecycle ensures that numerous people in many different roles review the work done in designing and building the application. In other words, the process is so unreliable that we have accepted multiple layers of redundancy to get even a small measure of consistent results.

What do I mean by this? Development practices such as code reviews, unit tests, and extensive exception-handling code all exist to watch our backs, or in other words, to correct for potential mistakes or unanticipated conditions in the application code. Exception-handling code, and other efforts to find and account for application defects, all constitute watching the back of the developer or development team in implementing application features.
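As a small, hypothetical illustration (in the JUnit 3.x style of the day), this is the kind of unit test that watches a developer's back by pinning down both the expected result and the behavior under bad input:

    import junit.framework.TestCase;

    // Hypothetical class under test: it rejects bad input rather than silently
    // producing a wrong answer.
    class Discount {
        double apply(double price) {
            if (price < 0) {
                throw new IllegalArgumentException("price must be non-negative");
            }
            return price * 0.9; // standard 10 percent discount
        }
    }

    public class DiscountTest extends TestCase {

        public void testStandardDiscountIsTenPercent() {
            assertEquals(90.0, new Discount().apply(100.0), 0.001);
        }

        public void testNegativePriceIsRejected() {
            try {
                new Discount().apply(-1.0);
                fail("Expected IllegalArgumentException for a negative price");
            } catch (IllegalArgumentException expected) {
                // The guard clause in apply() is doing its job.
            }
        }
    }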

This isn't necessarily a bad thing; getting quality to the point where the application is useful under a wide range of circumstances is important. Thanks to a combination of complex applications and imperfect developers, manual coding will always contain anything from simple mistakes to unanticipated consequences. But the journey there is still far too time-consuming for the pace of the industry. It still takes months of design, development, and testing before we get a useful business application into the hands of users, which is months after someone has identified a real need or a real opportunity for that application.

Instead of building in human redundancy, it would be more efficient if we could rely more on automated redundancy. We do some of this today, through the use of debuggers and analysis tools for memory or performance optimization. Prepackaged exception code, better testing of that code, and simply better quality code in general can reduce some of the human redundancies we have built into the system.

What makes these things happen is automation—the ability to use tools to effectively ensure that we write high-quality code and deliver it quickly without other developers, project managers, testers, beta cycles, and documentation writers to make sure the software does what it's supposed to do, and does so consistently.

We're not there yet, and it might take quite some time. But the time will come when analyzing source code for good software engineering practices, automated error-handling code, and performance and load testing through simulated application workloads will take the place of their manual counterparts. These development and testing capabilities will watch our backs, freeing more of us for the exacting yet satisfying role of creating new applications and application features.

Posted by Peter Varhol on 11/17/2004


Is There Life After 40?

I don't often talk about the means I use to earn my living, except to occasionally note that I toil by day for one of the larger application lifecycle tools vendors. I speak of it now because that is about to change. I am in the midst of a job change, the first in half a dozen years and arguably the first that doesn't involve a significant change of careers altogether. I'll tell more about it when it's completed, but I wanted to explore a somewhat different spin on the job search process, that is, age discrimination.

I'm at that difficult age—somewhere between 40 and death—that gives one pause in light of all that has been written about the inability of older workers to obtain jobs for which they are highly qualified, usually losing out to younger and less qualified individuals who appear more energetic and enthusiastic. And it seems as you get older, the decision to move on is an increasingly difficult one. I'll admit that the pull of the current job became stronger as I spent more time in that role, even though it lacks the prospect of further professional growth that I know I'm capable of.

So I gingerly sought new opportunities. It was with some regret, as in my years with my current employer I had developed many personal relationships that were both fulfilling and functional. I was well paid, and the local office was only a few miles from my home, in the opposite direction from most rush-hour traffic. Jobs were certainly less available than the last time I went looking, but I thought I had an appreciation of both my strengths and the available market, and was ready to consider alternatives.

I was painfully cognizant that what I had gained in experience I had traded away in age. Would I have to take the inevitable pay cut for this new job? I didn't need the money, but it would be an indication that I had pretty much topped out in my career. Or would it be increasingly difficult to get from interview to offer? I always had trouble getting the interview, but once I did I could usually land the offer, usually by showing uncommon knowledge and enthusiasm. Would that still work, even as my hair added still more gray to its hue?

I don't have an answer to that one, but I'd like to relay an experience of my own, along with my conclusions surrounding that experience. At one point in my process, I interviewed at a large software company for a role for which I was uniquely qualified. I could have written the job description for myself, and I doubt there were more than a handful of people in the area as qualified as I was for this job.

The job was similar to what I did before I was promoted into my current position. I was forewarned that they couldn't match my current salary, but the pull of an opportunity for new and different experiences, yet building on things I enjoyed doing, was too high. The company was large (over half a billion dollars in yearly sales) and headquartered locally, and offered plenty of opportunity for professional growth.

So I interviewed. The hiring manager was about 12 years my junior, and looked younger still. The interview, which took place out on the cafeteria patio on a beautiful day, was cordial enough. However, I never felt as though I connected with the manager, even though I explained that I had performed a similar role several years earlier and understood his company and products well. When asked why I was looking to make a change, I emphasized a truthful desire for continued career advancement. Still, I didn't feel that I had impressed him sufficiently, and the interview and the opportunity ended.

I was supremely qualified, yet significantly older than the hiring manager. Clearly this was an act of age discrimination, right? Well, I don't think that was the case. Instead, the lesson I got from this experience is that older job seekers must send different messages from the ones they sent 15 years ago. Part of that is that a younger hiring manager needs to hear different things from an older applicant. In effect, I told him that I had already done everything that his job entails, and that I wanted it as a stepping stone to a more challenging position. Wrong!

I gave him no assurance that I wanted his job, or would grow well within it. The second mistake I made was to assure him that I fully understood the company's business and products. You might think that it would be great for a prospective hire to step in and immediately understand your business and products. But I had preconceived notions, based on my experience with a product line the company had acquired several years earlier, and I had noted my abortive effort to form a company to compete in a gap in the market left by that acquisition. That was probably an error too, as I told him I was prepared to compete against his company at one time. Rather than interpreting that as outstanding knowledge of the product line, he likely thought that I might well try to compete again in the future, given the opportunity. This is New England, after all, and the practice of shifting loyalties is not viewed favorably here.

I learned a lesson that day, and I didn't have to wait for the rejection to arrive in the mail to know that this wasn't going to happen. The lesson was that while my experience was important, my presentation was more important still. But it wasn't the same presentation I would have given 15 years ago. Specifically, I should have highlighted my experience, downplayed my knowledge of his company and its products, and stressed that I still had a lot to learn about the business of this company.

Knowing what I know now, could I have gotten this job? I think so. Would it have been a good move on my part? Probably not. Whatever his reasons, I think the young hiring manager made the correct decision. The job was a role I had outgrown, and I would likely have taken it with my sights set on rapid promotion, rather than fulfilling current responsibilities. My age figured into the equation, probably indirectly. But was it discrimination? No, it was my own failure to adapt to changing expectations.

Posted by Peter Varhol on 11/08/2004


The Hurricane Called Eclipse

One of the most intriguing presentations at Java Pro Live! was the keynote by Mike Milinkovich, executive director of the Eclipse Foundation. He spoke on what Eclipse is and what it will become in the future. While a few in the audience thought it a blatant product pitch, my sense was that most were interested in just what was this thing that everyone had come to think of as a freely downloadable, if less full-featured, development environment.

But I learned that Eclipse is much more than that. Unfortunately, its story isn't told well by the Eclipse Web site (www.eclipse.org), and the perception of it as just that development environment keeps perpetuating itself. It is true that just about everyone who uses Eclipse does so as an IDE, but that will change in the future.

Why? Today Eclipse bills itself as "an environment for everything and nothing in particular." What it is, in fact, is a universal framework. Think of it as a container for, say, a compiler, an editor, and a debugger. The interesting thing is that the container can hold anything, and is free for any use. And the container enables its contents to do things that they might not have been able to do in the past, such as copy and paste. At the very least, it makes those types of tasks easier.

You might expect the rush to Eclipse to be led by developers of the open source and freeware tools, and there is some logic to that. Many of these tools are very functional, but have only the most basic of user interfaces, often only usable from the command line. It is simply easier for the developers of these tools to add a graphical user interface by making them Eclipse plug-ins.
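As a hypothetical sketch of what that looks like on the Java side (assuming the Eclipse 3.x workbench and SWT APIs), a plug-in can contribute a workbench view that displays a command-line tool's output; the view would also be declared under the org.eclipse.ui.views extension point in the plug-in's plugin.xml. The package and class names here are invented.

    package com.example.mytool.views;

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Composite;
    import org.eclipse.swt.widgets.Text;
    import org.eclipse.ui.part.ViewPart;

    public class ToolOutputView extends ViewPart {

        private Text output;

        public void createPartControl(Composite parent) {
            // A read-only text area where the wrapped tool's output could be
            // shown, giving a command-line tool the workbench's look and feel.
            output = new Text(parent, SWT.MULTI | SWT.READ_ONLY | SWT.V_SCROLL);
            output.setText("Tool output would appear here.");
        }

        public void setFocus() {
            output.setFocus();
        }
    }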

But commercial vendors are starting to take notice, too. In addition to IBM/Rational, tools providers like Borland, Flashline, and Micro Focus have joined the Eclipse Foundation. At the very least, they are adapting existing tools to be available as Eclipse plug-ins. In other cases, they are using Eclipse as the platform on which to build a more comprehensive application lifecycle set of tools.

The growing acceptance of Eclipse represents the next step in the evolution of application lifecycle tools. You can assemble free or commercial tools based on the "best of breed" concept, and use them together as your development environment. If they are Eclipse plug-ins, you have the added advantage of a similar look and feel, and a similar way of using them, even if they were developed by different people for different purposes. The IDE can consist of tools from a variety of vendors, all integrated simply by being Eclipse plug-ins. The integration isn't likely to be total, because the tools might not be able to share data, but they can more easily be used together.

And the use of Eclipse goes far beyond simply development tools. There is no reason why it cannot play the same role for testing tools and application and network monitoring tools. Look for Eclipse to be both a test management platform as well as a monitoring console.

Eclipse will end up changing the entire direction of application lifecycle tools, and it's not possible to guess what the outcome will be. Project Hyades, for example, is an integrated test, trace, and monitoring environment based on Eclipse. Other Eclipse projects support graphical editing, tools generation, and UML. Commercial vendors and free tools developers have to look at what Eclipse is offering and make some honest decisions about just how their tools add value beyond it.

If there is still significant value in the existing tool, it will likely migrate to an Eclipse plug-in. Unfortunately, if the amount of value remaining is "not much," then the tool will probably be superseded by the Eclipse platform and one or more of its collateral projects. Either way, the ground under the tools vendors is shifting.

Posted by Peter Varhol on 10/31/2004


A Modest Proposal (With Apologies to Swift)

One of the key themes at Java Pro Live! was the need for more integrated tools across the application development lifecycle. Roles are becoming blurred, according to George Paolini, general manager of tools at Borland, so the tools used must enable a seamless transition from one part of the development lifecycle to another. One way of doing this is with a platform that lets you plug in new tools based on the concept of "roles" people assume in the lifecycle. Paolini's goal is to make JBuilder just that platform.

I'm all in favor of more integration between tools across the application development lifecycle. The place where I see it needed is between development and testing. Developers have a continual frustration with the inability to understand and reproduce bugs identified by QA, while the QA testers wish that developers paid more attention to requirements, especially those that stipulate performance and scalability needs. In support of this objective, Paolini demonstrated a feature that enables developers using JBuilder to bring up the requirements from CaliberRM and compare them against working code.

I've always thought that the seamless handoff of an application from one group to the next depended on the easy and natural sharing of information. If operations could provide information to developers that was truly useful in diagnosing a sick application in production, that application would have far less downtime than it does today.

But the idea of tools and information integration across the development lifecycle gives me a more radical thought. Accepted wisdom seems to be hung up on the various roles within the application development lifecycle, and how the individuals filling those roles can work together more closely. That's a worthy goal, but I'm struck by the fact that sharing data still can't hide the fact that those individuals are still separated by common goals.

What do I mean? Everyone wants the application to be successful, but each constituency has a different definition of success. Developers think that a mostly bug-free product using newer technologies to build interesting features is a success. Testers want to ensure those features meet requirements and don't break, while the operations people mostly don't want complaints about errors or poor performance from the users. I realize that it's more complex than that, but I think that most would agree that success across the development phase depends on—at least to some extent—your own role in the process.

My radical solution is to have a single project team design, develop, test, and bring an application to production, right up to moving it onto the production servers. And even though its members might move on to another project at that point, they remain responsible for maintenance and enhancements to the application.

How does this help the problem with tools integration? Integration is trying to solve a problem that was created by the separation and specialization of tasks across the development lifecycle. Eliminate the specialization, and you eliminate the need to integrate tools and share information.

There's a lot in the software development culture working against this concept. It is accepted wisdom that developers shouldn't be responsible for testing their own code (except that many do), because they might unconsciously or consciously not rigorously test in areas they know the code might be weak. But if they are responsible for seeing the application through to production and for fixing any problems that arise with real users, testing might actually become more rigorous.

But you might question whether you can effectively use a multidisciplinary project team at all stages of the lifecycle, or if specialists in, say, software development can be motivated to install and monitor their application on production servers. I would argue that they already do many of these tasks, but from the standpoint of setting up and maintaining the development environment, rather than seeing the application through to production.

And I've always believed that a broader range of skills is better than a narrow one, even if the broader skills are less deep. My friend and colleague Jim Farley (www.eblacksheep.com), who has done just about everything, from system and network architecture to application design and development, would say that he can look up anything that he doesn't know in a matter of minutes, as long as he understands why he needs to know it and how it fits into the solution.

I wonder if that is the type of approach we should be teaching and rewarding, rather than encouraging a depth of specific technical skills. And I think that a single project team with multidisciplinary skills might just be a better solution than getting all of the tools integrated together and sharing information. Tell me what you think.

Posted by Peter Varhol on 10/24/2004


Security is a Lifecycle Responsibility

I'm currently at Java Pro Live! in Boston, where about three hundred attendees have been participating in sessions on designing, building, and managing Java applications. While I haven't been able to look in on every session, I've certainly learned a lot about current and future directions bringing together these three aspects of the application lifecycle.

In his keynote, Paul Patrick, chief security officer for BEA, talked about changing expectations around Java with regard to application security. For those of us who think that security is a matter of configuring firewalls and network authentication, this was a sobering reminder that despite billions of dollars spent on infrastructure protection, enterprises are still losing money and data on application intrusions.

Part of the problem is that most of us have an incomplete picture of who is trying to get into our applications. The image of the rogue hacker seeking to intrude primarily for the technical challenge might have been accurate during the early days of the Internet, but in recent years this type of person has been supplemented by two other groups. The first is the internal person, the disgruntled employee, who already has at least some level of access to the network and quite possibly the application. This person might be motivated by thoughts of either riches or revenge, but because most enterprises don't adequately protect against intrusion from the inside, this kind of attack can be relatively easy.

The second type of person is the professional intruder, the person who does it for a living. Patrick pointed out that organized crime has discovered the Internet, and uses highly skilled people to fake financial transactions or obtain information that can be sold. And he noted that both terrorists and spies have become adept at getting information for their own nefarious purposes.

What makes security such a problem is that we have much more to protect today. It is certainly true that the things we lose today—money, system stability, and data—are the same that we lost 10 years ago, but the consequences today are much more significant. Any downtime at all on an e-commerce Web application can cost an enterprise millions of dollars, and the loss of data might not only be expensive, but also cause legal or regulatory difficulties.

Mr. Patrick called attention to the fact that protecting only the infrastructure means that anyone who can get past those protections has relatively free rein to create havoc with any application running on that infrastructure. Applications have many known potential vulnerabilities, and intruders can easily exploit those vulnerabilities in the pursuit of money, information, or chaos (the pun with the 1960s-era spy comedy, "Get Smart," is intentional).

This is bad news for application developers and testers, who already have enough technical demands on them even before they start thinking about security. Yet there is no getting around the fact that learning and applying secure coding practices, and testing known hacks against applications will become a necessary part of the application lifecycle in the very near future.
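One small, hypothetical example of the kind of secure coding practice involved: using a parameterized JDBC query so that untrusted input is treated strictly as data rather than as SQL. The table and column names here are invented.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountLookup {

        // Building the query as "... WHERE owner = '" + owner + "'" would let a
        // crafted input value rewrite the SQL. The parameterized version below
        // treats the value strictly as data.
        public ResultSet findAccounts(Connection conn, String owner) throws SQLException {
            PreparedStatement stmt =
                conn.prepareStatement("SELECT id, balance FROM accounts WHERE owner = ?");
            stmt.setString(1, owner);
            return stmt.executeQuery();
        }
    }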

Posted by Peter Varhol on 10/18/2004


See You at Java Pro Live!

Java Pro Live!, FTP's first all-Java conference, is taking place this Sunday through Tuesday at the Sheraton in Boston adjacent to the Hynes Convention Center. Through my involvement with other FTP conferences, I asked for a leadership role in helping to kick off this effort. The result was that I was asked to co-chair the conference, both a singular honor and a considerable time sink.

So I have a significant interest in helping to make sure Java Pro Live! is a success. Fortunately, a number of dedicated and hard-working folks at FTP have laid a lot of groundwork leading up to this point, so I anticipate that all the details have been taken care of, and that the conference will go smoothly.

And the conference sessions look exciting too. We have keynotes from executives at Eclipse, Borland, BEA, and JBoss, as well as sessions from both industry experts and technology users breaking new ground. Because the tracks are divided up between architecture, development, and management, you can focus your efforts in areas of the application lifecycle most important to your work.

Other than JavaOne, few if any conferences focus exclusively on Java. And this first iteration of Java Pro Live! will be a small conference, so that it will be possible to meet and interact with peers and speakers.

You can get more information at http://www.ftponline.com/conferences/javaprolive/. If you attend, please stop by and say hello.

Posted by Peter Varhol on 10/14/2004


Are Computers a Self-Selecting Skill?

I am seriously dating myself when I note that this past weekend I attended a comedy performance by Tim Conway and Harvey Korman. The former comic duo of The Carol Burnett Show performed with lively impressionist Louise DuArt in routines that were probably slightly dated but energetic and certainly remembered fondly by me and just about everyone else in the audience. If you're into 1970s-era humor, and in particular the unique deadpan style of Tim Conway, you would enjoy the show.

In pursuit of a midlife career change, my wife is undertaking a degree program in a health care field at a nearby state university. Recently she offered to assist a student in the computer lab at the university library learning center, and for her trouble, obtained a job doing just that for student computer users on a more regular basis.

The amusements that come from that labor are in many ways similar to, and even exceed, those chronicled daily in Computerworld's Shark Tank. My wife now keeps a pair of pliers handy to extract floppy disks from Zip drives and CD slots. Stacks of spare keyboards are available for those who spill coffee or soft drinks, and missing system and student files abound on a daily basis. The level of questions and actions bespeaks more than simple carelessness or vandalism; it reveals a complete cluelessness about how to conduct interactions between person and computer.

This puzzles me. As I understand it, virtually all public schools, especially those in the urbanized east, have at least some measure of computer lab. In some schools, anything less than a one-to-one ratio between student and computer is considered inadequate (the last I heard, the nationwide average was more like three-to-one). And unlike gym class, there is no chance of being able to fake your way through it.

Sure, there are students for whom education of any type doesn't take. And if I want to draw analogies to other types of machinery such as cars or DVD players, there are certainly people who are poor drivers, and those who don't make good use of features of their consumer electronics devices.

But these are university students, whose aspirations, if not intelligence, are high. One would expect at least tolerance of computers, if not adequacy. There are enough counterexamples to demonstrate that this expectation is incorrect.

This makes me wonder if the barrier toward universal computer use is a less permeable one than the barrier to other forms of learning. Let's take a look at the larger statistics. At last measure, almost 80 percent of American adults are "computer literate." This is up from about 46 percent 12 years ago when I was first researching productivity improvements brought about by automation.

Let's put that in perspective. According to the World Almanac, the United States claims 100 percent literacy in terms of the ability to read and write. That leaves room for some improvement in computer literacy, and it does appear as though the number is trending toward full literacy.

The question I'm getting to is whether full computer literacy is possible. I would like to think that it is, if only to be egalitarian, but have yet to be convinced of that fact. Actually, it is more than egalitarianism; we need full computer literacy of the adult population to get the computer field and application development growing again.

I wonder if our apparent inability to reach full computer literacy is due to the fact that a computer isn't designed to do a specific task or set of tasks. Other types of electronics are designed to play music or videos, for example, and the other machines in our lives also serve clearly defined purposes. Computers, on the other hand, are largely a blank slate. It is only by the addition of software and the occasional hardware peripheral that they do anything specific.

The blank slate can leave perfectly functional adults helpless to determine how to start. And because they might lack a mental model of computer operation and instead learn tasks by rote, does the lack of a defined beginning mean that they simply can't begin? My days as a learning theorist are well behind me, but perhaps some of you might have some thoughts on this question. Can we achieve full computer literacy, or does the very nature of the computer make that impossible?

Posted by Peter Varhol on 10/11/2004


The Changing Face of Developers

I've included a couple of photos from the Gartner Application Development Summit held in Phoenix at the J.W. Marriott on September 27-29. The first is me talking to Gartner analyst Theresa Lanowitz, a research director for the testing market and testing advisory services within software development organizations.

The second is Kayla White, the PR manager I work with at Compuware. Insofar as I get ink in the trade press (yes, including FTP's publications), the credit goes entirely to her. While I've worked on both the trade press and the vendor side of the software industry, Kayla's efforts and results have once again reminded me of the strangely symbiotic relationship between vendors and the press that she manages so well. But that's a topic for another day.

For those of you who aren't familiar with Gartner Group (www.gartner.com) and its ilk (Meta Group, Forrester Research, IDC, and a host of smaller ones), they play the role of referee between vendors and end-user enterprises. Enterprises hire them to advise on technology strategies, while vendors hire them to...well, advise on technology strategies. If it seems like a conflict of interest, most of these research companies handle it reasonably well. Compuware spends a lot of money on Gartner services, but I mentioned to Theresa that I was well aware that she doesn't shill for us. Nor does she take the role of user advocate; rather, she distills the hype of technologies and products into something useful in individual situations.

I sat in on a session about Java and .NET interoperability presented by Mark Driver, who is a research vice president at Gartner and is often quoted in the trade press. Driver proposed that Java developers today are largely highly technical and code-centric. These are the developers who understand the platform to an extremely detailed level, and spend their own time studying and trying out new technologies. He refers to this group as Type A developers.

But he sees that changing over the next several years. The highly technical developers focused on the comprehensive details of Java would gradually be supplemented by an increasing number of developers whose approach to software is that of a job rather than a passion. Highly technical developers will make up only about 25 percent of the community by 2008, he predicted.

This is significant, in that these "Type B" developers will be looking for different things from the platform and from development tools. In particular, they (or we, if you fall into that category) want the ability to apply platform technologies in less complex ways, while also being more productive at delivering applications. Driver mentioned modeling and patterns as approaches that would be popular with Type B developers.

My take is that over the next several years we will see significant changes in how we build applications. And it's an understandable evolution that should be embraced by the veterans among us, rather than feared. I date myself when I say that my first development experiences consisted of command-line compilers, linkers, loaders, and debuggers, often from different vendors with no integration between them. I think we would all agree that the modern IDE is significantly better than that.

Just as the "tool chain" of the past has thankfully given way to the modern IDE, we can expect that the IDE will increasingly include the ability to build applications using Java technologies at levels of abstraction above writing code. If the Java community wants to increase its numbers while enabling those new members to build complex distributed applications, this direction is not only possible but inevitable.

Posted by Peter Varhol on 10/03/2004


Rethinking the Future

On September 23, I hosted an expert panel on the future of Java at Compuware OJX in Detroit. This panel was held on an extravagant raised stage in the 14-story glass atrium in the Compuware building at about 5 p.m. The acoustics were generally poor, and the conference organizers had opened the bar, so while attendance was easily a couple hundred, there were a lot of echoes and stray conversations during the discussion. I was taking questions from the audience, and one question was directed at panelist David Herst, an architect from The Middleware Company (www.middleware-company.com).

"What kind of jobs do you think will be available for Java professionals in the future, and where will they be located?" the woman asked.

David thought for a moment and began his answer, and an interesting phenomenon occurred. The room went almost completely quiet, as most conversation and extraneous noise paused, so that all could hear David's answer.

This is indicative of the high level of interest that technology professionals have in this and related topics. It only added to the sense of irony that this conference took place in Detroit, a city decimated largely by the export of manufacturing jobs to foreign auto companies and suppliers in other parts of the country and the world.

What jobs indeed? While I and perhaps many of you have been only marginally or not at all affected by the loss of tech jobs during the last few years, and the export of some of those jobs to low-cost areas of the world, it is a topic that has captured the rapt attention of all of us. It would be a good world to live in if we could be left to perform the jobs we know and enjoy without worrying about whether those jobs will be there tomorrow, or eliminated completely, or reconstituted in Bangalore. And while we wish our overseas comrades success, we want the software development industry to do more with more, rather than more with less.

Certainly there are companies that handle such moves in ways that are insulting to any person. Often our corporate superiors interpret nondiscrimination laws as the requirement to be equally rude to everyone. Some of us seek legislative solutions to protect our jobs, yet what can we conceivably legislate that provides us with both security and opportunity?

And we often think of it as an adversarial situation, a matter of us versus that faceless corporate entity. In reality, those making offshoring decisions are individuals like us—managers and executives who are themselves subject to downsizing if they can't deliver on financial or productivity promises. I don't feel sorry for those making and implementing offshoring decisions, but I think I do understand their motivations, and they are not evil.

I have no solutions except to note that our enterprises can and should be more open to unlocking the value in the employees they have, rather than seeking value purely in lower IT costs. Wiping out a team and rebuilding it halfway around the world seems like getting rid of a management problem rather than solving it. And those in the corporate hierarchy can be concerned about their own futures and still be compassionate toward those they impact.

At the same time, all of us must honestly acknowledge that our loyalty is to ourselves. That means we should always be prepared to have an exit strategy, even if we never have to use it. Such a strategy includes an ample emergency fund (yes, we can always live a slightly less bountiful life in order to stock that fund), an expectation that our future is in our hands and no one else's, and an avocation or two that can be turned into a second career before that emergency fund is exhausted.

Notice that my strategy didn't specify keeping current with skills and professional networks. Those are things that can't hurt, but might not help. That might be the most important message in navigating through life: There are no certainties. We can help our odds a little bit, but should always be prepared for life to be uncooperative.

Posted by Peter Varhol on 09/29/2004


Why Conferences?

After my last posting on upcoming conferences that I'll be attending, I heard from several people who asked me about the value of participating in such events. That's a sticky subject. I've participated in conferences for the past 15 years, and I take doing so for granted. But it seems many software professionals have gotten out of the habit of participation, or perhaps never got into the habit in the first place. Conference passes to some of the better-known events are too expensive for most individuals, and many employers have cut back on funds available for conference attendance.

Education is one reason to participate in conferences, but it's perhaps the least important one. Listening to experts describe and demonstrate new techniques can be useful, but the environment isn't often ideal for deep comprehension and note-taking. The best I've been able to say about the educational opportunities of conferences is that I've been exposed to things I hadn't done before, rather than learning something that builds on my current expertise.

Meeting both experts and other attendees in both formal and informal situations is a much better reason. This is one of the key ways you build your professional network. How important is that? It gives you contact with people you can turn to when you need help with a specific technical problem, or job references, or career advice. And it provides you with a sense of community, in that you are a part of something larger and more complete than your small corner of the professional universe.

It's been my experience that Microsoft technology events tend to draw more people, and more enthusiasm, than Java events. My thoughts on that are that Microsoft has managed to create more of a community around its users. And perhaps that is where Java is losing out the most. It's easy to conclude that working on Java development, testing, or administration is just another job, but the industry is too diverse and fast-moving to create a traditional kind of career. Technology in general, and Java in particular, needs a greater level of community and mutual support than it has had to continue its momentum.

Is there any way that you can attend a conference of interest without paying, or at least without paying full price? The answer is yes, although it might cost you in other ways. Consider the following possibilities:

  • Volunteer to work for the conference. Setting up and executing a conference is a complex and detailed undertaking, and conference organizers are usually happy for the help. Volunteers can often get full conference passes.

  • Volunteer to speak at the conference. If you have some expertise, or can tell a compelling implementation story, your knowledge could be valuable to others. Conference speakers get full access to all sessions, and might even have some of their expenses reimbursed. Most conferences do a call for papers on one or more Web sites months before the event. Keep an eye out for these, or contact the conference organizers directly.

  • Pursue a discount through either an educational institution or professional organization. In many cases, conferences will offer special rates to attendees who are students or who already demonstrate an interest in the technical direction supported by the conference.

It might not be apparent from the first conference you attend, but if you make it a point to be a part of the Java community, over time you will benefit in both tangible and intangible ways.

Posted by Peter Varhol on 09/23/2004

