The recent controversy over the Black Hat Conference presentation of security vulnerabilities in Cisco's Internetwork Operating System, used in its routers, once again raises the question of whether the security of web sites, data, and networks is best served by wide distribution of known security issues, so that everyone is on an equal footing, or by secrecy until those vulnerabilities can be addressed by the responsible parties.
Good arguments can be made either way. By fostering and supporting open communication about vulnerabilities, we ensure that the word gets out quickly and equitably to those who need to fix the problem or become more vigilant. Details of the problem reach potential victims as well as hackers, rather than the hackers alone. Finally, publicity about a vulnerability may encourage the vendor to address the problem more quickly.
But not all customers are created equal. Some, despite the best efforts, will never get the word, or will decide for whatever reasons to risk being less than secure for the sake of cost or skill or opportunity. They can find themselves at the mercy of a broad community of both hackers and general publicity-seekers, simply because the information was made public.
And a vendor's point of view is also reasonable and valid. Were I a decision-maker at Cisco, I would honestly believe that I should have an opportunity to make good before my customers fell before the onslaught of unscrupulous hackers. And there are no doubt legal minds who advise maintaining silence, lest the company be held liable before there is a chance for any action to be taken.
As a proponent of democratic principles, my instinct leans toward openness. Not only does little good come from keeping secrets, but secrets invariably get leaked. And leaks mean that those who are paying attention are likely to have an advantage over those who are not.
And while secrecy provides the opportunity for the vendor to address the problem, that is assuming that those responsible parties are, in fact, responsible. It is easy to believe that those companies who release software that is subsequently found to be vulnerable are really protecting their image, and of course their sales, rather than dropping everything to make the problem right. I personally have found myself in situations where I wanted to do the right thing for customers, only to find that others had drastically different interpretations of what the right thing was, and who the right thing was for.
Nevertheless, there does not seem to be any middle ground in this debate. Legally, myriad laws appear to cover this issue, including liability and tort law, privacy law, trade secret and patent law, and freedom of speech. Freedom of speech may eventually triumph in a relatively free society, but the cost in money and time could be too high if a vendor such as Cisco chooses to invoke one of the many laws protecting its own interests.
In this great debate between transparency and secrecy in protecting our technology assets, we must still opt for the former. Society's ideal is openness rather than secrecy. Hardware and software providers may take the legally safe route, but that is a short-sighted attitude that does not well serve the business, the user community, or the society at large. It is possible to cite good and even compelling reasons for protecting such information, but they simply do not stand up against the principles by which we live.
Posted by Peter Varhol on 08/03/2005 at 1:15 PM
Anyone in the software industry had to sit up and take notice when Jonathan Schwartz of Sun Microsystems recently announced that the company was going to give away all of its software (http://news.yahoo.com/s/infoworld/20050721/tc_infoworld/62811_1, among others). Sun actually produces quite a bit of software, including Java Enterprise System middleware, Java Studio Creator, the StarOffice productivity suite, and various storage and network management products (Solaris had already been open-sourced and was inexpensive, at least for small-scale uses).
"It's beyond me why he would do that," a former colleague commented to me. I'm not privy to high-level decision-making inside Sun, but I can make some educated guesses. First, with the possible exception of Solaris, it's likely that Sun never made money off its software. In and of itself, that's not a reason to give it away, but it does suggest that a new business model is in order.
Second, I know from talking to friends and colleagues who have worked for Sun in the past that the company culture is fundamentally that of a hardware company. Despite the resounding success of Java, decisions tend to be made on what is best for the hardware groups. Sun no doubt hopes that its software offer will help drive hardware sales.
Schwartz's rationale for providing the software for free is to create a community of developers building services that would be purchased by those using the software. The software itself would become a commodity, in that equivalent software would be available from many sources.
Does anyone remember Richard Stallman's GNU Manifesto? In that seminal justification for the GNU project, written over 20 years ago, Stallman argued that software should not be developed for remuneration, and that software developers could instead be paid through a tax on hardware. It sounds as if Sun is resurrecting the GNU approach to software development.
But I don't think that charging more for the hardware to pay for the software is the answer for Sun. Solaris was a special case, an acknowledged first-rate operating system that nonetheless faced pressure from Linux systems. Open-sourcing Solaris was perhaps the only way of keeping it viable.
What does Sun need to be successful with this strategy? First, I'm sorry to say, SPARC has to go. I'm a computer purist, in that I want to see several alternative processor architectures available. No single architecture is best for all computing problems, and users should be able to make the appropriate tradeoffs. But Sun did such a feeble job creating a market for SPARC that today it is only a drain on the company resources.
Second, it has to build a services organization that goes beyond installing and maintaining systems to understanding customers' business and devising and implementing solutions based on that software. In other words, it needs a true technology consulting arm, similar to IBM Global Services.
Third, it needs to drop, not raise, the price of its hardware. In some cases, it may have to give away the hardware to get the services business. Charging a premium for proprietary hardware is a bad idea whose time has passed.
I don't doubt that Sun is on to something. I do doubt the wisdom of its response, and its ability to carry off that response. I still remember how many times it flip-flopped on Solaris for Intel. Sharing software or hardware goes against the grain of the company, and I don't think Schwartz has changed that.
But there is a message here for any software company that still counts on license sales and upgrades for the bulk of its revenue. Companies such as Sun are turning software into a commodity, just as the Wintel alliance turned hardware into a commodity over a decade ago. As vendors experiment with new software models, it will change the way software is acquired and used for all.
Posted by Peter Varhol on 07/24/2005 at 1:15 PM
There are about three weeks in the summer when the heat and humidity in New England become stifling, making it impossible to breathe or to concentrate on doing any useful work. This happens to be one of those weeks. Lacking central air conditioning, by default I opt to do little or nothing.
Except that a couple of days ago I was traveling in the upper Midwest, visiting a customer to provide an insight into future product plans for our application platform. This customer, a large and privately-held retailer, prided itself on its cost-consciousness (some might even say they were frugal), and its lack of desire to be on the cutting edge of technology. In the Gartner taxonomy, it clearly rated as a Type C organization – a late adopter, one that moved after the technology had already been proven and adopted by most others. If you view technology adoption as the standard Gaussian curve, a Type C organization would be on the right side of the curve, well past its apex.
Despite my own love of technology for the purpose of building innovative applications that change the competitive landscape, the strategy behind the Type C approach to technology can also be sound. If costs are already under control, by virtue of the corporate culture, and competitive pressures aren't great, then advanced technology might simply be redundant. It doesn't create efficiencies, but reinforces them, and at a high cost.
And, to be honest if unfair, this particular part of the country isn't exactly known for technical sophistication or a plethora of available skills in the field. So imagine my surprise when the IT managers listened intently to my pitch on SOA. It became even more apparent later in the day, as they expressed frustrations with the ESB they were using to integrate real-time data between applications. "When a customer places an order at a kiosk," one explained, "they can see our order number to our supplier, and have a tracking number that can be followed on the Web."
Did I inadvertently fall asleep and wake up fifty years in the future? No, but this incident taught me a lesson on the speed of adoption of the technologies that we toss about in the media. After years of pitching the next big thing, it's easy to become cynical about whether or not anyone actually uses it.
Today even a classic Type C organization sings the praises of ESB and real-time application integration, and struggles mightily to use the technology, not to gain a competitive advantage, but because it has come to be a business imperative. While the technologies of SOA and the standards they are based on may not be fully mature, the results that they can deliver are needed now. Even late adopters can see that. It was then that I realized that SOA was real, at least as an application integration pattern.
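The integration the managers described, with an order placed at a kiosk picking up a supplier order number and a Web-visible tracking number along the way, can be pictured as a simple enrichment pipeline. The sketch below is purely illustrative: the stage names, identifiers, and the function-composition "bus" are my own invention, not any particular ESB product's API, where each stage would instead be an independent service exchanging messages.

```python
# Hypothetical sketch of the kiosk-to-supplier flow: an order event passes
# through a series of stages, and each stage enriches it with new data.
# All names and identifiers here are illustrative.

def place_order(item: str) -> dict:
    """Kiosk stage: create the order event."""
    return {"item": item, "order_id": "ORD-1001"}

def forward_to_supplier(order: dict) -> dict:
    """Supplier stage: attach the supplier's own order number."""
    order["supplier_order_id"] = "SUP-7734"
    return order

def assign_tracking(order: dict) -> dict:
    """Shipping stage: attach a tracking number visible on the Web."""
    order["tracking_number"] = "1Z999AA10123456784"
    return order

# The "bus" here is just function composition; a real ESB routes messages
# between independently deployed services instead.
pipeline = [place_order, forward_to_supplier, assign_tracking]

def run(item: str) -> dict:
    event = item
    for stage in pipeline:
        event = stage(event)
    return event

order = run("garden hose")
```

The point of the pattern is that each stage knows nothing about the others, which is what makes the integration tractable even for a late adopter.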
As for the heat, at least we in New England have the four seasons I learned about as a child, rather than the one-and-a-half experienced by my FTP colleagues.
Posted by Peter Varhol on 06/26/2005 at 1:15 PM
While much of my formal education lies in the social sciences, my immersion in technology over the last 20 years has made me look at problems as technical rather than social. Usually this is the correct view, but it troubles me that I often solve the immediate problem without realizing the implications. I worked through the Internet bubble, for example, without profiting off the social transformation that became obvious in retrospect.
Perhaps it is only when I have a personal stake that I see the larger implications for interpersonal interactions and society. That is certainly the case with recent announcements that the Federal Aviation Administration is considering lifting the ban on in-flight cell phone use. Looking at it as a technology problem, it seemed far-fetched that there could be interference with critical avionics, especially interference serious enough to cause aircraft accidents. And there have certainly been circumstances where I would have liked to call ahead with schedule changes or thoughts that couldn't wait.
In response, it seems as if wireless companies have come up with a reasonable technical objection to this plan. The objection is based not on any possible danger to avionics, but on the danger to the wireless network. By flying at high speed over the towers, a phone's signal might leap from tower to tower, consuming cell capacity far in excess of the phone calls actually being made.
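A rough back-of-envelope calculation suggests why speed matters here. The numbers below are assumptions of mine (a typical jet cruise speed and a plausible tower spacing), not measured figures, but they show that a phone aboard a jet would force a tower-to-tower handoff every few seconds, continuously, for every active call on board.

```python
# Back-of-envelope estimate with assumed, not measured, numbers:
# how often would a phone aboard a jet be handed between cell towers?

plane_speed_mph = 500        # assumed typical cruise speed
tower_spacing_miles = 2.0    # assumed average tower spacing along the route

handoffs_per_hour = plane_speed_mph / tower_spacing_miles
seconds_between_handoffs = 3600 / handoffs_per_hour

print(f"{handoffs_per_hour:.0f} handoffs per hour, "
      f"one every {seconds_between_handoffs:.1f} seconds")
```

Under these assumptions that is a handoff roughly every 14 seconds, per phone, which is the capacity drain the carriers are worried about.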
But there is more to this issue than can be discussed from the standpoint of technology. The answer to the question "Is it possible?" is probably yes. That doesn't begin to answer the question "Is it desirable?"
Take a trip I made last week. Upon boarding the flight and settling in, the adjacent seat was filled by a pleasant grandmother type whom I assisted in stowing her carry-on bag. Once seated, she immediately pulled out a cell phone and proceeded to call one of her business subordinates and fire him on the spot. Just prior to closing the aircraft boarding door, she then made the rounds of her superiors to attempt to justify the firing. Fast-forward three hours, and upon touchdown she started once again.
Fortunately, because I had been upgraded to first class for the flight, I was able to obtain the large amounts of alcohol needed to deaden the embarrassment of listening in on the tasteless act of firing a subordinate both long distance and in a public place.
But had in-flight cell phone use been permitted, I would likely have been subjected to this treatment for the entire flight. No amount of alcohol could have addressed that situation. Justifiable action would have been my defense against any resulting charges of air rage.
As is often the case with life's experiences, there is a larger implication to this relatively brief period of discomfort. Many technology advances change social norms and behaviors. How would flyers respond to such an advance as I describe here (which for the most part is a regulatory advance rather than a technological one)? I use my flying time for rest and work, and I fear that flyers will spend the majority of their time on cell phones, disturbing the relative peace with which I am now able to conduct my own business. Over time, perhaps, social disapproval will limit cell phone use to essential conversations, but I have my doubts.
Posted by Peter Varhol on 06/16/2005 at 1:15 PM
Prior to leaving on vacation, I penned the previous entry, in which I noted that the business of building software development tools beyond the fundamental editor, compiler, and debugger was in trouble. Part of the problem is that open source and free software tools tend to be very good, lowering the value of their commercially developed equivalents. But the same is true of compilers (GNU C) and debuggers (GDB), yet commercial compilers and debuggers still seem to remain viable.
The real problem, I think, is that the tools to which I refer lack the necessity of compilers, editors, and debuggers. Tools such as performance profilers, thread analyzers, and refactoring utilities simply aren't required to write software. They are needed only occasionally, and when they are needed, they are very valuable. That occasional use amounts to perhaps five percent of total development time, depending on the tool and the application being written. The other 95 percent of the time, they are worth little or nothing.
To be honest, there may be a benefit in establishing the use of such software development tools as part of a process: profile daily, refactor regularly, and so on. That benefit has never been well-quantified, however, and most developers focus on producing code rather than defining and following a strict process.
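What "profile daily" might look like in practice is simply scripting the profiler into a routine check. Here is a minimal sketch using Python's standard-library profiler; the `workload` function is a hypothetical stand-in for a representative slice of a real application, and the idea of archiving or diffing the report is my suggestion, not an established practice from the text.

```python
# A minimal sketch of "profile daily": run the standard-library profiler
# over a representative workload and report the most expensive functions.
import cProfile
import io
import pstats

def workload():
    # Hypothetical stand-in for a representative slice of the application.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report the top functions by cumulative time; as part of a daily process,
# this report could be archived or compared against yesterday's run.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

Even a crude routine like this turns an occasionally used tool into part of a process, which is exactly the quantification problem: the benefit is real but hard to put a number on.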
This presents a challenge to commercial tools vendors, including my former employer, Compuware, as well as companies such as Parasoft, Quest, and even Borland. How does a commercial vendor justify making software tools that lack a consistent value? And how does a development group justify spending money on such tools?
In the comments to my previous post, someone suggested a hosted application model for software development tools. In such a model, the profiler or utility runs on the vendor's server, but is accessible to the development groups when they need it. Developers pay for the use, rather than the purchase. Granted, the cost of use will probably be fairly high, but if developers solve an immediate problem, they may believe that the value is well worth the cost.
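The economics of that hosted model can be illustrated with a quick calculation. All of the prices below are assumptions chosen for illustration, not real vendor figures; the only number taken from this discussion is the roughly five percent utilization estimate.

```python
# Illustrative comparison (all prices assumed) of a perpetual license
# versus hosted pay-per-use, given a ~5 percent utilization estimate.

license_price = 2000.0        # assumed one-time cost of the tool
hosted_rate_per_day = 50.0    # assumed daily rate on the vendor's server
working_days_per_year = 250
utilization = 0.05            # tool actually needed ~5% of the time

days_needed = working_days_per_year * utilization
hosted_cost_per_year = days_needed * hosted_rate_per_day

print(f"License: ${license_price:.0f} up front; "
      f"hosted: ${hosted_cost_per_year:.0f} per year "
      f"for {days_needed:.1f} days of use")
```

Under these assumed numbers the hosted model costs a fraction of the license price per year, even at a premium daily rate, which is why occasional-use pricing is attractive to developers and threatening to vendors.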
Would you use a software development tool this way? In my own experience, I think I would strongly prefer to have everything I might need readily accessible, and on my own system. Is having it readily accessible, but on a remote server operated by the tools vendor, good enough?
Posted by Peter Varhol on 06/04/2005 at 1:15 PM
Or perhaps it is just me that has changed over time. That part is certainly true. In addition to hair color trending gradually toward gray, I have just been fitted with my first pair of progressive eyeglasses.
But I've always been interested in the business of software, for at least three reasons:
- We, as an industry, build products that we know have defects and limitations, and sell them under a license disclaiming that they have value for any purpose.
- As we continue maturing the products through subsequent versions, they become increasingly fragile rather than increasingly stable.
- We have little understanding of the market in which the software is sold, and sell without understanding the customer's business or the problems we need to solve.
Let me focus on the last of the three. In the latter half of the 1990s, this didn't seem to matter so much. Users of the software development productivity tools I helped purvey were encouraged by the promise of improvements in the quality or quantity of their custom-built software.
But there was a problem. In many cases, it didn't happen. Software became what is commonly called "shelfware," remaining in the box on the shelf, rather than installed and working on developers' systems. There are any number of reasons why this was so. In some cases, development groups lacked the discipline or methodology to incorporate tools beyond an IDE into their daily activities. It may have been incumbent upon us as the developers to devise solutions that made our customers successful, through either better usability or better training. Either way, much of the software sold to developers remained unused or little used.
But today shelfware has become a thing of the past. If they can't immediately put software to productive use, customers simply won't buy it. This means that vendors have to do more than deliver a high-quality product. They have to understand what developers need the product for, and how they will use it.
This is especially difficult for the software tools business, because of the strange equation surrounding tools. Tools to aid in developing and testing software have in effect no value 95 percent of the time. This is because they are not used. This statement is true of even such basic tools as debuggers. Granted, a debugger tends to get more use than other tools, but it is only used when it is needed.
There was once a time, not that long ago, when developers were willing to pay, sometimes a significant amount, for access to these tools 100 percent of the time, in order to actually use them for the five percent of the time when they were really needed. Occasionally we had the odd developer who would request an evaluation copy of the tools three or four or five times a year. What those developers were doing was getting free use of the products only when they were needed. We tolerated these developers because we didn't have any way to refuse the evaluation or make them buy.
Well, money is scarcer today, and paying for occasional use doesn't make a lot of sense. It turns out that those who requested evaluation software only when they needed the capability had the right idea after all. The value was nothing most of the time; during the brief periods when the value was high, we were willing to give the software away. For many developers, those brief periods are all that is needed.
So is it possible to make money from building and selling software quality and productivity tools to developers? That's a good question. If you have any thoughts on the issue, please let me know.
But I won't follow up right away. I'll be leaving on vacation in a couple of days, so I'll be back on these pages in a couple of weeks. See you in Hawaii.
Posted by Peter Varhol on 05/19/2005 at 1:15 PM
Starting a technology company is a difficult and risky proposition. As we learned in the latter half of the 1990s, a professional-looking business plan, good contacts among potential investors, and a gift for selling were the primary prerequisites for getting initial funding. However, many of those business plans defined their exit strategy as getting acquired by another, usually more established company that wanted the technology for a specific business purpose.
In reality, few of these ventures delivered technology compelling enough to merit an acquisition by a company that could actually afford to pay for it. The vast majority were shut down when the money ran out, and many never even developed working software. In some cases, that was a good thing, because they were simply copying another concept that had proven popular.
Today, it seems a very different world. While investment money is starting to flow again for technology concepts deemed worthy, the barrier seems to be set somewhat higher this time around.
Yet there are few among us who have not given at least a moderate amount of thought to striking out on our own. After struggling with some commercial software, or writing our own utility to perform a critical task for which no vendor offered a solution, it seems like a short step to a small but profitable enterprise selling your own solution. Many of us don't necessarily know venture capitalists or angel investors personally, but if we start asking around, we find that many of us have a friend of a friend in that capacity.
So we spend our evenings and weekends refining a 20-page business plan, along with half a dozen spreadsheets detailing financing needs, cash flow estimates, and financial projections. We painstakingly prepare a PowerPoint (forgive me, StarOffice) presentation, and practice it in front of family, friends, the dog, or whoever will listen. We get an appointment with the friend of a friend, and start (or continue) our dreams of working incredibly hard for five years, then selling off our interest and retiring to a villa located in our preferred climate.
To which I say, "Show me the value."

For such a simple concept, it turns out to be incredibly slippery. "Well, I'm going to charge X for my product, because some established vendor charges Y for theirs, and I'm better."
That may all be true, and it's still not valid. The comparison excludes the vendor brand recognition and good will built up by being in the market for a while, doing some marketing and advertising, and selling and servicing a product through multiple versions. The one part that entrepreneurs most frequently underestimate is the value of market presence. This is the part of the software business that lets a company continue making money long after it has stopped producing a good product.
But there are more ways to demonstrate value than there were five years ago. Perhaps you're going to offer your solution under an open source license, and make money by supporting it, or customizing it for specific uses. Similar models proved to be successful for the likes of Red Hat and MySQL.
But that turns out to be even more difficult, because you're tying all of the value of the product to your own expertise with that product. You are counting on two things to happen: that others will find enough value in your free software to use it regularly, and that this value will extend to your own expertise and the offerings based on that expertise. While it is possible, you have created an exceedingly long value chain that at best will take a long time to develop.
The point is not to discourage anyone's initiative or dreams. Rather, the value of a business concept in technology is elusive. Having the idea, and even building the product, is nowhere near enough. Before you embark down the entrepreneurial path, make sure that you have pinpointed just where the value of what you are providing will be perceived by the potential customers. If you can determine that value, you know exactly how to market your product, and what to charge for it.
Posted by Peter Varhol on 05/02/2005 at 1:15 PM
My journey on this topic began when I talked to Richard Heckel of Engineering Trends for my posting of a couple of weeks ago. His concluding remark, that an engineering degree may well be the current manifestation of the classical liberal arts degree, turned my thoughts to just what it means to be a roundly educated person in this day and age. Is it really an engineering degree?
Even as little as a hundred years ago, to be educated meant primarily to be well read in the classics, and to be able to converse on their grand meaning. That meant, of course, that only the leisure class even had a chance of achieving this title.
So I started asking friends and colleagues about what the equivalent might be today. Most people looked at me as if I had grown two additional heads, while others simply ran. I understand their confusion. We equate an educated person with lots of college degrees, or a prodigious memory. I wanted something more.
I recalled the 1960s movie You Only Live Twice, in which James Bond is asked if he liked sake. Upon indicating in the affirmative and being served, he took a sip and nodded appreciatively, "Especially when it's served at the proper temperature, ninety-eight point four degrees, as this is." To which his host Tanaka replied with surprise and pleasure, "You are exceptionally cultivated for a Westerner."
Was there some relationship between being cultivated and being educated? I thought so, but had trouble identifying just what it was. An educated person understood more of the nuances, perhaps, and fit into more diverse situations. Or maybe it was the other way around; an open and diverse mind was more open to being educated, as well.
But does being educated today require grounding in some aspect of technology, arguably the most dynamic and controversial subject matter in contemporary life? I think that's part of it; being on the leading edge of trends like the Web (circa 1995), broadband (circa 2001), and VoIP (circa about now) involves both recognizing that these technologies are going to become part of mainstream life and having the patience to work with them before they are fully standardized.
In that light, my first criterion for a classical education today is to learn about things that intrigue, surprise, or delight you, yet to be willing and able to learn anything. In many cases, the new things are about technology. About three decades ago, a two-volume series was published called The Way Things Work. This and similar texts gave those of my generation a broad exposure to technology and engineering. There are more contemporary works available today.
But there is much more than technology on the subject of being broadly educated. To be educated means to take care of yourself, physically and mentally. Fortune magazine's career columnist Anne Fisher (www.askannie.com) recently interviewed author Steve Vernon, who offered the following advice. "Try to pursue things that make life satisfying now, and take care of your health now, rather than deferring these to a vague time somewhere off in the future."
In the same vein, you understand what makes you happy, sad, frustrated, and satisfied. You recognize that others around you may have different motivations, and you take what you don't know about them into account in your dealings with them. This, I think, is the broader meaning of the line quoted above from the James Bond movie.
Last, it means that you can discourse with both fact and considered opinion on the major topics of the day. This includes a wide variety of topics, of which technology is only one. It doesn't mean that you want to, however. Many confuse opinion and fact, offering the former in the guise of the latter. Knowing the difference between the two, and knowing when you shift from one to the other, is the difference between being thought insightful and being thought a bore.
But the requisite tools are not enough to distinguish a classical education of today from one of a hundred years ago. Today there is also the obligation to make use of that knowledge, whether in the service of profit or of humanity. That is quite a shift from the last century, when knowledge was accumulated for its own sake. The great thing about knowledge is that it doesn't get used up. Rather, the more you use, the more you get.
Posted by Peter Varhol on 04/24/2005 at 1:15 PM
My post of a week ago brought a number of comments, in the feedback and via e-mail, both on outsourcing and on the relationship between outsourcing and the lack of young people interested in technology careers. I mentioned that the Wall Street Journal reported on the curious anecdote that the children of many successful and prominent technologists are shunning education and careers in technology. One of the reasons given was that they were perhaps more cognizant than most young people of the phenomenon of outsourcing. In one case, the parent was a venture capitalist who advocated that start-up companies establish their development teams in India to reduce engineering costs.
The responses I received provide some clarity on both the intended and unintended consequences of outsourcing. We have pursued outsourcing largely to achieve economic benefits; that is, to save money on developer salaries. By and large we have achieved that, although the press has noted exceptions in which, for various reasons, outsourced development hasn't delivered the desired savings.
Perhaps another intended, or at least unavoidable, consequence is a more macro one: reduced levels of employment for technical professionals in the United States, as well as some level of salary depression for the group as a whole. In an economic sense, this is a desired outcome for those engaging in outsourcing, though undesirable for the individual developers.
So at first glance the impacts of outsourcing seem to be primarily economic. I've heard arguments that outsourcing basic functions saves a corporation money that can be used to initiate new and innovative projects employing local developers, but that remains an economic argument (and an unproven one).
But as the Wall Street Journal article implied, there are consequences that are not economic, and that few if any of us thought about prior to initiating outsourcing. One is that young people seem to be less inclined to pursue these careers, making such skills less available domestically. This was a trend that was evident prior to large-scale outsourcing, because technical training tended to follow the growth and dips of the technology industries. After about 2000, technology jobs dried up, and the desire for training for those jobs went down.
We are currently seeing cautious growth in technology industries, but is it having an effect on career choice? I spoke to Richard Heckel, Technical Director of Engineering Trends (www.engtrends.com), a research firm specializing in engineering education trends. "There is," said Dr. Heckel, "a direct correlation between a student's expectations of a monetary reward as an engineer and enrollment in that field. This is a clear trend since 1945. We are currently seeing an increase in the number of engineering graduates in general, and will probably reach the largest number ever next year.
"Except," he continued, "for computer science and computer engineering graduates. Those numbers tanked in 2002, and continue to remain poor. I believe that the reason is outsourcing."
I offer no personal opinion on the subject of outsourcing and its effect on career choices. The issue has not impacted me, and I lack a basis on which to have personal feelings on the subject. But it does seem to me that utilizing talent irrespective of physical location is both inevitable and, individual pain aside, desirable. In time it should even result in greater levels of employment for all.
But it's pretty clear that one unintended consequence of outsourcing is its effect on future career choices in technology. Dr. Heckel concluded our discussion by saying, "You might think of engineering as the new form of liberal arts education." Technology and its applications have become so central in our everyday lives that to understand it thoroughly might be as important as reading and understanding the classics were in an earlier era.
Posted by Peter Varhol on 04/09/2005 at 1:15 PM
We've been reading for years about the inability of mathematics and the natural sciences to attract America's youth into serious study and careers. The latest I saw was a March 31st Wall Street Journal article (www.wsj.com; the exact link requires a paid subscription) describing how even the children of very successful technologists are shunning such careers. As a former educator in these fields, it pains me to hear such news. There is an elegance about much of these subject areas that should be experienced by many more people than actually experience it.
The Wall Street Journal article cited two reasons for this trend. First, young people are perhaps more cognizant than most of us when it comes to outsourcing jobs. Second, many say that studying math and science is just too hard.
The Journal noted that the former reason can be highly ironic: it cites cases where technology-educated parents have engaged in outsourcing activities at their own companies even as they encourage their children to pursue careers in science or engineering. The children, logically enough but perhaps incorrectly, observe the results of their parents' actions and conclude that there is no future in these fields.
However, it's difficult to have similar sympathy for the "too hard" argument. I have graduate degrees in both hard and soft sciences, and my conclusion was that the hard sciences required no more intelligence than the soft. The hard sciences did (and likely still do) require a greater level of persistence and commitment, though. I couldn't crack a book open the night before a mathematics exam and expect to do well. I had to keep at it, rewriting my notes and doing problems almost every night.
But this is not my definition of hard. In my case, it was a combination of a labor of love and a desire to learn and achieve. Others who succeeded no doubt had other reasons. But in any case it does require different attitudes and expectations.
It also pains me to admit that science education itself may be at fault. This failure has less to do with the subject matter than with the way it is taught. In the several universities at which I studied, there were courses whose principal purpose was to weed out the class. At one university, for example, the sophomore circuit theory classes were designed to reduce the size of the electrical engineering class by sixty percent. Those who didn't succeed were not given a second chance.
This was, and is, preposterous. Flunking out over half of a class tests less an aptitude for engineering than a tolerance for competition and stress. Certainly some portion of those left behind would have made at least adequate engineers had they not been so abjectly discouraged. Can we afford to continue this practice?
Then there is the problem of uninspired mathematics and science teaching in many schools. I recall one of my undergraduate statistics students coming up to me at the end of the semester and saying, "I had no idea this could be so easy. I gave up on math when I had a teacher in junior high who went out of his way during each lesson to say how much he hated it."
That is a much more intractable problem, and I offer no solution. I think it requires nothing less than a major redesign of education in general, and I don't see that happening. Yet if producing more technically trained professionals is a serious goal, then nothing less will suffice.
Posted by Peter Varhol on 04/02/2005 at 1:15 PM
I used to be an unquestioning proponent of formal modeling and other techniques for developing software at high levels of abstraction. And it was easy to see how I came about that notion. While the assembly language programming I did as a college student was technically easy, it took an enormous amount of effort to perform the simplest tasks. I concluded that this activity wasn't the most productive use of my time.
As a graduate student, I was introduced to the concepts of formal modeling (in my case Petri nets, and later state charts), and became an immediate convert. The thought of diagramming my application and executing the diagram was appealing, because I didn't have to worry about housekeeping details such as type declaration and matching, memory allocations and deallocations, and pointer arithmetic. The semantics were all that mattered. The productivity gains from working at such a high level of abstraction had to overcome any inefficiencies in execution, especially with the ever-faster performance of processors.
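The "execute the diagram" idea can be made concrete in a few lines. What follows is a minimal, hypothetical Petri net interpreter in Java; the place names and the single transition are invented for illustration, and real modeling tools are far richer than this. The essential semantics are just a marking (tokens in places) and a firing rule.

```java
import java.util.*;

// A minimal sketch of executing a Petri net: places hold tokens, and a
// transition fires only when every input place holds a token, consuming
// one token from each input and producing one in each output.
public class PetriNet {
    final Map<String, Integer> tokens = new HashMap<>();

    void put(String place, int n) { tokens.merge(place, n, Integer::sum); }

    // Returns false (and changes nothing) if the transition is not enabled.
    boolean fire(List<String> inputs, List<String> outputs) {
        for (String p : inputs)
            if (tokens.getOrDefault(p, 0) < 1) return false;  // not enabled
        for (String p : inputs) tokens.merge(p, -1, Integer::sum);
        for (String p : outputs) put(p, 1);
        return true;
    }

    public static void main(String[] args) {
        PetriNet net = new PetriNet();
        net.put("request", 1);
        net.put("serverFree", 1);
        // A hypothetical "handle" transition: needs a request and a free
        // server, produces a response.
        boolean fired = net.fire(List.of("request", "serverFree"),
                                 List.of("response"));
        System.out.println(fired + ", response tokens: " + net.tokens.get("response"));
        // prints "true, response tokens: 1"
    }
}
```

Notice what the sketch says nothing about: memory, allocation, or the machine underneath. The semantics are all that matter, which is exactly what made the approach so appealing.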
Well, time wounds all heels, and I've begun to have second thoughts about that set of beliefs. In the intervening fifteen or so years, some things have supported my original position. Processors, as well as memory and mass storage, have made significant advances in performance, and we have largely accepted not making code as fast as it could be in return for the ability to use frameworks and libraries to speed application development. And execution technology has become so good that managed execution environments have done away with most of the memory housekeeping chores I mention above.
Application architectures have become more complex than they were around 1990, and code written in older languages stumbles through N-tier, object-oriented, services-based applications and application components. It's hard enough to get these applications right without having to worry about making sure the interactions between the code and the underlying machine are right, too.
I still believe that better processor performance and managed languages are important and valuable advances in software development, but I have become more concerned about the impact of abstraction on application performance and quality. Legacy languages (C, Pascal, Ada, take your choice) forced you to understand how they worked in order to get it right. It wasn't always pretty or even necessarily fast, but when you were done, you knew more than just your code.
On the other hand, managed code just works if you get the semantics correct. I called it formal modeling back in nineteen-mumble-mumble, but managed code is very similar in that regard. Think of managed code as a more concrete implementation of an abstract model. That's what I was looking for, right?
Well, not anymore. Formal modeling is still the right way to go, but there is more to application development than correct semantics. A software application is more than a model, or even an implementation of a particular model. It has flaws, some of which arise from its own construction, others of which arise from the environment in which it runs. None of these flaws make it useless for its intended purpose, although users might occasionally experience the downside of software with minor failings. But the application exists within the machine, and will have fewer of those failings if it plays well with that machine.
Take memory management. I can write a managed application that operates correctly without understanding a thing about how it uses memory. Years ago I might have argued that that was a good thing. Today it concerns me, because the more you know about the interaction between your code and the underlying machine (both real and virtual), the better prepared you are to find flaws and write fast and efficient code. You can still influence these characteristics in both Java and .NET, if you understand how they work.
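To make that concrete, here is a small Java sketch (mine, not from any particular source) of two semantically identical routines. A developer who thinks only at the level of semantics cannot tell them apart, yet they interact with the memory system very differently.

```java
// Two "correct" ways to build the same string in managed code. The first
// allocates a fresh String (copying all prior characters) on every
// iteration; the second grows one buffer, sized up front.
public class AllocationDemo {
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";   // creates n temporary String objects, O(n^2) copying
        }
        return s;
    }

    static String concatBuffered(int n) {
        StringBuilder sb = new StringBuilder(n);  // one buffer, one allocation
        for (int i = 0; i < n; i++) {
            sb.append('x');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Identical results; the difference is invisible at the semantic level.
        System.out.println(concatNaive(1000).equals(concatBuffered(1000)));
        // prints "true"
    }
}
```

Run either version under a profiler in a managed runtime and the difference shows up immediately in allocation counts and garbage collection pressure, which is exactly the kind of visibility I am arguing developers still need.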
Formal modeling languages, such as UML, that can generate code work at such a high level of abstraction that they don't even give you the opportunity to make those adaptations. Because you are farther away from the machine, you don't even have the opportunity to see how your design decisions gobble memory or create a massive working set. You have great productivity, but less quality, and that's not a good tradeoff when you let the tools make it for you.
I'm not advocating a return to assembly language or even legacy languages. Productivity is still important. But developers have to make that tradeoff, not have it made for them. Managed languages are a good intermediate step, but only if developers understand the implications of their decisions on the underlying machine. Formal modeling languages also need to give developers visibility into more than just the semantics of the application. Developers need to see how design decisions affect efficiency and performance. Once they can see and react to the interaction of code and machine, I'll be able to say I was right all along.
Posted by Peter Varhol on 03/26/2005 at 1:15 PM
Tim O'Reilly of publisher O'Reilly and Associates (http://tim.oreilly.com) was one of the keynote speakers at EclipseCon last week, and in my mind one of the more compelling speakers I've heard. Tim spoke of patterns for business opportunity in an era of open source. He opened the talk by discussing several failed patterns, such as IBM's use of commodity components for the IBM PC and its outsourcing of its true value, namely the operating system.
His most interesting remarks concerned the changing technology stack and the changing value within this stack. The stack that we have been used to is hardware (Tim broke this down into microprocessor and computer layers), operating system, and application. He notes that Intel largely has a lock on the microprocessor layer, and Dell has been successful at driving down prices on computers to commodity levels.
This leaves operating systems and applications. But Tim points out that while we associate the operating system with the computer in front of us, it really extends beyond that to the Internet. In other words, the Internet is our platform, and thanks to increasing access to fast connections, we are spending a greater proportion of our computing time there.
And it is possible to add value on this emerging platform. Major online companies such as Google, Amazon, and eBay have created entirely new businesses that now have revenues of billions of dollars.
Increasingly, these companies have also become computing platforms in their own right. eBay is a platform for small storefronts and individual sellers, while Google is a platform for search applications. And note that each of these platforms is largely proprietary in nature, even if it might be built on commodity hardware and open source code.
This means that there is still value at the platform level, but the definition of the platform has moved. It also means that the platform has not yet become commoditized, so there is ample opportunity for innovation at this level.
Building applications on top of these platforms is a positive business pattern, but these applications are different from traditional desktop applications. A key facet of these applications is data, which can be considered another layer on top of the application. Tim (as well as Jon Udell in several of his recent Infoworld columns) gave Google Maps as an example of the value of data. Most mapping web sites use the same maps, which are provided by only two or three companies that saw the value and either licensed the rights to maps from the government or created their own.
But he noted that there can also be value in user-provided data. For example, on Amazon, thousands of users rate books and write reviews. Amazon uses this information to be able to list responses to searches based on those ratings. In other words, Amazon uses data provided by users to add value to the search of other users.
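A hypothetical sketch in Java shows how little machinery the core of that pattern needs. The Book type and the ratings below are invented for illustration, and this is certainly not Amazon's actual ranking algorithm; the point is simply that user-contributed data becomes a ranking signal for everyone else's searches.

```java
import java.util.*;

// User-provided ratings feeding back into search ranking: data contributed
// by some users adds value to the searches of other users.
public class RatingRank {
    record Book(String title, List<Integer> userRatings) {
        double averageRating() {
            return userRatings.stream()
                    .mapToInt(Integer::intValue).average().orElse(0.0);
        }
    }

    // Reorder search results so the best-rated books come first.
    static List<Book> rankByRating(List<Book> results) {
        List<Book> ranked = new ArrayList<>(results);
        ranked.sort(Comparator.comparingDouble(Book::averageRating).reversed());
        return ranked;
    }

    public static void main(String[] args) {
        List<Book> hits = List.of(
            new Book("Learning Java", List.of(3, 4, 2)),
            new Book("Effective Java", List.of(5, 5, 4)));
        // Ratings typed in by past users reorder the list for future ones.
        System.out.println(rankByRating(hits).get(0).title());
        // prints "Effective Java"
    }
}
```

The design choice worth noticing is that the application persists user input and then manipulates it, rather than treating it as throwaway; that is the step many web applications skip.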
This is a powerful concept, and one that is easily overlooked as we design applications. Many Web application designs don't persist user input, or don't make use of it to provide additional value. Not every application can do this sort of thing, of course, but those that have the ability to accept and manipulate data, and provide information back to users, have more value to the community as a whole, and to the application developer in particular. Tim reminds us vividly that by harnessing data wherever we can find it, we can find value where none existed.
Posted by Peter Varhol on 03/11/2005 at 1:15 PM