It is probably a bit early to be looking back on the current year, but I need most of the month of December to figure out my technology wish list for 2007. So I spent some time looking back at 2006 for what I thought were the significant events of the year, both good and bad. Good events generally have a positive outcome for the industry, and for software developers in particular. Bad events reflect poorly on the industry, and have negative outcomes for some or all developers. Below is the good list; in a few days I'll post the bad one.
- Open source Java. Someday soon I'll write a longer missive on programming languages. But for the time being, note that the acceptance and use of a programming language is not a static thing. Languages become popular based on their ease of use and the relevance of new features, and they decline in popularity as features become old and stale, and no longer relevant to the problems faced by the development community.
Java tends to keep up with development trends, although perhaps not as fast as we would like. But many of its core features are no longer as innovative as they once were, and the language and platform have grown large and cumbersome. The open source community may fragment Java, but it will almost certainly trim it down and keep it relevant.
- Windows Vista. Whatever you may think of Microsoft's new flagship OS, it will generate excitement in the industry, and increase sales of everything from hardware to system management tools. A lot of money, development resources, and marketing dollars will be spent on getting Vista and third party supporting products built and accepted by users. While building Vista-specific applications using .NET 3.0 will require that we all learn new things, change is part and parcel of being a developer.
- The coming of age of wireless connectivity. Growing access to wireless connectivity does more than change the way we work; it changes the way we think. When I want to know the answer to something, I Google (yes, it has become a verb), and can do so from just about anywhere I find myself. Increasingly, when I cannot, I tend to feel lost at not having a wealth of information at my fingertips.
- The emergence of Eclipse. This year the attendance at EclipseCon was 50 percent more than it was in 2005, which itself was more than 50 percent over the previous year. And over thirty new Eclipse projects have been started or proposed over the past year, according to Eclipse Director Mike Milinkovich (yet the Eclipse Foundation itself has managed to remain surprisingly lean).
Today, Eclipse is seen as the counterpoint to Visual Studio, but it is more than that. It has become a platform for development, a characteristic that until now has largely been the province of the operating system. And while it is commonly associated with Java development, it is reaching well beyond that.
- Visual Studio Team System. It is easy to dismiss Team System as both late to the game and offering little new for development teams, but that misses the point. Up until now, all of the features of Team System were available, but as separate tools. They were difficult and clumsy to use together. Team System changes that. We will not all run out and buy Team System for our development teams, but it will change how we view the development lifecycle.
Lifecycle vendors have paid lip service to integration for years, but now they must put up or shut up. Either they get serious about integrating the disparate tools they have built or acquired over the years, or they risk ceding their customers to Microsoft or to others who do.
Posted by Peter Varhol on 11/29/2006 at 1:15 PM
It seems like an easy question, doesn't it? As a member of the technology press, with both the experience (I have been in the industry for the introduction of every version of Windows since 1.0) and a distinct technical bent, it should have been a no-brainer that I would be running Windows Vista as soon as it became available through my MSDN subscription.
But other factors come into play that make it a much more difficult decision. First, I would have to get a new computer, or at least upgrade the memory in one of my older ones. Despite what you may think, the technology press does not get ready access to the latest hardware advances (at least, not like we did in the 1990s). I have three systems with 512 MB of memory (two are my own; one was provided by FTP), the minimum required for Vista. Knowing how slowly these systems run other memory-intensive applications, and knowing how cavalier Microsoft is with minimum system requirements, I am loath to even attempt to run Vista with the minimum memory.
But none of these systems has the graphics horsepower necessary to run Aero at any reasonable resolution (the best has a 32 MB graphics card). That would be a much harder limitation to correct, as two of the systems are laptops. From an individual standpoint, the graphics are one of the primary reasons for moving to Vista in the first place.
But once you have Vista, it is unclear which applications actually work, and how well they do so (see http://www.eweek.com/article2/0,1895,2062318,00.asp for an entertaining summary). In particular, it seems that some debugging scenarios can be broken by Vista's security profiles. Of course, developers can run Visual Studio 2005 SP1 and the .NET Framework 3.0 on XP systems, so they are not significantly restricted in developing Vista applications (though if your goal is to run Vista, using XP doesn't satisfy it).
Over the last 20 years, I have been among the first to run just about every Windows operating system. I think that stops with Vista. Certainly every new OS bumps up the hardware requirements, and I have made the investment in that hardware in the past. What is different now is that there is little in Vista to justify that investment for the individual user. And there seem to be more than the usual share of incompatibilities and issues with existing applications.
No doubt most or even all of these problems will be worked out in due time. And I'll upgrade to Vista, also in due time. But not today.
Posted by Peter Varhol on 11/26/2006 at 1:15 PM
There are at least two sides to every story. Those science fiction fans among you might remember The Gripping Hand, Larry Niven and Jerry Pournelle's tale of aliens with two smallish arms on the right side and one large, um, gripping hand on the left. This anatomy led to a certain way of thinking about tradeoffs involving three sides: rather than the two hands with which we usually weigh the sides of an issue, there were two considerations of note, plus one additional and overriding factor upon which every good decision was based.
Well, the story last week of Microsoft and Novell teaming to add a level of legal assurance to Linux may not be a good deed in the pure sense of the term, but it did succeed for a brief time in providing a measure of comfort to enterprise CIOs who would like to employ Linux but are afraid to do so because of patent or other intellectual property (IP) implications. As we learned in the still-ongoing SCO case, users of disputed IP can also be pulled into a legal no-win situation as easily as the vendor.
Now Eben Moglen, the general counsel of the Free Software Foundation, has announced that the next version of the GPL will specifically forbid such deals (http://money.cnn.com/blogs/legalpad/index.html). The FSF believes that because the agreement gives Novell's SuSE distribution an advantage not available to other distributions, it is untenable. And with ownership of the license verbiage, the FSF has the ability to do just that, even after the fact.
I admire Richard Stallman. I don't necessarily agree with him, but I do admire his single-minded pursuit of a radical ideal over the course of more than twenty years, and the eventual acceptance of the salient parts of that ideal by the industry mainstream (though one of his original precepts in the GNU Manifesto, that software developers should be paid through a tax on hardware, remains a nonstarter for good reason).
However, the vast majority of Linux advocates and users (or GNU/Linux, as Stallman insists) are not license purists. They have jobs to do, and Linux provides a reasonable way of doing their jobs. By and large, the GPL in and of itself has not been a deal-killer in many enterprises. But IP issues are different, and even if enterprises have accepted that risk, surely it keeps some CIOs up at night.
According to Moglen, GPL version 3 will be adjusted so that the effect of the current deal is that Microsoft will be giving away access to the very patents it is trying to assert. It will be interesting to see whether the GPL can legally make that assertion, seeing as Microsoft is not a party to the GPL and does not license any of its own software under it. So for enterprise IT shops looking for legal guidance on open source in general and Linux in particular, the uncertainty continues. This uncertainty may become the gripping hand that prevents further acceptance of Linux by mainstream organizations.
Posted by Peter Varhol on 11/15/2006 at 1:15 PM
I keep coming back to the topic of programmer productivity for a couple of reasons. First, advances in the economy in general result from improvements in productivity, and it would be both surprising and unfortunate if programmers were not improving their productivity along with everyone else. Second, I believe that developers can do better in terms of productivity than they are today.
Joel Spolsky (http://www.joelonsoftware.com/items/2006/11/10b.html) claims once again that it is impossible to measure programmer productivity. I would agree with him, except that some development teams demonstrably get things done faster than others, even in controlled environments. I suspect that Joel means micro rather than macro measurements, even though he cites a fictitious efficiency rating for a company's software development processes. In other words, there is no quantitative way of measuring the speed and reliability with which we produce code.
So my thinking is that productivity improvement is rather like former Supreme Court Justice Potter Stewart's proclamation on pornography – we know it when we see it. We know that some development teams are better than others, and we know that some individual developers are better than others (the best and worst are separated by a factor of ten, according to Ed Yourdon), but we simply have no way of measuring and quantifying that difference.
I am not sure how to measure programmer productivity, or if it can ever be done. But I am certain that it can be improved because we have done so for years. Integrated development environments, reusable libraries, frameworks, and better debuggers have all improved the ability of programmers to produce more. This has largely been hidden by the rapidly increasing complexity of applications, and the new application models that developers are required to learn and use.
So we are seeing some progress in programmer productivity, though that progress is largely being hidden by the changing nature of the end product.
I think we can do better. Developers can make better use of tools, whether integrated into the IDE or available outside of the IDE, to write code faster and with fewer errors. Many of these tools can be had for free, especially if they are part of the Eclipse Foundation. In other cases, they may cost a few hundred to a few thousand dollars, but can pay for themselves with regular use.
Most of you cite time pressures as the reason tools go unused. I understand where that comes from, but it simply means that software development management is being penny wise and pound foolish. There must be managers out there with the courage and foresight to say that their teams will take a productivity hit for the next month while installing and learning new tools, in exchange for the long-term productivity improvement. The advantages of improving processes, adding automation, and being more analytical about productivity and quality will last far beyond the immediate project.
Posted by Peter Varhol on 11/13/2006 at 1:15 PM
Yesterday Microsoft added 3D buildings from 15 cities to Virtual Earth, with the ability to fly through those buildings as though you were playing a flight simulator. While Google Earth has had 3D buildings for quite a while, they have not been as lifelike as those available from Virtual Earth. I downloaded the Microsoft plug-in for using 3D images, and spent some time applying this technology as both an end user and a developer.
Some are saying that this feature, along with the ability to display hybrid maps, propels Virtual Earth ahead of Google Earth in the business of working with aerial views of geographic areas. However, I am not sure that it is a race, or that one or the other is winning, or that it even matters.
I relish the competition between the two, because I think everyone wins. Microsoft would never have put this emphasis on aerial displays and mapping had not Google done it first. And Google has shown the ability to innovate in ways that are both surprising and profitable. Both are free, although both also incorporate advertising (a recent CNN.com article said that Virtual Earth had no advertising, but I found a lifelike real estate ad right smack in the middle of the Fenway in Boston).
As for displaying maps and aerial photos, both alternatives have characteristics I like. I like Google Earth for its full-screen displays, and for the community that is constantly delivering surprising content in the way of photos, insights, and specific geographic features. Some complain that Virtual Earth works only with Internet Explorer, but I have multiple browsers loaded on my system, and am not religious about which one I use when one or another is better suited to a given task. And I do like how easy Virtual Earth makes it to create, save, and share unique views with others.
Google Earth has gotten quite slow on my 512 MB system when I load up on additional data plug-ins, which is most of the time. It is unrealistic for me to use Google Earth on this system alongside other substantive applications. However, I found that the 3D building plug-in for Virtual Earth also slowed this system down, so it is not yet clear whether one performs better than the other.
As a developer, I more readily understand how to create a mapping application with Virtual Earth than with Google Earth. The clear and concise Virtual Earth API documentation probably has a lot to do with that, but that is a personal impression rather than an objective judgment of the comparative merits of programming each.
So I am happy to see both Google Earth and Virtual Earth. And I am especially happy to see such rapid and dramatic advances in geographic and mapping technologies. I don't care who is driving those advances.
Posted by Peter Varhol on 11/09/2006 at 1:15 PM
I have always been dubious of technical certification programs. While a certification could provide evidence that its holder had a skill with a specific product, it was rather useless in determining an individual's adaptability in a continually changing career field. Knowing how to administer Windows NT 4.0 did not mean that you understood what you were doing, or how that skill could be applied to other operating systems or leveraged into a broader understanding of computers in general.
Alas, for years the job market proved me wrong. Those with certifications from Microsoft, Cisco, Novell, and others typically saw salaries and offers $5-10K above those of non-certified people doing the same work.
Not any more. According to a study conducted by Foote Partners and reported in eWeek (http://www.eweek.com/article2/0,1895,2051272,00.asp), salary growth for various certified professionals has essentially ceased, while salaries for non-certified professionals continue to grow. While salaries for certified professionals are still marginally higher on average, Foote Partners expects the salaries of the non-certified to pass them next year.
The study authors offer their own reasons for this trend. Their conclusion was that non-certified professionals may exhibit more of the business skills necessary to successfully apply technical solutions to business problems.
With the disclaimer that I have never worked in enterprise IT, I am dubious of this answer. I do not see any evidence that certified professionals have any more or less business savvy than non-certified ones. My answer is that those with a solid academic and experiential foundation in computer science and software engineering see certifications for what they are – a flash in the pan that looks good at one moment in time, but has no lasting value.
I cannot get out of my mind all of those Novell CNEs trained in the 1990s to work with one specific product. When that product was hot, they were in demand and well paid. When it was not, many left computing entirely because they had no other skills or knowledge to fall back on.
Although I have been an academic in the past, this is not intended to be academic snobbery. While formal education is the predominant means of getting a career-long foundation, I have known many bright and talented software people with little education in the field. But there are certain foundations of computing which are necessary in order to integrate the new concepts and technologies that are continually coming to the forefront. Of these, I could count formal languages, operating systems, data structures, and processor-level architectures as the primary engines of my own particular growth and development in the profession.
I have been pressured to get certifications in past jobs, and I have always resisted. I would like to think that my time could be better spent building on my foundation, rather than studying a particular vendor's way of doing things. I hope my longevity in the field ultimately attests to that point of view.
Posted by Peter Varhol on 11/07/2006 at 1:15 PM
We are currently working on a Special Report on Application Quality and Testing, which will be posted in the next week or so. I have spent a good part of my career in the development of code quality tools, and remain mystified by their general lack of use among developers. For example, I spent time on the BoundsChecker (http://www.compuware.com/products/devpartner/visualc.htm) team as product manager a few years ago. BoundsChecker is probably one of the best-known and most effective C/C++ error detection tools. While we attempted to promote regular use by developers, by far the predominant usage pattern was as a last resort to find and diagnose a known bug.
Many developers I've talked to use only the IDE (Visual Studio, Eclipse, or other), with no additional tools for unit testing, code coverage, error detection, or performance. With the possible exception of unit testing harnesses such as NUnit and JUnit, most do not even bother using available free or open source tools.
Software development is a profession of trade-offs, so let's look at the trade-offs here. Let's say that the average developer with a few years of experience makes $90K (some of you may find this number out of reach, but I know plenty of developers who make more). Adding benefits, facilities, and applicable taxes, a fully loaded developer probably costs 50 percent more, or $135K.
Many of these tools claim to increase productivity in finding and fixing bugs, or in improving performance. In many cases, the vendors claim up to a 50 percent improvement. That is probably exaggerated, but 20 percent is not unreasonable, especially since quality tools can identify and diagnose problems that sometimes cannot be found any other way.
Some studies have indicated that developers spend perhaps 50 percent of their time debugging. That might be a little high, but for the purpose of working with round numbers, let's use it. A 20 percent improvement on half of a developer's time works out to ten percent of the loaded salary, or $13.5K a year.
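To make the arithmetic explicit, here is the same back-of-the-envelope estimate as a short calculation. All of the figures (salary, overhead, time spent debugging, tool improvement) are the rough assumptions from the paragraphs above, not measured data:

```python
# Rough ROI estimate for developer quality tools, using the
# assumed figures from the discussion above.

base_salary = 90_000              # assumed average developer salary
loaded_cost = base_salary * 1.5   # benefits, facilities, taxes: ~50% overhead
debug_fraction = 0.50             # assumed share of time spent debugging
tool_improvement = 0.20           # assumed productivity gain from tools

# A 20% improvement applied to the 50% of time spent debugging
# is worth 10% of the fully loaded cost.
annual_value = loaded_cost * debug_fraction * tool_improvement

print(f"Loaded cost:  ${loaded_cost:,.0f}")    # 135,000
print(f"Annual value: ${annual_value:,.0f}")   # 13,500
```

Even if every assumption here is off by half, the annual value still dwarfs the price of most of the tools in question.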
That pays for a lot of developer quality tools, with some left over. So my question is, why do so few development teams make the investment? I had always thought that development teams were more willing to invest in additional bodies than in tools, but the cost of tools is trivial compared to the cost of an additional developer. And how do you explain the lack of use of free tools, especially in the Eclipse community? Do we think we are so good that we don't need additional quality tools?
I am not planning on going back to work developing quality tools any time in the near future, but I am curious. Any answers that you might have would be gratefully accepted.
Posted by Peter Varhol on 11/05/2006 at 1:15 PM
At VSLive! Boston last week, I also sat in on a session by Jackie Goldstein on worst practices in coding. What he did was to display code examples and ask the audience what was wrong with them. The code worked, but was slow or inefficient or both, and Jackie led the audience on a journey of discovery of poor coding.
The interesting thing is that most of his examples depended on having an understanding of how the underlying mechanisms worked – usually the .NET Framework. He also illustrated differences between the .NET Framework 1.1 and 2.0 that have an effect on coding practices when moving from one to the other. The common thread was that writing good code requires knowledge of more than just the programming language (C# or VB.NET, in this case).
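Jackie's actual .NET examples are not reproduced here, but a classic illustration of the same kind – code that is correct yet slow because of what the runtime does underneath – is building a string in a loop. A sketch in Python (the equivalent trap exists with System.String in .NET, where StringBuilder is the usual fix):

```python
# Correct but potentially quadratic: strings are immutable, so in
# the worst case each += copies everything built so far.
def build_slow(parts):
    result = ""
    for p in parts:
        result += p
    return result

# Same result, linear time: collect the pieces, join once at the end.
# (The rough .NET analogue is accumulating in a StringBuilder.)
def build_fast(parts):
    return "".join(parts)

parts = ["x"] * 10_000
assert build_slow(parts) == build_fast(parts)
```

Nothing in the language syntax hints that the two functions behave differently; you have to know what the runtime does with an immutable string to see why one scales and the other may not.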
Joel Spolsky (www.joelonsoftware.com) has pointed out on multiple occasions that writing code requires a significant understanding of all layers of abstraction, from the programming language down to the hardware. He has lamented the fact that it is possible to graduate with a degree in computer science from an otherwise reasonable school using only Java, without ever understanding the concept of pointers (http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html).
I like abstractions. They tend to simplify otherwise difficult concepts, and accelerate productivity for programmers. But I found myself nodding in agreement with both Jackie and Joel, in that knowing your programming language is not enough. I have never encountered the situation that Joel describes (http://www.joelonsoftware.com/articles/GuerrillaInterviewing3.html), in which programmers simply do not get pointers, but I can see it happening without a proper grounding in languages.
You might make the argument that such an understanding is nice to have, but no longer a necessity, because languages have moved beyond the need to directly manipulate the data in specific memory locations. There is a certain amount of truth to that. Millions of Java and VB programmers are a testament to the ability to write software effectively without pointers. And Jackie's .NET examples certainly did not require the use of pointers.
But the point that you need a deep understanding of everything the computer is doing still holds. Jackie's poor code did run, and was correct, but it used managed memory poorly, or used an inappropriate .NET class or method. The fact remains that if you do not know what the underlying layers are doing, you will make the wrong programming decisions.
Posted by Peter Varhol on 10/29/2006 at 1:15 PM
I have not been a pure programmer for much of my career, but I have a good academic grounding in computer science, and like to sound like a computer scientist every once in a while. It was only well after I suffered through formal languages courses that I actually became interested in the topic of programming languages (this may have something to do with the poor ways that we teach formal language concepts).
That is why Brian Randell's session at VSLive! Boston caught my attention. Brian spoke of improving performance in .NET applications. But he did so in a rather roundabout way, which is the right way to do it. Many things in language design and implementation are not what they seem, and to approach performance by talking about differences in binding is the right way to do so.
I was especially taken by his exposition of early binding versus late binding. He equated them to strong typing and weak typing respectively, which is not entirely accurate, but close enough to make some points concerning their use. Noting that Ruby is a weakly typed language that uses late binding, he made the point that late binding generally produces slower code than early binding. That is true, of course, because it takes time to identify a type and bind data to a variable while the application is running. However, he noted that more experienced programmers can more easily determine when the tradeoff is worth it.
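Brian's session was about .NET, but the cost of late binding is easy to see in any dynamic language. In this Python sketch, looking a method up by name on every call stands in for late binding, while resolving the method once and reusing the reference stands in for early binding (the absolute timings will vary from machine to machine; the point is only that the late-bound path does strictly more work per call):

```python
import timeit

class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

# "Late" binding: resolve the method by name on every single call.
late = timeit.timeit(lambda: getattr(g, "greet")(), number=100_000)

# "Early" binding: resolve the method once, reuse the bound reference.
bound = g.greet
early = timeit.timeit(lambda: bound(), number=100_000)

print(f"late-bound:  {late:.4f}s")
print(f"early-bound: {early:.4f}s")
```

The per-call difference is tiny, which is exactly why the tradeoff is often worth making for flexibility; it only matters in hot paths, which is where experience comes in.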
He said that VB lets programmers use weak typing in some application files and strong typing in others, but C# has no such provision. This is one area where C#, clearly Microsoft's flagship language, could use a bit more flexibility.
I did a bit of Lisp programming as a graduate student, and especially appreciated the ability to treat an expression in any number of different ways before I was ready to use it. But Lisp always had the reputation as a slow language. Fortran, which is very strongly typed, is a very fast language.
I believe I've noted on these pages before Robert Heinlein's maxim on tradeoffs – TANSTAAFL – There Ain't No Such Thing As a Free Lunch. I have always tried to emphasize tradeoffs, rather than absolute rules, as the guiding principle in software development. It was good to hear my perspective validated in this way.
Posted by Peter Varhol on 10/25/2006 at 1:15 PM
Taking my chances with the uncertainties of air transport and the Transportation Security Administration, I take my laptop everywhere I go. And it is terribly frustrating when connectivity is absent, or worse, sporadic. It was sporadic a couple of weeks ago at the Eclipse members' meeting in Dallas, where it was necessary to sit in a certain part of the Omni hotel lobby to be assured of connectivity. A couple of years ago at the Washington (DC) Hilton, in-room access didn't work at all, but it was possible to connect to unprotected wireless networks in an apartment building across the street (not that I am advocating doing so).
Yet free connectivity, or even connectivity that is reasonably priced for an appropriate range and time period, is sorely lacking. You have to stand in a particular place, stay at a particular place, or buy a particular brand of coffee, in order to get access.
There are two possible ways of viewing this trend. First, the network is filling out slowly and in spots, and it is only a matter of time before we are able to move seamlessly from Starbucks (I dislike coffee, so that is a nonstarter for me) to the Marriott Courtyard to the Northwest WorldClub at the Minneapolis airport without losing a signal.
But another way of looking at it is that we have the wrong model to begin with. The network will never fill out, because there are not enough commercial entities with a desire or business rationale to make it happen. And for those networks that make the attempt, the cost will be both prohibitive and fragmented across multiple non-communicating network providers.
You might argue that digital cellular networks actually accomplish the task of providing a single network across significant distances and at one cost. I think that the model of the cellular providers is one that might work, with an additional feature. Let me explain. I subscribe to one of the minor cellular carriers for voice (United States Cellular, which happens to provide exceptional coverage in my home state). US Cellular does not have a large national network, so it has exchange agreements with major providers all over the country. The net result is that I get 800 anytime minutes a month anywhere in the country for $50 (plus the various taxes and fees).
I would jump at a similar plan for wireless Internet access. This seems in no danger of happening anytime soon, however.
Posted by Peter Varhol on 10/23/2006 at 1:15 PM
I confess that I am not a journalist in the same sense that those writers who work for the Wall Street Journal, for example, are journalists. For one thing, I never had any training in journalism or even English composition for that matter (I placed out of the one college composition course that I wanted to take). I am formally educated in many areas, but news reporting or the fine details of the English language do not happen to be among them.
For another, having worked for a number of years in technology companies, I have a strong affiliation to both software vendors and the IT professionals using their products. I am largely not a disinterested bystander, but a participant in the technology industry.
But I cannot help feeling outrage at the positively Orwellian way that HP treated journalists, and in particular Pui-Wing Tam, as she chronicled in Thursday's Wall Street Journal (www.wsj.com; subscription required).
In partial fulfillment of a pledge made by CEO Mark Hurd, Tam was briefed by HP's outside attorneys on the steps taken against her in the name of HP. I say in partial fulfillment, because HP and its attorneys could not, or would not, answer many of her questions concerning the details of those steps. Perhaps those attorneys are so embarrassed at these flagrant abuses of law that their sense of honor prevents them from answering too many questions. In some cases, Tam received less information in her briefing than HP presented during Congressional hearings.
But here is what Tam does know. Her personal phone records were obtained. She was followed, and videotapes of her movements were taken and viewed. IM messages were obtained and scrutinized. Her background and her husband's background were documented and checked. Private investigators carried out "pre-trash inspections" at her home (HP claims not to know what this specific item means). Much of this information was obtained through "pretexting," using her Social Security number which had been obtained in some way.
These activities occurred because Tam was a journalist covering HP.
Some people are responding that it doesn't matter what the board of directors and other executives in the company did, as long as HP continues to improve its business and make money for its stockholders. Analysts are telling customers that the scandal should not impact their business relationships with the technology company.
I strongly disagree. Trust and integrity matter. A lot. And what happens in a corporate boardroom, and in the chief executive offices, has a way of letting all employees know what is acceptable behavior within the company. And the message sent, loud and clear, by ex-Chairman Patricia Dunn, was that she did nothing wrong, and in any case had a legitimate reason for her actions. The message sent by Hurd was that words matter more than actions in correcting the injustice.
Nothing wrong? Customers of HP should be concerned that these strong messages mean that HP employees feel free to engage in less-than-aboveboard tactics, while convincing themselves that it is not wrong to do so, and to prevaricate when caught in the act.
HP customers should be offended, and suspicious. It may be unrealistic to cease doing business with the vendor, but deals and other arrangements that you may have taken for granted need to be specifically defined and vetted for the foreseeable future. And if you are a large enterprise customer, it is appropriate to take Hurd to task, demanding that actions start to back up his words, as a condition of continuing the business relationship.
HP has lost most, if not all, of the trust and integrity that it had earned in the market. Until it regains those characteristics, be sure to check your wallet.
Posted by Peter Varhol on 10/20/2006 at 1:15 PM
My local newspaper (The New Hampshire Union Leader, at www.unionleader.com) published a comic this weekend picturing two children dressed in Halloween costumes, getting ready for the annual trick-or-treat. The caption read, "Remember, mom says we take only healthy treats like candy. No spinach or lettuce."
Besides the laugh-out-loud nature of the punch line, the comic affords a glimpse of what advances in technology have wrought for society. In this case, processed foods arguably do more good than harm: they let us trade the risk of immediate serious illness or even death for the long-term risks of obesity and heart disease. Certainly the latter are not trivial, but they develop over time, and we can mitigate them with commitment and moderation. These foods are not an unalloyed good, but they are an improvement over their unprocessed versions.
IT technologies also represent an improvement, though not a panacea. Cell phones, e-mail, and BlackBerrys (and similar push e-mail devices) make it possible for us to stay in touch in ways that were not possible two decades ago, and at a much lower cost than a landline phone call from that era. The downside is that some people do not know when to shut their devices off. Even so, I don't think there is any doubt that the tradeoff was worthwhile.
Perhaps the true culprit is our enthusiasm for every advance of technology, and our unbridled optimism that we can use it to change the world. In fact, the world improves through increments, as new technologies fix some problems, only to create others. Fortunately, the ones created are almost always less severe than the ones solved. We should hope that any technology advance offers a net improvement in society. If it does not, it will not last.
I grew up with the cartoon character Popeye, whose ingestion of spinach was portrayed as giving him great strength. It was only when I was older that I realized that the entire goal was to get me to eat my vegetables. Perhaps today it is time for Popeye to eschew his raw vegetables in favor of processed foods in moderation, and simply make sure that he also engages in regular exercise. He may find it an improvement.
Posted by Peter Varhol on 10/15/2006 at 1:15 PM