Editor's Note

Software Advancements Fail to Keep Pace

Why does software always seem to lag behind hardware?

My first computer was an Atari 800XL with a whopping 64K of RAM.

I purchased it to write papers on while at college; it replaced a manual typewriter. I also bought a disk drive, a dot-matrix printer, and three pieces of software: a word processor, Chessmaster 2000, and Pirates. Even then, I had my priorities in order. The total cost of that computer, its peripherals and software, and related supplies: approximately $350.

It was one of the best investments I've ever made in my life. It was an obsolete computer when I bought it, and I was a hunt-and-peck typist, but as a geology professor was fond of saying when I was in college: "In the land of the blind, the one-eyed man is king." And I was feeling like royalty with this new machine.

It's easy to take what we have today for granted. But word processing--even the limited version available then on an obsolete computer--was a staggering improvement over what was possible with the typewriter. It tremendously reduced the time required to write a paper, made it easy to move text around and make corrections, and let you apply fancy typographic treatments like italics and boldfaced text.

The difference in computing power between then and now is exponential. My current laptop--by no means top of the line--has 2GB of RAM and amazing processing power. It has a built-in graphics chip and handles current software, including the occasional game, quite well. From a technical standpoint, it's a staggering device.

But all this leads to an obvious question: Why isn't the software for it exponentially better? Why doesn't Moore's law apply to software as well as to hardware? I still use word processors, and the software itself is powerful and contains many features not available in that first program, but the initial leap to a word processor remains more significant than all the improvements in the intervening years. It has been a few versions since I saw a feature in Word that felt compelling, much less must-have. In this respect, I think back to the aphorism about the land of the blind: Two eyes are nice, and preferable, but the real advance is the first eye that lets you see at all.

In a similar vein, Visual Basic was a big step forward in terms of RAD for programming on Windows. I still hear from readers who have been using VB since version 1, and those good old days evoke a fair amount of nostalgia.

It's not that the people I talk to feel that version 1 did everything that was needed from its inception, but I do still hear from readers who tell me that, while they use and like .NET, the tool's added abilities come at a significant cost in terms of ease-of-use. The promise of the early versions of VB -- of a highly graphical programming tool with an emphasis on productivity -- hasn't been fulfilled to the degree suggested by the initial implementations.

Newer versions have brought many additional features -- even particularly powerful and useful ones -- but there has been a regression in terms of what's required, not just to get up and running in VB.NET or C#, but even to use it on a day-to-day basis. To be sure, many developers asked for this trade-off. VB had a large cadre of followers from its inception, but many of those followers were also pushing the limits of what it could do at every turn. VB was notorious for the hacks required to implement the behavior its users wanted, and VB developers were happy to provide those capabilities any way they could. One of the promises Microsoft made with the introduction of .NET was that you wouldn't have to resort to hacks to implement the basic functionality your users needed. I think, by and large, Microsoft succeeded at that goal, but the costs were significant, both in backward compatibility and in ease-of-use.

Computer hardware seems to manage major advancements without requiring retrenchment in terms of ease-of-use and power -- why can't our software?

Talk Back: Do you think software development has kept pace with the improvements in computing power? Why, or why not? Write to me at [email protected].

About the Author

Patrick Meader is editor in chief of Visual Studio Magazine.
