Practical .NET

Programming Will Always Be Hard

Fifty years ago, the U.S. space program put men on the moon and got the whole team back alive. They did it with computers that were probably less powerful than the one in my pocket (and that device is over a year old). It's not even possible to compare the tools I use to create Xamarin applications for that device with the tools that were used to put men on the moon: Most of my tools didn't have equivalents 50 years ago.

I've been writing code for most of the time since that moon landing and I'm here to tell you: It's just as hard to write applications now, in 2018, as it was in 1974. It's worth pointing out why.

A History Lesson
On the face of it, building applications should be infinitely easier. The history of programming is pretty easy to describe, after all: Computers deliver more computing power and resources; operating systems and infrastructure make it easier to do more sophisticated things; and development tools (languages, frameworks, IDEs) become more powerful.

Each one of those changes was supposed to make it easier to create applications and change the way applications were created. Back in the '60s, COBOL was an "English-like" language that was going to let managers write programs, and RPG was supposed to create a standard processing structure that would allow anyone to generate reports by just filling in some forms. More recently, we have application generators that are intended to create whole applications based on requirements (written in some "requirements language") or some other input (typically, the database schema).

All of these generators have had, at best, niche success.

Even comprehensive tools (think PowerBuilder or Ruby-on-Rails) have a period of high popularity/adoption/use followed by a gradual tapering off. In the end, organizations vote with their programmers' fingers and we go back to writing code in some programming language.

Which leaves us here, still writing code line-by-line, making mistakes and delivering bugs.

I don't want to dismiss the impact of those comprehensive tools: It would be hard to overestimate the impact of Ruby-on-Rails on my current development environment, for example. A line of code in C# in 2018 accomplishes far more than a line of Fortran did in 1969; writing database access code with Entity Framework means that I get to ignore a whole bunch of details that were critical to doing the same thing in ADO.NET just a few years ago. And I would be lying if I didn't admit that we deliver far more functionality per hour of coding now than we ever could back in the early '70s.
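To make that Entity Framework/ADO.NET comparison concrete, here's a minimal sketch. The Order entity, the StoreContext class, the Orders table and the connection string are all hypothetical, and the ADO.NET version assumes SQL Server; the point is only that the Entity Framework query is essentially one line, while the ADO.NET version makes me manage the connection, the command, the parameters and the reader myself.

// A minimal sketch, not production code. The Order entity, StoreContext,
// and connection string are hypothetical; the ADO.NET version assumes SQL Server.
using System.Collections.Generic;
using System.Data.Entity;      // EF6; EF Core would use Microsoft.EntityFrameworkCore
using System.Data.SqlClient;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderQueries
{
    // Entity Framework: the connection, command, and reader are handled for me
    public static List<Order> BigOrdersWithEf()
    {
        using (var db = new StoreContext())
        {
            return db.Orders.Where(o => o.Total > 100).ToList();
        }
    }

    // ADO.NET: the same query, but every detail is mine to manage
    public static List<Order> BigOrdersWithAdoNet(string connectionString)
    {
        var orders = new List<Order>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id, Total FROM Orders WHERE Total > @total", conn))
        {
            cmd.Parameters.AddWithValue("@total", 100);
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    orders.Add(new Order
                    {
                        Id = rdr.GetInt32(0),
                        Total = rdr.GetDecimal(1)
                    });
                }
            }
        }
        return orders;
    }
}

Neither version is hard to write, but the second one has far more places for a bug to hide -- which is rather the point.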

Why It's Hard
So why isn't it easier? Why does it still take about as long to deliver a complete application now as it did in the '70s? Why are we still writing lines of code and still delivering bugs?

The answer, I think, is because we keep "raising the bar" (though, considering how likely it is that we will deliver bugs instead of functionality, "upping the ante" might be a better metaphor). The reason we keep raising the bar is that our users look at what we've done and say "That's nice. But if you can do that ... why can't you do this?"

And, to be honest, we don't have a good answer to that question. In fact, we look at our tools, frameworks, and infrastructure and think, "Actually, we could do something cooler." So we get better tools and then deliver those cooler applications. Why? Because, quite frankly, we also think it would be cool to deliver that "better thing."

If anything, our response just encourages the cycle of ever-rising expectations, along with tools to meet those expectations. The history of computing consists of two things: The tools improve and the problems get tougher. It would be foolish to think that will change.

Yes, we have different kinds of problems than we did 50 years ago and, as a result, we make different kinds of mistakes than we did. We no longer worry about managing memory or, really, memory at all (I remember writing a boot loader in 6502 Assembler for the Apple IIe so I could get back 4K of RAM that I really, really needed).

But we still succeed and fail at the same rate, we still call the failures "bugs" and we still call the people making them "programmers" or "developers." That won't change as new tools show up (yes, even when we integrate AI into the process). Our users -- and we ourselves -- will just raise the bar/up the ante.

And we'll still put men on the moon. Except it won't be the moon -- we will be aiming much further out and we'll stay longer and do more when we get there.

It will also continue to be fun. Best job in the world.

About the Author

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.
