Back to Basics
When you spend a good part of your day reading or listening to yet another announcement of a product with a slightly different take from its competitors on the hot problems of the day, you tend to forget that there are real issues in computer science that are the subject of serious research and discussion. I took the opportunity over the last week or so to get back to some of the intellectual curiosities that attracted me to computer science in the first place.
My most recent encounter was with a company called 1060 Research (http://www.ftponline.com/channels/business/2006_08/companyfocus/1060research/), which abstracts function calls and data passing into resources identified by URIs. While most of us use URIs to specify Web locations or to make database calls (using XQuery), 1060 Research's NetKernel uses them for everything, including function calls and the storage of return values. CEO Peter Rodgers pointed out to me the surprising fact that most code does little more than move messages and data from one location to another; by abstracting that plumbing into URI-addressed resources, NetKernel makes application development rapid and efficient.
Canvassing the blogs provides other opportunities to look at some of the fundamental issues of computer science. Joel Spolsky notes that it can be useful for a programming language to treat different constructs in the same way in code, even if those constructs turn out to be very different things (http://www.joelonsoftware.com/items/2006/08/01.html). This is especially true if you don't know in advance what a construct will be. He uses the Lisp programming language as an example of how a term can be a variable, a function, or an instruction, depending on how it is ultimately used, yet be manipulated in exactly the same way up until that point. This has special significance for me, as Lisp was my predominant language when I was doing graduate research in artificial intelligence in the 1980s.
Eric Sink ponders the question of concurrency in application execution (http://software.ericsink.com/entries/LarryO.html). Concurrency was once a real problem for scientific programmers writing Fortran code for a Connection Machine from Thinking Machines Corporation or another large multiprocessor system, and a much more abstract problem for the rest of us. That is changing. Dual-core desktop systems are becoming common, and Eric notes that it is only a matter of time before we have 16-core systems at our personal disposal.
Eric presents the Erlang programming language as a possible solution (yet incongruously bemoans its lack of a C-style syntax, even though Erlang dates from roughly the same era as the ANSI C specification). Erlang provides concurrency through threads that do not share memory. Instead, these threads (actually very lightweight processes) communicate through message passing, a mechanism similar to that used by 1060 Research's NetKernel.
The key for all of these problems is abstraction and modeling; that is, creating a model of the underlying execution engine (both hardware and operating software), and abstracting it to something useful for your particular purpose. It turns out that the better models we have, the better the abstraction can be. Of course, it is unlikely that there exists such a thing as a universal model of execution, so there will be no universal programming language to abstract to. This is perhaps the most cogent argument for the need to have several programming languages in your personal set of skills.
You might say that there is nothing new or significant here; in Joel's example, C programmers typically use pointers both to reference variables and to call functions. Likewise, you can do threading and reliable concurrency in C by setting up critical sections and resource locking in your code.
But the fact that you can use pointers to reference (and dereference) both variables and functions in C doesn't mean that the language treats them the same. When you dereference a pointer to a variable, you find the value associated with that variable; when you dereference a pointer to a function, you find the entry point of that function. And while you can do concurrency in C, writing your own critical sections and locking resources manually, it is difficult, time-consuming, and error-prone.
There was a time when we didn't think very much about concurrency and other problems, but that is changing. We require new thinking, and new approaches to these problems, because these problems are starting to touch mainstream computing.
These are all fundamental problems in computer science. Solutions here are not going to convince a venture capitalist to fund a start-up, let you go public, or let you sell out to a larger company in two years. In many cases, the only reward for solving a fundamental problem of computing is intellectual satisfaction.
The fundamental problems can seem abstruse when you're in crunch mode for your latest product delivery, or, in my case, reading yet another press release for a point release to an established product. But they will have a far greater influence on your work, and on the direction of software, than any product you may be shipping today.
Posted by Peter Varhol on 08/20/2006 at 1:15 PM