Do You Know the Answers to These Hanselman Questions?
Back when he ran an IT department, Peter hired developers. Three of the questions he asked came from Scott Hanselman's blog, posted way back at the beginning of the .NET Framework era, and he'd still ask them today.
About a decade back, Scott Hanselman posted some questions on his blog under the heading "What Great .NET Developers Ought To Know (More .NET Interview Questions)" (the previous set of questions was oriented toward ASP.NET Web Forms). It's interesting to review the list to see which ones aren't as relevant anymore (for example, "What is the difference between XML Web Services using ASMX and .NET Remoting using SOAP?") and which ones are.
There were three questions on Hanselman's list that I did ask back when I ran an IT department ... and that I would still want to ask today when helping my clients vet potential developers. Here are those questions, with what I would consider ideal answers.
Two caveats before you start objecting to my answers. First, I'm interested in what I would call "practical" answers -- answers that would reflect how the interviewee's knowledge would affect their day-to-day work. I'm not particularly interested in very abstract answers; I'm interested in answers that would drive the way the developer writes code.
Second, I say "ideal" because I'd be surprised if I would get someone who would give exactly these answers. I would suspect that, in an interview, the typical person would leave something out or even fumble the answers. But I'm also interested in hiring "self-aware" developers who know what they know and also know when they've run across something they don't know. I'm more interested in hiring someone who would say, "I don't know that, but here's what I think …" than I am in hiring someone who pretends to know what they don't know (or who confidently "knows" something that's just flat out wrong). To quote Harry Callahan, "A man's got to know his limitations."
Describe the difference between Interface-oriented, Object-oriented and Aspect-oriented programming.
Object-oriented programming (OOP) is all about classes and the relationships between them: inheritance (a relationship that's set up at compile time) and composition (when objects are brought together at runtime). With "pure" OOP, we're interested in what classes are available and what happens when we use the members (methods, properties and events) that make up the class. OOP is a general-purpose tool for solving a wide variety of problems and the most common programming paradigm in use today, especially for server-side development.
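The split between inheritance (compile time) and composition (runtime) can be sketched in a few lines of C# (the Customer, PremiumCustomer and Order classes here are made up for illustration):

```csharp
using System;

// Inheritance: the relationship is fixed at compile time
public class Customer
{
    public virtual string Greeting() => "Hello";
}

public class PremiumCustomer : Customer   // PremiumCustomer *is a* Customer
{
    public override string Greeting() => "Welcome back!";
}

// Composition: the relationship is assembled at runtime
public class Order
{
    private readonly Customer _customer;  // Order *has a* Customer

    public Order(Customer customer)       // which Customer is decided by the caller
    {
        _customer = customer;
    }

    public string GreetBuyer() => _customer.Greeting();
}
```

Passing a PremiumCustomer to an Order at runtime changes the Order's behavior without either class knowing about the change in advance.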
Aspect-oriented programming (AOP), on the other hand, addresses a specific set of problems: cross-cutting concerns. A cross-cutting concern is one that crops up in many different places in an application (authorization and logging are the two most obvious examples). AOP has at least two components: a way to centralize the functionality that handles a concern and a way to specify where in the application that functionality is to be applied without altering the application's code. If you've used attributes in your programming, you've done something that looks a lot like aspect-oriented programming. If you've used the global settings to apply an attribute in ASP.NET MVC, you've gotten a little bit closer to AOP.
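Here's a rough sketch of that declarative style using plain reflection (the LoggedAttribute, OrderService and Dispatcher names are invented for this example; a real AOP framework such as PostSharp does the weaving for you):

```csharp
using System;
using System.Reflection;

// A made-up attribute marking where the cross-cutting concern applies
[AttributeUsage(AttributeTargets.Method)]
public class LoggedAttribute : Attribute { }

public class OrderService
{
    [Logged]                       // declarative: no logging code inside the method
    public string PlaceOrder() => "order placed";
}

public static class Dispatcher
{
    // The logging logic lives in one place; reflection decides where it applies
    public static object Invoke(object target, string methodName)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        if (method.GetCustomAttribute<LoggedAttribute>() != null)
            Console.WriteLine($"Calling {methodName}...");   // the cross-cutting concern
        return method.Invoke(target, null);
    }
}
```

The point is that PlaceOrder contains no logging code at all; the concern is added from the outside.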
Interface-based programming is a branch of programming where the focus is on the API exposed by various objects, rather than on the implementation behind the API. In interface-based programming, we think about the API as a contract that should never be broken (or, at least, not without a lot of discussion with all of the developers who use that API). We focus on the API because we want to support having multiple implementations. This focus on the API, rather than the implementation, allows us to extend, evolve or even replace functionality provided through the API without having to rewrite the client that uses the API.
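A minimal sketch of coding against a contract (all of these class and interface names are hypothetical):

```csharp
using System.Collections.Generic;

// The contract: clients depend on this, never on an implementation
public interface ICustomerRepository
{
    IEnumerable<string> GetNames();
}

// One implementation today...
public class InMemoryCustomerRepository : ICustomerRepository
{
    public IEnumerable<string> GetNames() => new[] { "Alice", "Bob" };
}

// ...and another can be swapped in later without touching the client
public class EmptyCustomerRepository : ICustomerRepository
{
    public IEnumerable<string> GetNames() => new string[0];
}

// The client only ever sees the interface
public class ReportBuilder
{
    private readonly ICustomerRepository _repository;
    public ReportBuilder(ICustomerRepository repository) => _repository = repository;

    public int CountCustomers()
    {
        int count = 0;
        foreach (var _ in _repository.GetNames()) count++;
        return count;
    }
}
```

ReportBuilder never has to change when the repository's implementation does; only the contract matters.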
OOP and interface-based programming are complementary to each other; AOP is orthogonal to both.
What is the difference between the Finalize and Dispose methods?
In the .NET Framework, the Dispose method, when present in a class, is intended to hold code that frees up unmanaged resources (file handles, open database connections and so on). When a class exposes a Dispose method, the class provides a mechanism for the client to release resources held by the object without having to wait for garbage collection to, eventually, free up those resources.
The definition of the Finalize method is almost identical, with a couple of exceptions. First, the Finalize method is always present because it's part of the base System.Object class that all classes in the .NET Framework inherit from (eventually). However, the Finalize method is declared as Protected, meaning that it's only available within a class -- you can override it in your class but you can't call it from a client. If you do override the Finalize method, the .NET Framework garbage collection process will call your implementation of the method as part of cleaning up after your object. When the Finalize method will be called is unpredictable, except that it will be called after all clients have finished using the object.
If your class had both a Finalize and a Dispose method, you could call your Finalize method from your Dispose method. Properly, only the Finalize method should be called a destructor because a destructor is managed by the Framework, not the client; properly, only the Dispose method should be considered an implementation of the Dispose pattern because it decouples releasing the resource from the object's lifetime.
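In C# syntax, overriding Finalize is written as a destructor (~ClassName), and the standard dispose pattern ties the two methods together. Here's a sketch (ResourceHolder is a made-up class, and the IsDisposed property exists only to make the example checkable):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private bool _disposed;

    public bool IsDisposed => _disposed;   // for illustration only

    // Client-driven cleanup: called deterministically by the client (or a using block)
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // the finalizer no longer needs to run
    }

    // The finalizer (C#'s syntax for overriding Finalize): the garbage
    // collector's safety net, run at some unpredictable later point
    // if the client never called Dispose
    ~ResourceHolder()
    {
        Dispose(false);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed resources here (e.g., Dispose a wrapped connection)
        }
        // release unmanaged resources here
        _disposed = true;
    }
}
```

Both paths funnel into the same Dispose(bool) method, and a second call to Dispose is a safe no-op.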
How is the using pattern useful? What is IDisposable? How does it support deterministic finalization?
If I got a good answer to the previous question that didn't include the following material, this question would be the one I asked next.
The Using keyword allows you to build a code block that guarantees the Dispose method on an object will be called. You'll only be able to use the Using keyword with an object if it has a Dispose method or, more specifically, if the class implements the IDisposable interface, which requires that you add a Dispose method to the class.
While the timing of the Finalize method is unpredictable, exposing a Dispose method allows the developer using an object to determine when resources will be freed: The developer, not the .NET Framework, decides when to call the Dispose method (by deciding where to put the end of a Using block, for example).
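In C#, that deterministic cleanup looks like this (the file name log.txt is just a throwaway for the sketch):

```csharp
using System;
using System.IO;

// Dispose is guaranteed to run when the block ends,
// even if an exception is thrown inside it
using (var writer = new StreamWriter("log.txt"))
{
    writer.WriteLine("deterministic cleanup");
}   // <- writer.Dispose() runs here, flushing and closing the file

// Because Dispose has already run, the file is safe to read back immediately
Console.WriteLine(File.ReadAllText("log.txt").Trim());
```

The closing brace of the using block is, in effect, the developer's decision about exactly when the resource is released.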
And here's one more from the list: Can DateTimes be null? The correct answer is "Yes" (a DateTime itself is a value type and can't be null, but a nullable DateTime can be) ... which is embarrassing because I would have said "No." Which goes to show that it's always better to be on the side of the desk that's asking the questions.
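For the record, the "Yes" hangs on nullable value types, which have been in the Framework since .NET 2.0:

```csharp
using System;

// DateTime itself is a value type, so this line wouldn't compile:
// DateTime d = null;

// But a nullable DateTime (shorthand for Nullable<DateTime>) can be null:
DateTime? maybe = null;
Console.WriteLine(maybe.HasValue);        // False

maybe = DateTime.Now;
Console.WriteLine(maybe.HasValue);        // True
```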
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.