C# Corner

C# Questions Answered: Lambda, C# Traps, Unsafe Code

Patrick Steele responds to questions readers have about previous columns.

This month, I'm following up on some questions and comments my articles have raised over the past year. Some of these came in e-mail, some from VisualStudioMagazine.com, and a couple from face-to-face meetings at conferences and other events. Got a question about my columns or about C# development in general? E-mail me at [email protected].

Lambda Properties
My June column on lambda properties ("Lambda Properties: An Alternative to Subclassing?") elicited a number of good questions. One of them was simply: what's the performance or memory impact of lambda properties versus inheritance?

This is a great question that's difficult to answer. If I were to hypothesize about the mechanics inside the runtime, I'd expect that extending a class by providing alternate implementations via lambda properties would perform better -- after all, it's just invoking a delegate. Inheritance means the runtime has to perform a virtual dispatch to find the right override to call based on the actual type in the inheritance hierarchy. However, that dispatch is probably highly optimized by the CLR team, given that it's such a core part of the Microsoft .NET Framework.

I put together a simple console application called "LambdaPropertyPerformance" (see the sample code included with this article). We'll compare two approaches. First, a base class along with a derived class that overrides a virtual method:

class BaseClass
{
  public virtual int Add(int a, int b)
  {
    return a + b;
  }
}
 
class Derived1 : BaseClass
{
  public override int Add(int a, int b)
  {
    if (a > 0 && b > 0) return a + b;
 
    throw new ArgumentException("arguments must be greater than zero.");
  }
}

Likewise, we'll create another class that exposes the Add functionality as a lambda property:

class Computer
{
  public Func<int, int, int> Add { get; set; }
 
  private int BaseAdd(int a, int b)
  {
    return a + b;
  }
 
  public Computer()
  {
    this.Add = BaseAdd;
  }
}
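
Before timing anything, here's a quick sanity check of the wiring. This snippet isn't part of the benchmark; it just confirms the default behavior and shows how a caller swaps in an alternate implementation:

var c = new Computer();
Console.WriteLine(c.Add(2, 3));  // 5 -- the default BaseAdd implementation

c.Add = (a, b) => a - b;         // replace the implementation at run time
Console.WriteLine(c.Add(2, 3));  // -1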

Now let's do a simple for loop to time the "Add" approach through the inheritance hierarchy (loopCount is an int defined earlier in the program):

var sw = new Stopwatch();
 
var derived = new Derived1();
sw.Start();
for (var x = 1; x < loopCount; x++)
{
  var value = derived.Add(x, x + 2);
}
sw.Stop();
Console.WriteLine("Inheritance time: {0} loops in {1} milliseconds",
  loopCount, sw.ElapsedMilliseconds);

And let's do the same thing with our lambda property implementation:

var c = new Computer();
c.Add = (a, b) =>
  {
    if (a > 0 && b > 0) return a + b;
 
    throw new ArgumentException("arguments must be greater than zero.");
  };
sw.Reset();
sw.Start();
for (var x = 1; x < loopCount; x++)
{
  var value = c.Add(x, x + 2);
}
sw.Stop();
Console.WriteLine("Lambda time: {0} loops in {1} milliseconds",
  loopCount, sw.ElapsedMilliseconds);

I ran each loop three times, outside of the debugger (to limit the performance impact of having a debugger attached). The average times on my 8GB machine running 64-bit Windows 7 were:

  • 1,885 milliseconds for the inheritance approach
  • 1,927 milliseconds for the lambda property approach

So across a little more than 6.7 million iterations, the difference -- roughly 2 percent -- was virtually nonexistent.

That said, the answer to this question really comes down to "it depends." First off, this type of performance analysis (looping over a particular method a few million times) is not a good indicator of an implementation's overall performance. You need to do performance testing with a real profiling tool under real-world scenarios. And remember, trying to optimize something before you've identified it as a performance bottleneck is a waste of time. Do your homework, make a good design, unit test your code, run integration tests and monitor your application's performance. Once you see an issue, dive in, find its root cause and attack it then.

Finally, I liked the comment left by VSM reader Sean Cooper on the lambda property approach: "I think this is a great idea for those edge instances where you need a slight variation for one or two places in the code and creating an entire subclass is more work than it's worth."

Code Examples for Interface-Based Programming
My very first column for Visual Studio Magazine was titled "Interface-Based Programming," and readers had questions about the examples I provided. In the interest of brevity, I made some assumptions about how the interfaces may be used in real code, and I think that led to some confusion.

One commenter asked about my last example, where I showed how to set up your mocking and tests to make sure your exception handling code is working properly. What I didn't show was a sample consumer of the interface we're mocking. First, let's look at an interface we'll use for reading a file:

public interface IFileReader
{
  void ReadFile();
  string Contents { get; }
}

Here's how a class may use this interface:

public class Processor
{
  private readonly IFileReader reader;
 
  public Processor(IFileReader reader)
  {
    this.reader = reader;
  }
 
  public string GetFileContents()
  {
    try
    {
      reader.ReadFile();
      return reader.Contents;
    }
    catch (FileNotFoundException)
    {
      return null;
    }
  }
}
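
As an aside, the column never showed a concrete implementation of IFileReader. Here's one possibility -- the DiskFileReader name and body are mine, not from the original article -- just to make the wiring concrete:

public class DiskFileReader : IFileReader
{
  private readonly string path;

  public DiskFileReader(string path)
  {
    this.path = path;
  }

  public string Contents { get; private set; }

  public void ReadFile()
  {
    // File.ReadAllText throws FileNotFoundException when the file is missing --
    // exactly the case Processor.GetFileContents guards against.
    Contents = File.ReadAllText(path);
  }
}

In production you'd hand Processor a DiskFileReader; in your tests you hand it a mock. That substitution is the whole point of programming against the interface.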

When I talked about testing for exception handling, I wanted to make sure that if the IFileReader's ReadFile method throws a FileNotFoundException, null will be returned by Processor.GetFileContents. Let's look at how we can use Rhino.Mocks to test our exception handling code.

We'll start by creating a mock of the IFileReader. Then, we'll stub out the functionality of the ReadFile method to simply throw a FileNotFoundException. Please note that when the original article was published, I was using an older version of Rhino.Mocks. Newer versions now support directly throwing an exception (no need to use the "Do" extension method and a custom delegate):

IFileReader reader = MockRepository.GenerateStub<IFileReader>();
reader.Stub(r => r.ReadFile()).Throw(new FileNotFoundException());

This could be considered the "arrange" phase of our Arrange/Act/Assert unit test. Now we'll act:

var p = new Processor(reader);
var contents = p.GetFileContents();

The final phase, Assert, is where you'd make sure that the "contents" variable is null based on whatever unit testing framework you're using. The sample code for this article contains a project called "MockingExamples," which shows a complete example of this concept.
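
Pulled together, the whole test might look like this. The [TestMethod] attribute and Assert.IsNull call assume MSTest -- substitute your framework's equivalents:

[TestMethod]
public void GetFileContents_ReturnsNull_WhenFileIsMissing()
{
  // Arrange: stub ReadFile to throw, exactly as shown above
  IFileReader reader = MockRepository.GenerateStub<IFileReader>();
  reader.Stub(r => r.ReadFile()).Throw(new FileNotFoundException());

  // Act
  var p = new Processor(reader);
  var contents = p.GetFileContents();

  // Assert: the exception was caught and null came back
  Assert.IsNull(contents);
}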

5 Traps to Avoid (Especially Bad Examples)
In September 2010 I wrote a Web column ("5 C# Traps to Avoid") that generated a lot of comments -- most of them about Trap No. 5 (forgetting to unsubscribe from events). I totally flipped my publisher/subscriber sample! Instead of an example of a publisher still having a reference to a subscriber (preventing the subscriber from being garbage collected), I had an example of a publisher going away, which will have no impact on the lifetime of the subscribers!

To clear things up, let's look at a proper example that clearly illustrates the problem. We'll use the WeakReference object to help us track whether an object is still alive (not removed from memory via the garbage collector). Let's start with a simple Publisher:

  public class Publisher
  {
    public event EventHandler DoSomething;
 
    public void Trigger()
    {
      if (DoSomething != null)
      {
        DoSomething(this, EventArgs.Empty);
      }
    }
  }

Pretty simple -- at some point the "Trigger" method is called and the DoSomething event is raised. The event carries no data of value, but that's because this is only an example. Now we'll look at a super-simple subscriber:

  public class Subscriber
  {
    public void SomethingHandler(object target, EventArgs e)
    {
      // Intentionally empty -- all we care about here is the subscription itself.
    }
  }

As you can see, we have a method that will handle the DoSomething event. Now let's see what happens when a subscriber doesn't unsubscribe from the events it originally subscribed to. We start by creating our publisher and a few subscribers. We'll also initialize a WeakReference for each subscriber:

  var pub = new Publisher();
 
  var sub1 = new Subscriber();
  var sub2 = new Subscriber();
  var sub3 = new Subscriber();
 
  var weak1 = new WeakReference(sub1);
  var weak2 = new WeakReference(sub2);
  var weak3 = new WeakReference(sub3);
 
  pub.DoSomething += sub1.SomethingHandler;
  pub.DoSomething += sub2.SomethingHandler;
  pub.DoSomething += sub3.SomethingHandler;
 
  pub.Trigger();

The call to "Trigger" wasn't required for the demo, but I threw it in for good measure. Now let's clear out our references to the subscribers and call the garbage collector to clean things up:

  sub1 = null; sub2 = null; sub3 = null;
 
  GC.Collect(); GC.WaitForPendingFinalizers();
 
  var problem1 = weak1.IsAlive;
  var problem2 = weak2.IsAlive;
  var problem3 = weak3.IsAlive;

Set a breakpoint right after the assignment to problem3 and run the code: you'll see that problem1, problem2 and problem3 are all true! We no longer have a reference to the subscribers and we've even forced a garbage collection, yet they're still "alive" because the publisher still holds a reference to each subscriber's "SomethingHandler" method. These subscribers stay in memory until the publisher itself is garbage collected.

If we unsubscribe from the DoSomething event before we clear out our references, the call to GC.Collect will indeed dump the subscribers from memory. Just before nulling out the subscriber references, we add:

  pub.DoSomething -= sub1.SomethingHandler;
  pub.DoSomething -= sub2.SomethingHandler;
  pub.DoSomething -= sub3.SomethingHandler;

Set a breakpoint again after the assignment to problem3 and you'll notice that IsAlive is false for all three of the subscribers -- they've all been garbage collected.

The code for this is included with the sample code, in the project "PubSubProblem." The BadEventHandling method shows how failing to unsubscribe causes the subscribers to hang around in memory longer than you expect, while ProperEventHandling shows that unsubscribing properly lets your objects be garbage collected as expected.

Memory-Mapped Files vs. Unsafe Code
When I wrote about memory-mapped files and the speed increases when manipulating images, commenter Alan asked: "I would be interested to see how much faster your code would be if you used an unsafe block and did direct pointer manipulation instead of using the managed SetPixel method."

I initially thought that, sure, unsafe code would probably beat even the memory-mapped file approach. But the downside of using unsafe code is that the assembly must be compiled with the /unsafe compiler switch ("Allow unsafe code" in the project's Build properties), and you have maintenance concerns to consider: Will this code still be maintainable by other developers?

The sample code includes a project called "ProcessBMP." It's similar to the code that shipped with my original article, but cleaned up so it's easier to tell the various techniques apart. I also set up an automatic "first time" download of the initial bitmap from my Dropbox account (the bitmap is too large to include with the code download).

The original code showed a sample of "whiting out" the bottom 50 rows of a bitmap (actually it affected every other row of the bottom 100 rows -- hence 50 rows affected). Because the memory-mapped files approach was so quick (and I expected the unsafe approach to be even quicker), I increased the row count to every other row of the bottom 500 rows.

We can use the LockBits method to grab a locked area of memory for those last 500 rows of the Bitmap instance called "lockImage." In this code, "RowCount" is a readonly int set to 500:

var rect = new Rectangle(0, lockImage.Height - RowCount, lockImage.Width, RowCount);
var bmpData = lockImage.LockBits(rect, ImageLockMode.ReadWrite,
  lockImage.PixelFormat);

Next, we use an unsafe block to do the pointer arithmetic needed to produce 250 white rows:

unsafe
{
  byte* ptr = (byte*)bmpData.Scan0;
  for (var i = 0; i < RowCount; i += 2)
  {
    // Scan0 points at the first locked row; Stride is the byte width of one row.
    var rowStart = ptr + (bmpData.Stride * i);
    var width = bmpData.Stride;
    while (width-- > 0)   // write exactly Stride bytes -- one full row
    {
      *rowStart++ = 255;
    }
  }
}
lockImage.UnlockBits(bmpData);  // unlock before the Save call below

The full project includes all three methods -- Bitmap manipulation via SetPixel, memory-mapped files and unsafe code -- each wrapped in Stopwatch timing code. The results were surprising.

The memory-mapped files approach was 20 to 25 times faster than manipulating the bitmap using SetPixel. But the unsafe method using pointers was actually about 20 percent slower than memory-mapped files. Granted, both approaches were way faster than SetPixel -- I just expected the unsafe version to be the quickest.

But, if you look at the code, there's one line that probably explains the speed difference:

lockImage.Save("Trees-unsafe.bmp", ImageFormat.Bmp);

When using unsafe code, we still need to read the entire file into memory, process it and write it back out. The lack of file I/O is one of the big performance improvements you get with memory-mapped files. You get to map a chunk of memory directly to part of a file on disk. There's no need to read/process/write.
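
To make that concrete, here's a minimal sketch of how a memory-mapped version can write straight to the file. This is my reconstruction, not the actual code from the sample project; the file name is hypothetical, and it assumes a 24bpp, bottom-up BMP whose pixel-data offset and width are read from the header (MemoryMappedFile lives in System.IO.MemoryMappedFiles, added in the .NET Framework 4):

using (var mmf = MemoryMappedFile.CreateFromFile("Trees.bmp"))
using (var accessor = mmf.CreateViewAccessor())
{
  var pixelDataOffset = accessor.ReadInt32(10);  // BMP header: offset to pixel data
  var width = accessor.ReadInt32(18);            // BITMAPINFOHEADER: image width
  var stride = ((width * 3) + 3) & ~3;           // 24bpp rows, padded to 4 bytes

  // Bottom-up BMP: the first rows stored in the file are the bottom of the image.
  for (var row = 0; row < RowCount; row += 2)
  {
    var rowStart = pixelDataOffset + (long)stride * row;
    for (var b = 0; b < stride; b++)
    {
      accessor.Write(rowStart + b, (byte)255);   // white out the row directly in the file
    }
  }
}  // disposing the accessor flushes the changes to disk -- no read/process/write cycle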

If we comment out the "Save" line at the bottom of the WhiteOutRowsLockBits method, our timings change dramatically. The unsafe code is now 2 to 3 times faster than memory-mapped files. But without the Save, we lose our changes. Memory-mapped files can give you a big performance boost in the right situations, without having to rely on unsafe code.

About the Author

Patrick Steele is a senior .NET developer with Billhighway in Troy, Mich. A recognized expert on the Microsoft .NET Framework, he’s a former Microsoft MVP award winner and a presenter at conferences and user group meetings.
