C# Corner

Write Code for a Multithreaded World

Multithreaded programming is difficult because context switches can happen any time. Here are a few techniques to mitigate the chance of failure.

Technology Toolbox: C#

The next big frontier in computing is concurrent programming (again).

In recent years, the way processors deliver on Moore's law has changed: They are getting faster by adding cores, not by increasing the speed of each individual core. There is good reason to believe that we'll come out ahead by creating applications that can take advantage of those extra cores. Of course, like everything else, there is a downside: Multithreaded programming is hard. Even thorough unit testing can't adequately guarantee that you have a correctly functioning program. One big reason for this: You can't test for context switches. A context switch occurs when the thread scheduler pauses one thread and starts another; it provides the illusion of multiple tasks happening at the same time on the same processor. Context switches can happen at any time, and you can't control when they happen during testing.

The difficulty of writing multithreaded applications stems from working with synchronization primitives, and knowing when to synchronize access to different data members.

Two different goals push you in opposing directions. For safety, you want maximum synchronization around shared data. But if you put synchronization primitives around everything, you lose any benefit of multiple threads: only one thread can work in your program at a time because you've locked all the others out.

Where to place the synchronization blocks in your program is application-specific, so I can only provide the broadest of guidelines. On the other hand, there are a number of recommendations you can follow that make it easier to provide the proper synchronization around your shared data.

The rest of this article will describe some of those techniques while building a simple application that calculates square roots using the Hero of Alexandria algorithm. Hero of Alexandria was a first century mathematician and engineer who invented a number of feedback control systems. His square root algorithm uses an iterative process. It works like this: To find the square root of any number X, you make a guess Y. Then, you take the average of X / Y and Y. That's your next guess. It's expressed this way as a lambda expression:

Y => ((X / Y) + Y) / 2;

For example, to find the square root of 9, you can start with the extremely bad guess of 1. Your next value of Y is ((9 / 1) + 1) / 2, or 5. As you repeat the process with each new guess the values are 3.4, 3.0235, and 3.0001. You can see it converges quickly.
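Coded directly in C#, the iteration might look like this (a minimal console sketch; the HeroSqrt name and the loop structure are my own choices, not the article's sample code):

```csharp
using System;

static class Hero
{
    // Iterate Y => ((X / Y) + Y) / 2 until successive guesses
    // agree to within the requested tolerance.
    public static double HeroSqrt(double x, double tolerance)
    {
        double guess = 1.0;   // any positive seed converges
        double next = ((x / guess) + guess) / 2;
        while (Math.Abs(next - guess) > tolerance)
        {
            guess = next;
            next = ((x / guess) + guess) / 2;
        }
        return next;
    }

    static void Main()
    {
        Console.WriteLine(Hero.HeroSqrt(9.0, 1E-12));   // converges to 3
    }
}
```

Each pass prints nothing and touches no shared state, which is exactly what makes this loop a good candidate for a worker thread later in the article.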

I built a simple Windows Forms application that calculates square roots using Hero's algorithm. The first version runs in a single thread (Listing 1). It raises an event after each iteration and updates three text boxes with the current approximation, the error, and the number of iterations. If you experiment with the single-threaded application, you'll find a number of issues. Most obviously, the cancel button doesn't work: Everything is in a single thread, so a click on the cancel button can't be processed until the entire algorithm is over.

I created this app with the intention of using it to illustrate how to write a multithreaded algorithm, so the listing incorporates several techniques to adapt this code for use in a multithreaded environment.

Simplify Data Integrity
The first decision I made was to use an immutable struct to communicate status between the Hero algorithm calculator and its clients. This single decision will do more to help you avoid data integrity errors than anything else you do. An immutable type is one whose state is fixed at construction and can never be modified. Many operations that modify the internal state of an object take multiple instructions, which means they can be interrupted by a context switch. For example, suppose that HeroStatus was mutable and contained this update method:

public void updateStatus(double approximation)
{
   guess = approximation;
   // Danger: context switch could happen here.
   numIterations++;
}

A context switch can happen between the assignment of the new approximation and the increment of numIterations. If that happens, the status is invalid until the writing thread resumes and finishes the update. In this application, it's only a status value, but that's not always the case: If two threads are cooperating to update shared members, you can introduce much more serious errors. Many, but not all, of these errors can be avoided by using immutable types as your data storage. None of the operations that could put the object in an inconsistent state are supported. A new object is created and fully initialized before it is assigned to the internal member, which ensures that the entire object is updated in one assignment (Listing 2).
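A sketch of what such an immutable status type might look like follows (the member names echo the article's HeroStatus, but this is my reconstruction, not Listing 2):

```csharp
using System;

// Immutable: all state is set in the constructor and exposed through
// read-only properties. Another thread can never observe a
// half-updated HeroStatus.
public struct HeroStatus
{
    private readonly double square;
    private readonly double guess;
    private readonly int numIterations;

    public HeroStatus(double square, double guess, int numIterations)
    {
        this.square = square;
        this.guess = guess;
        this.numIterations = numIterations;
    }

    public double Square { get { return square; } }
    public double CurrentGuess { get { return guess; } }
    public int Iterations { get { return numIterations; } }

    // How far the current guess is from the true root.
    public double Epsilon
    {
        get { return Math.Abs(square - guess * guess); }
    }

    // "Updates" return a new object instead of mutating this one.
    public HeroStatus NextGuess(double approximation)
    {
        return new HeroStatus(square, approximation, numIterations + 1);
    }
}

class StatusDemo
{
    static void Main()
    {
        HeroStatus s = new HeroStatus(9.0, 1.0, 0);
        HeroStatus next = s.NextGuess(5.0);
        Console.WriteLine(next.CurrentGuess);   // 5, while s is unchanged
    }
}
```

Because NextGuess hands back a fresh object, a reader either sees the old status or the new one, never a mixture of the two.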

In fact, even that assignment might not be an atomic operation. Something as simple as "a++" requires multiple machine instructions and can be interrupted by a context switch. The System.Threading.Interlocked class contains several members that you can use to perform non-interruptible operations: You can increment, decrement, add, or exchange values. All of them are guaranteed to finish without a context switch in the middle of the operation.
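For example, a counter shared between threads can be updated safely with Interlocked.Increment (a small console sketch; the Counter class and the thread counts are mine):

```csharp
using System;
using System.Threading;

static class Counter
{
    private static int iterations;

    public static int CountWithThreads()
    {
        iterations = 0;
        Thread[] workers = new Thread[10];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(delegate()
            {
                for (int j = 0; j < 100000; j++)
                {
                    // A plain iterations++ could lose updates when a
                    // context switch splits its read/modify/write steps.
                    Interlocked.Increment(ref iterations);
                }
            });
            workers[i].Start();
        }
        foreach (Thread t in workers)
            t.Join();
        return iterations;
    }

    static void Main()
    {
        Console.WriteLine(CountWithThreads());   // always 1000000
    }
}
```

Replace the Interlocked.Increment call with iterations++ and the total will come up short on most runs, because increments from different threads overwrite each other.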

You have some basic structures in place; now it's time to offload the calculations to the worker thread. You have several different options when it comes to creating new threads in .NET. You can work directly with the thread object, you can use the ComponentModel.BackgroundWorker, or you can use the QueueUserWorkItem method in the ThreadPool class. This article's example uses the thread pool to create work items. Using the thread pool can often be more efficient than creating your own threads. Threads in the thread pool are recycled, and new tasks are assigned to existing threads. When you create the threads yourself, you don't have that advantage: Once a thread has finished its work, it's done. If you want to perform new work with a thread that you create yourself, you must make a new thread object, and that is a much more expensive proposition than taking advantage of the thread pool.

The QueueUserWorkItem has another advantage: It's simple to use. This is all the code necessary to move the square root calculation to a worker thread:

private void buttonThreads_Click(object sender, 
   EventArgs e)
{
   cancel = false;
   buttonStart.Enabled = false;
   buttonStop.Enabled = true;
   ThreadPool.QueueUserWorkItem(
      new WaitCallback(this.startCalc));
}

private void startCalc(object unused)
{
   double src = double.Parse(textBoxSquare.Text);
   calc.DoCalc(src, 1E-12);
}

The startCalc method and the calculations in the HeroCalc object now run on a worker thread.

If the thread pool is this simple, you might be wondering why the other threading classes exist. The ThreadPool class and QueueUserWorkItem are the simplest APIs, but they do have limitations. The Thread class has many more features you can take advantage of when circumstances call for them: You can Join a thread (wait for it to complete), or Suspend and Resume it, to name only a few. If you need to take a more active role in managing your background threads, you need to move away from the thread pool and start creating and managing Thread objects yourself.
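As a sketch of the manual approach, here is the same calculation started on a Thread you create yourself, using Join to wait for the result (the class and method names are mine, not from the article's sample):

```csharp
using System;
using System.Threading;

static class ManualThread
{
    public static double SqrtOnThread(double square)
    {
        double result = 0;
        Thread worker = new Thread(delegate()
        {
            // Hero's algorithm; a fixed iteration count is plenty
            // for convergence at double precision.
            double guess = 1.0;
            for (int i = 0; i < 100; i++)
                guess = ((square / guess) + guess) / 2;
            result = guess;
        });
        worker.Start();
        worker.Join();   // block until the worker finishes
        return result;
    }

    static void Main()
    {
        Console.WriteLine(SqrtOnThread(9.0));   // prints 3
    }
}
```

Note the cost model: This Thread object is good for exactly one unit of work, whereas the thread pool would recycle an existing thread for the next request.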

The ComponentModel.BackgroundWorker implements many common thread patterns on top of the Thread class, but that subject is beyond the scope of this column. My recommendation is to use the thread pool when you can, the BackgroundWorker when you need its functionality in a Windows application, and the thread class when you must.

There is one more change you need to make now that the application is multithreaded. When you use Windows Forms, only the main UI thread can work with the controls in your windows. As coded, the progress reporting event handler fails in a multithreaded application. You need to check the form's InvokeRequired property; if the property is true, you need to use the Control.Invoke() method to marshal the call to the main thread (Listing 3).

Don't Lock the this Object
The progress reporting event works now, but there are no synchronization primitives anywhere in this application. To some extent, this is by design: I tried to minimize the data shared between threads. In practice, you should try to do the same. The less data you share, the smaller the chance that you will face synchronization issues.

So far, so good. But let's expand the sample by looking at a few basic concepts of synchronization. Let's add a capability to the HeroCalc type that forces you to confront these issues. One of the more interesting facets of Hero's algorithm is how the value of Epsilon changes with each successive iteration. In this step, you'll add a list box to the UI that shows the last 10 values of Epsilon.

Note that if this weren't a demo, I'd use the HeroStatus type and simply add new values to the list in the progress handler, which would avoid all the synchronization problems; however, the goal here is to illustrate locking techniques, so I'll show you a different way of approaching this issue.

Adding the code to store subsequent values of epsilon requires a small change to HeroCalc.DoCalc(). After each iteration, you add the most recent value of epsilon to the list. Next, you need to add a method to retrieve a copy of the most recent N values in the list:

public double[] GetEpsilon(int numValues)
{
   // return the last N items in the epsilon array.
   int returnedItems = Math.Min(numValues, 
      epsilonHistory.Count);
   double[] rVals = new double[returnedItems];
   int index = epsilonHistory.Count - returnedItems;
   epsilonHistory.CopyTo(index, rVals, 0, 
      returnedItems);
   return rVals;
}

You can see that this method makes a copy of the storage, rather than returning a reference (even a read-only one) to the internal storage. That's important in a multithreaded environment: You should not allow live references to an internal volatile data structure. If the UI thread started working with the values in the list while the calculating thread modified it, the iteration would throw an exception because the collection has changed. As a rule of thumb, you should make a copy and send the copy across the thread boundary anytime you intend to move data between threads.

The next step is to look at where you need to add synchronization to this algorithm. GetEpsilon is intended to be called across thread boundaries, so you need to protect the member variables it touches, chiefly epsilonHistory. A tempting first attempt wraps the whole method in a lock on the current object:

public double[] GetEpsilon(int numValues)
{
   // Bad idea. Don't lock this
   lock (this)
   {
      // return the last N items in the epsilon array.
      int returnedItems = Math.Min(
         numValues, epsilonHistory.Count);
      double[] rVals = new double[returnedItems];
      int index = epsilonHistory.Count - returnedItems;
      epsilonHistory.CopyTo(index, rVals, 0, 
         returnedItems);
      return rVals;
   }
}

Locking the this object is always a bad idea. By locking the current object, you are locking a publicly accessible object. That means any other piece of code anywhere in the application can acquire (or attempt to acquire) a lock on the same object. In small sample applications, that can work. However, in a large codebase, your chances of avoiding deadlock are almost nonexistent. You'd need to sweep the entire codebase to find any location that might be trying to acquire a lock.

Instead, you should always acquire a lock on a private variable. That way, all the locking code can be confined to one object: the object that owns the private variable. Deadlocks can still occur, but it's easier to find and remove them.

The lock statement acquires a lock on a single object, which means you need an object to lock. This general purpose code creates a lock inside any class:

private object synchHandle;
private object GetSynchHandle()
{
   Interlocked.CompareExchange(ref synchHandle, 
      new object(), null);
   return synchHandle;
}

To use it, you lock the return value of GetSynchHandle():

lock (GetSynchHandle())
{
   epsilonHistory.Add(currentStatus.Epsilon);
}

Your object declares a private variable that serves as the target of the lock. When the synch object is requested, CompareExchange atomically stores a new object in synchHandle, but only if synchHandle is still null. If another thread got there first, the previously allocated lock target is left in place; either way, GetSynchHandle returns the same object to every caller.
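If every constructor of your type runs before any other thread can call into the object, a simpler alternative (my suggestion, not part of the article's listing) is to allocate the handle eagerly in a readonly field:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class HistoryStore
{
    // Allocated once, before any caller can race on it. Locking this
    // private handle keeps all the locking code inside this class.
    private readonly object synchHandle = new object();
    private readonly List<double> epsilonHistory = new List<double>();

    public void Add(double epsilon)
    {
        lock (synchHandle)
        {
            epsilonHistory.Add(epsilon);
        }
    }

    public int Count
    {
        get { lock (synchHandle) { return epsilonHistory.Count; } }
    }
}

class HistoryDemo
{
    static void Main()
    {
        HistoryStore store = new HistoryStore();
        store.Add(8.0);
        Console.WriteLine(store.Count);   // prints 1
    }
}
```

The lazy CompareExchange version earns its keep mainly when the lock target must be created on first use; for most classes, the eager readonly field is easier to reason about.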

The last step is to put locks around the appropriate blocks of code. You need to lock the code where epsilonHistory's state is important. This snippet updates the code for GetEpsilon():

public double[] GetEpsilon(int numValues)
{
   double[] rVals;
   lock (GetSynchHandle())
   {
      // return the last N items in the epsilon array.
      int returnedItems = Math.Min(
         numValues, epsilonHistory.Count);
      rVals = new double[returnedItems];
      int index = epsilonHistory.Count - returnedItems;
      epsilonHistory.CopyTo(index, rVals, 0, 
         returnedItems);
   }
   return rVals;
}

Of course, you need to lock the other locations where the epsilonHistory member is accessed and modified. That requires changing the DoCalc() method:

public void DoCalc(double square, double epsilon)
{
   if (epsilon < 0)
      throw new ArgumentException(
         "tolerance must be a positive number");
   currentStatus = new HeroStatus(square, 1, 0);
   lock (GetSynchHandle())
   {
      epsilonHistory.Clear();
      epsilonHistory.Add(currentStatus.Epsilon);
   }

   if (raiseProgress(false))
      return;
   while (currentStatus.Epsilon > epsilon)
   {
      nextIteration();
      lock (GetSynchHandle())
      {
         epsilonHistory.Add(
            currentStatus.Epsilon);
      }
      if (raiseProgress(false))
         return;
   }
   raiseProgress(true);
}

Note that there are two different locks inside DoCalc(). These locks illustrate the other major concept you need to keep in mind when locking: Don't hold locks for too long, or while calling into another thread. If the entire DoCalc() method were synchronized, this application would deadlock: Any other thread (such as the UI thread) that needs to access the epsilonHistory member would be locked out until the entire algorithm finished. You do need to place synchronization primitives around blocks that change state or access volatile state. But locking too much and too often reduces your application to a single-threaded app, and an extraordinarily complicated single-threaded app at that.

In any case, this overview should give you a few pointers to the areas you need to examine to create a well-behaved multithreaded application. It should also give you a healthy respect for the kinds of things that can go wrong. Play around with the locations of the synch blocks in DoCalc (download the sample code for this article). You might be surprised at how easily you can freeze the application by causing deadlocks. You can also get inconsistent answers, and even crash GetEpsilon(), if you remove all the locks and the timing is right. Multithreaded applications might be the next big wave, but it's important to wade through that wave carefully, because there are many potential hazards lurking beneath the surface.
