DevSmart
Parallel Pickle
What's your strategy for going parallel?
Computing loads are increasing across industries, and more companies are confronting the challenge of migrating their apps to higher-performance, multi-core hardware. Beyond raw performance, moving to multi-core hardware and distributed systems can offer significant benefits such as reduced costs and lower energy use.
So naturally, parallel computing has become a hot topic throughout our industry. C++, Java, .NET and even legacy languages are all gaining tools and primitives to make parallel development easier. With these tools, taking advantage of two, four or even eight cores is now within the reach of most dev teams. But as chip vendors prepare to move from multi-core to "many-core" -- 32, 64, 128 and more threads in a single server -- in 2009, parallel computing is going to get more complex very quickly. Now put that into a distributed environment with multiple servers, and design, debugging, testing and production management all get much more complex. A well thought-out parallel strategy can help you avoid the multi-core dilemma.
Don't Rip-and-Replace
For any app to benefit from multi-core, it must be written to be multithreaded or run in a container that can effectively make it multithreaded. Writing the app itself to be multithreaded can work fine for two, four or eight cores, but it gets hard to manage for larger systems. Instead, developers can pursue a strategy of "wrap and extend": treat an existing legacy app as a collection of services and deploy a container that can run multiple instances of those services. There's some work involved here if you have a monolithic app, but it's much less effort than rewriting it.
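To make the idea concrete, here's a minimal C++ sketch of wrap-and-extend: a legacy routine (the hypothetical legacy_price_order) is exposed as a service, and a small hand-rolled "container" runs several instances of it on worker threads. It's an illustration of the pattern under those assumptions, not any particular vendor's container.

    // Sketch of wrap-and-extend: the container below is hypothetical, and
    // legacy_price_order() stands in for an existing single-threaded routine.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    std::string legacy_price_order(const std::string& order) {
        return "priced:" + order;                 // unchanged legacy logic
    }

    class ServiceContainer {                      // runs N instances of the service
    public:
        explicit ServiceContainer(int instances) {
            for (int i = 0; i < instances; ++i)
                workers_.emplace_back([this] { run(); });
        }
        void submit(std::string order) {
            { std::lock_guard<std::mutex> lock(mutex_); queue_.push(std::move(order)); }
            ready_.notify_one();
        }
        ~ServiceContainer() {                     // drain the queue, then stop
            { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
            ready_.notify_all();
            for (auto& w : workers_) w.join();
        }
    private:
        void run() {
            for (;;) {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return done_ || !queue_.empty(); });
                if (queue_.empty()) return;       // done_ is set and nothing left
                std::string order = std::move(queue_.front());
                queue_.pop();
                lock.unlock();
                std::cout << legacy_price_order(order) + "\n";   // the service call
            }
        }
        std::queue<std::string> queue_;
        std::mutex mutex_;
        std::condition_variable ready_;
        bool done_ = false;
        std::vector<std::thread> workers_;
    };

    int main() {
        ServiceContainer container(4);            // e.g. one instance per core
        for (int i = 0; i < 8; ++i)
            container.submit("order-" + std::to_string(i));
    }                                             // destructor drains and joins

The point is that the legacy routine is untouched; only the wrapper decides how many instances run and on how many cores.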
And while it's true that there's not a "plug-and-play" solution available, there are tools and containers that can help in many use cases. In most situations, using these tools to optimize existing apps for multi-core is less risky, less time-consuming and less costly than rewriting your app. You'll want to look for tools that eliminate the need for app developers to be experts in writing multithreaded code. If you plan to use a container, make sure that it can handle your business-app requirements, which may include message ordering, forks and joins in the business process, human interaction and long-running processes.
Process Parallelism
There's a link between service-oriented architecture (SOA) and multi-core; using services can be a key part of your parallel strategy. Traditional SOA isn't typically used for low-latency or transaction-intensive apps because it can't keep up with their scalability and performance demands. However, there's a proven approach that lets you scale out your SOA without rewriting services: simply run multiple instances of a service across many nodes on a network of servers -- a model called "business-process parallelism."
This approach enables both legacy and service-oriented apps to be deployed in distributed, multi-core environments without the low-level coding associated with traditional tools. In-memory service calls avoid HTTP round trips and XML marshaling and un-marshaling, which can be very expensive, and deliver significant performance gains. Using business-process parallelism with SOA also makes it easier to separate concurrency from application logic, so changing requirements for parallel computing -- adding the latest multi-core server, for example -- won't affect your app logic.
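As a rough sketch of what in-memory service calls look like from code, the fragment below keeps a registry of services as plain C++ function objects and fans one business step out across several instances with std::async. The service names and payloads are made up, and a real deployment would spread instances across nodes rather than just threads.

    // Sketch of in-memory service calls for business-process parallelism.
    // Service names and payloads are illustrative only.
    #include <functional>
    #include <future>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    using Service = std::function<std::string(const std::string&)>;

    int main() {
        // In-memory service registry: each call is a plain function invocation,
        // with no HTTP round trip or XML (un)marshaling on the hot path.
        std::map<std::string, Service> registry;
        registry["creditCheck"] = [](const std::string& o) { return o + ":approved"; };
        registry["pricing"]     = [](const std::string& o) { return o + ":$42"; };

        // Run many instances of the same business step in parallel; each
        // std::async call stands in for one service instance on one core or node.
        std::vector<std::future<std::string>> results;
        for (int i = 0; i < 8; ++i) {
            results.push_back(std::async(std::launch::async,
                registry["creditCheck"], "order-" + std::to_string(i)));
        }
        for (auto& r : results) std::cout << r.get() << '\n';
    }

Notice that the business logic inside the lambdas knows nothing about concurrency; the fan-out happens entirely outside it.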
C++ Apps on Multi-Core
Large-scale C++ apps are likely to be one of the biggest beneficiaries of the move to multi-core. True, moving your existing C++ apps to multi-core takes some planning, but it may not be as much work as you think. Design a solid strategy for parallelism, and your existing C++ apps can continue to serve you for years to come.
C++ on multi-core is a great fit in many cases, including high-performance, low-latency apps and embedded systems on mobile devices. This configuration can combine low latency and a small memory footprint (scale-up) with high throughput (scale-out). Not all apps need to scale in both directions, but for those that do, C++ on multi-core can be a powerful combination. You can also mix languages: C++ services for the parts that demand low latency or a small memory footprint, with C# or Java for developer productivity in the rest of the application.
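As a small illustration of scale-up inside a single C++ process, the sketch below splits a batch of work across however many cores the machine reports; the workload itself (summing order values) is purely hypothetical.

    // Sketch of scale-up: partition one batch of work across the available cores.
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<double> order_values(1000000, 0.5);     // stand-in workload
        unsigned cores = std::max(1u, std::thread::hardware_concurrency());

        std::vector<double> partial(cores, 0.0);
        std::vector<std::thread> threads;
        size_t chunk = order_values.size() / cores;

        for (unsigned c = 0; c < cores; ++c) {
            size_t begin = c * chunk;
            size_t end = (c + 1 == cores) ? order_values.size() : begin + chunk;
            threads.emplace_back([&, c, begin, end] {
                // Each thread sums its own slice into its own slot.
                partial[c] = std::accumulate(order_values.begin() + begin,
                                             order_values.begin() + end, 0.0);
            });
        }
        for (auto& t : threads) t.join();

        double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "total across " << cores << " cores: " << total << '\n';
    }

The same code runs unchanged on two cores or thirty-two; only hardware_concurrency() changes.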
Virtualization and Compute Grids
Virtualization is great when you need to make multiple instances of an app available to users. Compute grids and clusters are great for "embarrassingly parallel" problems that can be handled with task scheduling. For business apps that have workflow dependencies, message ordering and other complexities that make it harder to go parallel, the package or runtime that's handling the parallelism has to be aware of the business rules and app logic.
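Here's a deliberately trivial sketch of the "embarrassingly parallel" case, where plain task scheduling is enough because no task depends on another; score_account is a hypothetical independent task. The moment message ordering or workflow dependencies appear, this kind of simple fan-out stops being sufficient.

    // Sketch of an embarrassingly parallel batch: independent tasks, no ordering.
    #include <cmath>
    #include <future>
    #include <iostream>
    #include <vector>

    double score_account(int id) {            // hypothetical independent task
        return std::sqrt(static_cast<double>(id)) * 3.0;
    }

    int main() {
        std::vector<std::future<double>> tasks;
        for (int id = 0; id < 16; ++id)
            tasks.push_back(std::async(std::launch::async, score_account, id));

        for (int id = 0; id < 16; ++id)
            std::cout << "account " << id << " -> " << tasks[id].get() << '\n';
    }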
The reality is that no single technology provider offers a complete solution to the multi-core dilemma. In some cases, IT departments will need to look for combined solutions that have been tested and optimized for specific application types. Parallel computing is clearly one of the most promising ways to meet the need for greater performance, but dev teams must learn the skills and adopt the tools required to build successful parallel-computing solutions.
About the Author
Patrick Leonard is VP of product development at Rogue Wave Software Inc.