
Trends in the Evolution of the ESB

What's new with ESBs? Dave Chappell, a leader in the technology, gives his vision of the state of ESB advances.

An enterprise service bus does much more than get messages from point A to point B.
by Dave Chappell

October 23, 2006

Over the past four years we have seen a great deal of favorable adoption of the enterprise service bus (ESB) as a product category in the industry. Progress Software may have started the ESB technology category with its introduction of Sonic ESB in 2002, but the ESB phenomenon has taken on a life of its own since then. Application platform suite (APS) vendors such as IBM and BEA, enterprise application integration (EAI) vendors such as TIBCO and webMethods, and even Web services toolkit vendors have all adopted the ESB moniker.

Even British Telecom has embedded an ESB in a hardware box with their BT Integrate offering. We have also seen some significant changes in the way technologists in IT departments view the role of an ESB as an important part of their IT landscape. As is the case with most new technology categories, it has taken some time for the hype to subside. Now that the dust has settled we can see the practical uses of ESBs as they get deployed in the real world.

Analyst views from firms such as Gartner and Forrester have also shifted from hailing ESB as the all-you'll-ever-need for service-oriented architecture (SOA) to more of an implied part of the infrastructure in support of an SOA. Reports from these analysts and from thought-leading vendors have provided greater clarity on what makes up the definition of an ESB. The constant that still remains is that an ESB is used to connect, mediate, and control the interactions between a diverse set of applications that are exposed through the bus using service-level interfaces.

As I spend most of my time traveling around the world talking with people about SOA and ESB, I have noticed a shift in their thinking and witnessed a trend of enlightenment over the past two years. The question on most people's minds has shifted from "what is SOA?" and "what is an ESB, and why would I need one?" to "I know I need an SOA, I know I need an ESB, but what is an ESB best suited for, and what other technology infrastructure do I need to implement my SOA?"

Process-Oriented, Event-Driven Architectures
Point-to-point integrations typically have been addressed with simple request-reply, synchronous-style interactions. In this type of environment, a Web service intermediary acting as a proxy for data transformation and routing can work well. However, the real sweet spot where an ESB has shown its power and flexibility is in process-oriented, event-driven architectures (EDA). When doing broad-scale integrations across many disparate applications, the key to success is an architecture that allows each application to be decoupled from the rest of the SOA by using the ESB as a form of mediation.

Each application should be made to operate independently of the others by being capable of receiving individual tasks, or units of work, that are processed as asynchronous events. If there is another action to be taken as a result of receiving and processing the asynchronous event, then so be it. That action doesn't necessarily represent a response to a request that the original initiator of the action is waiting for. It's more likely that the next action to occur is itself an asynchronous event that is placed back onto the bus to be forwarded along to the next application. The process manager that is part of the ESB, which controls the interactions between the individual invocations, is what takes care of getting that event sent along to the next application.
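To make the interaction style concrete, here is a minimal sketch using plain JMS, the kind of messaging API that typically sits underneath an ESB, rather than any particular vendor's ESB container or process definitions. The queue names and the pricing step are made up purely for illustration.

```java
import javax.jms.*;

// Sketch of the fire-and-forget, event-driven style described above: a service
// receives a unit of work as an asynchronous event, processes it, and places
// the resulting event back onto the bus for the next application. The queue
// names ("orders.validated", "orders.priced") are hypothetical; the caller is
// assumed to have created and started the Connection.
public class EventForwardingService implements MessageListener {
    private final Session session;
    private final MessageProducer nextStep;

    public EventForwardingService(Connection connection) throws JMSException {
        this.session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Listen for incoming units of work...
        MessageConsumer in = session.createConsumer(session.createQueue("orders.validated"));
        in.setMessageListener(this);
        // ...and forward the resulting event to the next step on the bus.
        this.nextStep = session.createProducer(session.createQueue("orders.priced"));
    }

    @Override
    public void onMessage(Message message) {
        try {
            String order = ((TextMessage) message).getText();   // assumes a text payload
            String priced = price(order);                        // local processing of this unit of work
            nextStep.send(session.createTextMessage(priced));    // fire-and-forget; no reply is awaited
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private String price(String order) { return order + ";priced=true"; }
}
```

In a real deployment the ESB's process manager, not the service itself, would decide which destination the next event goes to; the hard-coded "next step" queue above simply stands in for that routing decision.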

The same is also true even if an application makes a request and expects a response to continue its processing. The request can be placed on the bus and sent across a distributed path of many applications, data sources, routers, and transformers in accordance with the process that is defined for it. That series of actions may all be processed as independent events and the response may come later as an independent event.
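As a sketch of how a response can still arrive later as an independent event, the usual technique at the messaging layer is a correlation ID carried on the reply. Again, this uses plain JMS for illustration, not a specific ESB API; the destination names and payload are assumptions.

```java
import javax.jms.*;
import java.util.UUID;

// Sketch of asynchronous request/reply: the requester fires the request onto
// the bus, carries on with other work, and matches the eventual reply by
// correlation ID when it arrives as its own event.
public class AsyncRequester {
    public static void sendRequest(Session session) throws JMSException {
        Queue requests = session.createQueue("credit.check.requests"); // hypothetical names
        Queue replies  = session.createQueue("credit.check.replies");

        TextMessage request = session.createTextMessage("customer=42;amount=1000");
        String correlationId = UUID.randomUUID().toString();
        request.setJMSCorrelationID(correlationId);
        request.setJMSReplyTo(replies);
        session.createProducer(requests).send(request);

        // The reply is handled later, on a listener thread, as an independent event.
        MessageConsumer consumer = session.createConsumer(
                replies, "JMSCorrelationID = '" + correlationId + "'");
        consumer.setMessageListener(reply -> {
            // continue the business process here when the response event arrives
        });
    }
}
```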

The event-driven interaction style is a major advantage in keeping applications decoupled from one another. When plugged into an ESB, each application doesn't need to understand the intimate details of how all the other applications want to be interacted with. Protocols, data formats, and different interaction styles are all handled by the ESB.

EDA can only work effectively if certain conditions are in place. First, the ESB must have reliable, asynchronous messaging and high-availability capabilities. In a synchronous point-to-point integration scenario, if an application fails to receive a response from a request, it can code in some error handling and retry. In the asynchronous scenario, the application simply fires off an event to the ESB and forgets about it until something else occurs to trigger further processing. There will be many points where the whole business transaction will be nowhere else but in transit through the bus. The ESB must therefore be capable of surviving failure and also sustaining the essence of the business transaction across complex topologies that may involve failures and recoveries along the way.
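At the messaging layer, "nowhere else but in transit through the bus" translates into persistent, store-and-forward delivery. The sketch below shows only the producer side in plain JMS; how an ESB clusters brokers for continuous availability is vendor-specific and not shown, and the queue name is hypothetical.

```java
import javax.jms.*;

// Sketch of reliable fire-and-forget publication: the broker persists the
// message before the send completes, so the event survives a broker restart
// while it is in transit through the bus.
public class ReliablePublisher {
    public static void publish(Connection connection, String payload) throws JMSException {
        // A transacted session makes the hand-off to the bus atomic with any
        // other messaging work done in the same unit.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(session.createQueue("claims.intake"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker failure/restart
        producer.send(session.createTextMessage(payload));
        session.commit(); // the event is now the broker's responsibility, not the sender's
    }
}
```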

Another condition that has to be met is that the applications themselves need to be written or adapted to this event-driven style of interaction. Where the ordering of things is important, each application may need to check for or compensate for events occurring out of order, or the ESB itself may need to be capable of ensuring the order of events across complex deployment topologies and across failures and recoveries.
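One common way to compensate for out-of-order arrival is a resequencer that buffers events until the next expected sequence number shows up. The sketch below is generic Java, independent of any ESB product, and assumes the sender stamps each event with a sequence number.

```java
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.function.Consumer;

// Sketch of a resequencer: arrivals are buffered and released to the
// downstream handler strictly in sequence-number order.
public class Resequencer<T> {
    private final SortedMap<Long, T> pending = new TreeMap<>();
    private final Consumer<T> downstream;
    private long nextExpected = 1;

    public Resequencer(Consumer<T> downstream) {
        this.downstream = downstream;
    }

    public synchronized void onEvent(long sequenceNumber, T event) {
        pending.put(sequenceNumber, event);
        // Release every event that is now contiguous with what has been delivered.
        while (!pending.isEmpty() && pending.firstKey() == nextExpected) {
            downstream.accept(pending.remove(nextExpected));
            nextExpected++;
        }
    }
}
```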

Now that ESB has become an established product category, the numerous ESB vendors are competing on architecture, connectivity options, ease of use, and quality of service (QoS) issues such as continuous availability. Since we have been putting ESBs to work in the real world for a few years now, there is better clarity on what other infrastructure is necessary to build, deploy, manage, and extract business value from a widespread SOA across an organization. The list is fairly substantial, but let's focus on a few of the important ones: Web services management and SOA governance, advanced Web services (WS-*), complex event processing, and semantic data integration.

SOA management and governance. The path toward SOA varies drastically at each company and sometimes even across different groups within the same company. An organization may have different development teams experimenting with Web services toolkits or using the Web services capabilities that come as part of their enterprise resource planning (ERP) system or their favorite application server environment. There may be multiple ESBs deployed or none at all.

An SOA management platform provides visibility, security, control, and policy enforcement across an end-to-end business process as the process executes across these many disparate environments. In addition to tracking service-level agreements (SLAs), it may be able to provide business-level views to understand the business-level impact of failures in an SOA. It may also be able to dynamically tune the SOA environment to enforce the SLA or enforce any other business policy. This enforcement applies whether the interaction styles are point to point or asynchronous and event driven.

Depending on the implementation of the Web services management (WSM) platform, it may extend its reach across an ESB, application server platforms, and database access; and it may even provide business-level metrics and reporting at a business-process level as the service execution traverses different platforms, databases, protocols, and appliances (see Figure 1).

Advanced Web services (WS-*). Advanced Web services specifications such as WS-ReliableMessaging, WS-Security, WS-Addressing, and WS-Policy are becoming a reality as they gain momentum and support from the community of vendors who participate in creating these specs and work to ensure interoperability with each other. ESBs have in the past been labeled as proprietary, mostly because of having a vendor-specific, message-oriented middleware layer as part of their core architecture. As ESBs begin to implement these advanced Web services protocols, the proprietary stigma is becoming a nonissue. In fact, an ESB can help make these WS-* specifications even more real by providing an implementation on top of the ESB's architecture, adding scalability, manageability, and availability to advanced Web services. In addition, these specifications will make ESBs more capable of interoperating as other platforms and application vendors implement the specs.

Complex event processing. Complex event processing (CEP), sometimes known as event-stream processing (ESP), is a relatively new field in the area of EDA. Significant traction is already occurring for CEP in areas of algorithmic trading, fraud detection in financial services, and supply-chain management with Radio Frequency Identification (RFID) processing and filtering.

CEP is about capturing and analyzing high-volume streams of events, identifying sophisticated patterns, and applying time-aware event correlation and decision logic against those patterns. Typically, a CEP engine that is plugged into the ESB as a service will perform these tasks. The stream of events may come through the ESB or may come from another source such as an external stock ticker feed or an RFID reader. The course of action taken when a complex pattern of events is identified may vary, but it can range from alerting a business user in a business activity monitoring (BAM) dashboard to invoking a service or a business process through the ESB (see Figure 2).
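In practice a dedicated CEP engine does this work, but the core idea of time-aware correlation can be sketched in a few lines of plain Java: count related events inside a sliding time window and fire an action when a threshold pattern is met. The event shape, threshold, and window below are made up for illustration.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// Toy illustration of time-aware event correlation: if one card produces five
// or more authorization attempts within a minute, flag the pattern (e.g., to
// alert a BAM dashboard or kick off a process on the bus).
public class FraudPatternDetector {
    private static final int THRESHOLD = 5;
    private static final Duration WINDOW = Duration.ofMinutes(1);

    private final Deque<Instant> recentAttempts = new ArrayDeque<>();

    /** Returns true when the pattern is detected for this stream of events. */
    public synchronized boolean onAuthorizationAttempt(Instant timestamp) {
        recentAttempts.addLast(timestamp);
        // Slide the window: discard attempts older than one minute.
        Instant cutoff = timestamp.minus(WINDOW);
        while (!recentAttempts.isEmpty() && recentAttempts.peekFirst().isBefore(cutoff)) {
            recentAttempts.removeFirst();
        }
        return recentAttempts.size() >= THRESHOLD;
    }
}
```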

Proprietary Format to Canonical Format
Semantic data integration. An ESB makes it easy to translate disparate application data from one format to another by providing a means for inserting data transformation engines as services into business processes on an as-needed basis. Service patterns such as validate, enrich, transform, and operate (VETO) have begun to emerge as a best practice for converting between a common data format and individual applications' proprietary data formats.
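The VETO pattern is easy to picture as a chain of small services. The sketch below wires generic Java stages in sequence just to show the order of steps; it is not the configuration syntax of any particular ESB, and the document shapes and stage logic are hypothetical.

```java
// Sketch of a VETO chain: on a real bus each stage would be an independently
// deployable service; here they are composed in-process to show the sequence.
public class VetoPipeline {

    interface Stage<I, O> { O apply(I input); }

    static final Stage<String, String> VALIDATE = xml -> {
        if (!xml.contains("<order")) throw new IllegalArgumentException("not an order document");
        return xml;
    };
    static final Stage<String, String> ENRICH = xml ->
            // add data the target service needs but the source didn't supply
            xml.replace("</order>", "<customerTier>gold</customerTier></order>");
    static final Stage<String, String> TRANSFORM = xml ->
            // convert the proprietary document into the canonical order format
            xml.replace("<order", "<canonicalOrder").replace("</order>", "</canonicalOrder>");
    static final Stage<String, Void> OPERATE = canonical -> {
        // hand the canonical document to the target service / next step on the bus
        System.out.println("invoking target service with: " + canonical);
        return null;
    };

    public static void process(String proprietaryXml) {
        OPERATE.apply(TRANSFORM.apply(ENRICH.apply(VALIDATE.apply(proprietaryXml))));
    }
}
```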

In a process-oriented, asynchronous, event-driven environment, data transformations cannot be handled in a point-to-point fashion. For an application to be truly decoupled from all others, the data being communicated needs to be translated from the application's proprietary format into a common or canonical data format that is used across the organization. In this fashion, each integration point of an application plugging into the SOA needs to be concerned only with how the data gets converted to and from the proprietary format and the canonical format. The transformation services associated with the other applications through the bus take care of converting to the target data formats on an as-needed basis. This approach dramatically reduces the complexity of adding new applications into the SOA or changing the interaction patterns of the existing ones. The additional benefit of taking this approach is that mediation services such as routers, splitters, and aggregators can be written to conform to the canonical format.
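A small way to see why this scales is that each application contributes exactly one converter pair rather than a mapping to every other application. The interface and registry below are a hypothetical shape for such converters, not an API from any ESB product.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: each application plugs in one converter to and from the canonical
// format; routing between any two applications then goes through the canonical
// form, so adding an Nth application adds one converter, not N-1 one-off maps.
public class CanonicalRouter {

    interface CanonicalConverter {
        String toCanonical(String proprietaryDoc);
        String fromCanonical(String canonicalDoc);
    }

    private final Map<String, CanonicalConverter> convertersByApp = new HashMap<>();

    public void register(String applicationName, CanonicalConverter converter) {
        convertersByApp.put(applicationName, converter);
    }

    /** Move a document from one application's format to another's via the canonical form. */
    public String route(String fromApp, String toApp, String proprietaryDoc) {
        String canonical = convertersByApp.get(fromApp).toCanonical(proprietaryDoc);
        // ...mediation services (routers, splitters, aggregators) work on the
        // canonical form at this point...
        return convertersByApp.get(toApp).fromCanonical(canonical);
    }
}
```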

Mapping tools can create data translations to convert from one data format to another, but looking at the bigger picture, what has typically been missing is a means for managing the mappings and the relationships between large numbers of complex data models across a diverse set of applications and data sources. In a world of point-to-point integration, it's easy to imagine a set of one-off, point-to-point data transformations between each pair of applications that communicate. Semantic data mapping tools have begun to emerge as complementary technologies to the ESB, and this is the next big frontier that needs to be addressed by those building large-scale SOAs.

Bring It All Together
As we have learned through the adoption and deployment of ESB technology by many enterprises, there is a lot to consider in addition to an ESB when planning your strategy for standardizing on SOA infrastructure. You will have lots of services hosted on a variety of platforms, with interactions between them that need to be secured and governed.

EDAs are beginning to emerge as the best approach toward implementing an SOA where applications are truly decoupled from each other. Large-scale SOA projects using an ESB are broadening the reach of application integration in ways that were never possible before. However, the challenge of building and managing the complexity of semantic mappings among disparate representations of enterprise data is daunting and is an area that will require continued research and new innovations.

In keeping with the true spirit of SOA, these technologies can each be used standalone and plugged into anything using Web services interfaces, and they can also be brought together in a common integration through an ESB.

About the Author
Dave Chappell is the vice president and chief technology evangelist for Progress Software's Sonic product unit. He has more than 20 years of experience in the software industry in development, sales, support, and marketing. He has authored many articles and books on technology and architecture, including Enterprise Service Bus (O'Reilly Media 2004), the first and definitive book on ESB.
