
Plug Into J2EE-.NET Interoperability

Your J2EE and .NET apps must interact. This overview presents the standards and available technologies that can help you shape interoperable solutions.

There are two 800-pound gorillas in the software development world today: Microsoft .NET and the Java 2 Platform, Enterprise Edition (J2EE). As much as fanatics in one camp might want to pretend that the other camp does not exist or is not relevant, the fact is that both camps are equally prevalent today and will continue to be for the foreseeable future. Eventually, all developers, regardless of camp affinity, are forced to face this reality and interoperate with the other camp.

Java quickly gained popularity when it first emerged in the mid-1990s. Some of this popularity was based on hype about Java's security; some developers were drawn in by the allure of technologies, such as applets, that would later prove to be letdowns. Almost certainly the strongest pull came from the marketing slogan, "Write Once, Run Everywhere." In practice, this concept is never as straightforward as one would hope, but the slogan is not far from the truth. A more accurate slogan, however, would be "Write Once, Test Everywhere."

The technical term for this capability is portability. Portability implies hardware and (to an extent) software independence. IT shops find portability invaluable because they can leverage their existing hardware. Software companies also love portability because they can develop their products on one set of software/hardware and then sell the products to a wide range of customers.

Today, as a direct result of new technologies, portability is less important. Instead, it has been replaced by another, more sophisticated capability called interoperability. Webopedia.com defines interoperability as "the ability of software and hardware on different machines from different vendors to share data." This definition certainly sums it up. One of the key reasons portability was so important in the past is that it was extremely difficult to achieve interoperability between software components, even on the same hardware platform, let alone different hardware platforms.

Don't get me wrong; portability is still useful. As a Java developer, I almost always develop and test code on my Dell-Windows XP laptop. When I am satisfied that it works (which is, unfortunately, never on the first try), I deploy it to the target platform, which is often some flavor of Unix on HP, Sun Microsystems, or IBM hardware. I could never have done this without Java's portability.

The reason interoperability is ultimately more significant than portability is this: portability guarantees that my Java code will run on another platform, but only as long as all the software involved is written in Java. Interoperability, on the other hand, guarantees that I can use or access other software regardless of the language it is written in and the platform it is deployed on.

The Need to Interoperate
The need to interoperate arises more often than one would think, especially as you look at larger IT shops. Most large companies have standardized technology stacks. But just because the stacks are standardized doesn't mean they won't evolve or grow over time. As a result, different solutions created at different times might use different technologies. Eventually, there is always a business need for these solutions to interoperate.

Furthermore, large companies typically grow by acquiring smaller companies that have no knowledge of or concern for the technology stack the larger company uses. The smaller company's technology could be not only different but completely incompatible with the larger company's.

For example, most large companies I have worked with standardize on J2EE, and most small companies I have seen use Microsoft technologies such as .NET. Sooner or later, the business will find a need that forces your J2EE solutions to interoperate with the acquired .NET solutions.

Sometimes, the need to interoperate is mandated by conflicting requirements on the same project. For example, I was involved in a project for a U.S. state in which the state's "technology authority" had standardized on J2EE, but the agency sponsoring the project preferred .NET. As a result, the presentation tier is based on J2EE (Struts), the middle tier is in .NET, and the back end is a legacy mainframe database. The need to interoperate definitely exists today and will only grow stronger with time.

The Interoperability Standards
A major reason interoperability was so difficult in the past was the lack of agreed-upon, widely adopted standards for such interoperation. Today, two well-defined and mature standards exist that can help alleviate interoperability pains.

The Internet Inter-ORB Protocol (IIOP)
Recognizing the need for interoperability, the Object Management Group (OMG) defined the Common Object Request Broker Architecture (CORBA) in the early 1990s. The OMG is a consortium of more than 700 companies, including Microsoft. As part of the CORBA specification, the OMG defined the protocol requirements formally known as the General Inter-ORB Protocol (GIOP).

Unfortunately, CORBA implementations from different vendors had difficulty interoperating. A few years later, in an attempt to guarantee interoperability between the different CORBA implementations, the OMG defined IIOP, a concrete realization of the GIOP specification over TCP/IP. All CORBA 2.0-compliant object request brokers (ORBs) must support IIOP, so they are interoperable.
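
To make this concrete, here is a minimal sketch of a CORBA client written in Java against the org.omg.* classes that ship with the JDK. The corbaloc address, the service name, and the OrderService/OrderServiceHelper types are assumptions for illustration (the latter would normally be generated from your own IDL). The point is that, because the wire protocol is IIOP, the server behind the naming service could be built with any CORBA 2.0-compliant ORB, in any language.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    public class IiopClient {
        public static void main(String[] args) throws Exception {
            // Initialize the ORB; it speaks IIOP to any CORBA 2.0-compliant
            // server, regardless of the vendor or language on the other side.
            ORB orb = ORB.init(args, null);

            // Look up the naming service (host and port are assumptions).
            org.omg.CORBA.Object ref =
                    orb.string_to_object("corbaloc::somehost:1050/NameService");
            NamingContextExt naming = NamingContextExtHelper.narrow(ref);

            // Resolve and narrow an application object; OrderService and
            // OrderServiceHelper would come from your own IDL-generated stubs.
            // OrderService orders =
            //         OrderServiceHelper.narrow(naming.resolve_str("OrderService"));
        }
    }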

Unfortunately, not all software applications are built using CORBA. Furthermore, even though Microsoft is part of the OMG, it did not base its DCOM solution on IIOP. Instead, it created its own proprietary protocol called Object Remote Procedure Call (ORPC) for DCOM, which is incompatible with IIOP. A few commercial "bridge" products cross the IIOP/ORPC boundary, but such bridges never gained popularity.

The Simple Object Access Protocol (SOAP)
Recognizing the need for a better solution to the interoperability problem, Microsoft, IBM, and DevelopMentor came together in the late 1990s to create the first version of SOAP. Several points about SOAP are noteworthy:

  • Whereas IIOP, ORPC, and Java Remote Method Protocol (JRMP) are binary protocols, SOAP is a text-based protocol that uses XML. Using XML for data encoding gives SOAP some unique capabilities. For example, debugging applications based on SOAP is much easier because reading XML is easier than reading a binary stream of raw numbers (see the sketch after this list). And because all the information in SOAP is in text form, SOAP is much more firewall-friendly than IIOP, ORPC, or JRMP (used in Java Remote Method Invocation, or RMI).
  • Because it is based on vendor-agnostic technologies, namely XML and HTTP, SOAP appeals to all vendors, including Microsoft, Sun, and IBM.
  • Because SOAP is text-based and thus by definition verbose, communications using binary protocols such as IIOP will in almost all cases outperform those that use SOAP as the underlying protocol.
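
To illustrate the first point, here is a minimal sketch that uses SAAJ (the javax.xml.soap API included in the Java Web services stack) to build and print a SOAP message. The operation and element names are made up for illustration; the point is simply that the message that goes over the wire is readable XML text.

    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPBody;
    import javax.xml.soap.SOAPElement;
    import javax.xml.soap.SOAPEnvelope;
    import javax.xml.soap.SOAPMessage;

    public class SoapEnvelopeDemo {
        public static void main(String[] args) throws Exception {
            // Build an empty SOAP message and add one (made-up) request element.
            SOAPMessage message = MessageFactory.newInstance().createMessage();
            SOAPEnvelope envelope = message.getSOAPPart().getEnvelope();
            SOAPBody body = envelope.getBody();
            SOAPElement request = body.addChildElement(
                    envelope.createName("getOrderStatus", "req", "urn:example:orders"));
            request.addChildElement("orderId").addTextNode("12345");

            // Because SOAP is plain XML, the wire format can be printed and
            // read during debugging, unlike a binary IIOP or JRMP stream.
            message.writeTo(System.out);
        }
    }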

The Interoperability Technologies
Almost all interoperability technologies are based on either IIOP or SOAP. In addition, each solution can be placed into one of two categories: open and proprietary. Open technologies are created by consortiums such as OASIS or by standards bodies such as the W3C. Proprietary technologies are created by software companies such as Borland with the intent of selling them as commercial products.

Open Technologies Such as Web Services
One of the open technologies is Web services. Web services are built on SOAP and define the use of related specifications such as Web Services Description Language (WSDL), Universal Description, Discovery, and Integration (UDDI), and WS-Security. Web services support is built into Microsoft .NET, and Sun has expanded the J2EE platform to support Web services as well. So using Web services to interoperate between Microsoft .NET and J2EE becomes a no-brainer. For example:

Call .NET Web Services From J2EE

  • Step 1: Define XML Schemas that describe the request and response SOAP message contents.
  • Step 2: Use the xsd.exe tool that ships with Microsoft .NET to create .NET classes from the XML Schemas.
  • Step 3: Create the .NET Web services that use the classes you created in Step 2.
  • Step 4: On the J2EE side, use a tool that reads the WSDL of the Web service to create Java stubs that will call the Web service. I typically use Apache Axis, which is an open source SOAP implementation and Web service platform in Java. Apache Axis has a useful tool called wsdl2java that converts the WSDL into Java stub classes.
  • Step 5: Create your Java classes that call the stubs generated in Step 4 (see the sketch below).

If you're consuming existing .NET Web services, simply start from Step 4.
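
Here is a minimal sketch of Steps 4 and 5, assuming the .NET Web service publishes its WSDL at a URL such as http://server/OrderService.asmx?WSDL and that wsdl2java generated a service locator and port interface named OrderServiceLocator and OrderServiceSoap (the actual names and packages depend on your WSDL):

    // Step 4 (command line): generate Java stubs from the .NET service's WSDL:
    //   java org.apache.axis.wsdl.WSDL2Java http://server/OrderService.asmx?WSDL
    //
    // Step 5: call the generated stubs from your own Java code.
    import example.orders.OrderServiceLocator;  // generated by wsdl2java (assumed names)
    import example.orders.OrderServiceSoap;

    public class OrderStatusClient {
        public static void main(String[] args) throws Exception {
            // The locator reads the service endpoint from the WSDL; it can
            // also be overridden here if the .NET service moves.
            OrderServiceLocator locator = new OrderServiceLocator();
            OrderServiceSoap port = locator.getOrderServiceSoap();

            // The call goes out as a SOAP request over HTTP; Axis maps the
            // .NET service's SOAP response back to Java types.
            String status = port.getOrderStatus("12345");
            System.out.println("Order status: " + status);
        }
    }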

Proprietary Technologies
As I mentioned, software companies create proprietary technologies with the intent of selling them as commercial products. One recent example that made waves is Borland Janeva. Janeva allows developers to invoke J2EE and CORBA objects and services from .NET clients. To do so, the developer uses the Janeva compiler to create .NET stubs from the Java or CORBA Interface Definition Language (IDL) interfaces. The .NET developer uses these stubs to invoke the J2EE/CORBA services; internally, the stubs use IIOP to communicate with the J2EE/CORBA side.

Using IIOP gives Janeva some advantages over Web services (which use SOAP):

  • As mentioned earlier, IIOP is a binary protocol and in most cases will outperform communications that use SOAP.
  • SOAP (and Web services) is mostly used over HTTP, which is a stateless, request-response protocol. IIOP is connection-oriented and allows a richer interaction model, with stateful services and additional protocol-level facilities such as load balancing and fault tolerance.

There are a few limitations as well:

  • Janeva is a proprietary product and therefore has all the limitations associated with a product that's tied to and dependent on a vendor.
  • You can't use Janeva in the other direction, that is, to call .NET services from J2EE or CORBA.

Whether your preferred platform is J2EE or .NET, you will be required to interact with software developed on both platforms. In this article, I presented an overview of the need to interoperate, the standards that address it, and the types of available technologies, so that you can achieve this interoperation with the least amount of pain and the highest degree of confidence that your solutions will continue to work as both J2EE and .NET mature and evolve.

About the Author

Tarak Modi is a senior specialist with North Highland, a management and technology consulting company. His professional experience includes working with COM, MTS, COM+, .NET, J2EE, and CORBA. He is a coauthor of Professional Java Web Services (Wrox Press, 2002). Visit his personal Web site at http://www.tekNirvana.com.
