Blog

Why Interoperability Must Succeed for the Cloud to Survive

During the past two years, not a week has gone by without several press releases, blogs, editorials or articles about cloud interoperability crossing my electronic desk, enough to qualify the term as an official buzzword. The problem is that 'interoperability' has several meanings, a not surprising observation given that the term usually requires an asterisk next to it. It's like Barry Bonds' single-season home run record: it exists, but it's not very satisfying, and someday somebody will do it right. While "what's right" may be debatable, what's certain is that the cloud represents an opportunity to finally achieve true interoperability. The reason is simple: the success of the cloud, its very business model, depends on providing interoperability that is secure, transactional and fully automates the construction of the consuming side.

The implication is that current interoperability mechanisms, like Web Services, are essentially vendor-specific implementations that enable creating and integrating distributed solutions within the vendor's product portfolio. Web Services may be based on agreed-upon standards, but those standards really only describe enabling mechanisms without requiring that everybody implement everything exactly the same way, right down to the least significant bit. The important thing to understand is that interoperability is a two-edged sword for the business model of enterprise platform vendors. How do you give the customer assurances that a heterogeneous enterprise can be built, but still protect your turf? It's not for nothing that the standards don't have any teeth; after all, they're written by committees of competitors.

The premise that Web Service standards deliver interoperability between vendor implementations is belied by the gulf that separates promised expectation and actual experience. The promised expectation is that you can use any enterprise platform vendor's development environment, choose New->Project->Consume Web Service, point to any Web Service implemented in any other enterprise vendor's server, and automatically, out of the box and within minutes, generate the consuming end point and execute a remote call. This means no searching of blogs and forums, no posting appeals for help and no hand-tooling code at some lower level in the stack. Sorry, interoperability fans, but when I take the latest GlassFish platform from Oracle, including the Metro WSIT stack, point its Eclipse-based development environment at the simplest .NET WCF "Hello World" Web Service WSDL and watch it throw an exception, not only are my expectations not met, but my productivity has decreased while my time-to-deployment has increased (not to mention my frustration level). I suspect there are more than a few managers and sales reps at both Oracle and Microsoft who are perfectly fine with this result; after all, it's not a bad business model to make interoperability easy among your own enterprise products and much harder with your competitor's. That's precisely why the cloud has the opportunity to finally deliver universal interoperability: its business model must be fundamentally different from that of the enterprise platform vendor because its source of revenue is different.

The cloud is about services, not software sales. Cloud service revenues are based on the clock, bandwidth consumed, megabytes used and users connected. Cloud vendors, particularly those providing infrastructure and platforms as a service, need to worry about maximizing their ability to securely host and integrate as many different platforms, technologies and languages as possible; they simply can't worry about whether it's their stuff or their competitors'. Simply put, there can't be any protectionism in the cloud because interoperability is critical to the financial success of cloud offerings. Hence the constant stream of links to cloud interoperability blogs, articles and PR copy that have been popping up on my displays these past two years.

Seemingly, the current cloud vendors may actually get it; however, there's a small problem: most of what passes for cloud interoperability is between platform-specific code (the stuff running in the cloud written using a particular platform) and the cloud API, resources and storage services. From a business perspective, Amazon's IaaS cloud must provide Microsoft Windows Server environments and therefore offers its EC2 API, resources and services in both Java and .NET. However, the .NET EC2 API isn't going to enable better interoperability between a WCF Web Service running in the Amazon cloud and Oracle's GlassFish running on the ground. The cloud APIs, resources and services do not enable interoperability between different platform-specific applications, regardless of where they're running. Microsoft's PaaS Azure cloud can host a JRE, but that's mostly because current JVMs support the Windows API. While Microsoft provides a Java API, it only supports those Azure and AppFabric resources and services that use underlying Web Services; Microsoft doesn't support a Java API for Azure Cloud Drives, for example. However, that's moot because the Java Azure and AppFabric APIs aren't going to provide better interoperability between Web Services hosted by a JEE app server running in an Azure VM and a WCF client on the ground.

Why is this important? As long as the cloud is perceived as a place to only host web sites, the current cross-platform cloud APIs will probably suffice, as the only required interoperability is with a browser. A company that moves its website to the cloud may lower operating costs and the cloud vendor will realize revenue, but this is only half of the potential of the cloud. The real promise of the cloud is enabling companies to build their own SaaS solutions and expose service APIs to any customer. A company can expose its own secure, scalable cloud service and generate a revenue stream with minimum operating, development and sales costs. Everybody's happy. The end customer receives value and can consume the services in the platform of its choice without the operating costs associated with traditional software licensing. The company providing the service and the cloud vendor can both enjoy an additional revenue stream. Sounds great, except that the cloud only provides interoperability internally between the hosted code and its own APIs, resources and storage services. Interoperability between the service running in the cloud and the end customer is still based on existing platform-specific Web Services, complete with the built-in protectionism. If the customer cannot quickly and easily consume a cloud service within its preferred platform, the entire value proposition fails and nobody's happy.

What's the solution? Simple: the industry needs to provide interoperability based on the cloud business model. Interoperability must be a fundamental capability that resides in the cloud rather than just a mechanism that enables remote procedure calls. From the early days of RPC/IDL through to the present WSIT, interoperability has been about providing a mechanism for integrating distributed applications that leverages software sales, not about providing a service. The very concept of the cloud connotes a level of abstraction, availability and automation that moves beyond the RPC mechanism of Web Services. The cloud's motto should be "any object, on any platform, in any language, anywhere at any time". In other words, I think the cloud requires a common object paradigm that provides fine-grained interoperability complete with asynchronous events, automatic discovery, automatic instantiation of the object on the consuming side, and simple, automatic pay-per-use. Of course, this isn't a new idea; in the past, industry-wide consortiums like the OMG have tried to solve the problem, but they have always been constrained by the protectionism inherent in the software sales business model.

The cloud will not be successful without true interoperability. For the end customer, the cloud must provide a level of abstraction and ease of use that enables automatic, platform-agnostic consumption of services with minimal support and development overhead, not to mention at lower overall cost based on pay-per-use. For the company providing the services, the cloud must embrace platform-agnostic interoperability that's easy to implement, in addition to being scalable, secure and globally accessible. For the cloud vendor, true interoperability removes barriers to adoption, moving more third-party software into the cloud and increasing volume and revenue. If there's "some assembly required" by the end customer, or endless workarounds, or only certain platforms are supported in the cloud or on the ground, the business opportunity shared by the end customer, service provider and cloud vendor will fail. It's time to remove the obstacles that permeate current interoperability implementations, obstacles based on an obsolete business model that is anathema to the success of the cloud. If 'interoperability' continues to sport an asterisk, then so will the cloud, and that will be the end of a pretty cool thing.

This article was written by Bill Heinzman of JNBridge, and was originally posted on DZone.

ISVs and the Cloud

If you’ve been following JNBridge in the news, it’s no secret that we’ve been preparing a cloud offering. Over the last few months, we’ve learned a lot about what’s easy and not so easy to do on the various cloud platforms. But one thing that stands out is that cloud providers don’t seem to have devoted too much thought to how independent software vendors (ISVs) play in the cloud.

If you ask most people how software vendors can move into the cloud, they will say that the vendor should take their traditional products, put them in the cloud, and offer them as services. And that’s often fine. But what about software vendors like JNBridge who create components that other developers incorporate in their programs? In most cases, offering the component as a service doesn’t make sense.

The main challenge to running components like JNBridgePro in cloud-based programs has to do with prosaic but essential issues like licensing and billing. Windows Azure has absolutely no provision for third-party licensing (determining whether a user is entitled to use the product) and billing (charging for the use of the product). I would imagine that Microsoft feels that this should be the purview of some other third-party vendor, but I also imagine that potential vendors in this space are reluctant to invest in offerings until the demand materializes. It’s a chicken-and-egg problem. If Microsoft is serious about their software partners producing for Azure (and not just end-user customers creating custom applications), they will have to jump-start the market themselves, by offering their own billing mechanism. Since they are already billing Azure users, this shouldn’t be a stretch.

Unlike Microsoft, Amazon does have a licensing and billing service, called DevPay, that allows third-party developers running on Amazon's EC2 (Elastic Compute Cloud) service to determine whether a user is authorized to run their products, and to charge for that use in a variety of ways. This service almost has it right, but it's not quite there. It seems to be centered on Amazon's S3 (Simple Storage Service), but will not work with the more modern EC2 images that are backed by EBS (Elastic Block Storage), which is the mechanism most current EC2 users employ. In addition, the DevPay documents do not seem to have been updated in several years, and many questions on the DevPay support forum have gone unanswered, which leads to questions about Amazon's commitment to this service.

Don't worry, we've gotten around this problem, and it's not preventing us from coming out with our cloud-based offering. But it surprises me that this new frontier of the cloud does not seem to have been designed to accommodate software vendors who want their products to work in the cloud. One would think that these barriers to entry wouldn't be there, and that the cloud providers would do all they could to encourage software vendors to help settle this new frontier, but they haven't. I'm particularly surprised that Microsoft, which, in almost all other areas, goes out of its way to cultivate a thriving partner ecosystem, has not done that with Azure, and doesn't seem to have thought through the issues that would encourage such an ecosystem. And without that robust partner community, cloud adoption will be that much slower for everyone.

Java in the Azure Cloud

Microsoft has been promoting the use of Java in the Azure cloud, and has been providing lots of material showing how it's done. They've also been distributing Java tools for Azure, including an SDK for Eclipse, and an Azure accelerator for Tomcat. Their latest offering is the "Windows Azure Starter Kit for Java," which provides tools for packaging and uploading Java-based Web applications running on Tomcat or Jetty. In considering this, the main question that comes up is "Why?"

It doesn’t work

"It doesn't work" is an extreme statement, isn't it? And Microsoft has demonstrated that it can create Java Web apps and run them on Azure, so why do I say it doesn't work? The problem is that these examples are extremely constrained. For example, Azure makes a virtue of its lack of a persistence mechanism. Instances can fail or restart at any time, which means that data isn't persistent between instances, and applications therefore must not depend on persistent data. However, both Java Web applications and the servers they run on do depend on some sort of persistence or state. With effort, the applications can be re-engineered, but one has to wonder whether it's worth the effort, or whether the time might be better spent moving to a different cloud offering where this re-engineering doesn't need to be done.

There's also the problem that the Tomcat and Jetty servers themselves require persistent data to be stored. And the problem gets even worse when we go from a simple servlet container to a full-fledged application server like JBoss, WebLogic, or WebSphere: application servers, and the Java EE programs that run on them, rely even more deeply on persistent data. While some Java EE application servers can be altered to use alternative persistence mechanisms like Azure storage, the process is arcane to most Java EE developers and not worth the trouble; it would probably be simpler to use a cloud offering where the application server can be deployed without alteration. In addition, a default application server relies on an extensive array of network endpoints for a variety of protocols, a number that exceeds what a worker role or a VM role allows. To run an app server on Azure, it is necessary to cut down the number of endpoints to the point where much useful functionality is lost.

While it may be possible to construct Java EE examples that work as demos, it's unlikely that any real Java EE apps, web-enabled or otherwise, can be migrated to the Azure cloud without drastic, and likely impractical or impossible, modifications to the underlying application servers to accommodate the persistence and networking issues.

It’s not what users want

Beyond the technical issues in getting an app server running on the Azure platform, we need to ask why we would want to do this on a Platform-as-a-Service (PaaS) such as Azure, when it would be far simpler to run such an application on an Infrastructure-as-a-Service (IaaS) offering like Amazon EC2. It's one thing to say it can be done; it's another thing to actually want to do it when easier alternatives exist. The market seems to bear this out: a recent Forrester study shows that Eclipse (that is, Java) developers prefer Amazon EC2 or Google App Engine, while Visual Studio (that is, .NET) developers prefer Windows Azure. Developers really don't want to go through the contortions of packaging up their Java app plus the app server or servlet container, then configuring and starting it up as a special action under elevated privileges in an Azure worker role, just so that they can run Java EE, when they can easily install their application on a convenient Amazon EC2 image.

What users do want, it doesn’t do

Users will want to do things with Java on Azure, but not what the creators of the Azure Starter Kit for Java think they want to do. Rather than running a self-contained Java server in an Azure role (something they can more easily do elsewhere), they will want to integrate their Java with the .NET code more directly supported by Azure. For example, they may well have a Java library or application that they want to integrate with their .NET application. Depending on the Java side’s architecture, the Java might run in the same process as the .NET code, or it might run in its own process, or even a separate worker role. In any case, the Java side wouldn’t need to run in a full-fledged app server; it would simply expose an API that could be used by the .NET application.

A scenario like this is exactly the sort of thing that JNBridgePro supports. Java can be called from .NET code, and can run in the same process or in separate processes running on different machines. Up until now, those JNBridgePro deployments have been in on-premises desktop and server machines. In our upcoming JNBridgePro Cloud Edition, it will be just as straightforward to implement these interoperability scenarios in the cloud.

In summary, there’s a role for Java in the Azure cloud, but we think Microsoft is pushing the wrong scenarios. The Azure Starter Kit for Java is clever, but it (incompletely) solves a problem that cloud developers don’t have, while ignoring the real-world problems that cloud developers do have.

JNBridgePro 6.0 is available! Integrate Java & .NET in the cloud.

Whew! We’ve made our committed deadline: JNBridgePro version 6.0 is now available for download.

This new version supports Java/.NET interoperability projects where one or both of the end points are in the cloud. JNBridgePro 6.0 enables you to build and distribute integrated applications anywhere, including:

  • Intra-cloud, where both end points reside in the same cloud, either in the same or separate instances
  • Inter-cloud, where the instances belong to different clouds — even across cloud vendors
  • Ground-to-cloud and cloud-to-ground, where one end point is in a cloud instance and the other is an application running on the ground

JNBridgePro 6.0 extends its full set of interoperability features from the ground to the cloud, so now you can build integrated applications that run anywhere. Read more details and check out some sample use case scenarios.

You may have recently seen a wee bit of press about our launch. As Michael Coté from RedMonk says: “There’s so much valuable data and process locked in Java and .Net applications that can’t just be left behind in whatever cloud-y future is out there – and refactoring all of that to be cloud friendly would be an onerous task. Instead, you need tools that help modernize those pools.”

Try it out for yourself! We’re eager to hear what you think.

Using a Java SSH library to build a BizTalk Adapter

One of the most frequently used BizTalk adapters is the File Adapter. While its most common use is in tutorials and customer evaluations, the File Adapter is essential in situations where BizTalk needs to be integrated with other applications that are self-contained and not designed for connectivity. In such situations, the other applications can simply drop documents in a folder, and the File Adapter simply reads the new documents as they appear. The adapter can also write documents to a folder, from where they can be read by the other applications.

Recently, we had a conversation with a customer who was using the standard File Adapter to process files. The files had been transferred to the Windows server from a Linux server via a Java Secure Shell (SSH) client, a perfect example of file-level interoperability between platforms. Secure Shell is a secure network protocol that enables shell services between client and server machines, providing remote command execution and file transfer. SSH servers and clients are most prevalent on Linux/Unix platforms, where they are a standard component of the OS. There are, of course, SSH clients and servers that run on Windows, but they are rarely used.

Obviously, for the customer, the best implementation would be a BizTalk adapter that supports SSH, specifically the SFTP channel for file transfer, in order to avoid the extra step of transferring the file via SSH and then reading it with the File Adapter. In addition, an SSH/SFTP adapter would enable secure, encrypted file transfer over the public internet between, for example, a company's IT infrastructure on the ground and resources that reside in the cloud. We decided to investigate whether it would be possible to use the File Adapter example in the BizTalk Adapter Framework SDK as a starting point to build an SSH/SFTP adapter. This would require an SSH API, preferably one that's open source. We based the adapter on a Java SSH client library, and used JNBridgePro to bridge from the .NET-based adapter framework, because there is no current open source .NET SSH API. There are several open source Java SSH class libraries available; we chose JCraft's Java Secure Channel (JSch) because of its prevalence in Java development tools and IDEs. JSch is distributed under a BSD-like license.

[Image: SSH/SFTP adapter architecture]

JNBridgePro 6.0 allows us to proxy the JSch class library directly to .NET. During execution, a JNBridgePro .NET proxy represents an instance of a Java class running in the JVM. The interoperability bridge between the CLR and the JVM manages method calls between the proxy and the Java object, data marshaling/representation and object life cycle. Here's a closer look at how JNBridgePro works.
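To make the proxying concrete, here is a minimal, illustrative sketch (not taken from the adapter itself) of what calling a proxied Java class from C# looks like once the bridge has been initialized. java.util.Vector is one of the JRE classes the proxy tool pulls in as a supporting class, as described later in this article.

// Illustrative only: using a proxied JRE class from C#. The proxy assembly and
// JNBShare.dll must be referenced, and the bridge must already be initialized
// (see the JNBridgeProInit class below).
java.util.Vector list = new java.util.Vector();  // instantiates a real java.util.Vector in the JVM
list.addElement("hello from .NET");              // the call is marshaled across the CLR/JVM bridge
int count = list.size();                         // reads the element count back from the Java object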

The BizTalk Adapter Framework .NET File Adapter example

The BizTalk Adapter Framework SDK is part of any BizTalk 2006, or greater, installation. The source for the .NET File Adapter example resides in %INSTALL_DIR%\Microsoft BizTalk Server 2010\SDK\Samples\AdaptersDevelopment\File Adapter. We used the version in BizTalk Server 2010, although the example source hasn't changed since BizTalk 2006. Visual Studio 2010 and JNBridgePro complete the development environment. First, copy the example's root directory to a more suitable working directory. The BizTalk Adapter Framework (BAF) base classes, a convenient layer that implements many important mechanisms required for a robust adapter, must also be added to the solution. The base classes can be found in the directory BaseAdapter.

Preliminary Tasks

The very first step is to make sure we can build and deploy the File Adapter before modifying the code. This involves building the three projects we've consolidated into one Visual Studio solution: AdapterManagement, DotNetFile and Microsoft.Samples.BizTalk.Adapter.Common. In order to build the adapter, the only thing we need to do is repoint the reference to the adapter base class library in the DotNetFile project, because the path has changed. While it is possible to unload the project from Visual Studio and edit the project file as a text file, it is simpler and less error-prone to remove and then re-add the reference.

Deploying a BizTalk adapter is done using registry entries. There are a couple of '.reg' files with the example. We'll use the registration entries file, StaticAdapterManagement.reg. However, because the example was copied from the BizTalk install directory to a working directory, the registry file must be edited to modify the paths to the management and run-time assemblies. We can also edit the name of the adapter stored in the TransportType key from "Static DotNetFile" to "JNBPSshFile" to avoid confusion.

Bridging the JSch Java API to .NET

JCraft's open source SSH class library is written in Java. A BizTalk adapter is written in .NET C#. We'll use JNBridgePro to create .NET proxies of the JSch classes that represent the Java classes during development and at run-time. To do so, we'll add a JNBridge DotNetToJavaProxies project to the adapter solution using the provided Visual Studio plug-in. We also need the JSch library, the JAR file jsch-0.1.45.jar.

The first task is to configure the proxy project’s class path to point to the JSch JAR file. The JNBridgePro development environment consists of a proxy tool that uses reflection to expose the Java classes—the public methods and variables—in the JAR file. Before we start exposing the Java classes, we’ll need to tell the proxy tool what classes we need to proxy to .NET. We could expose everything in the JAR file, but by looking at JCraft’s sample Java source (which we’ll draw heavily from when we code in C#), we can determine in advance the pertinent classes. Add the classes from the class path using the proxy tool, as shown below.

[Image: Adding classes from the class path in the proxy tool]

Because we’ve checked the box Include supporting classes, the proxy tool will also load all class dependencies from the com.jcraft.* namespace as well as those from the JRE, like java.util.Vector. We can then choose exactly which Java classes we would like proxied into an assembly. The next screen shot shows the proxy tool with the loaded classes on the left and the selected classes on the right. The far right displays the interface of the class Channel. The abstract Channel class will be proxied along with the class that extends it, ChannelSftp.

[Image: The proxy tool, with loaded classes on the left and selected classes on the right]

Coding: Change from a File Adapter to a Secure FTP adapter

The adapter currently understands how to consume and place files locally. We need to modify the configuration interface for the adapter to support SSH/SFTP. We'll leave the transport handler configurations alone and modify only the transmit and receive location configurations. We'll be able to leverage some of the existing properties, like directory and fileMask, but will have to add properties that handle SSH connection and credential arguments. We're going to keep things simple and use basic security, a user name and password, instead of more robust public/private key pair encryption. Hardening the security of user credentials is left as an exercise to the reader. We then add the following to the source files ReceiveLocation.xsd and TransmitLocation.xsd.

<xs:element name="host" default="localhost" type="xs:string">
  <xs:annotation>
    <xs:appinfo>
      <baf:designer>
        <baf:displayname _locID="hostName">in resource file</baf:displayname>
        <baf:description _locID="hostDesc">in resource file</baf:description>
        <baf:category _locID="SSHCategory">in resource file</baf:category>
      </baf:designer>
    </xs:appinfo>
  </xs:annotation>
</xs:element>
<xs:element name="port" default="22" type="xs:int">…</xs:element>
<xs:element name="user" type="xs:string">…</xs:element>
<xs:element name="password" type="xs:string">…</xs:element>

The block of XSD code shows the schema for the host property that's used to populate the location property grid and create the XML document that contains the location's configuration. The strings used for the display name, description and so on are in the resource file, DotNetFileResource.resx. The schema for the remaining properties will be similar. To correctly parse the XML document and retrieve the configuration values, add the following C# code (again, just using the host property as an example; the other properties will have similar code) to the classes DotNetFileReceiveProperties and DotNetFileTransmitProperties, both defined in the file DotNetFileProperties.cs. The static utility method IfExistsExtract() is defined in ConfigProperties.cs in the adapter base classes.

private string hostName;
public string HostName { get { return this.hostName; } }

public override void ReadLocationConfiguration(XmlDocument configDOM)
{
    // to this method, add lines like this for each property:
    this.hostName = IfExistsExtract(configDOM, "/Config/host", "localhost");
}
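For completeness, here is a hedged sketch of what the extraction of the remaining SSH properties might look like. The property names and defaults mirror the schema above, and the accessors match the way the endpoint code uses them later (this.properties.UserName, PassWord and PortNumber); the int.Parse conversion for the port is our own assumption, since the base adapter classes may provide their own numeric extraction helper.

// Sketch only: the remaining SSH properties, mirroring the host example above.
private string userName;
private string passWord;
private int portNumber;
public string UserName { get { return this.userName; } }
public string PassWord { get { return this.passWord; } }
public int PortNumber { get { return this.portNumber; } }

// inside ReadLocationConfiguration(XmlDocument configDOM):
this.userName = IfExistsExtract(configDOM, "/Config/user", "");
this.passWord = IfExistsExtract(configDOM, "/Config/password", "");
this.portNumber = int.Parse(IfExistsExtract(configDOM, "/Config/port", "22")); // assumption: no numeric helper in the base classes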

Once done with modifying the adapter’s properties to accommodate an interface for SSH, we can fire up the BizTalk Admin Console, create a receive location and open the property sheet. Here’s what it looks like.
[Image: The receive location property sheet in the BizTalk Administration Console]

Coding the SSH/SFTP adapter

All of the C# code will be added to the DotNetFile project. We’ve already added the properties code to DotNetFileProperties.cs, now it’s time to start coding the adapter. First, add a couple of references to the project: one to point to the JNBridgePro proxy assembly, the other to point to the JNBridgePro run-time, JNBShare.dll.

Initiating the Java/.NET bridge

The bridge between the CLR and the JVM will use a shared memory transport. The JVM will run as a thread within the BizTalk Server process. Thankfully, JNBridgePro hides all the details. A new source file, JNBridgeProInit.cs, will define the static class JNBridgeProInit. The class has one method, init(). Because a process can have only one JVM instance, we'll need to provide a mechanism to ensure that JNBridgePro is initialized once, and only once, per process. The bridge needs the path to the Java Virtual Machine DLL, the JNBridgePro run-time JAR files and a class path pointing to the JSch JAR file.

public static class JNBridgeProInit
{
    private static bool isJNBInitialized = false;
    private static object jnbInitLock = new Object();

    public static void init()
    {
        lock (jnbInitLock)
        {
            if (isJNBInitialized == false)
            {
                com.jnbridge.jnbproxy.JNBRemotingConfiguration.specifyRemotingConfiguration(
                    com.jnbridge.jnbproxy.JavaScheme.sharedmem
                    , @"C:\Program Files\Java\jre6\bin\client\jvm.dll"
                    , @"C:\Program Files\JNBridge\JNBridgePro v6.0\jnbcore\jnbcore.jar"
                    , @"C:\Program Files\JNBridge\JNBridgePro v6.0\jnbcore\bcel-5.1-jnbridge.jar"
                    , @"C:\SSH Adapter\File Adapter\Jsch\jsch-0.1.45.jar");
                isJNBInitialized = true;
            }
        }
    }
}

Coding a .NET implementation of a Java interface

When connecting to an SSH server, the server negotiates security credentials by asking the client for a user name and password or a private key and pass phrase. JCraft's JSch provides a convenient Java interface, com.jcraft.jsch.UserInfo, that, when implemented, returns the required information. When we built the proxy assembly using the JNBridgePro proxy tool, the Java interface was mapped to a .NET interface. We can implement the interface in the class UserInfoImpl, which we'll place in the file UserInfoImpl.cs. Because an instance of UserInfoImpl will be created on the .NET side, but called on the Java side of the bridge, we'll have to use a JNBridgePro Callback attribute to decorate the interface implementation. A sketch of the implementation is shown below.
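The implementation itself is short. The sketch below is illustrative rather than a copy of the shipping code: the member names come from JSch's UserInfo interface, the constructor shape matches the way we call it later (password first, then passphrase), and the exact namespace of the JNBridgePro Callback attribute should be checked against the product documentation.

// Illustrative sketch of UserInfoImpl.cs. The Callback attribute is the JNBridgePro
// attribute mentioned above; check the JNBridgePro documentation for its namespace.
[Callback]
public class UserInfoImpl : com.jcraft.jsch.UserInfo
{
    private string password;
    private string passphrase;

    public UserInfoImpl(string password, string passphrase)
    {
        this.password = password;
        this.passphrase = passphrase;
    }

    public string getPassword() { return this.password; }
    public string getPassphrase() { return this.passphrase; }

    // Keep things simple: accept every prompt and ignore informational messages.
    public bool promptPassword(string message) { return true; }
    public bool promptPassphrase(string message) { return true; }
    public bool promptYesNo(string message) { return true; }
    public void showMessage(string message) { }
}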

Initiating JSch

Now it's time to code against the JCraft JSch class library. Because our project references the proxy assembly we built in the JNBridgePro project, we can code in C# as if JSch were implemented in .NET. In fact, we can use the example Java code that ships with JSch as a starting point. First, we need a method to initiate a JSch context, connect to an SSH server and create the SFTP channel. We'll add the following private instance variables to the two endpoint classes, DotNetFileTransmitterEndpoint and DotNetFileReceiverEndpoint, in DotNetFileReceiverEndpoint.cs and DotNetFileTransmitterEndpoint.cs.

private com.jcraft.jsch.JSch jschContext;
private com.jcraft.jsch.Session jschSession;
private com.jcraft.jsch.ChannelSftp sftpChannel;

We'll now add the following method to the two endpoint classes. Notice the line where an object of type Channel is cast to type ChannelSftp. If the cast to System.Object is not done first, Visual Studio will complain. Using the proxy tool, we can take a look at the class hierarchy. The public class ChannelSftp inherits from ChannelSession, which in turn inherits from the abstract base class Channel. The problem is that ChannelSession is not public, while the abstract base class and ChannelSftp are public. Java has the concept of default access (no private, public or protected modifier specified), which allows a class to be accessible within its own package, like ChannelSession. Java allows a cast from a base class to a subclass even if intermediate subclasses have default access. The C# language requires that the entire class hierarchy be public if any subclass is public; if not, Visual Studio refuses to build the assembly. The point is that this is a language restriction, not a restriction, or lack of mechanism, in the target byte-code. JNBridgePro constructs the proxy assembly directly in MSIL, where classes are either Public or NotPublic. The intermediate cast to Object just gets past the language restriction imposed by the C# specification and Visual Studio. Yes, it's a hack, but if we didn't use the indirection, we'd have to code a Java wrapper to do the cast, create a .NET proxy of the wrapper and then call the wrapper to cast on the Java side.

private void initJsch()
{
    UserInfoImpl ui = new UserInfoImpl(this.properties.PassWord, "");
    this.jschContext = new JSch();
    this.jschSession = this.jschContext.getSession(this.properties.UserName
        , this.properties.HostName
        , this.properties.PortNumber);
    this.jschSession.setUserInfo(ui);
    this.jschSession.connect();
    Channel chnl = this.jschSession.openChannel("sftp");
    chnl.connect();
    this.sftpChannel = (com.jcraft.jsch.ChannelSftp)((Object)chnl);
}

Paths

We'll need to add some code to perform transformations between Windows-style and Unix-style paths, such as the directory property that tells the adapter where to look for, or place, files that end in the specified qualifier. Even when an SSH server is running on Windows, paths must be in a Unix-like format, so the path C:\Users\MyAccount\Downloads must be changed to /users/myaccount/downloads. Luckily, JSch provides a method for path transformation. We'll add the path transformation code to the Open() method of both endpoint classes. Here's the receive endpoint code.

JNBridgeProInit.init();
this.initJsch();
string windowsPath = this.properties.Directory;
windowsPath = windowsPath.Substring(Path.GetPathRoot(windowsPath).Length);
windowsPath = (windowsPath.StartsWith("/") ? windowsPath : ("/" + windowsPath));
string unixPath = this.sftpChannel.realpath(windowsPath);
this.sftpChannel.cd(unixPath);

Receiving Files: SFTP get

The Java type byte is signed. JNBridgePro converts Java signed bytes to the .NET type sbyte. Converting back to type byte involves a little indirection. The use of the unchecked keyword makes sure that there is no run-time exception. Using unchecked is equivalent to casting a void pointer in C: the bits are taken at face value with no overflow checking. Besides using the JSch class library, this code uses two standard types from the Java run-time library, Vector and ByteArrayOutputStream.

The class BatchInfo maintains a list of the files that have been received. The ReceiveBatch constructor accepts as an argument the callback method BatchInfo.OnBatchComplete(). When the batch has been successfully submitted to the BizTalk Message Box, the callback method will delete the files.

com.jcraft.jsch.ChannelSftp.LsEntry entry;
List<BatchMessage> files = new List<BatchMessage>();

// List the remote files that match the configured file mask.
java.util.Vector list = this.sftpChannel.ls(this.properties.FileMask);
int len = list.size();
for (int ndx = 0; ndx < len; ndx++)
{
    entry = (com.jcraft.jsch.ChannelSftp.LsEntry)list.elementAt(ndx);
    string fileName = entry.getFilename();

    // Download the file into a Java byte stream, then convert it to .NET bytes.
    java.io.ByteArrayOutputStream byteArrayOStrm = new java.io.ByteArrayOutputStream();
    this.sftpChannel.get(fileName, byteArrayOStrm);
    sbyte[] sbytes = byteArrayOStrm.toByteArray();
    byte[] bytes;
    unchecked
    {
        bytes = (byte[])((Array)sbytes);
    }
    Stream fs = new MemoryStream(bytes);

    // Wrap the file contents in a BizTalk message and stamp the transport context.
    IBaseMessagePart part = this.messageFactory.CreateMessagePart();
    part.Data = fs;
    IBaseMessage message = this.messageFactory.CreateMessage();
    message.AddPart(MESSAGE_BODY, part, true);
    SystemMessageContext context = new SystemMessageContext(message.Context);
    context.InboundTransportLocation = this.properties.Uri;
    context.InboundTransportType = this.transportType;
    files.Add(new BatchMessage(message, fileName, BatchOperationType.Submit));
}
if (files.Count > 0)
{
    // Submit all received files to the Message Box as a single batch.
    BatchInfo batchInfo = new BatchInfo(files, this.sftpChannel);
    using (ReceiveBatch batch = new ReceiveBatch(this.transportProxy
        , this.controlledTermination
        , batchInfo.OnBatchComplete
        , this.properties.MaximumNumberOfFiles))
    {
        foreach (BatchMessage file in files)
            batch.SubmitMessage(file.Message, file.UserData);
        batch.Done(null);
    }
}
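The BatchInfo helper itself is not shown above; here is a hedged outline of what it does. The file deletion uses JSch's ChannelSftp.rm(), but the signature of the batch-complete callback is an assumption on our part and must match the completion delegate that ReceiveBatch in the adapter base classes expects, so treat this as an outline rather than drop-in code.

// Outline only: remembers the files in the batch and deletes them from the SSH
// server once BizTalk has accepted the batch. The OnBatchComplete signature is an
// assumption -- match it to the completion delegate used by ReceiveBatch.
internal class BatchInfo
{
    private readonly List<BatchMessage> files;
    private readonly com.jcraft.jsch.ChannelSftp sftpChannel;

    public BatchInfo(List<BatchMessage> files, com.jcraft.jsch.ChannelSftp sftpChannel)
    {
        this.files = files;
        this.sftpChannel = sftpChannel;
    }

    public void OnBatchComplete(bool overallSuccess)
    {
        if (!overallSuccess)
            return;  // leave the files in place so the next poll can retry them

        foreach (BatchMessage file in this.files)
        {
            // UserData carries the remote file name stored when the batch was built.
            this.sftpChannel.rm((string)file.UserData);
        }
    }
}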

Transmitting Files: SFTP put

The BizTalk Adapter Framework calls this method in DotNetFileTransmitterEndpoint whenever a BTS message is available to a subscribing transmit location. The method readStreamToBuffer() copies the message body to an array of type sbyte.

public override IBaseMessage ProcessMessage(IBaseMessage message)
{
    Stream source = message.BodyPart.Data;
    DotNetFileTransmitProperties props = new DotNetFileTransmitProperties(message
        , propertyNamespace);

    // Change the remote directory only if the configured directory has changed.
    if (props.Directory != this.currentPath)
    {
        this.currentPath = props.Directory;
        string aPath = props.Directory.Substring(Path.GetPathRoot(props.Directory).Length);
        this.sftpChannel.cd("/");
        aPath = this.sftpChannel.realpath(aPath);
        this.sftpChannel.cd(aPath);
    }

    // Build the destination file name and strip any leading directory portion.
    string filePath = DotNetFileTransmitProperties.CreateFileName(message, props.Uri);
    filePath = filePath.Remove(0, filePath.LastIndexOf(@"\") + 1);

    // Copy the message body into a Java-compatible byte stream and upload it.
    sbyte[] sbytes = this.readStreamToBuffer(source);
    java.io.ByteArrayInputStream inputStream = new java.io.ByteArrayInputStream(sbytes);
    this.sftpChannel.put(inputStream, filePath);
    return null;
}
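The readStreamToBuffer() helper is not shown in the excerpt above; a minimal sketch follows. It simply drains the message body stream into memory and reinterprets the bytes as sbyte[], the mirror image of the unchecked conversion used on the receive side.

// Minimal sketch of the helper: copy the BizTalk message body into memory and
// reinterpret the bytes as Java-style signed bytes for the JSch proxy call.
private sbyte[] readStreamToBuffer(Stream source)
{
    using (MemoryStream buffer = new MemoryStream())
    {
        byte[] chunk = new byte[4096];
        int bytesRead;
        while ((bytesRead = source.Read(chunk, 0, chunk.Length)) > 0)
        {
            buffer.Write(chunk, 0, bytesRead);
        }
        byte[] bytes = buffer.ToArray();
        sbyte[] sbytes;
        unchecked
        {
            // Same trick as on the receive side: the CLR allows this reinterpretation
            // through Array even though C# forbids the direct cast.
            sbytes = (sbyte[])((Array)bytes);
        }
        return sbytes;
    }
}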

Running the adapter

Testing the adapter is simple. For example, we can create an Ubuntu VM running in Amazon’s EC2 using a pre-configured Amazon Machine Image. Using the standard File Adapter and the SSH/SFTP Adapter we’ve just created, we can configure two send ports and two receive ports. Using simple routing, we can drop a file into a directory on the local Windows machine where a receive location routes the message to a send port configured with the SSH/SFTP prototype adapter. The file is transmitted to /home/myaccount/Downloads on the Linux VM where a second SSH/SFTP receive location grabs the file and routes it to a send port which drops it into a directory back on the local machine.

Wrap-up

We’ve shown how it’s straightforward to create a BizTalk SSH/SFTP adapter using JNBridgePro and the BizTalk Adapter Framework SDK. An adapter like this one can be used to allow BizTalk Server to read and write files securely to and from the cloud, and to communicate easily with Unix/Linux machines.

Note that the instructions and code that are given here are just a foundation: a production adapter will likely need to be enhanced with more hardened credential management, more flexible configuration options, and enhanced exception handling. The adapter can also be enhanced to support additional SSH functionality, including additional file manipulation and securely running applications on remote machines. Also, the adapter is targeted toward the JSch implementation of SSH, but it is a simple matter to retarget it to other SSH implementations.

While the adapter should be enhanced for most production situations, what we’ve provided here provides a solid start that should allow you to create a useful BizTalk SSH adapter for real-world situations.

If interested, the source is here.

Windows Azure PaaS vs. IaaS

As a follow-on to our recent JNBridge lab showing how to write Hadoop mapreducers using .NET and JNBridgePro, we were planning to show how to take Hadoop and those .NET mapreducers and deploy them to Windows Azure. This would have a number of advantages over Microsoft's own Hadoop-on-Azure initiative, which does not support .NET-based mapreducers written in C# or VB, but only supports JavaScript. To get this working, we had to figure out a number of things that were undocumented in Azure, and we were rather proud of what we had come up with. However, we recently scrapped this project in light of Microsoft's announcement of Azure virtual machines. Using these virtual machines makes deploying Hadoop to Azure no different from deploying it to Amazon's EC2, and the special insights we gained in getting the Hadoop lab to run under Azure's previous PaaS (Platform as a Service) offering, interesting as they were, are no longer as valuable now that Azure can be used in IaaS (Infrastructure as a Service) style.

(Don't worry, though! We have another cool new lab that we're preparing.)

Our experience with Hadoop and Azure, and Microsoft's new offering of virtual machines, has gotten me thinking about the whole PaaS/IaaS conflict. It seems to me that Microsoft, when planning their entry into the cloud, was guilty of overthinking the problem. Rather than thinking about what the customer wanted, they concentrated on what they thought the customer should want. A more charitable way of putting this is that they thought about what a cloud offering should be when considered from first principles. By doing so, they thought they were doing the cloud right. From that, Azure got its emphasis on lack of persistent state, and on robustness of the entire application even under the expectation that individual machine instances would fail and have to recycle. It was all very cool and very forward-thinking, but it wasn't what prospective cloud users wanted. What they wanted was a way to take their existing applications, on which they had spent a lot of time and effort, and which were very valuable to them, and simply deploy those applications to the cloud. Ideally, no changes would be necessary, and any necessary changes should be minimal. It turned out that Amazon EC2 offered this, and Azure did not; instead, Azure required at a minimum a major reworking of existing applications, although ideally applications should be written from scratch in the Azure style. Guess which offering the users flocked to? (Guess which offering we used for migrating some of our services to the cloud?)

It seems that by now offering Azure virtual machines, Microsoft is acknowledging that its original PaaS approach was the wrong approach for the vast majority of potential cloud users. Whether that will still be the case in the future remains to be seen, but for the near future, I think we're going to hear less and less about Azure PaaS, and more about writing applications for the new IaaS offering.

What other story does this remind me of? I seem to recall that, over a decade ago, when Microsoft rolled out its new .NET platform, people asked whether their existing Java code would work with applications written for .NET. Microsoft's response, to paraphrase, was that there was really no need for that, when one could simply rewrite the applications for .NET. In other words, one should just "rip and replace." We all know how that turned out. While .NET flourished, so did Java. Most existing Java applications weren't migrated, and there was a substantial need for solutions that allowed .NET and Java code to work together. While Microsoft left those solutions to other providers, like JNBridge, it did abandon the "rip and replace" rhetoric and proclaim the virtues of interoperability. It's interesting that we're going through the same cycle again in cloud technologies.

Now that Microsoft is adding virtual machines to Azure, will this change your plans to use (or not to use) Azure? Leave a comment below!