WebSphere MQ and BIG-IP

WebSphere MQ is the industry-leading solution for messaging within the enterprise. As a result, I receive many questions and a lot of interest about BIG-IP Local Traffic Manager (LTM) and WebSphere MQ. A deployment guide is coming out soon and testing is complete, but F5 Networks already has customers using LTM to provide high availability and offload in MQ environments, so I thought I would share some of this guidance.

There are two ways to deploy BIG-IP with WebSphere MQ: either in front of DataPower XI50 devices or directly in front of WebSphere Message Broker servers. In either case, the end result is the same: high availability plus TCP and SSL offload. When XI50 devices are in play, they will also be used for some XML transformation.

Take a look at the IBM Redbook on deploying load balancing with WebSphere MQ. Beginning on page 148, it shows the typical scenario with DataPower devices in play.

If we are using DataPower XI50s, the recommendation is that LTM load balance the XI50s. This is a typical setup: no persistence, a least-connections load balancing method, and a TCP monitor. BIG-IP brings TCP optimization, SSL offload, and outage detection through monitoring.
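
To make that concrete, here is a minimal sketch of that configuration in tmsh, run from the BIG-IP shell. The object names, addresses (10.10.10.x), and port 1414 are placeholders for illustration only, not values from the deployment guide:

    # Pool of two XI50 devices: least-connections load balancing, TCP health
    # monitor, and no persistence profile on the virtual server
    tmsh create ltm pool xi50_pool \
        members add { 10.10.10.11:1414 10.10.10.12:1414 } \
        load-balancing-mode least-connections-member \
        monitor tcp

    # Standard TCP virtual server in front of the XI50 pool
    tmsh create ltm virtual mq_xi50_vs \
        destination 10.10.10.100:1414 \
        ip-protocol tcp \
        profiles add { tcp } \
        pool xi50_pool

SSL offload would be added by attaching a client-ssl profile to the virtual server; that step is omitted here since it depends on your certificates.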

If XI50 devices are not in the infrastructure, the setup is very similar. The Message Broker servers are set up identically, with Channels, Transmission Queues, Queue Managers, and Queues configured with identical TCP ports and names on both systems. The TCP port of each queue is then configured on BIG-IP LTM as a pool; remember that with WebSphere MQ, each queue can have its own TCP port, typically starting at TCP 1414. The next step is to set up a BIG-IP LTM virtual server for each pool, with each pool mapping to a queue.

In either scenario (XI50 deployed or not), the TCP profile on the BIG-IP should be adjusted to increase the default idle timeout, because WebSphere MQ keeps a connection open indefinitely and uses a heartbeat to avoid timeouts. The TCP timeout should be set to a value slightly larger than the heartbeat interval in order to avoid either port exhaustion on the BIG-IP or connection flapping on the MQ TCP connection port. Both the heartbeat and the timeout are configurable.
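
As a rough sketch, the per-queue objects plus the adjusted TCP profile might look like the following in tmsh. The 330-second idle timeout assumes, for example, a 300-second MQ heartbeat interval; the addresses, names, and port are again placeholders:

    # Custom TCP profile whose idle timeout is slightly longer than the MQ
    # heartbeat (330s here, assuming a 300-second heartbeat; match your HBINT)
    tmsh create ltm profile tcp mq_tcp defaults-from tcp idle-timeout 330

    # One pool and one virtual server per queue port; TCP 1414 shown as the example
    tmsh create ltm pool mq_1414_pool \
        members add { 10.20.20.11:1414 10.20.20.12:1414 } \
        monitor tcp

    tmsh create ltm virtual mq_1414_vs \
        destination 10.20.20.100:1414 \
        ip-protocol tcp \
        profiles add { mq_tcp } \
        pool mq_1414_pool

Repeat the pool and virtual server pair for each additional queue port, and make sure the custom TCP profile, rather than the default, is attached to each MQ virtual server.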

I will update this blog once the official deployment guidance has been published.  In the meantime, contact me if you have any questions.

Published Mar 29, 2012
Version 1.0

1 Comment

  • Nojan_Moshiri_1
    Hi Mike, BIG-IP isn't participating in the queue 'logic', as it were; it is simply delivering messages that arrive to whichever MQ server or DataPower device is available. The messages are only delivered once, to one device, not to both servers.

    Maybe I'm missing something as well, let me know your thoughts.