Cluster security


Setting up a WebSphere MQ cluster and getting it running is a challenge, and when it's running, we're all very happy :o)

What about security? Who can connect to your cluster? How does a queue manager (QMGR) join a cluster? What about SSL security?

These are some of the topics I will try to describe here, so let's go. Some years ago, SYSTEM.DEF.SVRCONN had by default an MCAUSER of 'NoBody', a userid which normally wasn't defined on any system, so you couldn't connect through it without first having direct access to the QMGR.

Because we, the customers, told IBM this was difficult to deal with, IBM changed the default script so that SYSTEM.DEF.SVRCONN is created with an MCAUSER of ' ' (blank), which means that anybody can connect to the QMGR. You will of course all disagree and say that can't be right. But it is: Mrs. Blackhat just needs the opportunity to log on locally to a Windows workstation and create a user there. This user could be Administrator, or a userid similar to the MQ administrators' inside the domain, and this way she can dig into the QMGR... just by connecting to port 1414 and using SYSTEM.DEF.SVRCONN.
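Closing that hole again is a one-liner. A minimal sketch, assuming the userid 'nobody' is not defined on the system (pick any name that cannot exist there):

  ALTER CHANNEL(SYSTEM.DEF.SVRCONN) CHLTYPE(SVRCONN) +
    MCAUSER('nobody')

With a non-existent MCAUSER forced onto the channel, anonymous connections over SYSTEM.DEF.SVRCONN fail their authority checks again.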

But this is quite simple, and every MQ administrator has closed all the loose security holes, right?

Therefore I will try to describe some of the challenges of securing a cluster setup.

How does a QMGR join a cluster?

A queue manager joins a cluster by connecting to one of the repository QMGRs, and if this succeeds, the queue manager is now part of the cluster. At this point the queue manager is able to discover all cluster objects in the whole cluster, and secondly it can create "fake" queues to obtain internal information. What if the connected queue manager belongs to a competitor? This is not good at all.
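To recap what a join looks like in MQSC, here is a minimal sketch; the cluster name INVENTORY, the queue manager names and the connection names are purely illustrative, not from any real setup:

  * On the joining queue manager: advertise ourselves to the cluster
  DEFINE CHANNEL(TO.MYQM) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
    CONNAME('myqm.example.com(1414)') CLUSTER(INVENTORY)
  * ...and point a cluster-sender at one full repository
  DEFINE CHANNEL(TO.FRQM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
    CONNAME('frqm1.example.com(1414)') CLUSTER(INVENTORY)

Notice that this is all it takes: anyone who can reach the repository's listener and knows the cluster name can attempt a join, which is exactly why the connection process needs controlling.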

A quick way to hide the queue manager is to change the TCP/IP listener's port from 1414 to another company-specific value. (But don't think that's enough; a hacker just scans the ports and tries some different connection types.)

What we have to do is control the connection process, so we know who is connecting to our full repository queue manager (FRQM), and to do this we might need a security exit. (We could use SSL, but currently I don't know it deeply enough, or how to use it in a cluster configuration.)

We could require some sort of accreditation exchange, so both parties know each other. This can be done in more or less complicated ways; the easiest is using a partner table on the FRQM and then only allowing connections from those addresses. Quite simple. There are code samples showing how to write some of these exits. Don't just take them off the shelf, but it's quite easy to use them as inspiration; there are many sites on the internet dealing with coding bits and pieces for WebSphere MQ.

If we code such an exit, we just have to add an entry for each queue manager that is allowed to join the cluster. Yes, I know it's not 100% safe; Mrs. Blackhat could spoof an IP address, but this is beyond the scope of this document.

If we mix both z/OS and non-z/OS queue managers in a cluster, we also have to code one or more channel auto-definition exits. This is because of the different exit specifications in the two environments, and because some of the parameters of our cluster-sender channel are inherited from the cluster-receiver channel of the FRQM. Our own specification of the cluster-sender channel is used for the initial connection only; after a successful connection to the FRQM, it inherits information about all the FRQMs' cluster-receiver channels. A normal cluster queue manager only has contact with two full repositories, so unless there are special needs/reasons, we should have only two FRQMs in our cluster.
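As a side note, nominating the two full repositories is a single command per queue manager. A sketch, assuming an illustrative cluster name INVENTORY, run on exactly the two chosen FRQMs:

  ALTER QMGR REPOS(INVENTORY)

Every other queue manager leaves REPOS blank and becomes a partial repository, learning about the rest of the cluster through the two FRQMs.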

On z/OS the specification of the security exit looks like this: 

DEFINE CHANNEL.....SCYEXIT('BLOCKIP2') +
SCYDATA('172.109.34.*')

and on the distributed platform:

DEFINE CHANNEL....SCYEXIT('c:\path..\BlockIP2(BlockExit)') +
SCYDATA('172.109.34.*')

You can get your own copy of BLOCKIP2 from the download site here: BlockIP2 

And we all know what happens when the queue manager receives a badly specified exit parameter: either it abends, or, if we're lucky, it might just disable the exit.

I hope that our network architects are isolating the external network segments, so we can use some generic patterns to keep our WebSphere MQ clusters secure. Let's say that normally all external networks are mapped into 10.21.x.x, internal servers into 192.168.x.x, and workstations into 172.x.x.x. This eases the challenge; we just add BlockIP2 like this:

alt chl(MQT2.TCP.MQT1) chltype(CLUSRCVR) +
  SCYDATA('192.168.*') +
  scyexit('/var/mqm/exits/BlockIP2(BlockExit)') * LINUX

This would keep the WebSphere MQ cluster secure from the public network, and from workstations (if there are no administrators/roots there).
The second my cluster queue manager joins the cluster, it will propagate its clustered queues belonging to the cluster. 

Another challenge is the integration of other companies into my WebSphere MQ cluster network. This raises a lot of issues, like connection names, public/private IP networks, NAT*, queue manager names, userids, etc.

My personal opinion is: WebSphere MQ clustering is for INTERNAL USAGE ONLY, because authentication of users happens on the local queue manager, and a message sent by mqm or root will typically just pass through the queue managers without any security violation. 

As I see it, there is no "simple" way of offering an open WebSphere MQ clustering solution. One of the biggest problems is CONNAME. CONNAME is specified on the CLUSRCVR, which means it is controlled at the receiving end. We can't use the HOSTS file, because all cluster queue manager CONNAMEs would have to be specified in there and kept up to date before any new cluster queue manager is introduced. /etc/hosts should reflect the public network as seen from the inside*.
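The CONNAME point can be seen directly in the definition. An illustrative sketch, with made-up names:

  * The CLUSRCVR advertises its own address to the rest of the cluster,
  * so every cluster member must be able to resolve and reach this CONNAME
  DEFINE CHANNEL(TO.QMT1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
    CONNAME('qmt1.intern.example(1414)') CLUSTER(MYCLUS)

The owning queue manager decides what address gets advertised, and every other member then tries to connect back to exactly that string; there is no per-sender override.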

How can we then get a secure connection to our WebSphere MQ cluster?

I think it's quite simple: just have a conventional queue manager as a network hub, and use a logical cluster name to achieve the workload balancing. I wrote a small description of the topic a while ago: Connecting a normal Queue Manager to a Cluster. The "SERVER" queue on QMX1 and QMX3 is a good example. This setup will not show any objects inside (if protected), and all that's open to the foreign queue manager is the channel and the queues it is allowed to access. This can be assured by using MCAUSER together with SSL and/or security exits.
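A hedged sketch of the hub idea; the channel and userid names are made up for illustration. The external partner connects over a plain receiver channel with a locked-down MCAUSER, and since the hub (a cluster member) holds no local instance of the clustered SERVER queue, the partner's puts to SERVER resolve through the cluster and get workload-balanced:

  * On the hub queue manager, which is itself a member of the cluster:
  DEFINE CHANNEL(PARTNER.TO.HUB) CHLTYPE(RCVR) TRPTYPE(TCP) +
    MCAUSER('partner1')

Give 'partner1' put authority on exactly the queues the partner should see, and nothing else; the partner never learns the cluster topology.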

What we now might need concerns the MQExplorer. One of the bright MQ specialists, Neil Kolban, has described how to make it safe, so I won't write much about that topic. Instead I will give you the link to his explanation of the security issue: MQExplorer and MQJExplorer security, hosted by KOLBAN.COM 

Further reading and good sources of security information:
- WebSphere MQ Queue Manager Clusters
- WebSphere MQ Security
- WebSphere MQ internet pass-thru (support pack MS81) 
- WebSphere MQ Security in an Enterprise Environment
- Links My personal list of good resources on the internet.

*NAT - Network Address Translation. I found a little article describing NAT on HowStuffWorks by Jeff Tyson.
*Inside - This is the private network using internal network addresses


The following are trademarks of International Business Machines Corporation:
IBM, MERVA, MQSeries, WebSphere, WBIFN, Object REXX

Copyright © 2002, 2009 MrMQ.dk. All rights are reserved.
Last updated: 2009.03.01 .