October 9, 2007 at 14:29 · Filed under MQ, unix
Do you use MQ? Do you have performance problems when using persistence? Here is an interesting article on understanding this on the Solaris platform. Koops has always been an informative source for keeping my MQ performance in check, and again he comes to the front of the pack with his analysis of persistence-related performance issues.
September 13, 2007 at 18:21 · Filed under MQ, unix
The MQ command level is an integer that identifies to MQ clients which version of WebSphere MQ a server is running. Together with the Platform attribute, it lets a client know which commands the MQ server supports; both attributes are needed to determine the usable command set.
You may be asked what command level your system supports. In the case of MQ V6.0 on Solaris, the following is true:
MQCMDL_LEVEL_600
Level 600 of system control commands.
This value is returned by the following:
WebSphere MQ for AIX V6.0
WebSphere MQ for HP-UX V6.0
WebSphere MQ for iSeries V6.0
WebSphere MQ for Linux V6.0
WebSphere MQ for Solaris V6.0
WebSphere MQ for Windows V6.0
WebSphere MQ for z/OS V6.0
See IBM’s doco for other versions and platforms.
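If you have access to the queue manager itself, you can verify the command level through MQSC (QM1 below is a hypothetical queue manager name):
echo "DISPLAY QMGR CMDLEVEL" | runmqsc QM1
A V6.0 queue manager should report CMDLEVEL(600). As far as I can tell, the Platform attribute is only exposed programmatically, via the PCF Inquire Queue Manager call (MQIA_PLATFORM), rather than through MQSC.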
August 20, 2007 at 18:23 · Filed under MQ, unix
Recently I posted about our WebSphere MQ MQCONN 2195 errors and how we needed to apply the minimum recommended Solaris kernel tuning for WebSphere MQ. Well, we applied the changes and rebooted, and the connection problems were gone. Our application was able to spawn multiple connections to MQ without the errors we were seeing before.
However, under load tests we were still seeing climbing response times between a request message being sent and the reply message arriving back. Alongside these times, we saw (via iostat) very high inferred disk utilisation on the SAN devices hosting MQ and our application.
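If you want to watch for the same symptom, the Solaris extended device statistics are the place to look (the 5-second interval is arbitrary):
iostat -xn 5
High figures in the %b (percent busy) and asvc_t (active service time) columns against the LUNs holding the MQ logs and queue files are the tell-tale sign.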
So the team looked through the configuration of our clustered queue managers, the logs, application tuning, MQ log tuning (TripleWrite vs. SingleWrite log integrity), and so on….
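For reference, the log write integrity setting lives in the queue manager’s qm.ini; the path below assumes a hypothetical queue manager called QM1 under a default /var/mqm installation:
# /var/mqm/qmgrs/QM1/qm.ini
Log:
   LogWriteIntegrity=TripleWrite
TripleWrite is the default; as I understand it, SingleWrite should only be used where the disk subsystem guarantees write atomicity.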
Our specialist tester had written a customised load harness to test response and throughput, and his graphs showed a bottleneck somewhere in the system. We knew the environment itself was good, as a “cloned” development queue manager and application on the pre-production system was getting a median response time of around 10 milliseconds.
We tried a number of combinations, but finally found that the configuration had DEFPSIST(YES) on all local and alias queues. Changing these to DEFPSIST(NO) led to near-perfect response times, with a mean of around 20 milliseconds (the change is sketched in MQSC after the statistics below)!
DESCRIPTIVE STATISTICS
--------------------------------------------------------------------------
                         RESPONSE TIME        THROUGHPUT
                         (millisecs)          (transactions/sec)
MEDIAN          :             10.00                19.05
LOWER QUARTILE  :             10.00                19.05
UPPER QUARTILE  :             30.00                19.05
95th PERCENTILE :             50.00                19.05
MINIMUM         :              0.00                 0.00
MAXIMUM         :            200.00                19.05
RANGE           :            200.00                19.05
IQR             :             20.00                 0.00
COUNT           :          2,000.00             2,000.00
MEAN            :             20.45                19.02
STD DEV         :             15.66                 0.74
--------------------------------------------------------------------------
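Flipping the default persistence is a one-line MQSC change per queue (QM1 and REQUEST.QUEUE are hypothetical names; alias queues take the same attribute via ALTER QALIAS):
echo "ALTER QLOCAL(REQUEST.QUEUE) DEFPSIST(NO)" | runmqsc QM1
Note that DEFPSIST is only a default: an application that explicitly sets MQPER_PERSISTENT on a message still gets persistence regardless.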
We are going to need to do some more tuning, as persistence turned on across the board appears to be the major performance hog, and yet we still need persistence on the queues where that guarantee is required. But at least we found the two major causes of our bottleneck and 2195 errors.
August 10, 2007 at 10:35 · Filed under apps, MQ, unix
I was working on a recently built Solaris 9 server with a fresh copy of MQ installed.
During application testing, we were getting 2195 (MQRC_UNEXPECTED_ERROR) errors from our application when establishing more than three concurrent connections to MQ.
After a day of wasted debugging in our application, we put it down to the system, and it seems we were right. There is an install chapter in the MQ V6.0 documentation that should NOT be overlooked.
Quoting the WebSphere MQ 6.0 Install Guide:
WebSphere® MQ uses semaphores, shared memory, and file descriptors, and it is probable that the default resource limits are not adequate.
Review the system’s current resource limit configuration.
As the root user, load the relevant kernel modules into the running system by typing the following commands:
modload -p sys/msgsys
modload -p sys/shmsys
modload -p sys/semsys
Then display your current settings by typing the following command:
sysdef
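sysdef dumps the entire kernel configuration, so it helps to filter the output down to the IPC tunables MQ cares about (a convenience on top of IBM’s instructions, not part of them):
sysdef | grep -i sem
sysdef | grep -i shm
sysdef | grep -i msg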
Check that the parameters listed there are set to at least the minimum values required by WebSphere MQ. The minimum value tables are in the Resource limit configuration chapter of the MQ 6.0 Install Guide.
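On Solaris 9 these tunables go into /etc/system and take effect after a reboot. A sketch of the format follows; the values are placeholders, NOT IBM’s documented minima, so take the real numbers from the Install Guide tables:
set shmsys:shminfo_shmmax = 4294967295
set semsys:seminfo_semmni = 1024
set semsys:seminfo_semmns = 16384
set rlim_fd_max = 10000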