
MQ DEFPSIST Bottlenecks

Recently I posted about our WebSphere MQ MQCONN 2195 (MQRC_UNEXPECTED_ERROR) errors, and how we needed to apply the minimum recommended Solaris kernel tuning for WebSphere MQ. We applied the changes, rebooted, and the connection problems were gone. Our application could spawn multiple connections to MQ without the errors we had been seeing before.
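For context, that tuning boils down to raising the System V shared memory and semaphore limits the queue manager needs. On older Solaris releases these live in /etc/system; the sketch below is illustrative only, with example values rather than the ones we applied (the actual minimums are in the WebSphere MQ documentation for your MQ and Solaris versions, and Solaris 10 moves this to resource controls):

```
* Illustrative /etc/system entries for WebSphere MQ IPC tuning.
* Values are examples only -- use the minimums published for your
* MQ and Solaris releases, then reboot for them to take effect.
set shmsys:shminfo_shmmax = 4294967295
set shmsys:shminfo_shmseg = 1024
set semsys:seminfo_semmni = 1024
set semsys:seminfo_semmns = 16384
set semsys:seminfo_semmsl = 100
set semsys:seminfo_semopm = 100
```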

However, under load tests we were still seeing response times climb between a request message being sent and its reply arriving back. Alongside those times, iostat showed very high inferred disk utilisation on the SAN devices hosting MQ and our application.
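If you want to watch for the same symptom, something like the following is enough (a minimal example; the interval and count are arbitrary, and which devices matter depends on where your MQ logs and queue files sit):

```
# Extended per-device statistics every 5 seconds, 12 samples.
# Watch %b (inferred busy percentage) and asvc_t (active service time)
# for the SAN devices hosting the MQ logs and queue files.
iostat -xn 5 12
```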

So the team looked through the configuration of our clustered queue managers, the logs, and tuning options such as triple-write versus single-write logging in MQ, and so on.
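For reference, that particular knob is the LogWriteIntegrity setting in the queue manager's qm.ini. A minimal sketch of the stanza, with an illustrative path and the default value shown, looks like this (SingleWrite is generally only safe when the log device guarantees atomic 4 KB page writes, for example a write-cache-protected array, so check your disk subsystem before changing it):

```
# Log stanza from qm.ini (path illustrative).
# LogWriteIntegrity controls how the logger protects partially
# written log pages: SingleWrite, DoubleWrite or TripleWrite.
Log:
   LogPath=/var/mqm/log/QMGR1/active/
   LogWriteIntegrity=TripleWrite
```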

Our specialist tester had written a customised load harness to measure response time and throughput, and his graphs showed a bottleneck somewhere in the system. We knew the environment itself was sound, because a “cloned” development queue manager and application on the pre-production system was achieving a median response time of around 10 milliseconds.

We tried a number of combinations, but finally found that the configuration had DEFPSIST(YES) on all the local and alias queues. Changing them all to DEFPSIST(NO) gave near-perfect response times: a mean of 20 milliseconds!
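The change itself is a one-liner per queue in runmqsc. The queue manager and queue names below are hypothetical; note that DEFPSIST only affects messages put with persistence “as queue default”, and that non-persistent messages are not logged, so they do not survive a queue manager restart. That trade-off was fine for our request/reply traffic, but it is not fine for anything that must never be lost.

```
$ runmqsc QMGR1
* Make new messages non-persistent by default on the request/reply queues
* (names are examples; persistence set explicitly by the app overrides this)
ALTER QLOCAL(APP.REQUEST.QUEUE) DEFPSIST(NO)
ALTER QALIAS(APP.REQUEST.ALIAS) DEFPSIST(NO)
* Confirm the change
DISPLAY QUEUE(APP.*) DEFPSIST
END
```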

DESCRIPTIVE STATISTICS