Sections

  • Chapter 1 General Configuration
  • Configure the cluster
  • Configure the shared disks
  • Chapter 2 Configuration steps for MQ - Linux-HA
  • Queue Manager Creation
  • Create a new resource group and place under heartbeat control
  • Removing a Queue manager from Heartbeat control
  • Queue Manager Deletion
  • Chapter 3 Configuration steps for UserNameServer – UNIX & Linux
  • Create the UNS queue manager
  • Create the Clustered UNS - HACMP/ServiceGuard/VCS/Linux-HA
  • Place the UNS under cluster control
  • HACMP/ServiceGuard
  • Veritas Cluster Server
  • Linux-HA
  • Chapter 4 Configuration steps for Configuration Manager – UNIX & Linux
  • Create the Configuration Manager queue manager
  • Create the Clustered Configuration Manager - HACMP/ServiceGuard/VCS/Linux-HA
  • Place the Configuration Manager under cluster control
  • Chapter 5 Configuration steps for Broker – UNIX & Linux
  • Create and configure the queue manager
  • Create and configure the broker database
  • For HACMP/ServiceGuard
  • Create the message broker - HACMP/ServiceGuard/VCS/Linux-HA
  • Place the broker under cluster control
  • Linux-HA Control
  • Chapter 6 Removal and Deletion – UNIX & Linux
  • Remove the UNS from the cluster configuration
  • Remove UNS standby information from a node
  • Delete the UNS
  • Remove a configuration manager from the cluster configuration
  • Remove a configuration manager standby information from a node
  • Delete a configuration manager
  • Remove a broker from the cluster configuration
  • Remove broker standby information from a node
  • Delete a broker
  • Chapter 7 MSCS - Configuration steps for Configuration Manager
  • Create the Configuration Manager Group
  • Create and Configure the Configuration Manager’ Queue Manager
  • Create the Clustered Configuration Manager
  • Create the Non-Clustered Configuration Manager
  • Chapter 8 MSCS - Configuration steps for USN
  • Create the UNS Group
  • Create and Configure the UNS Queue Manager
  • Create the Clustered UNS
  • Chapter 9 MSCS - Configuration steps for Broker
  • Create the Message Broker Group
  • Create and Configure the Message Broker’ Queue Manager
  • Create and Configure the Message Broker database
  • Using a Remote Message Broker database
  • Using a Local Message Broker database
  • Create the Clustered Message Broker
  • Chapter 10 MSCS - Configuration Steps for Migrating Existing Components
  • Configuration Manager
  • Configure the Configuration Manager’ Queue Manager
  • Configure the Configuration Manager
  • User Name Server
  • Configure the UNS Queue Manager
  • Configure the UNS
  • Message Broker
  • Configure the Message Broker’s Queue Manager
  • Configure the DB2 instance and database
  • Configure the Message Broker
  • WMB Toolkit
  • Chapter 16 MSCS - Removal and Deletion
  • Removing a Configuration Manager from MSCS Control
  • Removing a UNS from MSCS Control
  • Removing a Message Broker from MSCS Control
  • Appendix A - Sample Configuration Files
  • types.cf
  • main.cf
  • Comments

IC91: High Availability for WebSphere Message Broker on Distributed Platforms

Created: September 2006 Latest Update: May 2008

Rob Convery IBM Hursley

Stephen Cox IBM Pan-IOT Lab Services, UK

Property of IBM


Table of Contents

Concepts

Chapter 1 Introduction
    Concepts
        HA clusters and WebSphere MQ Clusters
        Cluster Configurations
        Network connections

Chapter 2 Requirements
    Software Requirements
    Networks
    Platform Support
        HACMP
        VCS
        HP/ServiceGuard
        Linux-HA
        MSCS

Chapter 3 SupportPac Installation
    HACMP/ServiceGuard Installation
    MSCS Installation on Windows 2003
    VCS Installation on Solaris
    Linux-HA (Heartbeat) Installation

Chapter 4 Planning your configuration
    Configuration Manager component
    Broker component
    User Name Server component
    Architectural guidelines

Implementation

Chapter 1 General Configuration
    Configure the cluster
    Configure the shared disks

Chapter 2 Configuration steps for MQ - Linux-HA
    Queue Manager Creation
    Create a new resource group and place under heartbeat control
    Removing a Queue Manager from Heartbeat control
    Queue Manager Deletion

Chapter 3 Configuration steps for UserNameServer – UNIX & Linux
    Create the UNS queue manager
    Create the Clustered UNS - HACMP/ServiceGuard/VCS/Linux-HA
    Place the UNS under cluster control
        HACMP/ServiceGuard
        Veritas Cluster Server
        Linux-HA

Chapter 4 Configuration steps for Configuration Manager – UNIX & Linux
    Create the Configuration Manager queue manager
    Create the Clustered Configuration Manager - HACMP/ServiceGuard/VCS/Linux-HA
    Place the Configuration Manager under cluster control
        HACMP/ServiceGuard
        Veritas Cluster Server
        Linux-HA

Chapter 5 Configuration steps for Broker – UNIX & Linux
    Create and configure the queue manager
    Create and configure the broker database
        For HACMP/ServiceGuard
        VCS
        Linux-HA
    Create the message broker - HACMP/ServiceGuard/VCS/Linux-HA
    Place the broker under cluster control
        HACMP/ServiceGuard
        VCS
        Linux-HA Control

Chapter 6 Removal and Deletion – UNIX & Linux
    Remove the UNS from the cluster configuration
    Remove UNS standby information from a node
    Delete the UNS
    Remove a configuration manager from the cluster configuration
    Remove configuration manager standby information from a node
    Delete a configuration manager
    Remove a broker from the cluster configuration
    Remove broker standby information from a node
    Delete a broker

Chapter 7 MSCS - Configuration steps for Configuration Manager
    Create the Configuration Manager Group
    Create and Configure the Configuration Manager's Queue Manager
    Create the Clustered Configuration Manager
    Create the Non-Clustered Configuration Manager

Chapter 8 MSCS - Configuration steps for UNS
    Create the UNS Group
    Create and Configure the UNS Queue Manager
    Create the Clustered UNS

Chapter 9 MSCS - Configuration steps for Broker
    Create the Message Broker Group
    Create and Configure the Message Broker's Queue Manager
    Create and Configure the Message Broker database
        Using a Remote Message Broker database
        Using a Local Message Broker database
    Create the Clustered Message Broker
    WMB Toolkit

Chapter 10 MSCS - Configuration Steps for Migrating Existing Components
    Configuration Manager
        Create the Configuration Manager Group
        Configure the Configuration Manager's Queue Manager
        Configure the Configuration Manager
    User Name Server
        Create the UNS Group
        Configure the UNS Queue Manager
        Configure the UNS
    Message Broker
        Create the Message Broker Group
        Configure the Message Broker's Queue Manager
        Configure the DB2 instance and database
        Configure the Message Broker
        WMB Toolkit

Chapter 16 MSCS - Removal and Deletion
    Removing a Configuration Manager from MSCS Control
    Removing a UNS from MSCS Control
    Removing a Message Broker from MSCS Control

Appendix A - Sample Configuration Files
    types.cf
    main.cf

Comments

Concepts

Chapter 1 Introduction

Concepts

High Availability (HA) cluster software is available from a number of vendors for various platforms, as listed below:

    • HACMP for AIX 5.2 & 5.3
    • ServiceGuard for HP-UX 11.11i
    • MSCS for Windows Server 2003 Enterprise Edition
    • Linux-HA
    • Veritas Cluster Server (available for many platforms; Solaris is used in this document)

Clustering servers enables applications to be distributed across, and moved between, a number of physical servers, thus providing redundancy and fault resilience for business-critical applications. With a suitably configured cluster, it is possible for failures of power supplies, nodes, networks, disks, disk controllers, network adapters or critical processes to be detected and automatically trigger recovery procedures to bring an affected service back online as quickly as possible.

This SupportPac does not include details of how to configure redundant power supplies, redundant disk controllers, disk mirroring or multiple network or adapter configurations. The reader is referred to the HA software documentation for assistance with these topics.

WebSphere Message Broker (WMB) provides services based on message brokers to route, transform, store, modify and publish messages from WebSphere MQ and other transports. A WMB broker requires a database to store runtime data. This database can be DB2, Oracle or Sybase; the testing of this SupportPac used DB2 for the broker repository.

This SupportPac provides notes and example scripts to assist with the installation and configuration of WMB in an HA environment. It shows how to create and configure WMB Configuration Managers, brokers and a UNS such that they are amenable to operation within an HA cluster. It should be used in conjunction with the documentation for WebSphere MQ, DB2 and WMB. This SupportPac provides only brief descriptions of how to configure queue managers or database instances for HA operation; WebSphere MQ SupportPac MC91 (High Availability for WebSphere MQ on UNIX platforms) and the DB2 product documentation should be consulted for more detailed assistance relating to queue managers and database instances. The instructions could also be used for databases other than the broker database, such as the NEON Repository, but this is beyond the scope of this document. With suitable conversion or substitution it would be possible to apply the instructions contained in this SupportPac to other suitable database management systems.

HA clusters and WebSphere MQ Clusters

The word "cluster" has a number of different meanings within the computing industry. Throughout this document, unless explicitly noted otherwise, the word "cluster" is used to describe an "MC/ServiceGuard cluster", "HACMP cluster", "VCS cluster", "Heartbeat cluster" or "MSCS cluster": a collection of nodes and resources (such as disks and networks) which cooperate to provide high availability of services running within the cluster. It is worth making a clear distinction between such an "HA cluster" and the use of the phrase "WebSphere MQ cluster", which refers to a collection of queue managers which can allow access to their queues by other queue managers in the cluster. WebSphere MQ clusters reduce administration and provide load balancing of messages across instances of cluster queues. They also offer higher availability than a single queue manager because, following a failure of a queue manager, messaging applications can still access surviving instances of a cluster queue. However, whilst WebSphere MQ can be configured to provide automatic detection of queue manager failure and automatic triggering of local restart, it does not include the ability to fail a queue manager over to a surviving node. HA clusters provide monitoring and local restart and also support failover. The two types of cluster can be used together to good effect: by using WebSphere MQ, WMB and HA software together, it is possible to further enhance the availability of a WMB Configuration Manager, broker or User Name Server (UNS).

Cluster Configurations

An HA cluster contains multiple machines (nodes), which host resource/service groups. A resource group is a means of binding together related resources which must be co-located on the same node, but which can be moved collectively from one node to another, to work around failures or in response to operator commands. Resources are things like disks, network addresses, or critical processes. The ability to move resource groups from one node to another allows you to run highly available workload on multiple nodes simultaneously.

HACMP, VCS, ServiceGuard, Heartbeat and MSCS all use a "shared nothing" clustering architecture. A shared nothing cluster has no concurrently shared resources, and works by transferring ownership of resources from one node to another. Critical data is stored on external disks which can be owned by either of the nodes in the cluster. Such disks are sometimes referred to as "shared disks", but because each disk can be owned by only one node at a time, clusters which use this architecture are strictly "shared nothing" clusters. Figure 1 shows a generic shared nothing cluster.

It is possible to construct a number of cluster topologies, including simple clustered pairs, rings, or "N+1" standby topologies. This SupportPac can be used to help set up either standby or takeover configurations, including mutual takeover where all cluster nodes are running WMB workload. It is also possible to create configurations in which one or more nodes act as standby nodes which may be running other workload if desired.

A standby configuration is the most basic cluster configuration, in which one node performs work whilst the other node acts only as standby. The standby node does not perform work and is referred to as idle; this configuration is sometimes called "cold standby". Such a configuration requires a high degree of hardware redundancy. To economise on hardware, it is possible to extend this configuration to have multiple worker nodes with a single standby node, the idea being that the standby node can take over the work of either worker node. This is still referred to as a standby configuration, and sometimes as an "N+1" configuration.

A takeover configuration is a more advanced configuration in which all nodes perform some kind of work, and critical work can be taken over in the event of a node failure. A "one-sided takeover" configuration is one in which a standby node performs some additional, non-critical and non-movable work. This is rather like a standby configuration but with (non-critical) work being performed by the standby node. A "mutual takeover" configuration is one in which all nodes are performing highly available (movable) work. This type of cluster configuration is sometimes referred to as "Active/Active", to indicate that all nodes are actively processing critical workload.

With the extended standby configuration or either of the takeover configurations, it is important to consider the peak load which may be placed on any node which can take over the work of other nodes. Such a node must possess sufficient capacity to maintain an acceptable level of performance.


Figure 1 - A shared nothing cluster. The figure shows Node A and Node B, each with internal disks, serving remote clients and servers over a public network (e.g. ethernet or token-ring) and connected to each other by a private network (e.g. ethernet). Each node hosts a virtual IP address and a set of critical processes, with critical data held on shared disks; these resources are managed by the cluster and can migrate to the other node. The cluster could also have additional nodes, public and private networks, network adapters, disks and disk controllers.

HACMP

HACMP provides the ability to group resources, so that they can be kept together and can be either restarted on the same node that they were running on, or failed over to another node. Depending on the type of resource group chosen, it is also possible to control whether a group will move back to its former node when that node is restarted. Whilst resource groups define which resources must be co-located, it is application servers and application monitors that allow HACMP to control and monitor services such as queue managers, database instances, message brokers or a UNS.

This SupportPac contains scripts that can be used to configure one or more application servers that each contain a WMB broker or UNS. The SupportPac also contains scripts that enable each application server to be monitored. The scripts can be used as shipped, or as a basis from which to develop your own scripts. Whether the scripts are being used to manage brokers, configuration managers or a UNS, they rely on the scripts from WebSphere MQ SupportPac MC91 to manage queue managers, and when managing a broker they also rely on the use of HACMP scripts to manage the database instance containing the broker database. HACMP scripts to manage database instances are available separately. For example, IBM DB2 for AIX includes a set of example scripts for HACMP operations.

The example scripts supplied with this SupportPac contain the logic necessary to allow one or more brokers, one or more configuration managers and one UNS to be run within a cluster. It is possible to configure multiple resource groups. This, combined with the ability to move resource groups between nodes, allows you to simultaneously run highly available brokers on the cluster nodes, subject only to the constraint that each broker can only run on a node that can access the necessary shared disks. This constraint is enforced by the resource group, which has a list of which nodes it can run on and the volume groups it owns.

In the case of the UNS, you can only configure one UNS per cluster. This is because the name "UserNameServer" is fixed, which would cause different UNSs to interfere if you tried to configure them simultaneously on one node¹. The script used for creating an HA compliant UNS enforces this constraint.



This SupportPac also includes application monitor scripts which allow HACMP to monitor the health of brokers (and their queue managers and database instances), configuration managers and the UNS (and its queue manager), and to initiate recovery actions that you configure, including the ability to attempt to restart these components. HACMP only permits one application server in a resource group to have an application monitor. If you wished to configure a resource group which contained multiple brokers, or a combination of a broker and a UNS, then you could combine elements of the example scripts to enable you to run a more complex application server. You would also need to combine elements of the application monitor scripts. This approach is not recommended; it is preferable to use separate resource groups, as described in Architectural guidelines.
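To illustrate the kind of check such an application monitor performs, the following is a minimal sketch only, not the hamqsi_applmon scripts shipped with the SupportPac; the broker name BROKER1, queue manager name BROKER1QM and the exact checks are illustrative assumptions.

    #!/bin/ksh
    # Hypothetical health check for one broker and its queue manager.
    # HACMP treats exit code 0 as "healthy" and non-zero as "failed".
    BROKER=BROKER1
    QMGR=BROKER1QM

    # Check that the broker administration agent (bipservice) is running
    ps -ef | grep "bipservice $BROKER" | grep -v grep > /dev/null
    if [ $? -ne 0 ]
    then
      exit 1
    fi

    # Check that the queue manager responds to a trivial MQSC command
    echo "ping qmgr" | runmqsc $QMGR > /dev/null 2>&1
    if [ $? -ne 0 ]
    then
      exit 1
    fi

    exit 0

In practice the shipped monitor scripts should be used or extended rather than rewritten, so that restart counts and recovery actions stay consistent with the rest of the SupportPac.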

VCS

Whilst service groups define which resources must be co-located, it is agents that allow VCS to monitor and control the running of services such as queue managers, database instances and brokers. This SupportPac contains three example agents:

    • MQSIBroker, for managing WMB brokers
    • MQSIConfigMgr, for managing WMB Configuration Managers
    • MQSIUNS, for managing a WMB User Name Server (UNS)

The agents can be used as shipped, or as a basis from which to develop your own agents. Both the MQSIBroker and MQSIUNS agents rely on the use of the MQM agent available in WebSphere MQ SupportPac MC91, and the MQSIBroker agent also relies on the use of an appropriate agent to manage the database instance containing the broker database. Database agents are available from VERITAS or the database vendor. IBM provides an agent which supports DB2 version 7.2 onwards.

The example agents contain the logic necessary to allow one or more brokers, one or more configuration managers and one UNS to be run within a service group. The agents can also be used by multiple service groups. This, combined with the ability to move service groups between systems, allows you to simultaneously run highly available brokers on the systems within the cluster, subject only to the constraint that each broker can only run on a system that can access the shared disks which contain the files used by the queue manager and database instance used by the broker. This constraint is enforced by VCS.

HP/ServiceGuard

This SupportPac contains scripts that can be used to configure one or more application servers that each contain a broker, configuration manager or UNS. The SupportPac also contains scripts that enable each application server to be monitored. The scripts can be used as shipped, or as a basis from which to develop your own scripts. Whether the scripts are being used to manage brokers, configuration managers or a UNS, they rely on the scripts from WebSphere MQ SupportPac MC91 to manage queue managers, and when managing a broker they also rely on the use of MC/ServiceGuard scripts to manage the database instance containing the broker database. MC/ServiceGuard scripts to manage database instances are available separately.

The example scripts supplied with this SupportPac contain the logic necessary to allow one or more brokers, one or more configuration managers and one UNS to be run within a cluster. This is described fully in Chapter 4. It is possible to configure multiple resource groups. This, combined with the ability to move resource groups between nodes, allows you to simultaneously run highly available brokers on the cluster nodes, subject only to the constraint that each broker can only run on a node that can access the necessary shared disks. This constraint is enforced by the package, which has a list of which nodes it can run on and the volume groups it owns.

In the case of the UNS, you can only configure one UNS per cluster. This is because the name "UserNameServer" is fixed, which would cause different UNSs to interfere if you tried to configure them simultaneously on one node¹. The script used for creating an HA compliant UNS enforces this constraint.

¹ Technically, you could partition the cluster into non-overlapping sets of nodes and configure multiple resource groups, each confined to one of these sets. Each resource group could then support a UNS. However, this is not recommended.



Linux-HA (Heartbeat)

Heartbeat provides the ability to group resources, so that they can be kept together and can be either restarted on the same node that they were running on, or failed over to another node. Depending on the type of resource group chosen, it is also possible to control whether a group will move back to its former node when that node is restarted. Whilst resource groups define which resources must be co-located, it is agents and application monitors that allow Heartbeat to control and monitor services such as queue managers, database instances, message brokers, configuration managers or a UNS. These scripts do not rely on any other SupportPacs. Currently the scripts do not allow DB2 to be placed under HA control. If you would like to put DB2 under the control of Heartbeat, please refer to the DB2 documentation, for example "Open Source Linux High Availability for IBM DB2 Universal Database - Implementation Guide"². For the rest of this document it is presumed that the DB2 instance is located on a machine outside Heartbeat's control.

The example scripts supplied with this SupportPac contain the logic necessary to allow one or more components to be run within a cluster. This is described fully in Chapter 4. It is possible to configure multiple resource groups. This, combined with the ability to move resource groups between nodes, allows you to simultaneously run highly available brokers on the cluster nodes, subject only to the constraint that each broker can only run on a node that can access the necessary shared disks. This constraint is enforced by the resource group, which has a list of which nodes it can run on. In the case of the UNS, you can only configure one UNS per cluster. This is because the name "UserNameServer" is fixed, which would cause different UNSs to interfere if you tried to configure them simultaneously on one node¹. The script used for creating an HA compliant UNS enforces this constraint.

If you are using Heartbeat V2 then this SupportPac also includes application monitor scripts and agents which allow Heartbeat to monitor the health of components (and their queue managers and database instances) and to initiate recovery actions that you configure, including the ability to attempt to restart these components. Heartbeat provides the ability to run a number of monitors within a single resource group. If you wished to configure a resource group which contained multiple brokers, or a combination of a broker and a UNS, then you could combine elements of the example scripts to enable you to run a more complex agent. You would also need to combine elements of the application monitor scripts. This approach is not recommended; it is preferable to use separate resource groups, as described in Chapter 4.

Microsoft Cluster Services

Microsoft Cluster Services (MSCS) provides the ability to group resources, so that they can be kept together and can be either restarted on the same node that they were running on, or failed over to another node. Depending on the type of MSCS group chosen, it is also possible to control whether a group will move back to its former node when that node is restarted.

² This is a whitepaper available from the DB2 for Linux web site - http://www306.ibm.com/software/data/db2/linux/papers.html



Whilst MSCS groups define which resources must be co-located, it is service resources that allow MSCS to control and monitor services such as queue managers, database instances, message brokers, configuration managers or a UNS. There are no scripts supplied with this SupportPac for MSCS; all the actions are GUI driven within MSCS. This SupportPac contains the instructions necessary to allow one or more brokers, one or more configuration managers and one UNS to be run within a cluster. This is described fully in Chapter 4. It is possible to configure multiple resource groups. This, combined with the ability to move resource groups between nodes, allows you to simultaneously run highly available brokers on the cluster nodes, subject only to the constraint that each broker can only run on a node that can access the necessary shared disks. This constraint is enforced by the resource group, which has a list of which nodes it can run on and the volume groups it owns. In the case of the UNS, you can only configure one UNS per cluster. This is because the name "UserNameServer" is fixed, which would cause different UNSs to interfere if you tried to configure them simultaneously on one node.
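Although the configuration itself is GUI driven, group failover can also be exercised from a command prompt using the cluster.exe utility supplied with Windows Server 2003; this is only a hedged illustration, and the group name "Broker Group" is a placeholder for whatever group name you choose in the later MSCS chapters.

    REM List the groups in the cluster and the node that currently owns each one
    cluster group

    REM Move a group to the other node to verify that it fails over and back cleanly
    cluster group "Broker Group" /move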

Network connections
Clustered services, such as Websphere MQ queue managers, are configured to use virtual IP addresses which are under cluster control. When a clustered service moves from one cluster node to the other, it takes its virtual IP address with it. The virtual IP address is different to the stationary physical IP address that is assigned to a cluster node. Remote clients and servers which need to communicate with clustered services must be configured to connect to the virtual IP address and must be written such that they can tolerate a broken connection by repeatedly trying to reconnect.
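For example, a remote queue manager that sends messages to a clustered queue manager would define its channel against the virtual IP address rather than the address of either physical node. The MQSC fragment below is a hedged illustration only; the channel name, transmission queue, virtual address 192.0.2.10 and port 1414 are placeholders, not values mandated by this SupportPac.

    * 192.0.2.10 is the virtual IP address owned by the cluster, not a node address
    DEFINE CHANNEL('TO.BROKER1QM') CHLTYPE(SDR) +
           TRPTYPE(TCP) +
           CONNAME('192.0.2.10(1414)') +
           XMITQ('BROKER1QM')

After a failover the channel reconnects to the same virtual address, now hosted by the surviving node, without any change to the remote definition.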


Chapter 2 Requirements

Software Requirements

    • WebSphere Message Broker V6 FixPack 1 or higher
    • WebSphere MQ V5.3 FixPack 8 or higher
    • DB2 V8.2
    • WebSphere MQ SupportPac MC91: High Availability for WebSphere MQ on UNIX Platforms (not required for Linux-HA)

Networks

The SupportPac has been tested with TCP/IP public networks and serial private networks. TCP/IP networks could be used for both public and private networks. It would also be possible to configure HACMP to handle SNA networks.

Platform Support

HACMP
    • AIX Version 5.2 (maintenance level 3) or higher
    • HACMP Version V5.2

VCS
    • Sun Solaris 8 or 9
    • VCS 4.1

HP/ServiceGuard
    • HP-UX Version 11i
    • MC/ServiceGuard Version 11.1 or above

Linux-HA
    • SLES9
    • Heartbeat V2.0 or above

MSCS
    • Windows Server 2003 Enterprise Edition
    • Microsoft Cluster Server (MSCS) components

Chapter 3 SupportPac Installation

HACMP/ServiceGuard Installation

AIX, HACMP, WebSphere MQ, WebSphere MQ SupportPac MC91, DB2 and WMB should already be installed, using the normal procedures, onto internal disks on each of the nodes. It is important that under normal operating conditions you are running identical versions of the operating system, MQ, DB2 and WMB software on the cluster nodes, and that the nodes are configured identically as far as possible. The only exception to this is during a rolling upgrade - see the procedures in the HACMP documentation for details. Software products should be installed into the same directories on all nodes.

When installing WebSphere MQ, note the advice in SupportPac MC91 about filesystems and the requirement that the "mqm" username and "mqm" groupname have been created and each have the same numeric value on all of the cluster nodes. As part of installing SupportPac MC91, you will have created a working directory which, by default, is called /MQHA/bin. You can either install this SupportPac into the same directory or use a different one; if you use a different location you'll need to change the example scripts. The following description assumes that you are using the /MQHA/bin directory.

Download the SupportPac onto each of the cluster nodes into the /MQHA/bin directory and uncompress and untar it. Then ensure that all the scripts are executable, for example by issuing:

    chmod +x ha* mqsi*

The hamqsicreatebroker, hamqsicreatecfgmgr and hamqsicreateusernameserver scripts used during the creation and configuration of a broker, configuration manager or UNS also create an HACMP Application Monitor for that broker, configuration manager or UNS. These application monitors are saved in /MQHA/bin and are called hamqsi_applmon.<broker>, where <broker> is the name of the broker or configuration manager to be monitored, and hamqsi_applmon.UNS for the UNS.

MSCS Installation on Windows 2003

Windows 2003 Server, WebSphere MQ, WMB and DB2 should already be installed, using the normal procedures. This document provides advice on how to create and configure an MSCS cluster running Message Broker(s), Configuration Manager(s) and a User Name Server. Such a cluster is built from existing product software from Microsoft and IBM listed in the software requirements, and the installation of the existing software products should follow the normal installation instructions for each product. It is recommended that MQ, DB2 and WMB all be installed to internal drives on each node. Windows 2003 Enterprise Server cluster components should be installed onto all cluster nodes. It is important that under normal operating conditions you are running identical versions of all the software on all cluster nodes.

VCS Installation on Solaris

Solaris, VCS, WebSphere MQ, WMB and DB2 should already be installed, using the normal procedures, onto internal disks on each of the systems. It is important that under normal operating conditions you are running identical versions of all the software on all cluster systems.

Log in to each of the cluster systems as root and change directory to /opt/VRTSvcs/bin. This is the parent directory of the directories which will contain the three agents contained in the SupportPac, and a further common subdirectory which contains files relating to all agents. Download the SupportPac onto each of the cluster systems into the /opt/VRTSvcs/bin directory and uncompress and untar it. This will create peer directories called MQSIBroker, MQSIConfigMgr and MQSIUNS. This layout of directories is assumed by the example agent methods. You could use a different layout if you wanted to, but you'd need to change some of the example scripts.

Ensure that the necessary scripts in each of the agent directories are executable, by issuing:

    cd /opt/VRTSvcs/bin/MQSIBroker
    chmod +x clean explain ha* monitor offline online
    cd /opt/VRTSvcs/bin/MQSIConfigMgr
    chmod +x clean explain ha* monitor offline online
    cd /opt/VRTSvcs/bin/MQSIUNS
    chmod +x clean explain ha* monitor offline online

The agent methods are written in perl. You need to copy or link the ScriptAgent binary (supplied as part of VCS) into each of the MQSIBroker, MQSIConfigMgr and MQSIUNS directories, as follows:

    cd /opt/VRTSvcs/bin
    cp ScriptAgent MQSIBroker/MQSIBrokerAgent
    cp ScriptAgent MQSIConfigMgr/MQSIConfigMgrAgent
    cp ScriptAgent MQSIUNS/MQSIUNSAgent

The MQSIBroker, MQSIConfigMgr and MQSIUNS resource types need to be added to the cluster configuration file. This can be done using the VCS GUI or ha* commands while the cluster is running, or by editing the types.cf file with the cluster stopped. If you choose to do this by editing the types.cf file, stop the cluster and edit /etc/VRTSvcs/conf/config/types.cf by appending the MQSIBroker, MQSIConfigMgr and MQSIUNS type definitions. For convenience, these definitions can be copied directly from the types.cf.MQSIBroker, types.cf.MQSIConfigMgr and types.cf.MQSIUNS files. See Appendix A for more details. The default parameter settings provide suggested values for the OnlineWaitLimit, OfflineTimeout and LogLevel attributes of the resource types. Configure and restart the cluster and check that the new resource types are recognized correctly by issuing the following commands:

    hatype -display MQSIBroker
    hatype -display MQSIConfigMgr
    hatype -display MQSIUNS

Linux-HA (Heartbeat) Installation

Linux, Heartbeat, WebSphere MQ, WMB and DB2 should already be installed, using the normal procedures, onto internal disks on each of the nodes. It is important that under normal operating conditions you are running identical versions of all the software on all cluster nodes. When installing WebSphere MQ, note that the "mqm" username and "mqm" groupname must have been created and each have the same numeric value on all of the cluster nodes.

For each node in the cluster, log on as mqm or root and create the /MQHA/bin directory. This is the working directory assumed by the example scripts; you could use a different location if you wanted to, but you would have to change the example scripts. Download the SupportPac onto each of the cluster nodes into a temporary directory and untar it. Then copy the script files from <temp directory>/MQHA/bin/ to /MQHA/bin. All of the scripts need to have executable permission. The easiest way to do this is to change to the working directory and run:

    chmod +x ha* mqsi*

Next copy the files from the <temp directory>/resource.d directory to the /etc/ha.d/resource.d directory. All of the scripts in the monitor directory need to have executable permission. The easiest way to do this is to change to the working directory and run:

    chmod +x mqsi* mqm*
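To give an idea of what one of the appended type definitions looks like, the following is a hedged sketch of an MQSIBroker entry for types.cf. It is not the definition shipped with the SupportPac (see Appendix A for the real files); the ArgList attribute names and the numeric values shown are illustrative assumptions only.

    type MQSIBroker (
            // Attribute names passed to the online/offline/monitor scripts (assumed)
            static str ArgList[] = { BrokerName, QueueManager, UserName }
            str BrokerName
            str QueueManager
            str UserName
            // Tuning attributes mentioned above, with placeholder values
            static int OnlineWaitLimit = 10
            static int OfflineTimeout = 300
            static int LogLevel = 0
    )

The shipped types.cf.MQSIBroker file should always take precedence over a hand-written definition, since the agent methods expect its exact attribute list.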

Chapter 4 Planning your configuration

WMB contains a number of components which perform different functions within a broker domain.

The Message Broker domain is controlled via the Configuration Manager, which is responsible for managing brokers, execution groups and message flows. From Version 6 of WMB it is now possible to locate the Configuration Manager on any platform, and in an HA cluster configuration if desired.

The message broker runs message flows. The broker depends on a queue manager, which must be run on the same node as the broker and is another critical function which needs to be under HA control. The broker also depends on access to a database known as the broker database, which may either be run locally (on the same node as the broker) or remotely on a node outside the cluster. The database is another critical component of the solution architecture and should be placed under HA control if it is run locally.

The broker domain may include a User Name Server (UNS), to provide topic-based security. If this is the case, then the UNS is also critical to the continued provision of the messaging service and should be run under HA control. Like the broker database, the UNS can be placed either inside or outside the cluster. If it is inside the cluster (i.e. it runs on one of the cluster nodes) then it needs to be placed under HA control. The UNS depends on a queue manager which must be run on the same node as the UNS.

This document provides assistance in making the Message Broker (and its queue manager and database), Configuration Manager and UNS highly available. Components which are run outside the cluster may also need to be made highly available; they could, for example, be put into a separate HA cluster. This is outside the scope of the SupportPac, but the contents of this document could be used in conjunction with the product documentation as a basis for how to do that. Alternatively, the remote components could be made highly available in other ways.

Figure 2 - Inclusion of WMB components in the cluster, showing component dependencies. The figure shows the essential contents of the HA cluster (the Message Broker, the broker's queue manager and the broker database) and the optional contents (the Configuration Manager with its queue manager, and the User Name Server with its queue manager). The double boxes show which components have an extra instance if an additional broker is configured.

Figure 2 shows the relationships between these components and their recommended placement relative to the cluster. Before proceeding, the reader is advised to study the diagram and consider which components of their solution architecture they wish to place on the cluster nodes and which components will be run remotely.

The components which you decide to run on the cluster nodes need to be put under HA control. The unit of failover in HACMP is a resource group, in VCS a service group, in ServiceGuard a package, in Heartbeat a resource group, and in MSCS a group. For the remainder of this document the term resource group will be used to represent a unit of failover. A resource group is a collection of resources (e.g. disks, IP addresses, processes) needed to deliver a highly available service. Ideally a group should contain only those processes and resources needed for a particular instance of a service. This approach maximises the independence of each resource group, providing flexibility and minimising disruption during a failure or planned maintenance.

The remainder of this chapter describes the Message Broker, Configuration Manager and UNS components in more detail and then describes the steps needed to configure the cluster.

Configuration Manager component

A WMB Configuration Manager relies on a WebSphere MQ queue manager, which in turn is dependent on other lower level resources, such as disks and IP addresses. In V6 the requirement for an external database was removed, as all of the Configuration Manager's configuration data is now held internally.

A Configuration Manager runs as a pair of processes, called bipservice and bipconfigmgr. On UNIX platforms, a Configuration Manager makes use of the /var/mqsi/components/<ConfigMgr> directory and the /var/mqsi/registry/<ConfigMgr> directory, so these directories need to be on shared disks in addition to the directories used by the queue manager.

The smallest unit of failover for a Configuration Manager is the Configuration Manager service along with the WebSphere MQ queue manager which it depends upon. A Configuration Manager's resource group must contain the Configuration Manager, its queue manager, the queue manager's shared disks and an IP address. Figure 3 shows the contents of a resource group containing the Configuration Manager.

A Configuration Manager can share its queue manager with a broker, but this is not recommended for an HA scenario as it would result in multiple components being contained within a single resource group. This would cause unnecessary disruption if a single component, such as the broker's queue manager, went down. Multiple Configuration Managers can be defined within a single domain, but they must each have their own queue manager.

The configuration steps described in later chapters show how to use the example scripts to place a Configuration Manager under HA control.

Figure 3 - Configuration Manager resource group. The figure shows a ConfigMgr group (Configuration Manager with its internal database, its queue manager, an IP address and shared disks) which can move between Node A and Node B of the cluster.
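As a hedged illustration of the kind of commands behind those later steps (not the SupportPac's hamqsicreatecfgmgr script itself), creating a Configuration Manager with its own queue manager on UNIX might look like the sketch below; the names CMGR and CMGR_QM, the service user ID mqsiuid and its password are placeholders.

    # Create and start a queue manager dedicated to the Configuration Manager
    crtmqm CMGR_QM
    strmqm CMGR_QM

    # Create the Configuration Manager itself; in V6 its configuration data
    # is held internally, so no database parameters are needed
    mqsicreateconfigmgr CMGR -i mqsiuid -a password -q CMGR_QM

    # Start it under its service user ID
    mqsistart CMGR

In an HA configuration the queue manager and the /var/mqsi/components and /var/mqsi/registry directories for CMGR would be created on the shared disks described above, so that either node can host them.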

Broker component

A WMB broker relies on a WebSphere MQ queue manager and a database, referred to as the broker database, which in turn are dependent on other lower level resources, such as disks and IP addresses. A broker runs as a pair of processes, called bipservice and bipbroker; the latter in turn creates the execution groups that run message flows. It is this collection of processes which is managed by the HA software. On UNIX platforms, a message broker makes use of the /var/mqsi/components/<broker> directory and the /var/mqsi/registry/<broker> directory, so these directories need to be on shared disks in addition to the directories used by the queue manager and broker database instance.

The broker database might be run in a database instance on the same node as the message broker, in which case the database instance and its lower level dependencies must be failed over with the message broker. Alternatively, the broker database might be run in a remote instance accessed using a remote ODBC connection, in which case it is necessary to ensure that the database is accessible from either cluster node, so that the message broker can operate correctly on either node.

The smallest unit of failover of a WMB message broker is the broker service together with the WebSphere MQ queue manager upon which it depends and, if run locally, the broker database. A broker group must contain the message broker, the broker's queue manager and the queue manager's shared disks and IP address. If the broker database is run locally then the broker group also needs to contain the database instance and its shared disks. The resource groups that might therefore be constructed, depending on your placement of broker databases, are shown in Figure 4 and Figure 5.

The optimal configuration of brokers and groups is to place each broker in a separate group, with the resources upon which it depends. The UNS and Configuration Manager should also be placed in separate groups. You could put multiple message brokers (and associated resources) into a group, but if you did they would all have to fail over to another node together, even if the problem causing the failover were confined to one message broker or its dependencies. This would cause unnecessary disruption to other message brokers in the same group. Additional brokers should be configured to use separate database instances and be placed in separate groups. HACMP users who wish to use application monitoring should also note the restriction that only one application server in a resource group can be monitored. If you wanted to monitor multiple brokers in the same group, you would need to edit the example scripts so that the brokers were in the same application server.

The configuration steps described in later chapters show how to use the example broker scripts to place a broker under HA control.
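For comparison, a hedged sketch of the equivalent non-HA commands for a broker follows; the later chapters use the SupportPac's hamqsicreatebroker wrapper instead, and the names BRK1, BRK1_QM, the database BRKDB and the user IDs are placeholders.

    # Queue manager dedicated to the broker
    crtmqm BRK1_QM
    strmqm BRK1_QM

    # The broker database (local or remote DB2 instance) is assumed to exist
    # already and to be accessible as BRKDB from every node that can host the broker

    # Create the broker, naming its queue manager and broker database
    mqsicreatebroker BRK1 -i mqsiuid -a password -q BRK1_QM -n BRKDB -u dbuid -p dbpassword

    mqsistart BRK1

The HA scripts wrap these steps so that the queue manager data, broker directories and (if local) database files all land on the shared disks owned by the broker's resource group.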

Figure 4 - Broker group including local database. The figure shows a broker group (message broker, queue manager, broker database, IP address and shared disks) which can move between Node A and Node B of the cluster.

Figure 5 - Broker group relying on remote database. The figure shows a broker group (message broker, queue manager, IP address and shared disks) which can move between nodes, with the broker database accessed over remote ODBC connections from either node.

User Name Server component

The User Name Server (UNS) provides a topic security service and is accessed via WebSphere MQ. It must have a queue manager which is run on the same node as the UNS, and the queue manager will require an IP address and shared disk resources.

The IP address and shared disk are separate from those described earlier for the broker. The UNS runs as a pair of processes, bipservice and bipuns. It is this collection of processes which is managed by the HA software. On AIX, the UNS makes use of the /var/mqsi/components/UserNameServer directory and the /var/mqsi/registry/UserNameServer directory, so these directories need to be stored on shared disks in addition to the directories used by the queue manager.

Similar to the rationale for placing each message broker in a separate resource group, the most flexible configuration for the UNS is for it to be placed in a resource group of its own. This resource group must contain the UNS, its queue manager and the queue manager's shared disks and IP address. Figure 6 shows the contents of a resource group containing the UNS.

You could put the UNS in the same group as one or more brokers, and it could share its queue manager with a broker, but this would bind the UNS permanently to that broker. For flexibility it is better to separate them. HACMP users who wish to use application monitoring should also note the restriction that only one application server in a resource group can be monitored. If you were to place the UNS in the same group as one or more brokers and wanted to monitor all of them, you would need to edit the example scripts to put multiple components in an application server.

The configuration steps described in later chapters show how to use the example scripts to place the UNS under HA control.

Figure 6 - UNS group. The figure shows a UNS group (User Name Server, its queue manager, an IP address and shared disks) which can move between Node A and Node B of the cluster.

Architectural guidelines

From the preceding discussion it is apparent that there are a number of choices as to which components your architecture will contain and which of them will be clustered. It is advisable to decide and record at this stage what the architecture will be. The following list provides a few suggested guidelines. You don't have to adhere to them, but they may help. Figure 7 shows one possible architecture which implements the guidelines.

• Each Configuration Manager should be in a separate resource group.

Each Configuration Manager must have its own queue manager.

The UNS, if you have one, should be run in the cluster and should be in a separate resource group. The UNS should have its own queue manager, which must be in the same resource group as the UNS.

Each broker should be in a separate resource group dedicated to that broker.

Each broker must have its own queue manager. The queue manager must be in the same resource group as the broker.

Each broker should use a separate database instance, not used by any other brokers. The broker database instance can be remote, in which case it should be run on a machine outside the cluster. Alternatively, the broker database instance can be run on the same machine as the broker, although it does not need to be in the same resource group as the broker.

Example cluster architecture

Figure 7 - An example cluster architecture

Implementation

Chapter 1 General Configuration

Configure the cluster

It is assumed that you will create a single cluster in which brokers and (optionally) the Configuration Manager and UNS will be run. You could put the Configuration Manager, brokers and UNS into separate clusters, in which case you should repeat this step for each cluster you wish to create.

Configuration of the cluster should be performed as described in the cluster software documentation, i.e. the HACMP Documentation, ServiceGuard Documentation or VCS Documentation. For Linux-HA refer to Chapter 2.

Actions:
1. Configure TCP/IP on the cluster nodes as described in your cluster software documentation.
2. Configure the cluster, cluster nodes and adapters to the HA Software as usual.
3. Synchronise the Cluster Topology.
4. Test the operation of the cluster by creating an instance of a Generic Application (such as the System Clock) and ensure that it fails back and forth between the nodes in accordance with the resource parameters you set for it. Also test the operation of a shared drive placed under HA control, making sure that it fails back and forth correctly.

Now would be a good time to create and configure the user accounts that will be used to run the database instances, brokers and UNS. Home directories, (numeric) user ids, passwords, profiles and group memberships should be the same on all cluster nodes.

Configure the shared disks

This step creates the filesystems required by a resource group containing a broker and its dependencies, the UNS and its dependencies (queue manager and UNS), or the ConfigMgr and its dependants. The suggested layout is based on the advice earlier that each Configuration Manager, broker or UNS should be put into a separate resource group. If you chose a different architecture in Chapter 4, then you may need to vary these instructions.

VCS Only: You need to create one or more VxVM disk groups, which each contain a set of multihost disks that can be accessed by multiple systems, although not concurrently. Only the system which currently "owns" a disk group can access the disks in that group.

You need to create a volume group for each resource group. Within each volume group, you can decide where to place the files for the queue manager, database instance and broker. For each broker that you wish to run in the cluster, you also need to specify where the broker queue manager, the broker database instance and the broker itself should store their files, but the following is recommended:

Queue manager - refer to SupportPac MC91 for details of how the queue manager's filesystems should be organised.

Database instance - you can locate the database instance home directory for each broker's database instance under the data directory for the corresponding queue manager, in the same filesystem.

Broker - a broker stores some information on disks. When you install WMB on a node, it creates the /var/mqsi directory within the existing /var filesystem on internal disks. There is only one such directory per node, regardless of the number of brokers that the node may host. Within the /var/mqsi directory, there are broker specific directories which need to be on shared disk so that they can be moved to a surviving node in the event of a failure. The remainder of /var/mqsi, those parts which are not specific to one particular broker, should be on internal disks.

The broker specific data stored on shared disk can be placed in the filesystem used for the queue manager data files, below the directory specified by the MQHAFSDATA environment variable used in SupportPac MC91. The broker-related commands described later in this chapter set this up for you. The split between which broker directories are local and which are shared is shown in Figure 8.

Configuration Manager - the directories should be organised in the same way as the broker directories, with the exception that a Configuration Manager has no database.

UNS - the UNS directories should be organised in the same way as the broker directories, with the exception that a UNS has no database. The UNS-related commands described later in this chapter set up the UNS directories for you.

Filesystem organisation

This diagram shows the filesystem organisation for a single broker, called imb1, using queue manager ha.csq1 and database instance db2inst1. It should be read in conjunction with the similar diagram from SupportPac MC91.

Figure 8 - Division of broker directories between local disks and shared disks

Actions:
1. For each broker, create a volume group that will be used for the queue manager's data and log files, the database instance and the broker-specific directories under /var/mqsi. When choosing the path names for the filesystems you may prefer to use the name of the broker, configuration manager or the name "UNS" instead of using the name of the queue manager.
2. For each volume group, create the data and log filesystems as described in Step 2 of SupportPac MC91.
3. For each node in turn, ensure that the filesystems can be mounted, then unmount the filesystems.
4. For the UNS, create a volume group that will be used for the queue manager's data and log files and the UNS directories under /var/mqsi. For the Configuration Manager, create a volume group that will be used for the queue manager's data and log files and the Configuration Manager directories under /var/mqsi.
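On Linux with LVM, the volume group and filesystem actions above might look like the following sketch. The device name /dev/sdb1, the sizes, the ext3 filesystem type and the /MQHA/ha.csq1 paths are all assumptions; on AIX, HP-UX or with VxVM the equivalent platform commands apply.

pvcreate /dev/sdb1                              # prepare the shared disk (assumed device)
vgcreate haqm1vg /dev/sdb1                      # one volume group per resource group
lvcreate -L 1G -n haqm1data haqm1vg             # logical volume for queue manager data
lvcreate -L 1G -n haqm1log  haqm1vg             # logical volume for queue manager logs
mkfs -t ext3 /dev/haqm1vg/haqm1data
mkfs -t ext3 /dev/haqm1vg/haqm1log
mkdir -p /MQHA/ha.csq1/data /MQHA/ha.csq1/log
mount /dev/haqm1vg/haqm1data /MQHA/ha.csq1/data
mount /dev/haqm1vg/haqm1log  /MQHA/ha.csq1/log
# repeat the mount test on each node in turn, then unmount again:
umount /MQHA/ha.csq1/data /MQHA/ha.csq1/log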

any of the nodes that could host the queue manager can be used. Create the /MQHA/<qmgr>/data and /MQHA/<qmgr>/log filesystems using the volume 4. It does not matter on which node you do this. 2. As root run the hacrtmqm script. The parameters to pass this script are provided by the hacrtmqm script. Below is a guide how to create and delete queue managers to be used with HA-Linux. 5. Check the queue manager works on all other nodes by unmounting the filesystem and mounting it on each node in turn. Select a node on which to perform the following actions 2. End the queue manager manually. We will comlete the first part of this by creating a resource group which mounts the required filesystems and IP adresses. 1. When you create the queue manager. which may takeover the queue manager. Create a new resource group and place under heartbeat control Once it has been checked the queue manager restarts without any problems it needs to be placed under the control of HeartBeat. using endmqm. Create any queues and channels.Linux-HA Queue Manager Creation The MQ Support Pac MC91 does not cover how to setup your queue manager for HA-Linux. 4. Start the queue manager manually.IC91: High Availability for Websphere Message Broker Chapter 2 Configuration steps for MQ . hacrtmqm moves and relinks some subdirectories and for Linux-HA. Check the queue manager is operational by putting a test message to a queue. For example. Ensure the queue manager’ filesystems are mounted on the selected node. Open up the /var/lib/heartbeat/crm/cib. 3. For each node in turn. On the other nodes. it is strongly advised that you should use the hacrtmqm script included in the SupportPac. 6. Create the volume group that will be used for this queue manager’ data and log files. The move and relink of these subdirectories is to ensure smooth coexistence of queue managers which may run on the same node. ensure that the filesystems can be mounted. 1. using the strmqm command. It is possible to create the queue manager manually. but using hacrtmqm will save a lot of effort. Create the queue manager on this node. run the halinkmqm script. 7. 8. Issue the strmqm command on each note to check the queue manager will start without problems. Mount the volume group in the location /MQHA/<qmgr> 3. unmount the filesystems Select a node on which to create the queue manager.xml and add the following lines highlighted in italics/red <cib> <configuration> <crm_config> <cluster_property_set id="default"> <attributes> <nvpair id="is_managed_default" name="is_managed_default" value="true"/> </attributes> </cluster_property_set> </crm_config> <nodes> <node id="3928ccf5-63d2-4fbd-b4ed-e9d6163afbc9" uname="ha-node1" type="normal"/> </nodes> <resources> # New Code Starts Here <group id="group_1"> <primitive class="ocf" id="IPaddr_1" provider="heartbeat" type="IPaddr"> 4 .

org/v2_2fExamples_2fSimple) Make sure that all nodes have been updated with the new ha.IC91: High Availability for Websphere Message Broker <operations> <op id="IPaddr_1_mon" interval="5s" name="monitor" timeout="5s"/> </operations> <instance_attributes id="IPaddr_1_inst_attr"> <attributes> <nvpair id="IPaddr_1_attr_0" name="ip" value="192. nvpair id="Filesystem_2_attr_1" name="directory" value="/MQHA/cm1qm"/ This is the location you want to mount the shared disk onto. For further information on the definitions of the tags please refer to the Linux-HA documentation (http://linuxha.xml file can be found in the support pac. Crm_resource –D –r My_ConfigMgr_Group –t group Restart Heartbeat on the node. value="192. Use crm_resource to delete the resource group from the cib.xml file i. nvpair id="Filesystem_2_attr_0" name="device" value="/dev/sdb1"/ This it the shared disc which holds the file system for the queue manager. 5 .e.1.11" This is the IP address for the resource group. Stop Heartbeat on all the nodes.1. An exmple cib.cf before restarting hearbeat on the node the queue manager is based Removing a Queue manager from Heartbeat control Removing a queue manager from Heartbeat control is fairly simple.168.168.11"/> </attributes> </instance_attributes> </primitive> <primitive class="ocf" id="Filesystem_2" provider="heartbeat" type="Filesystem"> <operations> <op id="Filesystem_2_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id="Filesystem_2_inst_attr"> <attributes> <nvpair id="Filesystem_2_attr_0" name="device" value="/dev/sdb1"/> <nvpair id="Filesystem_2_attr_1" name="directory" value="/MQHA/cm1qm"/> <nvpair id="Filesystem_2_attr_2" name="fstype" value="ext2"/> </attributes> </instance_attributes> </primitive> </group> # New Code Ends Here </resources> <constraints> # New Code Starts Here <rsc_location id="rsc_location_group_1" rsc="group_1"> <rule id="prefered_location_group_1" score="100"> <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="ha-node1"/> </rule> </rsc_location> # New Code Ends Here </constraints> </configuration> </cib> The main parameters to change in the script above are: • • • • uname="ha-node1" Node on which the resource should normally be run.
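Putting the removal steps together, one possible command sequence is sketched below. The group name group_1 is the illustrative one from the cib.xml sample above, and the init script path is the usual Heartbeat default; adjust both to your installation.

crm_resource -D -r group_1 -t group      # delete the resource group from the CIB
crm_mon -1                               # one-shot status check: the group should be gone
/etc/init.d/heartbeat stop               # then restart Heartbeat on the node that hosted it
/etc/init.d/heartbeat start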

Queue Manager Deletion

If you decide to delete the queue manager, you should first remove it from the cluster configuration as described above, then perform the following actions to delete the queue manager:
1. Mount the shared discs which contain the queue manager data. Note: The queue manager's filesystems will already be mounted if you have only removed the resource group from Heartbeat, as the queue manager will still be running.
2. Make sure the queue manager is stopped, by issuing the endmqm command.
3. On the node which currently has the queue manager's shared disks and has the queue manager's filesystems mounted, run the hadltmqm script provided in the SupportPac, which will clean up the subdirectories related to the queue manager.
4. On each of the other nodes in the cluster:
   a. Run the hadltmqm command as above.
   b. Manually remove the queue manager stanza from the /var/mqm/mqs.ini file.
5. You can now destroy the filesystems /MQHA/<qmgr>/data and /MQHA/<qmgr>/log.

The queue manager has now been completely removed from the cluster and the nodes.
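The same steps expressed as commands, for a queue manager called ha.csq1. The hadltmqm argument and the mount devices are assumptions - check the script's usage notes in the SupportPac before running it.

QM=ha.csq1
mount /dev/haqm1vg/haqm1data /MQHA/$QM/data    # only if not already mounted
mount /dev/haqm1vg/haqm1log  /MQHA/$QM/log
endmqm -i $QM                                  # make sure the queue manager is stopped
hadltmqm $QM                                   # on the node holding the shared disks
# On every other node: run hadltmqm again, then edit /var/mqm/mqs.ini and
# remove the QueueManager stanza for $QM by hand.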

For ServiceGaurd During the creation of the UNS queue manager. Create the UNS queue manager The UNS relies on a queue manager to receive requests from brokers. For VCS The service group in which the UNS and its queue manager will run is created during the creation of the queue manager. The UNS queue manager needs to be configured so that the UNS can communicate with brokers and the Configuration Manager. The only difference between the clustered UNS and non-clustered UNS configurations is that in the clustered case 7 . Such configuration is described assuming that the UNS and broker are using separate queue managers. you will create the resource group as described in SupportPac MC91. This is to avoid the interruption to the UNS which would be caused by the reintegration of the top priority node after a failure. For maximum flexibility it is recommended that the UNS is in a separate resource group from any of the brokers in the cluster. If run in a separate cluster. as described in SupportPac MC91.e. then the UNS will be completely independent from the broker cluster and will require its own queue manager. The queue manager will use the IP address managed by the service group. This is the address which clients and channels will use to connect to the queue manager. The following instructions are written assuming that the UNS is being put in a separate resource group. then you have a choice as to how independent you make it. or the UNS can be run elsewhere on a standalone machine. as discussed in Chapter 3. If the UNS is put in the same cluster as the broker(s). allowing you to run the UNS and brokers on separate nodes. The same principles apply in both cases. The logical address is the address which clients and channels will use to connect to the queue manager. If you choose cascading. then it is probably better to manually move the resource group back to the preferred node when it will cause minimum disruption.IC91: High Availability for Websphere Message Broker Chapter 3 Configuration steps for UserNameServer – UNIX & Linux If you plan to use a UNS within the broker domain then you have two choices. its queue manager. regarding the relationship between the UNS. bear the following points in mind: The resource group will use the IP address as the service label. The UNS can either be run in an HA cluster. For HACMP During the creation of the UNS queue manager. you will create the package as described in SupportPac MC91. for example the one that contains the broker(s). as described in the WMB Administration Guide) and proceed to Chapter 4 A clustered UNS can be run in the same cluster as the broker or it can be run in a separate cluster. If you were to decide to configure the UNS to use a queue manager that will also be used by a broker you would have to configure the UNS and broker to be in the same resource group. Perform this step if you plan to cluster the UNS. If they are sharing a queue manager then you can omit the creation of the transmission queues and channels. If you plan to use a non-clustered UNS then create and configure it in the normal manner (i. rather than an IP address statically assigned to a system. Whichever you choose. it is recommended that you consider disabling the automatic fallback facility by setting Cascading Without Fallback to true. This step creates the queue manager. The resource group can be either cascading or rotating. Unless you have a specific requirement which would make automatic fallback desirable in your configuration. 
This requires that the UNS have a separate queue manager from any queue managers used by the brokers.

Create the UNS on the node hosting the resource group using the hamqsicreateusernameserver command. Test that the above queue managers can communicate regardless of which node owns the resource group for the UNS. 8 .IC91: High Availability for Websphere Message Broker you need to use a virtual IP address for channels sending to the UNS queue manager rather than the machine IP address. Don't configure the application server or application monitor described in SupportPac MC91. This is handled for you if you create the UNS using the hamqsicreateusernameserver command. c. 3. Linux-HA Create a queue manager as decribed in Chapter 2During the creation of the UNS queue manager. Ensure that the queue is given the same name and case as the Configuration Manager queue manager. 2. Ensure that the queue is given the same name and case as the UNS queue manager. Set up queues and channels between the UNS queue manager and the Configuration Manager queue manager: a. It creates the UNS and then moves the above directories to shared disk and creates symlinks to them from their usual paths. d.you will create an application server that covers both the UNS and the queue manager. On the UNS queue manager create a transmission queue for communication to the Configuration Manager queue manager. which accepts the same arguments as the mqsicreateusernameserver command. The transmission queue should be set to trigger the sender channel. create a cltered queue manager as described in SupportPac MC91. The sender channel should use the IP address of the machine where the Configuration Manager queue manager runs. On the Configuration Manager queue manager create a sender and receiver channel for communication with the UNS queue manager. Use the volume group that you created for the UNS and place the volume group and queue manager into a resource group to which the UNS will be added in.HACMP/ServiceGuard/VCS/Linux-HA The UNS stores data to disk in the /var/mqsi/components/ /UserNameServer directory and the /var/mqsi/registry/UserNameServer directory. and the corresponding listener port number. you will create the resource group as described in Chapter 2 Create the Clustered UNS . The transmission queue should be set to trigger the sender channel. Actions: 1. On the UNS queue manager create sender and receiver channels to match those just created on the Configuration Manager queue manager. The hamqsicreateusernameserver command creates the UNS directories under the data path of the queue manager used by the UNS. using the hacrtmqm command. b. These directories need to be on shared disk. On the Configuration Manager queue manager create a transmission queue for communication to the UNS queue manager. The sender channel should use the service address of the UNS resource group and the UNS queue manager's port number. This directory is the parent of the "qmgrs" directory on shared disk in which the queue manager exists. On one node. Actions: 1.
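The queue and channel definitions in the Actions above could be entered as follows on the UNS queue manager ha.csq1. The Configuration Manager queue manager name CFGMGR.QM, the host name cfgmgr.example.com, the port 1414 and the channel names are all illustrative; the only fixed requirement is that the transmission queue carries the exact name and case of the remote queue manager.

runmqsc ha.csq1 <<'EOF'
DEFINE QLOCAL('CFGMGR.QM') USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) +
       INITQ('SYSTEM.CHANNEL.INITQ') TRIGDATA('HA.CSQ1.TO.CFGMGR.QM')
DEFINE CHANNEL('HA.CSQ1.TO.CFGMGR.QM') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('cfgmgr.example.com(1414)') XMITQ('CFGMGR.QM')
DEFINE CHANNEL('CFGMGR.QM.TO.HA.CSQ1') CHLTYPE(RCVR) TRPTYPE(TCP)
EOF

The matching definitions on the Configuration Manager queue manager are the mirror image, with the sender channel's CONNAME pointing at the service (virtual) IP address of the UNS resource group rather than at an individual node.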

e. It parses the /var/mqm/mqs.the account under which the UNS service runs Example: hamqsiaddunsstandby ha. It parses the /var/mqm/mqs. Syntax hamqsicreateusernameserver <creation parameters> Parameters creation parameters . On any other nodes in the resource group's nodelist (i. You will need to login as the user id under which the UNS runs to test this. You must be root to run this command. The invocation of the hamqsicreateusernameserver command uses exactly the same parameters that you would normally use for mqsicreateusernameserver.csq1 2.ini file to locate this path information.which is created as described earlier.csq1 mqsi 9 . hamqsiaddunsstandby The hamqsiaddunsstandby command will create the information required for a cluster node to act as a standby for the UNS. Ensure that you can start and stop the UNS manually using the mqsistart and mqsistop commands. run the hamqsiaddunsstandby command to create the information needed by these nodes to enable them to host the UNS. excluding the one on which you just created the UNS). Syntax hamqsiaddusnstandby <qm name> <userid> Parameters qm name .are exactly the same as for the regular WMB mqsicreateusernameserver command Example: hamqsicreateusernameserver -i mqsi -a mqsi -q ha. The hamqsicreateusernameserver command puts the UNS directories under the same path used for the data associated with the queue manager which the UNS uses. You must be root to run this command. 3.IC91: High Availability for Websphere Message Broker hamqsicreateusernameserver command The hamqsicreateusernameserver command will create the UNS and will ensure that its directories are arranged to allow for HA operation.ini file to locate this path information. The hamqsiaddunsstandby command expects the UNS directories to have been created by the hamqsicreateusernameserver command under the same path used for the data associated with the queue manager which the UNS uses. This command does not create the UNS . This command defines symbolic links within subdirectories under /var/mqsi on the standby node which allow the UNS to move to that node.the name of the queue manager used by the UNS userid .
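Taken together, the creation and standby commands above (using the same example values from the frames) run as follows; the manual start/stop check is performed as the userid under which the UNS runs.

# On the node that currently owns the UNS resource group:
hamqsicreateusernameserver -i mqsi -a mqsi -q ha.csq1
# On every other node in the resource group's nodelist:
hamqsiaddunsstandby ha.csq1 mqsi
# Manual start/stop check, logged in as the UNS service userid:
su - mqsi -c "mqsistart UserNameServer"
su - mqsi -c "mqsistop UserNameServer"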

IC91: High Availability for Websphere Message Broker Place the UNS under cluster control HACMP/ServiceGuard The UNS and its queue manager are managed by a single HA application server. include the parameters. Recovery actions include the ability to perform local restarts of the UNS and queue manager (see below) or to cause a failover of the resource group to another node. When you ran hamqsicreateusernameserver an application monitor was created. They invoke methods supplied by MC91 to control the queue manager and the hamqsi_start_uns and hamqsi_stop_uns methods to control the UNS. which can be configured using the example UNS scripts supplied by this SupportPac. This script is robust in that it does not assume anything about the state of the UNS or queue manager on entry. Example "/MQHA/bin/hamqsi_start_uns_as ha.UNS If you use the application monitor. so when you define the start command in HA. hamqsi_start_uns_as The example start script is called hamqsi_start_uns_as. You can configure an application monitor which will monitor the health of the UNS and its queue manager. It accepts two command line parameters which are the queue manager name and the userid under which the UNS runs. In HACMP you can only configure one application monitor per resource group.csq1 mqsi" 10 . it will call it periodically to check that the UNS processes and queue manager are running. The UNS depends on the queue manager and the start and stop sequence is coded in the example scripts. This application monitor is specifically for monitoring an application server containing a UNS and queue manager and is called: hamqsi_applmon. The example application monitor checks that the bipservice process is running. The bipservice process monitors and restarts the bipuns process. The UNS scripts are: hamqsi_start_uns_as hamqsi_stop_uns_as hamqsi_start_uns hamqsi_stop_uns The hamqsi_start_uns_as and hamqsi_stop_uns_as scripts are the ones you configure as the start and stop methods for the application server. This will monitor the UNS and its queue manager and trigger recovery actions as a result of failures.

The example application monitor script provided in this SupportPac is described in the 2. Actions: 1. which also include the parameters that control whether a failure of either component of the application server will trigger a restart. hamqsicreateusernameserver. The example scripts are called hamqsi_start_uns_as and hamqsi_stop_uns_as and are described in the following frames. When you define the stop command in HA you should include the parameters. so just specify the name of the monitor script. An application monitor script cannot be passed parameters. With these settings. The stop command has to ensure that the UNS and queue manager are both fully stopped by the time the command completes. Failure of either test will result in the application monitor returning a non-zero exit code. Example "/MQHA/bin/hamqsi_stop_uns_as ha. The stop script accepts three command line parameters. including the monitoring interval and the restart parameters you require. if successive restarts fail without a significant period of stability between.UNS The hamqsi_applmon. The fault monitoring interval is configured in the HA Software. You can also specify an application monitor using the hamqsi_applmon. and that the time period is set to a small multiple of the expected start time for the components of the UNS group. then the resource group will failover to a different node. indicating that the UNS and queue manager are working properly.UNS script created by hamqsi_applmon.IC91: High Availability for Websphere Message Broker hamqsi_stop_uns_as The example stop script is called hamqsi_stop_uns_as. then a more severe stop is performed. Attempting more restarts on a node on which a restart has just failed is unlikely to succeed. It is recommended that the restart count is set to 1 so that one restart is attempted. The monitoring script accepts no parameters.UNS" The example application monitor is tolerant if it finds that the queue manager is starting because this may be due to the stabilisation interval being too short. Also configure the other application monitor parameters. 11 . Create an application server which will run the UNS and its queue manager using the example scripts provided in this SupportPac. HACMP/ServiceGuard will then take whatever action has been configured. It is a parameter-less wrapper script that calls hamqsi_monitor_uns_as which checks the state of the UNS and queue manager.csq1 mqsi 10" The stop command will use the timeout parameter as the time to allow either the queue manager or UNS to respond to an attempt to stop it.UNS created for you by hamqsicreateusernameserver will be called at the polling frequency you specify. the first is the queue manager name. If you wish to use the example application monitor then supply its for the UNS application server. Example "/MQHA/bin/hamqsi_applmon. If a stop attempt times out. Success of both tests causes the application monitor to return a zero exit code. the second parameter is the UNS userid and the third is the timeout (in seconds) to use on each of the levels of severity of stop.
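The SupportPac's monitor script is described above; purely as an illustration of the two checks it performs, a minimal equivalent might look like the sketch below. The component name UserNameServer appearing on the bipservice command line and the queue manager name ha.csq1 are assumptions, and PING QMGR is simply one way of confirming that the queue manager responds.

#!/bin/sh
# Report healthy (exit 0) only if both the UNS service process and its
# queue manager respond; a non-zero exit tells the HA software to recover.
ps -ef | grep "bipservice UserNameServer" | grep -v grep > /dev/null || exit 1
echo "PING QMGR" | runmqsc ha.csq1 > /dev/null 2>&1 || exit 1
exit 0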

4.stops the UNS monitor .cf or types. Ensure that taking the service group offline stops the UNS. Ensure that the UNS is stopped. Ensure that the UNS and its queue manager are stopped. then ensure that the configuration is “read-write” and use hacf to verify the changes. Create the MQSIUNS resource type.tests the health of the UNS clean .starts the UNS offline .performs cleanup (forced termination) of the UNS The SupportPac contains the definition of the MQSIUNS resource type. 4. 6. If you opt to edit the files. or by copying the content of the types.cf files. provided by SupportPac MC91). 6. The resource needs to be in the same service group as the queue manager upon which the UNS depends and it should have a “requires” statement to record the dependency. With the service group online. This dependency tells VCS that the queue manager needs to be started before the UNS and that the UNS needs to be stopped before the queue manager.IC91: High Availability for Websphere Message Broker following frame: 3. Ensure that stopping the application server stops the UNS and its queue manager. The sample main. Veritas Cluster Server The UNS is managed by the MQSIUNS agent supplied in this SupportPac. verify that the local restart capability is working as configured. During this testing a convenient way to cause failures is to identify the bipservice for the UserNameServer and kill it.cf for the cluster. Actions: 1. The agent contains the following methods: online . The queue manager and UNS should start.MQSIUNS file into the existing types.cf file. Because the UNS needs to be co-located with its queue manager. 2.cf file. 7. This can be performed using the VCS GUI or by editing the main. and start the application server. During this testing a convenient way to cause failures is to identify the bipservice for the UserNameServer and kill it.MQSIUNS file in the main. Linux-HA The UNS is managed by the mqsiuns agent supplied in this SupportPac.MQSIUNS file.cf included in Appendix A can be used as a guide. the UNS resource needs to have a resource dependency on the corresponding queue manager resource (of resource type MQM. verify that the monitor correctly detects failures and configure the restart attributes to your desired values. Synchronise the cluster resources. and bring the service group online. Check that the UNS and queue manager started and test that the resource group can be moved from one node to the other and that the UNS runs correctly on each node. Add a resource of type MQSIUNS. Verify and enable the changes. 5. The agent contains the following methods: 12 . which can be found in the types. 3. Check that the UNS started and test that the service group can be switched from one system to another and that the UNS runs correctly on each system. 5. either by including the types. either using the VCS GUI or by editing the main. With the application server started. The UNS is put under cluster control by creating a resource of this type.

starts the UNS stop . This will monitor the UNS and its queue manager and trigger recovery actions as a result of failures.stops the UNS status . An extra primative class is required for the username server as shown below higlighted in italics/red <cib> <configuration> <crm_config> <cluster_property_set id="default"> <attributes> <nvpair id="is_managed_default" name="is_managed_default" value="true"/> </attributes> </cluster_property_set> </crm_config> <nodes> <node id="3928ccf5-63d2-4fbd-b4ed-e9d6163afbc9" uname="ha-node1" type="normal"/> </nodes> <resources> <group id="group_1"> <primitive class="ocf" id="IPaddr_1" provider="heartbeat" type="IPaddr"> <operations> <op id="IPaddr_1_mon" interval="5s" name="monitor" timeout="5s"/> </operations> <instance_attributes id="IPaddr_1_inst_attr"> <attributes> <nvpair id="IPaddr_1_attr_0" name="ip" value="192.IC91: High Availability for Websphere Message Broker start . If you have followed the instruction in Chapter 2 you should already have a resource group for the uns. hamqsi_start_uns_as hamqsi_stop_uns_as hamqsi_start_uns hamqsi_stop_uns The mqsiusn script is the one you configure with Heartbeat which calls hamqsi_start_uns_as and hamqsi_stop_uns_as.1. For this support pac we will only use one To place the user name server under the control of Heartbeat the /var/lib/heartbeat/crm/cib. They invoke methods supplied by scripts to control the queue manager and the hamqsi_start_uns and hamqsi_stop_uns methods to control the UNS.xml file must be updated to include the uns as a component in the resource group.tests the health of the UNS This agent relies on a number of other scripts such as.11"/> </attributes> </instance_attributes> </primitive> <primitive class="ocf" id="Filesystem_2" provider="heartbeat" type="Filesystem"> <operations> <op id="Filesystem_2_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id="Filesystem_2_inst_attr"> <attributes> <nvpair id="Filesystem_2_attr_0" name="device" value="/dev/sdb1"/> <nvpair id="Filesystem_2_attr_1" name="directory" value="/MQHA/cm1qm"/> <nvpair id="Filesystem_2_attr_2" name="fstype" value="ext2"/> </attributes> </instance_attributes> </primitive> # New section starts here <primitive class="heartbeat" id="mqsiuns_4" provider="heartbeat" type="mqsiuns"> <operations> 13 . You can configure an application monitor which will monitor the health of the UNS and its queue manager.168. Recovery actions include the ability to perform local restarts of the UNS and queue manager (see below) or to cause a failover of the resource group to another node. With Heartbeat its possible to configure many monitors for a single resource group.tests the health of the UNS monitor .

Once you have made this change make sure you syncronise the cib. <nvpair id=" mqsiuns_4_attr_3" name="3" value="argostr"/> Name of the usn userID. include the parameters.xml file to all other machines within the cluster. hamqsi_start_uns_as The example start script is called hamqsi_start_uns_as. so when you define the start command in HA. It accepts two command line parameters which are the queue manager name and the userid under which the UNS runs. Example "/MQHA/bin/hamqsi_start_uns_as ha.IC91: High Availability for Websphere Message Broker <op id="mqsiuns_4_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id="mqsiuns_4_inst_attr"> <attributes> <nvpair id=" mqsiuns_4_attr_1" name="1" value="usnqm"/> <nvpair id=" mqsiuns_4_attr_2" name="2" value="mqm"/> <nvpair id=" mqsiuns_4_attr_3" name="3" value="argostr"/> </attributes> </instance_attributes> </primitive> # New section ends here </group> </resources> <constraints> <rsc_location id="rsc_location_group_1" rsc="group_1"> <rule id="prefered_location_group_1" score="100"> <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="ha-node1"/> </rule> </rsc_location> </constraints> </configuration> </cib> The main parameters to change in the script above are: • • • <nvpair id=" mqsiuns_4_attr_1" name="1" value="usnqm"/> Name of the queue manager the uns runs on.csq1 mqsi" 14 . This script is robust in that it does not assume anything about the state of the UNS or queue manager on entry. <nvpair id=" mqsiuns_4_attr_2" name="2" value="mqm"/> Name of the MQ userID.
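To propagate a hand-edited cib.xml to the other cluster nodes, as mentioned above, one cautious approach is to copy it while Heartbeat is stopped. The paths are the Heartbeat v2 defaults and ha-node2 is an assumed peer name.

/etc/init.d/heartbeat stop                                # on every node first
scp /var/lib/heartbeat/crm/cib.xml ha-node2:/var/lib/heartbeat/crm/cib.xml
ssh ha-node2 rm -f /var/lib/heartbeat/crm/cib.xml.sig     # stale signature file, if present
/etc/init.d/heartbeat start                               # then restart Heartbeat on each node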

The stop command has to ensure that the UNS and queue manager are both fully stopped by the time the command completes. If a stop attempt times out.IC91: High Availability for Websphere Message Broker hamqsi_stop_uns_as The example stop script is called hamqsi_stop_uns_as. the second parameter is the UNS userid and the third is the timeout (in seconds) to use on each of the levels of severity of stop. Example "/MQHA/bin/hamqsi_stop_uns_as ha. When you define the stop command in HA you should include the parameters.csq1 mqsi 10" The stop command will use the timeout parameter as the time to allow either the queue manager or UNS to respond to an attempt to stop it. The stop script accepts three command line parameters. the first is the queue manager name. then a more severe stop is performed. 15 .

The logical address is the address which clients and channels will use to connect to the queue manager. Unless you have a specific requirement which would make automatic fallback desirable in your configuration. or the it can be run elsewhere on a standalone machine. rather than an IP address statically assigned to a system. This is to avoid the interruption to the configuration manager which would be caused by the reintegration of the top priority node after a failure. If the configuration manager is put in the same cluster as the broker(s). Such configuration is described assuming that the 16 . then the configuration manager will be completely independent from the broker cluster and will require its own queue manager. This step creates the queue manager. you will create the package as described in SupportPac MC91. as discussed in Chapter 3. This requires that the configuration manager have a separate queue manager from any queue managers used by the brokers. For ServiceGaurd During the creation of the UNS queue manager. For HACMP During the creation of the configuration manager queue manager. Create the Configuration Manager queue manager The configuration manager relies on a queue manager to receive requests from brokers. If you were to decide to configure the configuration manager to use a queue manager that will also be used by a broker you would have to configure the configuration manager and broker to be in the same resource group. regarding the relationship between the configuration manager and queue manager. as described in SupportPac MC91. The resource group can be either cascading or rotating. it is recommended that you consider disabling the automatic fallback facility by setting Cascading Without Fallback to true. The same principles apply in both cases.e. This is the address which clients and channels will use to connect to the queue manager. allowing you to run the configuration manager and brokers on separate nodes. For maximum flexibility it is recommended that the configuration manager is in a separate resource group from any of the brokers in the cluster. The following instructions are written assuming that the configuration manager is being put in a separate resource group. The queue manager will use the IP address managed by the service group. For VCS The service group in which the configuration manager and its queue manager will run is created during the creation of the queue manager. bear the following points in mind: The resource group will use the IP address as the service label.IC91: High Availability for Websphere Message Broker Chapter 4 Configuration steps for Configuration Manager – UNIX & Linux There are two choices on how to configure a configuration manager in a HA scenario. If run in a separate cluster. Perform this step if you plan to cluster the configuration manager. The configuration manager queue manager needs to be configured so that the configuration manager can communicate with brokers. you will create the resource group as described in SupportPac MC91. If you plan to use a nonclustered configuration manager then create and configure it in the normal manner (i. then it is probably better to manually move the resource group back to the preferred node when it will cause minimum disruption. Whichever you choose. then you have a choice as to how independent you make it. 
as described in the WMB Administration Guide) and proceed to Chapter 5 below A clustered configuration manager can be run in the same cluster as the broker or it can be run in a separate cluster. If you choose cascading. The configuration manager can either be run in an HA cluster. for example the one that contains the broker(s).

Don't configure the application server or application monitor described in SupportPac MC91. c. Actions: 4. 17 . The sender channel should use the IP address of the machine where the Configuration Manager queue manager runs. If they are sharing a queue manager then you can omit the creation of the transmission queues and channels. This is handled for you if you create the configuration manager using the hamqsicreatecfgmgr command. Set up queues and channels between the configuration manager queue manager and the brokers: a. On the brokers queue manager create a transmission queue for communication to the Configuration Manager queue manager. Test that the above queue managers can communicate regardless of which node owns the resource group for the brokers.IC91: High Availability for Websphere Message Broker configuration manager and broker are using separate queue managers. Ensure that the queue is given the same name and case as the Configuration Manager queue manager. The only difference between the clustered configuration manager and non-clustered configuration manager configurations is that in the clustered case you need to use a virtual IP address for channels sending to the UNS queue manager rather than the machine IP address. Ensure that the queue is given the same name and case as the brokers queue manager. Actions: 1. create a cltered queue manager as described in SupportPac MC91. It creates the configuration manager and then moves the above directories to shared disk and creates symlinks to them from their usual paths. The transmission queue should be set to trigger the sender channel. On the brokers queue manager create sender and receiver channels to match those just created on the Configuration Manager queue manager. Create the configuration manager on the node hosting the resource group using the hamqsicreatecfgmgr command. Use the volume group that you created for the configuration manager and place the volume group and queue manager into a resource group to which the configuration manager will be added. which accepts the same arguments as the mqsicreateconfigmgr command. and the corresponding listener port number. Linux-HA Create a queue manager as decribed in Chapter 2 aboveDuring the creation of the configuration managers queue manager. using the hacrtmqm command.you will create an application server that covers both the configuration manager and the queue manager. On one node. The transmission queue should be set to trigger the sender channel. On the Configuration Manager queue manager create a transmission queue for communication to the brokers queue manager. The sender channel should use the service address of the brokers resource group and the brokers queue manager's port number. On the Configuration Manager queue manager create a sender and receiver channel for communication with the brokers queue manager. 2. The hamqsicreatecfgmgr command creates the configuration manager directories under the data path of the queue manager used by the configuration manager. b. you will create the resource group as described in Chapter 2 above Create the Clustered Configuration Manager HACMP/ServiceGuard/VCS/Linux-HA The configuration manager stores data to disk in the /var/mqsi/components/<configmgr name> directory and the /var/mqsi/registry/<configmgr name> directory. d. This directory is the parent of the "qmgrs" directory on shared disk in which the queue manager exists. These directories need to be on shared disk. 3.
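As with the UNS, the queue and channel setup described in the Actions above can be scripted in MQSC. In this sketch the Configuration Manager queue manager is called CFGMGR.QM, the broker queue manager is BROKER1.QM, and broker1-svc with port 1415 stands for the broker resource group's service address and listener port; all of these names are assumptions.

runmqsc CFGMGR.QM <<'EOF'
DEFINE QLOCAL('BROKER1.QM') USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) +
       INITQ('SYSTEM.CHANNEL.INITQ') TRIGDATA('CFGMGR.TO.BROKER1')
DEFINE CHANNEL('CFGMGR.TO.BROKER1') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('broker1-svc(1415)') XMITQ('BROKER1.QM')
DEFINE CHANNEL('BROKER1.TO.CFGMGR') CHLTYPE(RCVR) TRPTYPE(TCP)
EOF

The CONNAME deliberately uses the service address owned by the broker's resource group, so the channel continues to reach the broker queue manager wherever it fails over.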

You must be root to run this command. This command does not create the configuration manager . This command defines symbolic links within subdirectories under /var/mqsi on the standby node which allow the configuration manager to move to that node. 6.the name of the queue manager used by the UNS userid .csq1 5. run the hamqsiaddcfgmgrstandby command to create the information needed by these nodes to enable them to host the configuration manager. Syntax hamqsiaddcfgmgrstandby <cfgmgr> <qmname> <userid> Parameters cfgmgr – configuration managers name qm name . It parses the /var/mqm/mqs. excluding the one on which you just created the configuration manager). The invocation of the hamqsicreatecfgmgr command uses exactly the same parameters that you would normally use for hamqsicreatecfgmgr. It parses the /var/mqm/mqs.IC91: High Availability for Websphere Message Broker hamqsicreatecfgmgr command The hamqsicreatecfgmgr command will create the configuration manager and will ensure that its directories are arranged to allow for HA operation.which is created as described earlier.csq1 mqsi 18 . You must be root to run this command. The hamqsiaddcfgmgrstandby command expects the configuration manager directories to have been created by the hamqsicreatecfgmgr command under the same path used for the data associated with the queue manager which the configuration manager uses. Ensure that you can start and stop the configuration manager manually using the mqsistart and mqsistop commands. On any other nodes in the resource group's nodelist (i.the account under which the configuration manager service runs Example: hamqsiaddcfgmgrstandby MyCfgMgr ha.ini file to locate this path information.are exactly the same as for the regular WMB mqsicreateconfigmgr command Example: Hamqsicreatecfgmgr MyCfgMgr -i mqsi -a mqsi -q ha. You will need to login as the user id under which the configuration manager runs to test this.ini file to locate this path information. The hamqsicreatecfgmgr command puts the configuration managers directories under the same path used for the data associated with the queue manager which the configuration manager uses. hamqsiaddcfgmgrstandby The hamqsiaddcfgmgrstandby command will create the information required for a cluster node to act as a standby for the configuration manager.e. Syntax hamqsicreatecfgmgr <creation parameters> Parameters creation parameters .
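The Configuration Manager equivalents of the UNS commands shown earlier, using the example values from the frames above:

# On the node that owns the Configuration Manager resource group:
hamqsicreatecfgmgr MyCfgMgr -i mqsi -a mqsi -q ha.csq1
# On each standby node in the resource group's nodelist:
hamqsiaddcfgmgrstandby MyCfgMgr ha.csq1 mqsi
# Manual start/stop check as the service userid:
su - mqsi -c "mqsistart MyCfgMgr"
su - mqsi -c "mqsistop MyCfgMgr"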

The configuration manager scripts are: hamqsi_start_cfgmgr_as hamqsi_stop_cfgmgr_as hamqsi_start_cfgmgr hamqsi_stop_cfgmgr The hamqsi_start_cfgmgr_as and hamqsi_stop_cfgmgr_as scripts are the ones you configure as the start and stop methods for the application server. You can configure an application monitor which will monitor the health of the configuration manager and its queue manager. The example application monitor checks that the bipservice process is running.csq1 mqsi" 19 .<cfgmgr> If you use the application monitor. The bipservice process monitors and restarts the bipuns process. In HACMP you can only configure one application monitor per resource group. it will call it periodically to check that the configuration manager processes and queue manager are running.IC91: High Availability for Websphere Message Broker Place the Configuration Manager under cluster control HACMP/ServiceGuard The configuration manager and its queue manager are managed by a single HA application server. They invoke methods supplied by MC91 to control the queue manager and the hamqsi_start_cfgmgr and hamqsi_stop_cfgmgr methods to control the UNS. When you ran hamqsicreatecfgmgr an application monitor was created. This will monitor the configuration manager and its queue manager and trigger recovery actions as a result of failures. Recovery actions include the ability to perform local restarts of the configuration manager and queue manager (see below) or to cause a failover of the resource group to another node. This application monitor is specifically for monitoring an application server containing a configuration manager and queue manager and is called: hamqsi_applmon. so when you define the start command in HA. which can be configured using the example configuration manager scripts supplied by this SupportPac. This script is robust in that it does not assume anything about the state of the configuration manager or queue manager on entry. hamqsi_start_cfgmgr_as The example start script is called hamqsi_start_cfgmgr_as. The configuration manager depends on the queue manager and the start and stop sequence is coded in the example scripts. Example "/MQHA/bin/hamqsi_start_cfgmgr_as MyCfgMgr ha. include the parameters. It accepts two command line parameters which are the queue manager name and the userid under which the configuration manager runs.

which also include the parameters that control whether a failure of either component of the application server will trigger a restart. including the monitoring interval and the restart parameters you require. You can also specify an application monitor using the hamqsi_applmon. The stop command has to ensure that the configuration manager and queue manager are both fully stopped by the time the command completes. created by hamqsicreatecfgmgr. so just specify the name of the monitor script. An application monitor script cannot be passed parameters. if successive restarts fail without a significant period of stability between. The fault monitoring interval is configured in the HA Software. Create an application server which will run the configuration manager and its queue manager using the example scripts provided in this SupportPac. If a stop attempt times out. the second parameter is the configuration manager userid and the third is the timeout (in seconds) to use on each of the levels of severity of stop.<cfgmgr> script 20 .csq1 mqsi 10" The stop command will use the timeout parameter as the time to allow either the queue manager or configuration manager to respond to an attempt to stop it. and that the time period is set to a small multiple of the expected start time for the components of the UNS group. It is recommended that the restart count is set to 1 so that one restart is attempted. Attempting more restarts on a node on which a restart has just failed is unlikely to succeed. Also configure the other application monitor parameters. The example application monitor script provided in this SupportPac is described in the following frame: 9. With these settings. The stop script accepts three command line parameters. Actions: 8. then a more severe stop is performed. When you define the stop command in HA you should include the parameters.IC91: High Availability for Websphere Message Broker hamqsi_stop_cfgmgr_as The example stop script is called hamqsi_stop_cfgmgr_as. then the resource group will failover to a different node. The example scripts are called hamqsi_start_uns_as and hamqsi_stop_uns_as and are described in the following frames. the first is the queue manager name. Example "/MQHA/bin/hamqsi_stop_cfgmgr_as My CfgMgr ha.

MQSIConfigMgr file. If you wish to use the example application monitor then supply its for the configuration manager application server. the configuration manager resource needs to have a resource dependency on the corresponding queue manager resource (of resource type MQM.cf or types. or by copying the content of the types. Because the configuration manager needs to be co-located with its queue manager. This can be performed using the VCS GUI or by editing the main. 21 . Veritas Cluster Server The configuration manager is managed by the MQSIConfigMgr agent supplied in this SupportPac. Check that the configuration manager and queue manager started and test that the resource group can be moved from one node to the other and that the configuration manager runs correctly on each node. Failure of either test will result in the application monitor returning a non-zero exit code. The monitoring script accepts no parameters.IC91: High Availability for Websphere Message Broker hamqsi_applmon. MQSIConfigMgr file in the main.<cfgmgr> The hamqsi_applmon. The agent contains the following methods: online .<cfgmgr> created for you by hamqsicreatecfgmgr will be called at the polling frequency you specify. verify that the local restart capability is working as configured.starts the configuration manager offline . 14. 13. Actions: 7. and start the application server.tests the health of the configuration manager clean . which can be found in the types.MyCfgMgr" The example application monitor is tolerant if it finds that the queue manager is starting because this may be due to the stabilisation interval being too short.performs cleanup (forced termination) of the configuration manager The SupportPac contains the definition of the MQSIConfigMgr resource type. Ensure that stopping the application server stops the configuration manager and its queue manager.stops the configuration manager monitor . indicating that the configuration manager and queue manager are working properly.cf for the cluster. Synchronise the cluster resources. Ensure that the configuration manager and its queue manager are stopped. The configuration manager is put under cluster control by creating a resource of this type. It is a parameter-less wrapper script that calls hamqsi_monitor_cfgmgr_as which checks the state of the configuration manager and queue manager. During this testing a convenient way to cause failures is to identify the bipservice for the configuration manager and kill it. Example "/MQHA/bin/hamqsi_applmon. 10. Success of both tests causes the application monitor to return a zero exit code. 11. HACMP/ServiceGuard will then take whatever action has been configured. provided by SupportPac MC91). Create the MQSIConfigMgr resource type.cf files. 12. With the application server started. either by including the types. This dependency tells VCS that the queue manager needs to be started before the configuration manager and that the configuration manager needs to be stopped before the queue manager.

and bring the service group online. 11.cf file.tests the health of the configuration manager monitor .stops the configuration manager status . An extra primative class is required for the username server as shown below higlighted in italics/red <cib> <configuration> <crm_config> <cluster_property_set id="default"> <attributes> <nvpair id="is_managed_default" name="is_managed_default" value="true"/> 22 . 9. The sample main. 12.starts the configuration manager stop . The agent contains the following methods: start . hamqsi_start_cfgmgr_as hamqsi_stop_cfgmgr_as hamqsi_start_cfgmgr hamqsi_stop_cfgmgr The mqsiusn script is the one you configure with Heartbeat which calls hamqsi_start_uns_as and hamqsi_stop_uns_as. For this support pac we will only use one To place the configuration manager under the control of Heartbeat the /var/lib/heartbeat/crm/cib. This will monitor the configuration manager and its queue manager and trigger recovery actions as a result of failures. then ensure that the configuration is “read-write” and use hacf to verify the changes. They invoke methods supplied by scripts to control the queue manager and the hamqsi_start_uns and hamqsi_stop_uns methods to control the configuration manager.cf included in Appendix A can be used as a guide.tests the health of the configuration manager This agent relies on a number of other scripts such as. 8. Check that the configuration manager started and test that the service group can be switched from one system to another and that the configuration manager runs correctly on each system. 10. During this testing a convenient way to cause failures is to identify the bipservice for the configuration manager and kill it. With the service group online.IC91: High Availability for Websphere Message Broker MQSIConfigMgr file into the existing types. either using the VCS GUI or by editing the main.xml file must be updated to include the bipconfigmgr as a component in the resource group. With Heartbeat its possible to configure many monitors for a single resource group.cf file. Ensure that the configuration manager is stopped. If you opt to edit the files. You can configure an application monitor which will monitor the health of the configuration manager and its queue manager. The resource needs to be in the same service group as the queue manager upon which the configuration manager depends and it should have a “requires” statement to record the dependency. Recovery actions include the ability to perform local restarts of the configuration manager and queue manager (see below) or to cause a failover of the resource group to another node. Add a resource of type MQSIConfigMgr. The queue manager and configuration manager should start. verify that the monitor correctly detects failures and configure the restart attributes to your desired values. Linux-HA The configuration manager is managed by the mqsiuns agent supplied in this SupportPac. Ensure that taking the service group offline stops the configuration manager. Verify and enable the changes. If you have followed the instruction in Chapter 2 above you should already have a resource group for the configuration manager.

<nvpair id=" mqsicfgmgr _4_attr_3" name="3" value="argostr"/> Name of the configuration manager userID. 23 .168.xml file to all other machines within the cluster. <nvpair id=" mqsicfgmgr _4_attr_2" name="2" value="mqm"/> Name of the MQ userID.IC91: High Availability for Websphere Message Broker </attributes> </cluster_property_set> </crm_config> <nodes> <node id="3928ccf5-63d2-4fbd-b4ed-e9d6163afbc9" uname="ha-node1" type="normal"/> </nodes> <resources> <group id="group_1"> <primitive class="ocf" id="IPaddr_1" provider="heartbeat" type="IPaddr"> <operations> <op id="IPaddr_1_mon" interval="5s" name="monitor" timeout="5s"/> </operations> <instance_attributes id="IPaddr_1_inst_attr"> <attributes> <nvpair id="IPaddr_1_attr_0" name="ip" value="192.1.11"/> </attributes> </instance_attributes> </primitive> <primitive class="ocf" id="Filesystem_2" provider="heartbeat" type="Filesystem"> <operations> <op id="Filesystem_2_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id="Filesystem_2_inst_attr"> <attributes> <nvpair id="Filesystem_2_attr_0" name="device" value="/dev/sdb1"/> <nvpair id="Filesystem_2_attr_1" name="directory" value="/MQHA/cm1qm"/> <nvpair id="Filesystem_2_attr_2" name="fstype" value="ext2"/> </attributes> </instance_attributes> </primitive> # New section starts here <primitive class="heartbeat" id="mqsicfgmgr_4" provider="heartbeat" type="mqsicfgmgr"> <operations> <op id="mqsicfgmgr_4_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id="mqsicfgmgr_4_inst_attr"> <attributes> <nvpair id=" mqsicfgmgr _4_attr_1" name="1" value="cfgqm"/> <nvpair id=" mqsicfgmgr _4_attr_2" name="2" value="mqm"/> <nvpair id=" mqsicfgmgr _4_attr_3" name="3" value="argostr"/> </attributes> </instance_attributes> </primitive> # New section ends here </group> </resources> <constraints> <rsc_location id="rsc_location_group_1" rsc="group_1"> <rule id="prefered_location_group_1" score="100"> <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="ha-node1"/> </rule> </rsc_location> </constraints> </configuration> </cib> The main parameters to change in the script above are: • • • <nvpair id=" mqsicfgmgr_4_attr_1" name="1" value="cfgqm"/> Name of the queue manager the configuration manager runs on. Once you have made this change make sure you syncronise the cib.

csq1 mqsi" hamqsi_stop_cfgmgr_as The example stop script is called hamqsi_stop_cfgmgr_as. Example "/MQHA/bin/hamqsi_start_cfgmgr_as MyCfgMgr ha. This script is robust in that it does not assume anything about the state of the configuration manager or queue manager on entry. so when you define the start command in HA.csq1 mqsi 10" The stop command will use the timeout parameter as the time to allow either the queue manager or configuration manager to respond to an attempt to stop it. When you define the stop command in HA you should include the parameters. If a stop attempt times out. It accepts two command line parameters which are the queue manager name and the userid under which the configuration manager runs. Example "/MQHA/bin/hamqsi_stop_cfgmgr_as MyCfgMgr ha. the second parameter is the configuration manager userid and the third is the timeout (in seconds) to use on each of the levels of severity of stop. 24 . The stop script accepts three command line parameters. The stop command has to ensure that the configuration manager and queue manager are both fully stopped by the time the command completes. the first is the queue manager name. then a more severe stop is performed.IC91: High Availability for Websphere Message Broker hamqsi_start_cfgmgr_as The example start script is called hamqsi_start_cfgmgr_as. include the parameters.

Chapter 5 Configuration steps for Broker – UNIX & Linux

Create and configure the queue manager
A broker relies on a queue manager to receive and send messages and to communicate with other WMB components. The broker queue manager needs to be configured so that the broker can communicate with the Configuration Manager and UNS. If the broker is sharing its queue manager with the UNS, then you can omit the creation of the relevant transmission queues and channels. If the broker is running in a collective then it will also need to communicate with other brokers and you should configure additional queues and channels for broker to broker communication. The following actions are written assuming that the UNS is not sharing the broker queue manager.

For HACMP
During the creation of the broker queue manager you will create the resource group as described in SupportPac MC91, using the hacrtmqm command. Use the volume group that you created for the broker and place the volume group and queue manager into a resource group to which the broker will be added. Don't configure the application server or application monitor described in SupportPac MC91; you will create an application server that covers the broker, queue manager and broker database instance. The resource group can be either cascading or rotating. If you choose cascading, it is recommended that you consider disabling the automatic fallback facility by setting Cascading Without Fallback to true. This is to avoid the interruption to the broker which would be caused by the reintegration of the top priority node after a failure. Unless you have a specific requirement which would make automatic fallback desirable in your configuration, it is probably better to manually move the resource group back to the preferred node when it will cause minimum disruption. Whichever you choose, bear the following points in mind: the resource group will use the IP address as the service label. This is the address which clients and channels will use to connect to the queue manager, rather than an IP address statically assigned to a system.

For ServiceGuard
During the creation of the broker queue manager you will create the package as described in SupportPac MC91. The logical address is the address which clients and channels will use to connect to the queue manager, rather than an IP address statically assigned to a system.

For VCS
The service group in which the broker and its queue manager will run is created during the creation of the queue manager, as described in SupportPac MC91. The queue manager will use the IP address managed by the service group. This is the address which clients and channels will use to connect to the queue manager, rather than an IP address statically assigned to a system.

For Linux-HA
The service group in which the broker and its queue manager will run is created during the creation of the queue manager. The queue manager will use the IP address managed by the service group. This is the address which clients and channels will use to connect to the queue manager, rather than an IP address statically assigned to a system.

Actions:
1. On one node, create a clustered queue manager as described in SupportPac MC91. This step creates the queue manager.
2. Set up queues and channels between the broker queue manager and the Configuration Manager queue manager (a runmqsc sketch of these definitions appears after this list):
a. On the Configuration Manager queue manager create a transmission queue for communication to the broker queue manager. Ensure that the queue is given the same name and case as the broker queue manager. The transmission queue should be set to trigger the sender channel.
b. On the Configuration Manager queue manager create a sender and receiver channel for communication with the broker queue manager. The sender channel should use the service address of the broker resource group and the broker queue manager's port number. Remember that because the broker queue manager is clustered you need to use the service address for channels sending to the broker queue manager rather than the machine IP address.
c. On the broker queue manager create a transmission queue for communication to the Configuration Manager queue manager. Ensure that the queue is given the same name and case as the Configuration Manager queue manager. The transmission queue should be set to trigger the sender channel.
d. On the broker queue manager create sender and receiver channels to match those just created on the Configuration Manager queue manager, with the same names as the receiver and sender channel just created on the broker queue manager. The sender channel should use the IP address of the machine where the Configuration Manager queue manager runs, and the corresponding listener port number.
3. If you are using a UNS, set up queues and channels between the broker queue manager and the UNS queue manager:
a. On the UNS queue manager create a transmission queue for communication to the broker queue manager. Ensure that the queue is given the same name and case as the broker queue manager. The transmission queue should be set to trigger the sender channel.
b. On the UNS queue manager create a sender and receiver channel for communication with the broker queue manager. The sender channel should use the service address of the broker resource group and the broker queue manager's port number.
c. On the broker queue manager create a transmission queue for communication to the UNS queue manager. Ensure that the queue is given the same name and case as the UNS queue manager. The transmission queue should be set to trigger the sender channel.
d. On the broker queue manager create a sender and receiver channel for communication with the UNS queue manager. If the UNS is clustered, the sender channel should use the service address of the UNS resource group and the UNS queue manager's port number.
4. Test that the above queue managers can communicate regardless of which node owns the resource groups they belong to.
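The following runmqsc sketch illustrates step 2 for one possible pair of queue managers. The queue manager names (BRKQM, CFGQM), channel names, host name and port are illustrative assumptions only; substitute your own values, and remember to use the broker resource group's service address in the channel that points at the clustered broker queue manager.

   # definitions on the Configuration Manager queue manager (steps 2a and 2b)
   runmqsc CFGQM <<'EOF'
   DEFINE QLOCAL('BRKQM') USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) INITQ('SYSTEM.CHANNEL.INITQ') TRIGDATA('CFGQM.TO.BRKQM') REPLACE
   DEFINE CHANNEL('CFGQM.TO.BRKQM') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('broker-service-addr(1414)') XMITQ('BRKQM') REPLACE
   DEFINE CHANNEL('BRKQM.TO.CFGQM') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
   EOF

   # matching definitions on the broker queue manager (steps 2c and 2d)
   runmqsc BRKQM <<'EOF'
   DEFINE QLOCAL('CFGQM') USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) INITQ('SYSTEM.CHANNEL.INITQ') TRIGDATA('BRKQM.TO.CFGQM') REPLACE
   DEFINE CHANNEL('BRKQM.TO.CFGQM') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('cfgmgr-host(1414)') XMITQ('CFGQM') REPLACE
   DEFINE CHANNEL('CFGQM.TO.BRKQM') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
   EOF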

The transmission queue should be set to trigger the sender channel. For HACMP/ServiceGuard The database instance is made highly available by invoking its HA scripts from within the scripts you configure for the broker resource group's application server. If the UNS is clustered. If you are configuring the database inside the cluster (recommended) then follow the instructions in the remainder of this step. As discussed in the start of Chapter 3. 3. The transmission queue should be set to trigger the sender channel. 4. and the corresponding listener port number. On the UNS queue manager create a sender and receiver channel for communication with the broker queue manager. Create and configure the broker database The message broker relies on a broker database and in this description it is assumed that this is running under DB2. set up queues and channels between the broker queue manager and the UNS queue manager: a. Ensure that the queue is given the same name and case as the broker queue manager. The logical address is the address which clients and channels will use to connect to the queue manager.7. If you are using DB2 . If you choose to run the database outside the cluster then simply follow the instructions in the WMB documentation for creating the broker database but ensure that you consider whether the database is a single point of failure and make appropriate provision for the availability of the database. then HACMP scripts are supplied with DB2 and are described in the DB2 Administration Guide. The queue manager will use the IP address managed by the service group. d. If you are using a UNS. The example scripts supplied in this SupportPac for the broker application server 26 .1 Implementation and Certification with MC/ServiceGuard High Availability Software” If you are using a different database manager then follow the instructions provided with that database manager. as described in Chapter 6. Test that the above queue managers can communicate regardless of which node owns the resource groups they belong to. On the broker queue manager create a transmission queue for communication to the Configuration Manager queue manager. This step creates the database instance which the broker will use. On the UNS queue manager create a transmission queue for communication to the broker queue manager. c. Ensure that the queue is given the same name and case as the UNS queue manager. The sender channel should use the IP address of the machine where the Configuration Manager queue manager runs. d. the sender channel should use the service address of the UNS resource group and the UNS queue manager's port number. On the broker queue manager create a transmission queue for communication to the UNS queue manager. The transmission queue should be set to trigger the sender channel. The database instance is the unit of failover of the database manager. b. there are two options regarding where the broker database is run. either inside or outside the cluster. On the broker queue manager create sender and receiver channels to match those just created on the Configuration Manager queue manager. rather than an IP address statically assigned to a system. with the same names as the receiver and sender channel just created on the broker queue manager.IC91: High Availability for Websphere Message Broker c. The sender channel should use the service address of the broker resource group and the broker queue manager's port number. 
The instance runs the broker database in which the broker tables are created when the broker is created. ServiceGaurd scripts can be found on the DB2 web site in appendix 5 of the “IBM DB2 EE v. Ensure that the queue is given the same name and case as the Configuration Manager queue manager. For Linux-HA The service group in which the UNS and its queue manager will run is created during the creation of the queue manager. On the broker queue manager create a sender and receiver channel for communication with the UNS queue manager.

Create the database instance.HACMP/ServiceGuard/VCS/Linux-HA A broker stores data to disk in the /var/mqsi/components/<broker> directory and the /var/mqsi/registry/<broker> directory. Create a database instance home directory in the disk group owned by the service group. The hamqsicreatebroker command creates the broker directories under the data path of the queue manager used by the broker. These directories need to be on shared disk. 2. Create the database instance. 6. 3. Linux-HA For the Linux section is it presumed that the database is made highly available in its own entity. 5.ibm. defined on all cluster nodes that may host the resource group. The database instance is made highly available by using an appropriate agent. then appropriate agents are available eihter from IBM or VERITAS Actions: 1. 4. Actions: 1. You will need to manually start and stop the database instance to test this. Documention on how to complete this is provided by DB2 such as the following RedPaper “Open Source Linux High Availability for IBM® DB2® Universal Database” ftp://ftp. Ensure that the database agent can start and stop the database instance as the service group is placed online and offline. Create a database instance owner user. Ensure that the database instance runs correctly on each node the resource group can move to. Ensure that the database instance runs correctly on each system in the service group’s systemlist. Create a database instance home directory in the volume group owned by the resource group. If you are using DB2.it will be included in the application server which you will create. specifying the home directory just created.pdf Create the message broker . If you are using a different database manager then edit the scripts accordingly.software. VCS The database instance is the unit of failover of the database manager.IC91: High Availability for Websphere Message Broker include calls to the DB2 Version 8. Place the database instance under VCS control by configuring the database agent and modifying the cluster configuration. It creates the broker and then moves the above directories to shared disk and creates symlinks to them from their usual paths. this can be in the queue manager's data path in the "databases" directory. 7. Create a database instance owner user. As portrayed in Figure 7. which accepts the same arguments as the mqsicreatebroker command. specifying the home directory just created. This is handled for you if you create the broker using the hamqsicreatebroker command. 2. Actions: 27 . including creation of an ODBC data source for it. 4.com/software/data/pubs/papers/db2halinux. Start the instance and create and configure the broker database as described in the WMB documentation. Start the instance and create and configure the broker database as described in the WMB documentation. 3. 6. This may be achieved using another HA application or Linux-HA.2 scripts. This directory is the parent of the "qmgrs" directory on shared disk in which the queue manager exists. including creation of an ODBC data source for it. 5. Don't create an application server for the database instance .

1. Create the broker on the node hosting the logical host using the hamqsicreatebroker command. The invocation of the hamqsicreatebroker command uses exactly the same parameters that you would normally use for mqsicreatebroker.

hamqsicreatebroker command
The hamqsicreatebroker command will create the broker and will ensure that its directories are arranged to allow for HA operation. The hamqsicreatebroker command puts the broker directories under the same path used for the data associated with the queue manager which the broker uses. It parses the /var/mqm/mqs.ini file to locate this path information. You must be root to run this command.
Syntax
hamqsicreatebroker <creation parameters>
Parameters
creation parameters - are exactly the same as for the regular WMB mqsicreatebroker command
Example: hamqsicreatebroker imb1 -i mqsi -a mqsi -q ha.csq1 -n IMB1DB

2. On any other nodes in the resource group's nodelist (i.e. excluding the one on which you just created the broker), run the hamqsiaddbrokerstandby command to create the information needed by these nodes to enable them to host the broker.

hamqsiaddbrokerstandby command
The hamqsiaddbrokerstandby command will create the information required for a cluster node to act as a standby for the broker. This command does not create the broker, which is created as described earlier. This command defines symbolic links within subdirectories under /var/mqsi on the standby node which allow the broker to move to that node. The hamqsiaddbrokerstandby command expects the broker directories to have been created by the hamqsicreatebroker command under the same path used for the data associated with the queue manager which the broker uses. It parses the /var/mqm/mqs.ini file to locate this path information. You must be root to run this command.
Syntax
hamqsiaddbrokerstandby <broker> <qm> <userid>
Parameters
broker - the name of the broker
qm - the name of the queue manager used by the broker
userid - the account under which the broker service runs
Example: hamqsiaddbrokerstandby imb1 ha.csq1 mqsi

3. Ensure that you can start and stop the broker manually using the mqsistart and mqsistop commands. You will need to login as the user id under which the broker runs to test this.
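As a quick sanity check before the broker is placed under cluster control, it can be started and stopped by hand. The sketch below reuses the broker imb1, queue manager ha.csq1 and service userid mqsi from the examples above; it assumes the queue manager is already running on this node.

   # run on the node that currently owns the shared disks
   ls -l /var/mqsi/components /var/mqsi/registry    # the broker entries should be symlinks onto the shared disk
   su - mqsi -c "mqsistart imb1"
   su - mqsi -c "mqsilist"                          # the broker should be listed; check the system log for BIP messages
   su - mqsi -c "mqsistop imb1"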

queue manager and database instance. and that the time period is set to a small multiple of the expected start time for the components of the broker group. You can only configure one application monitor per resource group. The application monitor does not check for DataFlowEngines because you may have none deployed. then the resource group will failover to a different node.IC91: High Availability for Websphere Message Broker Place the broker under cluster control HACMP/ServiceGuard The broker. The bipbroker process monitors and restarts DataFlowEngines. Actions: 1. then customise the example hamqsi_monitor_broker_as script. its queue manager and the database instance. The example application monitor checks that the bipservice process is running. if successive restarts fail without a significant period of stability between. If you wish to monitor for these as well.<broker> where <broker> is the name of the broker. using the example scripts provided in this SupportPac. They invoke methods supplied by SupportPac MC91 to control the queue manager and invoke the database HA scripts to control the database instance. The fault monitoring interval is configured in the HA panels. The bipservice process monitors and restarts the bipbroker process. When you ran hamqsicreatebroker an application monitor was created. Create an application server which will run the broker. otherwise it would be classed as a failure. You can configure an application monitor which will monitor the health of the broker. In addition they invoke the hamqsi_start_broker and hamqsi_stop_broker methods to control the broker. which can be configured using the example broker scripts supplied by this SupportPac. 29 . you may have to suspend monitoring if you wish to deploy a removal of a DataFlowEngine. it will call it periodically to check that the broker. The example scripts are called hamqsi_start_broker_as and hamqsi_stop_broker_as and are described in the following frames. queue manager and database instance are running. The broker scripts are: hamqsi_start_broker_as hamqsi_stop_broker_as hamqsi_start_broker hamqsi_stop_broker The hamqsi_start_broker_as and hamqsi_stop_broker_as scripts are the ones you configure as the start and stop methods for the application server. With these settings. This application monitor is specifically for monitoring an application server containing a broker. Recovery actions include the ability to perform local restarts of the broker and its dependencies (see below) or to cause a failover of the resource group to another node. queue manager and database instance are managed by a single HA application server. The application monitor will assess the state of these components and trigger recovery actions as a result of failures. If you configure application monitor. which also include the parameters that control whether a failure of either component of the application server will trigger a restart. but remember that depending on how you customise it. a queue manager and a database instance and is called: hamqsi_applmon. The broker depends on the queue manager and the database instance and the start and stop sequence is coded in the example scripts. It is recommended that the restart count is set to 1 so that one restart is attempted. Attempting more restarts on a node on which a restart has just failed is unlikely to succeed.

hamqsi_start_broker_as
The example start script is called hamqsi_start_broker_as. This script is robust in that it does not assume anything about the state of the broker, queue manager or database instance on entry. It accepts command line parameters which provide the name of the broker, the queue manager name, the userid under which the broker runs and the names of the database instance and database. When you define the start command in HACMP, include the parameters.
Example: "/MQHA/bin/hamqsi_start_broker_as imb1 ha.csq1 mqsi db2inst1 IMB1DB"

hamqsi_stop_broker_as
The example stop script is called hamqsi_stop_broker_as. The stop command has to ensure that the broker and queue manager are both fully stopped by the time the command completes. The stop script accepts command line parameters that provide the name of the broker, the queue manager, the broker userid, the database instance name and a timeout (in seconds) to use on each of the levels of severity of stop. The stop command will use the timeout parameter as the time to allow either the queue manager or broker to respond to an attempt to stop it. If a stop attempt times out, then a more severe stop is performed. When you define the stop command you should include the parameters.
Example: "/MQHA/bin/hamqsi_stop_broker_as imb1 ha.csq1 mqsi db2inst1 10"

2. You can also specify an application monitor using the hamqsi_applmon.<broker> script created by hamqsicreatebroker. An application monitor script cannot be passed parameters, so just specify the name of the monitor script. Also configure the other application monitor parameters, including the monitoring interval and the restart parameters you require. The example application monitor script provided in this SupportPac is described in the following frame:

Failure of any component test will result in the application monitor returning a non-zero exit code. provided by SupportPac MC91This dependency tells VCS that the queue manager needs to be started before the broker and that the broker needs to be stopped before the queue manager. indicating that the components are working properly. With the application server started. The monitoring script accepts no parameters. Check that the components started and test that the resource group can be moved from one node to the other and that they run correctly on each node.<broker> The hamqsi_applmon. These dependencies tell VCS that the queue manager and database instance need to be started before the broker and that the broker needs to be stopped before the queue manager and database instance. It is a parameter-less wrapper script that calls hamqsi_monitor_broker_as which checks the state of the broker.IC91: High Availability for Websphere Message Broker hamqsi_applmon.<broker>" The example application monitor is tolerant if it finds that the queue manager is starting because this may be due to the stabilisation interval being too short. 4. and start the application server. queue manager and database instance are stopped. Ensure that stopping the application server stops the components. VCS The broker is managed by the MQSIBroker agent supplied in this SupportPac. verify that the HACMP local restart capability is working as configured. the broker resource needs to have a resource dependency on the corresponding queue manager resource (of resource type MQM. If you wish to use the example application monitor then supply its name tothe HA Software. The agent contains the following methods: online .<broker> created for you by hamqsicreatebroker will be called at the polling frequency you specify.stops the broker monitor . 3. Ensure that the broker. A broker is put under cluster control by creating a resource of this type.starts the broker offline . the queue manager and the database instance. 5. which can be found in the types. During this testing a convenient way to cause failures is to identify the bipservice for the broker and kill it.MQSIBroker file.forcibly terminates the broker The SupportPac contains the definition of the MQSIBroker resource type. 31 . 7.tests the health of the broker clean . Example "/MQHA/bin/hamqsi_applmon. The broker resource needs to be configured to depend upon the queue manager resource which manages the queue manager and the database resource which manages the database instance in which the broker database runs. Synchronise the cluster resources. Because the broker needs to be co-located with its queue manager. A successful test of all three components causes the application monitor to return a zero exit code. 6.

2. Create the MQSIBroker resource type. If you opt to edit the files. then customise the monitor method. They invoke methods to control the queue manager and the hamqsi_start_broker and hamqsi_stop_broker methods to control the broker. Add a resource of type MQSIBroker. The resource needs to be in the same service group as the queue manager and database instance upon which the broker depends and it should have “requires” statements to record these dependencies.MQSIBroker file in the main. To place the broker under the control of Heartbeat the /var/lib/heartbeat/crm/cib. Actions: 1.tests the health of the Broker This agent relies on a number of other scripts such as. 5. The agent contains the following methods: start . either by including the types. Recovery actions include the ability to perform local restarts of the broker and queue manager or to cause a failover of the resource group to another node. Linux-HA Control The broker is managed by the mqsibroker agent supplied in this SupportPac. This will monitor the broker and its queue manager and trigger recovery actions as a result of failures. Verify and enable the changes. 6. For this support pac we will only use one. During this testing a convenient way to cause failures is to identify the bipservice for the broker and kill it.cf files. verify that the monitor correctly detects failures and configure the restart attributes to your desired values. hamqsi_start_broker_as hamqsi_stop_broker_as hamqsi_start_broker hamqsi_stop_broker The mqsibroker script is the one you configure with Heartbeat which calls hamqsi_start_broker_as and hamqsi_broker_uns_as.cf included in Appendix A can be used as a guide.IC91: High Availability for Websphere Message Broker The monitor method checks that the bipservice process is running and either restarts it or moves the service group to another system.xml file must be updated to include the broker as a component in the resource group.cf file. If you wish to monitor for these as well. The bipbroker process monitors and restarts DataFlowEngines.tests the health of the Broker monitor . database and broker should start. With the service group online.cf for the cluster.stops the Broker status .MQSIBroker file into the existing types. The probe does not check for DataFlowEngines because you may have none deployed.starts the Broker stop . 3.cf file. The bipservice process monitors and restarts the bipbroker process. or by copying the content of the types. depending on how you configure it. Ensure that taking the service group offline stops the broker.cf or types. 4. If you have followed 32 . You can configure an application monitor which will monitor the health of the broker and its queue manager. The queue manager. With Heartbeat its possible to configure many monitors for a single resource group. and bring the service group online. then ensure that the configuration is “read-write” and use hacf to verify the changes. The sample main. This can be performed using the VCS GUI or by editing the main. Ensure that the broker is stopped. Check that the broker started and test that the service group can be switched from one system to another and that the broker runs correctly on each system. either using the VCS GUI or by editing the main.

<nvpair id="mqsibroker_1_attr_2" name="2" value="br1qm"/> Name of the queue manager the broker runs on.1. An extra primative class is required for the broker as shown below highlighted in italics/red.IC91: High Availability for Websphere Message Broker the instruction in Chapter 2 above you should already have a resource group for the uns. 33 . <cib> <configuration> <crm_config> <cluster_property_set id="default"> <attributes> <nvpair id="is_managed_default" name="is_managed_default" value="true"/> </attributes> </cluster_property_set> </crm_config> <nodes> <node id="3928ccf5-63d2-4fbd-b4ed-e9d6163afbc9" uname="ha-node1" type="normal"/> </nodes> <resources> <group id="group_1"> <primitive class="ocf" id="IPaddr_1" provider="heartbeat" type="IPaddr"> <operations> <op id="IPaddr_1_mon" interval="5s" name="monitor" timeout="5s"/> </operations> <instance_attributes id="IPaddr_1_inst_attr"> <attributes> <nvpair id="IPaddr_1_attr_0" name="ip" value="192.11"/> </attributes> </instance_attributes> </primitive> <primitive class="ocf" id="Filesystem_1" provider="heartbeat" type="Filesystem"> <operations> <op id="Filesystem_1_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id="Filesystem_1_inst_attr"> <attributes> <nvpair id="Filesystem_1_attr_0" name="device" value="/dev/sdb1"/> <nvpair id="Filesystem_1_attr_1" name="directory" value="/MQHA/cm1qm"/> <nvpair id="Filesystem_1_attr_2" name="fstype" value="ext2"/> </attributes> </instance_attributes> </primitive> # New section starts here <primitive class="heartbeat" id="mqsibroker_1" provider="heartbeat" type="mqsibroker"> <operations> <op id=" mqsibroker _1_mon" interval="120s" name="monitor" timeout="60s"/> </operations> <instance_attributes id=" mqsibroker _1_inst_attr"> <attributes> <nvpair id="mqsibroker_1_attr_1" name="1" value="br1"/> <nvpair id="mqsibroker_1_attr_2" name="2" value="br1qm"/> <nvpair id="mqsibroker_1_attr_3" name="3" value="mqm"/> <nvpair id="mqsibroker_1_attr_4" name="4" value="argostr"/> </attributes> </instance_attributes> </primitive> # New section ends here </group> </resources> <constraints> <rsc_location id="rsc_location_group_1" rsc="group_1"> <rule id="prefered_location_group_1" score="100"> <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="ha-node1"/> </rule> </rsc_location> </constraints> </configuration> </cib> The main parameters to change in the script above are: • • <nvpair id="mqsibroker_1_attr_1" name="1" value="br1"/> Name of the Broker.168.

• <nvpair id="mqsibroker_1_attr_3" name="3" value="mqm"/> Name of the MQ userID.
• <nvpair id="mqsibroker_1_attr_4" name="4" value="argostr"/> Name of the userID the broker runs under.
Once you have made this change, make sure you synchronise the cib.xml file to all other machines within the cluster. Once these updates are complete you are ready to restart Heartbeat to pick up the new resource.
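One possible way to restart Heartbeat and confirm that the new broker resource is being managed; the init script location and the CRM tools shown are assumptions based on a typical Heartbeat 2.x installation:

   # on each cluster node, after cib.xml has been synchronised
   /etc/init.d/heartbeat restart
   # then, from any node, check that group_1 now contains the mqsibroker_1 primitive
   crm_resource -L     # list the resources known to the CRM
   crm_mon -1          # one-shot status; the group should be online on the preferred node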

Chapter 6 Removal and Deletion – UNIX & Linux

Removal and deletion are defined as follows. Removal of a component from the cluster configuration returns the component to manual control, rather than it being controlled and monitored by the cluster. Once removed from the cluster configuration the component will not be monitored for failures and will remain on one node, but under manual control. Deletion of a component destroys the component. It is recommended that you remove a component from cluster control before deleting it. A component can only be deleted on the node which is hosting it. When a component is being removed from the cluster, the application server which manages it should be stopped. The configuration can then be changed and synchronised. This does not destroy the components, which should continue to function normally.

The following activities are related to removal and deletion of components:
Removal of the UNS from the cluster configuration
Removal of UNS standby information from a node
Deletion of the UNS component
Removal of the configuration manager from the cluster configuration
Removal of configuration manager standby information from a node
Deletion of the configuration manager component
Removal of a broker from the cluster configuration
Removal of broker standby information from a node
Deletion of a broker component

Remove the UNS from the cluster configuration
As described previously, removal of a component from the cluster configuration returns the component to manual control. If the UNS is removed from the cluster configuration, it can continue to operate, but must be started and stopped manually. When removing the UNS from the cluster configuration remember that it relies on a queue manager. If you intend to continue to use the UNS, it would be inadvisable to remove it from cluster control and leave the queue manager under cluster control. For details of how to remove the queue manager from cluster control, refer to SupportPac MC91.

HACMP/ServiceGuard
Actions:
1. Stop the application server which runs the UNS.
2. Delete the application monitor.
3. Delete the application server. If you wish to retain the queue manager under cluster control then you may decide to replace the application server with one that uses the scripts from SupportPac MC91.
4. If you have no further use for them, remove the filesystem, service label and volume group resources from the resource group and delete the group.
5. Synchronise the cluster resources configuration.

VCS
Actions:
1. Take the service group offline. This stops the resources managed by the service group, unmounts the filesystems on the disk groups it contains and deports the disk groups.
2. Modify the cluster configuration, either using the VCS GUI or by editing the configuration files.
3. Verify the changes and, if desired, bring the service group back online. A command-line sketch of these steps follows this list.
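The same VCS steps can be driven from the command line instead of the GUI. The group and resource names below (uns_group, UNS_uns1, ha-node1) are placeholders, not names defined by this SupportPac:

   # take the service group offline on the node currently hosting it
   hagrp -offline uns_group -sys ha-node1
   # open the configuration, remove the UNS resource, then save and close it
   haconf -makerw
   hares -delete UNS_uns1
   haconf -dump -makero
   # optionally bring the remaining resources back online
   hagrp -online uns_group -sys ha-node1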

Syntax hamqsiremoveunsstandby Parameters none Delete the UNS When the UNS has been removed from cluster control.e. The UNS was created by the hamqsicreateusernameserver command. /etc/ha.d/resource. This destroys the UNS. Actions: 1. Remove UNS standby information from a node With the UNS removed from cluster control. run the hamqsiremoveunsstandby command.IC91: High Availability for Websphere Message Broker Linux-HA Actions: 1. There is a companion command called hamqsideleteusernameserver which is aware of the changes made for HA operation and which should always be used to delete the UNS. in such a way that it is amenable to HA operation. it is now safe to remove the UNS information from any standby nodes.e. This will remove the symlinks for the UNS from the subdirectories under the /var/mqsi directory. crm_resource –D –r mqsiuns_1 –t primative 2. as described in Step 6. This standby information was created wh en you ran the hamqsiaddunsstandby command see Step 4b. This is achieved by running “crm_resource –D –r <resource id> -t primative”.xml file or by running “crm_resource –L” which will list the resource with resource id i. If a node has such a link. You must be root to run this command. This companion command reverses the HA changes and then deletes the UNS. You can find out the resource ID by looking at your cib. On the standby nodes. Ensure that the UNS has been removed from cluster control and identify which node it is running on. which removes the standby information. 2. hamqsiremoveunsstandby command The hamqsiremoveunsstandby command will remove the standby information from standby nodes for the UNS. There is a companion command called hamqsiremoveunsstandby. then it is a standby node. 3.d/mqsiuns unsqm mqm argostr stop 3. Verify the user name server is stopped by checking for a bupuns process. Identify which other nodes have standby information on them. Stop the resource being monitored by removing it from CRM. it is possible to delete it. Once the resouce is no longer managed you need to stop it manually using the mqsibroker script i. If you are not sure whether a node has standby information then look for a symbolic link called /var/mqsi/components/Currentversion/UserNameServer. Actions: 36 .

This is similar to the behaviour of the mqsideleteusernameserver command. You must be root to run this command. either using the VCS GUI or by editing the configuration files. Linux-HA Actions: 37 . 2. removal of a component from the cluster configuration returns the component to manual control. which the command uses internally. 4. it would be inadvisable to remove it from cluster control and leave the queue manager under cluster control. 3. For details of how to remove the queue manager from cluster control. bring the service group back online. it can continue to operate. as root. Synchronise the cluster resources configuration. When removing the configuration manager from the cluster configuration remember that it relies on a queue manager . then issue the mqsistop UserNameServer command to stop it. 2. Syntax hamqsideleteusernameserver <userid> Parameters userid . Delete the UNS from the Configuration Manager and deploy the changes as described in the WMB Administration Guide. This stops the resources managed by the service group. HACMP/ServiceGuard Actions: 1. Delete the application server. If you wish to retain the queue manager under cluster control then you may decide to replace the application server with one that uses the scripts from SupportPac MC91. Delete the application monitor. VCS Actions: 1. if desired. For details of how to remove the database instance.the userid under which the UNS service runs Remove a configuration manager from the cluster configuration As described previously. run the hamqsideleteusernameserver command. Take the service group offline. If you have no further use for them. refer to SupportPac MC91. Modify the cluster configuration.IC91: High Availability for Websphere Message Broker 1. Stop the application server which runs the configuration manager. If the configuration manager is removed from the cluster configuration. 2. remove the filesystem. 3. but must be started and stopped manually. Identify the node on which the UNS is defined and on that node ensure that the UNS is stopped. If it is not. service label and volume group resources from the resource group and delete the group. This will destroy its control files and remove the definition of the broker from the /var/mqsi directory. 3. 5. hamqsideleteusernameserver command The hamqsideleteusernameserver command will delete the UNS. Verify the changes and. refer to the database manager documentation. On the same node. If you intend to continue to use the configuration manager. unmounts the filesystems on the disk groups it contains and deports the disk groups.

it is now safe to remove the configuration manager information from any standby nodes. Identify which other nodes have standby information on them. /etc/ha.xml file or by running “crm_resource –L” which will list the resource with resource id i.e. hamqsiremoveconfigmgrstandby command The hamqsiremoveconfigmgrstandby command will remove the standby information from standby nodes for a configuration manager. then issue the mqsistop 38 . run the hamqsiremoveconfigmgrstandby command. You can find out the resource ID by looking at your cib.IC91: High Availability for Websphere Message Broker 1. it is possible to delete it. On the standby nodes. If it is not.the name of the configuration manager to be removed Delete a configuration manager When a configuration manager has been removed from cluster control. This is achieved by running “crm_resource –D –r <resource id> -t primative”. Remove a configuration manager standby information from a node With the configuration manager removed from cluster control. which removes the standby information. This companion command reverses the HA changes and then deletes the configuration manager. This will remove the symlinks for the broker from the subdirectories under the /var/mqsi directory. Once the resouce is no longer managed you need to stop it manually using the mqsiconfigmgr script i. Identify the node on which the configuration manager is defined and on that node ensure that the configuration manager is stopped. Ensure that the configuration manager has been removed from cluster control and identify which node it is running on. This destroys the configuration manager. Syntax hamqsiremoveconfigmgrstandby <config mgr name> Parameters Config mgr name . Stop the resource being monitored by removing it from CRM. The configuration manager was created by the hamqsicreateconfigmgr command. Actions: 1. You must be root to run this command.e. If you are not sure whether a node has standby information then look for a symbolic link called /var/mqsi/components/<broker>. Actions: 1. where <broker> is the name of the broker. then it is a standby node. Verify the broker is stopped by checking for a bipbroker process.d/mqsiconfigmgr cfgmgr cfg1qm mqm argostr stop 3. There is a companion command called hamqsiremoveconfigmgrstandby. in such a way that it is amenable to HA operation. crm_resource –D –r mqsicfgmgr_1 –t primative 2. There is a companion command called hamqsideleteconfigmgr which is aware of the changes made for HA operation and which should always be used to delete the configuration manager. This standby information was created when you ran the hamqsiaddconfigmgr standby command. 2. 3. If a node has such a link.d/resource. as root.

Delete the application monitor. For details of how to remove the queue manager from cluster control. remove the filesystem. If you have no further use for them. Verify the changes and. as root. This stops the resources managed by the service group. Delete the application server. removal of a component from the cluster configuration returns the component to manual control. Syntax hamqsideleteconfigmgr <config mgr name> <userid> Parameters Config mgr name . VCS Actions: 4. 5. 2. 9. where <configmgr> is the name of the configuration manager. if desired. unmounts the filesystems on the disk groups it contains and deports the disk groups. For details of how to remove the database instance. it would be inadvisable to remove it from cluster control and leave the queue manager or database instance under cluster control. Modify the cluster configuration. HACMP/ServiceGuard Actions: 6. it can continue to operate. This is similar to the behaviour of the mqsideleteconfigmgr command. Synchronise the cluster resources configuration.the name of the configuration manager to be deleted userid . This will destroy its control files and remove the definition of the configuration manager from the /var/mqsi directory. refer to the database manager documentation. If you wish to retain the queue manager or database instance under cluster control then you may decide to replace the application server with one that uses the scripts from SupportPac MC91 or the database HACMP scripts. When removing the broker from the cluster configuration remember that it relies on a queue manager and database instance. Take the service group offline. Linux-HA Actions: 39 . If the broker is removed from the cluster configuration. but must be started and stopped manually. You must be root to run this command. 7. refer to SupportPac MC91. run the hamqsideleteconfigmgr command. Stop the application server which runs the broker. i 8. On the same node.IC91: High Availability for Websphere Message Broker <configmgr> command to stop it. 6. If you intend to continue to use the broker. 10. which the hamqsideleteconfigmgr command uses internally. hamqsideleteconfigmgr command The hamqsideleteconfigmgr command will delete a configuration manager. either using the VCS GUI or by editing the configuration files. bring the service group back online.the userid under which the broker service runs Remove a broker from the cluster configuration As described previously. service label and volume group resources from the resource group and delete the group.

This destroys the broker.xml file or by running “crm_resource –L” which will list the resource with resource id i. which removes the standby information.IC91: High Availability for Websphere Message Broker 4. Delete the broker from the Configuration Manager and deploy the changes as described in the WMB Administration Guide. /etc/ha. This standby information was created when you ran the hamqsiaddbrokerstandby command see Step 5c. Actions: 4. as root. Ensure that the broker has been removed from cluster control and identify which node it is running on. If a node has such a link. 5. The broker was created by the hamqsicreatebroker command. it is possible to delete it. Stop the resource being monitored by removing it from CRM. crm_resource –D –r mqsibrk_1 –t primative 5. This companion command reverses the HA changes and then deletes the broker. Remove broker standby information from a node With the broker removed from cluster control. This is achieved by running “crm_resource –D –r <resource id> -t primative”. Verify the broker is stopped by checking for a bipbroker process.the name of the broker to be removed Delete a broker When a broker has been removed from cluster control. On the standby nodes. Once the resouce is no longer managed you need to stop it manually using the mqsibroker script i. You must be root to run this command. If you are not sure whether a node has standby information then look for a symbolic link called /var/mqsi/components/<broker>. where <broker> is the name of the broker. it is now safe to remove the broker information from any standby nodes. hamqsiremovebrokerstandby command The hamqsiremovebrokerstandby command will remove the standby information from standby nodes for a broker. run the hamqsiremovebrokerstandby command. There is a companion command called hamqsideletebroker which is aware of the changes made for HA operation and which should always be used to delete the broker. in such a way that it is amenable to HA operation. There is a companion command called hamqsiremovebrokerstandby. Identify which other nodes have standby information on them.e.d/mqsibroker br1 br1qm mqm argostr stop 6. Actions: 7. Syntax hamqsiremovebrokerstandby <broker name> Parameters broker name . then it is a standby node. 40 . 6.e.d/resource. as described in Step 9. You can find out the resource ID by looking at your cib. This will remove the symlinks for the broker from the subdirectories under the /var/mqsi directory.

8. Identify the node on which the broker is defined and on that node ensure that the broker is stopped. If it is not, then issue the mqsistop <broker> command to stop it, where <broker> is the name of the broker.
9. On the same node, as root, run the hamqsideletebroker command. This will destroy its control files and remove the definition of the broker from the /var/mqsi directory. This is similar to the behaviour of the mqsideletebroker command, which the hamqsideletebroker command uses internally.

hamqsideletebroker command
The hamqsideletebroker command will delete a broker. You must be root to run this command.
Syntax
hamqsideletebroker <broker name> <userid>
Parameters
broker name - the name of the broker to be deleted
userid - the userid under which the broker service runs
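The command frame above does not include a usage example; one might look like the following, reusing the broker name and service userid used earlier in this document (imb1, mqsi):

   # run as root on the node hosting the broker
   su - mqsi -c "mqsistop imb1"    # make sure the broker is stopped first
   hamqsideletebroker imb1 mqsi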

If you were to decide to configure a Configuration Manager to use a Queue Manager that will also be used by a Message Broker you would have to configure the Configuration Manager and Message Broker to be in the same MSCS group. It is assumed that for independence the Configuration Manager will remain in its own group. Bring the Queue Manager resource online and test that it starts the Queue Manager. Move the Configuration Manager group to the secondary node and ensure that the Queue Manager is online. For example: S:\WorkPath. Do not start the component. 1. Create the Configuration Manager on the primary node using the mqsicreateconfigmgr command specifying the – option defining a location on the shared drive used by the Configuration Manager’ Queue nager. 42 . If you want to put multiple components in the same group you may. Start the Cluster Administrator and create a new MSCS group. The choice of which node is the primary is initially arbitrary but once made should be adhered to. Create and Configure the Configuration Manager’ Queue Manager Whether you are running the Configuration Manager within an MSCS cluster or outside. 6. allowing you to run the Configuration Manager(s) and Message Broker(s) on separate nodes. 2. 1. For maximum flexibility it is recommended that the Configuration Manager is in a separate group from any of the Message Broker(s) in the cluster. It is recommended that you leave the group failback policy at the default value of disabled. the Configuration Manager requires a Queue Manager and this Queue Manager must be configured so that the Configuration Manager can communicate with the Message Broker(s) and the UNS (detailed later).Configuration steps for Configuration Manager Create the Configuration Manager Group As described earlier a Configuration Manager and its dependencies may be contained in an MSCS group if clustering is being used. Create the Clustered Configuration Manager The Configuration Manager relies on a Queue Manager to receive requests from Message Brokers. test failover of the Queue Manager to the other cluster node. Ensure the Configuration Manager group is on the primary node. Optionally. For MSCS control ensure the start-up policy is set to manual and setting the listener(s) port number(s) appropriately. and that the Queue Manager functions correctly. in which case you need only perform this step to create one group.IC91: High Availability for Websphere Message Broker Chapter 7 MSCS . 5. Use the method defined in the MQ documentation to create a standard Queue Manager. The instructions below are written assuming that the Configuration Manager is being put in a separate group. 4. 3. The Queue Manager and its dependencies (shared disks and IP address) should be put in the MSCS group created for the Configuration Manager. The following descriptions refer to the nodes as primary and secondary. This step creates the group in preparation for the subsequent steps which will create the resources to be placed in the group. This would also require that the Configuration Manager have a separate Queue Manager from any Queue Managers used by Message Brokers which are reliant on their own shared disks.

12. 43 . On the primary node. Re-execute the mqsicreateconfigmgr command on the secondary node. Start the Configuration Manager and ensure that it has started correctly. 9. The name of the service is MQSeriesBroker<ConfigurationManager>. 2. Move the Configuration Manager group back to the primary node in the cluster. Ensure that the parameters match those used when the Configuration Manager was created on the primary node. 8. Ensure that the Queue Manager is online. Create the Configuration Manager using the mqsicreateconfigmgr command. Bring the Configuration Manager generic service resource online. Test that the Configuration Manager group can be moved from one node to the other and that the Configuration Manager runs correctly on either node. 1. MSCS will be solely responsible for starting and stopping this service. Leave the start-up policy for the Configuration Manager service as its default setting of manual on both nodes. When asked for any registry keys to monitor add the following key SOFTWARE\IBM\WebSphereMQIntegrator\2\<ConfigurationManager> where <ConfigurationManager> is the name of the component created. no MSCS configuration is required. Specify the dependency on the Configuration Manager’ Queue Manager.IC91: High Availability for Websphere Message Broker 7. 10. Create the Non-Clustered Configuration Manager This step creates a standard Configuration Manager. create an MSCS Generic Service resource for the Configuration Manager service. Do not start the component. 11.

The transmission queue should be set to d. Ensure that the queue is given the same name and case as the Configuration Manager Queue Manager. The only difference between the clustered UNS and a non-clustered UNS configuration is that in the clustered case you need to use a virtual IP address for channels sending to the UNS Queue Manager rather than the machines local IP address. the sender channel should use the virtual IP address of the Configuration Manager’ Queue Manager and its associated listener port number. Ensure that the queue is given the same name and case as the UNS Queue Manager. If you want to put multiple components in the same group you may. If the UNS is clustered. On the UNS Queue Manager create a transmission queue for communication to the Configuration Manager Queue Manager. Use the method defined in the MQ documentation for MSCS control if MSCS is to be used. Test that the Queue Managers for the Configuration Manager and the UNS can communicate. 1. 3. This step creates the group in preparation for the subsequent steps which will create the resources to be placed in the group. 1. Set up queues and channels between the UNS Queue Manager and the Configuration Manager Queue Manager: a. This step is described assuming that the UNS and Message Broker(s) are using separate Queue Managers. On the Configuration Manager Queue Manager create a sender and receiver channel for the associated UNS transmission queue. If they are sharing a Queue Manager then you can omit the creation of the transmission queues and channels. the UNS requires a Queue Manager and this Queue Manager must be configured so that the UNS can communicate with the Message Broker(s) and the Configuration Manager. 2. Create a Queue Manager for the User Name Server. ensuring the start-up policy is set to manual and setting the listener(s) port number(s) appropriately. It is recommended that you leave the group failback policy at the default value of disabled. It is assumed that for independence the User Name Server will remain in its own group. allowing you to run the UNS and Message Brokers on 44 . Create and Configure the UNS Queue Manager Whether you are running the UNS within an MSCS cluster or outside. b. The transmission queue should be set to trigger the sender channel. in which case you need only perform this step to create one group. c. the sender channel should use the virtual IP address of the UNS Queue Manager and its associated listener port number.IC91: High Availability for Websphere Message Broker Chapter 8 MSCS . For maximum flexibility it is recommended that the UNS is in a separate group from any of the Message Brokers in the cluster. On the UNS Queue Manager create a sender and receiver channel for the associated Configuration Manager’ transmission queue. On the Configuration Manager Queue Manager create a transmission queue for communication to the UNS Queue Manager. Create the Clustered UNS The UNS relies on a Queue Manager to receive requests from Message Brokers. 2. trigger the sender channel. Start the Cluster Administrator and create a new MSCS group.Configuration steps for USN Create the UNS Group As described earlier a User Name Server and its dependencies may be contained in an MSCS group if clustering is being used. e. If the Configuration Manager is clustered.

Do not start the UNS.IC91: High Availability for Websphere Message Broker separate nodes. 4. Ensure the UNS group is on the primary node. Bring the Queue Manager resource online and test that it starts the Queue Manager. The following instructions are written assuming that the UNS is being put in a separate group. 6. 5. 12. This would also require that the UNS have a separate Queue Manager from any Queue Managers used by the Message Broker(s) which is reliant on its own shared disk(s). For example: S:\WorkPath. 2. If you were to decide to configure the UNS to use a Queue Manager that will also be used by a Message Broker you would have to configure the UNS and Message Broker(s) to be in the same MSCS group. The following descriptions refer to the nodes as primary and secondary. The choice of which node is the primary is initially arbitrary but once made should be adhered to. and that the Queue Manager functions correctly. Move the UNS group to the secondary node and ensure that the Queue Manager is online. Test that the UNS group can be moved from one node to the other and that the UNS runs correctly on either node. 3. 8. 45 . MSCS will be solely responsible for starting and stopping this service. Move the UNS group back to the primary node in the cluster. Ensure that the Queue Manager is online. 7. Re-execute the mqsicreateusernameserver command on the secondary node. 11. Ensure that the parameters match those used when the UNS was created on the primary node. Specify the dependency on the UNS Queue Manager. On the primary node. Do not start the UNS. 1. The Queue Manager and its dependencies (shared disks and IP address) should be put in the MSCS group created for the User Name Server. create an MSCS Generic Service resource for the UNS service. When asked for any registry keys to monitor add the following key SOFTWARE\IBM\WebSphereMQIntegrator\2\UserNameServer 10. The name of the service is MQSeriesBrokerUserNameServer. Leave the start-up policy for the UNS service as its default setting of manual on both nodes. Bring the UNS generic service resource online. Optionally. 9. test failover of the Queue Manager to the other cluster node. Create the UNS on the primary node using the mqsicreateusernameserver command specifying the – option defining a location on the shared drive used by the User Name Servers Queue Manager.

Ensure that the queue is given the same name and case as the Message Broker’ Queue Manager. The transmission queue should be set to trigger the sender channel. c. On the Configuration Manager Queue Manager create a sender and receiver channel for the associated Message Broker’ transmission queue. If you are using a UNS. This step creates the group in preparation for the subsequent steps which will create the resources to be placed in the group. The transmission queue should be set to trigger the sender channel. On the UNS Queue Manager create a transmission queue for communication to the Message Broker’ Queue Manager. Test failover of the Queue Manager to the other cluster node ensuring correct operation. the sender channel should use the virtual IP address of the Configuration Manager’ Queue Manager and its associated listener port number. 2. 6. It is recommended that you leave the group failback policy at the default value of disabled. Create and Configure the Message Broker’ Queue Manager Whether you are running the Message Broker within an MSCS cluster or outside. On the Message Broker’ Queue Manager create a transmission queue for communication to the Configuration Manager Queue Manager. Ensure that the queue is given the same name and case as the Message Broker’ Queue Manager. Create a Queue Manager for the Message Broker as defined in the MQ documentation for MSCS control ensuring the start-up policy is set to manual and setting the listener(s) port number(s) appropriately. set up queues and channels between the broker queue manager and the UNS queue manager: a.IC91: High Availability for Websphere Message Broker Chapter 9 MSCS . Use the virtual IP address of the Message Broker’ Queue Manager and its associated listener port number. 4. Bring the Queue Manager resource online. the Message Broker requires a Queue Manager and this Queue Manager must be configured so that the broker can communicate with a UNS and Configuration Manager. test that MSCS starts the Queue Manager and that the Queue Manager functions correctly. The Queue Manager and its dependent resources should be put in the Message Broker group created previously. On the Message Broker’ Queue Manager create sender and receiver channels to match those just created on the Configuration Manager. Ensure that the queue is given the same name and case as the Configuration Manager Queue Manager. which is assumed to already exist. d. If you want to put multiple Message Brokers in the same group you may. On the Configuration Manager Queue Manager create a transmission queue for communication to the Message Broker’ Queue Manager. It is assumed that for independence between Message Brokers you will place each broker in a separate group. in which case you need only perform this step for the first broker in each group. 3. Start the Cluster Administrator and create a new MSCS group. 1. The transmission queue should be set to trigger the sender channel.Configuration steps for Broker Create the Message Broker Group As described earlier a Message Broker and its dependencies must be contained in an MSCS group. Place the Queue Manager under MSCS control as described in the MQ documentation. 46 . 5. b. a. This will require the creation of shared disk(s) and IP address(es). Now that the Message Broker’ Queue Manager has been created it is necessary to set up queues and channels between it and the Configuration Manager’ Queue Manager. If the Configuration Manager is clustered.

Create and Configure the Message Broker database
The Message Broker relies on a DB2 broker database. When the Message Broker is created, the tables it needs are created within the DB2 instance. As discussed previously, there are two options regarding where the Message Broker database runs.

Using a Remote Message Broker database
The broker database can be run outside the cluster on a separate machine or group of machines, possibly even on a separate cluster.
1. On both cluster nodes configure remote ODBC connections to the Message Broker database. Please refer to the DB2 documentation for information on how to configure these connections.
2. With a remote database configuration, if you want to use message flows with XA support then, in addition to setting up the Queue Manager to act as the transaction manager, you will also need to configure the remote DB2 instances on the cluster nodes to take part in the XA transaction. See the WMB and DB2 documentation for details.

Using a Local Message Broker database
The broker database can be run within the MSCS cluster that runs the Message Broker. If the Message Broker database is to run on the cluster nodes it must be configured for MSCS operation so that it can fail over to another node with the Message Broker. DB2 includes support for MSCS by providing a custom resource type for a DB2 Instance. DB2 should be installed on all nodes in the MSCS cluster. The default DB2 instance should not be put under MSCS control.

1. Ensure all nodes in the MSCS cluster group are functioning correctly; if not, the db2mscs command will fail. Check that the Message Broker group is on the primary node.
2. Create a DB2 instance for the Message Broker. You need to create the database instance which the Message Broker will use. This can be done in the normal manner using the db2icrt command. Ignore any MSCS configuration parameters that can be defined for db2icrt.
3. Use the db2mscs command to place the instance under MSCS control; it will create a cluster resource of type DB2 Instance defining that instance, and the resource created will include all relevant dependencies. Refer to the DB2 documentation for details on how to use the db2mscs command. The DB2 MSCS implementation also provides the ability to create and configure MSCS groups and resources additional to the DB2 configuration; the following instructions and the example in Appendix 1 assume that this functionality is not used.
4. On the primary node, with the MSCS Message Broker's group running, use the DB2 Control Center (if it is not visible already) to add the newly created database instance to the visible instances. Then create a database for the Message Broker and add a matching ODBC connection.
5. Move the Message Broker group to the secondary node.
6. On secondary node(s) use the DB2 Control Center to add the instance (if it is not visible already) and create a duplicate ODBC connection as defined on the primary node.
7. Move the Message Broker group back to the primary node.
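For illustration, a hedged sketch of steps 2 and 3 above from a DB2 command window on the primary node. The instance name WMBINST and the file name db2mscs.brk.cfg are placeholders; the contents of the db2mscs input file (instance name, cluster group, disk and IP resources) are described in the DB2 documentation.

    REM Create the broker's DB2 instance in the normal manner
    db2icrt WMBINST
    REM Place the instance under MSCS control using a prepared db2mscs input file
    db2mscs -f db2mscs.brk.cfg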

Create the Clustered Message Broker
The Message Broker must be placed under MSCS control. The broker runs as a service in Windows 2003 and can be managed by MSCS using the Generic Service resource type. The following descriptions refer to the nodes as primary and secondary. The choice of which node is the primary is initially arbitrary, but once made it should be adhered to.

1. Check that the Message Broker group is on the primary node.
2. Create the Message Broker on the primary node using the mqsicreatebroker command, specifying the workpath option to define a location on the shared drive used by the Message Broker's Queue Manager. For example S:\WorkPath. Do not start the broker at this stage.
3. Re-execute the mqsicreatebroker command on the secondary node.
   a. Ensure that the same parameters are used as when initially created on the primary node.
4. On the primary node create an MSCS Generic Service resource for the Message Broker service.
   a. The name of the service will be MQSeriesBroker<BrokerName>, where <BrokerName> should be replaced with the actual name of your Message Broker.
   b. Create a dependency on the Message Broker's Queue Manager.
   c. Create a dependency on the Message Broker's database, if necessary.
   d. When asked for any registry keys to monitor, add the following key: SOFTWARE\IBM\WebSphereMQIntegrator\2\<BrokerName> where <BrokerName> should be replaced with the actual name of your Message Broker.
5. Bring the Message Broker generic service resource online.
6. Move the Message Broker's group to the secondary node in the cluster. Check that the Message Broker's Queue Manager is available and functioning correctly and that the DB2 instance has successfully migrated to the secondary node. Test that the Message Broker, plus its dependencies, can move from one cluster node to the other.
7. Move the Message Broker's group back to the primary node in the cluster.

WMB Toolkit
To connect the WMB toolkit to a highly available Configuration Manager, use the virtual IP address defined for the Configuration Manager to ensure that the tooling can connect to the Configuration Manager no matter which machine the component is currently running on. It may be necessary to use the mqsicreateaclentry function of WMB or the MCAUSER parameter of MQ channels to allow communication between the Toolkit and the Configuration Manager when the Configuration Manager is remote. For example, the MCAUSER parameter of SYSTEM.BKR.CONFIG could be set to <user>@<domain>. Refer to the documentation for more detailed descriptions of these functions.
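As an illustration of steps 2 and 3 of "Create the Clustered Message Broker" above, a hedged sketch of the broker creation command. The broker name BRK1, queue manager HABRKQM, ODBC data source BROKERDB and the wmbsvc service account are placeholders, and the flag letters should be checked against the mqsicreatebroker documentation for your release.

    REM Run on the primary node, then again with identical parameters on the secondary node
    mqsicreatebroker BRK1 -i wmbsvc -a <password> -q HABRKQM -n BROKERDB -w S:\WorkPath
    REM Do not start the broker; MSCS will control the MQSeriesBrokerBRK1 service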

Chapter 10 MSCS - Configuration Steps for Migrating Existing Components
Migrating components involves more manual steps than creating components under MSCS control: the registry must be edited and files manually moved on the file system. DB2 databases will be moved into new instances and then placed under MSCS control; the instructions will demonstrate how to move a Message Broker's database into a new DB2 instance.

Each WMB component has a WorkPath associated with it where configuration files are kept. The configuration steps below assume that the WorkPath is located on a shared disk in the location S:\WorkPath and that the shared disk is under MSCS control. It is recommended that when moving multiple components into the same MSCS group all components use the same WorkPath location on the shared disk.

Create the Configuration Manager Group
As described earlier, a Configuration Manager and its dependencies may be contained in an MSCS group if clustering is being used. This will require the creation of shared disk(s) and IP address(es). It is assumed that for independence the Configuration Manager will remain in its own group. If you want to put multiple components in the same group you may, in which case you need only perform this step to create one group.

1. Start the Cluster Administrator and create a new MSCS group. This step creates the group in preparation for the subsequent steps, which will create the resources to be placed in the group. It is recommended that you leave the group failback policy at the default value of disabled.

Configuration Manager
Ensure that the Configuration Manager is stopped.

Configure the Configuration Manager's Queue Manager
1. Place the Queue Manager under MSCS control as described in the MQ documentation. The Queue Manager and its dependent resources should be put in the Configuration Manager's group created previously.
2. Update any channels referencing this Queue Manager to use the newly configured virtual IP Address defined for the MSCS group.
3. Test the failover of the Queue Manager.

Configure the Configuration Manager
1. Copy all files under the Configuration Manager's WorkPath (the default is C:\Documents and Settings\All Users\Application Data\IBM\MQSI) to the shared disk mentioned previously, for example S:\WorkPath.
2. Delete references in the old components directory to WMB components that will be placed in this MSCS group. Delete references in the new components directory to WMB components that will not be placed in this MSCS group.
3. Using regedit, create or update the registry string entry HKEY_LOCAL_MACHINE\SOFTWARE\IBM\WebSphereMQIntegrator\2\<ConfigurationManagerName>\CurrentVersion\WorkPath, setting the value to S:\WorkPath.
4. For all nodes other than the node the Configuration Manager is currently running on, execute the command:
   a. sc \\<NodeName> create MQSeriesBroker<ConfigurationManagerName> binPath= "C:\Program Files\IBM\MQSI\6.0\bin\bipservice.exe" depend= MQSeriesServices DisplayName= "IBM WebSphere Message Broker component <ConfigurationManagerName>" obj= <UserName> password= <Password>

   b. The binPath value should be changed if WMB is installed in a non-default location.
   c. The UserName value should start with .\ for local users or be a valid Domain user. The user must be defined and valid on all nodes in the MSCS cluster.
5. On the primary node, create an MSCS Generic Service resource for the Configuration Manager service. The name of the service is MQSeriesBroker<ConfigurationManager>, where <ConfigurationManager> is the name of the component created. Specify the dependency on the Configuration Manager's Queue Manager. When asked for any registry keys to monitor, add the following key: SOFTWARE\IBM\WebSphereMQIntegrator\2\<ConfigurationManager>
6. Leave the start-up policy for the Configuration Manager service as its default setting of manual on both nodes. MSCS will be solely responsible for starting and stopping this service.
7. Bring the Configuration Manager generic service resource online.
8. Move the MSCS group to the secondary node in the cluster and check that all resources are available and running correctly.

User Name Server
Ensure that the User Name Server is stopped.

Create the UNS Group
As described earlier, a User Name Server and its dependencies may be contained in an MSCS group if clustering is being used. This will require the creation of shared disk(s) and IP address(es). It is assumed that for independence the User Name Server will remain in its own group. If you want to put multiple components in the same group you may, in which case you need only perform this step to create one group.

1. Start the Cluster Administrator and create a new MSCS group. This step creates the group in preparation for the subsequent steps, which will create the resources to be placed in the group. It is recommended that you leave the group failback policy at the default value of disabled.

Configure the UNS Queue Manager
1. Place the Queue Manager under MSCS control as described in the MQ documentation. The Queue Manager and its dependent resources should be put in the User Name Server's group created previously.
2. Update any channels referencing this Queue Manager to use the newly configured virtual IP Address defined for the MSCS group.
3. Test the failover of the Queue Manager.

Configure the UNS
1. Copy all files under the UNS WorkPath (the default is C:\Documents and Settings\All Users\Application Data\IBM\MQSI) to the shared disk mentioned previously, for example S:\WorkPath.
2. Delete references in the old components directory to WMB components that will be placed in this MSCS group. Delete references in the new components directory to WMB components that will not be placed in this MSCS group.
3. Using regedit, create or update the registry string entry HKEY_LOCAL_MACHINE\SOFTWARE\IBM\WebSphereMQIntegrator\2\UserNameServer\CurrentVersion\WorkPath, setting the value to S:\WorkPath.
4. For all nodes other than the node the User Name Server is currently running on, execute the command:

   a. sc \\<NodeName> create MQSeriesBrokerUserNameServer binPath= "C:\Program Files\IBM\MQSI\6.0\bin\bipservice.exe" depend= MQSeriesServices DisplayName= "IBM WebSphere Message Broker component UserNameServer" obj= <UserName> password= <Password>
   b. The binPath value should be changed if WMB is installed in a non-default location.
   c. The UserName value should start with .\ for local users or be a valid Domain user. The user must be defined and valid on all nodes in the MSCS cluster.
5. On the primary node, create an MSCS Generic Service resource for the User Name Server's service. The name of the service is MQSeriesBrokerUserNameServer. Specify the dependency on the User Name Server's Queue Manager. When asked for any registry keys to monitor, add the following key: SOFTWARE\IBM\WebSphereMQIntegrator\2\UserNameServer.
6. Leave the start-up policy for the User Name Server's service as its default setting of manual on both nodes. MSCS will be solely responsible for starting and stopping this service.
7. Bring the User Name Server's generic service resource online.
8. Move the MSCS group to the secondary node in the cluster and check that all resources are available and running correctly.

Message Broker
Ensure that the Message Broker is stopped.

Create the Message Broker Group
As described earlier, a broker and its dependencies must be contained in an MSCS group. This will require the creation of shared disk(s) and IP address(es). It is assumed that for independence between Message Brokers you will place each broker in a separate group. If you want to put multiple Message Brokers in the same group you may, in which case you need only perform this step for the first broker in each group.

1. Start the Cluster Administrator and create a new MSCS group. This step creates the group in preparation for the subsequent steps, which will create the resources to be placed in the group. It is recommended that you leave the group failback policy at the default value of disabled.

Configure the Message Broker's Queue Manager
1. Place the Queue Manager under MSCS control as described in the MQ documentation. The Queue Manager and its dependent resources should be put in the Message Broker group created previously.
2. Update any channels referencing this Queue Manager to use the newly configured virtual IP Address defined for the MSCS group.
3. Test the failover of the Queue Manager.

Configure the DB2 instance and database
4. Create a new instance using db2icrt:
   a. db2icrt <new instance name>
5. Update the command prompt to use this new instance:
   a. set DB2INSTANCE=<new instance name>
6. Start the new instance:
   a. db2start
7. Create a file to define relocation parameters:

   a. DB_NAME=<database name>
      DB_PATH=<disk on which the database is located>
      INSTANCE=<original instance>,<new instance name>
      NODENUM=0
8. Relocate the database:
   a. db2relocatedb -f relocate.cfg
9. Delete the old database:
   a. set DB2INSTANCE=DB2
   b. db2 drop db HADB1
10. Move the new instance to MSCS control:
   a. Follow the instructions for 'Using a Local Message Broker database' in the 'Create and Configure the Message Broker database' section of this document.

Configure the Message Broker
11. Copy all files under the Message Broker's WorkPath (the default is C:\Documents and Settings\All Users\Application Data\IBM\MQSI) to the shared disk mentioned previously, for example S:\WorkPath.
12. Delete references in the old components directory to WMB components that will be placed in this MSCS group. Delete references in the new components directory to WMB components that will not be placed in this MSCS group.
13. Using regedit, create or update the registry string entry HKEY_LOCAL_MACHINE\SOFTWARE\IBM\WebSphereMQIntegrator\2\<BrokerName>\CurrentVersion\WorkPath, setting the value to S:\WorkPath, where <BrokerName> should be replaced with the actual name of your Message Broker.
14. For all nodes other than the node the Message Broker is currently running on, execute the command:
   a. sc \\<NodeName> create MQSeriesBroker<BrokerName> binPath= "C:\Program Files\IBM\MQSI\6.0\bin\bipservice.exe" depend= MQSeriesServices DisplayName= "IBM WebSphere Message Broker component <BrokerName>" obj= <UserName> password= <Password>
   b. The binPath value should be changed if WMB is installed in a non-default location.
   c. The UserName value should start with .\ for local users or be a valid Domain user. The user must be defined and valid on all nodes in the MSCS cluster.
15. On the primary node create an MSCS Generic Service resource for the Message Broker service.
   a. The name of the service will be MQSeriesBroker<BrokerName>, where <BrokerName> should be replaced with the actual name of your Message Broker.
   b. Create a dependency on the Message Broker's Queue Manager.
   c. Create a dependency on the Message Broker's database.
   d. When asked for any registry keys to monitor, add the following key: SOFTWARE\IBM\WebSphereMQIntegrator\2\<BrokerName> where <BrokerName> should be replaced with the actual name of your Message Broker.
16. Bring the Message Broker generic service resource online.
17. Move the MSCS group to the secondary node in the cluster and check that all resources are available and running correctly.
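The WorkPath copy and registry update in the migration steps above can be scripted from a command prompt; a hedged sketch for a broker named BRK1 (the broker name is a placeholder, and the default WorkPath shown is the Windows 2003 location used elsewhere in this chapter):

    REM Copy the existing WorkPath to the shared disk, including subdirectories and hidden files
    xcopy "C:\Documents and Settings\All Users\Application Data\IBM\MQSI" S:\WorkPath /E /I /H
    REM Point the component at the new WorkPath (repeat for each component being migrated)
    reg add "HKLM\SOFTWARE\IBM\WebSphereMQIntegrator\2\BRK1\CurrentVersion" /v WorkPath /t REG_SZ /d S:\WorkPath /f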

WMB Toolkit
The .configmgr file(s) created by the toolkit provide location information for Configuration Manager(s). Update this file using the virtual IP address defined in the MSCS group, so that the Configuration Manager can be located no matter which node it is running on.

Chapter 16 MSCS - Removal and Deletion
The following steps describe how to remove a Message Broker, Configuration Manager or UNS from a clustered configuration so that the component can still be used. Follow the product documentation to delete a WMB component completely from your system.

Removing a Configuration Manager from MSCS Control
This step removes the Configuration Manager from MSCS cluster control without destroying it. On completion of this step, the Configuration Manager will be fixed on the primary node, but under manual control. This is necessary because running mqsideleteconfigmgr would not be able to clean up the Queue Manager resources associated with the Configuration Manager. This approach is based on a manual cleanup of the secondary node(s), which will involve editing the registry on those nodes.

1. Designate one of the nodes as the primary and the other(s) as secondary. Ensure the Configuration Manager group is on the primary node.
2. In the Cluster Administrator, delete the Configuration Manager resource instance. Although the resources have been deleted from the cluster configuration, the real services that they represented should continue to function correctly.
3. Move the Configuration Manager's resources to the local machine:
   a. Copy all files from S:\WorkPath to C:\WorkPath (for example).
   b. Using regedit, update the value for the Configuration Manager's work path: change SOFTWARE\IBM\WebSphereMQIntegrator\2\<ConfigurationManager>\CurrentVersion\WorkPath to the new location of the files (C:\WorkPath from above).
   c. Delete from the C:\WorkPath\components directory all components that are still managed by MSCS.
   d. Delete S:\WorkPath\components\<ConfigurationManager>
4. Remove the Configuration Manager Queue Manager from the Configuration Manager group as described in the MQ product documentation.
5. Delete the resource instances for lower level resources, such as disks and IP addresses used by the Configuration Manager and its Queue Manager.
6. Delete the Configuration Manager group.
7. You may want to reconfigure any channels which refer to the Configuration Manager's Queue Manager so that they use the physical IP address rather than the virtual IP address.
8. On the secondary node:
   a. Use the Windows 2003 command sc to uninstall the service: sc \\<NodeName> delete MQSeriesBroker<ConfigurationManager>
   b. Also delete the following registry tree entry, if available: SOFTWARE\IBM\WebSphereMQIntegrator\2\<ConfigurationManager>
   c. Use hadltmqm to delete the Configuration Manager's Queue Manager from the secondary node.
   d. Optionally, reboot the secondary node. This will clear the active control set and ensure that the node is completely clean.

The result should be an operational Configuration Manager which is fixed on the primary node.
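For illustration, a hedged sketch of the secondary-node cleanup in the last step above, for a Configuration Manager named MYCFGMGR on a secondary node named NODE2 (both names are placeholders):

    REM Uninstall the component's Windows service from the secondary node
    sc \\NODE2 delete MQSeriesBrokerMYCFGMGR
    REM Remove the component's registry tree, if present
    reg delete "HKLM\SOFTWARE\IBM\WebSphereMQIntegrator\2\MYCFGMGR" /f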

Removing a UNS from MSCS Control
This step removes the UNS from MSCS cluster control without destroying it. On completion of this step, the UNS will be fixed on the primary node, but under manual control. This is necessary because running mqsideleteusernameserver would not be able to clean up the Queue Manager resources associated with the UNS. This step is based on a manual cleanup of the secondary node(s), which involves editing the registry on the secondary node(s).

1. Designate one of the nodes as the primary and the other(s) as secondary. Ensure the UNS group is on the primary node.
2. In the Cluster Administrator, delete the UNS resource instance. Although the resources have been deleted from the cluster configuration, the real services that they represented should continue to function correctly.
3. Move the UNS resources to the local machine:
   a. Copy all files from S:\WorkPath to C:\WorkPath (for example).
   b. Using regedit, update the value for the UNS work path: change SOFTWARE\IBM\WebSphereMQIntegrator\2\UserNameServer\CurrentVersion\WorkPath to the new location of the files (C:\WorkPath from above).
   c. Delete from the C:\WorkPath\components directory all components that are still managed by MSCS.
   d. Delete S:\WorkPath\components\UserNameServer
4. Remove the UNS Queue Manager from the UNS group as described in the MQ product documentation.
5. Delete the resource instances for lower level resources, such as disks and IP addresses used by the UNS and its Queue Manager.
6. Delete the UNS group.
7. You may want to reconfigure any channels which refer to the UNS Queue Manager so that they use the physical IP address rather than the virtual IP address.
8. On the secondary node:
   a. Use the Windows 2003 command sc to uninstall the service: sc \\<NodeName> delete MQSeriesBrokerUserNameServer
   b. Also delete the following registry tree entry, if available: SOFTWARE\IBM\WebSphereMQIntegrator\2\UserNameServer
   c. Use hadltmqm to delete the UNS Queue Manager from the secondary node.
   d. Optionally, reboot the secondary node. This will clear the active control set and ensure that the node is completely clean.

The result should be an operational UNS which is fixed on the primary node.

Removing a Message Broker from MSCS Control
This step removes a Message Broker from MSCS control without destroying it. On completion of this step, the Message Broker will be fixed on the primary node, but under manual control. This is necessary because mqsideletebroker would not be able to clean up the Queue Manager and DB2 resources used by the broker, as they are unavailable when the command is executed. This approach is based on a manual cleanup of the secondary node(s), which will involve editing the registry on those nodes.

1. Designate one of the nodes as the primary and the other(s) as the secondary. Ensure the Message Broker group is on the primary node.
2. In the Cluster Administrator, delete the Message Broker's resource instance. Although the resources have been deleted from the cluster configuration, the real services that they represented should continue to function normally.
3. Move the Message Broker's resources to the local machine:

   a. Copy all files from S:\WorkPath to C:\WorkPath (for example).
   b. Using regedit, update the value for the Message Broker's work path: change SOFTWARE\IBM\WebSphereMQIntegrator\2\<BrokerName>\CurrentVersion\WorkPath to the new location of the files (C:\WorkPath from above).
   c. Delete from C:\WorkPath\components all components that are still managed by MSCS.
   d. Delete S:\WorkPath\components\<BrokerName>
4. Remove the Message Broker's Queue Manager from the Message Broker group as described in the MQ product documentation.
5. Remove the broker database from the Message Broker group as described in the documentation for the db2mscs command.
6. Optionally delete the resource instances for lower level resources, such as disks and IP addresses used by the broker and its dependencies.
7. Optionally delete the Message Broker group.
8. You may also want to reconfigure any channels which refer to the queue manager so that they use the physical IP address rather than the virtual IP address assigned in MSCS (see the sketch at the end of this section).
9. On the secondary node:
   a. Use the Windows 2003 command sc to uninstall the service: sc \\<NodeName> delete MQSeriesBroker<BrokerName> where <BrokerName> is the name of the Message Broker.
   b. Also delete the following registry tree entry, if available: SOFTWARE\IBM\WebSphereMQIntegrator\2\<BrokerName> where <BrokerName> should be replaced with the actual name of your Message Broker.
   c. Also delete the ODBC data source set up for the broker's database.
   d. Use hadltmqm to delete the Message Broker's Queue Manager from the secondary node.
   e. Optionally, reboot the secondary node. This will clear the active control set and ensure that the node is completely clean.

The result should be an operational Message Broker which is fixed on the primary node.
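For illustration, a hedged MQSC sketch of the channel reconfiguration mentioned above, re-pointing a sender channel at the physical IP address of the node the queue manager is now fixed on. The channel name BROKER.TO.CONFIGMGR and the address 9.20.110.12(1414) are placeholders.

    ALTER CHANNEL('BROKER.TO.CONFIGMGR') CHLTYPE(SDR) CONNAME('9.20.110.12(1414)')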

Appendix A - Sample Configuration Files

types.cf
The MQSIConfigMgr, MQSIBroker and MQSIUNS resource types can be created by adding the following resource type definitions to the types.cf file.

type MQSIConfigMgr (
    static int OfflineTimeout = 60
    static str LogLevel = error
    static str ArgList[] = { ConfigMgrName, UserID }
    NameRule = resource.ConfigMgrName
    str ConfigMgrName
    str UserID
)

type MQSIBroker (
    static int OfflineTimeout = 60
    static str LogLevel = error
    static str ArgList[] = { BrokerName, UserID }
    NameRule = resource.BrokerName
    str BrokerName
    str UserID
)

type MQSIUNS (
    static int OfflineTimeout = 60
    static str LogLevel = error
    static str ArgList[] = { UserID }
    NameRule = "UserNameServer"
    str UserID
)

Note that the MQSIUNS resource type does not include a name attribute. This is because there can only be one UNS, called "UserNameServer", within the cluster, as discussed in the earlier chapter on "Planning your configuration", and the type definition enforces this uniqueness using the NameRule.

As well as creating the resource types, this also sets the values of the following resource type attributes:

• OfflineTimeout
The VCS default of 300 seconds is quite long for an MQSI broker or UNS, so the suggested value for this attribute is 60 seconds. You can adjust this attribute to suit your own configuration, but it is recommended that you do not set it any shorter than approximately 15 seconds.

• LogLevel
It is recommended that you run the MQSIBroker and MQSIUNS agents with LogLevel set to 'error'. This will display any serious error conditions (in the VCS log). If you want more detail of what either agent is doing, then you can increase the LogLevel to 'debug' or 'all', but this will produce far more messages and is not recommended for regular operation.

• OnlineWaitLimit
It is recommended that you configure the OnlineWaitLimit for the resource types. The default setting is 2, but to accelerate detection of start failures this attribute should be set to 0.

If you add the resource types in this way, make sure you have stopped the cluster and use hacf -verify to check that the modified file is correct.
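A hedged sketch of applying the edited types.cf, using the default VCS configuration directory assumed elsewhere in this appendix (verify the commands against your VCS release):

    # Stop the cluster engine on all systems, leaving applications running
    hastop -all -force
    # Edit /etc/VRTSvcs/conf/config/types.cf, then check the syntax of the modified configuration
    hacf -verify /etc/VRTSvcs/conf/config
    # Restart VCS
    hastart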

main.cf
Resources of types MQSIConfigMgr, MQSIBroker and MQSIUNS can be defined by adding resource entries to the /etc/VRTSvcs/conf/config/main.cf file. The following is a complete main.cf for a simple cluster (called Kona) with two systems (sunph1, sunph2) and one service group (vxg1), which includes resources for a Configuration Manager (ConfigMgr), a broker (BRK1) and a UserNameServer, all of which use a queue manager (VXQM1). There is also a database instance for the broker database (VXBDB1), used by the BRK1 broker. The group uses an IP address (resource name vxip1) and filesystems managed by Mount resources (vxmnt1, vxmnt2) and a DiskGroup (resource name vxdg1). The file has been split across multiple pages for clarity.

include "types.cf"

cluster Kona (
    UserNames = { admin = "cDRpdxPmHpzS." }
    CounterInterval = 5
    Factor = { runque = 5, memory = 1, disk = 10, cpu = 25, network = 5 }
    MaxFactor = { runque = 100, memory = 10, disk = 100, cpu = 100, network = 100 }
)

system sunph1
system sunph2

snmp vcs (
    TrapList = { 1 = "A new system has joined the VCS Cluster",
                 2 = "An existing system has changed its state",
                 3 = "A service group has changed its state",
                 4 = "One or more heartbeat links has gone down",
                 5 = "An HA service has done a manual restart",
                 6 = "An HA service has been manually idled",
                 7 = "An HA service has been successfully started" }
)

group vxg1 (
    SystemList = { sunph1, sunph2 }
)

MQSIConfigMgr ConfigMgr (
    ConfigMgrName @snetterton = ConfigMgr
    UserID @snetterton = argostr
)

MQSIUNS UserNameServer (
    UserID @snetterton = argostr
)

MQSIBroker BRK1 (
    BrokerName @snetterton = BRK1
    UserID @snetterton = argostr
)

Db2udb VXBDB1 (
    DB2InstOwner = vxdb1
    DB2InstHome = "/MQHA/VXQM1/data/databases/vxdb1"
)

MQM VXQM1 (
    QMName = VXQM1
)

DiskGroup vxdg1 (
    DiskGroup = vxdg1
)

IP vxip1 (
    Device = hme0
    Address = "9.20.110.248"
)

Mount vxmnt1 (
    MountPoint = "/MQHA/VXQM1/data"
    BlockDevice = "/dev/vx/dsk/vxdg1/vxvol1"
    FSType = vxfs
)

Mount vxmnt2 (
    MountPoint = "/MQHA/VXQM1/log"
    BlockDevice = "/dev/vx/dsk/vxdg1/vxvol2"
    FSType = vxfs
)

NIC vxnic1 (
    Device = hme0
    NetworkType = ether
)

ConfigMgr requires VXQM1
BRK1 requires VXQM1
BRK1 requires VXBDB1
UserNameServer requires VXQM1
VXBDB1 requires vxmnt1
VXQM1 requires vxip1
VXQM1 requires vxmnt1
VXQM1 requires vxmnt2
vxip1 requires vxnic1
vxmnt1 requires vxdg1
vxmnt2 requires vxdg1

// resource dependency tree
//
// group vxg1
// {
//     MQSIConfigMgr ConfigMgr
//     {
//         MQM VXQM1
//         {
//             IP vxip1
//             {
//                 NIC vxnic1
//             }
//             Mount vxmnt1
//             {
//                 DiskGroup vxdg1
//             }
//             Mount vxmnt2
//             {
//                 DiskGroup vxdg1
//             }
//         }
//     }
//     MQSIBroker BRK1
//     {
//         Db2udb VXBDB1
//         {
//             Mount vxmnt1
//             {
//                 DiskGroup vxdg1
//             }
//         }
//         MQM VXQM1
//         {
//             IP vxip1
//             {
//                 NIC vxnic1
//             }
//             Mount vxmnt1
//             {
//                 DiskGroup vxdg1
//             }
//             Mount vxmnt2
//             {
//                 DiskGroup vxdg1
//             }
//         }
//     }
//     MQSIUNS UserNameServer
//     {
//         MQM VXQM1
//         {
//             IP vxip1
//             {
//                 NIC vxnic1
//             }
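As an alternative to editing main.cf directly, the same broker resource can be added to a running configuration with the VCS command line. A hedged sketch using the resource, group and attribute names from the sample above (verify the commands against your VCS release):

    # Open the cluster configuration for writing
    haconf -makerw
    # Add the broker resource to group vxg1 and set its attributes
    hares -add BRK1 MQSIBroker vxg1
    hares -modify BRK1 BrokerName BRK1
    hares -modify BRK1 UserID argostr
    # Declare the dependencies shown in main.cf
    hares -link BRK1 VXQM1
    hares -link BRK1 VXBDB1
    # Enable the resource and save the configuration
    hares -modify BRK1 Enabled 1
    haconf -dump -makero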

Comments
If you have any comments on this SupportPac, please send them to:

Email: convery@uk.ibm.com, coxsteph@uk.ibm.com, aford@uk.ibm.com

Post: Rob Convery, MailPoint 211, IBM UK Laboratories Ltd., Hursley Park, Winchester, SO21 2JN, UK
Stephen Cox, MailPoint 154, IBM UK Laboratories Ltd., Hursley Park, Winchester, SO21 2JN, UK
