
How to install SC 2.2 HA or PDB
Send all comments to raffaele.lapietra@east. This doc can be found at http://neato.east/suncluster/scinstall.html
Last Modified 06 Dec 1999

I. Paper Work
   A. Check off the pre-site checklist
   B. Fill out the statement of work and get it signed
   C. Fill out the configuration questions & network diagram
   D. Get customer to order licenses for VxVM and VxFS

II. Install admin station & TC
   A. Install HW & OS & recommended patches
   B. scinstall client software and pkgadd SUNWscmgr
   C. Add patch 107388 for 2.6 or 107538 for 2.7
   D. Add /opt/SUNWcluster/bin to path
   E. Add all hosts to /etc/hosts
   F. Add entry to /etc/clusters - <clustername> <host1> ... <host n>
   G. Add entry to /etc/serialports
      1. If not an E10K, entry is - <host> <tc> <500x>, where x = port number
      2. If an E10K, entry is - <host> <ssp> 23
   I. Add entry to /etc/remote:
      tc:dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:
   J. Install TC and cables (cable is 530-2151 or 530-2152)
      1. Connect port 1 to admin station serial port a and tip tc
      2. Get into monitor mode by holding down the test button while you power
         on the TC. When the green light blinks, release the test button and
         then press it again.
         a. Set IP address - monitor:: addr
         b. Check boot image is oper.52.enet - monitor:: image
         c. Set to boot from self - monitor:: seq
         d. Power cycle the TC
         e. Become su and go into admin mode
            annex: su
            passwd: <tc ip address>
            annex# admin
         f. Configure serial ports:
            admin: set port=1-8 type dial_in imask_7bits Y
            admin: set port=2-8 mode slave
            admin: quit
            annex# boot
            bootfile: <return>
            warning: <return>
         g. Type "~." to exit your tip session
      3. Connect TC ports to serial port a on the nodes matching /etc/serialports

III. Install cluster nodes
   A. On admin station - ccp <clustname>
      1. Choose cluster console (console mode).
      2. If Multi-Initiated SCSI, follow HW install guide (805-6511) chapter 4.
      3. Install FULL DISTRIBUTION of OS (with OEM for E10K)
         a. Use TERM type "xterms". (in .profile: TERM=xterms;export TERM)
         b. After reboot, edit the /etc/default/login file to allow remote root
         c. Go back to the ccp and choose cluster console (telnet mode). This
            will allow faster access for the rest of the process. Use the
            (console mode) windows to see any error messages logged to the
            console.
      4. Install the recommended patch cluster (+ E10K patches if necessary)
      5. If using A3x00 or A1000, install RM6.1.1_u1 and add patch 106707
      6. Install the following patches for 2.6:
         105795 hme
         106172 fas scsi

         106532 qfe 2.2 (do NOT install -03, use -01)
      7. Add entries in /etc/hosts for each log host, phys host, admin and tc
      8. Add entries in /etc/ethers on admin station for all phys hosts
      9. Add to root paths (.profile)
         a. For all: /opt/SUNWcluster/bin:/opt/SUNWpnm/bin
         b. For CVM and VxVM add /opt/vxva/bin:/etc/vx/bin
         c. For SDS add /usr/opt/SUNWmd/sbin
         d. For SCI add /opt/SUNWsci/bin:/opt/SUNWscid/bin:/opt/SUNWsma/bin
      10. If using SDS, edit /.rhosts file
          # Add the following 6 lines for all clusters
          204.152.65.33
          204.152.65.1
          204.152.65.17
          204.152.65.34
          204.152.65.2
          204.152.65.18
          # Add the following 3 lines for 3 or 4 node clusters
          204.152.65.35
          204.152.65.3
          204.152.65.19
          # Add the following 3 lines for 4 node clusters
          204.152.65.36
          204.152.65.4
          204.152.65.20
      11. Edit /etc/nsswitch.conf. Put files first for hosts, group and services
      12. Enable savecore in /etc/init.d/sysetup for 2.6

IV. Install SC on the nodes
   A. Do your scinstall of the server software
   B. Install Disk Management software
      1. For VxVM install VxVM 2.6 and patch 106606-03 or later
      2. For CVM install CVM 2.2.1 and patch 107958 (when available)
         Reboot to activate auto license
      3. For SDS 4.2 pkgadd SUNWmd and patch 106627
         If needed, change /kernel/drv/md.conf file:
         nmd=<max metadevice number>, md_nsets=<max number of disksets>
      4. Install VxFS if necessary (patch 108019 will get you to 3.2.6)
   C. Install patches
      Patch                  2.6      2.7
      ====================== ======   ======
      Sun Cluster Manager    107388   107538
      clustd patch           107748   N/A
      Framework rpcbind      107763   107764
      pnmd patch             107931   107932
      Support for Oracle8i   107996   107997
      HA Tivoli support      108034   N/A
      disktype patch         108086   108087
      hactl patch            108095   108096
      HA NSLDAP              108109   N/A
      SCI Drivers            108190   108191
      HA-NFS patch           108215   108216
      DID Drivers            108347   N/A
      SNMP Agent Patch       N/A      108292
   D. If using SCI, configure it ON ONE NODE
      1. cp /opt/SUNWsma/bin/Examples/<samp> to /opt/SUNWsma/bin/<clustername>.sc
         and edit it, where <samp> = switch1.sc, switch2.sc or link1.sc
      2. /opt/SUNWsma/bin/sm_config -f /opt/SUNWsma/bin/<clustername>.sc
   F. License the software
      1. vxlicense -c for VxVM
      2. vxfsserial -c for VxFS

      3. If using A5x00 and the auto license doesn't show up:
         vxconfigd -d ; vxdctl license init
   G. If using VxVM or CVM, do your vxinstall
      1. vxinstall
         a. Do NOT encapsulate the root drive (will do later)
         b. Only add the root mirror disk to rootdg and name it rootmir
         c. Leave all other disks alone
      2. If using VxVM (not CVM), disable DMP
         a. rm /kernel/drv/vxdmp
            rm -rf /dev/vx/dmp /dev/vx/rdmp
            ln -s /dev/dsk /dev/vx/dmp
            ln -s /dev/rdsk /dev/vx/rdmp
         b. Edit /etc/system and remove the line - forceload: drv/vxdmp
   H. For PDB ONLY, load and configure the udlm driver
      1. Create a group 'dba' with the same group_id on both nodes
      2. Add the driver from the ORACLE distribution CD on both nodes. The
         driver is usually in a directory called ops_patch or OPS_patches.
         Copy it to /tmp, uncompress and untar if necessary, and
         pkgadd -d /tmp ORCLudlm

V. Edit the /etc/system file
   A. Add a line - exclude: lofs
   B. Add a line: set ip:ip_enable_group_ifs=0
   C. Add any entries for databases
      1. For Sybase:
         set shmsys:shminfo_shmmax=240000000 (1/2 physical mem size)
         set shmsys:shminfo_shmseg=200
         set rlim_fd_cur=1024
      2. For Oracle HA or PDB:
         set shmsys:shminfo_shmmax=4294967295 (1/2 physical mem size)
         set shmsys:shminfo_shmmin=200
         set shmsys:shminfo_shmmni=200
         set shmsys:shminfo_shmseg=200
         set semsys:seminfo_semmap=250
         set semsys:seminfo_semmni=500
         set semsys:seminfo_semmns=600
         set semsys:seminfo_semmnu=600
         set semsys:seminfo_semume=600
         set semsys:seminfo_semmsl=1000
      3. For Informix:
         set shmsys:shminfo_shmmax=4026531839 (3.86GB = max)
         set shmsys:shminfo_shmmni=200
         set shmsys:shminfo_shmseg=200
   D. init 6 (reboot for /etc/system changes to take effect)

VI. Configure NAFO groups and shared CCD and start the cluster
   A. Configure/check NAFO groups
      1. To configure: pnmset
      2. To check: pnmstat -l
   B. If you want a shared ccd in a 2 node cluster and are NOT using SDS, run
      1. On both nodes - scconf <clustername> -S ccdvol
      2. On 1 node - confccdssa
      3. For no shared ccd, you can run scconf <clustername> -S none
   C. scadmin startcluster <nodename> <clustname> on ONE NODE
   D. scadmin startnode on ALL OTHER NODES
   E. On one node check with: hastat | more

VII. For SDS ONLY - Configure SDS and did
   A. Initialize did
      1. MAKE SURE YOU DID STEP III.10. (rhosts)
      2. On node 0, run scdidadm -r

      3. If you get the error "The did entries must be the same on all nodes"
         a. Edit /etc/name_to_major on all nodes to make them match for did
         b. scadmin stopnode
         c. rm -rf /devices/pseudo/did* /dev/did /etc/did.conf
         d. reboot -- -r
         e. After reboot, rerun scdidadm -r
      4. scdidadm -L to make sure the driver is initialized
   B. Create the metadbs on a LOCAL drive with a slice of at least 2mb
      1. metadb -afc 3 c<w>t<x>d<y>s<z>
      2. metadb
   C. Create disk set
      1. Create with - metaset -s <loghost> -a -h node1 ... node<n>
      2. Check with - metaset
      3. Add disks - metaset -s <loghost> -a /dev/did/dsk/d<n> ...
      4. Check with - metaset -s <loghost>
      5. Repartition disks using format, prtvtoc and fmthard
         a. Leave slice 7 alone! (db replicas)
         b. Recommended disk layout
            slice  size  Use for
            -----  ----  -------
            0      rest  data
            2      all   overlap
            4      9mb   ha file system     \  4 and 5 on 2 disks only
            5      2mb   ha file system log /  for HA admin file system
            6      1%    data log
            7      2mb   sds replica
      6. Create the md.tab file in /etc/opt/SUNWmd
      7. Initialize the md.tab for each diskset on one node
         a. metaset -s <loghost> -t
         b. metainit -s <loghost> -a
         c. metastat -s <loghost> (to check it)
      8. Set up the golden mediator
         a. metaset -s <loghost> -a -m <node0> ... <node-n>
         b. medstat -s <loghost>

VIII. FOR VxVM and CVM ONLY
   A. Initialize disks and create the disk groups ON ONE NODE
      1. Method 1: /etc/vx/bin/vxdisksetup -i c<n>t<n>d<n>
         HA ONLY  - vxdg init <diskgroup> c<n>t<n>d<n> ...
         PDB ONLY - vxdg -s init <diskgroup> c<n>t<n>d<n> ...
      2. Method 2: vxdiskadm - Choose option 1 (initialize) and add to a new
         disk group
         PDB ONLY: vxdg deport <diskgroup>; vxdg -s import <diskgroup>

IX. Create logical hosts & start HA nfs and other services
   A. HA ONLY - Create logical host ON ONE NODE
      1. scconf <clustname> -L <loghost> -n <node0,node1> -g <diskgroup> \
         -i <interface,interface,loghost> -m
   B. HA with VxVM ONLY - Create admin file system ON EVERY NODE
      1. scconf <clustname> -F <loghost>
   C. HA with SDS ONLY - Create admin file system
      a. newfs /dev/md/<loghost>/rdsk/d<n> on 1 node
      b. mkdir /<loghost> on all nodes
      c. mount /dev/md/<loghost>/dsk/d<n> /<loghost> on 1 node
   D. HA ONLY - Go to /etc/opt/SUNWcluster/conf/hanfs and update the vfstab
      and dfstab files and start HA-NFS on all nodes
      1. SDS ONLY - add the admin file system created above to vfstab.<loghost>
      2. Update vfstab.<loghost> files with an entry for the HA file systems
      3. Update the dfstab.<loghost> files with any HA-NFS file systems

      4. Mount the file systems using mount on one node
      5. Share the file systems using share on one node
      6. hareg -s -r nfs -h <loghost> for the first logical host on one node
      7. scconf -s nfs <loghost> for all additional logical hosts on one node
      8. hareg -y nfs on one node
      9. haswitch -r
   F. HA ONLY - Register HA-DBS services. See SC 2.2 Install Guide pn 805-4239
      1. Ch 5 for HA Oracle
         a. Make sure any volumes created are owned by oracle/dba using vxva or
            vxedit -g <diskgroup> set mode=755 user=oracle group=dba <volume>
      2. Ch 6 for HA Sybase
      3. Ch 7 for HA Informix

X. Complete installation
   1. Encapsulate root disk using vxdiskadm option 2 or vxroot
   2. Mirror root using vxva or: vxrootmir <rootmir>; vxmirror <rootdisk rootmir>
   3. Test scmgr:
      a. From the admin workstation - scmgr <any_cluster_node>
   4. Follow SEC Signoff Checklist for testing
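
The /etc/clusters and /etc/serialports entries in steps II.F-G follow a fixed pattern per node, so they can be generated rather than typed. The sketch below assumes a non-E10K cluster; the cluster, TC and node names are hypothetical, and it writes to a staging directory rather than /etc so the files can be reviewed before copying them into place. Port numbering follows step II.J: TC port 1 goes to the admin station, so the first node lands on port 2 (entry 5002).

```shell
#!/bin/sh
# Sketch: stage /etc/clusters and /etc/serialports entries for a
# non-E10K cluster (steps II.F-G). All names below are hypothetical.
STAGE=${STAGE:-./stage}            # review here, then copy into /etc
CLUSTER=sc-cluster                 # hypothetical cluster name
TC=sc-tc                           # hypothetical terminal concentrator
NODES="phys-hahost1 phys-hahost2"  # hypothetical node names

mkdir -p "$STAGE"

# /etc/clusters format: <clustername> <host1> ... <host n>
echo "$CLUSTER $NODES" > "$STAGE/clusters"

# /etc/serialports format: <host> <tc> <500x>, where x is the TC port.
# TC port 1 is cabled to the admin station, so nodes start at port 2.
port=2
for node in $NODES; do
    echo "$node $TC 500$port" >> "$STAGE/serialports"
    port=$((port + 1))
done
```

After review, the staged files can be merged into /etc/clusters and /etc/serialports on the admin station.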
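
The /.rhosts entries required for SDS in step III.10 depend only on the node count, so they are easy to get wrong when typed by hand. This sketch prints the exact IP list from that step for a given cluster size; append its output to /.rhosts on each node after checking it.

```shell
#!/bin/sh
# Sketch: print the /.rhosts private-network entries required for SDS
# (step III.10) for a given node count. IPs are taken from the checklist.
rhosts_entries() {
    # $1 = number of nodes in the cluster (2, 3, or 4)
    # These 6 lines are required for all clusters
    printf '%s\n' 204.152.65.33 204.152.65.1 204.152.65.17 \
                  204.152.65.34 204.152.65.2 204.152.65.18
    # 3 more lines for 3- or 4-node clusters
    [ "$1" -ge 3 ] && printf '%s\n' 204.152.65.35 204.152.65.3 204.152.65.19
    # 3 more lines for 4-node clusters
    [ "$1" -ge 4 ] && printf '%s\n' 204.152.65.36 204.152.65.4 204.152.65.20
    return 0
}

# Usage example for a 2-node cluster (append to /.rhosts on each node):
rhosts_entries 2
```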
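
Step VII.A.3.a requires the did major number in /etc/name_to_major to match on every node before the did devices are rebuilt. A quick way to check is to gather each node's copy of the file and compare the did entries; the sketch below shows the comparison, using two hypothetical local sample files in place of the real per-node copies.

```shell
#!/bin/sh
# Sketch: verify the 'did' major number matches across nodes
# (step VII.A.3.a). In practice, copy each node's /etc/name_to_major
# locally first; the sample files here are hypothetical stand-ins.

# Extract the major number assigned to the did driver from one file
did_major() {
    awk '$1 == "did" { print $2 }' "$1"
}

# --- demonstration with two hypothetical per-node copies ---
printf 'sd 32\ndid 149\n' > name_to_major.node1
printf 'sd 32\ndid 150\n' > name_to_major.node2

if [ "$(did_major name_to_major.node1)" = "$(did_major name_to_major.node2)" ]; then
    echo "did majors match"
else
    echo "did majors differ - fix /etc/name_to_major per step VII.A.3"
fi
```

With the sample values above (149 vs 150) the check reports a mismatch, which is the case that calls for the stopnode/cleanup/reboot sequence in step VII.A.3.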