Cluster with iSCSI target & initiator on CentOS 5

Assume we have two systems with CentOS 5 installed and we want to create a cluster to provide High Availability for some services (in this tutorial, the Apache Web Server).
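For reference, the addresses used throughout this tutorial are the iSCSI target (san01) on 10.0.0.1, the two cluster nodes www1 and www2, luci reached on 10.0.0.10:8084, and the shared service IP 192.168.0.9. A minimal /etc/hosts sketch for the nodes could look like the following; the node IPs 10.0.0.11 and 10.0.0.12 are assumptions, only the other addresses actually appear in the steps below:

# vim /etc/hosts
10.0.0.1    san01.company.com   san01    # iSCSI target
10.0.0.11   www1.company.com    www1     # cluster node 1 (assumed IP)
10.0.0.12   www2.company.com    www2     # cluster node 2 (assumed IP)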

iSCSI Target
1. Install RPM packages
# yum -y install kernel-devel openssl-devel gcc
# yum groupinstall "Cluster Storage"

2. Download the iSCSI Enterprise Target source here: http://sourceforge.net/project/showfiles.php?group_id=108475

3. Extract the archive, then build and install it:
# tar zxvf iscsitarget-1.4.20.2.tar.gz
# cd iscsitarget-1.4.20.2
# make
# make install
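The build installs the IET kernel module (iscsi_trgt) for the running kernel. Normally the init script loads it for you, but it can be checked manually; a quick sketch:

# modprobe iscsi_trgt
# lsmod | grep iscsi_trgt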

4. Create the IQN (iSCSI Qualified Name) in the ietd configuration file.
# vim /etc/iet/ietd.conf
Target iqn.2010-10.com.company:storage.disk2.san01.cluster
        IncomingUser rab 123
        OutgoingUser rab 123456
        Lun 0 Path=/dev/sdb,Type=fileio
        Alias iSCSI-Disk1
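Here the whole /dev/sdb disk is exported as LUN 0, and the IncomingUser/OutgoingUser credentials must match the CHAP settings configured on the initiators later. If a spare disk is not available, IET can also export a file-backed LUN; the image path below is purely an assumption:

# dd if=/dev/zero of=/srv/iscsi/disk0.img bs=1M count=5120    # pre-allocate a 5 GB image (assumed path)
        Lun 0 Path=/srv/iscsi/disk0.img,Type=fileio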

5. Decide who can connect to the IET daemon:
# vim /etc/iet/initiators.allow
iqn.2010-10.com.company:storage.disk2.san01.cluster 10.0.0.0/24
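The entry above allows the whole 10.0.0.0/24 subnet to connect. Access could be narrowed to the cluster nodes only; this is a sketch, and the node IPs are assumptions (use whatever addresses www1 and www2 actually have):

iqn.2010-10.com.company:storage.disk2.san01.cluster 10.0.0.11, 10.0.0.12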

6. Start the iscsi-target service and enable it at boot:
# chkconfig --add iscsi-target
# chkconfig iscsi-target on
# service iscsi-target start
# service iscsi-target status
iSCSI Target (pid 5312) is running...
# cat /proc/net/iet/session
tid:1 name:iqn.2010-10.com.company:storage.disk2.san01.cluster
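If a firewall is active on the target, the iSCSI port (3260/tcp) must be reachable from the initiators. A minimal iptables sketch, assuming the default CentOS 5 firewall setup:

# iptables -I INPUT -p tcp --dport 3260 -s 10.0.0.0/24 -j ACCEPT
# service iptables save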

7. Create a new Physical Volume, a new Volume Group and a new Logical Volume to use as shared storage for the cluster nodes.
# pvcreate /dev/sdb
# vgcreate vg1 /dev/sdb
# vgdisplay vg1 | grep "Total PE"
  Total PE              1279
# lvcreate -l 1279 -n lv0 vg1
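A quick check of the resulting LVM layout before moving on (a sketch; the reported sizes depend on your disk):

# pvs
# vgs vg1
# lvs vg1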

Explanation: We've created a new volume group vg1 and a new logical volume lv0. The -l 1279 parameter is based on the size of our iSCSI shared storage, in this case 5 GB.

8. Create a clustered GFS2 file system
# mkfs.gfs2 -p lock_dlm -t company-cluster:storage1 -j 8 /dev/vg1/lv0
Explanation: We've created a GFS2 file system with the locking protocol lock_dlm, for a cluster called company-cluster and with the name storage1. With 8 journals (-j 8) this GFS2 can be used by a maximum of 8 hosts, and it was created on the /dev/vg1/lv0 device.

iSCSI Initiator
1. Install all needed packages on both node systems
# yum groupinstall -y "Cluster Storage" "Clustering"
# yum install -y iscsi-initiator-utils isns-utils

2. Define the initiator name
On www1.company.com:
# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2010-10.com.company:www1
On www2.company.com:
# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2010-10.com.company:www2

3. Configure authentication and some specials in iscsid.conf
# vim /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = rab
node.session.auth.password = 123
node.session.auth.username_in = rab
node.session.auth.password_in = 123456
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 20
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.session.iscsi.FastAbort = Yes
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
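Most of these values are the stock open-iscsi defaults; the lines that actually matter for this setup are node.startup and the CHAP entries, which must match the IncomingUser/OutgoingUser pair defined on the target. A quick sanity check that they are set and uncommented (a sketch):

# grep -E '^(node.startup|node.session.auth)' /etc/iscsi/iscsid.conf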

4. Start the iscsi and iscsid services on both systems:
# service iscsi start
# service iscsid start

5. Configure them to start at boot time:
# chkconfig iscsi on
# chkconfig iscsid on

6. Run the following command to scan your iSCSI SAN:
# iscsiadm -m discovery -t st -p 10.0.0.1
10.0.0.1:3260,1 iqn.2010-10.com.company:storage.disk2.san01.cluster

7. Run the following command to log in:
# iscsiadm -m node -p 10.0.0.1 -T iqn.2010-10.com.company:storage.disk2.san01.cluster --login

8. Restart iSCSI:
# service iscsi restart

9. On both systems, run luci and ricci and configure them for automatic startup:
# service luci start
# service ricci start
# chkconfig luci on
# chkconfig ricci on

10. Initialize the luci server using the luci_admin init command:
# service luci stop
# luci_admin init
# service luci restart

11. Administer Red Hat Clusters with Conga. Point your browser to https://10.0.0.10:8084 to access luci.
- As administrator of luci, select the cluster tab.
- Click Create a New Cluster.
- At the Cluster Name text box, enter the cluster name company-cluster. Add the node name and password for each cluster node.
- Click Submit. Clicking Submit causes the following actions:
  a. Cluster software packages to be downloaded onto each cluster node.
  b. Cluster software to be installed onto each cluster node.
  c. Cluster configuration file to be created and propagated to each node in the cluster.
  d. Starting the cluster.
A progress page shows the progress of those actions for each node in the cluster. When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster.
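Once Conga reports the cluster as started, membership can be verified from either node with the cluster tools installed in step 1; a sketch (the exact output depends on your nodes):

# cman_tool nodes
# clustat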

12. Managing your newly created cluster: you can add resources. Add a resource, for example, choose IP Address and use 192.168.0.9.

13. Check if the shared IP address is working correctly:
# ip addr list

14. Edit fstab and add:
/dev/vg1/lv0 /data gfs2 defaults,acl 0 0

15. Create the /data directory:
# mkdir /data

16. Check with the command:
# mount -a
Try to mount/umount and to read and write; if all works fine you can continue.

17. Create two directories under /data:
# mkdir -p /data/websites/default
# echo WORKS > /data/websites/default/index.html
# chown -R apache:apache /data/websites

18. Configure Apache to use one or more virtual hosts on a folder on the same storage; add to the end of /etc/httpd/conf/httpd.conf:
<VirtualHost *:80>
    ServerAdmin webmaster@company.com
    DocumentRoot /data/websites/default
    ServerName www.company.com
    ErrorLog logs/www.company.com-error_log
    CustomLog logs/www.company.com-access_log common
</VirtualHost>

19. Configure Apache to start at boot time and start it:
# chkconfig httpd on
# service httpd start

20. Create a service named "cluster": add the resource IP Address you created before, check "Automatically start this service", check "Run exclusive" and choose "Relocate" as the Recovery policy. Save the service and try to start it on one cluster node. If the created service gives no errors on both nodes, enable it. Shut down or disconnect one node from the network and check that the web page at 192.168.0.9 is still reachable.
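A manual way to exercise the failover from step 20 before pulling a network cable: request the test page through the shared IP, relocate the service, and request it again. This is only a sketch; clusvcadm ships with rgmanager, and the target node name here is an assumption:

# curl http://192.168.0.9/
WORKS
# clusvcadm -r cluster -m www2.company.com
# curl http://192.168.0.9/
WORKS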
