
Gluster Setup HowTo

This is the manual process for creating self-contained GlusterFS storage on ECP nodes:

On each node run the following to install the software:

yum install -y libibverbs fuse

rpm -Uvh http://ftp.gluster.com/pub/gluster/glusterfs/3.0/LATEST/CentOS/glusterfs-common-3.0.5-1.x86_64.rpm

rpm -Uvh http://ftp.gluster.com/pub/gluster/glusterfs/3.0/LATEST/CentOS/glusterfs-server-3.0.5-1.x86_64.rpm

rpm -Uvh http://ftp.gluster.com/pub/gluster/glusterfs/3.0/LATEST/CentOS/glusterfs-client-3.0.5-1.x86_64.rpm
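If you manage several nodes, the install steps above can be scripted from one machine over SSH. This is an illustrative dry run using the guide's example hostnames cc1-cc4 (substitute your own); each command is printed rather than executed — remove the leading `echo` to actually run them.

```shell
# Example node list -- substitute your own hostnames or IPs.
NODES="cc1 cc2 cc3 cc4"
BASE="http://ftp.gluster.com/pub/gluster/glusterfs/3.0/LATEST/CentOS"

for node in $NODES; do
  # Dry run: each line is printed, not executed; drop 'echo' to run.
  echo ssh root@$node "yum install -y libibverbs fuse"
  for pkg in common server client; do
    echo ssh root@$node "rpm -Uvh $BASE/glusterfs-$pkg-3.0.5-1.x86_64.rpm"
  done
done
```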

On each node run the following to create a gluster storage directory:

mkdir /glusterfs

On the primary node run the following to create the configuration files:

cd /root

mkdir gluster-config

cd gluster-config
glusterfs-volgen -n images cc1:/glusterfs cc2:/glusterfs cc3:/glusterfs cc4:/glusterfs

NOTE: You must substitute the hostnames in the example with valid hostnames or IP addresses.

NOTE: You can enable RAID (mirroring) functionality in GlusterFS by adding -r1 before -n images. RAID capability
is only possible with an even number of storage locations (i.e. 2/4/6/8/10).
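For example, the mirrored variant of the volgen command above would look like the sketch below. The hostnames are placeholders, and the command is only echoed here, not run; with -r1, bricks are typically paired in the order listed.

```shell
# Sketch only: the replicated variant of the volgen command.
# Bricks are typically mirrored in listed order (cc1+cc2, cc3+cc4 -- example hosts).
CMD="glusterfs-volgen -r1 -n images cc1:/glusterfs cc2:/glusterfs cc3:/glusterfs cc4:/glusterfs"
echo "$CMD"
```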

After generating the configuration files, copy them to each host's live configuration
directory:

# Copy the server component configs

scp cc1-images-export.vol root@cc1:/etc/glusterfs/glusterfsd.vol

scp cc2-images-export.vol root@cc2:/etc/glusterfs/glusterfsd.vol

scp cc3-images-export.vol root@cc3:/etc/glusterfs/glusterfsd.vol

scp cc4-images-export.vol root@cc4:/etc/glusterfs/glusterfsd.vol

# Copy the client component configs

scp images-tcp.vol root@cc1:/etc/glusterfs/glusterfs.vol

scp images-tcp.vol root@cc2:/etc/glusterfs/glusterfs.vol

scp images-tcp.vol root@cc3:/etc/glusterfs/glusterfs.vol

scp images-tcp.vol root@cc4:/etc/glusterfs/glusterfs.vol
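The eight copies above follow a fixed pattern, so they can be expressed as a loop. A dry-run sketch using the guide's example hostnames (each command is printed, not executed; remove the leading `echo` to actually copy):

```shell
NODES="cc1 cc2 cc3 cc4"   # example hostnames from this guide

for node in $NODES; do
  # Server config is per-host; the client config is identical everywhere.
  echo scp "$node-images-export.vol" "root@$node:/etc/glusterfs/glusterfsd.vol"
  echo scp images-tcp.vol "root@$node:/etc/glusterfs/glusterfs.vol"
done
```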

Enable the glusterfsd storage daemon at startup on each host:
chkconfig glusterfsd on

Modify rc.local on each host so that glusterfs mounts the images directory at boot:

echo "glusterfs /var/lib/xen/images" >> /etc/rc.local

Modify the IPTables firewall rules to include an exception for the glusterfsd daemon process:

# The following needs to be added to /etc/sysconfig/iptables

-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 6996 -j ACCEPT
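To open the port without waiting for a reboot, the same rule can be inserted into the running firewall. A dry-run sketch (requires root when actually applied; RH-Firewall-1-INPUT is the default CentOS 5 chain and may differ on your system):

```shell
# Dry run: commands are printed, not executed; drop 'echo' to apply as root.
RULE="-p tcp -m state --state NEW -m tcp --dport 6996 -j ACCEPT"
echo iptables -I RH-Firewall-1-INPUT $RULE
echo service iptables save   # persists the running rules to /etc/sysconfig/iptables
```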

At this point, reboot the host machines and verify that /var/lib/xen/images is working
properly across all nodes. It should show the aggregate free space of /glusterfs across
all hosts. Once storage is verified, proceed to replace FUSE.
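A quick way to check the mount on each node (the mount point comes from the rc.local line above; the script prints a warning instead of failing when run on a machine without the mount):

```shell
MOUNTPOINT=/var/lib/xen/images

if mount | grep -q "$MOUNTPOINT"; then
  # Should report the aggregate free space of all /glusterfs bricks.
  df -h "$MOUNTPOINT"
else
  echo "warning: $MOUNTPOINT is not mounted"
fi
```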

The default FUSE module will not work properly with KVM sparse image files. If you are running RAW
images only, this step can be skipped:

# Stop all running GlusterFS clients

killall glusterfs
# Remove the fuse kernel module

rmmod fuse

# Remove the fuse package

yum remove -y fuse

# Install packages to build new Fuse

yum install -y make gcc gcc-c++ sshfs build-essential flex bison byacc vim wget kernel-xen-devel fuse dkms dkms-fuse openib libibverbs iftop kernel-devel

# Download new Fuse

wget http://ftp.gluster.com/pub/gluster/glusterfs/fuse/fuse-2.7.4glfs11.tar.gz

# Extract new Fuse

tar -xvzf fuse-2.7.4glfs11.tar.gz

# Compile and install new Fuse (change/relink kernel version as needed)

cd fuse-2.7.4glfs11

ln -s /usr/src/kernels/2.6.18-194.*.el5-x86_64 /usr/src/linux

./configure --enable-kernel-module --with-kernel=/usr/src/linux && make && make install

# Reboot hosts.
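After the reboot, sparse-file handling can be sanity-checked on the gluster mount. A sketch (the test file name is arbitrary; it falls back to a temporary directory when the gluster mount is absent, e.g. when trying the script elsewhere):

```shell
DIR=/var/lib/xen/images
[ -d "$DIR" ] || DIR=$(mktemp -d)   # fallback so the check runs anywhere

# Create a 1 GB sparse file: no data is written, only the size is set.
dd if=/dev/zero of="$DIR/sparse-test" bs=1 count=0 seek=1G 2>/dev/null

# First column (allocated blocks) should be near 0 if sparseness is preserved.
ls -ls "$DIR/sparse-test"
rm -f "$DIR/sparse-test"
```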

If you are running your cluster in HA mode and you are experiencing issues with cloning:

# Add the following to /opt/enomalism2/config/agent.cfg


gluster=True
