Introduction to GPFS
GPFS (General Parallel File System) is a high-performance clustered file system that can be deployed in shared-disk or shared-nothing distributed parallel modes.
The GPFS installation media contains the following base file sets:
gpfs.base
gpfs.crypto
gpfs.docs.data
gpfs.ext
gpfs.gskit
gpfs.msg.en_US
The base file sets can be ordered using an IBM Business Partner ID. You can find more information at the IBM Knowledge Center.
Node 1:
IP address: 192.168.1.10
Node 2:
IP address: 192.168.1.11
All these entries must be present in the /etc/hosts file on all the nodes, and at least one raw disk that is shared among all the nodes must be available.
Generate an SSH key pair using the ssh-keygen command; the public key will be written to /root/.ssh/id_rsa.pub. Perform the same step on all nodes.
Create a file named authorized_keys in the /root/.ssh/ directory, and change its permissions to 644.
Copy the contents of id_rsa.pub to authorized_keys to enable password-less authentication within the local system. Repeat step 4 and step 5 on all nodes to enable local authentication.
Copy the id_rsa.pub content from node1 to the authorized_keys file on node2, node3, node4, and so on. This completes the configuration of password-less authentication on all the cluster nodes.
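The steps above can be sketched as follows; ssh-copy-id is used here as a shortcut for the manual copy described above, and the node names follow this article's examples:

```shell
# Generate an RSA key pair without a passphrase
# (writes /root/.ssh/id_rsa and /root/.ssh/id_rsa.pub)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# Enable password-less login to the local system
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 644 /root/.ssh/authorized_keys

# Append this node's public key to every other node's authorized_keys
# (repeat for node3, node4, and so on)
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2

# Verify password-less remote execution
ssh root@node2 date
```

Run the same sequence on every node so that each node can reach all the others without a password.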
Finally, test remote command execution from the local system. If the commands execute successfully, the password-less configuration is correct.
Install the GPFS file sets using the install_all_updates command.
# install_all_updates -c -d ./ -Y
After GPFS is installed successfully, the file sets will be available in the /usr/lpp/mmfs directory.
Note: Append /usr/lpp/mmfs/bin to the system PATH in /etc/profile so that GPFS commands can be run without specifying the full path.
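For example, the following line can be added to /etc/profile:

```shell
# Make GPFS administration commands available without full paths
export PATH=$PATH:/usr/lpp/mmfs/bin
```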
Configuration
Cluster configuration
Perform the following steps to configure a GPFS cluster.
Log in to node1 as the root user and use the mmlscluster command to check whether the system is already part of another cluster.
# mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed. Examine previous error messages to determine cause.
Configure the cluster primary node using the mmcrcluster command.
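A sketch of the mmcrcluster invocation is shown below; the node designations and the ssh/scp settings follow the lab exercise at the end of this document, and the flag syntax is assumed from the GPFS 3.5 release. Adjust the node names for your cluster:

```shell
# Create a single-node cluster with node1 as the primary configuration
# server, designated quorum and manager, using ssh/scp for administration
/usr/lpp/mmfs/bin/mmcrcluster -N node1:quorum-manager -p node1 \
    -r /usr/bin/ssh -R /usr/bin/scp
```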
Run the mmlscluster command again to confirm that the cluster has been created.
# /usr/lpp/mmfs/bin/mmlscluster
GPFS cluster information
========================
GPFS cluster name: node1.mycompany.com
GPFS cluster id: 7699862183884611001
GPFS UID domain: node1.in.ibm.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Apply the GPFS license on the primary node using the mmchlicense command.
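The license step uses the same invocation shown for node1 in the lab exercise at the end of this document:

```shell
# Accept a server license for node1
/usr/lpp/mmfs/bin/mmchlicense server --accept -N node1
```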
Start GPFS on one of the nodes in the GPFS cluster using the mmstartup command. If the cluster configuration is successful, GPFS will start automatically on all the nodes.
# mmstartup -a
Sat Jul 20 01:13:26 IST 2013: mmstartup: Starting GPFS ...
Check the status of the cluster using the mmgetstate command.
# mmgetstate -a
Node number Node name GPFS state
------------------------------------------------------------
1 Node1 active
Perform the GPFS package installation on the second node and compile the source. Then, on node1, add node2 to the cluster using the mmaddnode command.
# mmaddnode -N node2
Confirm that the node has been added to the cluster using the mmlscluster command.
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: node1.in.ibm.com
GPFS cluster id: 7699862183884611001
GPFS UID domain: node1.in.ibm.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Use the mmchcluster command to set node2 as the secondary configuration server.
# mmchcluster -s node2
mmchcluster: GPFS cluster configuration servers:
mmchcluster: Primary server: node1.in.ibm.com
mmchcluster: Secondary server: node2.in.ibm.com
mmchcluster: Command successfully completed
Set the license mode for the node using the mmchlicense command. Use a server
license for this node.
Start node2 using the mmstartup command.
# mmstartup -N node2
Sat Jul 20 01:49:44 IST 2013: mmstartup: Starting GPFS ...
Use the mmgetstate command to verify that both nodes are in the active state.
# mmgetstate -a
Node number Node name GPFS state
-------------------------------------------------------------
1 node1 active
2 node2 active
Create a disk descriptor file (for example, /diskdesc.txt) with one line per disk in the following format:
DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
hdiskX:::dataAndMetadata::nsd1:
Create the NSDs using the mmcrnsd command.
# mmcrnsd -F /diskdesc.txt
Note: GPFS provides a block-level interface over TCP/IP networks called the NSD
protocol. Whether using the NSD protocol or a direct attachment to the storage area
network (SAN), the mounted file system looks the same to the users and the
application (GPFS transparently handles I/O requests). A shared disk cluster is the
most basic environment. In this configuration, the storage is directly attached to
all the systems in the cluster. The direct connection means that each shared block
device is available concurrently to all of the nodes in the GPFS cluster. Direct
access means that the storage is accessible using a Small Computer System Interface
(SCSI) or other block-level protocol using a SAN disk.
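Before the file system can be listed, it must be created. The mmcrfs invocation below matches the lab exercise at the end of this document, and agrees with the mmlsfs output that follows (64 KB block size, default mount point /gpfs):

```shell
# Create file system fs1 from the NSDs in diskdesc.txt,
# with a 64 KB block size and default mount point /gpfs
/usr/lpp/mmfs/bin/mmcrfs /gpfs fs1 -F /diskdesc.txt -B 64k
```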
Verify that the file system has been created correctly using the mmlsfs command.
# mmlsfs fs1
flag value description
------------------- ------------------------ -----------------------------------
-f 2048 Minimum fragment size in bytes
-i 512 Inode size in bytes
-I 8192 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will
mount file system
-B 65536 Block size
-Q none Quotas enforced
none Default quotas enabled
--filesetdf no Fileset df enabled?
-V 13.23 (3.5.0.7) File system version
--create-time Thu Jul 18 22:09:36 2013 File system creation time
-u yes Support for large LUNs?
-z no Is DMAPI enabled?
-L 4194304 Logfile size
-E yes Exact mtime mount option
-S no Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea yes Fast external attributes enabled?
--inode-limit 102528 Maximum number of inodes
-P system Disk storage pools in file system
-d nsd1 Disks in file system
--perfileset-quota no Per-fileset quota enforcement
-A yes Automatic mount option
-o none Additional mount options
-T /gpfs Default mount point
--mount-priority 0 Mount priority
Mount the file system using the mmmount command.
# mmmount all -a
Sat Jul 20 02:22:45 IST 2013: mmmount: Mounting file systems ...
Verify that the file system is mounted using the df command.
# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.00 1.56 22% 10221 3% /
/dev/hd2 5.00 2.08 59% 52098 10% /usr
/dev/hd9var 2.00 1.70 16% 7250 2% /var
/dev/hd3 5.00 4.76 5% 262 1% /tmp
/dev/hd1 0.50 0.50 1% 79 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 25.00 5.72 78% 371887 21% /opt
/dev/fs1 100G 282M 100G 1% /gpfs <-- This is the GPFS file system
The file system will automatically be available on both systems.
node1:~ # df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.00 1.56 22% 10221 3% /
/dev/hd2 5.00 2.08 59% 52098 10% /usr
/dev/hd9var 2.00 1.70 16% 7250 2% /var
/dev/hd3 5.00 4.76 5% 262 1% /tmp
/dev/hd1 0.50 0.50 1% 79 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 25.00 5.72 78% 371887 21% /opt
/dev/fs1 100G 282M 100G 1% /gpfs
node1:~ #
Use the mmdf command to get information on the file system.
# mmdf fs1
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB  group   metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 800 GB)
nsd1                104857600       -1 yes      yes       104569408 (100%)            94 ( 0%)
                -------------                        -------------------- -------------------
(pool total)        104857600                            104569408 (100%)            94 ( 0%)
                =============                        ==================== ===================
(total)             104857600                            104569408 (100%)            94 ( 0%)
Inode Information
-----------------
Number of used inodes: 4041
Number of free inodes: 98487
Number of allocated inodes: 102528
Maximum number of inodes: 102528
Conclusion
A GPFS cluster is useful in environments that require a highly available file system. This article provides guidelines for UNIX administrators on installing and configuring a GPFS cluster environment for AIX 5L cluster nodes, Linux cluster nodes, Microsoft Windows Server cluster nodes, or heterogeneous clusters of AIX, Linux, and Windows nodes.
========
Objectives
Verify the system environment
Create a GPFS cluster
Define NSDs
Create a GPFS file system
You will need
For this lab you should have four disks available for use: hdiskw through hdiskz.
Use lspv to verify that the disks exist.
Ensure that you see four unused disks besides the existing rootvg disks and/or other volume groups.
On node1
If you have GPFS Standard Edition, you should also have the following:
Note 2: The gpfs.gnr fileset is used only by the Power 775 HPC cluster; there is no need to install this fileset on any other AIX cluster. This fileset does not ship on the V4.1 media.
Confirm the GPFS binaries are in your $PATH using the mmlscluster command
# mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed. Examine previous error messages to determine cause.
Note: The path to the GPFS binaries is: /usr/lpp/mmfs/bin
For this exercise the cluster is initially created with a single node. When
creating the cluster make node1 the primary configuration server and give node1 the
designations quorum and manager. Use ssh and scp as the remote shell and remote
file copy commands.
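A sketch of the command this step calls for is shown below; the flag syntax is assumed from the GPFS 3.5 release, so adjust it if your release differs:

```shell
# node1: primary configuration server, with quorum and manager
# designations; ssh/scp as the remote shell and file copy commands
mmcrcluster -N node1:quorum-manager -p node1 -r /usr/bin/ssh -R /usr/bin/scp
```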
===============================================================================
| Warning: |
| This cluster contains nodes that do not have a proper GPFS license |
| designation. This violates the terms of the GPFS licensing agreement. |
| Use the mmchlicense command and assign the appropriate GPFS licenses |
| to each of the nodes in the cluster. For more information about GPFS |
| license designation, see the Concepts, Planning, and Installation Guide. |
===============================================================================
Set the license mode for the node using the mmchlicense command. Use a server
license for this node.
# mmchlicense server --accept -N node1
Start GPFS on all the nodes in the GPFS cluster using the mmstartup command
# mmstartup -a
Check the status of the cluster using the mmgetstate command
# mmgetstate -a
On node1, use the mmaddnode command to add node2 to the cluster
# mmaddnode -N node2
Confirm the node was added to the cluster using the mmlscluster command
# mmlscluster
Use the mmchcluster command to set node2 as the secondary configuration server
# mmchcluster -s node2
Set the license mode for the node using the mmchlicense command. Use a server
license for this node.
# mmchlicense server --accept -N node2
Start node2 using the mmstartup command
# mmstartup -N node2
Use the mmgetstate command to verify that both nodes are in the active state
# mmgetstate -a
Now we will take a moment to check a few things about the cluster. Examine the
cluster configuration using the mmlscluster command
You only need to populate the fields required to create the NSDs; in this example, all NSDs use the default failure group and pool definitions.
%nsd:
device=/dev/hdisk1
nsd=mynsd1
usage=dataAndMetadata
%nsd:
device=/dev/hdisk2
nsd=mynsd2
usage=dataAndMetadata
Now collect some information about the NSDs you have created.
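A sketch of this step: the NSDs are created from the stanza file with mmcrnsd (the file name here is an assumption) and can then be listed with mmlsnsd:

```shell
# Create the NSDs from the stanza file shown above
# (stanza file name is an assumption for this lab)
mmcrnsd -F nsd.stanza

# List all NSDs known to the cluster, with NSD-to-device mapping
mmlsnsd -m
```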
Now that there is a GPFS cluster and some NSDs available, you can create a file system with the following settings:
Set the file system block size to 64 KB
Mount the file system at /gpfs
Create the file system using the mmcrfs command
# mmcrfs /gpfs fs1 -F diskdesc.txt -B 64k
Verify the file system was created correctly using the mmlsfs command
# mmlsfs fs1
Will the file system be mounted automatically when GPFS starts? _________________