
To create a GPFS file system

==========================

A sample file system creation


To create a file system called gpfs2 with the following properties:
v The disks for the file system are listed in the file /tmp/gpfs2dsk
v Automatically mount the file system when the GPFS daemon starts (-A yes)
v A block size of 256 KB (-B 256K)
v Mount it on 32 nodes (-n 32)
v Both default replication and the maximum replication for metadata set to two (-m 2 -M 2)
v Default replication for data set to one and the maximum replication for data set to two (-r 1 -R 2)
v Default mount point (-T /gpfs2)
Enter:

mmcrfs /dev/gpfs2 -F /tmp/gpfs2dsk -A yes -B 256K -n 32 -m 2 -M 2 -r 1 -R 2 -T /gpfs2


IBM GPFS cluster installation and configuration in IBM AIX


IBM Spectrum Scale Installation and Configuration
By Narendra Babu Swarna
Updated November 25, 2014 | Published November 25, 2014

Introduction to GPFS
GPFS is a high-performance clustered file system that can be deployed in
shared-disk or shared-nothing distributed parallel modes.

GPFS provides concurrent high-speed file access to applications running on
multiple nodes of a cluster, and it can be used on AIX 5L, Linux, and Windows
platforms. GPFS provides file system storage capabilities and tools for managing
GPFS clusters, and it allows shared access to file systems from remote GPFS
clusters.

Figure 1. Block diagram of GPFS cluster architecture


Required packages and operating system level configuration


The following file sets are mandatory to configure a GPFS cluster:

gpfs.base
gpfs.crypto
gpfs.docs.data
gpfs.ext
gpfs.gskit
gpfs.msg.en_US
The base file sets can be ordered using an IBM Business Partner ID. You can find
more information at the IBM Knowledge Center.

Required GPFS fix packs are available at IBM Fix Central.

Operating system level configuration


All node host names should be resolvable from every node. For example,
consider two nodes with the following details.

Node 1:

IP address: 192.168.1.10

Host name, fully qualified domain name (FQDN): node1.mycompany.com

Short host name: node1

Node 2:

IP address: 192.168.1.11

Host name (FQDN): node2.mycompany.com

Short host name: node2

All these entries must be present in the /etc/hosts file on all the nodes, and at
least one raw disk that is shared between all the nodes must be present.
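
For example, with the two nodes described above, the /etc/hosts entries on each node
might look like this (a minimal sketch using the addresses and names from this article):

192.168.1.10   node1.mycompany.com   node1
192.168.1.11   node2.mycompany.com   node2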

Password-less authentication configuration


The best practice is to configure password-less authentication between all nodes.
This section explains how to configure password-less authentication with an
example. You need to perform the following steps to set up password-less
authentication for Secure Shell (SSH):

Install the SSH server and client in the system.


Generate the public keys using the ssh-keygen command.

[root@localhost ~]# ssh-keygen


Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): (press Enter for an empty passphrase)
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
93:55:2d:d8:d7:a0:14:83:17:3f:d4:fc:46:a7:f6:90 root@localhost

This key will be generated in /root/.ssh/id_rsa.pub. Perform the same steps across
all nodes.

Create a file named authorized_keys in the /root/.ssh/ directory, and change the file
permissions to 644.
Copy the content of id_rsa.pub to authorized_keys to enable password-less
authentication within the local system. Repeat these two steps on all nodes to
enable local authentication.
Copy the id_rsa.pub content from node1 to the authorized_keys file on node2, node3,
node4, and so on. This completes the configuration of password-less
authentication on all the cluster nodes.
Finally, test remote command execution from the local system.
If the commands run successfully, the configuration is correct.
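
For reference, the steps above can be performed with commands similar to the following,
run on node1 (a sketch assuming the root user, default key locations, and a two-node
setup; the /tmp file name is illustrative):

# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys        <-- enable local login
# chmod 644 /root/.ssh/authorized_keys
# scp node2:/root/.ssh/id_rsa.pub /tmp/node2_id_rsa.pub          <-- fetch node2's key
# cat /tmp/node2_id_rsa.pub >> /root/.ssh/authorized_keys
# scp /root/.ssh/authorized_keys node2:/root/.ssh/authorized_keys
# ssh node2 date                                                 <-- should not prompt for a password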

Installing the GPFS package


You need to perform the following steps to install the GPFS package.
Run the smit installp command.
On the Install and Update Software screen, select Install Software.
Specify the location (PATH) where the file sets are copied.
Select the file sets, and accept the license agreements.
Update the GPFS fix packs (if any).
Run the smit update command to update the patches or fix packs.
Provide the path to install the GPFS package and press Enter. Alternatively, you can
use the following command:

install_all_updates -c -d ./ -Y

After successful installation of GPFS, the file sets will be available in
the /usr/lpp/mmfs directory.

Note: Append /usr/lpp/mmfs/bin to the system PATH in /etc/profile for easier use of
the GPFS commands.
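
For example, the following line could be appended to /etc/profile (a minimal sketch;
adjust if you manage the PATH differently in your environment):

export PATH=$PATH:/usr/lpp/mmfs/bin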


Configuration
Cluster configuration
Perform the following steps to configure a GPFS cluster.

Log in to node1 as the root user and check whether the node is already part of
another cluster using the mmlscluster command.

# mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed. Examine previous error messages to determine cause.

Configure the cluster primary node using the mmcrcluster command.

# mmcrcluster -N node1.mycompany.com:manager-quorum -p node1.mycompany.com \
  -r /usr/bin/ssh -R /usr/bin/scp
Sat Jul 20 00:44:47 IST 2013 : mmcrcluster: Processing node node1
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.

Run the mmlscluster command again to confirm that the cluster has been created.

# /usr/lpp/mmfs/bin/mmlscluster
GPFS cluster information
========================
GPFS cluster name: node1.mycompany.com
GPFS cluster id: 7699862183884611001
GPFS UID domain: node1.in.ibm.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:  node1.mycompany.com

 Node  Daemon node name     IP address    Admin node name      Designation
 ----------------------------------------------------------------------------
   1   node1.mycompany.com  192.168.1.10  node1.mycompany.com  quorum-manager

Apply the GPFS license on the primary node using the mmchlicense command.

# mmchlicense server --accept -N node1


The following nodes will be designated as possessing GPFS server licenses:
node1.mycompany.com

mmchlicense: Command successfully completed

Start GPFS on one of the nodes in the GPFS cluster using the mmstartup command. If
the cluster configuration is successful, GPFS will start automatically on all the
nodes.

# mmstartup -a
Sat Jul 20 01:13:26 IST 2013: mmstartup: Starting GPFS ...

Check the status of the cluster using the mmgetstate command.

# mmgetstate -a

 Node number  Node name  GPFS state
------------------------------------
       1      node1      active

Perform the GPFS package installation on the second node (no source compilation is
required on AIX).

On node1, use the mmaddnode command to add node2 to the cluster.

mmaddnode -N node2

Confirm that the node has been added to the cluster using the mmlscluster command.

# mmlscluster
GPFS cluster information
========================
GPFS cluster name: node1.in.ibm.com
GPFS cluster id: 7699862183884611001
GPFS UID domain: node1.in.ibm.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:
------------------------------------------------
  Primary server:    node1.in.ibm.com
  Secondary server:  node2.in.ibm.com

 Node  Daemon node name   IP address    Admin node name    Designation
 ---------------------------------------------------------------------
   1   node1.in.ibm.com   192.168.1.10  node1.in.ibm.com   quorum-manager
   2   node2.in.ibm.com   192.168.1.11  node2.in.ibm.com

Use the mmchcluster command to set node2 as the secondary configuration server.
# mmchcluster -s node2
mmchcluster: GPFS cluster configuration servers:
mmchcluster: Primary server: node1.in.ibm.com
mmchcluster: Secondary server: node2.in.ibm.com
mmchcluster: Command successfully completed

Set the license mode for the node using the mmchlicense command. Use a server
license for this node.

# mmchlicense server --accept -N node2


The following nodes will be designated as possessing GPFS server licenses:
node2.in.ibm.com
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.

Start node2 using the mmstartup command.

# mmstartup -N node2
Sat Jul 20 01:49:44 IST 2013: mmstartup: Starting GPFS ...

Use the mmgetstate command to verify that both nodes are in the active state.

# mmgetstate -a

 Node number  Node name  GPFS state
------------------------------------
       1      node1      active
       2      node2      active

Cluster file system configuration


Perform the following steps to configure the cluster file system.

Create a disk descriptor file, /diskdesc.txt, using the following format:

DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool
hdiskX:::dataAndMetadata::nsd1:

Create the NSDs using the mmcrnsd command.

# mmcrnsd -F /diskdesc.txt

Note: GPFS provides a block-level interface over TCP/IP networks called the NSD
protocol. Whether using the NSD protocol or a direct attachment to the storage area
network (SAN), the mounted file system looks the same to the users and the
application (GPFS transparently handles I/O requests). A shared disk cluster is the
most basic environment. In this configuration, the storage is directly attached to
all the systems in the cluster. The direct connection means that each shared block
device is available concurrently to all of the nodes in the GPFS cluster. Direct
access means that the storage is accessible using a Small Computer System Interface
(SCSI) or other block-level protocol using a SAN disk.

Create a file system using the mmcrfs command.

# mmcrfs /gpfs fs1 -F diskdesc.txt -B 64k

Verify that the file system has been created correctly using the mmlsfs command.

# mmlsfs fs1
flag value description
------------------- ------------------------ -----------------------------------
-f 2048 Minimum fragment size in bytes
-i 512 Inode size in bytes
-I 8192 Indirect block size in bytes
-m 1 Default number of metadata replicas
-M 2 Maximum number of metadata replicas
-r 1 Default number of data replicas
-R 2 Maximum number of data replicas
-j cluster Block allocation type
-D nfs4 File locking semantics in effect
-k all ACL semantics in effect
-n 32 Estimated number of nodes that will
mount file system
-B 65536 Block size
-Q none Quotas enforced
none Default quotas enabled
--filesetdf no Fileset df enabled?
-V 13.23 (3.5.0.7) File system version
--create-time Thu Jul 18 22:09:36 2013 File system creation time
-u yes Support for large LUNs?
-z no Is DMAPI enabled?
-L 4194304 Logfile size
-E yes Exact mtime mount option
-S no Suppress atime mount option
-K whenpossible Strict replica allocation option
--fastea yes Fast external attributes enabled?
--inode-limit 102528 Maximum number of inodes
-P system Disk storage pools in file system
-d nsd1 Disks in file system
--perfileset-quota no Per-fileset quota enforcement
-A yes Automatic mount option
-o none Additional mount options
-T /gpfs Default mount point
--mount-priority 0 Mount priority

Mount the file system using the mmmount command.

# mmmount all -a
Sat Jul 20 02:22:45 IST 2013: mmmount: Mounting file systems ...

Verify that the file system is mounted using the df command.

# df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.00 1.56 22% 10221 3% /
/dev/hd2 5.00 2.08 59% 52098 10% /usr
/dev/hd9var 2.00 1.70 16% 7250 2% /var
/dev/hd3 5.00 4.76 5% 262 1% /tmp
/dev/hd1 0.50 0.50 1% 79 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 25.00 5.72 78% 371887 21% /opt
/dev/fs1 100G 282M 100G 1% /gpfs <-- This is the GPFS file system

The file system is automatically available on both systems.

node1:~ # df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.00 1.56 22% 10221 3% /
/dev/hd2 5.00 2.08 59% 52098 10% /usr
/dev/hd9var 2.00 1.70 16% 7250 2% /var
/dev/hd3 5.00 4.76 5% 262 1% /tmp
/dev/hd1 0.50 0.50 1% 79 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 25.00 5.72 78% 371887 21% /opt
/dev/fs1 100G 282M 100G 1% /gpfs

node1:~ # ssh node2 df -g


Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.00 1.56 22% 10221 3% /
/dev/hd2 5.00 2.08 59% 52098 10% /usr
/dev/hd9var 2.00 1.70 16% 7250 2% /var
/dev/hd3 5.00 4.76 5% 262 1% /tmp
/dev/hd1 0.50 0.50 1% 79 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 25.00 5.72 78% 371887 21% /opt
/dev/fs1 100G 282M 100G 1% /gpfs

node1:~ #

Use the mmdf command to get information on the file system.

# mmdf fs1
disk                disk size  failure holds    holds          free KB            free KB
name                    in KB    group metadata data    in full blocks       in fragments
--------------- ------------- -------- -------- ----- ------------------- ------------------
Disks in storage pool: system (Maximum disk size allowed is 800 GB)
nsd1                104857600       -1 yes      yes      104569408 (100%)           94 ( 0%)
                -------------                          -------------------  -----------------
(pool total)        104857600                            104569408 (100%)           94 ( 0%)

                =============                          ===================  =================
(total)             104857600                            104569408 (100%)           94 ( 0%)

Inode Information
-----------------
Number of used inodes:            4041
Number of free inodes:           98487
Number of allocated inodes:     102528
Maximum number of inodes:       102528

Conclusion
A GPFS cluster is useful in environments that need a highly available file
system. This article provides guidelines for UNIX administrators on installing and
configuring a GPFS cluster environment for AIX 5L cluster nodes, Linux cluster nodes,
Microsoft Windows Server cluster nodes, or heterogeneous clusters of AIX,
Linux, and Windows nodes.

========
Objectives
Verify the system environment
Create a GPFS cluster
Define NSDs
Create a GPFS file system
You will need

Requirements for this lab (not necessarily GPFS minimum requirements):

Two AIX 6.1 or 7.1 operating systems (LPARs)


The installation is very similar to a Linux installation. AIX LPP packages replace the
Linux RPMs, and some of the administrative commands are different.
At least 4 hdisks

Step 1: Verify Environment

Verify nodes properly installed


Check that the operating system level is supported

On the system run oslevel
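
For example (illustrative output; your level will differ):

# oslevel -s
7100-03-05-1524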

Check the GPFS FAQ:
http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html
Is the installed OS level supported by GPFS? Yes No
Is there a specific GPFS patch level required for the installed OS? Yes No
If so what patch level is required? ___________
Verify nodes configured properly on the network(s)
Write the name of Node1: ____________
Write the name of Node2: ____________
From node 1 ping node 2
From node 2 ping node 1

If the pings fail, resolve the issue before continuing.


Verify node-to-node ssh communications (For this lab you will use ssh and scp for
secure remote commands/copy)
On each node create an ssh key. To do this use the ssh-keygen command; if you don't
specify a blank passphrase with -N, then you need to press Enter each time you are
prompted, until you are returned to a prompt, to create a key with no passphrase. The
result should look something like this:
# ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/.ssh'.
Your identification has been saved in /.ssh/id_rsa.
Your public key has been saved in /.ssh/id_rsa.pub.
The key fingerprint is:
7d:06:95:45:9d:7b:7a:6c:64:48:70:2d:cb:78:ed:61
root@node1
On node1 copy the $HOME/.ssh/id_rsa.pub file to $HOME/.ssh/authorized_keys
# cp $HOME/.ssh/id_rsa.pub $HOME/.ssh/authorized_keys
From node1 copy the $HOME/.ssh/id_rsa.pub file from node2 to /tmp/id_rsa.pub
# scp node2:/.ssh/id_rsa.pub /tmp/id_rsa.pub
Add the public key from node2 to the authorized_keys file on node1
# cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys
Copy the authorized key file from node1 to node2
# scp $HOME/.ssh/authorized_keys node2:/.ssh/authorized_keys
To test your ssh configuration ssh as root from node 1 to node1 and node1 to node2
until you are no longer prompted for a password or for addition to the known_hosts
file.

node1# ssh node1 date


node1# ssh node2 date
node2# ssh node1 date
node2# ssh node2 date
Suppress ssh banners by creating a .hushlogin file in the root home directory
# touch $HOME/.hushlogin
Verify the disks are available to the system

For this lab you should have 4 disks available for use, hdiskw-hdiskz.
Use lspv to verify the disks exist
Ensure you see 4 unused disks besides the existing rootvg disks and/or other volume
groups.
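
A sample lspv listing might look like this (physical volume IDs and disk names are
illustrative; unused disks show no volume group):

# lspv
hdisk0          00f6e0a21c9f4b72        rootvg          active
hdisk1          none                    None
hdisk2          none                    None
hdisk3          none                    None
hdisk4          none                    None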

Step 2: Install the GPFS software

On node1

Locate the GPFS software in /yourdir/gpfs/base/


# cd /yourdir/gpfs/base/
Run the inutoc command to create the table of contents, if not done already
# inutoc .
Install the base GPFS code using the installp command
# installp -aXY -d/yourdir/gpfs/base all
Locate the latest GPFS updates in /yourdir/gpfs/fixes/
# cd /yourdir/gpfs/fixes/
Run the inutoc command to create the table of contents, if not done already
# inutoc .
Install the GPFS PTF updates using the installp command
# installp -aXY -d/yourdir/gpfs/fixes all
Repeat Steps 1-7 on node2. On node1 and node2 confirm GPFS is installed using the
lslpp command
# lslpp -L gpfs.\*
The output should look similar to this:

Fileset Level State Type Description (Uninstaller)


----------------------------------------------------------------------------
gpfs.base 4.1.0.3 A F GPFS File Manager
gpfs.docs.data 4.1.0.1 A F GPFS Server Manpages and
Documentation
gpfs.gskit 4.1.0.3 A F GPFS GSKit Cryptography Runtime
gpfs.msg.en_US 4.1.0.3 A F GPFS Server Messages U.S. English
Note 1: The above example is from GPFS V4.1 Express Edition. The important part is
that the base, docs, and msg filesets are present.

If you have GPFS Standard Edition, you should also have the following:

gpfs.ext 4.1.0.3 A F GPFS Extended Features


If you have GPFS Advanced Edition, in addition to gpfs.ext, you should
also have the following entry:
gpfs.crypto 4.1.0.3 A F GPFS Cryptographic Subsystem

Note 2: The gpfs.gnr fileset is used by the Power 775 HPC cluster only, and there is
no need to install this fileset on any other AIX cluster. This fileset does not
ship on the V4.1 media.

Confirm the GPFS binaries are in your $PATH using the mmlscluster command
# mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed. Examine previous error messages to determine cause.
Note: The path to the GPFS binaries is: /usr/lpp/mmfs/bin

Step 3: Create the GPFS cluster

For this exercise the cluster is initially created with a single node. When
creating the cluster make node1 the primary configuration server and give node1 the
designations quorum and manager. Use ssh and scp as the remote shell and remote
file copy commands.

*Primary Configuration server (node1): __________


*Verify fully qualified path to ssh and scp:
ssh path__________
scp path_____________
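
You can confirm the full paths with the which command, for example:

# which ssh
/usr/bin/ssh
# which scp
/usr/bin/scp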

Use the mmcrcluster command to create the cluster


# mmcrcluster -N node1:manager-quorum --ccr-disable -p node1 \
-r /usr/bin/ssh -R /usr/bin/scp
Thu Mar 1 09:04:33 CST 2012: mmcrcluster: Processing node node1
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
Use the mmchlicense command to designate licenses as needed.
Run the mmlscluster command again to see that the cluster was created
# mmlscluster

===============================================================================
| Warning: |
| This cluster contains nodes that do not have a proper GPFS license |
| designation. This violates the terms of the GPFS licensing agreement. |
| Use the mmchlicense command and assign the appropriate GPFS licenses |
| to each of the nodes in the cluster. For more information about GPFS |
| license designation, see the Concepts, Planning, and Installation Guide. |
===============================================================================

GPFS cluster information
========================
  GPFS cluster name:         node1.ibm.com
  GPFS cluster id:           13882390374179224464
  GPFS UID domain:           node1.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    node1.ibm.com
  Secondary server:  (none)

 Node  Daemon node name    IP address  Admin node name  Designation
 -------------------------------------------------------------------
   1   node1.lab.ibm.com   10.0.0.1    node1.ibm.com    quorum-manager

Set the license mode for the node using the mmchlicense command. Use a server
license for this node.
# mmchlicense server --accept -N node1

The following nodes will be designated as possessing GPFS server licenses:


node1.ibm.com
mmchlicense: Command successfully completed

Step 4: Start GPFS and verify the status of all nodes

Start GPFS on all the nodes in the GPFS cluster using the mmstartup command
# mmstartup -a
Check the status of the cluster using the mmgetstate command
# mmgetstate -a

 Node number  Node name  GPFS state
------------------------------------
       1      node1      active

Step 5: Add the second node to the cluster

On node1 use the mmaddnode command to add node2 to the cluster
# mmaddnode -N node2
Confirm the node was added to the cluster using the mmlscluster command
# mmlscluster
Use the mmchcluster command to set node2 as the secondary configuration server
# mmchcluster -s node2
Set the license mode for the node using the mmchlicense command. Use a server
license for this node.
# mmchlicense server --accept -N node2
Start node2 using the mmstartup command
# mmstartup -N node2
Use the mmgetstate command to verify that both nodes are in the active state
# mmgetstate -a

Step 6: Collect information about the cluster

Now we will take a moment to check a few things about the cluster. Examine the
cluster configuration using the mmlscluster command

What is the cluster name? ______________________


What is the IP address of node2? _____________________
What date was this version of GPFS "Built"? ________________

Hint: look in the GPFS log file: /var/adm/ras/mmfs.log.latest
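
One way to find the build date is to search that log for the daemon start-up banner,
for example (the exact log line format varies by GPFS release):

# grep Built /var/adm/ras/mmfs.log.latest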

Step 7: Create NSDs

You will use the 4 hdisks.

Each disk will store both data and metadata


The NSD server field (ServerList) can be left blank if both nodes have direct
access to the shared LUNs.
On node1 create the directory /yourdir/data
Create a disk stanza file /yourdir/data/diskdesc.txt using your favorite text
editor.

The format for the file is:


%nsd: device=DiskName
nsd=NsdName
servers=ServerList
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool

You only need to populate the fields required to create the NSDs; in this example
all NSDs use the default failure group and pool definitions.

%nsd:
device=/dev/hdisk1
nsd=mynsd1
usage=dataAndMetadata

%nsd:
device=/dev/hdisk2
nsd=mynsd2
usage=dataAndMetadata
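
Stanzas for the remaining two disks follow the same pattern, for example (hdisk
numbers and NSD names are illustrative and will vary per system):

%nsd:
device=/dev/hdisk3
nsd=mynsd3
usage=dataAndMetadata

%nsd:
device=/dev/hdisk4
nsd=mynsd4
usage=dataAndMetadata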

Create the NSDs using the mmcrnsd command:

# mmcrnsd -F /yourdir/data/diskdesc.txt

Note: hdisk numbers will vary per system.

Step 8: Collect information about the NSDs

Now collect some information about the NSDs you have created.

Examine the NSD configuration using the mmlsnsd command


What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?)
associated with an NSD? _______

Step 9: Create a file system

Now that there is a GPFS cluster and some NSDs available, you can create a file
system with the following characteristics:
Set the file system block size to 64 KB
Mount the file system at /gpfs
Create the file system using the mmcrfs command
# mmcrfs /gpfs fs1 -F diskdesc.txt -B 64k
Verify the file system was created correctly using the mmlsfs command
# mmlsfs fs1
Will the file system be automatically mounted when GPFS starts? _________________

Mount the file system using the mmmount command


# mmmount all -a
Verify the file system is mounted using the df command
# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 65536 6508 91% 3375 64% /
/dev/hd2 1769472 465416 74% 35508 24% /usr
/dev/hd9var 131072 75660 43% 620 4% /var
/dev/hd3 196608 192864 2% 37 1% /tmp
/dev/hd1 65536 65144 1% 13 1% /home
/proc - - - - - /proc
/dev/hd10opt 327680 47572 86% 7766 41% /opt
/dev/fs1 398929107 398929000 1% 1 1% /gpfs
Use the mmdf command to get information on the file system.
# mmdf fs1
How many inodes are currently used in the file system? ______________
