
EMC VNX™ Series MPFS over FC and iSCSI Linux Clients Version 6.0

Product Guide
P/N 300-012-182 REV A03

EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103


1-508-435-1000 www.EMC.com

Copyright 2007-2011 EMC Corporation. All rights reserved.

Published September, 2011

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on the EMC Online Support website at Support.EMC.com. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

EMC VNX Series MPFS over FC and iSCSI Linux Clients Version 6.0 Product Guide

Contents

Preface

Chapter 1    Introducing EMC VNX MPFS over FC and iSCSI


Overview of MPFS over FC and iSCSI .......................................... 18
VNX MPFS architectures ................................................................. 19
    MPFS over FC on VNX ............................................................. 19
    MPFS over iSCSI on VNX ........................................................ 21
    MPFS over iSCSI/FC on VNX ................................................. 22
How VNX MPFS works ................................................................... 24

Chapter 2

EMC VNX MPFS Environment Configuration


Configuration roadmap ................................................................... 26
Implementation guidelines .............................................................. 28
    VNX with MPFS recommendations ........................................ 28
    Storage configuration recommendations ............................... 29
    MPFS feature configurations ................................................... 30
MPFS installation and configuration process ............................... 35
    Configuration planning checklist ........................................... 36
Verifying system components ......................................................... 38
    Required hardware components ............................................. 38
    Required software components ............................................... 40
    Verifying configuration ............................................................ 40
    Verifying system requirements ............................................... 41
    Verifying the FC switch requirements (FC configuration) ... 42
    Verifying the IP-SAN VNX for block requirements .............. 43
Setting up the VNX for file .............................................................. 44
Running the VNX Installation Assistant for File/Unified ........... 45
Setting up the file system ................................................................ 46


    File system prerequisites .......................................................... 46
    Creating a file system on a VNX for file ................................. 47
Enabling MPFS for the VNX for file ............................................... 57
Configuring the VNX for block by using CLI commands ........... 58
    Best practices for VNX for block and VNX VG2/VG8
    gateway configurations ............................................................ 58
Configuring the SAN switch and storage ..................................... 59
    Installing the FC switch (FC configuration) ........................... 59
    Zoning the SAN switch (FC configuration) ........................... 59
    Creating a security file .............................................................. 60
    Configuring the VNX for block iSCSI ports ........................... 61
    Configuring Access Logix ........................................................ 63
Configuring and accessing storage ................................................ 67
    Installing the FC driver (FC configuration) ........................... 67
    Adding hosts to the storage group (FC configuration) ......... 68
    Configuring the iSCSI driver for RHEL 4
    (iSCSI configuration) ................................................................ 70
    Configuring the iSCSI driver for RHEL 5-6, SLES 10-11,
    and CentOS 5-6 (iSCSI configuration) .................................... 73
    Adding initiators to the storage group (FC configuration) .. 79
    Adding initiators to the storage group
    (iSCSI configuration) ................................................................ 81
Mounting MPFS ................................................................................ 84
    Examples .................................................................................... 85
Unmounting MPFS ........................................................................... 88

Chapter 3

Installing, Upgrading, or Uninstalling VNX MPFS Software


Installing the MPFS software .......................................................... 90
    Before installing ........................................................................ 90
    Installing the MPFS software from a tar file .......................... 90
    Installing the MPFS software from a CD ................................ 92
    Post-installation checking ........................................................ 93
    Operating MPFS through a firewall ........................................ 94
Upgrading the MPFS software ....................................................... 95
    Upgrading the MPFS software ................................................ 95
    Upgrading the MPFS software with MPFS mounted ............ 97
    Post-installation checking ........................................................ 98
    Verifying the MPFS software upgrade ................................... 99
Uninstalling the MPFS software ................................................... 100


Chapter 4

EMC VNX MPFS Command Line Interface


Using HighRoad disk protection .................................................. 102
    VNX for file and hrdp ............................................................. 102
    hrdp command syntax ............................................................ 103
    Viewing hrdp protected devices ............................................ 106
Using the mpfsctl utility ................................................................ 107
    mpfsctl help ............................................................................. 108
    mpfsctl diskreset ...................................................................... 109
    mpfsctl diskresetfreq ............................................................... 109
    mpfsctl max-readahead .......................................................... 110
    mpfsctl prefetch ....................................................................... 112
    mpfsctl reset ............................................................................. 113
    mpfsctl stats ............................................................................. 114
    mpfsctl version ........................................................................ 117
    mpfsctl volmgt ......................................................................... 117
Displaying statistics ........................................................................ 118
    Using the mpfsstat command ................................................ 118
Displaying MPFS device information .......................................... 120
    Listing devices with the mpfsinq command ........................ 120
    Listing devices with the /proc/mpfs devices file ............... 123
    Displaying mpfs disk quotas ................................................. 123
    Validating a Linux server installation ................................... 125
Setting MPFS parameters .............................................................. 127
Displaying Kernel parameters ...................................................... 127
Setting persistent parameter values ............................................. 129
    mpfs.conf parameters ............................................................. 129
    DirectIO support ...................................................................... 132
    EMCmpfs parameters ............................................................. 134

Appendix A

File Syntax Rules


File syntax rules for creating a site ............................................... 138
    VNX for file with iSCSI ports ................................................. 138
File syntax rules for adding hosts ................................................ 139
    Linux host ................................................................................ 139


Appendix B

Error Messages and Troubleshooting


Linux server error messages ......................................................... 142
Troubleshooting .............................................................................. 143
    Installing MPFS software ....................................................... 143
    Mounting and unmounting a file system ............................. 145
    Miscellaneous issues ............................................................... 149
Known problems and limitations ................................................. 150

Glossary

Index


Figures

Title                                                                      Page
1   MPFS over FC on VNX .................................................................................. 20
2   MPFS over FC on VNX VG2/VG8 gateway ............................................... 20
3   MPFS over iSCSI on VNX ............................................................................. 21
4   MPFS over iSCSI on VNX VG2/VG8 gateway .......................................... 22
5   MPFS over iSCSI/FC on VNX ...................................................................... 23
6   MPFS over iSCSI/FC on VNX VG2/VG8 gateway ................................... 23
7   Configuration roadmap ................................................................................. 27


Tables

Title                                                                      Page
1   Prefetch and read cache requirements ......................................................... 29
2   Arraycommpath and failovermode settings for storage groups .............. 67
3   iSCSI parameters for RHEL 4 using 2.6 kernels ......................................... 70
4   RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6
    iSCSI parameters ............................................................................................ 74
5   Linux server firewall ports ............................................................................ 94
6   Command line interface summary ............................................................ 107
7   MPFS device information ............................................................................ 122
8   MPFS kernel parameters ............................................................................. 128
9   Linux server error messages ....................................................................... 142


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC representative.

Review the EMC Online Support website, http://Support.EMC.com, to ensure that you have the latest versions of the MPFS software and documentation. For software, open Support > Software Downloads and Licensing > Downloads V and then select the necessary software for VNX MPFS from the menu. For documentation, open Support > Technical Documentation > Hardware/Platforms > VNX Series.

For personalized documentation for all VNX platforms, open http://www.emc.com/vnxsupport.
Note: Only registered EMC Online Support users can download the MPFS software.


Audience

This document is part of the EMC VNX MPFS documentation set, and is intended for use by Linux system administrators responsible for installing and maintaining Linux servers. Readers of this document are expected to be familiar with these topics:

• VNX for block or EMC Symmetrix system
• VNX for file
• NFS protocol
• Linux operating system

Operating environments to install the Linux server include:

• Red Hat Enterprise Linux 4, 5, and 6
• SuSE Linux Enterprise Server 10 and 11
• Community ENTerprise Operating System (CentOS) 5 and 6 (iSCSI only)

Related documentation

Related documents include:


• EMC VNX MPFS for Linux Clients Release Notes
• EMC VNX VG2/VG8 Gateway Configuration Setup Guide
• EMC Host Connectivity Guide for Linux
• EMC Host Connectivity Guide for VMware ESX Server
• EMC documentation for HBAs

VNX for block:
• Removing ATF or CDE Software before Installing other Failover Software
• Unisphere online help

Symmetrix:
• Symmetrix product manual

VNX for file:
• EMC VNX Documentation
• Using VNX Multi-Path File System

All of these publications are found on the EMC Online Support website.


EMC Online Support

The EMC Online Support website provides the most up-to-date information on documentation, downloads, interoperability, product lifecycle, target revisions, and bug fixes. As a registered EMC Online Support user, you can subscribe to receive notifications when updates occur.

EMC E-Lab Interoperability Navigator

The EMC E-Lab Interoperability Navigator tool provides access to EMC interoperability support matrices. After logging in to EMC Online Support, go to Support > Interoperability and Product Lifecycle Information > E-Lab Interoperability Navigator.

Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

CAUTION: A caution contains information essential to avoid data loss or damage to the system or equipment.

IMPORTANT: An important notice contains information essential to operation of the software.

WARNING: A warning contains information essential to avoid a hazard that can cause severe personal injury, death, or substantial property damage if you ignore the warning.

DANGER: A danger notice contains information essential to avoid a hazard that will cause severe personal injury, death, or substantial property damage if you ignore the message.


Typographical conventions

EMC uses the following type style conventions in this document:

Normal
Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
• URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold
Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• What the user specifically selects, clicks, presses, or types

Italic
Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example, a new term)
• Variables

Courier
Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold
Used for:
• Specific user input (such as commands)

Courier italic
Used in procedures for:
• Variables on the command line
• User input variables

< >   Angle brackets enclose parameter or variable values supplied by the user
[ ]   Square brackets enclose optional values
|     Vertical bar indicates alternate selections; the bar means "or"
{ }   Braces indicate content that you must specify (that is, x or y or z)
...   Ellipses indicate nonessential information omitted from the example


Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information
For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at:
http://Support.EMC.com

Technical support
For technical support, go to EMC Customer Service on EMC Online Support. To open a service request through EMC Online Support, you must have a valid support agreement. Contact your EMC Customer Support Representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:
techpubcomments@EMC.com


Chapter 1
Introducing EMC VNX MPFS over FC and iSCSI

This chapter provides an overview of EMC VNX MPFS over FC and iSCSI and its architecture. This chapter includes these topics:

Overview of MPFS over FC and iSCSI ............................................ 18
VNX MPFS architectures .................................................................. 19
How VNX MPFS works .................................................................... 24


Overview of MPFS over FC and iSCSI


EMC VNX™ series Multi-Path File System (MPFS) over Fibre Channel (FC) lets Linux, Windows, UNIX, AIX, or Solaris servers access shared data concurrently over FC connections, whereas MPFS over Internet Small Computer System Interface (iSCSI) on VNX lets servers access shared data concurrently over an iSCSI connection. MPFS uses common Internet Protocol Local Area Network (IP LAN) topology to transport data and metadata to and from the servers.

Without MPFS, servers can access shared data by using standard Network File System (NFS) or Common Internet File System (CIFS) protocols. MPFS accelerates data access by providing separate transports for file data (file content) and metadata (control data).

For an FC-enabled server, data is transferred directly between the Linux server and the storage array over an FC Storage Area Network (SAN). For an iSCSI-enabled server, data is transferred over the IP LAN between the Linux server and the storage array for a VNX or VNX VG2/VG8 gateway configuration. Metadata passes through the VNX for file (and the IP network), which includes the network-attached storage (NAS) portion of the configuration.


VNX MPFS architectures


Three basic VNX MPFS architectures are available:

• MPFS over FC on VNX
• MPFS over iSCSI on VNX
• MPFS over iSCSI/FC on VNX

The FC architecture consists of these configurations:
• EMC VNX5300, VNX5500, VNX5700, or VNX7500 over FC
• MPFS over FC on VNX VG2/VG8 gateway

The iSCSI architecture consists of these configurations:
• VNX5300, VNX5500, VNX5700, or VNX7500 over iSCSI
• MPFS over iSCSI on VNX VG2/VG8 gateway

The iSCSI/FC architecture consists of these configurations:
• VNX5300, VNX5500, VNX5700, or VNX7500 over iSCSI/FC
• MPFS over iSCSI/FC on VNX VG2/VG8 gateway

Note: CLARiiON CX3 and CX4 systems are supported in VNX VG2/VG8 gateway configurations as shown in Figure 2 on page 20, Figure 4 on page 22, and Figure 6 on page 23.

MPFS over FC on VNX

The MPFS over FC on VNX architecture consists of:

• VNX with MPFS: A NAS device configured with a VNX and MPFS software
• VNX for block or EMC Symmetrix system
• Linux servers with MPFS software connected to a VNX through the IP LAN, and to the VNX for block or Symmetrix system by using FC architecture

Figure 1 on page 20 shows the MPFS over FC on VNX configuration where the Linux servers are connected to a VNX series (VNX5300, VNX5500, VNX5700, or VNX7500) by using an IP switch and one or more FC or FC over Ethernet (FCoE) switches. A VNX series is a VNX for file and VNX for block in a single cabinet. In a smaller configuration of one or two servers, the servers are connected directly to the VNX series without the use of FC or FCoE switches.


Figure 1    MPFS over FC on VNX
(Diagram: servers connect to the VNX series through an IP switch for NFS/CIFS and MPFS metadata, and through an FC switch/FCoE switch for MPFS data over FC.)

Figure 2 on page 20 shows the MPFS over FC on VNX VG2/VG8 gateway configuration. In this figure, the Linux servers are connected to a VNX for block or a Symmetrix system by using a VNX VG2/VG8 gateway, IP switch, and optional FC switch or FCoE switch.
Figure 2    MPFS over FC on VNX VG2/VG8 gateway
(Diagram: servers connect through an IP switch to the VNX VG2/VG8 gateway for NFS/CIFS and MPFS metadata, and through an FC switch/FCoE switch to the VNX for block or Symmetrix system for MPFS data over FC.)


MPFS over iSCSI on VNX

The MPFS over iSCSI on VNX architecture consists of:

• VNX with MPFS: A NAS device configured with a VNX and MPFS software
• VNX for block or Symmetrix system
• Linux server with MPFS software connected to a VNX through the IP LAN, and to the VNX for block or Symmetrix system by using iSCSI architecture

Figure 3 on page 21 shows the MPFS over iSCSI on VNX configuration where the Linux servers are connected to a VNX series by using one or more IP switches.
Figure 3    MPFS over iSCSI on VNX
(Diagram: servers connect to the VNX series through one IP switch for NFS/CIFS and MPFS metadata, and through a second IP switch for MPFS data over iSCSI.)


Figure 4 on page 22 shows the MPFS over iSCSI on VNX VG2/VG8 gateway configuration where the Linux servers are connected to a VNX for block or Symmetrix system by using a VNX VG2/VG8 gateway and one or more IP switches.
Figure 4    MPFS over iSCSI on VNX VG2/VG8 gateway
(Diagram: servers connect through one IP switch to the VNX VG2/VG8 gateway for NFS/CIFS and MPFS metadata, and through a second IP switch for MPFS data over iSCSI; the gateway connects to the VNX for block or Symmetrix system over FC.)

MPFS over iSCSI/FC on VNX

The MPFS over iSCSI/FC on VNX architecture consists of:

• VNX with MPFS: A NAS device that is configured with a VNX and MPFS software
• VNX for block or Symmetrix system
• Linux server with MPFS software connected to a VNX through the IP LAN, and to the VNX for block or Symmetrix system by using iSCSI/FC architecture

Figure 5 on page 23 shows the MPFS over iSCSI/FC on VNX configuration where the Linux servers are connected to a VNX series by using one or more IP switches and an FC switch or FCoE switch.


Figure 5    MPFS over iSCSI/FC on VNX
(Diagram: servers connect to the VNX series through IP switches for NFS/CIFS, MPFS metadata, and MPFS data over iSCSI, and through an FC switch/FCoE switch for MPFS data over FC.)

Figure 6 on page 23 shows the MPFS over iSCSI/FC on VNX VG2/VG8 gateway configuration where the Linux servers are connected to a VNX for block or Symmetrix system by using a VNX VG2/VG8 gateway, one or more IP switches, and an FC switch or FCoE switch.
Figure 6    MPFS over iSCSI/FC on VNX VG2/VG8 gateway
(Diagram: servers connect through IP switches to the VNX VG2/VG8 gateway for NFS/CIFS, MPFS metadata, and MPFS data over iSCSI, and through an FC switch/FCoE switch for MPFS data over FC; the gateway connects to the VNX for block or Symmetrix system over FC.)


How VNX MPFS works


Although called a file system, the VNX MPFS is neither a new nor a modified format for storing files. Instead, MPFS interoperates with and uses the standard NFS and CIFS protocols to enforce access permissions. MPFS uses a protocol called File Mapping Protocol (FMP) to exchange metadata between the Linux server and the VNX for file. All requests unrelated to file I/O pass directly to the NFS/CIFS layer. The MPFS layer intercepts only the open, close, read, and write system calls.

When a Linux server intercepts a file-read call, the server sends a request to the VNX for file asking for the file's location. The VNX for file responds with a list of file extents, which the Linux server then uses to read the file data directly from the disk.

When a Linux server intercepts a file-write call, the server asks the VNX for file to allocate blocks on disk for the file. The VNX for file allocates the space in contiguous extents and sends the extent list to the Linux server. The Linux server then writes data directly to disk, informing the VNX for file when finished, so that the VNX for file can permit other Linux servers to access the file.

The remaining chapters describe how to install, manage, and tune Linux servers. The Using VNX Multi-Path File System technical module, available on EMC Online Support at http://Support.EMC.com, provides information on the MPFS commands.
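The read path described above can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not the actual FMP wire protocol or kernel code; the Extent, FileServer, and MpfsClient names are invented here:

```python
from dataclasses import dataclass

@dataclass
class Extent:
    file_offset: int   # offset within the file
    disk_offset: int   # offset on the shared block device
    length: int        # bytes in this extent

class FileServer:
    """Stands in for the VNX for file: it owns the file-to-disk extent map."""
    def __init__(self, extent_map):
        self.extent_map = extent_map

    def get_extents(self, path):
        # FMP-style metadata request: "where does this file live on disk?"
        return self.extent_map[path]

class MpfsClient:
    """Stands in for the Linux server: reads file data directly from disk."""
    def __init__(self, server, disk):
        self.server = server
        self.disk = disk   # bytes stand-in for the SAN block device

    def read_file(self, path):
        data = bytearray()
        for ext in self.server.get_extents(path):        # metadata over IP LAN
            start = ext.disk_offset
            data += self.disk[start:start + ext.length]  # data over FC/iSCSI
        return bytes(data)

# A toy "disk" with two non-contiguous extents holding one file's content.
disk = b"....hello.....world..."
server = FileServer({"/mnt/mpfs/greeting": [Extent(0, 4, 5), Extent(5, 14, 5)]})
client = MpfsClient(server, disk)
print(client.read_file("/mnt/mpfs/greeting"))  # b'helloworld'
```

The point of the sketch is the division of labor: the server hands back only an extent list (metadata), and the client performs the bulk data transfer itself.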


Chapter 2
EMC VNX MPFS Environment Configuration

This chapter presents a high-level overview of configuring and installing EMC VNX MPFS. Topics include:

Configuration roadmap .................................................................... 26
Implementation guidelines .............................................................. 28
MPFS installation and configuration process ................................ 35
Verifying system components ......................................................... 38
Setting up the VNX for file .............................................................. 44
Running the VNX Installation Assistant for File/Unified ........... 45
Setting up the file system ................................................................. 46
Enabling MPFS for the VNX for file ............................................... 57
Configuring the VNX for block by using CLI commands ........... 58
Configuring the SAN switch and storage ...................................... 59
Configuring and accessing storage ................................................. 67
Mounting MPFS ................................................................................ 84
Unmounting MPFS ........................................................................... 88


Configuration roadmap
Figure 7 on page 27 shows the roadmap for configuring and installing the EMC VNX MPFS over FC and iSCSI architectures for both FC and iSCSI environments. The roadmap contains the topics representing sequential phases of the configuration and installation process. The descriptions of each phase, which follow, contain an overview of the tasks required to complete the process, and a list of related documents for more information.


Figure 7    Configuration roadmap
(Diagram: the sequential phases of the roadmap are Implementation guidelines; MPFS installation and configuration process; Verifying system components; Setting up the VNX for file; Running the VNX Installation Assistant for File/Unified; Setting up the file system; Enabling MPFS for the VNX for file; Configuring the VNX for block by using CLI commands; Configuring the SAN switch and storage; Configuring and accessing storage; Mounting MPFS.)


Implementation guidelines
The MPFS implementation guidelines are valid for all MPFS installations.

VNX with MPFS recommendations

These recommendations are described in detail in the EMC VNX MPFS Applied Best Practices Guide, which can be found at http://Support.EMC.com:

• MPFS is optimized for large I/O transfers and may be useful for workloads with average I/O sizes as small as 16 KB. However, MPFS has been shown conclusively to improve performance for I/O sizes of 128 KB and greater.

• For best MPFS performance, in most cases, configure the VNX for file volumes by using a volume stripe size of 256 KB.

• EMC PowerPath is supported, but is not recommended because path failover is built into the Linux server. When using PowerPath, the performance of the MPFS system is lower. Primus article emc165953 contains details on using PowerPath and MPFS.

• When MPFS is started, 16 threads are run, which is the default number of MPFS threads. The maximum number of threads is 128, which is also the best practice for MPFS. If system performance is slow, gradually increase the number of threads allotted for the Data Mover to improve system performance. Add threads conservatively, as the Data Mover allocates 16 KB of memory to accommodate each new thread. The optimal number of threads depends on the network configuration, the number of Linux servers, and the workload. Using VNX Multi-Path File System provides procedures to adjust the thread count. This technical module is available with the EMC Documentation on EMC Online Support.
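The per-thread memory cost in the last recommendation is easy to put in numbers. A small illustrative calculation (the function name is ours, not part of the MPFS software):

```python
# Illustrative arithmetic only: the guideline above states that the Data
# Mover allocates 16 KB per MPFS thread, starts 16 threads by default,
# and allows a maximum of 128.
DEFAULT_THREADS = 16
MAX_THREADS = 128
KB_PER_THREAD = 16

def data_mover_thread_memory_kb(threads: int) -> int:
    """KB of Data Mover memory consumed by the given MPFS thread count."""
    if not 1 <= threads <= MAX_THREADS:
        raise ValueError("thread count must be between 1 and 128")
    return threads * KB_PER_THREAD

print(data_mover_thread_memory_kb(DEFAULT_THREADS))  # 256  (KB at the default)
print(data_mover_thread_memory_kb(MAX_THREADS))      # 2048 (KB, 2 MB at the cap)
```

Even at the 128-thread maximum the overhead is only about 2 MB, which is why the limiting factor in practice is workload and network configuration rather than memory.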

Data Mover capacity

The EMC Support Matrix provides Data Mover capacity guidelines. After logging in to EMC Online Support, go to Support > Interoperability and Product Lifecycle Information > Interoperability Matrices.


Linux server configuration

All Linux servers using the MPFS software require:

• At least one FC connection or an iSCSI initiator connection to a SAN switch or a VNX for block or Symmetrix system
• Network connections to the Data Mover
Note: When deploying MPFS over iSCSI on a VNX5300, VNX5500, VNX5700, VNX7500 or a VNX VG2/VG8 gateway configuration based on the iSCSI-enabled VNX for block, the VNX for block iSCSI target is used.

Storage configuration recommendations

Linux servers read and write directly from a VNX for block. Reading and writing has several implications:

• Use the VNX Operating Environment (VNX OE) for best performance in new MPFS configurations.
• Unmount MPFS from the Linux server before changing any storage device or switch configuration.

Table 1 on page 29 lists the prefetch and read cache requirements.


Table 1    Prefetch and read cache requirements

Prefetch requirements   Read cache   Notes
Modest                  50-100 MB    80% of the systems fall under this category.
Heavy                   250 MB       Requests greater than 64 KB and sequential reads from many LUNs expected over 300 MB/s.
Extremely heavy         1 GB         120 or more drives reading in parallel.


MPFS feature configurations


These sections describe the configurations for MPFS features.

iSCSI CHAP authentication

The Linux server with MPFS software and the VNX for block support the Challenge Handshake Authentication Protocol (CHAP) for iSCSI network security. CHAP provides a method for the Linux server and VNX for block to authenticate each other through an exchange of a shared secret (a security key that is similar to a password), which is typically a string of 12 to 16 bytes.

CAUTION If CHAP security is not configured for the VNX for block, any computer connected to the same IP network as the VNX for block iSCSI ports can read from or write to the VNX for block.

CHAP has two variants, one-way and reverse CHAP authentication:

In one-way CHAP authentication, CHAP sets up the accounts that the Linux server uses to connect to the VNX for block. The VNX for block authenticates the Linux server.
In reverse CHAP authentication, the VNX for block authenticates the Linux server, and the Linux server also authenticates the VNX for block.

Because CHAP secrets are shared between the Linux server and VNX for block, the CHAP secrets are configured the same on both the Linux server and VNX for block. The CX-Series iSCSI Security Setup Guide provides detailed information regarding CHAP and is located on the EMC Online Support website.
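On the Linux side, when the open-iscsi initiator is used, the shared secrets are typically entered in /etc/iscsi/iscsid.conf. The following is only an illustrative sketch; the account names and secrets are placeholders, not values from this guide, and must match whatever is configured on the VNX for block:

```ini
# One-way CHAP: the VNX for block authenticates the Linux server.
node.session.auth.authmethod = CHAP
# Initiator account name and 12-16 byte shared secret (placeholders):
node.session.auth.username = linuxhost01
node.session.auth.password = secretABC12345

# Reverse (mutual) CHAP: the Linux server also authenticates the
# VNX for block. These values must match the target-side settings:
node.session.auth.username_in = vnxtarget01
node.session.auth.password_in = secretXYZ67890
```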


VMware ESX (optional)

VMware is a software suite for optimizing and managing IT environments through virtualization technology. MPFS supports Linux server guest operating systems running on a VMware ESX server.

The VMware ESX server is a robust, production-proven virtualization layer that abstracts processor, memory, storage, and networking resources into multiple virtual machines (software representations of a physical machine) running side-by-side on the same server. VMware is not tied to any operating system, giving customers a bias-free choice of operating systems and software applications. All operating systems supported by VMware are supported with both iSCSI and NFS protocols for basic connectivity. This allows several instances of similar and different guest operating systems to run as virtual machines on one physical machine.

To run a Linux server guest operating system on a VMware ESX server, the configuration must meet these requirements:

Run a supported version of the Linux operating system.
Have the VNX for block supported HBA hardware and driver installed.
Connect to each SP in each VNX for block directly or through a switch. Each SP must have an IP connection.
Connect to a TCP/IP network with both SPs in the VNX for block.

Currently, the VMware ESX server has these limitations:

Booting the guest Linux server off iSCSI is not supported.
PowerPath is not supported.
Virtual machines that run the Linux server guest operating system must use iSCSI to access the VNX for block.
Store the virtual machine on a VMware datastore (VNX for block or Symmetrix system) and access it by the VMware ESX server by using either FC (ESX server versions 3.0.1, 3.0.2, or 3.5.1) or iSCSI (ESX server version 3.5.1).

The EMC Host Connectivity Guide for VMware ESX Server provides information on how to configure iSCSI initiator ports and how VMware operates in a Linux environment. The VMware website, http://www.vmware.com, provides more information.


Rainfinity Global Namespace

The EMC Rainfinity Global Namespace (GNS) Appliance complements the Nested Mount File System (NMFS) by providing a global namespace across Data Movers and simplifying mount point management of network shared files.

A global namespace organizes file shares across servers into a coherent directory structure: a virtual hierarchy of folders and links to shares or exports, designed to ease access to distributed data. End users no longer need to know the server names and shared folders where the physical data resides. Instead, they mount only the namespace and navigate its structure as though they were navigating a directory structure on a physical server. The Rainfinity GNA works behind the scenes to provide Linux servers with the data they need from multiple physical servers or shared folders.

The Rainfinity GNA has these benefits:

Leverages the MPFS architecture to provide a scalable NFS global namespace for Linux servers with an iSCSI interface.
Creates a global view of file shares, simplifying the management of complex NAS and file server environments.
Provides a single mount point for MPFS NAS shares, so as the file server environment grows and changes, users and applications do not have to experience the disruption of remounting.
Supports 50,000 physical file shares in a single global namespace. Each Rainfinity GNA cluster supports 30,000 server connections, with up to two clusters deployed to share a single global namespace.

The use of NAS devices and file servers increases storage management complexity. The Rainfinity GNS removes the dependency on physical storage location and makes it easier to consolidate, replace, and deploy NAS devices and file servers without disrupting server access. MPFS is a VNX for file feature that allows heterogeneous servers with MPFS software to concurrently access, directly over FC or iSCSI channels, data stored on a VNX for block or Symmetrix system. MPFS NFS is a referral-based protocol that does not require Rainfinity to be permanently in-band 100 percent of the time; as a result, the protocols are very scalable.


The EMC Rainfinity Global Namespace Appliance Getting Started Guide provides information on how the GNS solution works with MPFS, how to configure GNS when supporting Linux servers, and how to mount a Linux server to the GNS application.

Hierarchical volume management

Hierarchical volume management (HVM) allows the user to cache more information about the file-to-disk mapping. It is particularly useful when using large files with random access I/O patterns and with file systems built on a small stripe.

A hierarchical volume is a tree-based structure composed of File Mapping Protocol (FMP) volumes. Each volume is either an FMP_VOLUME_DISK, FMP_VOLUME_SLICE, FMP_STRIPE, or FMP_VOLUME_META. The root of the tree and the intermediate nodes are slices, stripes, or metas. The leaves of the tree are disks. Every volume in a hierarchical volume description has a definition that includes an ID. By convention, a volume must be defined before it can be referenced by an ID. One consequence of this convention is that the volumes in the tree must be listed in depth-first search order.

Because of limitations on the transport medium, the description of an especially dense volume tree may require more than one RPC packet. Therefore, a hierarchical volume description may be incomplete, in which case the Linux server with MPFS software must send subsequent requests to obtain descriptions of the remaining volumes. Because the volume structure could change, for example owing to automatic file system extension, each response contains a cookie that changes when the volume tree changes. A Linux server issuing a request for volume information must return the latest cookie, and if the volume tree has changed, the server will return a status of FMP_VOLUME_CHANGED. In this case, the Linux server must get the whole hierarchical volume description from the beginning by reissuing its mount request.
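The depth-first ordering convention can be sketched in a few lines (illustrative Python, not EMC source code; the tuple layout is an assumption, not the FMP wire format): every volume must appear in the description before any parent node references its ID.

```python
# Illustrative check of the FMP depth-first ordering rule.
def depth_first_ok(volumes):
    """volumes: list of (vol_id, kind, child_ids) in the order received.
    Returns True if every child ID is defined before it is referenced."""
    defined = set()
    for vol_id, kind, children in volumes:
        if any(child not in defined for child in children):
            return False  # a parent referenced a volume not yet defined
        defined.add(vol_id)
    return True

# A stripe over two disks must list its leaf disks first:
description = [
    (1, "FMP_VOLUME_DISK", []),
    (2, "FMP_VOLUME_DISK", []),
    (3, "FMP_STRIPE", [1, 2]),
]
```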


MPFS changes the FMP protocol, which allows the FMP server to describe the volume slices, stripes, and concatenations used to create a logical volume on which a file system is stored. Linux servers communicate with the FMP server to request maps that allow a Linux server to read a file directly from the disk. These maps are described as offsets and lengths on a physical disk. Because most file systems are created on striped volumes, from the standpoint of Linux server communication, the maps are broken up into many extents. Each time the file crosses a stripe boundary, the FMP server must send a different ID to represent the physical volume, and a new offset and length on that volume.

With HVM, when the user mounts a file system, the Linux server requests a description of the logical volumes (the striping pattern). The Linux server now describes file maps as locations within the logical volume, and is responsible for noticing when a file crosses a stripe boundary and dispatching the I/Os to the proper physical disk. This change allows the protocol to be more efficient by using less space to represent the maps. Furthermore, it allows the Linux server to represent the extent map in a more compact form, conserving Linux server memory and CPU resources.
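The stripe-boundary bookkeeping that HVM moves onto the Linux client can be sketched as follows (illustrative Python under assumed parameters, not the MPFS client code): a logical extent on a striped volume is split into per-disk extents wherever it crosses a stripe chunk boundary.

```python
# Illustrative sketch: map a logical (offset, length) extent on a striped
# volume to (disk_index, disk_offset, chunk_length) extents.
def split_extent(offset, length, depth, ndisks):
    """depth is the stripe depth in bytes (e.g. 262144 for a 256 KB
    stripe); ndisks is the number of member disks."""
    extents = []
    while length > 0:
        stripe_no = offset // depth        # which chunk of the stripe
        disk = stripe_no % ndisks          # round-robin member disk
        within = offset % depth            # position inside the chunk
        chunk = min(depth - within, length)
        # full stripe rows already laid down on this disk, plus offset:
        disk_offset = (stripe_no // ndisks) * depth + within
        extents.append((disk, disk_offset, chunk))
        offset += chunk
        length -= chunk
    return extents
```

With a 256 KB depth over six disks, a 512 KB read at offset 0 becomes one 256 KB extent on each of the first two disks, with no further round trip to the FMP server.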


MPFS installation and configuration process


The MPFS configuration process involves performing tasks on various system components in a specific order. MPFS can be installed and configured manually or with the use of the VNX Installation Assistant for File/Unified as described in Running the VNX Installation Assistant for File/Unified on page 45.
Note: This document provides guidelines for installing and configuring MPFS with several options. Disregard steps that do not pertain to your environment.

To install and configure MPFS:

1. Run the VNX Installation Assistant for File/Unified to help set up the MPFS system (for MPFS-enabled systems only), which:
   a. Provisions unused disks.
   b. Creates or extends the MPFS storage pool.
   c. Configures the VNX for block iSCSI ports (only for iSCSI ports).
   d. Starts the MPFS service.
   e. Installs the MPFS client software on multiple Linux hosts.
   f. Configures the Linux host parameters and sysctl parameters.
   g. Mounts the MPFS-enabled NFS exports.

2. Collect installation and configuration planning information and complete the checklist:
   a. Collect the IP network addresses, FC port addresses, and VNX for block or Symmetrix system information.
   b. Map the Ethernet and TCP/IP network topology.
   c. Map the FC zoning topology.
   d. Map the virtual storage area network (VSAN) topology.


3. Install the MPFS software manually (on a native or VMware1 hosted Linux operating system):
   a. Install the HBA driver (for FC configuration).
   b. Install and configure iSCSI (for iSCSI configuration).2
   c. Start the iSCSI service (for iSCSI configuration).
   d. Install the MPFS software.
   e. Verify the MPFS software configuration.

Configuration planning checklist

Collect information before beginning the MPFS installation and configuration process.

For an FC and iSCSI configuration:

SP A IP address .....................................................................................
SP A login name ....................................................................................
SP A password.......................................................................................
SP B IP address ......................................................................................
SP B login name.....................................................................................
SP B password .......................................................................................
Zoning for Data Movers.......................................................................
First Data Mover LAN blade IP address or Data Mover IP address ...............................................................................................
Second Data Mover LAN blade IP address or Data Mover IP address ...............................................................................................
Control Station IP address or CS address..........................................
LAN IP address (same as LAN Data Movers)..................................
Linux server IP address on LAN ........................................................
VSAN name ...........................................................................................
VSAN number (ensure the VSAN number is not in use) ...............

1. VMware ESX (optional) on page 31 provides information. 2. Installing VNX iSCSI Host Components provides details.


For an FC configuration:

SP A FC port assignment or FC ports ................................................
SP B FC port assignment or FC ports.................................................
FC switch name.....................................................................................
FC switch password .............................................................................
FC switch port IP address....................................................................
Zoning for each FC HBA port.............................................................
Zoning for each FC director ................................................................

For an iSCSI configuration:

VNX with MPFS target IP address .....................................................
VNX for block or Symmetrix system target IP address ..................
Linux server IP address for iSCSI Gigabit connection ....................
Initiator and Target Challenge Handshake Authentication Protocol (CHAP) password (optional) ..............................................


Verifying system components


MPFS environments require standard VNX for file hardware and software, with the addition of a few components that are specific to either FC or iSCSI configurations. Before setting up an MPFS environment, verify that each of these components is in place and functioning normally. Each hardware and software component is discussed in these sections.

Required hardware components


MPFS over FC on VNX configuration

This section lists the MPFS configurations with the required hardware components. The hardware components for an MPFS over FC on VNX configuration are:

A VNX series connected to an FC network and SAN
An IP switch that connects the VNX series to the servers
An FC switch or FCoE switch with an HBA for each Linux server

MPFS over FC on VNX on page 19 provides more information.

MPFS over FC on VNX VG2/VG8 gateway configuration

The hardware components for an MPFS over FC on VNX VG2/VG8 gateway configuration are:

A VNX VG2/VG8 gateway connected to an FC network and SAN
A fabric-connected VNX for block or Symmetrix system, with available LUNs
An IP switch that connects the VNX VG2/VG8 gateway to the servers
An FC switch or FCoE switch with an HBA for each Linux server

MPFS over FC on VNX on page 19 provides more information.

MPFS over iSCSI on VNX configuration

The hardware components for an MPFS over iSCSI on VNX configuration are:

A VNX series
One or two IP switches that connect the VNX series to the servers

MPFS over iSCSI on VNX on page 21 provides more information.


MPFS over iSCSI on VNX VG2/VG8 gateway configuration

The hardware components for an MPFS over iSCSI on VNX VG2/VG8 gateway configuration are:

A VNX VG2/VG8 gateway connected to an FC network and SAN
A fabric-connected VNX for block or Symmetrix system with available LUNs
One or two IP switches that connect the VNX VG2/VG8 gateway and the VNX for block or Symmetrix system to the servers

MPFS over iSCSI on VNX on page 21 provides more information.

MPFS over iSCSI/FC on VNX configuration

The hardware components for an MPFS over iSCSI/FC on VNX configuration are:

A VNX for file
One or two IP switches and an FC switch or FCoE switch that connect the VNX for file to the servers

MPFS over iSCSI/FC on VNX on page 22 provides more information.

MPFS over iSCSI/FC on VNX VG2/VG8 gateway configuration

The hardware components for an MPFS over iSCSI/FC on VNX VG2/VG8 gateway configuration are:

A VNX VG2/VG8 gateway connected to an FC network and SAN
A fabric-connected VNX for block or Symmetrix system with available LUNs
One or two IP switches and an FC switch or FCoE switch that connect the VNX VG2/VG8 gateway and the VNX for block or Symmetrix system to the servers

MPFS over iSCSI/FC on VNX on page 22 provides more information.

Configuring Gigabit Ethernet ports

Two Gigabit Ethernet NICs, or a multiport NIC with two available ports, connected to isolated IP networks or subnets are recommended for each Linux server for iSCSI. For each Linux server for FC, one NIC is required for NFS and FMP traffic. For maximum performance, use:

One port for the connection between the Linux server and the Data Mover for MPFS metadata transfer and NFS traffic
One port for the connection between the Linux server and the same subnet as the iSCSI discovery address, dedicated to data transfer


Note: The second NIC for iSCSI must be on the same subnet as the discovery address.

Configuring and Managing VNX Networking provides detailed information for setting up network connections. The document is available on the EMC Online Support website.
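The subnet requirement in the note above can be sanity-checked with a few lines of standard-library Python (a hedged sketch; the addresses and the /24 prefix are made-up examples, not values from this guide):

```python
# Check that the iSCSI data NIC shares a subnet with the discovery address.
import ipaddress

def same_subnet(nic_ip, discovery_ip, prefix_len):
    net = ipaddress.ip_network(f"{discovery_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(nic_ip) in net

# Example: a /24 iSCSI data subnet.
# same_subnet("10.1.2.40", "10.1.2.10", 24)  -> True  (same subnet)
# same_subnet("10.1.3.40", "10.1.2.10", 24)  -> False (different subnet)
```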

Required software components

Software components required for an MPFS configuration:

NAS software version that supports either FC or iSCSI configurations on Linux platforms (VNX OE software version 7.0.x.x supports RHEL 6 or SuSE 11)
Linux operating system and kernel version that supports HBAs or an iSCSI initiator
MPFS software version 5.0 or later
iSCSI initiator

Note: The EMC E-Lab Interoperability Matrix lists the latest versions of the Red Hat Enterprise Linux, SuSE Linux Enterprise Server, and CentOS operating systems.

Related documentation

The EMC VNX MPFS for Linux Clients Release Notes, available on the EMC Online Support website, provide a complete list of EMC supported operating system versions.

Verifying configuration

Verify that each of the previously mentioned system components is in place and functioning normally. If all components are operational, MPFS installation and configuration process on page 35 provides more information. If any component is not operational, Error Messages and Troubleshooting on page 141 provides more information. Configure NFS and start the services on the VNX for file that are used for MPFS connectivity.


Related documentation

These technical modules, available on the EMC Online Support website, provide additional information:

Configuring and Managing EMC VNX Networking
Managing VNX Volumes and File Systems Manually
Configuring Standbys on VNX

Verifying system requirements

This section describes system requirements for an MPFS environment.

CAUTION Ensure that the systems used for MPFS do not contain both VNX for block and Symmetrix system LUNs. MPFS does not support a mixed storage environment.

VNX configurations used within an MPFS environment must be designed for MPFS. These models are supported:

VNX5300
VNX5500
VNX5700
VNX7500

All VNX and VNX VG2/VG8 gateway configurations must meet these requirements:

Have file systems built on disks of only one type (not a mixture of disk drives):
For VNX configurations - serial-attached SCSI (SAS) or nearline SAS (NL-SAS)
For VNX VG2/VG8 gateway configurations - Fibre Channel (FC) (for CLARiiON CX3, CX4, or Symmetrix system), SAS, NL-SAS, or Advanced Technology Attachment (ATA)
Use disk volumes from the same storage system. A file system spanning multiple storage systems is not supported.
Do not use RAID groups that span across two different system enclosures.
LUNs must be built by using RAID 1, RAID 3, RAID 5, or RAID 6 only.
Management LUNs must be built by using 4+1 RAID 5 only.

Have write cache enabled.
Use EMC Access Logix.
Run VNX OE with NAS 7.0.x or later.

Symmetrix systems used within an MPFS environment must be designed for MPFS. These models are supported:

Symmetrix DMX series Enterprise Storage Platform (ESP)
Symmetrix VMAX series
Symmetrix 8000 series

All Symmetrix systems must meet these requirements:

Use the correct version of the microcode. For microcode release updates, either:
Contact your EMC Customer Support Representative
Check the EMC E-Lab Interoperability Navigator
Have the Symmetrix FC/SCSI port flags properly configured for MPFS. Set the Avoid_Reset_Broadcast (ARB) flag for each port that is connected to a Linux server.
Do not use a file system that spans across two different system enclosures.

Verifying the FC switch requirements (FC configuration)

To set up the FC switch:

1. Install the FC switch.
2. Verify that the host bus adapter (HBA) driver is loaded on the Linux server; for example, check that the driver module appears in the lsmod output and that the HBA ports are listed under /sys/class/fc_host.
3. Connect cables from each HBA FC port to a switch port.
4. Verify the HBA connection to the switch by checking the LEDs for the switch port connected to the HBA port.
5. Configure zoning for the switch as described in Zoning the SAN switch (FC configuration) on page 59.

Note: Configure zoning as single initiator, which means that each HBA port has its own zone. Each zone has only one HBA port.


Verifying the IP-SAN VNX for block requirements

The MPFS over FC on VNX and iSCSI environment with VNX for block configurations requires:

For a VNX configuration - a VNX5300, VNX5500, VNX5700, or VNX7500.
For a VNX VG2/VG8 gateway configuration - a VNX for block, Symmetrix DMX, Symmetrix VMAX, or Symmetrix 8000 system.
The same cabling as shared VNX for block cabling.
Access Logix LUN masking by using iSCSI to present all managed LUNs to the Linux servers.

The Linux server configuration is the same as that of a standard Linux server with an iSCSI connection. Linux servers are load-balanced across the VNX for block iSCSI ports for performance improvement and for protection against single-port and Ethernet cable problems. Port 0 iSCSI through port 3 iSCSI on each storage processor is connected to the iSCSI network.
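The load balancing described above amounts to spreading the Linux servers round-robin across the eight iSCSI ports (ports 0 through 3 on each SP). A minimal sketch of that assignment (illustrative only; the server and port names are placeholders):

```python
# Round-robin assignment of Linux servers to VNX for block iSCSI ports.
def assign_ports(servers):
    ports = [(sp, p) for sp in ("SPA", "SPB") for p in range(4)]
    return {s: ports[i % len(ports)] for i, s in enumerate(servers)}
```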


Setting up the VNX for file


The VNX System Software Installation Guide, located on the EMC Online Support website, provides information on how to set up the VNX for file.


Running the VNX Installation Assistant for File/Unified


The VNX Installation Assistant for File/Unified is a single-instance, pre-configuration tool for a factory-installed (unconfigured) VNX or for opening the EMC Unisphere software. The VNX Installation Assistant for File/Unified helps set up an MPFS system (for MPFS-supported systems only) by doing the following:

Provisions storage for MPFS use
Creates an MPFS storage pool
Configures VNX for block iSCSI ports (only for iSCSI ports)
Starts the MPFS service on the VNX for file system
Push-installs the MPFS client software on multiple Linux hosts
Configures Linux host parameters
Mounts MPFS-enabled NFS exports

The VNX Installation Assistant for File/Unified is available from the VNX Tools page on EMC Online Support: open Support > Product and Diagnostic Tools > VNX Tools > VNX Startup Assistant and download the appropriate version.


Setting up the file system


This section describes the prerequisites for file systems and the procedure for creating a file system.

File system prerequisites

File system prerequisites are guidelines to be met before building a file system. A properly built file system must:

Use disk volumes from the same VNX for block.

Note: Do not use a file system that spans two system enclosures. A file system spanning multiple systems is not supported, even if the systems are of the same type, such as VNX for block or Symmetrix system.

Have file systems built on disks of only one type (not a mixture of disk drives):
For VNX configurations - serial-attached SCSI (SAS) or nearline SAS (NL-SAS)
For VNX VG2/VG8 gateway configurations - Fibre Channel (FC) (for CLARiiON CX3, CX4, or Symmetrix system), SAS, NL-SAS, or Advanced Technology Attachment (ATA)

For best MPFS performance, in most cases, configure the volumes by using a volume stripe size of 256 KB. The EMC VNX MPFS Applied Best Practices Guide provides detailed performance-related information.

In a Symmetrix system environment, ensure that the Symmetrix FC/SCSI port flag settings are properly configured for MPFS; in particular, set the ARB flag. The EMC Customer Support Representative configures these settings.


Creating a file system on a VNX for file

This section describes how to configure, create, mount, and export file systems. Ensure that LUNs for the new file system are created optimally for MPFS. All LUNs must:

Use the same RAID type
Have the same number of spindles in each RAID group
Contain spindles of the same type and speed

In addition, ensure that all LUNs do not share spindles with:

Other LUNs in the same file system
Another file system heavily utilized by high-I/O applications

Before creating the LUNs, ensure that the total usable capacity of all the LUNs within a single file system does not exceed 16 TB. The maximum tested number of LUNs supported per file system in MPFS configurations is 256. Ensure that the LUNs are accessible by the Data Movers through LUN masking, switch zoning, and VSAN settings.

Use this procedure to build or mount the MPFS file system on the VNX for file:

1. Log in to the Control Station as NAS administrator.

2. Before building the file system, type the nas_disk command to return a list of unused disks by using this command syntax:
$ nas_disk -list |grep n | more

For example, type:


$ nas_disk -list |grep n | more


The output shows all disks not in use:


id  inuse  sizeMB  storageID-devID      type   name  servers
7   n      466747  APM00065101342-0010  CLSTD  d7    1,2
8   n      466747  APM00065101342-0011  CLSTD  d8    1,2
9   n      549623  APM00065101342-0012  CLSTD  d9    1,2
10  n      549623  APM00065101342-0014  CLSTD  d10   1,2
11  n      549623  APM00065101342-0016  CLSTD  d11   1,2
12  n      549623  APM00065101342-0018  CLSTD  d12   1,2
13  n      549623  APM00065101342-0013  CLSTD  d13   1,2
14  n      549623  APM00065101342-0015  CLSTD  d14   1,2
15  n      549623  APM00065101342-0017  CLSTD  d15   1,2
16  n      549623  APM00065101342-0019  CLSTD  d16   1,2
17  n      549623  APM00065101342-001A  CLSTD  d17   1,2
18  n      549623  APM00065101342-001B  CLSTD  d18   1,2
19  n      549623  APM00065101342-001C  CLSTD  d19   1,2
20  n      549623  APM00065101342-001E  CLSTD  d20   1,2
21  n      549623  APM00065101342-0020  CLSTD  d21   1,2
22  n      549623  APM00065101342-001D  CLSTD  d22   1,2
23  n      549623  APM00065101342-001F  CLSTD  d23   1,2
24  n      549623  APM00065101342-0021  CLSTD  d24   1,2
25  n      549623  APM00065101342-0022  CLSTD  d25   1,2
26  n      549623  APM00065101342-0024  CLSTD  d26   1,2
27  n      549623  APM00065101342-0026  CLSTD  d27   1,2
28  n      549623  APM00065101342-0023  CLSTD  d28   1,2
29  n      549623  APM00065101342-0025  CLSTD  d29   1,2
30  n      549623  APM00065101342-0027  CLSTD  d30   1,2


3. Display all disks by using this command syntax:


$ nas_disk -list

For example, type:


$ nas_disk -list

Output:
id  inuse  sizeMB  storageID-devID      type   name       servers
1   y      11263   APM00065101342-0000  CLSTD  root_disk  1,2
2   y      11263   APM00065101342-0001  CLSTD  root_disk  1,2
3   y      2047    APM00065101342-0002  CLSTD  d3         1,2
4   y      2047    APM00065101342-0003  CLSTD  d4         1,2
5   y      2047    APM00065101342-0004  CLSTD  d5         1,2
6   y      2047    APM00065101342-0005  CLSTD  d6         1,2
7   n      466747  APM00065101342-0010  CLSTD  d7         1,2
8   n      466747  APM00065101342-0011  CLSTD  d8         1,2
9   n      549623  APM00065101342-0012  CLSTD  d9         1,2
10  n      549623  APM00065101342-0014  CLSTD  d10        1,2
11  n      549623  APM00065101342-0016  CLSTD  d11        1,2
12  n      549623  APM00065101342-0018  CLSTD  d12        1,2
13  n      549623  APM00065101342-0013  CLSTD  d13        1,2
14  n      549623  APM00065101342-0015  CLSTD  d14        1,2
15  n      549623  APM00065101342-0017  CLSTD  d15        1,2
16  n      549623  APM00065101342-0019  CLSTD  d16        1,2
17  n      549623  APM00065101342-001A  CLSTD  d17        1,2
18  n      549623  APM00065101342-001B  CLSTD  d18        1,2
19  n      549623  APM00065101342-001C  CLSTD  d19        1,2
20  n      549623  APM00065101342-001E  CLSTD  d20        1,2
21  n      549623  APM00065101342-0020  CLSTD  d21        1,2
22  n      549623  APM00065101342-001D  CLSTD  d22        1,2
23  n      549623  APM00065101342-001F  CLSTD  d23        1,2
24  n      549623  APM00065101342-0021  CLSTD  d24        1,2
25  n      549623  APM00065101342-0022  CLSTD  d25        1,2
26  n      549623  APM00065101342-0024  CLSTD  d26        1,2
27  n      549623  APM00065101342-0026  CLSTD  d27        1,2
28  n      549623  APM00065101342-0023  CLSTD  d28        1,2
29  n      549623  APM00065101342-0025  CLSTD  d29        1,2
30  n      549623  APM00065101342-0027  CLSTD  d30        1,2

The first stripe, with alternating SP ownership A,B,A,B,A,B, is displayed in bold text, and the second stripe, with alternating SP ownership B,A,B,A,B,A, is displayed with a shaded background. The two stripes (A, B, A) and (B, A, B) are both in RAID groups X, Y, and Z.


Note: Use Navicli or EMC Navisphere Manager to determine which LUNs are on SP A and SP B.

4. Find the names of file systems mounted on all servers by using this command syntax:
$ server_df ALL

For example, type:


$ server_df ALL

Output:
server_2 :
Filesystem          kbytes      used      avail       capacity  Mounted on
S2_Shgvdm_FS1       831372216   565300    825719208   1%        /root_vdm_5/S2_Shgvdm_FS1
root_fs_vdm_vdm01   114592      7992      106600      7%        /root_vdm_5/.etc
S2_Shg_FS2          831372216   19175496  812196720   2%        /S2_Shg_mnt2
S2_Shg_FS1          1662746472  25312984  1637433488  2%        /S2_Shg_mnt1
root_fs_common      153         5280      10088       34%       /.etc_common
root_fs_2           2581        80496     177632      31%       /
server_3 :
Filesystem          kbytes      used      avail       capacity  Mounted on
root_fs_vdm_vdm02   114592      7992      106600      7%        /root_vdm_6/.etc
S3_Shgvdm_FS1       831372216   4304736   827067480   1%        /root_vdm_6/S3_Shgvdm_FS1
S3_Shg_FS1          831373240   11675136  819698104   1%        /S3_Shg_mnt1
S3_Shg_FS2          831373240   4204960   827168280   1%        /S3_Shg_mnt2
root_fs_commo       15368       5280      10088       34%       /.etc_common
root_fs_3           258128      8400      249728      3%        /
vdm01 :
Filesystem          kbytes      used      avail       capacity  Mounted on
S2_Shgvdm_FS1       831372216   5653008   825719208   1%        /S2_Shgvdm_FS1
vdm02 :
Filesystem          kbytes      used      avail       capacity  Mounted on
S3_Shgvdm_FS1       831372216   4304736   827067480   1%        /S3_Shgvdm_FS1

Find the names of file systems mounted on a specific server by using this command syntax:
$ server_df <server_name>

where: <server_name> = name of the Linux server


For example, type:


$ server_df vdm02

Output:
vdm02 :
Filesystem      kbytes     used     avail      capacity  Mounted on
S3_Shgvdm_FS1   831372216  4304736  827067480  1%        /S3_Shgvdm_FS1

5. Find the names of existing file systems that are not mounted by using this command syntax:
$ nas_fs -list

For example, type:


$ nas_fs -list

Output:
id   inuse type acl volume name                server
1    n     1    0   10     root_fs_1
2    y     1    0   12     root_fs_2           2
3    n     1    0   14     root_fs_3
4    n     1    0   16     root_fs_4
5    n     1    0   18     root_fs_5
6    n     1    0   20     root_fs_6
7    n     1    0   22     root_fs_7
8    n     1    0   24     root_fs_8
9    n     1    0   26     root_fs_9
10   n     1    0   28     root_fs_10
11   n     1    0   30     root_fs_11
12   n     1    0   32     root_fs_12
13   n     1    0   34     root_fs_13
14   n     1    0   36     root_fs_14
15   n     1    0   38     root_fs_15
16   y     1    0   40     root_fs_common      2
17   n     5    0   73     root_fs_ufslog
18   n     5    0   76     root_panic_reserve
19   n     5    0   77     root_fs_d3
20   n     5    0   78     root_fs_d4
21   n     5    0   79     root_fs_d5
22   n     5    0   80     root_fs_d6
25   y     1    0   116    S2_Shg_FS2          2
221  y     1    0   112    S2_Shg_FS1          2
222  n     1    0   1536   S3_Shg_FS1
223  n     1    0   1537   S3_Shg_FS2
384  y     1    0   3026   testdoc_fs2         2


6. Find the names of volumes already mounted by using this command syntax:
$ nas_volume -list

For example, type:


$ nas_volume -list

Part of the output is similar to this:


id    inuse  type  acl  name             cltype  clid
1     y      4     0    root_disk        0       1-34,52
2     y      4     0    root_ldisk       0       35-51
3     y      4     0    d3               1       77
4     y      4     0    d4               1       78
5     y      4     0    d5               1       79
6     y      4     0    d6               1       80
7     n      1     0    root_dos         0
8     n      1     0    root_layout      0
9     y      1     0    root_slice_1     1       10
10    y      3     0    root_volume_1    2       1
11    y      1     0    root_slice_2     1       12
12    y      3     0    root_volume_2    2       2
13    y      1     0    root_slice_3     1       14
14    y      3     0    root_volume_3    2       3
15    y      1     0    root_slice_4     1       16
16    y      3     0    root_volume_4    2       4
...
1518  y      3     0    Meta_S2vdm_FS1   2       229
1527  y      3     0    Meta_S2_FS1      2       235

7. Create the first stripe by using this command syntax:


$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_set>,...

where:
<name> = name of the new stripe volume
<stripe_size> = depth of the stripe in bytes
<volume_set> = set of disks to stripe across

For example, to create a stripe volume named s2_stripe1 with a depth of 262144 bytes (256 KB) by using disks d9, d14, d11, d16, d17, and d22, type:
$ nas_volume -name s2_stripe1 -create -Stripe 262144 d9,d14,d11,d16,d17,d22


Output:
id          = 135
name        = s2_stripe1
acl         = 0
in_use      = False
type        = stripe
stripe_size = 262144
volume_set  = d9,d14,d11,d16,d17,d22
disks       = d9,d14,d11,d16,d17,d22

Note: For best MPFS performance, in most cases, configure the file volumes by using a volume stripe size of 256 KB. Detailed performance-related information is available in the EMC VNX MPFS Applied Best Practices Guide.
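As a rough illustration of what a 256 KB stripe depth means, the following sketch models simple round-robin striping: consecutive 256 KB chunks of the volume land on successive member disks. This is a simplified model for intuition only, not EMC's on-disk layout.

```python
# Illustrative round-robin striping model (an assumption, not EMC's
# implementation): map a byte offset in the striped volume to the
# member disk and the offset within that member.

STRIPE_DEPTH = 262144  # 256 KB, the depth used in the example above

def locate(offset, members):
    """Return (member, member_offset) for a byte offset in the volume."""
    chunk = offset // STRIPE_DEPTH           # which 256 KB chunk of the volume
    member = members[chunk % len(members)]   # round-robin member choice
    stripe_row = chunk // len(members)       # full stripe rows before this chunk
    member_offset = stripe_row * STRIPE_DEPTH + offset % STRIPE_DEPTH
    return member, member_offset

disks = ["d9", "d14", "d11", "d16", "d17", "d22"]
print(locate(0, disks))                 # ('d9', 0)
print(locate(262144, disks))            # ('d14', 0)
print(locate(6 * 262144 + 10, disks))   # ('d9', 262154)
```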

8. Create the second stripe by using this command syntax:


$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_set>,...

where:
<name> = name of the new stripe volume
<stripe_size> = depth of the stripe in bytes
<volume_set> = set of disks to stripe across

For example, to create a stripe volume named s2_stripe2 with a depth of 262144 bytes (256 KB) by using disks d13, d10, d15, d12, d18, and d19, type:
$ nas_volume -name s2_stripe2 -create -Stripe 262144 d13,d10,d15,d12,d18,d19

Output:
id          = 136
name        = s2_stripe2
acl         = 0
in_use      = False
type        = stripe
stripe_size = 262144
volume_set  = d13,d10,d15,d12,d18,d19
disks       = d13,d10,d15,d12,d18,d19


9. Create the metavolume by using this command syntax:


$ nas_volume -name <name> -create -Meta <volume_name>

where:
<name> = name of the new metavolume
<volume_name> = names of the volumes to combine

For example, to create a metavolume s2_meta1 from volumes s2_stripe1 and s2_stripe2, type:
$ nas_volume -name s2_meta1 -create -Meta s2_stripe1,s2_stripe2

Output:
id         = 137
name       = s2_meta1
acl        = 0
in_use     = False
type       = meta
volume_set = s2_stripe1, s2_stripe2
disks      = d9,d14,d11,d16,d17,d22,d13,d10,d15,d12,d18,d19

10. Create the file system by using this command syntax:


$ nas_fs -name <name> -create <volume_name>

where:
<name> = name of the new file system
<volume_name> = name of the metavolume

For example, to create a file system s2fs1 on metavolume s2_meta1, type:
$ nas_fs -name s2fs1 -create s2_meta1

Output:
id                      = 33
name                    = s2fs1
acl                     = 0
in_use                  = False
type                    = uxfs
worm                    = compliance
worm_clock              = Thu Mar 6 16:26:09 EST 2008
worm Max Retention Date = Fri April 18 12:30:40 EST 2008
volume                  = s2_meta1
pool                    =
rw_servers              =
ro_servers              =
rw_vdms                 =


ro_vdms                 =
auto_ext                = no, virtual_provision=no
stor_devs               = APM00065101342-0012,APM00065101342-0015,APM00065101342-0016,APM00065101342-0019,APM00065101342-001A,APM00065101342-001D,APM00065101342-0013,APM00065101342-0014,APM00065101342-0017,APM00065101342-0018,APM00065101342-001B,APM00065101342-001C
disks                   = d9,d14,d11,d16,d17,d22,d13,d10,d15,d12,d18,d19

11. Mount the file system by using this command syntax:


$ server_mount <movername> <fs_name> <mount_point>

where:
<movername> = name of the Data Mover
<fs_name> = name of the file system to mount
<mount_point> = name of the mount point

For example, to mount file system s2fs1 on Data Mover server_2 at mount point /s2fs1, type:
$ server_mount server_2 s2fs1 /s2fs1

Output:
server_2 : done

12. Export the file system by using this command syntax:


$ server_export <mover_name> -Protocol nfs -name <name> -option <options> <pathname>

where:
<mover_name> = name of the Data Mover
<name> = alias for the <pathname>
<options> = options to include
<pathname> = path of the mount point created

For example, to export a file system on Data Mover server_2 with the pathname alias ufs1 and mount point path /ufs1, type:
$ server_export server_2 -P nfs -name ufs1 /ufs1

Output:
server_2: done


Related documentation

These documents provide more information on building MPFS and are available on the EMC Online Support website:

- EMC VNX Command Line Interface Reference for File
- Configuring and Managing VNX Networking
- Managing VNX Volumes and File Systems Manually
- Using VNX Multi-Path File System


Enabling MPFS for the VNX for file


Start MPFS on the VNX for file. Use this command syntax:
$ server_setup <movername> -Protocol mpfs -option <options>

where:
<movername> = name of the Data Mover
<options> = options to include

For example, type:
$ server_setup server_2 -Protocol mpfs -option start

Output:
server_2: done

Note: Start MPFS on the same Data Mover on which the file system was exported using NFS.


Configuring the VNX for block by using CLI commands


This section presents an overview of configuring the VNX for block array ports in VNX VG2/VG8 gateway configurations. Use site-specific parameters for these steps, and use the VNX for block CLI commands to configure the array ports for a VNX VG2/VG8 gateway configuration.

Best practices for VNX for block and VNX VG2/VG8 gateway configurations

Configure the discovery addresses (IP addresses) and enabled targets for each Linux server so that the load is equally balanced across all the iSCSI target ports on the system, to achieve maximum performance and availability. Balancing the load across all ports enables speeds up to 4 x 10 Gb/s per storage processor. If one of the iSCSI target ports fails, the other three remain operational: one-fourth of the Linux servers fail over to the native NFS or CIFS protocol, while three-fourths continue operating at the higher speeds attainable through iSCSI.

VNX for block discovery sessions reveal paths to all iSCSI ports on each storage processor. The ports are described to the iSCSI initiators as individual targets, and each of these connections creates another session. The maximum number of initiator sessions or hosts per storage processor depends on the VNX for block configuration. To increase the number of achievable Linux servers for a VNX VG2/VG8 gateway configuration, disable access on each server to as many as three out of four iSCSI targets per storage processor. Ensure that the enabled iSCSI targets (VNX for block iSCSI ports) match the storage group definition.

For VNX VG2/VG8 gateway configurations, Access Logix LUN masking is used to present all VNX for file managed LUNs to the Linux servers; LUNs that are not VNX for file LUNs are protected from the iSCSI initiators. A separate storage group is created for MPFS initiators, and all VNX for file LUNs that are not control LUNs are added to this group. Enable at least one port from each SP for each Linux server in this separate storage group.
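The balancing recommendation above can be sketched as a simple round-robin assignment of servers to the four iSCSI target ports on each SP. Port numbers and server names below are hypothetical; this is an illustration of the idea, not an EMC tool.

```python
# Sketch: spread Linux servers evenly across the four iSCSI target
# ports per storage processor, so each port carries about the same load.

def assign_targets(servers, ports_per_sp=4):
    """Return {server: {"SP A": port, "SP B": port}} assigned round-robin."""
    plan = {}
    for i, server in enumerate(servers):
        port = i % ports_per_sp          # cycle through ports 0..3
        plan[server] = {"SP A": port, "SP B": port}
    return plan

servers = ["mpfs01", "mpfs02", "mpfs03", "mpfs04", "mpfs05"]
plan = assign_targets(servers)
print(plan["mpfs02"])  # {'SP A': 1, 'SP B': 1}
print(plan["mpfs05"])  # {'SP A': 0, 'SP B': 0}
```

With this scheme, losing one target port affects only the quarter of the servers enabled on it, matching the failover behavior described above.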


In a VNX VG2/VG8 gateway environment, iSCSI initiator names are used to provide the path in the storage group for the Linux server to access the iSCSI targets. Unique, known iSCSI names are required for using Access Logix software.

Configuring the SAN switch and storage


This section describes how to configure the SAN switch and provides configuration information for VNX for block and Symmetrix systems.

Installing the FC switch (FC configuration)

To set up the FC switch:

1. Install the FC switch (if not already installed).
2. Connect cables from each HBA FC port to a switch port.
3. Verify the HBA connection to the switch by checking the LEDs for the switch port that is connected to the HBA port.
Note: Configure zoning as single initiator, which means that each HBA port will have its own zone. Each zone has only one HBA port.

Zoning the SAN switch (FC configuration)

To configure and zone the FC switch:

1. Record all attached port WWNs.
2. Create a zone for each FC HBA port and its associated FC target.
Note: Configure the VNX for block so that each target is zoned to an SP A and SP B port. Configure the Symmetrix system so that it is zoned to a single FC Director or FC Adapter (FA).


Creating a security file

A VNX for block does not accept a secure CLI command unless the user who issues the command has a valid user account. Configure a Navisphere 6.X security file to issue secure CLI commands from the server. Secure CLI commands otherwise require user credentials (or the password prompt) on each command line; these are not needed if a security file is created.

To create a security file:

1. Log in to the Control Station as NAS administrator.
2. Create a security file by using this naviseccli command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> -AddUserSecurity -scope 0 -user nasadmin -password nasadmin

where: <hostname:IP address> = name of the VNX for file or IP address of the VNX for block For example, type:
$ /nas/sbin/naviseccli -h 172.24.107.242 -AddUserSecurity -scope 0 -user nasadmin -password nasadmin

Output: This command produces no system response. When the command has finished executing, only the command line prompt appears.

3. Verify that the security file was created correctly by using this command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> getagent

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

If the security file was not created correctly or cannot be found, an error message is displayed:
Security file not found. Already removed or check -secfilepath option.


4. If an error message is displayed, repeat step 2 and step 3 to create the security file.

Configuring the VNX for block iSCSI ports

This section describes how to set up the VNX for block in an iSCSI configuration:
Note: The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

1. Configure iSCSI target hostname SP A and port IP address 0 on the system by using this naviseccli command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 0 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block
<port IP address> = IP address of a named logical element mapped to a port on a Data Mover; each interface assigns an IP address to the port
<subnet mask> = 32-bit address mask used in IP to identify the bits of an IP address used for the subnet address
<gateway IP address> = IP address of the machine through which network traffic is routed

For example, type:
$ /nas/sbin/naviseccli -h 172.24.107.242 connection -setport -sp a -portid 0 -address 172.241.107.1 -subnetmask 255.255.255.0 -gateway 172.241.107.2

Output:
It is recommended that you consult with your Network Manager to determine the correct settings before applying these changes. Changing the port properties may disrupt iSCSI traffic to all ports on this SP. Initiator configuration changes may be necessary to regain connections. Do you really want to perform this action (y/n)? y


SP: A
Port ID: 0
Port WWN: iqn.1992-04.com.emc:cx.apm00065101342.a0
iSCSI Alias: 2147.a0
IP Address: 172.24.107.242
Subnet Mask: 255.255.255.0
Gateway Address: 172.241.107.2
Initiator Authentication: false

Note: If the iSCSI target is not configured (by replying with n), the command line prompt appears.

2. Continue for SP A ports 1-3 and SP B ports 0-3 by using this command syntax:


$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 1 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 2 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 3 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 0 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 1 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 2 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 3 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>


The outputs for SP A ports 1-3 and SP B ports 0-3 are the same as for SP A port 0, with specific port information for each port.

Note: Depending on the system configuration, additional storage processors (SP C, SP D, and so on), each containing ports 0-3, can exist.
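The eight setport commands follow a regular pattern, so they can be generated mechanically. The sketch below does this; the address scheme (consecutive host addresses on one subnet) and the default values are assumptions for illustration only.

```python
# Sketch: generate the naviseccli connection -setport command lines for
# SP A and SP B, ports 0-3. Addresses here are illustrative placeholders.

def setport_commands(sp_host, base="172.241.107.", first=1,
                     mask="255.255.255.0", gw="172.241.107.2"):
    """Return the eight command strings, one per SP port."""
    cmds = []
    n = first
    for sp in ("a", "b"):
        for port in range(4):
            cmds.append(
                f"/nas/sbin/naviseccli -h {sp_host} connection -setport "
                f"-sp {sp} -portid {port} -address {base}{n} "
                f"-subnetmask {mask} -gateway {gw}")
            n += 1  # next consecutive host address (an assumed scheme)
    return cmds

cmds = setport_commands("172.24.107.242")
print(len(cmds))   # 8
print(cmds[0])
```

Review the generated lines against your site's addressing plan before running any of them.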

Configuring Access Logix


Setting failovermode and the arraycommpath

This section describes how to set up an Access Logix configuration, create storage groups, add LUNs, set failovermode, and set the arraycommpath for the MPFS client.

The naviseccli failovermode command enables or disables the type of trespass needed for the failover software used by the MPFS client. This method of setting failovermode works for VNX for block with Access Logix only.

The naviseccli arraycommpath command enables or disables a communication path from the VNX for file to the VNX for block. This command is needed to configure a VNX for block when LUN 0 is not configured. This method of setting arraycommpath works for VNX for block with Access Logix only.

CAUTION Changing the failovermode setting may force the VNX for block to reboot. Changing the failovermode to the wrong value makes the storage group inaccessible to any connected server.
Note: Failovermode and arraycommpath should both be set to 1 for MPFS. If EMC PowerPath is enabled, failovermode must be set to 1.

To set and verify the failovermode and arraycommpath settings:

1. Set failovermode to 1 (VNX for file only) by using this naviseccli command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin failovermode 1

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

For example, type:


$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin failovermode 1

Output:
WARNING: Previous Failovermode setting will be lost! DO YOU WISH TO CONTINUE (y/n)? y

Note: Setting or not setting failovermode produces no system response. The system just displays the command line prompt.

2. Verify the failovermode setting (VNX for file only) by using this naviseccli command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin failovermode

For example, type:


$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin failovermode

Output:
Current failovermode setting is: 1

3. Set arraycommpath to 1 (VNX for file only) by using this naviseccli command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin arraycommpath 1

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

For example, type:
$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin arraycommpath 1

Output:
WARNING: Previous arraycommpath setting will be lost! DO YOU WISH TO CONTINUE (y/n)? y

Note: Setting or not setting arraycommpath produces no system response. The system just displays the command line prompt.


4. Verify the arraycommpath setting (VNX for file only) by using this naviseccli command syntax:
$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin arraycommpath

For example, type:


$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin arraycommpath

Output:
Current arraycommpath setting is: 1

To discover the current settings of failovermode or arraycommpath, you can also use the port -list -failovermode or port -list -arraycommpath commands.
Note: The outputs of these commands provide more detail than just the failovermode and arraycommpath settings and may be multiple pages in length.

Creating storage groups and adding LUNs

This section describes how to create storage groups, add LUNs to the storage groups, and configure the storage groups for the MPFS client. The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file:
Note: Specify the hostname as the name of the VNX for file, for example server_2.

1. Create a storage group by using this navicli command syntax:


$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -create -gname MPFS_Clients

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:
$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -create -gname MPFS_Clients


Output: This command produces no system response. When the command has finished executing, only the command line prompt appears.

2. Add LUNs to the storage group by using this navicli command syntax:
$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -addhlu -gname MPFS_Clients -hlu 0 -alu 16

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:
$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -addhlu -gname MPFS_Clients -hlu 0 -alu 16

Output: This command produces no system response. When the command has finished executing, only the command line prompt appears.

3. Continue adding LUNs to the rest of the storage group:
$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -addhlu -gname MPFS_Clients -hlu 1 -alu 17

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -addhlu -gname MPFS_Clients -hlu 1 -alu 17

Output: This command produces no system response. When the command has finished executing, only the command line prompt appears.
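Because each additional LUN only changes the -hlu/-alu pair, the command lines can be generated in a loop. The sketch below assumes a distinct host LUN number (0, 1, 2, ...) per array LUN, which is an assumption of this illustration rather than a mandated mapping.

```python
# Sketch: build one storagegroup -addhlu command per LUN, mapping
# host LUN numbers (HLU) 0,1,2,... onto the given array LUN numbers (ALU).

def addhlu_commands(host, gname, alus):
    """Return the navicli command strings for a list of array LUN numbers."""
    return [
        f"/nas/sbin/navicli -h {host} storagegroup -addhlu "
        f"-gname {gname} -hlu {hlu} -alu {alu}"
        for hlu, alu in enumerate(alus)
    ]

cmds = addhlu_commands("172.24.107.242", "MPFS_Clients", [16, 17, 18])
print(cmds[1])  # ... -hlu 1 -alu 17
```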


Configuring and accessing storage


This section describes how to install the FC driver, add hosts to storage groups, configure the iSCSI driver, and add initiators to the storage group. The arraycommpath and failovermode settings are used to see both active and passive paths concurrently. During a LUN failover, LUNs can be moved from the active to the passive path or from the passive to the active path. Use the arraycommpath and failovermode settings as described in Table 2.
Table 2    Arraycommpath and failovermode settings for storage groups

                     Access Logix units            VNX and VNX for block
                     arraycommpath  failovermode   arraycommpath  failovermode
Default              0              0              0              1
VNX for file ports   0              0              n/a            n/a
MPFS clients         1              1              n/a            n/a

Any MPFS server that is connected and logged in to a storage group should have arraycommpath and failovermode set to 1. For any VNX for file port connected to a storage group, these settings are 0. The settings are applied on an individual server/port basis and override the system's global default of 0. When using the VNX for block in an MPFS over iSCSI on VNX VG2/VG8 gateway configuration, the iSCSI initiator name, or IQN, is used to define the server, not a WWN.
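The per-initiator settings above can be captured in a small lookup table, for example:

```python
# Encoding of the per-initiator settings described above: MPFS clients
# get arraycommpath=1 and failovermode=1; VNX for file ports get 0/0.
# The type names here are illustrative labels, not EMC terminology.

SETTINGS = {
    "mpfs_client":       {"arraycommpath": 1, "failovermode": 1},
    "vnx_for_file_port": {"arraycommpath": 0, "failovermode": 0},
}

def initiator_settings(kind):
    """Return the arraycommpath/failovermode pair for an initiator type."""
    return SETTINGS[kind]

print(initiator_settings("mpfs_client"))
# {'arraycommpath': 1, 'failovermode': 1}
```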

Installing the FC driver (FC configuration)

Install the FC driver on the Linux server. The latest driver and qualification information is available on the Fibre Channel manufacturer's website, in the EMC E-Lab Interoperability Navigator, or in the documentation provided with the Fibre Channel driver.


Adding hosts to the storage group (FC configuration)

To view hosts in the storage group and add hosts to the storage group for SP A and SP B for the MPFS client:

1. List the hosts in the storage group by using this navicli command syntax:
$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:
$ /nas/sbin/navicli -h 172.24.107.242 port -list |grep "HBA UID:"

Output:
HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A
HBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3A
HBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35
HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Add hosts to the storage group by using this navicli command syntax:
$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:
<hostname:IP address> = name or IP address of the VNX for file
<gname> = storage group name
<hbauid> = WWN of proxy initiator
<sp> = storage processor
<spport> = port on SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the VNX for block (1 = enable, 0 = disable)


Examples of adding hosts to storage groups are shown in step 3 and step 4.

3. Add hosts to the storage group for SP A:
$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:0a:00:0d:ec:01:53:82:20:09:00:0d:ec:01:53:82 -sp a -spport 0 -failovermode 1 -arraycommpath 1

Note: The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

Output:
The recommended configuration is to have all HBAs on one host mapped to the same storage group. Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect. Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.

4. Add hosts to the storage group for SP B:


$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:0a:00:0d:ec:01:53:82:20:09:00:0d:ec:01:53:82 -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:
The recommended configuration is to have all HBAs on one host mapped to the same storage group. Set Path to storage group MPFS_Clients (y/n)? y


WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect. Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.

Configuring the iSCSI driver for RHEL 4 (iSCSI configuration)

After installing the Linux server for iSCSI configurations, configure the iSCSI driver for RHEL 4 on the Linux server:

1. Edit the /etc/iscsi.conf file on the Linux server.
2. Edit the /etc/initiatorname.iscsi file on the Linux server.
3. Start the iSCSI service daemon.
Note: Configuring the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration) on page 73 provides information about RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6.

Edit the iscsi.conf file

Using vi, or another standard text editor that does not add carriage return characters, edit the /etc/iscsi.conf file. Modify the file so that the iSCSI parameters shown in Table 3 have their comments removed and are set to the required values listed. Global parameters should be listed before any DiscoveryAddress, should start in column 1, and should not have any whitespace in front of them. Each DiscoveryAddress must also start in column 1 with no whitespace in front of it, and must appear after all global parameters. Be sure to read the iscsi.conf man page carefully.

Table 3    iSCSI parameters for RHEL 4 using 2.6 kernels

iSCSI parameter     Required value
HeaderDigest        Never
DataDigest          Never
ConnFailTimeout     45
InitialR2T          Yes
PingTimeout         45
ImmediateData       No
DiscoveryAddress    IP address of the iSCSI LAN port on the IP-SAN switch

Note: The discovery address is the IP address of the IP-SAN iSCSI LAN port. This address is an example of using an internal IP. The actual switch IP address will be different.

For VNX for block configurations, the target name is the IQN of the VNX for block array ports. Run this command from the Control Station to get the IQNs of the target ports:
$ /nas/sbin/navicli -h <hostname:IP address> port -list -all | grep "SP UID:"

SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:60:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:61:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:68:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:69:41:E0:05:9F
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a2
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a3
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a0
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a1
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b2
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b3
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b0
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b1

An example of iscsi.conf parameters for a VNX VG2/VG8 gateway configuration follows (two discovery addresses are shown as there are two zones):
Enabled=no
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a0
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a2
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a3
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b0
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b1
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b2
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b3


DiscoveryAddress=45.246.0.41
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
Continuous=no
Enabled=yes
LUNs=16,17,18,19,20,21,22,23,24,25

DiscoveryAddress=45.246.0.45
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b1
Continuous=no
Enabled=yes
LUNs=16,17,18,19,20,21,22,23,24,25
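To sanity-check a hand-edited file, the layout described above (global parameters first, then one section per DiscoveryAddress) can be parsed with a short sketch like this. It is illustrative only and handles just the key=value lines shown in these examples, not the full iscsi.conf grammar.

```python
# Sketch: split an iscsi.conf-style text into global settings and
# per-DiscoveryAddress sections with their TargetName entries.

def parse_iscsi_conf(text):
    globals_, sections, current = {}, [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        if key == "DiscoveryAddress":
            current = {"DiscoveryAddress": value, "targets": []}
            sections.append(current)
        elif current is None:
            globals_[key] = value            # before any DiscoveryAddress
        elif key == "TargetName":
            current["targets"].append(value)
        else:
            current[key] = value             # Enabled=, LUNs=, Continuous=
    return globals_, sections

conf = """HeaderDigest=Never
DiscoveryAddress=45.246.0.41
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
Enabled=yes
LUNs=16,17,18"""
g, secs = parse_iscsi_conf(conf)
print(secs[0]["DiscoveryAddress"])  # 45.246.0.41
print(secs[0]["targets"][0])        # iqn.1992-04.com.emc:cx.hk192201067.a1
```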

Editing the initiatorname.iscsi file

The VNX for block automatically generates initiator names for the Linux server. However, initiator names must be unique, and automatic initiator name generation may not always produce unique names. iSCSI names use a normalized character set (converted to lower case or equivalent), with no white space allowed and very limited punctuation. For names using only ASCII characters (U+0000 to U+007F), these characters are allowed:

- ASCII dash character ('-' = U+002d)
- ASCII dot character ('.' = U+002e)
- ASCII colon character (':' = U+003a)
- ASCII lower-case characters ('a'..'z' = U+0061..U+007a)
- ASCII digit characters ('0'..'9' = U+0030..U+0039)

In addition, any upper-case characters input by using a user interface MUST be mapped to their lower-case equivalents. RFC 3722, http://www.ietf.org/rfc/rfc3722.txt, provides more information.

To generate a unique initiator name:

1. View the current /etc/initiatorname.iscsi file:

$ more /etc/initiatorname.iscsi
GenerateName=yes

2. Using vi, or another standard text editor that does not add carriage return characters, edit the /etc/initiatorname.iscsi file and comment out the line containing GenerateName=yes. Example of commented-out line:
#GenerateName=yes

Note: Do not exit the file until step 3 is completed.


3. Place the unique IQN name, iSCSI qualified name, in the /etc/initiatorname.iscsi file:
#GenerateName=yes
InitiatorName=iqn.2006-06.com.emc.mpfs:<xxxxxxx>

where: <xxxxxxx> = server name

In this example, mpfsclient01 is used as the Linux server name:
#GenerateName=yes
InitiatorName=iqn.2006-06.com.emc.mpfs:mpfsclient01

4. Save and exit the editor.


Note: If nodes exist on the switch, issue the show iscsi initiator command to show the IQN name. Care must be taken to not use duplicate IQN names (InitiatorName).
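The naming rules above can also be checked programmatically. The following sketch lower-cases a candidate name and rejects anything outside the allowed character set; it is an illustration of the rules, not an EMC utility.

```python
# Sketch: normalize an iSCSI name per the rules above — lower-case
# letters, digits, dash, dot, and colon only; upper case folded down.

import re

IQN_CHARS = re.compile(r"^[a-z0-9.\-:]+$")

def normalize_iqn(name):
    """Lower-case the name and verify it uses only allowed characters."""
    name = name.lower()
    if not IQN_CHARS.match(name):
        raise ValueError(f"invalid iSCSI name: {name!r}")
    return name

print(normalize_iqn("IQN.2006-06.com.emc.mpfs:MPFSclient01"))
# iqn.2006-06.com.emc.mpfs:mpfsclient01
```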

Starting iSCSI

To start iSCSI, as root, type this command:


$ /etc/init.d/iscsi start

Output:
Starting iSCSI: iscsi iscsid [ OK ]

Configuring the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration)
Installing and configuring RHEL 5-6, SLES 10-11, and CentOS 5-6

After installing the Linux server for iSCSI configurations, follow the procedures below to configure the iSCSI driver for RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 on the Linux server.

To install the Linux Open iSCSI software initiator, consult the README files available within the Linux distribution and the release notes from the distributor.
Note: Complete these steps before continuing to the RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 installation subsections. The open-iSCSI persistent configuration is implemented as a DBM database available on all Linux installations.

Configuring and accessing storage


The database contains two tables:

   Discovery table (discovery.db)
   Node table (node.db)

The iSCSI database files in RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 are located in /var/lib/open-iscsi/. For SLES 10 SP3 and SLES 11 SP1, they are located in /etc/iscsi/.

Use these MPFS recommendations to complete the installation. The recommendations are generic to all distributions unless noted otherwise.

To configure the iSCSI driver for RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 on the Linux server:

1. Edit the /etc/iscsi/iscsid.conf file. There are several variables within the file. The default file from the initial installation is configured to operate with the default settings. The file uses a pound (#) symbol to comment out a line; enable a variable by deleting the pound (#) symbol preceding it in iscsid.conf. The entire set of variables with the default and optional settings is listed in each distribution's README file and in the configuration file itself. Table 4 lists the recommended iSCSI parameter settings.

Table 4   RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 iSCSI parameters

   Variable name                                 Default settings   MPFS recommended
   node.startup                                  manual             auto
   node.session.iscsi.InitialR2T                 No                 Yes
   node.session.iscsi.ImmediateData              Yes                No
   node.session.timeo.replacement_timeout        120                60
   node.conn[0].timeo.noop_out_interval          10                 > 10 on a congested network
   node.conn[0].timeo.noop_out_timeout           15                 > 15 on a congested network
   node.conn[0].iscsi.MaxRecvDataSegmentLength   131072             262144

   Comments:

   node.session.timeo.replacement_timeout: With the use of multipathing software, this time can be decreased to 30 seconds for a faster failover. However, use caution to ensure that this timer remains greater than the node.conn[0].timeo.noop_out_interval and node.conn[0].timeo.noop_out_timeout times combined.

   node.conn[0].timeo.noop_out_interval and node.conn[0].timeo.noop_out_timeout: Ensure that these values do not exceed the value of node.session.timeo.replacement_timeout.

   node.conn[0].iscsi.MaxRecvDataSegmentLength: Based on best practices for previous versions.
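As a sketch of how the Table 4 recommendations look in practice, the fragment below writes the recommended values to a scratch copy of the configuration file. The scratch filename is illustrative; on a real server the settings belong in /etc/iscsi/iscsid.conf.

```shell
# Write the MPFS-recommended values from Table 4 to an example file.
conf=./iscsid.conf.example             # stand-in for /etc/iscsi/iscsid.conf
cat > "$conf" <<'EOF'
node.startup = auto
node.session.iscsi.InitialR2T = Yes
node.session.iscsi.ImmediateData = No
node.session.timeo.replacement_timeout = 60
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
EOF
grep -c '^node\.' "$conf"              # count of recommended settings written
```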

2. Set the run levels for the iSCSI daemon to automatically start at boot and to shut down when the Linux server is brought down.

   For RHEL 5, RHEL 6, CentOS 5, and CentOS 6:

   # chkconfig --level 345 iscsid on
   # service iscsi start

   For RHEL 5 or RHEL 6, perform a series of eight iscsiadm commands to configure the targets to connect to with open-iSCSI. Consult the man page for iscsiadm for a detailed explanation of the command and its syntax. First, discover the targets to connect the server to by using iSCSI.

   For SLES 10 and SLES 11:

   # chkconfig -s open-iscsi 345
   # chkconfig -s open-iscsi on
   # /sbin/rcopen-iscsi start

   Use the YaST utility on SLES 10 and SLES 11 to configure the iSCSI software initiator. It can be used to discover targets with the iSCSI SendTargets command, add targets to be connected to the server, and start or stop the iSCSI service. Open YaST and select Network Services > iSCSI Initiator. Open the Discovered Targets tab and type the IP address of the target:


For a VNX for block: Specify one of the target IP addresses and the array will return all its available targets to select. After discovering the targets, click the Connected Targets tab to log in to the targets to be connected to and select those to be logged in to automatically at boot time. Perform the discovery process on a single IP address and the array will return all its iSCSI configured targets.

For a Symmetrix system: Specify each individual target to discover and the array will return the specified targets to select. After discovering the targets, click the Connected Targets tab to log in to the targets to be connected to and select those to be logged in to automatically at boot time. Perform the discovery process on each individual target and the array will return the specified iSCSI configured targets.

Command examples

To discover targets:

# iscsiadm -m(ode) discovery -t(ype) s(end)t(argets) -p(ortal) <port IP address>

Output:

<node.discovery_address>:3260,1 iqn.2007-06.com.test.cluster1:storage.cluster1

# iscsiadm -m discovery
<node.discovery_address>:3260 via sendtargets
<node.discovery_address>:3260 via sendtargets

# iscsiadm --mode node     (rhel5.0)
<node.discovery_address>:3260,13570 iqn.1987-05.com.cisco:05.tomahawk.11-03.5006016941e00f1c
<node.discovery_address>:3260,13570 iqn.1987-05.com.cisco:05.tomahawk.11-03.5006016141e00f1c

# iscsiadm --mode node --targetname iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016141e00f1c
node.name = iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016141e00f1c
node.tpgt = 13569
node.startup = automatic
iface.hwaddress = default
iface.iscsi_ifacename = default
iface.net_ifacename = default
iface.transport_name = tcp
node.discovery_address = 128.221.252.200
node.discovery_port = 3260
.

# iscsiadm --mode node     (suse10.0)
[2f21ef] <node.discovery_address>:3260,13569 iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e019bd
[2f071e] <node.discovery_address>:3260,13569 iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e00f1c

# iscsiadm -m node -r 2f21ef
node.name = iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e019bd
node.transport_name = tcp
node.tpgt = 13569
node.active_conn = 1
node.startup = automatic
node.session.initial_cmdsn = 0
node.session.auth.authmethod = None

To log in to a target:

# iscsiadm --mode node --targetname iqn.2007-06.com.test.cluster1:storage.cluster1 --portal <node.discovery_address>:3260 --login

To display session information:

# iscsiadm -m session -i

To log out of a target:

# iscsiadm --mode node --targetname iqn.2007-06.com.test.cluster1:storage.cluster1 --portal <node.discovery_address>:3260 --logout

To log in to all targets:


# iscsiadm -m node -L all
Login session [45.246.0.45:3260 iqn.1992-04.com.MPFS:cx.hk192201109.b1]
Login session [45.246.0.45:3260 iqn.1992-04.com.MPFS:cx.hk192201109.a1]
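When scripting target logins, the target IQNs can be pulled out of saved sendtargets output. The sketch below parses a captured sample (the two target lines mirror the login example above); on a live server, pipe the output of iscsiadm -m discovery into awk directly.

```shell
# Extract target IQNs from saved `iscsiadm ... sendtargets` output.
cat > discovery.out <<'EOF'
45.246.0.45:3260,1 iqn.1992-04.com.MPFS:cx.hk192201109.b1
45.246.0.45:3260,2 iqn.1992-04.com.MPFS:cx.hk192201109.a1
EOF
awk '{print $2}' discovery.out         # second field is the target IQN
```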


Starting/stopping the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration)

Use these commands to start and stop the open-iSCSI driver.

To manually start and stop the iSCSI driver for RHEL 5, RHEL 6, CentOS 5, and CentOS 6:

# /etc/init.d/iscsid {start|stop|restart|status|condrestart}

To manually start and stop the iSCSI driver for SLES 10 and SLES 11:

# /sbin/rcopen-iscsi {start|stop|status|restart}

If there are problems loading the iSCSI kernel module, diagnostic information is placed in /var/log/iscsi.log.

The open-iscsi driver is a sysfs class driver, and many of its attributes can be accessed in directories of the form:

/sys/class/iscsi_<host, session, connection>

The man page for iscsiadm (8) provides information for all administrative functions used to configure the driver, gather statistics, perform target discovery, and so on.

Note: Verify that anything that has an iSCSI device open has closed the iSCSI device before shutting down iSCSI. This includes file systems, volume managers, and user applications. If iSCSI devices are open when attempting to stop the driver, the scripts will error out instead of removing those devices. This prevents corrupting the data on iSCSI devices. In this case, iscsid will no longer be running. To continue using the iSCSI devices, issue the /etc/init.d/iscsi start command.
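When one script must start the driver on either distribution family, the init-script path can be chosen at run time. The /etc/SuSE-release check below is an assumption made for this sketch (a common way to tell SLES apart from RHEL/CentOS on these releases); the two script paths are the ones documented above, and the command is printed rather than run.

```shell
# Pick the documented init script for the running distribution.
if [ -f /etc/SuSE-release ]; then
    ctl=/sbin/rcopen-iscsi             # SLES 10 and SLES 11
else
    ctl=/etc/init.d/iscsid             # RHEL 5-6 and CentOS 5-6
fi
echo "$ctl start"                      # print rather than execute the command
```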

Limitations and workarounds

Limitations and workarounds are:

   The Linux iSCSI driver, which is part of the Linux operating system, does not distinguish between NICs on the same subnet. Therefore, to achieve load balancing and multipath failover, each NIC of a Linux server connected to a VNX for block must be configured on a different subnet.

   The open-iSCSI daemon does not find targets automatically on boot when configured to log in at boot time. The Linux iSCSI Attach Release Notes provide more information.


Adding initiators to the storage group (FC configuration)

In an FC configuration, the storage group should contain the HBA UID of the Linux servers. The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

To add initiators to the storage group for SP A and SP B for the MPFS client:

1. Use this navicli command to list hosts in the storage group:
$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

For example, type:


$ /nas/sbin/navicli -h 172.24.107.242 port -list |grep "HBA UID:"

Output:
HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A
HBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3A
HBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35
HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Use this navicli command to add initiators to the storage group by using this command syntax:
$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:
   <gname>         = storage group name
   <hbauid>        = HBA UID of the Linux server
   <sp>            = storage processor
   <spport>        = port on the SP
   <failovermode>  = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
   <arraycommpath> = creates or removes a communication path between the server and the VNX for block (1 = enable, 0 = disable)

Note: Perform this command for each SP.


Examples of adding initiators to storage groups are shown in step 3 and step 4.

3. Add initiators to the storage group for SP A:
$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:
The recommended configuration is to have all HBAs on one host mapped to the same storage group. Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect. Do you wish to continue (y/n)? y

Note: This command produces no further system response. When the command has finished executing, only the command line prompt appears.

4. Add initiators to the storage group for SP B:


$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:
The recommended configuration is to have all HBAs on one host mapped to the same storage group. Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect. Do you wish to continue (y/n)? y


Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.
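Because the same -setpath command must be repeated for each SP, steps 3 and 4 can be generated from a small loop. This sketch only prints the commands (reusing the example array address, group name, and HBA UID from above) so they can be reviewed before being run on the Control Station.

```shell
# Generate the per-SP navicli -setpath commands for review.
array=172.24.107.242
gname=MPFS_Clients
hba=20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A
for sp in a b; do
    echo "/nas/sbin/navicli -h $array storagegroup -setpath -gname $gname" \
         "-hbauid $hba -sp $sp -spport 0 -failovermode 1 -arraycommpath 1"
done > setpath-commands.txt
cat setpath-commands.txt
```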

Adding initiators to the storage group (iSCSI configuration)

When using the VNX for block in an MPFS over iSCSI on VNX VG2/VG8 gateway configuration, the iSCSI initiator name (IQN) is used to define the host, not a WWN. The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

To add initiators to the storage group for SP A and SP B for the MPFS client:

1. Find the IQN used to define the host by using this navicli command syntax:
$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:" |grep iqn

For example, type:


$ /nas/sbin/navicli -h 172.24.107.242 port -list |grep "HBA UID:" |grep iqn

Output:
InitiatorName=iqn.1994-05.com.redhat:58c8b0919b31

2. Use this navicli command to add initiators to the storage group by using this command syntax:
$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>


where:
   <gname>         = storage group name
   <hbauid>        = iSCSI initiator name
   <sp>            = storage processor
   <spport>        = port on the SP
   <failovermode>  = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
   <arraycommpath> = creates or removes a communication path between the server and the VNX for block (1 = enable, 0 = disable)

Examples of adding initiators to storage groups are shown in step 3 and step 4.
Note: Perform this command for each iSCSI proxy-initiator.

3. Add initiators to the storage group for SP A:


$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid iqn.1994-05.com.redhat:58c8b0919b31 -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:
The recommended configuration is to have all HBAs on one host mapped to the same storage group. Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect. Do you wish to continue (y/n)? y

Note: This command produces no further system response. When the command has finished executing, only the command line prompt appears.


4. Add initiators to the storage group for SP B:


$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid iqn.1994-05.com.redhat:58c8b0919b31 -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:
The recommended configuration is to have all HBAs on one host mapped to the same storage group. Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect. Do you wish to continue (y/n)? y

Note: This command produces no further system response. When the command has finished executing, only the command line prompt appears.


Mounting MPFS
A connection between the Linux server and the VNX for file, known as a session, must be completed before mounting MPFS. Establish a session by mounting the MPFS on the Linux server.
Note: MPFS can be added to the /etc/fstab file to mount the file system automatically after the server is rebooted or shut down.

To mount MPFS on the Linux server, use the mount command with this syntax:
mount -t mpfs [-o] <MPFS_specific_options> <movername>:/<FS_export> <mount_point>

where:

<MPFS_specific_options> is a comma-separated list (without spaces) of arguments to the -o option that are supported by MPFS. Most arguments to the -o option that are supported by the NFS mount and mount_nfs commands are also supported by MPFS. MPFS also supports these additional arguments:

   -o mpfs_verbose
      Executes the mount command in verbose mode. If the mount succeeds, the list of disk signatures used by the MPFS volume is printed on standard output.

   -o mpfs_keep_nfs
      If the mount using MPFS fails, the file system is mounted by using NFS. Warning messages inform the user that the MPFS mount failed.

   -o hvl
      Specifies the volume management type as hierarchical by default if it is supported by the server (-o hvl=1) or as not hierarchical by default (-o hvl=0). Setting this value overrides the default value specified in /etc/sysconfig/EMCmpfs. Hierarchical volume management on page 33 describes hierarchical volumes and their management.

The -t option specifies the type of file system (such as MPFS).

Note: The -o hvl option requires NAS software version 5.6 or later.

<movername> is the name of the VNX for file.

<FS_export> is the absolute pathname of the directory that is exported on the VNX for file.


<mount_point> is the absolute pathname of the directory on the Linux server on which to mount MPFS.

Note: To view the man page for the mount command, type man mount_mpfs.

Examples

This command mounts MPFS without any MPFS specific options:


mount -t mpfs <hostname:IP address>:/src /usr/src

Output: This command produces no system response. When the command has finished executing, only the command line prompt appears. The default behavior of mount -t mpfs is to try to mount the file system. If all disks are not available, the mount fails with this error:
$ mount -t mpfs <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume APM000643042520000-0008 not found.
Error mounting /mnt/mpfs via MPFS

This command mounts MPFS and displays a list of disk signatures:


mount -t mpfs -o mpfs_verbose <hostname:IP address>:/src /usr/src

Output:

VNX signature             vendor   product_id   device      serial number or path
APM000531007850006-001c   EMC      SYMMETRIX    /dev/sdab   path = /dev/sdab(0x41b0)   Active
APM000531007850006-001d   EMC      SYMMETRIX    /dev/sdab   path = /dev/sdab(0x41b0)   Active
APM000531007850006-001e   EMC      SYMMETRIX    /dev/sdac   path = /dev/sdac(0x41c0)   Active
APM000531007850006-001f   EMC      SYMMETRIX    /dev/sdad   path = /dev/sdad(0x41d0)   Active
APM000531007850006-0020   EMC      SYMMETRIX    /dev/sdae   path = /dev/sdae(0x41e0)   Active
APM000531007850006-0021   EMC      SYMMETRIX    /dev/sdaf   path = /dev/sdaf(0x41f0)   Active


If all disks are not available, the mount will fail with this error:
$ mount -t mpfs -o mpfs_verbose <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume APM000643042520000-0008 not found.
Error mounting /mnt/mpfs via MPFS

This command mounts MPFS. The mpfs_keep_nfs option causes the file system to mount by using NFS if the mount using MPFS fails:
mount -t mpfs -o mpfs_keep_nfs <hostname:IP address>:/src /usr/src

Output: This command produces no system response. When the command has finished executing, only the command line prompt appears. With the mpfs_keep_nfs option, the behavior is to try to mount the file system by using MPFS. If all the disks are not available, the mount will default to NFS:
$ mount -t mpfs <hostname:IP address>:/rcfs /mnt/mpfs -v -o mpfs_keep_nfs
<hostname:IP address>:/rcfs on /mnt/mpfs type mpfs (rw,addr=<hostname:IP address>)
<hostname:IP address>:/rcfs using disks
No disks found, ignore and work through NFS now!
It will failback to MPFS automatically when the disks are OK.

This command specifies the volume management type as hierarchical volume management:
mount -t mpfs -o hvl <hostname:IP address>:/src /usr/src

Output: This command produces no system response. When the command has finished executing, only the command line prompt appears.


If all disks are not available, the mount will fail with this error:
$ mount -t mpfs -o hvl <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume APM000643042520000-0008 not found.
Error mounting /mnt/mpfs via MPFS

In this case, I/O is never retried through the SAN; for all intents and purposes, the behavior is as if the user typed mount -t nfs. Use this option for mounts that are done automatically and to ensure that the volume is mounted with or without MPFS.
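As the note at the start of this section mentions, MPFS can be added to /etc/fstab for automatic mounting. The entry below is an illustrative sketch (server2 and the paths are placeholders) written to a scratch file rather than /etc/fstab; the mpfs_keep_nfs option makes a boot-time mount fall back to NFS when the SAN disks are unavailable.

```shell
# Example fstab entry for an automatic MPFS mount with NFS fallback.
cat > fstab.example <<'EOF'
server2:/src  /usr/src  mpfs  mpfs_keep_nfs  0 0
EOF
awk '{print $3, $4}' fstab.example     # file system type and mount options
```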


Unmounting MPFS
To unmount the MPFS file system on the Linux server, use the umount command with this syntax:
umount -t mpfs [-a] <mount_point>

where:

-t specifies the type of file system (such as MPFS).

-a specifies to unmount all MPFS file systems.


<mount_point> is the absolute pathname of the directory on the Linux server on which to unmount the MPFS file system.

Example

This command unmounts MPFS:


umount -t mpfs -a

To unmount a specific file system, type either of these commands:


umount -t mpfs /mnt/fs1

or
umount /mnt/fs1

If a file system cannot be unmounted, the umount command displays this error message:
Error unmounting /mnt/fs1/mpfs via MPFS

If a file system cannot be unmounted because it is in use, the umount command displays this error message:
umount: device busy

Note: These commands produce no system response. When the commands have finished executing, only the command line prompt appears.
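Before unmounting, the currently mounted MPFS file systems can be listed from /proc/mounts. The sketch below parses a captured sample so it can be shown standalone; on a live server, point the awk command at /proc/mounts itself.

```shell
# List MPFS mount points from a sample /proc/mounts capture.
cat > mounts.sample <<'EOF'
server2:/src /usr/src mpfs rw,addr=172.24.107.10 0 0
/dev/sda1 /boot ext3 rw 0 0
EOF
awk '$3 == "mpfs" {print $2}' mounts.sample
```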

Chapter 3
Installing, Upgrading, or Uninstalling VNX MPFS Software

This chapter describes how to install, upgrade, and uninstall the EMC VNX MPFS software. Topics include:

Installing the MPFS software ........................................... 90
Upgrading the MPFS software ............................................ 95
Uninstalling the MPFS software ........................................ 100


Installing the MPFS software


This section describes the requirements that must be met before installing the MPFS software and two methods to install it:

Install the MPFS software from a tar file
Install the MPFS software from a CD

Before installing

Before installing the MPFS software, read the prerequisites for the Linux server and VNX for block listed in this section:

   Verify that the Linux server on which the MPFS software will be installed meets the MPFS configuration requirements specified in the EMC VNX MPFS for Linux Clients Release Notes.

   Ensure that the Linux server has a network connection to the Data Mover on which the MPFS software resides and that the Data Mover can be contacted.

   Ensure that the Linux server meets the overall system and other configuration requirements specified in the E-Lab Interoperability Navigator.

Installing the MPFS software from a tar file

To install the MPFS software from a compressed tar file, download the file from the EMC Online Support website. Then, uncompress and extract the tar file on the Linux server and execute the install-mpfs script.
Note: The uncompressed tar file needs approximately 17 MB and the installation RPM file needs approximately 5 MB of disk space.

Note: Unless noted as an output, when the commands in the procedures have finished executing, only the command line prompt is returned.

To download, uncompress, extract, and install the MPFS software from the compressed tar file:

1. Create the directory /tmp/temp_mpfs if it does not already exist.

2. Locate the compressed tar file on the EMC Online Support website.


Depending on the specific MPFS software release and version, the filename will appear as:
EMCmpfs.linux.6.0.x.x.tar.Z

3. Download the compressed tar file from the EMC Online Support website to the directory created in step 1.

4. Change to the /tmp/temp_mpfs directory:
cd /tmp/temp_mpfs

5. Uncompress the tar file by using this command syntax:


uncompress <filename>

where <filename> is the name of the compressed tar file. For example, type:
uncompress EMCmpfs.linux.6.0.x.x.tar.Z

6. Extract the tar file by using this command syntax:

tar -xvf <filename>

where <filename> is the name of the tar file produced by the uncompress command in step 5. For example, type:

tar -xvf EMCmpfs.linux.6.0.x.x.tar

7. Go to the Linux directory created by the last step:


cd /tmp/temp_mpfs/linux

8. Install the MPFS software:


$ ./install-mpfs

Output:
Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...   ########################################### [100%]
   1:EMCmpfs   ########################################### [100%]
Loading EMC MPFS Disk Protection   [ OK ]
Protecting EMC VNX disks           [ OK ]
Loading EMC MPFS                   [ OK ]
Starting MPFS daemon               [ OK ]
Discover MPFS devices              [ OK ]
Starting MPFS perf daemon          [ OK ]
[ Done ]

9. Follow the instructions in Post-installation checking on page 98.
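The steps above lend themselves to a small wrapper script. The sketch below only derives the filenames and prints the commands it would run (the version string is the placeholder used in the steps), so nothing is downloaded or installed when it executes.

```shell
# Derive filenames and print the install sequence from the tar-file steps.
archive=EMCmpfs.linux.6.0.x.x.tar.Z    # placeholder name from step 2
tarfile=${archive%.Z}                  # name produced by uncompress (step 5)

echo "mkdir -p /tmp/temp_mpfs && cd /tmp/temp_mpfs"
echo "uncompress $archive"
echo "tar -xvf $tarfile"
echo "cd linux && ./install-mpfs"
```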


Installing the MPFS software from a CD

To install the MPFS software from the EMC MPFS for Linux Client Software CD, mount the CD, view the architecture subdirectories, find the architecture being used, select the desired architecture subdirectory, and execute the install-mpfs script:
Note: Unless noted as an output, when the commands in the procedures have finished executing, only the command line prompt is returned.

1. Insert the CD into the CD drive.

2. Mount the CD on the /mnt directory:


$ mount /dev/cdrom /mnt

Output:
mount: block device /dev/cdrom is write-protected, mounting read-only

3. Go to the /mnt directory where the CD was mounted:


$ cd /mnt

4. View the architecture subdirectories:


$ ls -lt

Output:
dr-xr-xr-x  2 root root  2048 Jul 31 14:46 Packages
-r--r--r--  1 root root   694 Jul 31 14:46 README.txt
-r--r--r--  1 root root   442 Jul 31 14:46 TRANS.TBL

5. Go to the Packages directory:


$ cd Packages

6. View the architecture subdirectories:


$ ls -lt

Output:
-r--r--r--  1 root root    42164 Jul 31 14:46 EMCmpfs-6.0.2.x-ia32e.rpm
-r--r--r--  1 root root  3381317 Jul 31 14:46 EMCmpfs-6.0.2.x-ia64.rpm
-r--r--r--  1 root root  3955356 Jul 31 14:46 EMCmpfs-6.0.2.x-x86_64.rpm
-r-xr-xr-x  1 root root    11711 Jul 31 14:46 install-mpfs
-r--r--r--  1 root root     1175 Jul 31 14:46 TRANS.TBL
-r--r--r--  1 root root  4807898 Jul 31 14:46 EMCmpfs-6.0.2.x-i686.rpm


7. Install the MPFS software:


$ ./install-mpfs

Output:
Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...   ########################################### [100%]
   1:EMCmpfs   ########################################### [100%]
Loading EMC MPFS Disk Protection   [ OK ]
Protecting EMC VNX disks           [ OK ]
Loading EMC MPFS                   [ OK ]
Starting MPFS daemon               [ OK ]
Discover MPFS devices              [ OK ]
Starting MPFS perf daemon          [ OK ]
[ Done ]

8. Follow the instructions in Post-installation checking on page 98.

Post-installation checking

After installing the MPFS software:

1. Verify that the MPFS software is installed properly and the MPFS daemon (mpfsd) has started, as described in Verifying the MPFS software upgrade on page 99.

2. Start the MPFS software by mounting an MPFS file system as described in Mounting MPFS on page 84.

If the MPFS software does not run, Appendix B, Error Messages and Troubleshooting, provides information on troubleshooting the MPFS software.
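A quick way to confirm the daemon in step 1 is to look for mpfsd in the process list. The sketch below greps a captured ps line so it runs standalone; on a live server, use ps -e | grep mpfsd instead of the sample file.

```shell
# Check a sample process listing for the MPFS daemon.
cat > ps.sample <<'EOF'
root      2412     1  0 10:02 ?        00:00:00 mpfsd
EOF
if grep -q 'mpfsd' ps.sample; then
    echo "mpfsd running"
else
    echo "mpfsd NOT running" >&2
fi
```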


Operating MPFS through a firewall

For proper MPFS operation, the Linux server and VNX for file (a Data Mover) must communicate with each other on their File Mapping Protocol (FMP) ports. If a firewall resides between the Linux server and the VNX for file, the firewall must allow access to the ports listed in Table 5 on page 94 for the Linux server.

Table 5   Linux server firewall ports

   Linux O/S            Linux server port/use        VNX for file port/use
   RHEL, SLES, CentOS   6907 - FMP notify protocol   4656 (a) - FMP
                                                     2049 (a) - NFS
                                                     1234 - mountd
                                                     111 - portmap/rpcbind

   a. Both ports 2049 and 4656 must be open to run the FMP service.
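As an illustration of opening the Table 5 ports, the loop below generates candidate iptables rules; the chain names and rule form are assumptions made for this sketch, and the generated file should be reviewed against the local firewall policy before any rule is applied.

```shell
# Generate candidate iptables rules for the ports in Table 5.
{
    # Outbound to the VNX for file (Data Mover) ports
    for port in 4656 2049 1234 111; do
        echo "iptables -A OUTPUT -p tcp --dport ${port} -j ACCEPT"
    done
    # Inbound FMP notify traffic to the Linux server
    echo "iptables -A INPUT -p tcp --dport 6907 -j ACCEPT"
} > fmp-rules.sh
cat fmp-rules.sh
```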


Upgrading the MPFS software


Use this procedure to upgrade the MPFS software.

Upgrading the MPFS software

Upgrade the existing MPFS software by using the install-mpfs script. The install-mpfs script can store information about the MPFS configuration, unmount MPFS, and restore the configuration after an upgrade. The command syntax for the install-mpfs script is:
install-mpfs [-s] [-r]

where:

   -s = silent mode, which unmounts MPFS and upgrades the RPM without prompting the user.

   -r = restores configurations by backing up the current MPFS configurations and restoring them after the upgrade.

The install script automatically issues rpm -e EMCmpfs to remove the existing MPFS software.
Note: Do not back up and restore MPFS configuration files by default.


To upgrade the MPFS software on a Linux server that has an earlier version of MPFS software installed:

1. Type:
$ ./install-mpfs

Output:
Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
Warning: Package EMCmpfs-6.0.1-0 has already been installed.
Do you want to upgrade to new package? [yes/no] yes
[ Step 2 ] Checking mounted mpfs file system ...
Fine, no mpfs file system is mounted. Install process will continue.
[ Step 3 ] Upgrading MPFS package ...
Preparing...   ########################################### [100%]
   1:EMCmpfs   ########################################### [100%]
Unloading old version modules... unprotect
Loading EMC MPFS Disk Protection   [ OK ]
Protecting EMC VNX disks           [ OK ]
Loading EMC MPFS                   [ OK ]
Starting MPFS daemon               [ OK ]
Discover MPFS devices              [ OK ]
Starting MPFS perf daemon          [ OK ]
[ Done ]

2. The installation is complete. Follow the instructions in Post-installation checking on page 98.


Upgrading the MPFS software with MPFS mounted

The install-mpfs script can store information about the MPFS configuration, unmount MPFS, and restore the configuration after an upgrade. The command syntax for the install-mpfs script is:

install-mpfs [-s] [-r]

where:

-s = silent mode, which unmounts MPFS and upgrades the RPM without prompting the user.

-r = restores configurations by backing up the current MPFS configurations and restoring them after the upgrade.

The install-mpfs script attempts to unmount MPFS after prompting the user to proceed.

Note: By default, MPFS configuration files are not backed up and restored.

To upgrade the MPFS software on a Linux server that has an earlier version of MPFS software installed and MPFS mounted:

1. Type:
$ ./install-mpfs

Output:
Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
Warning: Package EMCmpfs-6.0.1-0 has already been installed.
Do you want to upgrade to new package? [yes/no] yes
[ Step 2 ] Checking mounted mpfs file system ...
The following mpfs file system are mounted:
/mnt
Do you want installation to umount these file system automatically? [yes/no] yes
Unmounting MPFS filesystems...
Successfully umount all mpfs file system.
[ Step 3 ] Upgrading MPFS package ...
Preparing...              ########################################### [100%]
   1:EMCmpfs              ########################################### [100%]
Unloading old version modules...
unprotect
Loading EMC MPFS Disk Protection                       [ OK ]
Protecting EMC VNX disks                               [ OK ]
Loading EMC MPFS                                       [ OK ]
Starting MPFS daemon                                   [ OK ]
Discover MPFS devices                                  [ OK ]
Starting MPFS perf daemon                              [ OK ]
[ Done ]

2. The installation is complete. Follow the instructions in Post-installation checking on page 98.

Post-installation checking

After upgrading the MPFS software:

1. Verify that the MPFS software is upgraded properly and the MPFS daemon (mpfsd) has started as described in Verifying the MPFS software upgrade on page 99.

2. Start the MPFS software by mounting MPFS as described in Mounting MPFS on page 84.


Verifying the MPFS software upgrade

To verify that the MPFS software is upgraded and that the MPFS daemon is started:

1. Use RPM to verify the MPFS software upgrade:
rpm -q EMCmpfs

If the MPFS software is upgraded properly, the command displays an output such as:
EMCmpfs-6.0.x-x

Note: Alternatively, use the mpfsctl version command to verify that the MPFS software is upgraded. The mpfsctl man page or Using the mpfsctl utility on page 107 provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:
ps -ef |grep mpfsd

The output will look like this if the MPFS daemon has started:
root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show the MPFS daemon process running, start the MPFS software as root:
$ /etc/rc.d/init.d/mpfs start
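Steps 2 and 3 can be combined into a small wrapper. The sketch below only parses ps-style text, so the sample variable stands in for live ps -ef output; the mpfsd_running helper is illustrative, not shipped with MPFS:

```shell
# Return success if an mpfsd process appears in ps -ef style text.
# grep -v grep drops the grep process itself when piping live output.
mpfsd_running() {
    printf '%s\n' "$1" | grep '/usr/sbin/mpfsd' | grep -v grep > /dev/null
}

# Sample line; on a live system use: ps -ef | grep mpfsd
sample='root       847     1  0 15:19 ?        00:00:00 /usr/sbin/mpfsd'

if mpfsd_running "$sample"; then
    echo "mpfsd is running"
else
    echo "not running; as root run: /etc/rc.d/init.d/mpfs start"
fi
```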


Uninstalling the MPFS software


To uninstall the MPFS software from a Linux server:

1. Type:
$ rpm -e EMCmpfs

If the MPFS software was uninstalled correctly, this message appears on the screen:
Unloading EMCmpfs module...
[root@###14583 root]#

2. If the MPFS software was not uninstalled due to MPFS being mounted, this error message appears:
[root@###14583 root]# rpm -e EMCmpfs
ERROR: Mounted mpfs filesystems found.
Please unmount all mpfs filesystems before uninstalling the product.
error: %preun(EMCmpfs-6.0.2-x) scriptlet failed, exit status 1

3. Unmount MPFS. Follow the instructions in Unmounting MPFS on page 88.

4. Repeat step 1.


Chapter 4
EMC VNX MPFS Command Line Interface

This chapter discusses the EMC VNX MPFS commands, parameters, and procedures used to manage and fine-tune a Linux server. Topics include:

Using HighRoad disk protection ................................................... 102
Using the mpfsctl utility ................................................................. 107
Displaying statistics ......................................................................... 118
Displaying MPFS device information ........................................... 120
Setting MPFS parameters ................................................................ 127
Displaying Kernel parameters ....................................................... 127
Setting persistent parameter values .............................................. 129


Using HighRoad disk protection


Linux servers provide hard disk protection for VNX for block and Symmetrix system volumes associated with MPFS. These volumes are called File Mapping Protocol (FMP) volumes. The program providing this protection is called the EMC HighRoad Disk Protection (hrdp) program.

With hrdp read/write protection activated, I/O requests to FMP volumes from the Linux server are allowed, but I/O requests from other sources are denied. For example, root users can use the dd utility to read/write to an MPFS mounted file system, but cannot use the dd utility to read/write to the device files themselves (/dev).

The reason for disk protection is twofold. The first reason is to provide security: arbitrary users on a Linux server should not be able to access the data stored on FMP volumes. The second reason is to provide data integrity: hard disk protection prevents the accidental corruption of file systems.

This section describes the behavior and interface characteristics between the VNX for file and Linux servers.

VNX for file and hrdp

Linux servers depend on the VNX for file to tag relevant volumes to identify them as FMP volumes. To accomplish this, the VNX for file writes a signature on all visible volumes. From a disk protection view, a VNX for file volume and an FMP volume are synonymous.

Discovering disks

When a Linux server performs a disk discovery action, it tries to read a VNX for file signature from every accessible volume. For VNX for block volumes, which may be accessible through two different service processors (SP A and SP B), hrdp is not able to read a VNX for file signature from the passive path. However, hrdp does recognize that the two paths lead to the same device. The hrdp program protects both the passive and active paths to the VNX for block volumes. Because the set of FMP volumes may change over time, hrdp must perform disk discovery periodically. The hrdp program receives notifications of changes to device paths, and responds accordingly by protecting any newly accessible VNX for file devices.


hrdp command syntax

The hrdp program can be used to manually control device protection. Used with no arguments, hrdp identifies all the devices in the system, and protects the devices or partitions with a VNX for file disk signature.

Command syntax

hrdp [-d] [-h] [-n] [-p] [-s sleep_time] [-u] [-v] [-w]

where:

-d = run hrdp as a daemon, periodically scan devices, and update the kernel.
-h = print hrdp usage information.
-n = scan for new volumes, but do not inform the kernel about them.
-p = enable protection (read and write) for all VNX for file volumes.
-s sleep_time = when run as a daemon, sleep the specified number of seconds between rediscovery. The default sleep time is 900 seconds.
Note: Sleep time can also be set by using HRDP_SLEEP_TIME as an environment variable, or as a parameter in /etc/sysconfig/EMCmpfs. The sysconfig parameter is explained in detail in Displaying statistics on page 118.

-u = disable protection (read and write) for all VNX for file volumes.
-v = scan in verbose mode; print the signatures of new volumes as they are found.
-w = enable write protection for all VNX for file volumes.

Examples

These examples illustrate the hrdp command output. This command runs hrdp as a daemon, periodically scans devices, and updates the kernel:
$ hrdp -d

Note: When the command has finished executing, only the command line prompt is returned.


This command prints information about hrdp usage:


$ hrdp -h

Output:
Usage: hrdp [options]
Options:
 -d        run as a daemon
 -h        print this help information
 -n        do not update kernel just print results
 -p        enable protection
 -s time   seconds to sleep between reprotection if run as daemon
 -u        disabled (unprotect) protection
 -v        verbose
 -w        enable write protection (i.e. allow reads)
$

This command scans for new volumes but does not inform the kernel about them:

$ hrdp -n

Note: When the command has finished executing, only the command line prompt is returned.

This command enables read and write protection for all VNX for file volumes:
$ hrdp -p

Output:
protect $

This command displays "protect" to show that read and write protection is enabled for all VNX for file volumes.

When hrdp is run as a daemon, this command sets the number of seconds to sleep between rediscoveries:

$ hrdp -s sleep_time

Note: When the command has finished executing, only the command line prompt is returned.
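The HRDP_SLEEP_TIME setting can also come from /etc/sysconfig/EMCmpfs, as noted earlier. A sketch of reading it with the documented 900-second default; the get_sleep_time helper is illustrative, not part of the product:

```shell
# Read HRDP_SLEEP_TIME from sysconfig-style text, falling back to the
# documented 900-second default when the parameter is absent.
get_sleep_time() {
    value=$(printf '%s\n' "$1" | sed -n 's/^HRDP_SLEEP_TIME=//p' | tail -n 1)
    echo "${value:-900}"
}

config='HRDP_SLEEP_TIME=300'   # e.g. a line from /etc/sysconfig/EMCmpfs
get_sleep_time "$config"       # prints 300
get_sleep_time ''              # prints 900 (the default)
```

On a live system the argument would be "$(cat /etc/sysconfig/EMCmpfs)".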


This command disables read and write protection for all VNX for file volumes:
$ hrdp -u

Output:
unprotect $

This command displays "unprotect" to show that read and write protection is disabled for all VNX for file volumes. This command scans in verbose mode and prints the signatures of new volumes as they are found.
$ hrdp -v

Output:
VNX signature           vendor  product_id  device serial number or path info
0001874307271FA0-00f1   EMC     SYMMETRIX   60:06:04:80:00:01:87:43:07:27:53:30:32:34:35:32
path = /dev/sdig Active FA-51b /dev/sg240
0001874307271FA0-00ee   EMC     SYMMETRIX   60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:33
path = /dev/sdid Active FA-51b /dev/sg237
0001874307271FA0-00f0   EMC     SYMMETRIX   60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:44
path = /dev/sdif Active FA-51b /dev/sg239

This command enables write protection for all VNX for file volumes:
$ hrdp -w

Output:
protect $

This command displays "protect" to show that write protection is enabled for all VNX for file volumes.


Viewing hrdp protected devices

Devices being protected by hrdp can be seen by listing the /proc/hrdp file. To list the protected devices:
$ cat /proc/hrdp

Output:
Disk Protection Enabled for reads and writes
Device                         Status
274: /dev/sddw   71.224        protected
275: /dev/sddx   71.240        protected
276: /dev/sddy   128.000       protected
277: /dev/sddz   128.016       protected
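A script can act on this listing, for example to confirm that all expected FMP volumes are protected. A sketch using awk on /proc/hrdp-style text; the sample lines mirror the output above and the count_protected helper is illustrative:

```shell
# Count "protected" entries in /proc/hrdp style output.
# The header line does not end in "protected", so it is skipped.
count_protected() {
    printf '%s\n' "$1" | awk '$NF == "protected" { n++ } END { print n+0 }'
}

sample='Disk Protection Enabled for reads and writes
274: /dev/sddw   71.224        protected
275: /dev/sddx   71.240        protected'

count_protected "$sample"    # prints 2
# Live system equivalent: count_protected "$(cat /proc/hrdp)"
```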


Using the mpfsctl utility


The MPFS Control Program, mpfsctl, is a command line utility that can be used by MPFS system administrators to troubleshoot and fine-tune their systems. The mpfsctl utility resides in /usr/sbin/mpfsctl. Table 6 on page 107 lists the mpfsctl commands.

Table 6  Command line interface summary

Command                 Description                                                    Page
mpfsctl help            Displays a list of mpfsctl commands.                           108
mpfsctl diskreset       Clears any file error conditions and causes MPFS to            109
                        retry I/Os through the SAN.
mpfsctl diskresetfreq   Clears file error conditions, and tells MPFS to retry          109
                        I/Os through the SAN in a specified timeframe.
mpfsctl max-readahead   Allows for adjustment of the number of kilobytes to            110
                        prefetch when MPFS detects sequential read requests.
mpfsctl prefetch        Sets the number of blocks of metadata to prefetch.             112
mpfsctl reset           Resets the statistical counters.                               113
mpfsctl stats           Displays statistical data about MPFS.                          114, 118
mpfsctl version         Displays the current version of MPFS software running          117
                        on the Linux server.
mpfsctl volmgt          Displays the volume management type used by each               117
                        mounted file system.

Error message

If an error is received when using any of these commands, ensure that the MPFS software has been loaded. Use the mpfsctl version command, described on page 117, to verify the version number of the MPFS software.


mpfsctl help

This command displays a list of the various mpfsctl program commands. Each command is explained in the rest of this chapter.

Why use this command
Get a listing of all available mpfsctl commands.

Command syntax
mpfsctl help

Input:
$ mpfsctl help

Output:
Usage: mpfsctl op ...
Operations supported (arguments in parentheses):
 diskreset       resets disk connections
 diskresetfreq   sets the disk reset frequency (seconds)
 max-readahead   set number of readahead pages
 help            print this list
 prefetch        set number of blocks to prefetch
 reset           reset statistics
 stats           print statistics
 version         display product compile time stamp
 volmgt          get volume management type
$

Use the man page facility on the Linux server for mpfsctl by typing man mpfsctl at the command line prompt.


mpfsctl diskreset

Why use this command
This command clears any file error conditions and tells MPFS to retry I/Os through the SAN. When MPFS detects that I/Os through the SAN are failing, it uses NFS to transport data. There are many reasons why a SAN I/O failure can occur. Use the mpfsctl diskreset command when:

- A cable has been disconnected. After the reconnection, use the mpfsctl diskreset command to immediately retry the SAN.
- A configuration change or a hardware failure has occurred and the MPFS I/O needs to be reset through the SAN after the repair or change has been completed.
- Network congestion has occurred and the MPFS I/O needs to be reset through the SAN when the network congestion has been identified and eliminated.

Command syntax
mpfsctl diskreset

Input:

$ mpfsctl diskreset

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

mpfsctl diskresetfreq

Why use this command
This command sets the frequency at which the kernel automatically clears errors associated with using the SAN. MPFS uses NFS until the errors are cleared, either manually with the mpfsctl diskreset command or automatically when the frequency is greater than zero.

Command syntax
mpfsctl diskresetfreq <interval_seconds>

where:
<interval_seconds> = time between the clearing of errors, in seconds

Input:

$ mpfsctl diskresetfreq 650


Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new interval has been set:


$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=650 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

The default for the disk-reset-interval parameter is 600 seconds with a minimum of 60 seconds and a maximum of 3600 seconds. Note the value change in the example.
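Scripts that call mpfsctl diskresetfreq can validate the interval against this 60-3600 second range first. A sketch; the valid_interval helper is not part of mpfsctl:

```shell
# Accept an interval only if it is a number inside the documented
# 60..3600 second range for disk-reset-interval.
valid_interval() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;   # reject empty or non-numeric input
    esac
    [ "$1" -ge 60 ] && [ "$1" -le 3600 ]
}

if valid_interval 650; then
    echo "ok"    # 650 is in range; safe for: mpfsctl diskresetfreq 650
fi
```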

mpfsctl max-readahead

This command allows for adjustment of the number of kilobytes of data to prefetch when MPFS detects sequential read requests.

The mpfsctl max-readahead command is designed for 2.6 Linux kernels to provide functionality similar to the kernel parameter entry /proc/sys/vm/max-readahead for 2.4 Linux kernels. One difference is that in 2.4 Linux kernels, this kernel parameter is system-wide, while the mpfsctl max-readahead parameter applies only to I/O issued to MPFS.

This option to the mpfsctl command allows experimentation with different settings on a currently running system. Changes to the mpfsctl max-readahead value are not persistent across system reboots. However, mpfsctl max-readahead value changes take effect immediately for file systems that are currently mounted.


To load a new value every time MPFS starts, remove the comment from the globReadahead parameter in the /etc/mpfs.conf file if the parameter is present. If it is not present, add globReadahead on a line by itself to change the default value.
Note: The max-readahead parameter value can be set to stay in effect after a reboot. Setting persistent parameter values on page 129 describes how to set this value persistently.

For example: globReadahead=120 (120 x 4 KB = 480 KB), where 120 pages equals 480 KB on an x86_64 machine.

Why use this command
Tune MPFS for higher read performance.

Command syntax
mpfsctl max-readahead <kilobytes>

where:
<kilobytes> = an integer between 0 and 32768

The minimum/default value 0 specifies use of the kernel default, which is 480 KB. The maximum value specifies 32,768 KB of data to be read ahead.

Input:

$ mpfsctl max-readahead 0

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new max readahead value has been set:
$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=600 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$
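The globReadahead page arithmetic mentioned earlier can be checked in the shell; a 4 KB page size is assumed, as on x86_64:

```shell
# globReadahead counts pages; with the assumed 4 KB page size of
# x86_64, 120 pages of readahead is 480 KB.
pages=120
page_kb=4
readahead_kb=$((pages * page_kb))
echo "${readahead_kb} KB"    # prints: 480 KB
```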

mpfsctl prefetch

This command sets the number of data blocks for which to prefetch metadata. Metadata is information that describes the location of file data on the SAN. It is this prefetched metadata that allows for fast, accurate access to file data through the SAN.

Why use this command
Tune MPFS for higher performance.

Command syntax
mpfsctl prefetch <blocks>

where:
<blocks> = an integer between 4 and 4096 that specifies the number of blocks for which to prefetch metadata.

A block contains 8 KB of metadata. Metadata can be prefetched that maps (describes) between 32 KB (4 blocks) and 32 MB (4096 blocks) of data. The default is 256 blocks, or 2 MB, for which metadata is prefetched. This works well for a variety of workloads, so leave this value unchanged unless higher performance is required. Changing the prefetch value does not affect current MPFS mounts, only subsequent mounts.
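The block-to-bytes arithmetic above can be verified directly in the shell:

```shell
# mpfsctl prefetch counts 8 KB metadata blocks:
#   4 blocks    ->    32 KB of mapped data (minimum)
#   256 blocks  ->  2048 KB = 2 MB (default)
#   4096 blocks -> 32768 KB = 32 MB (maximum)
block_kb=8
min_kb=$((4 * block_kb))
default_kb=$((256 * block_kb))
max_mb=$((4096 * block_kb / 1024))
echo "min ${min_kb} KB, default ${default_kb} KB, max ${max_mb} MB"
```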
Note: The prefetch parameter value can be set to stay in effect after a reboot. Setting persistent parameter values on page 129 describes how to set this value persistently.

Input:
$ mpfsctl prefetch 256

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


To verify that the new prefetch value has been set:


$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=650 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

mpfsctl reset

This command resets the statistical counters read by the mpfsctl stats command. Displaying statistics on page 118 provides additional information.

Why use this command
By default, statistics accumulate until the system is rebooted. Use the mpfsctl reset command to reset the counters to 0 before executing the mpfsctl stats command.

Command syntax
mpfsctl reset

Input:
$ mpfsctl reset

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


mpfsctl stats

This command displays a set of statistics showing the internal operation of the Linux server. By default, statistics accumulate until the system is rebooted. The command mpfsctl reset on page 113 provides information to reset the counters to 0 before executing the mpfsctl stats command.

Why use this command
The output of the mpfsctl stats command can help pinpoint performance problems.

Command syntax
mpfsctl stats

Input:
$ mpfsctl stats

Output:
=== OS INTERFACE
8534 reads totalling 107683852 bytes
5378 direct reads totalling 107683852 bytes
4974 writes totalling 74902093 bytes
2378 direct writes totalling 107683852 bytes
0 split I/Os
25 commits, 14 setattributes
4 fallthroughs involving 28 bytes
=== Buffer Cache
8534 disk reads totalling 107683852 bytes
4974 disk writes totalling 74902093 bytes
0 failed disk reads totalling 0 bytes
0 failed disk writes totalling 0 bytes
=== NFS Rewrite
6436 sync read calls totalling 107683852 bytes
3756 sync write calls totalling 74902093 bytes
=== Address Space Errors
321 swap failed writes
=== EXTENT CACHE
8364 read-cache hits (97%)
3111 write-cache hits (62%)
=== NETWORK INTERFACE
188 open messages, 187 closes
178 getmap, 1897 allocspace
825 flushes of 1618 extents and 9283 blocks, 43 releases
1 notify messages
=== ERRORS
0 WRONG_MSG_NUM, 0 QUEUE_FULL
0 INVALIDARG
0 client-detected sequence errors
0 RPC errors, 0 other errors
$

When the command has finished executing, the command line prompt is returned.


Understanding mpfsctl stats output

Each of the output sections is explained next.

OS INTERFACE
The first four lines show the number of NFS message types that MPFS either handles (reads, direct reads, writes, and direct writes) or watches and augments (split I/Os, commits, and setattributes). The last line shows the number of fallthroughs, or reads and writes attempted over MPFS but accomplished over NFS. The number of fallthroughs should be small. A large number of fallthroughs indicates that MPFS is not being used to its full advantage.

Buffer Cache
The first two lines show the number of disk reads and writes that MPFS performs to and from cache. The last two lines show the number of failed disk reads and writes to cache.

NFS Rewrite
The first line shows the number of synchronized read calls that NFS rewrites. The second line shows the number of synchronized write calls that NFS rewrites.

Address Space Errors
This line shows the number of failed writes due to memory pressure, which will be retried later.

EXTENT CACHE
These lines show the cache-hit rates. A low percentage (such as the percentage of write-cache hits in this example) indicates that the application has a random access pattern rather than a more sequential access pattern.

NETWORK INTERFACE
These lines show the number of FMP messages sent. In this example, the number is 187. The number of blocks (9283) per flush (825) is also significant; in this case it is an 11:1 ratio. Coalescing multiple blocks into a single flush is a major part of the MPFS strategy for reducing message traffic.

ERRORS
This section shows both serious errors and completely recoverable errors. The only serious errors are those described as either RPC or other. Contact EMC Customer Support if a significant number of errors are reported in a short period of time.
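Ratios such as blocks per flush can be pulled straight out of mpfsctl stats output with awk. A sketch against the NETWORK INTERFACE line from the sample output; the blocks_per_flush helper is illustrative:

```shell
# Compute the blocks-per-flush ratio from the NETWORK INTERFACE line:
# "<flushes> flushes of <extents> extents and <blocks> blocks, ..."
# Field 1 is the flush count and field 7 is the block count.
blocks_per_flush() {
    printf '%s\n' "$1" | awk '/flushes/ { printf "%d\n", $7 / $1 }'
}

line='825 flushes of 1618 extents and 9283 blocks, 43 releases'
blocks_per_flush "$line"    # prints 11 (9283 / 825, truncated)
```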


mpfsctl version

This command displays the version number of the MPFS software running on the Linux server.

Why use this command
Find the specific version number of the MPFS software running on the Linux server.

Command syntax
mpfsctl version

Input:
$ mpfsctl version

Output:
version: EMCmpfs.linux.6.0.2.x.x
/emc/test/mpfslinux (test@eng111111), 12/10/10 01:41:24 PM
$

When the command has finished executing, only the command line prompt is returned. If the MPFS software is not loaded, this error message appears:
/dev/mpfs : No such file or directory

Install the MPFS software by following the procedure in Installing the MPFS software on page 90.

mpfsctl volmgt

This command displays the volume management type used by each mounted file system.

Why use this command
Determine whether the volume management type is hierarchical volume management.

Command syntax
mpfsctl volmgt

Input:
$ mpfsctl volmgt

Output:
Fs ID         VolMgtType        Disk signature
1423638547    Hvl management
$

When the command has finished executing, only the command line prompt is returned.


Displaying statistics
MPFS statistics for the system can be retrieved by using the mpfsstat command.

Using the mpfsstat command

This command displays a set of statistics reporting I/O activity for MPFS. Without options, mpfsstat reports global statistics in megabytes per second. By default, statistics accumulate until the Linux server is rebooted. To reset the counters to 0, run mpfsstat with the -z option.

Why use this command
Help troubleshoot MPFS performance issues or gain general knowledge about the performance of MPFS.

Command syntax
mpfsstat [-d] [-h] [-k] [-z] [interval [count]]

where:
-d = report statistics about the MPFS disk interface
-h = print mpfsstat usage information
-k = report statistics in kilobytes instead of megabytes
-z = reset the counters to 0 before reporting statistics

Operands:
interval = report statistics every interval seconds
count = print only count lines of statistics

Examples

These examples illustrate the mpfsstat command output. This command prints the I/O rate for all MPFS-mounted file systems:
$ mpfsstat

Output:
r/s   w/s   dr/s  dw/s  mr/s  mw/s  mdr/s  mdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0
$

This command reports MPFS disk interface statistics:


$ mpfsstat -d


Output:
            disk                       syncnfs             failed        zero
r/s  w/s  mr/s  mw/s    r/s  w/s  mr/s  mw/s    r+w/s  mr+w/s  blocks
0    0    0.0   0.0     0    0    0.0   0.0     0      0.0     0
$

This command prints information about mpfsstat usage:


$ mpfsstat -h

Output:
Usage: mpfsstat [-dhkz] [interval [count]]
 -d   Print disk statistics
 -h   Print This screen
 -k   Print Statistics in Kilobytes per sec.
 -z   Clear all statistics
$

This command reports statistics in kilobytes instead of megabytes:


$ mpfsstat -k

Output:
r/s   w/s   dr/s  dw/s  kr/s  kw/s  kdr/s  kdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0
$

This command resets the counters to zero before reporting statistics:


$ mpfsstat -z

Output:
r/s   w/s   dr/s  dw/s  mr/s  mw/s  mdr/s  mdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0
$

This command prints two lines of statistics, waiting one second between prints:
$ mpfsstat 1 2

Output:
r/s   w/s   dr/s  dw/s  mr/s  mw/s  mdr/s  mdw/s  Fallthroughs
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0
0.0   0.0   0.0   0.0   0.0   0.0   0.0    0.0    0
$


Displaying MPFS device information


Several types of information can be displayed for MPFS devices, including the device's vendor ID, product ID, active/passive state, and mapped paths. The methods for displaying device information are:

- Using the mpfsinq command
- Listing the /proc/mpfs/devices file
- Using the hrdp command
- Using the mpfsquota command

This section describes each of these methods and the type of information provided by each.

Listing devices with the mpfsinq command

The mpfsinq command displays disk signature information, shows the paths where disks are mapped, and identifies whether the devices are active (available for I/O). The command syntax is:
mpfsinq [-c time] [-h] [-m] [-S] [-v] <devices>

where:
-c time = timeout for SCSI command in seconds
-h = print mpfsinq usage information
-m = used to write scripts based on the output of mpfsinq; it prints out information in a machine-readable format that can be easily edited with awk and sed
-S = tests the disk speed
-v = run in verbose mode, printing out additional information
<devices> = active devices available for I/O


To view the timeout in seconds for a SCSI command:


$ mpfsinq -c time

Output:
FNM00083700177002E-0007 DGC RAID 5 60:06:01:60:00:03:22:00:27:b4:48:b9:d8:03:de:11
path = /dev/sdbm (0x4400 | 0x4003f00) Active  SP-a3  /dev/sg65
path = /dev/sdbi (0x43c0 | 0x3003f00) Passive SP-b3  /dev/sg61
. . .
FNM000837001770000-000e DGC RAID 5 60:06:01:60:00:03:22:00:5d:59:b0:aa:71:02:de:11
path = /dev/sdbh (0x43b0 | 0x1001300) Active  SP-a2* /dev/sg60
path = /dev/sdee (0x8260 | 0x2000000) Passive SP-b2  /dev/sg167

* designates active path using non-default controller

To print information about mpfsinq usage:


$ mpfsinq -h

Output:
Usage: mpfsinq [options] <devices>
Options:
 -c time   timeout for scsi command in seconds
 -h        print this help information
 -m        machine readable format for output
 -S        test disk speed
 -v        verbose
$

To write scripts based on the output of the mpfsinq command with information printed out in a machine readable format that can be edited with awk and sed:
$ mpfsinq -m
FNM00083700177002E-0010 DGC RAID 5 60:06:01:60:00:03:22:00:4a:b2:38:7e:d9:03:de:11 /dev/sdcy Active /dev/sg103
FNM00083700177002E-0010 DGC RAID 5 60:06:01:60:00:03:22:00:4a:b2:38:7e:d9:03:de:11 /dev/sddb Passive /dev/sg106
FNM000837001770028-000c DGC RAID 5 60:06:01:60:00:03:22:00:8d:e7:5d:67:d9:03:de:11 /dev/sdcq Active /dev/sg95
. . .
FNM000837001770000-0001 DGC RAID 5 60:06:01:60:00:03:22:00:16:b6:c1:d5:6e:02:de:11 /dev/sdee Passive /dev/sg135
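As noted, the -m format is intended for awk and sed. A sketch that prints the device path of every Active entry; the field layout and the SIG-0010 sample signature are assumptions modeled on the output above:

```shell
# Print the /dev path for every Active device in mpfsinq -m style text.
# Assumed layout: ... serial path state sg-path, so the state is the
# next-to-last field and the device path the one before it.
active_paths() {
    printf '%s\n' "$1" | awk '$(NF-1) == "Active" { print $(NF-2) }'
}

sample='SIG-0010 DGC RAID 5 60:06:01:60 /dev/sdcy Active /dev/sg103
SIG-0010 DGC RAID 5 60:06:01:60 /dev/sddb Passive /dev/sg106'

active_paths "$sample"    # prints /dev/sdcy
```

On a live system the argument would be "$(mpfsinq -m)".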


To test the disk speed:


$ mpfsinq -S

Output:
FNM000837001770000-0017 DGC RAID 5 60:06:01:60:00:03:22:00:5c:59:b0:aa:71:02:de:11
path = /dev/sdbd (0x4370 | 0x1001200) Active  SP-a2* /dev/sg56   50MB/s
path = /dev/sdfj (0x8250 | 0x2001200) Passive SP-b2  /dev/sg166
. . .
FNM000837001770000-0001 DGC RAID 5 60:06:01:60:00:03:22:00:16:b6:c1:d5:be:02:de:11
path = /dev/sdc  (0x820  | 0x1000000) Active  SP-a2  /dev/sg3    50MB/s
path = /dev/sder (0x8130 | 0x2000000) Passive SP-b2  /dev/sg148

* designates active path using non-default controller

To display MPFS devices in verbose mode printing out additional information:


$ mpfsinq -v

Output:
VNX signature           vendor  product_id  device serial number or path info
0001874307271FA0-00f1   EMC     SYMMETRIX   60:06:04:80:00:01:87:43:07:27:53:30:32:34:35:32
path = /dev/sdig Active FA-51b /dev/sg240
0001874307271FA0-00ee   EMC     SYMMETRIX   60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:33
path = /dev/sdid Active FA-51b /dev/sg237
0001874307271FA0-00f0   EMC     SYMMETRIX   60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:44
path = /dev/sdif Active FA-51b /dev/sg239

Note: A passive path will be shown in the output only if there is a secondary path mapped to the device. Only the VNX for block has Active/Passive states. Symmetrix system arrays are Active/Active only as shown in Table 7 on page 122.
Table 7  MPFS device information

Vendor ID        State    I/O available
Symmetrix        Active   Yes
VNX for block    Active   Yes
VNX for block    Passive  No


Listing devices with the /proc/mpfs/devices file

The state of MPFS devices may also be shown by listing the /proc/mpfs/devices file. To list the MPFS devices in the /proc/mpfs/devices file:
$ cat /proc/mpfs/devices

Output:
VNX Signature              Path        State
FNM000837001770000-0001    /dev/sdc    active
FNM000837001770034-0001    /dev/sdak   active
FNM000837001770000-0002    /dev/sdf    active
FNM000837001770034-0002    /dev/sdan   active
FNM000837001770028-0002    /dev/sdg    active
FNM00083700177002E-0002    /dev/sdv    active
FNM000837001770000-0003    /dev/sdaq   active
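A script can tally the states in this file. A sketch against /proc/mpfs/devices-style text; the sample rows mirror the listing above and state_counts is an illustrative helper:

```shell
# Count devices per state in /proc/mpfs/devices style output,
# skipping the header row (NR > 1).
state_counts() {
    printf '%s\n' "$1" | awk 'NR > 1 { count[$3]++ }
                              END { for (s in count) print s, count[s] }'
}

sample='VNX Signature              Path        State
FNM000837001770000-0001    /dev/sdc    active
FNM000837001770034-0001    /dev/sdak   active'

state_counts "$sample"    # prints: active 2
# Live system equivalent: state_counts "$(cat /proc/mpfs/devices)"
```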

Displaying mpfs disk quotas

Use the mpfsquota command to display a user's MPFS disk quota and usage. Log in as root to use the optional username argument and view the limits of other users. Without options, mpfsquota displays warnings about mounted file systems where usage is over quota. Remotely mounted file systems that do not have quotas turned on are ignored.
Note: If quotas are not turned on in the file system, log in to the VNX for file and execute the nas_quotas commands.

Example

To set quotas in the server:


$ nas_quotas -edit -user -fs server2_fs1 501

Output:
Userid : 501
fs "server2_fs1" blocks (soft = 2000, hard = 3000) inodes (soft = 0, hard = 0)

To turn on the quotas:


$ nas_quotas -on -user -fs server2_fs1

Output:
done
$


To run a report on the quotas:


$ nas_quotas -report -fs server2_fs1

Output:
Report for user quotas on file system server2_fs1 mounted on /server2fs1
+------------+-------------------------------------+---------------------------+
|User        |          Bytes Used (1K)            |           Files           |
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
|            | Used      | Soft  | Hard  |Timeleft | Used | Soft| Hard|Timeleft|
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
|#501        |         8 |  2000 |  3000 |         |    1 |    0|    0|        |
|#32769      |   1864424 |     0 |     0 |         |  206 |    0|    0|        |
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
done
$

Mount the file system with the mpfs option from a Linux server. The command syntax is:
mpfsquota -v [username/UID]

where:
-v = required option
username/UID = the user ID

To display all MPFS-mounted file systems where quotas exist:
$ mpfsquota -v

To view the quota of UID 501:


$ mpfsquota -v 501

Output:
Filesystem   usage   quota   limit   timeleft   files   quota   limit   timeleft
/mnt             8    2000    3000                  1       0       0

Example

If quotas are turned off on the server, this message appears:


$ mpfsquota 501

Output:
No quota


Validating a Linux server installation

Use the mpfsinfo command to validate a Linux server and VNX for file installation by querying an FMP server (Data Mover) and verifying that the Linux server can access all the disks required to use MPFS for each exported file system. Supply the name or IP address of at least one FMP server, and ensure that Tool Command Language (Tcl) and Extended Tcl (TclX) are installed. Multiple FMP servers may be specified, in which case the validation is done for the exported file systems on all the listed servers.

mpfsinfo command

The command syntax is:


mpfsinfo [-v] [-h] <fmpserver>

where:
-v = run in verbose mode, printing additional information
-h = print mpfsinfo usage information
<fmpserver> = name or IP address of the FMP server

To query FMP server ka0abc12s402:
$ mpfsinfo ka0abc12s402

Output:
ka0abc12s402:/server4fs1 OK
ka0abc12s402:/server4fs2 OK
ka0abc12s402:/server4fs3 OK
ka0abc12s402:/server4fs4 OK
ka0abc12s402:/server4fs5 OK
$

When the Linux server cannot access all of the disks required for each exported file system, output such as the following appears. To query FMP server kc0abc17s901:
$ mpfsinfo kc0abc17s901

Output:
kc0abc17s901:/server9fs1 MISSING DISK(s)
        APM000637001700000-0049 MISSING
        APM000637001700000-004a MISSING
        APM000637001700000-0053 MISSING
        APM000637001700000-0054 MISSING
kc0abc17s901:/server9fs2 MISSING DISK(s)
        APM000637001700000-0049 MISSING
        APM000637001700000-004a MISSING
        APM000637001700000-0053 MISSING
        APM000637001700000-0054 MISSING
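The MISSING lines are easy to collect for comparison against the storage-group configuration. A minimal sketch, with sample mpfsinfo output embedded in a here-document standing in for a live query:

```shell
#!/bin/sh
# Collect the unique missing disk signatures from mpfsinfo output.
# The here-document stands in for `mpfsinfo <fmpserver>` output.
report=$(cat <<'EOF'
kc0abc17s901:/server9fs1 MISSING DISK(s)
        APM000637001700000-0049 MISSING
        APM000637001700000-004a MISSING
kc0abc17s901:/server9fs2 MISSING DISK(s)
        APM000637001700000-0049 MISSING
EOF
)
# Disk lines have exactly two fields ending in MISSING; file-system lines
# have three fields ("<server>:<fs> MISSING DISK(s)") and are skipped.
missing=$(printf '%s\n' "$report" |
    awk 'NF == 2 && $2 == "MISSING" { print $1 }' | sort -u)
echo "$missing"
```

Each signature in the resulting list can then be checked against the LUNs actually presented to the server.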


To run in verbose mode, which prints additional information:


$ mpfsinfo -v 172.24.107.243

Output:
172.24.107.243:/S2_Shg_mnt1 OK
        FNM000836000810000-0007 OK
        FNM000836000810000-0008 OK
172.24.107.243:/S2_Shg_mnt2 OK
        FNM000836000810000-0009 OK
        FNM000836000810000-000a OK
172.24.107.243:/S2_Shg_mnt3 OK
        FNM000836000810000-000d OK
        FNM000836000810000-000e OK
$

To print mpfsinfo usage information:


$ mpfsinfo -h

Output:
Usage: /usr/sbin/mpfsinfo [options] fmpserver...
options:
        -h help
        -v verbose

If the server is not available, this error message is displayed:


$ mpfsinfo -v ka0abc12s402
Warning: No MPFS disks found
ka0abc12s402: Cannot reach server.


Setting MPFS parameters


A list of MPFS parameters may be found in the /proc/mpfs/params file. The parameter settings shown are the default or recommended values. If the Linux server is rebooted, several of these parameters revert to their default values unless they are set to a persistent state. "Setting persistent parameter values" on page 129 explains the procedure for applying these parameters across the reboot process.

Displaying kernel parameters


Use Table 8 on page 128 as a guide for minimum and maximum settings for each parameter. To display the current settings:
$ cat /proc/mpfs/params

Output:
Kernel Parameters
DirectIO=1
disk-reset-interval=600 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxCommitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
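Individual settings can be pulled out of this name=value listing with a short helper. A sketch against sample data embedded in a here-document; on a live server, read /proc/mpfs/params directly:

```shell
#!/bin/sh
# Read one setting out of /proc/mpfs/params-style "name=value" output.
# Some values carry a unit suffix ("600 seconds"), so only the first
# word after the '=' is returned. The here-document is sample data.
params=$(cat <<'EOF'
DirectIO=1
disk-reset-interval=600 seconds
prefetch-size=256
EOF
)
get_param() {
    # Print the value for the parameter named in $1.
    printf '%s\n' "$params" |
        awk -F'=' -v p="$1" '$1 == p { split($2, a, " "); print a[1] }'
}
echo "disk-reset-interval is $(get_param disk-reset-interval)"
```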


Table 8  MPFS kernel parameters (page 1 of 2)

defer-close-max
    Default: 1024   Minimum: 0   Maximum: none
    Closes 12 files when the number of open files exceeds the
    defer-close-max value.

defer-close-seconds
    Default: 60   Minimum: 0   Maximum: none
    When an application closes a file, the FMP module will not send the
    FMP close command to the server until the defer-close-seconds time
    has passed.

DirectIO
    Default: 1
    Allows file reads and writes to go directly from an application to a
    storage device, bypassing the operating system buffer cache.

disk-reset-interval
    Default: 600   Minimum: 60   Maximum: 3,600
    Sets the timeframe in seconds for failback to start by using the SAN
    for all open files.

ecache-size
    Default: 2,047   Minimum: 31   Maximum: 16,383
    Sets the number of extents per file to keep in the extent cache.

ExostraMode
    Default: 0   Minimum/Maximum: do not change
    Use the default setting; for EMC use only.

MaxCommitBlocks
    Default: 2,048   Minimum/Maximum: do not change
    Sets the maximum number of blocks to commit in a single commit
    command.

MaxConcurrentNfsWrites
    Default: 128   Minimum/Maximum: do not change
    Sets the maximum number of concurrent NFS writes allowed.

max-retries
    Default: 10   Minimum: 2   Maximum: 20
    Sets the maximum number of SAN-based retries before failing over to
    NFS.

NotifyPort
    Default: 6,907   Minimum/Maximum: do not change
    The notification port that is used by default.

prefetch-size
    Default: 256   Minimum: 4   Maximum: 2,048
    Sets the number of prefetch blocks. Recommended size: no larger than
    512 unless instructed to do so by your EMC Customer Support
    Representative.

Readahead
    Default: 0   Minimum/Maximum: do not change
    Specifies the read ahead in pages. This parameter only applies to
    2.6 kernels.

ReadaheadForRandomIO
    Default: 0   Minimum: 0   Maximum: 1
    When an application reads a file randomly, the readahead size is
    reduced by the kernel. Setting this parameter to 1 prevents the
    kernel from reducing the readahead size.


Table 8  MPFS kernel parameters (page 2 of 2)

SmallFileThreshold
    Default: 0   Minimum: 0   Maximum: none
    Sets the size threshold for files. For files smaller than this
    value, I/O goes through NFS instead of MPFS. When set to 0, this
    function is disabled.

StatfsBsize
    Default: 65,536   Minimum: 8,192   Maximum: 2M
    The file system block size as returned by the statfs system call.
    This value is not used by MPFS, but some applications choose this as
    the size of their writes.

UsePseudo
    Default: 1
    Enables MPFS to use pseudo devices created by multipathing software,
    such as PowerPath (a) and the Device-Mapper Multipath tool (b).

a. MPFS supports PowerPath version 5.3 on RHEL 4 U6-U8, RHEL 5 U5-U7, SLES 10 SP3, and SLES 11 SP1. b. MPFS supports the Device-Mapper Multipath tool on RHEL 4 U6-U8 and RHEL 5 U5-U7.

Setting persistent parameter values


Parameters in /etc/mpfs.conf and /etc/sysconfig/EMCmpfs can be set to persistently remain in effect after a Linux server reboot.

mpfs.conf parameters

Prefetch, along with several other MPFS parameters, may be set to a persistent state by modifying the /etc/mpfs.conf file. These parameters are:

globPrefetchSize - Sets the number of blocks to prefetch when requesting mapping information.
globMaxRetries - Sets the number of retries for all FMP requests.
globDiskResetInterval - Sets the number of seconds between retries by using the SAN.

To view the /etc/mpfs.conf file:


$ cat /etc/mpfs.conf

Output:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#
#
# Users who supply the direct I/O flag when opening a file will get


# behavior that is dependent on the global setting of a parameter
# called "globDirectIO". There are four valid values for this
# parameter. They are:
#     0 -- No direct I/O support, return ENOSUPP
#     1 -- Direct I/O via MPFS
#     2 -- Direct I/O via NFS even on MPFS file systems
#     3 -- Direct I/O via MPFS, and optimized for DIO to pre-allocated
#          file, DIO to non-allocated file will fallback to NFS
#
# globDirectIO=1
#
# Set the number of seconds between retrying via SAN
#
# globDiskResetInterval_sec=600
#
# Set number of extents per file to keep in the extent cache.
# This should be a power of 2 minus 1. Too many extents means that
# searching the extent cache may be slow. Too few, and we will have to
# do too many RPCs.
#
# globECacheSize=2047
#
# Set the number of retries for all FMP requests
#
# globMaxRetries=10
#
# Set the number of blocks to prefetch when requesting mapping information
#
# globPrefetchSize=256
#
# Set number of simultaneous NFS writes to dispatch on SAN failure
#
# globMaxConcurrentNfsWrites=128
#
# Set optimal blocksize for MPFS file systems
#
# globStatfsBsize=65536
#
# Set number of readahead pages
# This is only used for 2.6 kernels. For 2.4 kernel users, please set
# vm.max_readahead
#
# globReadahead=250


#
# Readahead support for random I/O
# When an application reads a file randomly, the readahead size is
# reduced by kernel. By setting this parameter to 1, the readahead
# size is not reduced.
#
# globReadaheadForRandomIO=0
#
# Set maximum number of blocks to commit in a single commit command.
#
# globMaxCommitBlocks=2048
#
# Set the notification port if the mpfsd is unable to get the requested
# port when it starts.
#
# globNotifyPort=6907
#
# Enable MPFS to use Pseudo devices created by Multipathing software,
# namely PowerPath and Device-Mapper Multipath tool.
#
# globUsePseudo=1
#
# Set Defer Close Second for FileObj, 0 to disable
#
# globDeferCloseSec=60
#
# Set maximum Defer Close files
#
# globDeferCloseMax=1024
#
# Set size threshold for files
# For file smaller than this value, IO will go through NFS instead of MPFS
# When set to 0, this function is disabled.
#
# globSmallFileThreshold=0

To modify the /etc/mpfs.conf file, use vi or another text editor that does not add carriage returns to the file. Remove the comment from the parameter by deleting the hash mark (#) on the line, replace the value, and save the file. This example shows the file after modification:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#
# Set the number of blocks to prefetch when requesting mapping information
#
# globPrefetchSize=256
#
# Set number of readahead pages
#
# globReadahead=250
#
# Set the number of seconds between retrying via SAN
#
# globDiskResetInterval_sec=600
#
# Set the number of retries for all FMP requests
#
globMaxRetries=8
#
# Set the number of seconds between retrying via SAN
#
globDiskResetInterval_sec=700
#

In the example, globMaxRetries was changed to 8 and globDiskResetInterval_sec was changed to 700.
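Such an edit can also be scripted. The sketch below uncomments and sets a parameter in an mpfs.conf-style file; it runs against a scratch copy here, and on a live server the target would be /etc/mpfs.conf, edited as root (sed adds no carriage returns, so it is safe for this file; `sed -i` assumes GNU sed, as found on the supported Linux distributions):

```shell
#!/bin/sh
# Uncomment and set a glob* parameter in an mpfs.conf-style file.
# Runs against a scratch copy; on a live server the target would be
# /etc/mpfs.conf, edited as root.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Set the number of retries for all FMP requests
#
# globMaxRetries=10
EOF
set_param() {
    # set_param <file> <name> <value>: replace a commented or active
    # "name=value" line with an active "name=value" line.
    sed -i "s/^#* *$2=.*/$2=$3/" "$1"
}
set_param "$conf" globMaxRetries 8
result=$(grep '^globMaxRetries=' "$conf")
rm -f "$conf"
echo "$result"
```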

DirectIO support

DirectIO allows file reads and writes to go directly from an application to a storage device, bypassing the operating system buffer cache. This feature was added for applications that use the O_DIRECT flag when opening a file. When MPFS opens files by using DirectIO, the read/write behavior depends on the global setting of a parameter called DirectIO.
Note: DirectIO is only a valid option for 2.6 kernels (such as RHEL 4, RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6).

To examine the value of the MPFS DirectIO setting:


$ grep DirectIO /proc/mpfs/params
globDirectIO=1
$

Note: The default value is 1, meaning that DirectIO is enabled.


To change the DirectIO parameter value, use vi or another text editor that does not add carriage return characters to the file. Remove the comment from this line in the /etc/mpfs.conf file on the server:

globDirectIO=1

Change the value 1 to the desired DirectIO action value,

where:
0 = no DirectIO support, return ENOSUPP
1 = DirectIO via MPFS
2 = DirectIO via NFS even on MPFS
3 = DirectIO via MPFS, optimized for DirectIO to a pre-allocated file; DirectIO to a non-allocated file will fall back to NFS

After changing the DirectIO parameter in the /etc/mpfs.conf file, activate DirectIO for MPFS:

1. Unmount MPFS:
$ umount -a -t mpfs

2. Stop the MPFS service:


$ service mpfs stop

3. Restart the MPFS service:


$ service mpfs start

4. Remount MPFS:
$ mount -a -t mpfs
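The four steps above can be collected into a small helper. This sketch adds a DRY_RUN mode that prints the commands instead of executing them, so the sequence can be checked before running it on a live server (where the mpfs service and MPFS mounts must be present):

```shell
#!/bin/sh
# Sketch of the DirectIO re-activation sequence (steps 1-4 above).
# With DRY_RUN=1 the commands are printed rather than executed.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}
reload_mpfs() {
    run umount -a -t mpfs      # 1. Unmount MPFS
    run service mpfs stop      # 2. Stop the MPFS service
    run service mpfs start     # 3. Restart the MPFS service
    run mount -a -t mpfs       # 4. Remount MPFS
}
DRY_RUN=1
reload_mpfs
```

Run it with DRY_RUN unset (as root) to perform the actual restart.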

Rebooting the Linux server also activates the changes made in the /etc/mpfs.conf file. Changes to global parameters in the /etc/mpfs.conf file persist across reboots.

Example

In this example, the /etc/mpfs.conf file has been modified so that MPFS does not use DirectIO when writing to and reading from MPFS. Type the command:

$ cat /etc/mpfs.conf


Output:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#
# Users who supply the direct I/O flag when opening a file will get
# behavior that is dependent on the global setting of a parameter
# called "globDirectIO". There are four valid values for this
# parameter. They are:
#     0 -- No direct I/O support, return ENOSUPP
#     1 -- Direct I/O via MPFS
#     2 -- Direct I/O via NFS even on MPFS file systems
#     3 -- Direct I/O via MPFS, and optimized for Direct I/O to
#          pre-allocated file, Direct I/O to non-allocated file will
#          fallback to NFS
#
globDirectIO=0

Note: The O_DIRECT flag is used by the DirectIO parameter. The man 2 open man page contains detailed information on the O_DIRECT flag.

Asynchronous I/O interfaces allow an application thread to dispatch an I/O without waiting for the I/O operation to complete. Later, the thread can check whether the I/O has completed. This feature is for applications that use the aio_read and aio_write interfaces. Asynchronous I/O is supported natively in 2.6 Linux kernels.

EMCmpfs parameters

The /etc/sysconfig/EMCmpfs file contains these parameters:

MPFS_DISCOVER_SLEEP_TIME - When mpfsd runs as a daemon, it waits this number of seconds before performing a disk rediscovery. The default is 900 seconds. If an error occurs on any VNX for file volume, the daemon wakes so that it can perform a rediscovery without waiting the full sleep time.

HRDP_SLEEP_TIME - The number of seconds after which the daemon periodically wakes up, notices whether there are additional disks, and protects them if they are VNX for file disks. The default is 300 seconds.


MPFS_ISCSI_PID_FILE - Customizes the name of the file containing the process ID (PID) of the iSCSI daemon.

MPFS_ISCSI_REDISCOVER_TIME - The number of seconds to wait to allow iSCSI to rediscover new LUNs. The default is 10 seconds.

MPFS_SCSI_CMD_TIMEOUT - The number of seconds to wait for SCSI commands before timing out. The default is 5 seconds.

PERF_TIMEOUT - The number of seconds to wait before sending performance packets after the last hello message. The default is 900 seconds.

MPFS_DISKSPEED_BUF_SIZE - Sets the default disk speed test buffer size. The default is 5 MB.

MPFS_MOUNT_HVL - Sets the default behavior for using hierarchical volume management (hvm). HVM uses protocols that allow the Linux server to conserve memory and CPU resources. The default value of 1 uses hierarchical volume management; a value of 0 does not use hierarchical volume management even if it is supported by the server. The value can be changed by using the -o hvl=0 option to disable hvm or the -o hvl=1 option to enable hvm on the mount command. "Hierarchical volume management" on page 33 describes hierarchical volumes and their management.

MPFS_DISCOVER_LOAD_BALANCE - Based on VNX for file best practices to statically load-balance the VNX for block. Load-balancing the Symmetrix system is not necessary. The default is to disable userspace load-balancing.

To view the parameters:


$ cat /etc/sysconfig/EMCmpfs
# Default values for MPFS daemons
#
# /** Default amount of time to sleep between rediscovery */
# MPFS_DISCOVER_SLEEP_TIME=900
#
# /** Default amount of time to sleep between reprotection of disks */
# HRDP_SLEEP_TIME=300
#
# /** Default name of iscsi pid file */
# MPFS_ISCSI_PID_FILE=/var/run/iscsi.pid


#
# /** Default time to allow iscsi to rediscover new LUNs */
# MPFS_ISCSI_REDISCOVER_TIME=10
#
# /** Default timeout for scsi commands (inquiry, etc) in seconds */
# MPFS_SCSI_CMD_TIMEOUT=5
#
# /** Number of seconds to send performance packets after last hello message */
# PERF_TIMEOUT=900
#
# /** Default disk speed test buffer size, unit is MB */
# MPFS_DISKSPEED_BUF_SIZE=5
#
# /** The value of this determines the default behavior for using
#     hierarchical volume management.
#     Assign a value of 1 to use hierarchical volume management by
#     default if it is supported by the server.
#     Assign a value of 0 to not use hierarchical volume management by
#     default. The default value can be changed by using the -o hvl=0
#     or -o hvl=1 option on the mount command. */
# MPFS_MOUNT_HVL=1
#
# /** Default value for Multipath static load-balancing */
# The static load-balancing is based on VNX best practice for VNX for
# block backend. It is not useful for Symmetrix backend.
# Set 1 to get optimized load-balancing for multiple clients
# Set 2 to get optimized load-balancing for single client
# Set 0 to disable userspace load-balancing
# MPFS_DISCOVER_LOAD_BALANCE=0

To modify the /etc/sysconfig/EMCmpfs file, use vi or another text editor that does not add carriage returns to the file. Remove the comment from the parameter by deleting the hash mark (#) on the line, replace the value, and save the file. This example shows this file after modification:
# Default values for MPFS daemons
#
# /** Default amount of time to sleep between rediscovery */
MPFS_DISCOVER_SLEEP_TIME=800
$


A
File Syntax Rules

This appendix describes the file syntax rules to follow when creating a text (.txt) file to create a site and add Linux hosts. This appendix includes these topics:

File syntax rules for creating a site ................................................ 138 File syntax rules for adding hosts.................................................. 139


File syntax rules for creating a site


The file syntax rules for creating a text (.txt) file used to create a site are described below.

VNX for file with iSCSI ports


Command syntax

To create a text file for a site with a VNX for file with iSCSI ports:

cssite sn=<site-name> spw=<site-password> un=<VNX-user-name> pw=<VNX-password> addr=<VNX-name>

where:
<site-name> = name of the site
<site-password> = password for the site
<VNX-user-name> = username of the Control Station
<VNX-password> = password for the Control Station
<VNX-name> = network name or IP address of the Control Station

Example

To create a site with a site name of mysite, a site password of password, a VNX for file username of VNXtest, a VNX for file password of swlabtest, and an IP address of 172.24.107.242:
cssite sn=mysite spw=password un=VNXtest pw=swlabtest addr=172.24.107.242
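When many sites are scripted, the cssite line can be generated from shell variables. A convenience sketch following the syntax above; the resulting text file is what the site-creation tool consumes:

```shell
#!/bin/sh
# Build a cssite line for the site-creation text file, following the
# "cssite sn=... spw=... un=... pw=... addr=..." syntax.
make_cssite() {
    # make_cssite <site-name> <site-password> <VNX-user-name> \
    #             <VNX-password> <VNX-name>
    printf 'cssite sn=%s spw=%s un=%s pw=%s addr=%s\n' \
        "$1" "$2" "$3" "$4" "$5"
}
make_cssite mysite password VNXtest swlabtest 172.24.107.242
```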


File syntax rules for adding hosts


The file syntax rules for creating a text (.txt) file used to add Linux hosts are described below.

Linux host
Command syntax

To create one or more Linux hosts that share the same username (root) and password:
linuxhost un=<host-root-user-name> pw=<host-password> <host-name1>[...<host-nameN>]

where:
<host-root-user-name> = root username of the Linux host
<host-password> = password for the Linux host
<host-name1>[...<host-nameN>] = one or more Linux hostnames or IP addresses

Example

To create a Linux host with a root username of test, a Linux host password of swlabtest, and Linux host IP addresses of 172.24.107.242 and 135.79.124.68:
linuxhost un=test pw=swlabtest 172.24.107.242 135.79.124.68
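The linuxhost line can likewise be generated from variables; any number of hostnames or IP addresses may be appended. A convenience sketch following the syntax above:

```shell
#!/bin/sh
# Build a linuxhost line for the hosts text file, following the
# "linuxhost un=... pw=... <host>..." syntax.
make_linuxhost() {
    # make_linuxhost <host-root-user-name> <host-password> <host-name>...
    user=$1
    pw=$2
    shift 2
    printf 'linuxhost un=%s pw=%s %s\n' "$user" "$pw" "$*"
}
make_linuxhost test swlabtest 172.24.107.242 135.79.124.68
```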


B
Error Messages and Troubleshooting

This appendix describes messages that the Linux server writes to the system error log and troubleshooting problems, causes, and solutions. This appendix includes these topics:

Linux server error messages........................................................... 142 Troubleshooting................................................................................ 143 Known problems and limitations .................................................. 150


Linux server error messages


Table 9 on page 142 describes Linux server error messages.
Table 9  Linux server error messages

Message: notification error on session create
Explanation: The session was not created. Verify that the mpfsd process is running, as described in "Troubleshooting" on page 143.

Message: session to server lost <server_name>
         session expired now=<time>, expiration=<time>
Explanation: The Linux server has lost contact with the VNX for file. This loss of contact is probably due to a network or server problem and not an I/O error.

Message: reestablished session OK
         handles may have been lost
Explanation: The first message indicates the Linux server has re-established contact with the VNX for file. The second indicates an attempt at re-establishing contact has been made, but has not succeeded. Neither message indicates an I/O error.

Message: could not find disk signature for <nnnnn>
Explanation: The VNX for file specified a storage location, <nnnnn>, that is inaccessible from the Linux server. (<nnnnn> is the disk signature.)

Message: could not start <xxx> thread
Explanation: A component of the Linux server failed to start. (<xxx> is a component of the Linux server.)

Message: error accessing volume. I/O routed to LAN
Explanation: This message is printed in the log file when the Linux server receives an error while communicating with Symmetrix system storage over FC. All subsequent I/O operations for the file are done over NFS until the file is reopened. After the file is reopened, the FC SAN path is retried.


Troubleshooting
This section lists problems, causes, and solutions for troubleshooting the EMC VNX MPFS software. The EMC VNX MPFS for Linux Clients Release Notes provide additional information on troubleshooting, known problems, and limitations.

Installing MPFS software

These problems may be encountered while installing the MPFS software.


Problem
Installation of the MPFS software fails with an error message such as:

Installing ./EMCmpfs-5.0.32.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...    ##################################### [100%]
   1:EMCmpfs    ##################################### [100%]
The kernel that you are running, 2.6.22.18-0.2-default, is not
supported by MPFS. The following kernels are supported by MPFS on SuSE:
SuSE-2.6.16.46-0.12-default
SuSE-2.6.16.46-0.12-smp
SuSE-2.6.16.46-0.14-default
SuSE-2.6.16.46-0.14-smp
SuSE-2.6.16.53-0.8-default
SuSE-2.6.16.53-0.8-smp
SuSE-2.6.16.53-0.16-default
SuSE-2.6.16.53-0.16-smp
SuSE-2.6.16.60-0.21-default
SuSE-2.6.16.60-0.21-smp
SuSE-2.6.16.60-0.27-default
SuSE-2.6.16.60-0.27-smp
SuSE-2.6.16.60-0.37-default
SuSE-2.6.16.60-0.37-smp
SuSE-2.6.16.60-0.60.1-default
SuSE-2.6.16.60-0.60.1-smp
SuSE-2.6.16.60-0.69.1-default
SuSE-2.6.16.60-0.69.1-smp
SuSE-2.6.5-7.282-default
SuSE-2.6.5-7.282-smp
SuSE-2.6.5-7.283-default
SuSE-2.6.5-7.283-smp
SuSE-2.6.5-7.286-default
SuSE-2.6.5-7.286-smp
SuSE-2.6.5-7.287.3-default
SuSE-2.6.5-7.287.3-smp
SuSE-2.6.5-7.305-default
SuSE-2.6.5-7.305-smp
SuSE-2.6.5-7.308-default
SuSE-2.6.5-7.308-smp

Cause
The kernel being used is not supported.

Solution
Use a supported OS kernel. The EMC VNX MPFS for Linux Clients Release Notes provide a list of supported kernels.


Problem
The MPFS software does not run or the MPFS daemon did not start.

Cause
The MPFS software may not be installed.

Solution
Verify that the MPFS software is installed and the MPFS daemon has started by using this procedure:

1. Use RPM to verify the installation:

   rpm -q EMCmpfs

   If the MPFS software is installed properly, the output is displayed as:

   EMCmpfs-5.0.x-x

   Note: Alternatively, use the mpfsctl version command to verify that the Linux server software is installed. The mpfsctl man page or "Using the mpfsctl utility" on page 107 provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:

   ps -ef | grep mpfsd

   The output will look like this if the MPFS daemon has started:

   root  847  1  0 15:19 ?  00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show that the MPFS daemon process is running, as root, start MPFS by using this command:

   $ /etc/rc.d/init.d/mpfs start
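Steps 1-3 can be wrapped in an automated check. This sketch tests for mpfsd in process-listing output; the here-document stands in for `ps -ef` output on a live server:

```shell
#!/bin/sh
# Automated form of the daemon check above: confirm that mpfsd appears
# in process-listing output. The here-document stands in for `ps -ef`.
pslist=$(cat <<'EOF'
root   847     1  0 15:19 ?  00:00:00 /usr/sbin/mpfsd
root  1022   847  0 15:20 ?  00:00:00 sshd
EOF
)
if printf '%s\n' "$pslist" | grep -q '/usr/sbin/mpfsd'; then
    status="mpfsd running"
else
    status="mpfsd NOT running: start it with /etc/rc.d/init.d/mpfs start"
fi
echo "$status"
```

On a live server, replace the here-document with `ps -ef` itself.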

Mounting and unmounting a file system

These problems may be encountered in mounting or unmounting a file system. Refer to Mounting MPFS on page 84 and Unmounting MPFS on page 88 for more information.

Problem
The mount command displays messages about unknown file systems.

Cause
An option was specified that is not supported by the mount command.

Solution
Check the mount command options and correct any unsupported options:
1. Display the mount_mpfs man page to find supported options by typing man mount_mpfs at the command prompt.
2. Run the mount command again with the correct options.


Problem
The mount command displays this message:

mount: must be root to use mount

Cause
Permissions are required to use the mount command.

Solution
Log in as root and try the mount command again.

Problem
The mount command displays this message:

nfs mount: get_fh: <hostname>:: RPC: Rpcbind failure - RPC: Timed out

Cause
The VNX Server or NFS server specified is down.

Solution
Check that the correct server name was specified and that the server is up with an exported file system.

Problem
The mount command displays this message:

$ mount -t mpfs 172.24.107.242:/rcfs /mnt/mpfs
Volume APM000643042520000-0008 not found.
Error mounting /mnt/mpfs via MPFS

Cause
The MPFS mount operation could not find the physical disk associated with the specified file system.

Solution
Use the mpfsinq command to verify that the physical disk device associated with the file system is connected to the server over FC and is accessible from the server, as described in "Listing devices with the mpfsinq command" on page 120.

Problem
The mount command displays this message:

mount: /<filesystem>: No such file or directory

Cause
No mount point exists.

Solution
Create a mount point and try the mount again.


Problem
The mount command displays this message:

mount: fs type mpfs not supported by kernel.

Cause
The MPFS software is not installed.

Solution
Install the MPFS software and try the mount command again.

Problem
A file system cannot be unmounted. The umount command displays this message:

umount: Device busy

Cause
Existing processes were using the file system when an attempt was made to unmount it, or the umount command was issued from within the file system itself.

Solution
Identify all processes using the file system, stop them, and unmount the file system again:
1. Use the fuser command to identify all processes using the file system.
2. Use the kill -9 command to stop all processes.
3. Run the umount command again.

Problem
The mount command hangs.

Cause
The server specified with the mount command does not exist or cannot be reached.

Solution
Stop the mount command, check for a valid server, and retry the mount command:
1. Interrupt the mount command by using the interrupt key combination (usually Ctrl-C).
2. Try to reach the server by using the ping command.
3. If the ping command succeeds, retry the mount.


Problem
The mount command displays the message: permission denied.

Cause 1
Permissions are required to access the file system specified in the mount command.

Cause 2
You are not the root user on the server.

Solution 1
Ensure that the file system has been exported with the right permissions, or set the right permissions for the file system (the EMC VNX Command Line Interface Reference for File manual provides information on permissions).

Solution 2
Use the su command to become the root user.

Problem
The mount command displays the message: RPC program not registered.

Cause
The server specified in the mount command is not a VNX Server or NFS server.

Solution
Check that the correct server name was specified and that the server has an exported file system.

Problem
The mount command logs this message in the /var/log/messages file: Couldn't find device during mount.

Cause
The MPFS mount operation could not find the physical disk associated with the specified file system.

Solution
Use either the fdisk command or the mpfsinq command (as described in "Listing devices with the mpfsinq command" on page 120) to verify that the physical disk device associated with the file system is connected to the server over FC and is accessible from the server.


Problem: The mount command displays this message: RPC: Unknown host.

Cause: The server name specified in the mount command does not exist on the network.
Solution: Check the server name and use the IP address if necessary to mount the file system:
1. Ensure that the correct server name is specified in the mount command.
2. If the correct name was specified, check whether the host's /etc/hosts file or the NIS/DNS map contains an entry for the server.
3. If the server does appear in /etc/hosts or the NIS/DNS map, check whether the server responds to the ping command.
4. If the ping command succeeds, try using the server's IP address instead of its name in the mount command.
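The name-resolution check in steps 2 through 4 can be sketched as a small script: look the server up the same way the resolver would (/etc/hosts, then NIS/DNS) and print the address that could be passed to mount in place of the name. The server name and export path are placeholders, not values from your configuration.

```shell
#!/bin/sh
# Resolve a server name through the normal resolver path (/etc/hosts,
# NIS/DNS) and print its first IPv4 address.
resolve_server() {
    getent ahostsv4 "$1" | awk '{ print $1; exit }'
}

addr=$(resolve_server localhost)   # substitute your VNX server name
if [ -n "$addr" ]; then
    echo "try: mount -t mpfs ${addr}:/export /mnt"   # placeholder export path
else
    echo "name not found in /etc/hosts or NIS/DNS"
fi
```

An empty result points at name service, not MPFS: fix /etc/hosts or the NIS/DNS map, or mount by IP address.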

Problem: The mount command displays this message:
$ mount -t mpfs ka0abc12s401:/server4fs1 /mnt
mount: fs type mpfs not supported by kernel

Cause: The MPFS software is not installed on the Linux server.
Solution: Install the MPFS software and try the mount command again:
1. Install the MPFS software on the Linux server as described in Installing the MPFS software on page 90.
2. Run the mount command again.
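The kernel lists every file system type it can mount in /proc/filesystems, so a quick grep shows whether an mpfs type has been registered (which happens once the MPFS software loads its module). The sketch below takes an optional file argument so the check can be demonstrated against sample data; the sample contents are illustrative, not captured from a real system.

```shell
#!/bin/sh
# Check whether the kernel advertises a given file system type.
# Second argument defaults to the live /proc/filesystems.
fs_supported() {
    grep -qw "$1" "${2:-/proc/filesystems}"
}

# Illustrative sample of what /proc/filesystems might contain on a
# server where the MPFS module has registered itself:
cat > /tmp/filesystems.sample <<'EOF'
nodev	proc
nodev	sysfs
	ext3
	mpfs
EOF

if fs_supported mpfs /tmp/filesystems.sample; then
    echo "mpfs registered with the kernel"
fi
```

If mpfs is absent from the live /proc/filesystems after installation, the MPFS module did not load, which is a different failure than a mistyped mount command.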

Miscellaneous issues

These miscellaneous issues may be encountered with a Linux server.

Problem: The user cannot write to a mounted file system.

Cause: Write permission is required on the file system, or the file system is mounted as read-only.
Solution: Verify that you have write permission and try writing to the mounted file system again:
1. Check that you have write permission on the file system.
2. Try unmounting the file system (as described in Unmounting MPFS on page 88) and remounting it in read/write mode.
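Whether a mount point was mounted read-only can be read directly from the options field of /proc/mounts. This sketch parameterizes the mounts file so it can run against sample data; the sample lines are illustrative, not output from a real MPFS server.

```shell
#!/bin/sh
# Return success if the given mount point carries the "ro" option in the
# mounts table. File argument defaults to the live /proc/mounts.
is_readonly() {
    awk -v mp="$1" '$2 == mp { split($4, o, ","); for (i in o) if (o[i] == "ro") found = 1 }
                    END { exit found ? 0 : 1 }' "${2:-/proc/mounts}"
}

# Illustrative sample mounts table:
cat > /tmp/mounts.sample <<'EOF'
server4:/server4fs1 /mnt mpfs ro,noatime 0 0
server4:/server4fs2 /mnt2 mpfs rw,noatime 0 0
EOF

is_readonly /mnt /tmp/mounts.sample && echo "/mnt is read-only - remount read/write"
```

A "ro" result means step 2 of the solution (unmount, then remount read/write) applies; otherwise the failure is a permissions problem on the file system itself.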


Problem: This message appears: NFS server not responding.

Cause: The VNX Server is unavailable due to a network-related problem, a reboot, or a shutdown.
Solution: Check whether the server responds to the ping command. Also try unmounting and remounting the file system.

Problem: Removing the MPFS software package fails.

Cause 1: The MPFS software package is not installed on the Linux server.
Solution 1: Ensure that the MPFS software package name is spelled correctly, with uppercase and lowercase letters specified. If the MPFS software package name is spelled correctly, verify that the MPFS software is installed on the Linux server:
$ rpm -q EMCmpfs
If the MPFS software is installed properly, the output is displayed as:
EMCmpfs-5.0.32-xxx
If the MPFS software is not installed, the output is displayed as:
Package "EMCmpfs" was not found.

Cause 2: The removal was attempted while one or more MPFS-mounted file systems were active and I/O was taking place on an active file system. A message such as the following appears on the Linux server:
ERROR: Mounted MPFS filesystems found on the system. Please unmount all MPFS filesystems before removing the product.
Solution 2: Unmount MPFS and try removing the MPFS software package again:
1. Stop the I/O.
2. Unmount all active MPFS file systems by using the umount command as described in Unmounting MPFS on page 88.
3. Restart the removal process.
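The pre-removal check behind Solution 2 can be sketched as a scan of /proc/mounts for file systems of type mpfs. The mounts file is parameterized so the logic can be shown against sample data; the sample line and the rpm -e invocation in the comment are illustrative assumptions, not commands taken from the MPFS removal procedure.

```shell
#!/bin/sh
# List the mount points of all active MPFS file systems.
# File argument defaults to the live /proc/mounts.
mpfs_mounts() {
    awk '$3 == "mpfs" { print $2 }' "${1:-/proc/mounts}"
}

# Illustrative sample mounts table:
cat > /tmp/mounts.sample <<'EOF'
server4:/server4fs1 /mnt mpfs rw,noatime 0 0
/dev/sda1 / ext3 rw 0 0
EOF

for mp in $(mpfs_mounts /tmp/mounts.sample); do
    echo "umount $mp   # unmount before removing the package (e.g. rpm -e EMCmpfs)"
done
```

An empty list means no MPFS file systems are mounted and the package removal can proceed.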

Known problems and limitations


The EMC VNX MPFS for Linux Clients Release Notes provide known problems and limitations for MPFS clients.


Glossary

This glossary defines terms useful for MPFS administrators.

C
Challenge Handshake Authentication Protocol (CHAP)
Access control protocol for secure authentication using shared passwords called secrets.

client
Front-end device that requests services from a server, often across a network.

command line interface (CLI)
Interface for typing commands through the Control Station to perform tasks that include the management and configuration of the database and Data Movers and the monitoring of statistics for the VNX for file cabinet components.

Common Internet File System (CIFS)
File-sharing protocol based on the Microsoft Server Message Block (SMB). It allows users to share file systems over the Internet and intranets.

Control Station
Hardware and software component of the VNX for file that manages the system and provides the user interface to all VNX for file components.


D
daemon
UNIX process that runs continuously in the background, but does nothing until it is activated by another process or triggered by a particular event.

Data Mover
In a VNX for file, a cabinet component running its own operating system that retrieves data from a storage device and makes them available to a network client. This is also referred to as a blade.

disk volume
On VNX for file, a physical storage unit as exported from the system. All other volume types are created from disk volumes. See also metavolume, slice volume, stripe volume, and volume.

E
extent
Set of adjacent physical blocks.

F
fallthrough
Fallthrough occurs when MPFS temporarily employs the NFS or CIFS protocol to provide continuous data availability, reliability, and protection while block I/O path congestion or unavailability is resolved. This fallthrough technology is seamless and transparent to the application being used.

Fast Ethernet
Any Ethernet specification with a speed of 100 Mb/s. Based on the IEEE 802.3u specification.

Fibre Channel
Nominally 1 Gb/s data transfer interface technology, although the specification allows data transfer rates from 133 Mb/s up to 4.25 Gb/s. Data can be transmitted and received simultaneously. Common transport protocols, such as Internet Protocol (IP) and Small Computer Systems Interface (SCSI), run over Fibre Channel. Consequently, a single connectivity technology can support high-speed I/O and networking.

File Mapping Protocol (FMP)
File system protocol used to exchange file layout information between an MPFS client and the VNX for file. See also Multi-Path File Systems (MPFS).

file system
Method of cataloging and managing the files and directories on a VNX for block.


G
gateway
VNX for file that is capable of connecting to multiple systems, either directly (direct-connected) or through a Fibre Channel switch (fabric-connected).

Gigabit Ethernet
Any Ethernet specification with a speed of 1000 Mb/s. IEEE 802.3z defines Gigabit Ethernet over fiber and cable, which has a physical media standard of 1000Base-X (1000Base-SX short wave, 1000Base-LX long wave) and 1000Base-CX shielded copper cable. IEEE 802.3ab defines Gigabit Ethernet over an unshielded twisted pair (1000Base-T).

H
host
Addressable end node capable of transmitting and receiving data.

I
Internet Protocol (IP)
Network layer protocol that is part of the Open Systems Interconnection (OSI) reference model. IP provides logical addressing and service for end-to-end delivery.

Internet Protocol address (IP address)
Address uniquely identifying a device on any TCP/IP network. Each address consists of four octets (32 bits), represented as decimal numbers separated by periods. An address is made up of a network number, an optional subnetwork number, and a host number.

Internet SCSI (iSCSI)
Protocol for sending SCSI packets over TCP/IP networks.

iSCSI initiator
iSCSI endpoint, identified by a unique iSCSI name, which begins an iSCSI session by issuing a command to the other endpoint (the target).

iSCSI target
iSCSI endpoint, identified by a unique iSCSI name, which executes commands issued by the iSCSI initiator.

K
kernel
Software responsible for interacting most directly with the computer's hardware. The kernel manages memory, controls user access, maintains file systems, handles interrupts and errors, performs input and output services, and allocates computer resources.

L
logical device
One or more physical devices or partitions managed by the storage controller as a single logical entity.

logical unit (LU)
For iSCSI on a VNX for file, a logical unit is an iSCSI software feature that processes SCSI commands, such as reading from and writing to storage media. From an iSCSI host perspective, a logical unit appears as a disk device.

logical unit number (LUN)
Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

logical volume
Logical devices aggregated and managed at a higher level by a volume manager. See also logical device.

M
metadata
Data that contains structural information, such as access methods, about itself.

metavolume
On a VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hyper volume or hyper. Every file system must be created on top of a unique metavolume. See also disk volume, slice volume, stripe volume, and volume.

mirrored pair
Logical volume with all data recorded twice, once on each of two different physical devices.

mirroring
Method by which the VNX for block maintains two identical copies of a designated volume on separate disks.

mount
Process of attaching a subdirectory of a remote file system to a mount point on the local machine.

mount point
Local subdirectory to which a mount operation attaches a subdirectory of a remote file system.


MPFS over iSCSI
Multi-Path File System over iSCSI-based clients. An MPFS client running an iSCSI initiator works in conjunction with an IP-SAN switch containing an iSCSI-to-SAN blade. The IP-SAN blade provides one or more iSCSI targets that transfer data to the storage area network (SAN) systems. See also Multi-Path File Systems (MPFS).

MPFS session
Connection between an MPFS client and a VNX for file.

MPFS share
Shared resource designated for multiplexed communications using the MPFS file system.

Multi-Path File Systems (MPFS)
VNX for file feature that allows heterogeneous servers with MPFS software to concurrently access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix or VNX for block. MPFS adds a lightweight protocol called File Mapping Protocol (FMP) that controls metadata operations.

N
nested mount file system (NMFS)
File system that contains the nested mount root file system and component file systems.

nested mount file system root
File system on which the component file systems are mounted read-only except for mount points of the component file systems.

network-attached storage (NAS)
Specialized file server that connects to the network. A NAS device, such as VNX for file, contains a specialized operating system and a file system, and processes only I/O requests by supporting popular file-sharing protocols such as NFS and CIFS.

network file system (NFS)
Network file system protocol allowing a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks.

P
PowerPath
EMC host-resident software that integrates multiple path I/O capabilities, automatic load balancing, and path failover functions into one comprehensive package for use on open server platforms connected to Symmetrix or VNX for block.


R
Redundant Array of Independent Disks (RAID)
Method for storing information where the data is stored on multiple disk drives to increase performance and storage capacities and to provide redundancy and fault tolerance.

S
server
Device that handles requests made by clients connected through a network.

slice volume
On a VNX for file, a logical piece or specified area of a volume used to create smaller, more manageable units of storage. See also disk volume, metavolume, stripe volume, and volume.

small computer system interface (SCSI)
Standard set of protocols for host computers communicating with attached peripherals.

storage area network (SAN)
Network of data storage disks. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. See also network-attached storage (NAS).

storage processor (SP)
Storage processor on a VNX for block. On a VNX for block, a circuit board with memory modules and control logic that manages the VNX for block I/O between the host's Fibre Channel adapter and the disk modules.

Storage processor A (SP A)
Generic term for the first storage processor in a VNX for block.

Storage processor B (SP B)
Generic term for the second storage processor in a VNX for block.

stripe size
Number of blocks in one stripe of a stripe volume.

stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible. See also disk volume, metavolume, slice volume, and volume.


Symmetrix Remote Data Facility (SRDF)
EMC technology that allows two or more Symmetrix systems to maintain a remote mirror of data in more than one location. The systems can be located within the same facility, in a campus, or hundreds of miles apart using fiber or dedicated high-speed circuits. The SRDF family of replication software offers various levels of high-availability configurations, such as SRDF/Synchronous (SRDF/S) and SRDF/Asynchronous (SRDF/A).

T
tar
Backup format in PAX that traverses a file tree in depth-first order.

Transmission Control Protocol (TCP)
Connection-oriented transport protocol that provides reliable data delivery.

U
unified storage
VNX for file that is connected to a captive system that is not shared with any other VNX for file and is not capable of connecting to multiple systems.

V
Virtual Storage Area Network (VSAN)
SAN that can be broken up into sections allowing traffic to be isolated within the section.

VNX
EMC network-attached storage (NAS) product line.

VNX for block
EMC midrange block system.

VNX OE
Embedded operating system in VNX for block disk arrays.

volume
On a VNX for file, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives. See also disk volume, metavolume, slice volume, and stripe volume.


Index

A
Access Logix configuration 63, 68, 79, 81
accessing storage 67
administering MPFS 101
architecture
  MPFS over Fibre Channel on VNX 19, 38
  MPFS over Fibre Channel on VNX VG2/VG8 gateway 20, 38
  MPFS over iSCSI on VG2/VG8 gateway 22
  MPFS over iSCSI on VNX 21, 38
  MPFS over iSCSI on VNX VG2/VG8 gateway 39
  MPFS over iSCSI/FC on VNX 22, 39
  MPFS over iSCSI/FC on VNX VG2/VG8 gateway 23, 39
arraycommpath 63, 65, 67, 68, 79, 82
Asynchronous I/O support 134
authentication, CHAP 30

B
best practices
  file system 46, 47
  LUNs 47
  MPFS 28, 46, 53
  MPFS threads 28
  storage configuration 29
  stripe size 53
  VNX for block 58
  VNX for file 47
  VNX for file volumes 46
  VNX for file with MPFS 28
  VNX VG2/VG8 gateway 58
  VNX with MPFS 28
  zones 58

C
Challenge Handshake Authentication Protocol. See CHAP
CHAP
  one-way authentication 30
  reverse authentication 30
  secret 30
  session authentication 30
command line interface. See mpfsctl commands
commands
  /proc/mpfs/devices 123
  mpfsctl diskreset 109
  mpfsctl diskresetfreq 109
  mpfsctl help 108
  mpfsctl max-readahead 110
  mpfsctl prefetch 112
  mpfsctl reset 113
  mpfsctl stats 114
  mpfsctl version 117
  mpfsctl volmgt 117
  mpfsinfo 125
  mpfsinq 120
  mpfsquota 123
  mpfsstat 118
comments 15
configuration
  overview 26
  planning checklist 36
configuring
  Gigabit Ethernet ports 39
  iSCSI target 61
  storage 67
  storage access 58
creating
  file system 47, 54
  metavolume 54
  security file 60
  storage groups 65
  stripe 52

D
DirectIO support 132
disabling
  arraycommpath 63, 68, 79, 82
  failovermode 63, 68, 79, 82
  HVM 135
  read and write protection for VNX for file volumes 103
displaying
  accessible LUNs 47
  disks 49
  MPFS devices 120, 122
  MPFS software version 117
  MPFS statistics 114

E
EMC HighRoad Disk Protection (hrdp) program 102
EMC Unisphere software 45
EMCmpfs parameters
  hrdp_sleep_time 134
  mpfs_discover_sleep_time 134
  mpfs_diskspeed_buffer_size 135
  mpfs_iscsi_pid_file 135
  mpfs_iscsi_rediscover_time 135
  mpfs_mount_hvl 135
  mpfs_scsi_cmd_timeout 135
  perf_timeout 135
enabling
  arraycommpath 63, 68, 79, 82
  failovermode 63, 68, 79, 82
error messages 142

F
failovermode 63, 65, 67, 68, 79, 82
Fibre Channel
  adding hosts to storage groups 68, 79
  driver installation 67
  switch installation 59
  switch requirements 42
Fibre Channel over Ethernet (FCoE) 19
File Mapping Protocol (FMP) 24
file syntax rules 137
file system
  creating 47
  exporting 55
  mounting 55
  names of mounted 50
  names of unmounted 51
  setup 46
  unmounting 147
firewall FMP ports 94

G
Gigabit Ethernet port configuration 39

H
Hierarchical volume management (HVM)
  default settings 135
  enable/disable 135
  overview 33
  values 135

I
I/O sizes 28
installing
  MPFS software 90
  MPFS software, troubleshooting 143
  storage configuration 41
iSCSI CHAP authentication 30
iSCSI discovery address 39
iSCSI driver
  starting 78
  stopping 78
iSCSI driver configuration
  CentOS 5 73
  RHEL 4 70
  RHEL 5 73
  RHEL 6 73
  SLES 10 70, 73
  SLES 11 73
iSCSI initiator
  configuring 75
  configuring ports 31, 58
  connection to a SAN switch 29
  IQN to define a host 81
  names 59
  show IQN name 73
iSCSI port configuration 61
iSCSI target configuration 61

L
Linux server
  configuration 29
  error messages 142
LUNs
  accessible by Data Movers 47
  adding 65
  best practices 47
  displaying 50
  displaying all 71
  failover 67
  maximum number supported 47
  mixed not supported 41
  rediscover new 135
  total usable capacity 47

M
managing using mpfs commands 101
metavolume 54
mounting a file system, troubleshooting 145
mounting MPFS 84
MPFS
  creating a file system 47, 54
  exporting a file system 55
  mounting 84
  mounting a file system 55, 133
  setup 46
  storage requirements 41
  unmounting 88, 133
MPFS client troubleshooting 143
mpfs commands
  /proc/mpfs/devices 123
  mpfsinfo 125
  mpfsinq 120
  mpfsquota 123
  mpfsstat 118
MPFS configuration roadmap 27
MPFS configurations 21, 22
MPFS devices, displaying 120
MPFS over Fibre Channel
  VNX 19, 38
  VNX VG2/VG8 gateway 20, 38
MPFS over iSCSI
  VG2/VG8 gateway 22
  VNX 21, 38
  VNX VG2/VG8 gateway 39
MPFS over iSCSI/FC
  VNX 22, 39
  VNX VG2/VG8 gateway 23, 39
MPFS overview 18
MPFS parameters
  conf parameters 129
  kernel parameters 127
  persistent parameters 129
MPFS service
  restarting 133
  stopping 133
MPFS software
  before installing 90
  blocks per flush 116
  install over existing 95
  installation 90
  installing from a CD 92
  installing from a tar file 90
  installing from EMC Online Support 90
  managing using hrdp commands 103
  managing using mpfs commands 101
  managing using mpfsctl commands 101
  post installation instructions 98
  starting 95, 97
  uninstalling 100
  upgrading 95
  upgrading from an earlier version 96
  upgrading with file system mounted 97
  verifying upgrade 99
  version number 117
MPFS threads 28
mpfsctl commands
  mpfsctl diskreset 109
  mpfsctl diskresetfreq 109
  mpfsctl help 108
  mpfsctl max-readahead 110
  mpfsctl prefetch 112
  mpfsctl reset 113
  mpfsctl stats 114
  mpfsctl version 117
  mpfsctl volmgt 117
mpfsinq troubleshooting 146

N
number of blocks per flush 116

O
one-way CHAP authentication 30
overview of configuring MPFS 26

P
performance
  file systems 46
  gigabit ethernet ports 39
  iSCSI ports 43
  Linux server 43, 58
  MPFS 28, 29, 46, 53, 112, 118
  MPFS reads 111
  MPFS threads 28
  MPFS with PowerPath 28
  problems with Linux server 114
  read ahead 111
  storage configuration 29
  stripe size 28, 46, 53
  VNX for block 58
  VNX for block volumes 46
post installation instructions 93
PowerPath
  support 28
  with MPFS 63
Prefetch requirements 29, 94

R
Rainfinity Global Namespace 32
Read ahead performance 111
Read cache requirements 29, 94
removing MPFS software, troubleshooting 150
reverse CHAP authentication 30

S
SAN switch zoning 59
secret (CHAP) 30
security file creation 60
SendTargets discovery 75
setting
  arraycommpath 64, 68, 79, 82
  failovermode 63, 68, 79, 82
setting up
  MPFS 46
  VNX for file 44
software components
  CentOS 5 40
  iSCSI initiator 40
  MPFS software 40
  NAS software 40
  Red Hat Enterprise Linux 40
  SuSE Linux Enterprise 40
starting MPFS 95, 97
starting the iSCSI driver 78
statistics
  displaying 114, 118
  resetting counters 113
stopping the iSCSI driver 78
storage configuration
  configuring 59
  installation 41
  recommendations 29
  requirements 41
storage group
  adding Fibre Channel hosts 68, 79
  adding initiators 79
  configuring 67
storage guidelines 29
storage pool, created MPFS 35, 45
stripe size 46
system component verification 38

T
troubleshooting
  cannot write to a mounted file system 149
  installing MPFS software 143
  Linux client 143
  mounting a file system 145
  mpfsinq command 146
  NFS server response 150
  removing MPFS software 150
  uninstalling MPFS software 100
  unmounting a file system 147
tuning MPFS 101

U
uninstalling MPFS software 100
unmounting a file system 147
unmounting MPFS 88
upgrading MPFS software 95

V
verifying an MPFS software upgrade 99
version number, displaying 117
VMware ESX server
  limitations 31
  requirements with Linux 31
VNX for block
  best practices 58
  configuring using CLI commands 58
  iSCSI port configuration 61
  storage requirements 41
  system requirements 41
VNX for file
  configuring 47
  enabling MPFS 57
  setup 44
VNX MPFS
  best practices guide 28, 46, 53
  configuration 28
volume stripe size 28
volumes mounted 52
volumes, names of 52
