h7113 VPLEX Architecture Deployment
This TechBook discusses high availability and the VPLEX Witness as it is conceptually architected, typically by customer storage administrators and EMC Solutions Architects. The introduction of VPLEX Witness provides customers with complete physical and logical fabric and cache-coherency redundancy when it is properly designed into the VPLEX Metro environment.
This TechBook is designed to provide an overview of the features and functionality associated with the VPLEX Metro configuration and the importance of active/active data resiliency for today's advanced host applications.
VPLEX Family and Use Case Overview
VPLEX value overview
At the highest level, VPLEX has unique capabilities that storage administrators value when seeking to enhance their existing data centers. It delivers distributed, dynamic, and smart functionality to existing or new data centers, providing storage virtualization across geographical boundaries.
VPLEX is distributed because it is a single interface for multi-vendor storage and delivers dynamic data mobility, enabling applications and data to be moved in real time with no outage required.
VPLEX is dynamic because it provides data availability and flexibility, maintaining business continuity through failures that traditionally required outages or manual restore procedures.
VPLEX is smart because its unique AccessAnywhere technology can present and keep the same data consistent within and between sites, enabling distributed data collaboration.
Because of these capabilities, VPLEX delivers unique and differentiated value that addresses three distinct requirements within our target customers' IT environments:
The ability to dynamically move applications and data across different compute and storage installations, whether within the same data center, across a campus, within a geographical region, or now, with VPLEX Geo, across even greater distances.
The ability to create high-availability storage and a compute
infrastructure across these same varied geographies with
unmatched resiliency.
The ability to provide efficient real-time data collaboration over distance for such big data applications as video, geographic/oceanographic research, and more.
EMC VPLEX technology is a scalable, distributed-storage federation
solution that provides non-disruptive, heterogeneous
data-movement and volume-management functionality.
Insert VPLEX technology between hosts and storage in a storage area
network (SAN) and data can be extended over distance within,
between, and across data centers.
EMC VPLEX Metro Witness Technology and High Availability
The VPLEX architecture provides a highly available solution suitable
for many deployment strategies including:
Application and Data Mobility: The movement of virtual machines (VMs) without downtime. An example is shown in Figure 1.
Figure 1 Application and data mobility example
Storage administrators have the ability to automatically balance loads through VPLEX, using storage and compute resources from either cluster's location. When combined with server virtualization, VPLEX allows users to transparently move and relocate virtual machines and their corresponding applications and data over distance. This provides a unique capability that allows users to relocate, share, and balance infrastructure resources between sites, which can be within a campus or between data centers, up to 5 ms apart with VPLEX Metro, or further apart (50 ms RTT) across asynchronous distances with VPLEX Geo.
Note: Please submit an RPQ if VPLEX Metro is required at up to 10 ms RTT, or check the support matrix for the latest supported latencies.
HA Infrastructure: Reduces the recovery time objective (RTO). An example is shown in Figure 2.
Figure 2 HA infrastructure example
High availability is a term that many products claim to deliver. Ultimately, a high-availability solution is supposed to protect against a failure and keep an application online. Storage administrators plan around HA to provide near-continuous uptime for their critical applications and to automate the restart of an application after a failure occurs, with as little human intervention as possible.
With conventional solutions, customers typically have to choose a recovery point objective (RPO) and a recovery time objective (RTO). But even though some solutions offer small RTOs and RPOs, there can still be downtime and, for most customers, any downtime can be costly.
Distributed Data Collaboration: Increases utilization of passive disaster recovery (DR) assets and provides simultaneous access to data. An example is shown in Figure 3.
Figure 3 Distributed data collaboration example
This is when a workforce has multiple users at different sites who need to work on the same data and maintain consistency in the dataset when changes are made. Use cases include co-development of software, where development happens across different teams in separate locations, and collaborative workflows such as engineering, graphic arts, videos, educational programs, designs, research reports, and so forth.
When customers have tried to build collaboration across distance with traditional solutions, they normally have to save the entire file at one location and then send it to another site using FTP. This is slow, can incur heavy bandwidth costs for large files (or even small files that move regularly), and negatively impacts productivity, because the other sites can sit idle while they wait to receive the latest data from another site. If teams decide to do their own work independently of each other, the dataset quickly becomes inconsistent, as multiple people work on it at the same time, unaware of each other's most recent changes. Bringing all of the changes together in the end is time-consuming and costly, and it grows more complicated as the dataset gets larger.
VPLEX product offerings
VPLEX first meets high-availability and data mobility requirements
and then scales up to the I/O throughput required for the front-end
applications and back-end storage.
High-availability and data mobility features are characteristics of
VPLEX Local, VPLEX Metro, and VPLEX Geo.
A VPLEX cluster consists of one, two, or four engines (each
containing two directors), and a management server. A dual-engine
or quad-engine cluster also contains a pair of Fibre Channel switches
for communication between directors.
Each engine is protected by a standby power supply (SPS), and each
Fibre Channel switch gets its power through an uninterruptible
power supply (UPS). (In a dual-engine or quad-engine cluster, the
management server also gets power from a UPS.)
The management server has a public Ethernet port, which provides
cluster management services when connected to the customer
network.
This section provides information on the following:
VPLEX Local, VPLEX Metro, and VPLEX Geo
Architecture highlights
VPLEX Local, VPLEX Metro, and VPLEX Geo
EMC offers VPLEX in three configurations to address customer needs
for high-availability and data mobility:
VPLEX Local
VPLEX Metro
VPLEX Geo
Figure 4 provides an example of each.
Figure 4 VPLEX offerings
VPLEX Local
VPLEX Local provides seamless, non-disruptive data mobility and
ability to manage multiple heterogeneous arrays from a single
interface within a data center.
VPLEX Local allows increased availability, simplified management,
and improved utilization across multiple arrays.
VPLEX Metro with AccessAnywhere
VPLEX Metro with AccessAnywhere enables active-active, block
level access to data between two sites within synchronous distances.
The distance is limited by what synchronous behavior can withstand, as well as by host application stability and MAN traffic considerations. Depending on the application, it is recommended that round-trip latency for Metro be less than or equal to 5 ms RTT.1
The combination of virtual storage with VPLEX Metro and virtual servers enables the transparent movement of virtual machines and storage across distance. This technology provides improved utilization across heterogeneous arrays and multiple sites.
1. Refer to VPLEX and vendor-specific White Papers for confirmation of
latency limitations.
VPLEX Geo with AccessAnywhere
VPLEX Geo with AccessAnywhere enables active-active, block level
access to data between two sites within asynchronous distances.
VPLEX Geo enables more cost-effective use of resources and power. Geo provides the same distributed-device flexibility as Metro but extends the distance up to 50 ms RTT. As with any asynchronous transport medium, bandwidth, as well as any applications sharing the link, must also be considered for optimal behavior.
Note: For the purposes of this TechBook, the focus is on the Metro configuration only. VPLEX Witness is supported with VPLEX Geo; however, Geo is beyond the scope of this TechBook.
Architecture highlights
VPLEX support is open and heterogeneous, covering both EMC storage and common arrays from other storage vendors, such as HDS, HP, and IBM. VPLEX conforms to established World Wide Name (WWN) guidelines that can be used for zoning.
VPLEX supports operating systems including both physical and
virtual server environments with VMware ESX and Microsoft
Hyper-V. VPLEX supports network fabrics from Brocade and Cisco,
including legacy McData SANs.
Note: For the latest information, please refer to the EMC Simple Support Matrix (ESSM) for supported host types, as well as the connectivity ESM for fabric and extended fabric support.
An example of the architecture is shown in Figure 5.
Figure 5 Architecture highlights
Table 1 lists an overview of VPLEX features along with their benefits.

Table 1 Overview of VPLEX features and benefits

Mobility: Move data and applications without impact on users.
Resiliency: Mirror across arrays without host impact, and increase high availability for critical applications.
Distributed cache coherency: Automate sharing, balancing, and failover of I/O across the cluster and between clusters.
Advanced data caching: Improve I/O performance and reduce storage array contention.
Virtual storage federation: Achieve transparent mobility and access in a data center and between data centers.
Scale-out cluster architecture: Start small and grow larger with predictable service levels.

For all VPLEX products, the appliance-based VPLEX technology:
Presents storage area network (SAN) volumes from back-end arrays to VPLEX engines
Packages the SAN volumes into sets of VPLEX virtual volumes with user-defined configuration and protection levels
Presents virtual volumes to production hosts in the SAN via the VPLEX front-end
For VPLEX Metro and VPLEX Geo products, presents a global, block-level directory for distributed cache and I/O between VPLEX clusters

Location and distance determine high-availability and data mobility requirements. For example, if all storage arrays are in a single data center, a VPLEX Local product federates back-end storage arrays within the data center.
When back-end storage arrays span two data centers, the AccessAnywhere feature in a VPLEX Metro or a VPLEX Geo product federates storage in an active-active configuration between VPLEX clusters. Choosing between VPLEX Metro and VPLEX Geo depends on distance and data synchronicity requirements.
Application and back-end storage I/O throughput determine the number of engines in each VPLEX cluster. High-availability features within the VPLEX cluster allow for non-disruptive software upgrades and expansion as I/O throughput increases.
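The placement guidance in this section (Local within a data center, Metro at synchronous distances up to 5 ms RTT, Geo at asynchronous distances up to 50 ms RTT) can be sketched as a simple selection helper. This is an illustrative sketch only; the function name and hard-coded thresholds are assumptions drawn from the latencies quoted in this TechBook, not part of any VPLEX tool:

```python
def choose_vplex_config(same_data_center: bool, rtt_ms: float) -> str:
    """Illustrative helper: pick a VPLEX configuration from topology and
    round-trip latency, using the thresholds quoted in this TechBook
    (<= 5 ms RTT for Metro, <= 50 ms RTT for Geo)."""
    if same_data_center:
        return "VPLEX Local"   # federate arrays within one data center
    if rtt_ms <= 5.0:
        return "VPLEX Metro"   # synchronous distances
    if rtt_ms <= 50.0:
        return "VPLEX Geo"     # asynchronous distances
    raise ValueError("RTT exceeds supported VPLEX distances")

print(choose_vplex_config(True, 0.1))    # VPLEX Local
print(choose_vplex_config(False, 3.0))   # VPLEX Metro
print(choose_vplex_config(False, 40.0))  # VPLEX Geo
```

Always confirm actual supported latencies against the current support matrix before deciding.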
Metro high availability design considerations
VPLEX Metro 5.0 (and above) introduces high-availability concepts beyond what is traditionally known as physical high availability. Introducing the VPLEX Witness into a high-availability environment allows the VPLEX solution to increase overall availability by arbitrating between a pure communication failure between two primary sites and a true site failure in a multi-site architecture. EMC VPLEX is the first product to bring to market the features and functionality provided by VPLEX Witness, which helps prevent unnecessary failovers and determines which cluster continues activity in a multi-site architecture.
Through this TechBook, administrators and customers gain an understanding of the high-availability solution that VPLEX provides:
Enabling of load balancing between their data centers
Active/active use of both of their data centers
Increased availability for their applications (no single points of
storage failure, auto-restart)
Fully automatic failure handling
Better resource utilization
Lower CapEx and lower OpEx as a result
Broadly speaking, legacy environments typically feature highly available designs implemented within a data center, and disaster-recovery functionality deployed between data centers.
One of the main reasons for this is that within data centers, components generally operate in an active/active (or active/passive with automatic failover) fashion, whereas between data centers, legacy replication technologies use active/passive techniques that require manual failover to use the passive component.
When using VPLEX Metro active/active replication technology in conjunction with new features, such as the VPLEX Witness server (as described in "Introduction to VPLEX Witness"), the lines between local high availability and long-distance disaster recovery become somewhat blurred, since HA can be stretched beyond the data center walls. Because replication is a by-product of federated and distributed storage, disaster avoidance is also achievable within these geographically dispersed HA environments.
Planned application mobility compared with disaster restart
This section compares planned application mobility and disaster
restart.
Planned application mobility
An online planned application mobility event is defined as one in which an application or virtual machine can be moved fully online, without disruption, from one location to another in either the same or a remote data center. This type of movement can only be performed when all components that participate in the movement are available (e.g., the running state of the application or VM exists in volatile memory, which would not be the case if an active site has failed) and when all participating hosts have read/write access to the same block storage at both locations. Additionally, a mechanism is required to transition volatile memory data from one system/host to another. When performing planned online mobility jobs over distance, a prerequisite is the use of an active/active underlying storage replication solution (VPLEX Metro only, as of this publication).
An example of this online application mobility is VMware vMotion, where a virtual machine must be fully operational before it can be moved. It may sound obvious, but if the VM were offline, the movement could not be performed online. (This is important to understand and is the key difference from application restart.)
When vMotion is executed, all live components required to make the VM function are copied elsewhere in the background before the VM is cut over.
Because these mobility tasks are totally seamless to the user, one associated use case is disaster avoidance, where an application or VM can be moved ahead of a disaster (such as a hurricane or tsunami) while the running state is still available to be copied. In other cases, mobility can be used to load balance across multiple systems or even data centers.
Because the running state must be available for these types of relocations, these movements are always deemed planned activities.
Disaster restart: Disaster restart is when an application or service is restarted in another location after a failure (on a different server or in a different data center), typically interrupting the service or application during the failover.
A good example of this technology is a VMware HA cluster configured over two geographically dispersed sites using VPLEX Metro, where a cluster is formed over a number of ESX servers and either single or multiple virtual machines can run on any of the ESX servers within the cluster.
If an active ESX server fails (perhaps due to a site failure), the VM can be restarted on a remaining ESX server within the cluster at the remote site, because the datastore where it was running spans the two locations, being configured on a VPLEX Metro distributed volume. This is deemed an unplanned failover, and it incurs a small outage of the application: the running state of the VM was lost when the ESX server failed, so the service is unavailable until the VM has restarted elsewhere.
Although a planned application mobility event and an unplanned disaster restart result in the same outcome (i.e., a service relocating elsewhere), there is a big difference: the planned mobility job keeps the application online during the relocation, whereas the disaster restart leaves the application offline during the relocation while a restart is conducted.
Compared to active/active technologies, the use of legacy active/passive solutions in these restart scenarios typically requires an extra step over and above standard application failover, since a storage failover is also required (i.e., changing the status of the write-disabled remote copy to read/write and reversing the replication direction). This is where VPLEX can assist greatly: because it is active/active, in most cases no manual intervention at the storage layer is required, which greatly reduces the complexity of a DR failover solution. If best practices for physical high availability and redundant hardware connectivity are followed, VPLEX Witness can truly provide customers with absolute availability.
Hardware and Software
This chapter provides insight into the hardware and software
interfaces that can be used by an administrator to manage all aspects
of a VPLEX system. In addition, a brief overview of the internal
system software is included. Topics include:
Introduction
VPLEX management interfaces
Simplified storage management
Management server user accounts
Management server software
Director software
Configuration overview
I/O implementation
Introduction
This section provides basic information on the following:
VPLEX I/O
High-level VPLEX I/O flow
Distributed coherent cache
VPLEX family clustering architecture
VPLEX I/O
VPLEX is built on a lightweight protocol that maintains cache coherency for storage I/O, and the VPLEX cluster provides highly available cache, processing power, and front-end and back-end Fibre Channel interfaces.
EMC hardware powers the VPLEX cluster design so that all devices
are always available and I/O that enters the cluster from anywhere
can be serviced by any node within the cluster.
The AccessAnywhere feature in the VPLEX Metro and VPLEX Geo
products extends the cache coherency between data centers at a
distance.
High-level VPLEX I/O flow
VPLEX abstracts a block-level ownership model into a highly organized hierarchical directory structure that is updated for every I/O and shared across all engines. The directory uses a small amount of metadata to tell all other engines in the cluster, in 4k block transmissions, which block of data is owned by which engine and at what time.
After a write completes and ownership is reflected in the directory,
VPLEX dynamically manages read requests for the completed write
in the most efficient way possible.
When a read request arrives, VPLEX checks the directory for an
owner. After VPLEX locates the owner, the read request goes directly
to that engine.
On reads from other engines, VPLEX checks the directory and tries to
pull the read I/O directly from the engine cache to avoid going to the
physical arrays to satisfy the read.
This model enables VPLEX to stretch the cluster, as VPLEX distributes the directory between clusters and sites. Due to the hierarchical nature of the VPLEX directory, VPLEX is efficient, imposes minimal overhead, and enables I/O communication over distance.
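As a rough illustration of the directory-based read path described above, the sketch below models block ownership and read routing. All class, engine, and structure names here are hypothetical simplifications for illustration and do not represent VPLEX internals:

```python
class CacheDirectory:
    """Toy model of the distributed directory: maps a block address to the
    engine that last wrote it. Real VPLEX shares this metadata across all
    engines and updates it on every I/O."""
    def __init__(self):
        self.owner = {}  # block address -> owning engine id

    def record_write(self, block: int, engine: str) -> None:
        # After a write completes, ownership is reflected in the directory.
        self.owner[block] = engine

    def route_read(self, block: int) -> str:
        # A read first consults the directory; if an owner is found, the
        # request goes to that engine's cache instead of the back-end array.
        return self.owner.get(block, "back-end array")

d = CacheDirectory()
d.record_write(0x2000, "engine-1-A")
print(d.route_read(0x2000))  # engine-1-A
print(d.route_read(0x9000))  # back-end array
```

The key point the model captures is that satisfying a read from a peer engine's cache avoids a trip to the physical array.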
Distributed coherent cache
The VPLEX engine includes two directors that each have a total of 36
GB (version 5 hardware, also known as VS2) of local cache. Cache
pages are keyed by volume and go through a lifecycle from staging,
to visible, to draining.
The global cache is a combination of all director caches that spans all
clusters. The cache page holder information is maintained in a
memory data structure called a directory.
The directory is divided into chunks and distributed among the
VPLEX directors and locality controls where ownership is
maintained.
A meta-directory identifies which director owns which directory
chunks within the global directory.
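The page lifecycle mentioned above (staging, then visible, then draining) can be modeled as a tiny state machine. The transition rules below are an illustrative reading of the text, not documented VPLEX behavior:

```python
# Hypothetical sketch of the cache-page lifecycle described in the text:
# pages move from "staging" to "visible" to "draining", in order.
TRANSITIONS = {"staging": "visible", "visible": "draining"}

def advance(state: str) -> str:
    """Move a cache page to its next lifecycle state."""
    if state not in TRANSITIONS:
        raise ValueError(f"no transition from {state!r}")
    return TRANSITIONS[state]

state = "staging"
state = advance(state)  # -> visible
state = advance(state)  # -> draining
print(state)            # draining
```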
VPLEX family clustering architecture
The VPLEX family uses a unique clustering architecture to help
customers break the boundaries of the data center and allow servers
at multiple data centers to have read/write access to shared block
storage devices. A VPLEX cluster, as shown in Figure 6,
can scale up through the addition of more engines, and scale out by
connecting clusters into an EMC VPLEX Metro (two VPLEX Metro
clusters connected within Metro distances).
Figure 6 VPLEX cluster example
VPLEX Metro transparently moves and shares workloads for a
variety of applications, VMs, databases and cluster file systems.
VPLEX Metro consolidates data centers, and optimizes resource
utilization across data centers. In addition, it provides non-disruptive
data mobility, heterogeneous storage management, and improved
application availability. VPLEX Metro supports up to two clusters, which can be in the same data center or at two different sites within synchronous distances. VPLEX Geo, which clusters across greater distances, is the asynchronous counterpart to Metro; analyzing VPLEX Geo capabilities is beyond the scope of this document.
VPLEX single, dual, and quad engines
The VPLEX engine provides cache and processing power with redundant directors, each including two I/O modules and one optional WAN COM I/O module for use in VPLEX Metro and VPLEX Geo configurations.
The rackable hardware components are shipped in NEMA-standard racks or provided, as an option, as a field-rackable product. Table 2 provides a list of configurations.

Table 2 Configurations at a glance

Components | Single engine | Dual engine | Quad engine
Directors | 2 | 4 | 8
Redundant engine SPSs | Yes | Yes | Yes
FE Fibre Channel ports (VS1) | 16 | 32 | 64
FE Fibre Channel ports (VS2) | 8 | 16 | 32
BE Fibre Channel ports (VS1) | 16 | 32 | 64
BE Fibre Channel ports (VS2) | 8 | 16 | 32
Cache size (VS1 hardware) | 64 GB | 128 GB | 256 GB
Cache size (VS2 hardware) | 72 GB | 144 GB | 288 GB
Management servers | 1 | 1 | 1
Internal Fibre Channel switches (Local COM) | None | 2 | 2
Uninterruptible power supplies (UPSs) | None | 2 | 2

VPLEX sizing tool
Use the EMC VPLEX sizing tool provided by EMC Global Services Software Development to determine the right VPLEX cluster configuration.
The sizing tool concentrates on the I/O throughput requirements of installed applications (mail exchange, OLTP, data warehouse, video streaming, etc.) and on back-end configuration such as virtual volumes, size and quantity of storage volumes, and initiators.
Upgrade paths
VPLEX facilitates application and storage upgrades without a service window through its flexibility to shift production workloads across the VPLEX infrastructure.
In addition, high-availability features of the VPLEX cluster allow for
non-disruptive VPLEX hardware and software upgrades.
This flexibility means that VPLEX is always servicing I/O and never
has to be completely shut down.
Hardware upgrades
Upgrades are supported for single-engine VPLEX systems to dual- or
quad-engine systems.
A single VPLEX Local system can be reconfigured to work as a
VPLEX Metro or VPLEX Geo by adding a new remote VPLEX cluster.
Additionally, an entire VPLEX VS1 cluster (hardware) can be fully upgraded to VS2 hardware non-disruptively.
Information for VPLEX hardware upgrades is in the Procedure
Generator that is available through EMC PowerLink.
Software upgrades
VPLEX features a robust non-disruptive upgrade (NDU) technology
to upgrade the software on VPLEX engines and VPLEX Witness
servers. Management server software must be upgraded before
running the NDU.
Due to the VPLEX distributed coherent cache, directors elsewhere in
the VPLEX installation service I/Os while the upgrade is taking
place. This alleviates the need for service windows and reduces RTO.
The NDU includes the following steps:
Preparing the VPLEX system for the NDU
Starting the NDU
Transferring the I/O to an upgraded director
Completing the NDU
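The NDU steps above can be sketched as a rolling upgrade over the directors, where at least one director keeps servicing I/O at all times. This is an illustrative model of the non-disruptive principle only; director names and the orchestration function are invented for the sketch:

```python
def rolling_ndu(directors):
    """Illustrative rolling upgrade: upgrade one director at a time so the
    remaining directors continue to service I/O (no service window)."""
    upgraded = []
    for d in directors:
        # Directors still on the old version, plus already-upgraded ones,
        # keep servicing I/O while director d is upgraded.
        serving = [x for x in directors if x != d and x not in upgraded] + upgraded
        assert serving, "at least one director must keep servicing I/O"
        upgraded.append(d)  # I/O is transferred to upgraded directors
    return upgraded

print(rolling_ndu(["dir-1-A", "dir-1-B", "dir-2-A", "dir-2-B"]))
```

The assertion expresses why the distributed coherent cache matters: there is always a set of directors able to service I/O during the upgrade.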
VPLEX management interfaces
Within the VPLEX cluster, TCP/IP-based management traffic travels
through a private network subnet to the components in one or more
clusters. In VPLEX Metro and VPLEX Geo, VPLEX establishes a VPN
tunnel between the management servers of both clusters. When VPLEX Witness is deployed, the VPN tunnel is extended to a three-way tunnel that includes both management servers and the VPLEX Witness.
Web-based GUI
VPLEX includes a Web-based graphical user interface (GUI) for
management. The EMC VPLEX Management Console Help provides
more information on using this interface.
To perform other VPLEX operations that are not available in the GUI,
refer to the CLI, which supports full functionality. The EMC VPLEX
CLI Guide provides a comprehensive list of VPLEX commands and
detailed instructions on using those commands.
The EMC VPLEX Management Console contains but is not limited to
the following functions:
Supports storage array discovery and provisioning
Local provisioning
Distributed provisioning
Mobility Central
Online help
VPLEX CLI
VPlexcli is a command line interface (CLI) used to configure and operate VPLEX systems. It also provides the EZ Wizard setup process to make installation of VPLEX easier and quicker.
The CLI is divided into command contexts. Some commands are
accessible from all contexts, and are referred to as global commands.
The remaining commands are arranged in a hierarchical context tree
that can only be executed from the appropriate location in the context
tree.
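The context hierarchy described above can be pictured as a small tree where global commands resolve everywhere and context commands resolve only at their node. The command and context names below are invented for illustration and are not actual VPlexcli commands:

```python
# Toy model of a hierarchical CLI context tree with global commands.
GLOBAL_COMMANDS = {"help", "exit", "ls"}  # accessible from any context
CONTEXT_COMMANDS = {
    "/clusters": {"summary"},                              # hypothetical
    "/clusters/cluster-1/virtual-volumes": {"create", "destroy"},
}

def can_run(context: str, command: str) -> bool:
    """A command runs if it is global, or defined at the current context."""
    return command in GLOBAL_COMMANDS or command in CONTEXT_COMMANDS.get(context, set())

print(can_run("/clusters", "help"))                              # True: global
print(can_run("/clusters/cluster-1/virtual-volumes", "create"))  # True
print(can_run("/clusters", "create"))                            # False: wrong context
```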
The VPlexcli encompasses all capabilities needed to function if the management station is unavailable. It is fully functional and comprehensive, supporting full configuration, provisioning, and advanced systems management capabilities.
SNMP support for performance statistics
The VPLEX snmpv2c SNMP agent:
Supports retrieval of performance-related statistics as published
in the VPLEX-MIB.mib.
Runs on the management server and fetches performance-related data from individual directors using a firmware-specific interface.
Provides SNMP MIB data for directors for the local cluster only.
LDAP /AD support
VPLEX offers Lightweight Directory Access Protocol (LDAP) or Active Directory as an authentication directory service.
VPLEX Element Manager API
VPLEX Element Manager API uses the Representational State
Transfer (REST) software architecture for distributed systems such as
the World Wide Web. It allows software developers and other users to
use the API to create scripts to run VPLEX CLI commands.
The VPLEX Element Manager API supports all VPLEX CLI
commands that can be executed from the root context on a director.
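Since the Element Manager API is REST-based, a client simply composes HTTPS requests against the management server. The sketch below only builds a URL and headers; the base path and header names are assumptions for illustration, so consult the VPLEX Element Manager API documentation for the real endpoint scheme:

```python
from urllib.parse import urljoin

def build_vplex_api_request(mgmt_server: str, cli_command_path: str) -> tuple:
    """Illustrative sketch: compose the URL and headers for a hypothetical
    VPLEX Element Manager REST call. The '/vplex/' base path and the
    Username/Password headers are assumed for illustration only."""
    base = f"https://{mgmt_server}/vplex/"          # assumed base path
    url = urljoin(base, cli_command_path.lstrip("/"))
    headers = {"Username": "service", "Password": "<password>"}  # assumed auth
    return url, headers

url, headers = build_vplex_api_request("mgmt-server.example.com", "/clusters")
print(url)  # https://mgmt-server.example.com/vplex/clusters
# An HTTP client (for example, Python's urllib.request or the requests
# library) would then issue a GET or POST against this URL over HTTPS.
```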
Simplified storage management
VPLEX supports a variety of arrays from various vendors covering
both active/active and active/passive type arrays. VPLEX simplifies
storage management by allowing simple LUNs, provisioned from the
various arrays, to be managed through a centralized management
interface that is simple to use and very intuitive. In addition, a
VPLEX Metro or VPLEX Geo environment that spans data centers
allows the storage administrator to manage both locations through
the one interface from either location by logging in at the local site.
Management server user accounts
The management server requires the setup of user accounts for access to certain tasks. Table 3 describes the types of user accounts on the management server.

Table 3 Management server user accounts

admin (customer): Performs administrative actions, such as user management; creates and deletes Linux CLI accounts; resets passwords for all Linux CLI users; modifies the public Ethernet settings.
service (EMC service): Starts and stops necessary OS and VPLEX services; cannot modify user accounts. (Customers do have access to this account.)
Linux CLI accounts: Use VPlexcli to manage federated storage.
All account types: Use VPlexcli; can modify their own password; can SSH or VNC into the management server; can SCP files off the management server from directories to which they have access.

Some service and administrator tasks require OS commands that need root privileges. The management server has been configured to use the sudo program to provide these root privileges just for the duration of the command. Sudo is a secure and well-established UNIX program that allows users to run commands with root privileges.
VPLEX documentation indicates which commands must be prefixed with "sudo" in order to acquire the necessary privileges. The sudo command asks for the user's password when it runs for the first time, to ensure that the user knows the password for their account. This prevents unauthorized users from executing these privileged commands when they find an authenticated SSH login that was left open.
Management server software
The management server software is installed during manufacturing
and is fully field upgradeable. The software includes:
VPLEX Management Console
VPlexcli
Server Base Image Updates (when necessary)
Call-home software
Each is briefly discussed in this section.
Management console
The VPLEX Management Console provides a graphical user interface
(GUI) to manage the VPLEX cluster. The GUI can be used to
provision storage, as well as manage and monitor system
performance.
Figure 7 shows the VPLEX Management Console window
with the cluster tree expanded to show the objects that are
manageable from the front-end, back-end, and the federated storage.
Figure 7 VPLEX Management Console
The VPLEX Management Console provides online help for all of its
available functions. Online help can be accessed in the following
ways:
Click the Help icon in the upper right corner on the main screen
to open the online help system, or in a specific screen to open a
topic specific to the current task.
Click the Help button on the task bar to display a list of links to
additional VPLEX documentation and other sources of
information.
Figure 8 is the welcome screen of the VPLEX Management Console GUI, which utilizes a secure HTTP (HTTPS) connection via a browser. The interface uses Flash technology for rapid response and a unique look and feel.
Figure 8 Management Console welcome screen
Command line interface
The VPlexcli is a command line interface (CLI) for configuring and running the VPLEX system, for setting up and monitoring the system's hardware and intersite links (including com/tcp), and for configuring global inter-site I/O cost and link-failure recovery. The CLI runs as a service on the VPLEX management server and is accessible using Secure Shell (SSH).
For information about the VPlexcli, refer to the EMC VPLEX CLI
Guide.
System reporting
VPLEX system reporting software collects configuration information
from each cluster and each engine. The resulting configuration file
(XML) is zipped and stored locally on the management server or
presented to the SYR system at EMC via call home.
You can schedule a weekly job to automatically collect SYR data
(VPlexcli command scheduleSYR), or manually collect it whenever
needed (VPlexcli command syrcollect).
Director software
The director software provides:
Basic Input/Output System (BIOS): Provides low-level hardware support to the operating system and maintains boot configuration.
Power-On Self Test (POST): Provides automated testing of system hardware during power-on.
Linux: Provides basic operating system services to the VPlexcli software stack running on the directors.
VPLEX Power and Environmental Monitoring (ZPEM): Provides monitoring and reporting of system hardware status.
EMC Common Object Model (ECOM): Provides management logic and interfaces to the internal components of the system.
Log server: Collates log messages from director processes and sends them to the SMS.
EMC GeoSynchrony