
DATASHEET

Hitachi NAS Platform 5000 Series

Consolidate file data on the same infrastructure you already trust for a flexible, efficient, and reliable way to meet the mission-critical demands of unstructured data.


Enterprise-Class NAS

With enterprise organizations under increasing pressure to address the substantial growth of file-based data, applications and life-cycle challenges, you need an agile file infrastructure that is more efficient and delivers information faster, while maintaining customer interest and maximizing revenue. The flexibility to deploy cost-optimized, high-performance storage services with non-disruptive refreshes is also critical. The new Hitachi NAS Platform 5000 series is a robust file storage solution that delivers the best file scalability and flexibility for workload consolidation, with an enhanced accelerated file system that dramatically improves your return on investment (ROI) for enterprise scale-out file system needs. Leverage the same shared infrastructure you already know and trust to manage the growth of your unstructured data without breaking the bank.

• Manage your data growth while lowering costs: With our dedupe and compression capabilities you have the most efficient technology at the lowest effective cost.
• Lower latency, faster response times: Fresher data to drive better business decisions, with improved response times and faster reporting completion.
• High availability, no unscheduled outages: Meet and exceed your service level agreement targets for increased productivity.
• Ease of use: Faster deployment with better toolsets for system setup, provisioning, configuration and device management, workflow automation and overall system management to improve productivity.
• Snap on snap: Speed up and improve the reliability of backups and clones to give you quicker access to recover data.

Simple, Non-Disruptive Migration, Management and All-Inclusive Software

Migration deployment from our HNAS 3000 and 4000 series is built on cluster rolling upgrades and node EVS failover capabilities, allowing a hassle-free, non-disruptive migration that is as easy as 1-2-3. See Figure 1.

The enhanced Hitachi Disaster Recovery System (HDRS)* provides a simple interface and automates operator tasks associated with using a global-active device (GAD) enhanced for NAS, built on Hitachi VSP storage.

• Active-active NAS cluster management.
• Automated setup and installation of cluster hardware and software in less time than a manual installation.
• Storage provisioning with simplified options to expand or duplicate replicated storage.
• Monitoring and automated recovery for out-of-sync paired volume replication.

Each system includes Hitachi NAS OS’s all-inclusive base software: SMB, NFS, iSCSI and FTP protocol support, tenants, filesystem audit/rollback, snapshot, deduplication, cluster namespace, EVS server farm migration, data migrator to the cloud, file and directory writeable clone, read caching, virtual server migration, security and WORM. Additional software add-ons include premium deduplication, EVS 8-packs and HDRS.

Flexibility to Enable No Compromise Consolidation

The HNAS 5000 series supports both file and block (iSCSI) workloads for greater consolidation and operational simplicity. File controllers are designed with a hardware-accelerated architecture, using field-programmable gate arrays (FPGAs) for active, critical and sensitive file services. Individual file systems belong to a Hitachi Enterprise Virtual Server (EVS) for NAS, and each server has its own set of IP addresses, policies and individual port assignments.
• Virtual cluster support up to 80 nodes*
• 2x the throughput of its predecessor
• Multitenant security: Designed for up to 64 enterprise virtual servers
(EVS), per namespace, with each having its own security context to segregate network traffic for increased security.
• Unlimited virtual capacity supports long-term growth per file system/namespace, with cloud extension up to 130/840 billion objects.
• Hardware-accelerated architecture FPGA: Primary data deduplication leverages an FPGA offload engine to perform
CPU-intensive hash operations that reduce the impact on file-serving performance.
• Unmatched simplicity and power: Little to no administration, configuration or tuning is required. Scheduling is not
necessary. Premium deduplication option provides additional ingest performance by increasing the number of logical
state machines.
• Improved data reduction services with FPGAs and SVOS RF adaptive data reduction, including deduplication and
compression, minimize storage footprint and maintain controller scalability.

Built-in intelligent file system cloud tiering provides mobility and control in the cloud by transparently migrating data between local and external cloud tiers using a policy-based business rules engine (a minimal policy sketch follows the list below). With an improved file system extension to cloud, the HNAS 5000 series enables snapshot archiving and leverages cost-effective cloud capacity to store stale or less active data, while providing local, transparent access for users and applications.

• Kubernetes and Ansible* cloud support


• Public cloud targets: Amazon S3, Microsoft Azure and IBM® Cloud Object Storage.
• Private cloud: Hitachi Content Platform (HCP) and IBM Cloud Object Storage.
• Cross-volume link enables files that have been migrated to a cloud environment to be transparently accessed by the
application for nondisruptive retrieval.
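To make the idea of a policy-based rules engine concrete, here is a minimal, hypothetical Python sketch of how a tiering policy might select stale files for migration to a cloud tier. The names (TieringPolicy, select_candidates) and thresholds are invented for illustration and are not the HNAS Data Migrator interface.

# Illustrative sketch only: a simple "stale file" tiering policy.
import os
import time

class TieringPolicy:
    def __init__(self, min_idle_days=90, min_size_bytes=4096):
        self.min_idle_days = min_idle_days    # how long a file must sit unread
        self.min_size_bytes = min_size_bytes  # skip tiny files

    def should_migrate(self, path: str) -> bool:
        st = os.stat(path)
        idle_days = (time.time() - st.st_atime) / 86400
        return idle_days >= self.min_idle_days and st.st_size >= self.min_size_bytes

def select_candidates(root: str, policy: TieringPolicy):
    """Walk a local tier and yield files the policy would push to the cloud tier."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if policy.should_migrate(path):
                yield path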

Tiered file system (TFS) separates file system metadata from user data. TFS automatically places file system metadata on a
higher performance storage tier to increase file system performance while also providing cost efficiency.

Enhanced object replication with throughput throttling improves quality of service (QoS). Deduplication support eliminates rehydration of embedded cross-volume links, while up to three-data-center support is powered by our global-active device metro clustering.

The cluster namespace feature creates a unified directory structure across storage pools and controllers. Multiple file systems
appear under a common root, and both server message block (SMB) and network file system (NFS) clients obtain global
access. EVS server farm migration enables tenant mobility across namespace and servers with shared storage.
*Available post GA feature support/enhancement; contact your Hitachi representative for more details

Best-in-class Efficiency
We live in a hybrid world: every data center uses cloud and on-prem storage and needs easier, more efficient ways to
move data between the two. Our HNAS 5000 series enables a broad range of efficiency technologies that deliver
maximum value and more predictable ongoing costs. Our direct cloud connect functionality transparently moves file data to your choice of content repository or cloud service (Hitachi Content Platform, Amazon S3, IBM Cloud Object Storage or Microsoft Azure), delivering an unparalleled reduction in on-site storage costs. All services are selectable and can be activated for specific workloads, giving you maximum control over efficiency and performance.

A new advantage of Hitachi’s enterprise scale-out file system is our onode and file packing* for small files. Every object in a file system contains at least one small onode, which is usually stored alone in a whole filesystem block, resulting in wasted space. With onode and file packing, up to eight onodes or files can be packed into one filesystem block, resulting in space savings, lower latency and fewer writes to the storage. This new feature allows you to optimize your files with the highest level of performance, efficiency and simplification your business strives for.
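As a rough illustration of why packing matters, the sketch below compares blocks consumed with and without packing. The pack factor of eight comes from the text above; the 4 KiB block size is an assumption made only for the example, not a published HNAS internal.

# Back-of-the-envelope sketch of the space saving from onode/file packing.
import math

BLOCK_SIZE = 4096      # assumed block size for the example
PACK_FACTOR = 8        # up to eight small onodes/files per block (from the text)

def blocks_used(num_small_objects: int, packed: bool) -> int:
    if packed:
        return math.ceil(num_small_objects / PACK_FACTOR)
    return num_small_objects  # one (mostly empty) block per small object

n = 1_000_000
unpacked = blocks_used(n, packed=False) * BLOCK_SIZE
packed = blocks_used(n, packed=True) * BLOCK_SIZE
print(f"unpacked: {unpacked / 2**30:.1f} GiB, packed: {packed / 2**30:.1f} GiB")
# Roughly an 8x reduction in blocks consumed by small objects, plus fewer writes.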

In addition, HNAS offers linked, writable snapshot clones. With linked clones, thousands — even millions — of copies of
data sets are created very rapidly while using near-zero extra capacity. For highly virtualized environments, the ability to
create a standard “gold image” that can be used across virtual machines and desktops not only saves money, but also
reduces support and management costs.

Multilayered, Reliable Data Protection


The need for resilience for business-critical data is greater now than ever, and Hitachi has you covered, offering the utmost NAS resiliency with both on-site and remote-site failover access for more uptime and consistent data.

• NAS object-based replication provides a fast and efficient means to replicate data over wide area networks,
improves recovery time objectives (RTOs) and facilitates simple failover and failback operations.

• Achieve fully synchronous, active-active clustering at distances of up to 500 km with automated takeover, when implemented
with global-active device metro clustering.
o Global-active device ensures continuous operations with nonstop data access so IT teams can meet
their disaster recovery objectives with dramatically reduced return-to-operations time.

• A variety of snapshot options provides point-in-time data protection capabilities:

o File system snapshot includes hidden snapshot folder read-only access and Microsoft Volume
Shadow Copy Service (VSS) recovery capability for local data protection of end user files and folders.

o File and directory clones enable the creation of capacity-efficient writable snapshots (clones) of files to
accelerate production data copies in testing and development and virtual server and virtual desktop
infrastructure environments. Directory clones extend the file cloning capability to directory trees to
enable protection or repurposing of applications and databases.

o Antivirus support (RPC/ICAP)

o 2-way (FC attached tape)* and 3-way NDMP backup support


*Available post GA feature support/enhancement; contact your Hitachi representative for more details

Easy, Simple Migration Deployment

Figure 1. Migration from HNAS 3000/4000 Series

[Figure 1 shows an N-node HNAS cluster: each node hosts tenants/filesystems under a shared cluster namespace, backed by object store storage pools on flash and HDDs.]


HNAS 5000 Series Specifications
TABLE 1: 5200 and 5300 Models

                                                         5200                         5300
Protocol Support                                         NFS, SMB, FTP, iSCSI and HTTP(S3) to the cloud
Virtual Cluster Support                                  Up to 40 nodes               Up to 80 nodes
Aggregate Performance (GB/s)                             192                          384
Max. Virtual File System/Namespace Size
(w/tiering to Object Store)                              Unlimited up to max. files per file system/namespace
Max. Files per File System/Namespace                     130/840 billion

Learn More

Hitachi Vantara
Corporate Headquarters Contact Information
2535 Augustine Drive USA: 1-800-446-0744
Santa Clara, CA 95054 USA Global: 1-858-547-4526
hitachivantara.com | community.hitachivantara.com hitachivantara.com/contact

HITACHI is a registered trademark of Hitachi, Ltd. VSP is a trademark or registered trademark of Hitachi Vantara LLC. Microsoft, Azure and Windows are trademarks or
registered trademarks of Microsoft Corporation. All other trademarks, service marks and company names are properties of their respective owners.
DS-SMatheson Dec 2020

INTRODUCTION TO VIRTUAL STORAGE PLATFORM E1090 ARCHITECTURE

By Sudipta Kumar Mohapatra posted 03-24-2022 14:26



Contents
Executive Summary
Engineered for Performance
The VSP E1090 Controllers
Hardware Components
Front End Configuration
Back End Configuration
Performance and Resiliency Enhancements

Executive Summary
The Virtual Storage Platform (VSP) E1090 storage system is Hitachi’s newest midsized
enterprise platform, designed to utilize NVMe to deliver industry-leading performance and


availability. The VSP E1090 features a single, flash-optimized Storage Virtualization


Operating System (SVOS) image running on 64 processor cores, sharing a global cache of 1
TiB. Based primarily on SVOS optimizations, the VSP E1090 offers higher performance with
fewer hardware resources than competitors. In addition, the VSP E1090 was upgraded to an advanced Cascade Lake CPU, permitting read response times as low as 41 microseconds, and data reduction throughput was improved by up to 2X with the new Compression Accelerator Module. Improvements in reliability and serviceability allow the VSP E1090 to claim an industry-leading 99.9999% availability (on average, about 31.5 seconds per year of expected downtime). In this blog, we’ll take a brief look at the highlights of the VSP E1090 architecture.
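For readers who want to sanity-check availability claims, the short Python snippet below converts an availability percentage into expected downtime per year. It is a generic back-of-the-envelope calculation, not a Hitachi-published figure.

# Convert an availability percentage into expected downtime per year.
def downtime_seconds_per_year(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * 365 * 24 * 3600

for pct in (99.9999, 99.999999):
    print(f"{pct}% -> {downtime_seconds_per_year(pct):.1f} s/year")
# 99.9999%   (six nines)   -> ~31.5 s/year
# 99.999999% (eight nines) -> ~0.3 s/year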

Engineered for Performance


The VSP E1090 features a new controller board powered by two 16-core Intel Cascade Lake
CPUs operating at 2.1 GHz. The two CPUs function as a single 32-core MPU per controller.
The upgraded controller board adds 14% more CPU power compared to the VSP E990. The
extra processing power enables the VSP E1090 to leverage the NVMe protocol, which has a
low-latency command set and multiple queues per device, for unprecedented performance in
a midsized package. Because NVMe doesn’t allow for cascading connections, the VSP
E1090 supports a maximum of 4 NVMe drive boxes connected to the controllers using PCI
Express Gen3. This simple, streamlined configuration allows for low-latency, point-to-point
connections between the E1090 controllers and the NVMe SSDs.

Figure 1. The VSP E1090 offers flexible configuration options with industry-leading performance and 99.9999% availability

The VSP E1090 Controllers


Figure 1 presents a block diagram of the VSP E1090 dual-controller system. Each interface
board (CHB, DKBN) is connected to a controller using 8 x PCIe Gen 3 lanes, which means 16
GB/s of available bandwidth (8 GB/s send, and 8 GB/s receive). The configuration in Figure 1
includes two pairs of NVMe disk adapters (DKBNs) that support 1-2 NVMe drive boxes
(DBNs) and up to 48 NVMe SSDs. Each controller in this configuration has up to 96 GB/s of
front-end theoretical bandwidth (if six CHBs per controller were configured), and 32 GB/s of
back end bandwidth (two DKBNs). An alternative configuration (not pictured) doubles the

capability of the NVMe back end, with four DKBNs per controller supporting 3-4 DBNs and
up to 96 NVMe SSDs. The latter configuration would have 64 GB/s of back end bandwidth
per controller and up to 64 GB/s of front-end bandwidth per controller (if 4 CHBs per
controller were installed).
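The bandwidth figures above follow directly from the lane counts. The short sketch below reproduces them using the usual rule of thumb of roughly 1 GB/s per PCIe Gen3 lane per direction; the per-lane figure is an approximation for illustration, not a measured value.

# Sanity-check of the theoretical bandwidth figures quoted above.
GEN3_GBPS_PER_LANE_PER_DIR = 1.0   # rough rule of thumb for PCIe Gen3

def board_bandwidth(lanes: int) -> float:
    """Full-duplex GB/s for one interface board slot (send + receive)."""
    return lanes * GEN3_GBPS_PER_LANE_PER_DIR * 2

chb = board_bandwidth(8)                   # 16 GB/s per CHB or DKBN slot
front_end = 6 * chb                        # six CHBs per controller -> 96 GB/s
back_end_2dkbn = 2 * board_bandwidth(8)    # two DKBNs  -> 32 GB/s
back_end_4dkbn = 4 * board_bandwidth(8)    # four DKBNs -> 64 GB/s
print(chb, front_end, back_end_2dkbn, back_end_4dkbn)   # 16.0 96.0 32.0 64.0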
Like previous Hitachi enterprise products, all VSP E1090 processors run a single SVOS
image and share a global cache. Cache is distributed across individual controllers for fast,
efficient, and balanced memory access. Although VSP E1090 hardware and microcode would
permit a variety of cache configurations, the only configuration available has the maximum
cache configuration (1 TB). Therefore, all eight DIMM slots per controller are populated with
64 GB DDR4-2400 DIMMs for a total of 153.6 GB/s of theoretical memory bandwidth per
controller.

ADR performance was improved up to 2X by the addition of 2 Compression Accelerator


Modules per controller. The compression accelerators are labeled “ACLF” in Figure 1. As
observed in GPSE testing, the compression accelerator improves ADR performance as much
as 2X, while also boosting capacity savings. The Compression Accelerator Module allows the
CPU to offload the work of compression to a Hitachi-designed ASIC (application-specific
integrated circuit). The ASIC uses an efficient compression algorithm optimized for
implementation in specialized hardware. The compression accelerator operates on data in
cache using direct memory addressing (DMA); it does not require copy operations and can
perform work with very low latency. As shown in Figure 2, the Compression Accelerator
Module is connected to the controller using eight PCI Express Gen3 lanes. Within the
accelerator module, a PCIe switch connects four lanes to each of the two ASICs per
Compression Accelerator Module. The compression accelerator occupies unused space in
the fan module (two per controller), so each controller gets four compression ASICs.

Figure 2. Compression Accelerator Module Block Diagram

Hardware Components
While the VSP E1090 has a new and faster controller board, other basic hardware
components are shared with the VSP 5000 or VSP G900. CHBs are shared with VSP G900.
Up to six four-port 8/16/32 Gb FC or six two-port 10 Gb iSCSI CHBs per controller can be
installed. (Protocol types must be installed symmetrically between controller 1 and controller
2). For details on CHBs, see the VSP Gxx0 and Fxx0 Architecture and Concepts Guide.
DKBN adapters for the all-NVME back end are shared with the VSP 5000, as are the NVMe
SSDs, which are available in five different capacities (1.9 TB, 3.8 TB, 7.6 TB, 15 TB, and 30
TB). The NVMe drive box (DBN) is also shared with VSP 5000. However, unlike the VSP
5000, which has strict rules about Parity Group configuration, the VSP E1090 DBN can be


ordered in quantities as small as a single tray. The E1090 also includes a new option for a
SAS back end. The SAS back end shares the same architecture and components as the VSP
G900.

Front End Configuration


VSP E1090 FC ports run in universal (also called bi-directional) mode. A bi-directional port
can simultaneously function as a target (for host I/O or replication) and initiator (for external
storage or replication), with each function having a queue depth of 1,024. The highest-performing VSP E1090 front end configuration would use “100% straight” access, in which
LUNs are always accessed on a CHB port connected to the controller that owns the LUN.
Addressing a LUN on the non-owning controller (known as “front end cross” I/O) increases
the overhead by a small amount for each command. However, our testing shows that front-
end cross I/O does not have a significant performance impact under normal operating
conditions (up to about 70% MP busy). Configuring to avoid front end cross I/O is not
recommended unless the customer requires the highest possible levels of performance.

Figure 3. VSP E1090 Universal Port Functionality

A front end I/O expansion module (a common component with F900) is also available for
VSP E1090. As shown in Figure 4, two CHB slots per controller can be used to connect to as
many as four CHBs per controller in the expansion module. With the expansion module in
place, a diskless VSP E1090 could present up to 80 FC ports, or 40 iSCSI ports per system.
However, note that the eight CHB slots in the expansion module must share the PCIe
bandwidth of the four slots to which the expansion module is connected, which may limit
throughput for large-block workloads. See the VSP Gxx0 and Fxx0 Architecture and
Concepts Guide for more detail on the I/O expansion module.


Figure 4. The I/O Expansion Module Permits Installation of Up to Ten CHBs Per Controller

Back End Configuration


The VSP E1090 has an all-NVMe back end, which makes configuration relatively simple and
straightforward. Either two or four DKBNs per controller can be installed. As presented in
Figure 5, either CHBs or DKBNs can be installed into slots 1-E/F and 2-E/F in each controller.
A configuration with two DKBNs per controller can support one or two NVMe drive trays, and
up to 48 NVMe SSDs. With four DKBNs per controller, three or four drive trays can be
connected, accommodating up to 96 SSDs (see Figure 6). Each DKBN has two ports that
are connected to two different DBNs using 4-lane PCIe Gen3 copper cables, as shown in
Figure 6. Each cable connection has 8 GB/s of PCIe bandwidth (4 GB/s send, and 4 GB/s
receive). Each DBN (drive tray) with four standard connections has 16 GB/s send and 16
GB/s receive of PCIe bandwidth. Within each DBN (Figure 7) are two PCIe switches, each of
which is connected to two DKBNs using 4-lane PCIe cables. As shown in Figure 7, each
NVMe SSD is connected to both PCIe switches using the DBN backplane. In summary, each
NVMe SSD can be accessed using a point to point PCIe connection by two different DKBNs
on each controller, for a total of four redundant back end paths per drive.


Figure 5. Multi-Purpose Slots Permit Installation of Two or Four DKBN Pairs

Figure 6. Connection Diagram of the Maximum Back End Configuration


Figure 7. DBN Block Diagram

Due to the positioning as a mid-sized enterprise array, the VSP E1090 includes flexible Parity
Group configuration. Table 1 shows the supported Parity Group configurations, that can be
configured on any combination of 1-4 drive trays.

Table 1. Supported Parity Group Configurations

Finally, encrypted DKBNs (eDKBNs) are optionally available for the VSP E1090. The eDKBNs
offload the work of encryption to Field Programmable Gate Arrays (FPGAs) as shown in
Figure 8. The FPGAs allow FIPS 140 level 2 encryption with little or no performance impact.
Encrypting DKBNs is also recommended for customers requiring the maximum non-ADR
sequential read throughput of 40 GB/s (which is only available in configurations having at
least three drive trays). The eDKBNs optimize PCIe block transfers, which requires fewer
DMA operations, and improves non-ADR sequential read throughput.

Figure 8. eDKBN Block Diagram

Performance and Resiliency Enhancements


Significant enhancements in the VSP E1090 include:


Upgraded controllers with 14% more processing power than VSP E990 and 53% more
processing power than VSP F900.
Significantly improved ADR performance through Compression Accelerator Modules
(ACLF). See: E1090 ADR Performance using NVMe SSDs and E1090 ADR Performance
using SAS SSDs.
An 80% reduction in drive rebuild time compared to earlier midsized enterprise
platforms
Smaller access size for ADR metadata reduces overhead.
Support for NVMe allows extremely low latency with up to 5X higher cache miss IOPS
per drive.

We’ve briefly reviewed the highlights of VSP E1090 architecture, including improvements in
performance, scalability and resiliency. For additional information, please visit the GPSE
Resource Library.

#FlashStorage
#HitachiVirtualStoragePlatformVSP




INTRODUCTION TO VIRTUAL STORAGE PLATFORM E590 AND E790 ARCHITECTURE

By Charles Lofton posted 04-19-2021 22:44



Introduction
The Virtual Storage Platform (VSP) E590 and E790 are the newest additions to Hitachi’s midsized enterprise
product line, engineered to utilize NVMe to deliver industry-leading performance in a small, affordable form factor.
The VSP E series (which also includes the VSP E990) features a single, flash-optimized Storage Virtualization
Operating System (SVOS) image operating on Intel Xeon multi-core, multi-threaded processors. Thanks primarily to
SVOS optimizations, the new VSP E Series models offer the highest performance available in a 2U form factor. In
this blog, we will examine how Hitachi put industry leading NVMe performance into a very small package.

The VSP E series arrays leverage the NVMe storage protocol, which was designed to take advantage of the fast
response times and parallel I/O capabilities of flash. NVMe provides a streamlined command set for accessing flash
media over a PCIe bus or storage fabric, allowing for up to 64K separate queues, each with a queue depth of 64K.


Harnessing the full power of flash with the NVMe protocol requires an efficient operating system and advanced
processors, both of which are included with the latest VSP E series arrays. The new technologies included in the
VSP E series are complemented by Hitachi’s sophisticated, flash-optimized cache architecture, with its focus on data
integrity, efficiency, and performance optimization. Table 1 presents the basic specifications of the VSP E590 and
VSP E790.

Table 1. Selected VSP E590 and E790 Specifications

Feature                               VSP E590                   VSP E790
Form Factor                           2U                         2U
Maximum 8 KiB IOPS (1)                3.1 Million                6.8 Million
Maximum NVMe SSDs                     24                         24
Maximum Raw Internal Capacity         720 TB (24 x 30 TB SSD)    720 TB (24 x 30 TB SSD)
Maximum Raw External Capacity         144 PB                     216 PB
Cache Capacity Options                384 GB or 768 GB           768 GB
Maximum Internal Cache Bandwidth      460 GB/s                   460 GB/s
Maximum Number of Logical Devices     32,768                     49,152
Maximum LUN Size                      256 TB                     256 TB
Maximum Host Ports                    12 iSCSI, 24 FC            12 iSCSI, 24 FC
Data-at-Rest Encryption               Available                  Available

(1) 100% random read cache-hit.

Before covering the hardware in more detail, let’s begin with a review of two important SVOS features that are
common to all Hitachi storage products.

Hitachi Dynamic Provisioning (HDP)
Hitachi Dynamic Provisioning provides a mechanism for grouping many physical devices (NVMe SSDs in the VSP E
Series) into a single pool. The pool mechanism creates a structure of 42 MB pool pages from each device within the
pool. HDP then presents automatically managed, wide-striped block devices to one or more hosts. This is like the
use of a host-based logical volume manager (LVM) and its wide striping mechanism across all member LUNs in its
“pool”. The LUNs presented by an HDP pool are called Dynamic Provisioning Volumes (DPVOLs or virtual volumes).
DPVOLs have a user-specified logical size up to 256TB. The host accesses the DPVOL as if it were a normal
volume (LUN) over one or more host ports. A major difference is that disk space is not physically allocated to a
DPVOL from the pool until the host has written to different parts of that DPVOL’s Logical Block Address (LBA) space.
The entire logical size specified when creating that DPVOL could eventually become fully mapped to physical space
using 42 MB pool pages from every device in the pool.
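To make the thin-allocation behavior concrete, here is a small conceptual Python model of a DPVOL that maps 42 MB pool pages only on first write. The class names (DPVol, HDPPool) are invented for the example and do not correspond to Hitachi software.

# Conceptual sketch of HDP-style thin allocation with 42 MB pool pages.
PAGE_MB = 42

class HDPPool:
    def __init__(self):
        self.next_page = 0
    def allocate_page(self) -> int:
        self.next_page += 1            # pages are wide-striped across all pool devices
        return self.next_page

class DPVol:
    def __init__(self, logical_size_gb: int):
        self.logical_pages = (logical_size_gb * 1024) // PAGE_MB
        self.mapped = {}               # page index -> pool page id

    def write(self, page_index: int, pool: HDPPool) -> None:
        if page_index not in self.mapped:      # allocate only on first write
            self.mapped[page_index] = pool.allocate_page()

    def allocated_mb(self) -> int:
        return len(self.mapped) * PAGE_MB

pool, vol = HDPPool(), DPVol(logical_size_gb=1024)   # 1 TB logical DPVOL
vol.write(0, pool); vol.write(7, pool)
print(vol.allocated_mb())              # 84 MB physically allocated so far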


Adaptive Data Reduction (ADR)
Adaptive Data Reduction (ADR) adds controller-based compression and deduplication to HDP, greatly increasing the
effective storage capacity presented by the array. A lossless compression algorithm is used to reduce the number of
physical bits needed to represent the host-written data. Deduplication removes redundant copies of identical data
segments and replaces them with pointers to a single instance of the data segment on disk. These capacity saving
features are supported in conjunction with HDP, so that only DPVOLs can have either compression or deduplication
plus compression enabled. (Deduplication without compression is not supported). Each DPVOL has a capacity
saving attribute, for which the settings are “Disabled”, “Compression”, or “Deduplication and Compression”. DPVOLs
with either capacity saving attribute set are referred to as Data Reduction Vols (DRDVOLs). The deduplication scope
is at the HDP pool level for all DRDVOLs with the “Deduplication and Compression” attribute set.

The data reduction engine uses a combination of inline and post-process methods to achieve capacity saving with
the minimum amount of overhead to host I/O. Normally with HDP, each DPVOL is made up of multiple 42 MB
physical pages allocated from the HDP pool. But with data reduction enabled, each DRDVOL is made up of 42 MB
virtual pages. If a virtual page has not yet been processed for data reduction, it is identified as a non-DRD virtual
page and is essentially a pointer to an entire physical page in the pool. After data reduction, the virtual page is
identified as a DRD virtual page and it then contains pointers to 8 KB chunks stored in different physical pages in the
HDP pool. The initial data reduction post-processing is done to non-DRD virtual pages that have not had write
activity in at least five minutes. The non-DRD virtual page is processed in 8 KB chunks and compressed data are
written in log-structured fashion to new locations in the pool, likely one or more new physical pages. If enabled for
the DRDVOL, deduplication is then performed on the compressed data chunks, so that duplicate chunks are
invalidated (after a hash match and bit-by-bit comparison) and replaced with a pointer to the location of the physical
chunk. Garbage collection is done in the background to combat fragmentation over time by coalescing the pockets of
free space resulting from invalidated data chunks. Subsequent rewrites to already compressed data are then
handled purely inline for best performance.
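The deduplication step described above (a hash match followed by a bit-by-bit comparison before the duplicate is replaced with a pointer) can be sketched conceptually as follows. This is an illustrative Python model, not SVOS code; the 8 KB chunk size and the hash-then-verify check come from the text, everything else is invented for the example.

# Conceptual dedup sketch: hash 8 KB chunks, verify byte-for-byte on a hash hit.
import hashlib

CHUNK = 8 * 1024
store = {}        # digest -> chunk bytes (stands in for the single stored instance)
chunk_map = []    # logical chunk order -> digest ("pointer" to the stored chunk)

def ingest(data: bytes) -> None:
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in store and store[digest] == chunk:   # hash hit + verification
            chunk_map.append(digest)                     # duplicate: keep a pointer only
        else:
            store[digest] = chunk                        # unique: keep one physical copy
            chunk_map.append(digest)

ingest(b"A" * CHUNK * 3 + b"B" * CHUNK)
print(len(chunk_map), "logical chunks,", len(store), "stored")   # 4 logical, 2 stored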

A major advantage of Hitachi’s approach to data reduction is the ability to customize settings for each LUN. For
example, compression (but not deduplication) could be configured for LUNs on which a database’s unique
checksums in each 8K block could make deduplication less effective. Both compression and deduplication could be
enabled for LUNs hosting virtual desktops, which may contain multiple copies of the same data. And if an application
encrypts or compresses its data on the host, making additional capacity savings impossible, then ADR can be
completely disabled on the affected LUNs to avoid unnecessary overhead. The flexibility of Hitachi’s ADR allows
capacity savings to be obtained on appropriate data, with the least possible impact to host I/O.


Front End Configuration
Two options for host connectivity are currently offered—fibre channel and iSCSI. The VSP E590 and VSP E790
support 1-3 channel board pairs, to be installed in the rear of the chassis as shown in Figure 1. (Protocol types must
be installed symmetrically between controller 1 and controller 2).

Figure 1. VSP E590 and VSP E790 CHB locations

Channel boards must be installed in pairs. The FC CHBs support transfer rates up to 3200 MB/s. iSCSI CHBs
support transfer rates up to 1,000 MB/s. Additional details about the CHBs are shown in Table 2 below.

Table 2. Host Connectivity Options

CHB Transfer Rate CHBs Per System Ports Per CHB Ports Per System
16 Gb Fibre Channel 400/800/1600 MB/s 2/4/6 4 8/16/24
32 Gb Fibre Channel 800/1600/3200 MB/s 2/4/6 4 8/16/24
Fibre iSCSI 1000 MB/s 2/4/6 2 4/8/12
Copper iSCSI 100/1000 MB/s 2/4/6 2 4/8/12

VSP Ex90 FC ports operate in universal (also called bi-directional) mode. A bi-directional port can simultaneously
function as a target (for host I/O or replication) and initiator (for external storage or replication), with each function
having a queue depth of 1,024.


The VSP E590 and E790 Controllers
Processing power and high-speed connectivity are at the heart of the 2U VSP E series dual-controller system (as
shown in Figure 2). Let’s begin with the controller’s connection to the channel boards. The “A” and “C” channel
board slots connect to the controller via 16 x PCIe Gen3 lanes, and thus have 32 GB/s of available bandwidth (16
GB/s send, and 16 GB/s receive). The “B” slot gets eight PCIe Gen3 lanes, and therefore has 16 GB/s of theoretical
bandwidth. Each controller has two multicore Intel Xeon CPUs, linked by two Ultra Path Interconnects (UPIs), each
supporting up to 10.4 gigatransfers per second. The two CPUs operate as a single multi-processing unit (MPU) per
controller. Like previous Hitachi enterprise products, all VSP E series processors run a single SVOS image and
share a global cache. Cache is allocated across individual controllers for fast, efficient, and balanced memory
access.

Figure 2. VSP E590 and VSP E790 Controller Block Diagram

Each CPU has six memory channels for DDR4 memory, providing as much as 115 GB/s per CPU of theoretical
memory bandwidth (up to 230 GB/s per controller, and 460 GB/s per system). Data and command transfers between
controllers are done over the two non-transparent bus (NTB) connections, which together are allocated a total of 16
x PCIe Gen3 lanes. All VSP E series arrays feature two NTB connections between controllers, thus avoiding any


single failure point for this critical component. Finally, each controller’s CPUs are connected via an embedded PCIe
switch to the NVMe SSDs. Each controller can establish a point-to-point connection to each of up to twenty-four
drives over a 2-lane, 4 GB/s PCIe Gen3 bus.

The primary difference between VSP E790 and VSP E590 is processing power. The VSP E790 comes equipped with
four 16-core CPUs, while the VSP E590 has four 6-core processors. Our testing shows that the VSP E790 has
enough processing power to approach the full IOPS potential of the fast NVMe drives. For example, on the VSP
E790 we measured 3.64 million 8 KiB cache-miss random read IOPS from eight SSDs, with a response time of 0.51
milliseconds. On the VSP E590, the same test yielded 1.34 million IOPS with a 0.53 millisecond response time. With
the same drive configuration as the E590, the E790 could deliver 2.7X more IOPS because of its high-powered
CPUs. Of course, the VSP E590’s 1.34 million cache-miss random read IOPS will be more than sufficient for many
applications.

Encrypting controllers (eCTLs) are optionally available for the VSP E790 and VSP E590. The eCTLs offload the work
of encryption to Field Programmable Gate Arrays (FPGAs). The FPGAs are connected by a PCIe switch positioned
between the controller CPUs and the flash drives. The FPGAs allow FIPS 140 level 2 encryption to be done with little
or no performance impact.

Logical Devices and MPUs
The CPUs of the Ex90 controllers are logically organized into multi-processing units (MPUs). There is one MPU per
controller, thus two MPUs per system. When logical devices (LDEVs) are provisioned, LDEV ownership is assigned
(round-robin by default) to one of the two MPUs. The assigned MPU is responsible for handling I/O commands for
the logical devices it owns. Any CPU core in the MPU can process I/O for an LDEV assigned to that MPU. Therefore,
all of the array’s processing power can be leveraged by distributing a workload across a minimum of two LDEVs (one
per MPU). However, SVOS multiprocessing can run a bit more efficiently with a larger quantity of logical devices as
discussed in this HDP blog.
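As a toy illustration of the default round-robin ownership described above, the snippet below assigns LDEVs alternately to the two MPUs; the MPU labels are invented for the example.

# Sketch of round-robin LDEV ownership across the two MPUs (one per controller).
from itertools import cycle

MPUS = ["MPU-CTL1", "MPU-CTL2"]

def assign_ownership(ldev_ids):
    rr = cycle(MPUS)                      # default round-robin assignment
    return {ldev: next(rr) for ldev in ldev_ids}

print(assign_ownership(["00:01", "00:02", "00:03", "00:04"]))
# {'00:01': 'MPU-CTL1', '00:02': 'MPU-CTL2', '00:03': 'MPU-CTL1', '00:04': 'MPU-CTL2'}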

Cache Architecture
The importance of storage array cache (i.e., Dynamic Random Access Memory or DRAM) may have diminished in
the era of NVMe flash drives. However, I/O cache can still provide a significant performance boost for several
reasons. First, DRAM is faster than flash, at least by an order of magnitude. When a host requests cache-resident
data (sometimes called a cache “hit”) the command can be completed with the lowest possible latency. For example,
our 8KiB random read hit testing on VSP E590 measured response times as low as 66 microseconds (0.066


milliseconds). The fastest response time observed in the 8 KiB random read miss testing was about 4X higher at 250
microseconds. Reading data from cache is not only advantageous because DRAM is the fastest medium, but also
means that the request for data is satisfied with no additional address lookups or back end I/O commands. And when
it comes to optimizing performance, Hitachi isn’t content to simply cache the most recently accessed data. I/O
patterns on each LDEV are periodically analyzed to identify the data most likely to be accessed repeatedly. Cache is
preferentially allocated to such blocks. Meanwhile, areas of each LUN identified as having the lowest probability of a
cache hit will not have any cache allocated. Instead, such data are sent to the host through a transfer buffer, thereby
saving the overhead of allocating a cache segment.

Cache also enhances write performance. After writes have been mirrored in both controllers’ DRAM to protect
against data loss (but before data have been written to flash) the host is sent a write acknowledgment. The quick
response to writes allows latency-sensitive applications to operate smoothly. Newly written data are held in cache for
a while, to allow for related blocks to be aggregated and written to flash together in larger chunks. This “gathering
write” algorithm reduces the need for parity operations, thereby bringing down controller and drive busy rates and
improving overall response time.

Back End Configuration
The VSP E590 and VSP E790 both have an all-NVMe back end integrated into the controller chassis, which makes
configuration simple and straightforward. As noted earlier, up to twenty-four NVMe SSDs may be installed. Table 3
lists the supported drive types, and Table 4 shows the available RAID configurations.

Table 3. NVMe SSD Options

Name Type Capacity


19RVM NVMe SSD 3DWPD 1.9TB
38RVM NVMe SSD 1DWPD 3.8TB
76RVM NVMe SSD 1DWPD 7.6TB
15RVM NVMe SSD 1DWPD 15TB
30RRVM NVMe SSD 1DWPD 30TB

Table 4. Supported RAID Configurations

RAID Type Configurations


RAID-10 2D+2D, 2D+2D Concatenation
RAID-5 3D+1P, 4D+1P, 6D+1P, 7D+1P, 7D+1P Concatenation
RAID-6 6D+2P, 12D+2P, 14D+2P


Hitachi storage has often been configured with RAID-6 6D+2P, or perhaps RAID-6 14D+2P for data protection and
good capacity efficiency. However, if any spare drives are to be allocated in the 2U E series arrays, only a single
14D+2P or two 6D+2P groups could be created. We therefore tested an asymmetrical configuration with I/O
distributed across one 6D+2P parity group and one 12D+2P parity group in a single HDP pool--a configuration which
offers RAID-6 data protection, good capacity efficiency, with one or two spare drives. We found no difference in
performance between the asymmetrical configuration that allows for spare drives, and a symmetrical configuration
with three RAID-6 6D+2P parity groups.
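For reference, the capacity efficiency of the layouts discussed above is simply the ratio of data drives to total drives. The small sketch below computes it for a 24-slot chassis; it is an arithmetic illustration, not a sizing tool.

# Capacity efficiency (data drives / total drives) for 24-slot RAID-6 layouts.
layouts = {
    "1 x 14D+2P (leaves room for spares)": [(14, 2)],
    "2 x 6D+2P (leaves room for spares)": [(6, 2)] * 2,
    "3 x 6D+2P (no spares in 24 slots)": [(6, 2)] * 3,
    "6D+2P + 12D+2P (asymmetrical, 1-2 spares)": [(6, 2), (12, 2)],
}
for name, groups in layouts.items():
    data = sum(d for d, _ in groups)
    total = sum(d + p for d, p in groups)
    print(f"{name}: {data}/{total} drives hold data = {data / total:.0%}")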

We have briefly introduced the architecture of the Virtual Storage Platform E590 and E790. For more information on
Hitachi’s implementation of NVMe, and other exemplary features of the VSP E series, see Hu Yoshida’s recent blog
entitled “Unique VSP Capabilities Which Were Not Noted in the Gartner Report.” Also see the VSP E Series page.

#FlashStorage
#ThoughtLeadership
#Blog



VSP Midrange Update for 2023
VSP G350/E590/E790/E1090

Kevin Koplar
Technical Specialist – Core Storage
February 2023

© Hitachi Vantara LLC 2023. All Rights Reserved.


Agility
Support today’s and tomorrow’s
business needs with a digital
infrastructure that can be upgraded and
scaled to meet the needs of any data-
driven workload.

© Hitachi Vantara LLC 2023. All Rights Reserved.


The Current Virtual Storage Platform Family

Enterprise class storage solutions that won't break the bank:
• High performance
• NVMe & SAS capacity
• Simple self-install
• Common management

The most powerful enterprise scale-out, future-proof data platform:
• Industry-leading performance
• Scale up and scale out
• Data-in-place upgrades
• Common operating system

© Hitachi Vantara LLC 2023. All Rights Reserved.


Enterprise Class Storage for Everyone

Common Operating System


Common Ops Center Software Portfolio

Entry: entry point to VSP, price sensitive, SAS media
Entry NVMe: system performance, SAS capacity expansion, SAS or NVMe
High-density: high port count, SAS capacity expansion, scale-up, SAS and/or NVMe media
Flexible high performance: Open & Mainframe, hardware offload, SAS or NVMe
Enterprise scale-out: Open & Mainframe enterprise, hardware offload, SAS and/or NVMe

© Hitachi Vantara LLC 2023. All Rights Reserved.


Powered by the Most Advanced Software

Resilient Scale-Out, Metro Clustering, App-Aware Recovery, Multisite Continuity, Predictive Support
Intelligent QoS, Flash IO Offload, AI-Enabled IO Pathing, Adaptive Tiering, Data Reduction
Encrypted Retention, Agile Key Management, Advanced Data Erasure, Hardened Access
© Hitachi Vantara LLC 2023. All Rights Reserved.
VxWorks OS

Mars Pathfinder Boeing 787


SpaceX Dragon

BMW iDrive ABB Industrial Robots MRI and CT Medical Screening

© Hitachi Vantara LLC 2023. All Rights Reserved.


Legendary Hitachi Resilience

30 Years of Industry-Leading Reliability

§ Industry’s first, best and most trusted 100% Data Availability Guarantee
§ Hitachi remote support ops brings 30 years of knowledge to predict and prevent downtime
§ Enterprise class: industry benchmark for active-active metro clustering and trusted business continuity
§ Restart operations faster: application-aware recovery and copy data management to prevent data loss

Callouts: Availability, Global-Active Device, Cloud Support, 3 Data Center, 100% Data Availability Guarantee

© Hitachi Vantara LLC 2023. All Rights Reserved.


Key Partnerships and Alliances

§ AWS Premier Consulting Partner


§ Microsoft Global Managed Partner
§ Google Premier Partner
§ Veeam Storage Partner
§ Splunk Reference architecture
§ Veritas Partner of the Year
§ Commvault top OEM Partner
§ VMware Big Bet Partner
§ Cisco Industry Solutions Partner
§ WekaIO OEM Partner of the Year
§ Oracle Platinum Global Partner
§ SAP Platinum Partner, Gold Value Added

© Hitachi Vantara LLC 2023. All Rights Reserved.


Next Generation Applications

§ VSP E Series and VSP 5000 Series have Anthos


certification for Google Hybrid cloud infrastructure.

§ Hitachi Storage Plug-in for Containers integrates VSP


arrays with container-based applications for stateful
storage. It integrates with Red Hat OpenShift Container
Platform to provide persistent storage for stateful
container applications.

§ Hitachi Replication Plug-in for Containers is a container


“operator” that enables you to drive advanced data
services and replication.
Hitachi Storage Plug-in for
Containers

© Hitachi Vantara LLC 2023. All Rights Reserved.


Virtual Storage Platform E Series: Capacity Without Compromise

• Faster time to value with scale-out architecture
• Reduce storage costs with AI-driven data reduction
• Extend the life of assets with Modern Storage Assurance
• Public cloud integration with the provider of your choice
• Legendary Hitachi resilience with 100% data availability

© Hitachi Vantara LLC 2023. All Rights Reserved.
VSP E590 and E790 Hardware Controller (CTL)

#  Part                          Description
1  Controller Board (CTL)        Consists of CPU, DIMM and CFM. The CFM can be replaced without CTL removal. Only dual-CTL configurations are supported.
2  I/O Boards                    CHB: CHannel Board (FC / iSCSI). DKB: DisK Board (DKB or eDKB). HIE: Hitachi Interconnect Edge.
3  Cache Flash Memory (CFM)      Flash memory that backs up the DIMM data in the case of an electric power failure.
4  PS Unit (PSU)                 Power Supply Unit.
5  Backup Module (BKM)           Cache backup battery of the CTL, integrated with the PSU.
6  DIMM                          Cache memory.
7  Encryption FPGA               Not present in systems without encryption; can only be replaced by replacing the CTL.

Also shown: OOB management and maintenance boards (CTLMNE, CTLSNE) with Ethernet ports, and the NVMe drives. Per-controller slot layout: Slot A (CHB, 16x PCIe), Slot B (DKB or CHB, 8x PCIe), Slot C (HIE or CHB, 16x PCIe), plus Cache Flash Memory (CFM) and PSUs.
© Hitachi Vantara LLC 2023. All Rights Reserved.
Midrange Workhorse: VSP E1090

• High-end midrange offering
  ‒ Performance up to 8.4M IOPS, latency as low as 41 µs
  ‒ Capacity scales up to 2.9 PB all-NVMe or 25.9 PB SAS hybrid for multi-workload consolidation
  ‒ 64-core Cascade Lake-based controller modules
  ‒ New compression offload engine and algorithm
  ‒ All-NVMe, or SAS SSD and HDD environments

• Simplified management and lower TCO
  ‒ Simplified management: embedded GUI, SVP-less
  ‒ Quick self-install
  ‒ Non-disruptive data-in-place upgrade to next-generation controllers
  ‒ Controller upgrade program under Hitachi Modern Storage Assurance

At a glance: 8.4M IOPS, dual-controller with 64 cores, 1 TB memory, compression offload engine, 25.9 PB max raw capacity.

© Hitachi Vantara LLC 2023. All Rights Reserved.


General Configuration of the VSP E1090

Front end (per CHB): Fibre Channel 32 Gb/s (4 ports), iSCSI SFP 10 Gb/s (2 ports), or iSCSI 10Base-T 1/10 Gb/s (2 ports).

Controller to back-end drives: PCIe Gen3 backplane for NVMe, or SAS.

Drive boxes:
• 2U DBN: 24 x NVMe SSD SFF
• 2U DBL: 12 x NL-SAS LFF
• 2U DBS2/DBS: 24 x SAS SSD / SAS SFF
• 4U DB60: 60 x SFF/LFF (SAS SFF, NL-SAS LFF)

© Hitachi Vantara LLC 2023. All Rights Reserved.


VSP E series specifications at-a-glance

Segment:                E790: Hi-Mid midrange storage platform | E590: Low midrange storage platform | E1090: High-end midrange storage platform
Model:                  E790/E590: NVMe all-flash array & SAS hybrid | E1090: NVMe all-flash array or SAS hybrid
Software:               Base / Advanced (all models)
Enhanced capabilities:  E790/E590: NVMe and SAS expansion | E1090: Compression Accelerator; NVMe or SAS expansion
Footprint:              E790/E590: NVMe: 2U controller; hybrid: 2U controller + 2x DBS2 + 8x DBxx | E1090: NVMe: 4U controller + 4x 2U DBN; SAS: 4U controller + 4x DBS2 + 32x DBxx
Nodes/Controllers:      1 node / 2 controllers (all models)
Processor cores
(Cascade Lake):         E790: 64 cores | E590: 24 cores | E1090: 64 cores
Cache:                  E790: 768GB | E590: 768GB or 384GB | E1090: 1024GB
Drive qty:              E790/E590: 24 x NVMe SSD, 240 x SFF SAS, 480 x LFF SAS | E1090: 96 x NVMe SSD, 864 x SFF SAS, 960 x LFF SAS
Max capacity (raw):     E790/E590: 722TB NVMe SSD; 8.9PB with SAS expansion | E1090: 2.888PB NVMe SSD; 25.7PB with SAS expansion
Front-end ports
(FC/iSCSI):             E790/E590: 24/12 (32G FC, 10GbE) | E1090: 80/40 (32G FC, 10GbE)
Supported media:        NVMe and/or SAS SSD (1.9TB, 3.8TB, 7.8TB, 15TB, 30TB); SAS HDD (2.4TB, 6TB, 10TB, 14TB, 18TB)

© Hitachi Vantara LLC 2023. All Rights Reserved.
Document “Golden Nuggets”

• Hardware Architecture Guides

• https://community.hitachivantara.com/blogs/charles-lofton/2021/04/19/introduction-to-
virtual-storage-platform-e590-and-e790-architecture
• https://community.hitachivantara.com/blogs/sudipta-kumar-
mohapatra/2022/03/24/introduction-to-virtual-storage-platform-e1090-arc

© Hitachi Vantara LLC 2023. All Rights Reserved.


Midrange Transition Overview

EOS Platforms: VSP E990, VSP G/F900, VSP G/F700, VSP G/F370
Current Midrange Portfolio: VSP E1090, VSP E790, VSP E590

Continuing Platforms: VSP G/F350, VSP G130

© Hitachi Vantara LLC 2023. All Rights Reserved.


Virtual Storage
Scale Out

Scale-up Scale-out
Cluster your VSP E-series to scale performance and capacity.

Multi Controller Architecture


Make the most reliable even more resilient with 100% data availability
guaranteed.

Simple Configuration
Manage all nodes from a single management GUI.

Future Proof
Intermix NVMe and SAS technology to get the best value for your
business.

© Hitachi Vantara LLC 2023. All Rights Reserved.


VSP E
Virtual Storage Scale Out

Smart provisioning writes across the cluster. Intermix VSP E590, E790 and E1090 controllers with NVMe SSD, SAS SSD and SAS hybrid nodes in one cluster, under unified management, AIOps and cloud-based monitoring.
© Hitachi Vantara LLC 2023. All Rights Reserved.
Virtual Storage Scale Out
Loosely Coupled Clustered Systems

Geo-dispersed servers/clusters with scale-out storage: the nodes of Cluster 1 and Cluster 2 present the same virtual storage identity (12345) and the same virtual LDEVs (for example 00:01), which map to different physical LDEVs on each cluster (10:01, 10:02 on Cluster 1; 20:01, 20:02 on Cluster 2).

© Hitachi Vantara LLC 2023. All Rights Reserved.


Optimize Utilization Across Systems,

by non-disruptively moving workloads with data,


using Unified Management, AIOps & Cloud Monitoring

Virtual Storage Scale Out (VSSO)

© Hitachi Vantara LLC 2023. All Rights Reserved.


Modern Storage
Assurance.

© Hitachi Vantara LLC 2023. All Rights Reserved.


Overview

WHAT
Hitachi Modern Storage Assurance is
a market competitive investment
protection and platform
lifecycle extension program allowing
customers to secure future upgrade
paths onto newer generation
controllers/platforms.

WHY HOW
Many customers seek to maximize Hitachi Modern Storage Assurance is our
their storage investments by successor controller upgrade program
extending their storage platform designed to extend the lifecycle of a
usage as long as possible. storage investment by modernizing and
They try to avoid the hassles of upgrading the controller without a
seeking additional funding for costly forklift upgrade or downtime.
system upgrades in the future.
Hitachi Modern Storage Assurance is
a data-in-place upgrade, protecting
customer data and media investment.
© Hitachi Vantara LLC 2023. All Rights Reserved.
An Introduction to Modern Storage Assurance

Next Gen
VSP E1090

Gives you the flexibility to stay


modern and agile for new
VSP E Series models

Non-disruptively take
advantage of increased
performance and new features

© Hitachi Vantara LLC 2023. All Rights Reserved.


Non-Disruptive
Eliminate complicated data
migration and forklift upgrades with
data-in-place migration

All Inclusive
Get the ability to upgrade within 1-5
years

Stay Modern
Accelerate applications with the
next generation storage architecture

© Hitachi Vantara LLC 2023. All Rights Reserved.


Midrange Assurance

• No single point of failure during upgrade
• Upgrade after 12 months
• No renewal requirement
• Eliminate workload migration
• Support for all-flash and hybrid
• Available on standard warranty
• Supporting open systems
• 100% non-disruptive upgrade
© Hitachi Vantara LLC 2023. All Rights Reserved.
Total Cost of Ownership

Modern Storage Assurance delivers


30% TCO saving over
10-years

§ Based on $-per-TB Price erosion over time


§ Remove cost of migration
§ Eliminate future infrastructure uncertainty

© Hitachi Vantara LLC 2023. All Rights Reserved.


Modern Storage Assurance Lifecycle (5-Year Example)
5-Year Maintenance & Modern Storage Assurance

Year 0: New VSP E1090 system with 5 years of Maintenance & Modern Storage Assurance.
Year 3: Upgrade to the successor model (no additional charge).
Year 5: Original maintenance term ends.

VSP E1090 -> Successor Model -> Successor Model

© Hitachi Vantara LLC 2023. All Rights Reserved.


Adaptive Data Reduction (ADR)

In the Real World E-Series Rocks


It!

So simple and Effective


© Hitachi Vantara LLC 2023. All Rights Reserved.
VSP E1090 Compression Accelerator Module

• Repurposed FMD compression ASIC (NFA), with a better compression algorithm
• 2 NFAs per module
• Occupies repurposed battery space in the fan module
• 2 modules per controller

(Diagram labels: impact connector, fan connector, compression module, fan.)
© Hitachi Vantara LLC 2023. All Rights Reserved.
Compression Accelerator Module I/O Flow

• The addition of the hardware accelerator is seamless to the operation of ADR.
• It is essentially a plugin to the current ADR architecture.

Write data flow (host, LDEV cache, compression, log-structured area, drive):
1. Receive write data into cache.
2. Reply to the host.
3. Read the logical-to-physical address map.
5. Destage to the log-structured area on disk.
6. Update the address map.

Compression is performed by either the software compression module or the hardware Compression Accelerator; compression algorithms: ① software, ② Accelerator (LZ-based V1). The metadata area is held in memory.

© Hitachi Vantara LLC 2023. All Rights Reserved.


Compression algorithms

#  Model                                    Compression Algorithm    Compression rate (8KB)    Compression Ratio (N:1)
1  VSP 5000 Series / VSP E Series           LZ4 (software based)     56%                       1.8:1
2  VSP 5200/5600 Series Upgrade and E1090   LZ-based V1              45%                       2.2:1

Compression algorithms    VSP 5000 Series / VSP E Series    VSP 5200/5600 Series Upgrade
LZ4                       ✓ software                        ✓ software
LZ-based V1               ―                                 ✓ (ASIC)

✓: Supported   ―: Not Supported

The Compression Accelerator Module increases compression efficiency.
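Assuming the "compression rate (8KB)" column is the compressed size as a fraction of the original 8 KB (which matches the table's numbers), the N:1 ratio is simply its reciprocal, as the quick Python check below shows.

# Relationship between compression rate and N:1 ratio in the table above.
for algo, rate in [("LZ4 (software)", 0.56), ("LZ-based V1 (ASIC)", 0.45)]:
    print(f"{algo}: ratio ~ {1 / rate:.1f}:1")
# LZ4 (software): ratio ~ 1.8:1
# LZ-based V1 (ASIC): ratio ~ 2.2:1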

© Hitachi Vantara LLC 2023. All Rights Reserved.
Adaptive Data Reduction (ADR)
ADR Best Practice-USE IT!

Mode 1191 set to ON

Quote 2:1 NQA as standard


practice
© Hitachi Vantara LLC 2023. All Rights Reserved.
2:1 No Questions Asked Selling Model

• Always quote a 2:1 data reduction ratio.
• 2:1 deals carry a "no signature will be required" warranty.
• All VSP E series and VSP 5200/5600 are supported; VSP E590 and E790 were added January 4.
• Remediation will be expected; ~10% of deals may require some remediation.
• 1:1 quotes will have to be approved.
• A few countries still require a signature.
© Hitachi Vantara LLC 2023. All Rights Reserved.


Why Do You Care ?

01. Takes risk away from the customer
• The customer does not have to worry about a shortage of storage in case the data reduction ratio is not met; Hitachi Vantara will remediate.

02. Customer satisfaction
• Customer interaction with Global Support is minimal.
• The remediation process is simpler.

03. Enhance margin
• Focus on effective capacity rather than usable capacity.
• Cloud-like selling, focused on customer benefit instead of hardware cost.

Risk Reduction : Customer gets the benefits of effective capacity without the risk.

© Hitachi Vantara LLC 2023. All Rights Reserved.


Self Driving AIOps Enabled
Infrastructure.

© Hitachi Vantara LLC 2023. All Rights Reserved.


The Shift to Self-Driving, AI-Enabled Operations

Traditional data center callouts: management sprawl, manual processes, resource delivery, IT staff, incident resolution, workload repatriation.

© Hitachi Vantara LLC 2023. All Rights Reserved.


Storage Management Flexibility
Management options that grow with your business

General Storage Management Enterprise Storage Management

• VSP Embedded Management • Hitachi Ops Center


‒ Included with E-series ‒ Large deployments, multiple VSP arrays
‒ Small deployment, single VSP array ‒ Real-time, telemetry monitoring
‒ Simple setup & rapid provisioning ‒ Automated workflows and orchestration
‒ Default configurations for simplicity ‒ Enterprise data copy management
‒ Ideal for IT generalists ‒ Mission critical, high availability
• Hitachi Ops Center Clear Sight operations
‒ Cloud-based monitoring

Fast storage deployment, simplified system configurations, high-availability enterprise storage, advanced management requirements.
© Hitachi Vantara LLC 2023. All Rights Reserved.
Flexible Management Options
Embedded or OpsCenter Suite

Rapid Storage Provisioning


1 Provision VSP E Series storage resources
in seconds

Timely Storage Monitoring


2 Near real-time storage system
performance monitoring

Easily Switch to Hitachi Ops


Center
3 Seamless transition to Ops Center
Administrator to accommodate larger
VSP environments

Hitachi Ops Center Management Suite: Administrator, Analyzer, Automator, Protector
© Hitachi Vantara LLC 2023. All Rights Reserved.


Software should NOT be OPTIONAL Why?
It Assists in Strategic Control of your Hardware Solution

• Modern Data Protection: Ops Center Protector (enterprise copy data management).
• Integrated Self-Driving Management: Hitachi Ops Center Suite.
  ‒ Cloud-based AIOps: Ops Center Clear Sight, powered by Hitachi Remote Ops; cloud-based storage monitoring across VSPs.
  ‒ On-premises AIOps: Ops Center Analyzer & Automator; real-time telemetry monitoring with automated remediation.
  ‒ Flexible Storage Management: VSP embedded management and Ops Center Administrator; rapid array setup, with a move to Hitachi Ops Center for advanced management capabilities.
• Self-Driving Infrastructure
© Hitachi Vantara LLC 2023. All Rights Reserved.


VSP E1090 Positioning vs. Top Competitors

VSP E1090 replaces VSP E990.

The positioning chart maps the Hitachi portfolio (VSP 5100/5200 and VSP 5500/5600 2N/4N/6N at the high end; VSP E1090, E990, E790 and E590 plus VSP G/F900, G/F700, G/F3x0 and G/F130 in the midrange) against competitors across segments, including Dell EMC PowerMax 8000/2000 and PowerStore 1000T-9000T, Pure Storage //X10-//X90 and XL170, IBM FlashSystem 5100-9200, NetApp AFF 400/800, and Huawei OceanStor Dorado 3000-18000.

© Hitachi Vantara LLC 2023. All Rights Reserved.


Modern Digital Infrastructure For Apps and Data

Solution areas: hybrid & multicloud, private cloud, mission-critical apps, data analytics.

• Simplified aaS consumption: Hitachi Enterprise Cloud, STaaS and DPaaS; EverFlex purchase, lease or consumption pricing.
• Cloud, converged, hyperconverged and rack-scale offerings: Unified Compute Platform CI, HC and RS, plus Cisco and Hitachi Adaptive Solutions for CI.
• AIOps management and orchestration: Administrator, Analyzer and Automator software.
• Virtual Storage Software (software-defined storage) and the Virtual Storage Platform, with a scalable object and file storage portfolio.

© Hitachi Vantara LLC 2023. All Rights Reserved.




Thank
You

44 © Hitachi Vantara LLC 2023. All Rights Reserved.


© Hitachi Vantara LLC 2023. All Rights Reserved.
