DATASHEET
Built-in intelligent filesystem cloud tiering provides mobility and control in the cloud by transparently migrating data
between local and external cloud tiers using a policy-based business rules engine. With an improved file system extension to
the cloud, the HNAS 5000 series enables snapshot archiving and leverages cost-effective cloud capacity to store stale or less
active data, while preserving transparent local access for users and applications.
Tiered file system (TFS) separates file system metadata from user data. TFS automatically places file system metadata on a
higher performance storage tier to increase file system performance while also providing cost efficiency.
Enhanced object replication with throughput throttling improves quality of service (QoS). Deduplication support targets and
eliminates rehydration of embedded cross-volume links, and up to three-data-center configurations are supported, powered
by our global-active device metro clustering.
The cluster namespace feature creates a unified directory structure across storage pools and controllers. Multiple file systems
appear under a common root, and both server message block (SMB) and network file system (NFS) clients obtain global
access. EVS server farm migration enables tenant mobility across namespace and servers with shared storage.
*Available post GA feature support/enhancement; contact your Hitachi representative for more details
Best-in-class Efficiency
We live in a hybrid world: every data center uses cloud and on-prem storage and needs easier, more efficient ways to
move data between the two. Our HNAS 5000 series enables a broad range of efficiency technologies that deliver
maximum value and more predictable ongoing costs. Our direct cloud connect functionality transparently moves file
data to your choice of content repository or cloud service (Hitachi Content Platform, Amazon S3, IBM Cloud Object Storage
or Microsoft Azure), delivering an unparalleled reduction in on-site storage costs and more predictable ongoing
storage costs. All services are selectable and can be activated for specific workloads, giving you maximum control over
efficiency and performance.
A new advantage of Hitachi’s enterprise scale-out file system is our onode and file packing* for small files. Every object
in a file system contains at least one small onode, which is usually stored alone in a whole filesystem block,
wasting space. With onode and file packing, up to eight onodes or files can be packed into one filesystem
block, saving space, reducing latency and cutting the number of writes to storage. This new feature allows you to
optimize small-file workloads with the performance, efficiency and simplicity your business strives for.
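As a back-of-the-envelope illustration of the packing benefit, the sketch below assumes a 4 KiB filesystem block and treats every file as small enough to pack; both are assumptions for illustration, not HNAS specifications.

```python
# Hypothetical illustration of the space saving from packing small
# onodes/files into shared filesystem blocks. The block size and file
# count are assumed values, not HNAS specifications.

BLOCK_SIZE = 4096        # assumed filesystem block size in bytes
PACK_FACTOR = 8          # up to eight onodes/files per block (per datasheet)

def blocks_needed(num_small_files: int, packed: bool) -> int:
    """Blocks consumed by small files, with and without packing."""
    if packed:
        # Up to eight small onodes/files share one block (ceiling division).
        return -(-num_small_files // PACK_FACTOR)
    # Unpacked: each small onode/file occupies a whole block.
    return num_small_files

files = 100_000
unpacked = blocks_needed(files, packed=False)   # 100,000 blocks
packed = blocks_needed(files, packed=True)      # 12,500 blocks
saving = 1 - packed / unpacked                  # 87.5% fewer blocks
print(f"{unpacked} -> {packed} blocks ({saving:.1%} saved)")
```

In the best case the arithmetic works out to roughly an 8:1 reduction in blocks consumed by small-file metadata and data, which is where the latency and write-count benefits also come from.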
In addition, HNAS offers linked, writable snapshot clones. With linked clones, thousands — even millions — of copies of
data sets are created very rapidly while using near-zero extra capacity. For highly virtualized environments, the ability to
create a standard “gold image” that can be used across virtual machines and desktops not only saves money, but also
reduces support and management costs.
• NAS object-based replication provides a fast and efficient means to replicate data over wide area networks,
improves recovery time objectives (RTOs) and facilitates simple failover and failback operations.
• Achieve fully synchronous active-active clustering across distances of up to 500 km, with automated takeover, when
implemented with global-active device metro clustering.
o Global-active device ensures continuous operations with nonstop data access so IT teams can meet
their disaster recovery objectives with dramatically reduced return-to-operations time.
o File system snapshot includes hidden snapshot folder read-only access and Microsoft Volume
Shadow Copy Service (VSS) recovery capability for local data protection of end user files and folders.
o File and directory clones enable the creation of capacity-efficient writable snapshots (clones) of files to
accelerate production data copies in testing and development and virtual server and virtual desktop
infrastructure environments. Directory clones extend the file cloning capability to directory trees to
enable protection or repurposing of applications and databases.
… … …
Protocol Support (5200 and 5300): NFS, SMB, FTP, iSCSI and HTTP (S3) to the cloud
Learn More
Hitachi Vantara
Corporate Headquarters Contact Information
2535 Augustine Drive USA: 1-800-446-0744
Santa Clara, CA 95054 USA Global: 1-858-547-4526
hitachivantara.com | community.hitachivantara.com hitachivantara.com/contact
HITACHI is a registered trademark of Hitachi, Ltd. VSP is a trademark or registered trademark of Hitachi Vantara LLC. Microsoft, Azure and Windows are trademarks or
registered trademarks of Microsoft Corporation. All other trademarks, service marks and company names are properties of their respective owners.
DS-SMatheson Dec 2020
09/11/23, 15:39 Introduction to Virtual Storage Platform E1090 Architecture
Contents
Executive Summary
Engineered for Performance
The VSP E1090 Controllers
Hardware Components
Front End Configuration
Back End Configuration
Performance and Resiliency Enhancements
Executive Summary
The Virtual Storage Platform (VSP) E1090 storage system is Hitachi’s newest midsized
enterprise platform, designed to utilize NVMe to deliver industry-leading performance and
https://community.hitachivantara.com/blogs/sudipta-kumar-mohapatra/2022/03/24/introduction-to-virtual-storage-platform-e1090-arc 1/10
Figure 1. The VSP E1090 offers flexible configuration options with industry-leading performance and 99.9999% availability
capability of the NVMe back end, with four DKBNs per controller supporting 3-4 DBNs and
up to 96 NVMe SSDs. The latter configuration would have 64 GB/s of back end bandwidth
per controller and up to 64 GB/s of front-end bandwidth per controller (if 4 CHBs per
controller were installed).
Like previous Hitachi enterprise products, all VSP E1090 processors run a single SVOS
image and share a global cache. Cache is distributed across individual controllers for fast,
efficient, and balanced memory access. Although the VSP E1090 hardware and microcode would permit a variety of cache
configurations, the only configuration offered is the maximum, 1 TB. All eight DIMM slots per controller are therefore
populated with 64 GB DDR4-2400 DIMMs, for a total of 153.6 GB/s of theoretical memory bandwidth per controller.
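The 153.6 GB/s figure follows from standard DDR4 arithmetic, assuming one DIMM per 64-bit memory channel:

```python
# Reproducing the quoted theoretical memory bandwidth from standard
# DDR4 arithmetic: transfers/s x bytes per transfer x channels.

MT_PER_S = 2400            # DDR4-2400: 2400 mega-transfers per second
BYTES_PER_TRANSFER = 8     # 64-bit DDR4 channel = 8 bytes per transfer
CHANNELS = 8               # eight populated DIMM slots per controller

per_channel_gbps = MT_PER_S * BYTES_PER_TRANSFER / 1000   # 19.2 GB/s
per_controller_gbps = per_channel_gbps * CHANNELS          # 153.6 GB/s
print(per_controller_gbps)
```

The same arithmetic with six channels per CPU reproduces the roughly 115 GB/s per-CPU figure quoted later for the VSP E590/E790.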
Hardware Components
While the VSP E1090 has a new and faster controller board, other basic hardware
components are shared with the VSP 5000 or VSP G900. CHBs are shared with VSP G900.
Up to six four-port 8/16/32 Gb FC or six two-port 10 Gb iSCSI CHBs per controller can be
installed. (Protocol types must be installed symmetrically between controller 1 and controller
2). For details on CHBs, see the VSP Gxx0 and Fxx0 Architecture and Concepts Guide.
DKBN adapters for the all-NVMe back end are shared with the VSP 5000, as are the NVMe
SSDs, which are available in five different capacities (1.9 TB, 3.8 TB, 7.6 TB, 15 TB, and 30
TB). The NVMe drive box (DBN) is also shared with VSP 5000. However, unlike the VSP
5000, which has strict rules about Parity Group configuration, the VSP E1090 DBN can be
ordered in quantities as small as a single tray. The E1090 also includes a new option for a
SAS back end. The SAS back end shares the same architecture and components as the VSP
G900.
A front end I/O expansion module (a common component with F900) is also available for
VSP E1090. As shown in Figure 4, two CHB slots per controller can be used to connect to as
many as four CHBs per controller in the expansion module. With the expansion module in
place, a diskless VSP E1090 could present up to 80 FC ports, or 40 iSCSI ports per system.
However, note that the eight CHB slots in the expansion module must share the PCIe
bandwidth of the four slots to which the expansion module is connected, which may limit
throughput for large-block workloads. See the VSP Gxx0 and Fxx0 Architecture and
Concepts Guide for more detail on the I/O expansion module.
Figure 4. The I/O Expansion Module Permits Installation of Up to Ten CHBs Per Controller
Because it is positioned as a midsized enterprise array, the VSP E1090 includes flexible Parity
Group configuration. Table 1 shows the supported Parity Group configurations, which can be built on any combination of
one to four drive trays.
Finally, encrypted DKBNs (eDKBNs) are optionally available for the VSP E1090. The eDKBNs
offload the work of encryption to Field Programmable Gate Arrays (FPGAs), as shown in
Figure 8. The FPGAs allow FIPS 140 level 2 encryption with little or no performance impact.
eDKBNs are also recommended for customers requiring the maximum non-ADR
sequential read throughput of 40 GB/s (which is only available in configurations having at
least three drive trays). The eDKBNs optimize PCIe block transfers, requiring fewer
DMA operations and improving non-ADR sequential read throughput.
• Upgraded controllers with 14% more processing power than the VSP E990 and 53% more
processing power than the VSP F900.
• Significantly improved ADR performance through Compression Accelerator Modules
(ACLF). See: E1090 ADR Performance using NVMe SSDs and E1090 ADR Performance
using SAS SSDs.
• An 80% reduction in drive rebuild time compared to earlier midsized enterprise
platforms.
• Smaller access size for ADR metadata reduces overhead.
• Support for NVMe allows extremely low latency, with up to 5X higher cache-miss IOPS
per drive.
We’ve briefly reviewed the highlights of the VSP E1090 architecture, including improvements in
performance, scalability and resiliency. For additional information, please visit the GPSE
Resource Library.
09/11/23, 15:38 Introduction to Virtual Storage Platform E590 and E790 Architecture
Introduction
The Virtual Storage Platform (VSP) E590 and E790 are the newest additions to Hitachi’s midsized enterprise
product line, engineered to utilize NVMe to deliver industry-leading performance in a small, affordable form factor.
The VSP E series (which also includes the VSP E990) features a single, flash-optimized Storage Virtualization
Operating System (SVOS) image operating on Intel Xeon multi-core, multi-threaded processors. Thanks primarily to
SVOS optimizations, the new VSP E Series models offer the highest performance available in a 2U form factor. In
this blog, we will examine how Hitachi put industry leading NVMe performance into a very small package.
The VSP E series arrays leverage the NVMe storage protocol, which was designed to take advantage of the fast
response times and parallel I/O capabilities of flash. NVMe provides a streamlined command set for accessing flash
media over a PCIe bus or storage fabric, allowing for up to 64K separate queues, each with a queue depth of 64K.
https://community.hitachivantara.com/blogs/charles-lofton/2021/04/19/introduction-to-virtual-storage-platform-e590-and-e790-architecture 1/10
Harnessing the full power of flash with the NVMe protocol requires an efficient operating system and advanced
processors, both of which are included with the latest VSP E series arrays. The new technologies included in the
VSP E series are complemented by Hitachi’s sophisticated, flash-optimized cache architecture, with its focus on data
integrity, efficiency, and performance optimization. Table 1 presents the basic specifications of the VSP E590 and
VSP E790.
Before covering the hardware in more detail, let’s begin with a review of two important SVOS features that are
common to all Hitachi storage products.
Hitachi Dynamic Provisioning (HDP)
Hitachi Dynamic Provisioning provides a mechanism for grouping many physical devices (NVMe SSDs in the VSP E
Series) into a single pool. The pool mechanism creates a structure of 42 MB pool pages from each device within the
pool. HDP then presents automatically managed, wide-striped block devices to one or more hosts. This is like the
use of a host-based logical volume manager (LVM) and its wide striping mechanism across all member LUNs in its
“pool”. The LUNs presented by an HDP pool are called Dynamic Provisioning Volumes (DPVOLs or virtual volumes).
DPVOLs have a user-specified logical size up to 256TB. The host accesses the DPVOL as if it were a normal
volume (LUN) over one or more host ports. A major difference is that disk space is not physically allocated to a
DPVOL from the pool until the host has written to different parts of that DPVOL’s Logical Block Address (LBA) space.
The entire logical size specified when creating that DPVOL could eventually become fully mapped to physical space
using 42 MB pool pages from every device in the pool.
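The page-on-first-write behavior described above can be sketched as follows; the class and names are invented for illustration and are not Hitachi's implementation:

```python
# Minimal sketch of HDP-style thin provisioning: physical 42 MB pool
# pages are mapped to a DPVOL only on the first write to that region.
# All structures here are illustrative stand-ins.

PAGE_MB = 42

class ThinVolume:
    def __init__(self, logical_size_mb: int):
        self.logical_size_mb = logical_size_mb
        self.page_map = {}          # logical page index -> physical page id
        self.next_physical = 0      # stand-in for pool page allocation

    def write(self, offset_mb: int):
        """Allocate a 42 MB pool page on the first write to that region."""
        page = offset_mb // PAGE_MB
        if page not in self.page_map:
            self.page_map[page] = self.next_physical
            self.next_physical += 1

    def allocated_mb(self) -> int:
        return len(self.page_map) * PAGE_MB

vol = ThinVolume(logical_size_mb=1_000_000)  # ~1 TB logical size
vol.write(0)     # first write -> one 42 MB page allocated
vol.write(10)    # lands in the same page, no new allocation
vol.write(500)   # different region -> second page allocated
print(vol.allocated_mb())  # 84
```

The logical size is just an upper bound: physical capacity is consumed page by page, only as the host writes across the DPVOL's LBA space.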
Adaptive Data Reduction (ADR)
Adaptive Data Reduction (ADR) adds controller-based compression and deduplication to HDP, greatly increasing the
effective storage capacity presented by the array. A lossless compression algorithm is used to reduce the number of
physical bits needed to represent the host-written data. Deduplication removes redundant copies of identical data
segments and replaces them with pointers to a single instance of the data segment on disk. These capacity saving
features are supported in conjunction with HDP, so that only DPVOLs can have either compression or deduplication
plus compression enabled. (Deduplication without compression is not supported). Each DPVOL has a capacity
saving attribute, for which the settings are “Disabled”, “Compression”, or “Deduplication and Compression”. DPVOLs
with either capacity saving attribute set are referred to as Data Reduction Vols (DRDVOLs). The deduplication scope
is at the HDP pool level for all DRDVOLs with the “Deduplication and Compression” attribute set.
The data reduction engine uses a combination of inline and post-process methods to achieve capacity saving with
the minimum amount of overhead to host I/O. Normally with HDP, each DPVOL is made up of multiple 42 MB
physical pages allocated from the HDP pool. But with data reduction enabled, each DRDVOL is made up of 42 MB
virtual pages. If a virtual page has not yet been processed for data reduction, it is identified as a non-DRD virtual
page and is essentially a pointer to an entire physical page in the pool. After data reduction, the virtual page is
identified as a DRD virtual page and it then contains pointers to 8 KB chunks stored in different physical pages in the
HDP pool. The initial data reduction post-processing is done to non-DRD virtual pages that have not had write
activity in at least five minutes. The non-DRD virtual page is processed in 8 KB chunks and compressed data are
written in log-structured fashion to new locations in the pool, likely one or more new physical pages. If enabled for
the DRDVOL, deduplication is then performed on the compressed data chunks, so that duplicate chunks are
invalidated (after a hash match and bit-by-bit comparison) and replaced with a pointer to the location of the physical
chunk. Garbage collection is done in the background to combat fragmentation over time by coalescing the pockets of
free space resulting from invalidated data chunks. Subsequent rewrites to already compressed data are then
handled purely inline for best performance.
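The compress-then-deduplicate flow described above can be sketched as follows; this is a simplified stand-in using zlib and SHA-256, not Hitachi's actual engine:

```python
# Illustrative sketch of ADR-style post-processing: 8 KB chunks are
# compressed, then deduplicated by a hash match confirmed with a full
# byte-for-byte comparison. Simplified for illustration only.

import hashlib
import zlib

CHUNK = 8 * 1024

def reduce_page(data: bytes):
    """Return (store, pointers): unique compressed chunks plus a
    per-chunk pointer list referencing a single stored instance."""
    store = {}        # digest -> compressed chunk (single instance)
    pointers = []
    for i in range(0, len(data), CHUNK):
        chunk = zlib.compress(data[i:i + CHUNK])
        digest = hashlib.sha256(chunk).hexdigest()
        # A hash match alone could collide, so confirm with a full
        # comparison before invalidating the duplicate.
        if digest in store and store[digest] == chunk:
            pointers.append(digest)   # duplicate: keep only a pointer
        else:
            store[digest] = chunk     # first instance: store it
            pointers.append(digest)
    return store, pointers

page = b"A" * CHUNK * 4 + b"B" * CHUNK * 2   # six chunks, two unique
store, ptrs = reduce_page(page)
print(len(ptrs), len(store))   # 6 chunks reduced to 2 stored instances
```

Garbage collection and the log-structured placement of compressed chunks are omitted here; the sketch only shows why a hash match plus full comparison is enough to safely replace a duplicate with a pointer.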
A major advantage of Hitachi’s approach to data reduction is the ability to customize settings for each LUN. For
example, compression (but not deduplication) could be configured for LUNs on which a database’s unique
checksums in each 8K block could make deduplication less effective. Both compression and deduplication could be
enabled for LUNs hosting virtual desktops, which may contain multiple copies of the same data. And if an application
encrypts or compresses its data on the host, making additional capacity savings impossible, then ADR can be
completely disabled on the affected LUNs to avoid unnecessary overhead. The flexibility of Hitachi’s ADR allows
capacity savings to be obtained on appropriate data, with the least possible impact to host I/O.
Front End Configuration
Two options for host connectivity are currently offered—fibre channel and iSCSI. The VSP E590 and VSP E790
support 1-3 channel board pairs, to be installed in the rear of the chassis as shown in Figure 1. (Protocol types must
be installed symmetrically between controller 1 and controller 2).
Channel boards must be installed in pairs. The FC CHBs support transfer rates up to 3200 MB/s. iSCSI CHBs
support transfer rates up to 1,000 MB/s. Additional details about the CHBs are shown in Table 2 below.
CHB Transfer Rate CHBs Per System Ports Per CHB Ports Per System
16 Gb Fibre Channel 400/800/1600 MB/s 2/4/6 4 8/16/24
32 Gb Fibre Channel 800/1600/3200 MB/s 2/4/6 4 8/16/24
Fibre iSCSI 1000 MB/s 2/4/6 2 4/8/12
Copper iSCSI 100/1000 MB/s 2/4/6 2 4/8/12
VSP Ex90 FC ports operate in universal (also called bi-directional) mode. A bi-directional port can simultaneously
function as a target (for host I/O or replication) and initiator (for external storage or replication), with each function
having a queue depth of 1,024.
Each CPU has six memory channels for DDR4 memory, providing as much as 115 GB/s per CPU of theoretical
memory bandwidth (up to 230 GB/s per controller, and 460 GB/s per system). Data and command transfers between
controllers are done over the two non-transparent bus (NTB) connections, which together are allocated a total of 16
x PCIe Gen3 lanes. All VSP E series arrays feature two NTB connections between controllers, thus avoiding any
single failure point for this critical component. Finally, each controller's CPUs are connected via an embedded PCIe
switch to the NVMe SSDs. Each controller can establish a point-to-point connection to each of up to twenty-four
drives over a 2-lane, 4 GB/s PCIe Gen3 bus.
The primary difference between VSP E790 and VSP E590 is processing power. The VSP E790 comes equipped with
four 16-core CPUs, while the VSP E590 has four 6-core processors. Our testing shows that the VSP E790 has
enough processing power to approach the full IOPS potential of the fast NVMe drives. For example, on the VSP
E790 we measured 3.64 million 8 KiB cache-miss random read IOPS from eight SSDs, with a response time of 0.51
milliseconds. On the VSP E590, the same test yielded 1.34 million IOPS with a 0.53 millisecond response time. With
the same drive configuration as the E590, the E790 could deliver 2.7X more IOPS because of its high-powered
CPUs. Of course, the VSP E590’s 1.34 million cache-miss random read IOPS will be more than sufficient for many
applications.
Encrypting controllers (eCTLs) are optionally available for the VSP E790 and VSP E590. The eCTLs offload the work
of encryption to Field Programmable Gate Arrays (FPGAs). The FPGAs are connected by a PCIe switch positioned
between the controller CPUs and the flash drives. The FPGAs allow FIPS 140 level 2 encryption to be done with little
or no performance impact.
Cache Architecture
The importance of storage array cache (i.e., Dynamic Random Access Memory or DRAM) may have diminished in
the era of NVMe flash drives. However, I/O cache can still provide a significant performance boost for several
reasons. First, DRAM is faster than flash, at least by an order of magnitude. When a host requests cache-resident
data (sometimes called a cache “hit”) the command can be completed with the lowest possible latency. For example,
our 8KiB random read hit testing on VSP E590 measured response times as low as 66 microseconds (0.066
milliseconds). The fastest response time observed in the 8 KiB random read miss testing was about 4X higher at 250
microseconds. Reading data from cache is not only advantageous because DRAM is the fastest medium, but also
means that the request for data is satisfied with no additional address lookups or back end I/O commands. And when
it comes to optimizing performance, Hitachi isn’t content to simply cache the most recently accessed data. I/O
patterns on each LDEV are periodically analyzed to identify the data most likely to be accessed repeatedly. Cache is
preferentially allocated to such blocks. Meanwhile, areas of each LUN identified as having the lowest probability of a
cache hit will not have any cache allocated. Instead, such data are sent to the host through a transfer buffer, thereby
saving the overhead of allocating a cache segment.
Cache also enhances write performance. After writes have been mirrored in both controllers’ DRAM to protect
against data loss (but before data have been written to flash) the host is sent a write acknowledgment. The quick
response to writes allows latency-sensitive applications to operate smoothly. Newly written data are held in cache for
a while, to allow for related blocks to be aggregated and written to flash together in larger chunks. This “gathering
write” algorithm reduces the need for parity operations, thereby bringing down controller and drive busy rates and
improving overall response time.
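A toy model of the mirrored write-acknowledge and gathering-write behavior described above (structure and batch size are invented for illustration):

```python
# Toy model of mirrored write-back caching with gathered destaging.
# Names and the batch size are illustrative, not Hitachi's design.

class MirroredWriteCache:
    def __init__(self, flush_batch: int = 4):
        self.ctl_a, self.ctl_b = {}, {}   # per-controller cache images
        self.flush_batch = flush_batch
        self.flushes = []                  # each flush = list of blocks

    def write(self, block: int, data: bytes) -> str:
        # Mirror to both controllers' DRAM, then acknowledge the host
        # before anything is written to flash.
        self.ctl_a[block] = data
        self.ctl_b[block] = data
        return "ack"

    def flush(self):
        # Gather dirty blocks and destage them together in larger
        # chunks, reducing separate parity and drive operations.
        dirty = sorted(self.ctl_a)
        for i in range(0, len(dirty), self.flush_batch):
            self.flushes.append(dirty[i:i + self.flush_batch])
        self.ctl_a.clear()
        self.ctl_b.clear()

cache = MirroredWriteCache()
for blk in range(8):
    cache.write(blk, b"x")   # each write acknowledged from DRAM
cache.flush()
print(len(cache.flushes))    # 8 blocks destaged in 2 gathered writes
```

The point of the sketch is the ordering: the acknowledgment depends only on both DRAM copies, while the aggregation happens later and amortizes parity work across many blocks.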
Back End Configuration
The VSP E590 and VSP E790 both have an all-NVMe back end integrated into the controller chassis, which makes
configuration simple and straightforward. As noted earlier, up to twenty-four NVMe SSDs may be installed. Table 3
lists the supported drive types, and Table 4 shows the available RAID configurations.
Hitachi storage has often been configured with RAID-6 6D+2P, or perhaps RAID-6 14D+2P, for data protection and
good capacity efficiency. However, if any spare drives are to be allocated in the 2U E series arrays, only a single
14D+2P group or two 6D+2P groups could be created. We therefore tested an asymmetrical configuration with I/O
distributed across one 6D+2P parity group and one 12D+2P parity group in a single HDP pool, a configuration which
offers RAID-6 data protection and good capacity efficiency while leaving one or two spare drives. We found no difference in
performance between the asymmetrical configuration that allows for spare drives and a symmetrical configuration
with three RAID-6 6D+2P parity groups.
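The capacity-efficiency trade-off discussed above can be checked with simple arithmetic; the 24-slot accounting assumes the 2U chassis described earlier:

```python
# Capacity efficiency of the RAID-6 layouts discussed above, counted
# as data drives / total drives. Slot accounting assumes the 24-drive
# 2U chassis described in the text.

def raid6_efficiency(data_drives: int) -> float:
    return data_drives / (data_drives + 2)   # RAID-6 adds 2 parity drives

eff_6d2p = raid6_efficiency(6)     # 0.75
eff_14d2p = raid6_efficiency(14)   # 0.875

# Asymmetrical layout from the test: one 6D+2P group (8 drives) plus
# one 12D+2P group (14 drives) uses 22 of 24 slots, leaving 2 spares.
slots = 24
groups = [(6, 2), (12, 2)]                  # (data, parity) per group
used = sum(d + p for d, p in groups)        # 22 drives in parity groups
data = sum(d for d, _ in groups)            # 18 data drives
spares = slots - used                        # 2 spare drives
print(eff_6d2p, eff_14d2p, data / used, spares)
```

The asymmetrical pool lands at roughly 82% efficiency across its 22 drives, between the two symmetric layouts, while still reserving spares in the fully populated chassis.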
We have briefly introduced the architecture of the Virtual Storage Platform E590 and E790. For more information on
Hitachi’s implementation of NVMe, and other exemplary features of the VSP E series, see Hu Yoshida’s recent blog
entitled “Unique VSP Capabilities Which Were Not Noted in the Gartner Report.” Also see the VSP E Series page.
VSP Midrange Update for 2023
Kevin Koplar, Technical Specialist – Core Storage
G350/E590/E790/E1090
February 2023
[Slide: VSP midrange portfolio positioning across five tiers — Entry (entry point to VSP, price sensitive, SAS media), Entry NVMe, High-density, Flexible high performance, and Enterprise Scale-out (Open & Mainframe) — contrasting system performance, port count, hardware offload, scale-up/scale-out expansion, and SAS and/or NVMe media options.]
30 Years of Industry-Leading Reliability
§ Industry's first, best and most trusted 100% Data Availability Guarantee
§ Hitachi remote support operations brings 30 years of knowledge to predict and prevent downtime
§ Enterprise class: the industry benchmark for active-active metro clustering and trusted business continuity
§ Restart operations faster: application-aware recovery and copy data management to prevent data loss
[Callouts: AVAILABILITY; GLOBAL-ACTIVE DEVICE; CLOUD SUPPORT; 3 DATA CENTER. 100% Data Availability Guarantee. Public cloud integration with the provider of your choice. Legendary Hitachi resilience with 100% data availability.]
© Hitachi Vantara LLC 2023. All Rights Reserved.
VSP E590 and E790 Hardware Controller (CTL)
# | Parts | Description
1 | Controller Board (CTL) | Consists of CPU, DIMM and CFM. The CFM can be replaced without CTL removal. Only dual-CTL configurations are supported.
2 | I/O Boards | CHB: CHannel Board (FC / iSCSI). DKB or eDKB: DisK Board. HIE: Hitachi Interconnect Edge.
5 | Backup Module (BKM) | Cache backup battery, integrated with the BKM.
6 | DIMM | Used for cache memory.
7 | Encryption FPGA | Not present in systems without encryption; can only be replaced by replacing the CTL.
[Chassis labels: DIMM; Backup Module (BKM); Encryption FPGA; OOB management and maintenance boards (CTLMNE, CTLSNE) with Ethernet ports; Power Supply Units (PSU); CTL 0 and CTL 1. I/O slots per controller: Slot A — CHB (16x PCIe); Slot B — DKB or CHB (8x PCIe); Slot C — HIE or CHB (16x PCIe); Cache Flash Memory (CFM).]
Midrange Workhorse
Front end: Fibre Channel 32Gb/s; iSCSI (SFP) 10Gb/s, 2 ports; iSCSI (10Base-T) 1/10Gb/s, 2 ports. Back end: PCIe Gen3 backplane to NVMe, or SAS.
Drive boxes: 2U DBN — 24 x NVMe SSD SFF; 2U DBL — 12 x SAS SSD SFF; 2U DBS2/DBS — 24 x SAS SFF; 4U DB60 — 60 x SFF/LFF NL-SAS.
Model | NVMe All Flash Array & SAS Hybrid | NVMe All Flash Array & SAS Hybrid | NVMe All Flash Array or SAS Hybrid
Software | Base / Advanced | Base / Advanced | Base / Advanced
Processor Cores (Cascade Lake) | 64 Cores | 24 Cores | 64 Cores
Supported Media | NVMe & SAS SSD (1.9TB, 3.8TB, 7.8TB, 15TB, 30TB); SAS HDD (2.4TB, 6TB, 10TB, 14TB, 18TB) | NVMe & SAS SSD (1.9TB, 3.8TB, 7.8TB, 15TB, 30TB); SAS HDD (2.4TB, 6TB, 10TB, 14TB, 18TB) | NVMe or SAS SSD (1.9TB, 3.8TB, 7.8TB, 15TB, 30TB); SAS HDD (2.4TB, 6TB, 10TB, 14TB, 18TB)
Document “Golden Nuggets”
• https://community.hitachivantara.com/blogs/charles-lofton/2021/04/19/introduction-to-virtual-storage-platform-e590-and-e790-architecture
• https://community.hitachivantara.com/blogs/sudipta-kumar-mohapatra/2022/03/24/introduction-to-virtual-storage-platform-e1090-arc
VSP G/F900
Continuing Platforms

Scale-up, Scale-out
Cluster your VSP E series to scale performance and capacity.
Simple Configuration
Manage all nodes from a single management GUI.
Future Proof
Intermix NVMe and SAS technology to get the best value for your business.
[Diagram: VSP E590, E790 and E1090 nodes intermixing NVMe SSD, SAS SSD and SAS hybrid configurations, with cloud-based monitoring.]
Virtual Storage Scale Out
Loosely coupled clustered systems: geo-dispersed servers/clusters with scale-out storage.
[Diagram: Nodes of Cluster 1 and Cluster 2, each presenting LDEVs (00:01, 10:01, 10:02, 20:01, 20:02).]
WHAT
Hitachi Modern Storage Assurance is a market-competitive investment protection and platform lifecycle extension program that allows customers to secure future upgrade paths onto newer-generation controllers/platforms.
WHY
Many customers seek to maximize their storage investments by extending their storage platform usage as long as possible. They try to avoid the hassle of seeking additional funding for costly system upgrades in the future.
HOW
Hitachi Modern Storage Assurance is our successor controller upgrade program, designed to extend the lifecycle of a storage investment by modernizing and upgrading the controller without a forklift upgrade or downtime. It is a data-in-place upgrade, protecting customer data and media investment.
An Introduction to Modern Storage Assurance
Next Gen VSP E1090: non-disruptively take advantage of increased performance and new features.
All Inclusive: get the ability to upgrade within 1-5 years.
Stay Modern: accelerate applications with the next-generation storage architecture.
§ No single point of failure during upgrade
§ Upgrade after 12 months
§ No renewal requirement
§ Available on supporting open standard warranty
§ 100% non-disruptive upgrade
Total Cost of Ownership
[Diagram: upgrade path from VSP E1090 to successor models. Hardware detail: fan and connectors, compression module. ADR write flow: reply to host once the write is in cache; data staged in a log-structured cache area.]
Adaptive Data Reduction (ADR)
ADR Best Practice: USE IT!
01: Always quote a 2:1 data reduction ratio.
02: 2:1 deals have a “no signature required” warranty; remediation will be expected.
03: All VSP E series and VSP 5200/5600 are supported.
Risk Reduction: the customer gets the benefits of effective capacity without the risk.
Traditional Data Center: management sprawl, manual processes, resource delivery, IT staff, incident resolution, workload repatriation.
Hitachi Ops Center Management Suite: Analyzer, Automator, Administrator, Protector.
Self-Driving Infrastructure
Solutions: Hybrid & Multicloud, Private Cloud, Mission Critical Apps, Data Analytics
Simplified aaS Consumption: Hitachi Enterprise Cloud, STaaS and DPaaS
CLOUD | CONVERGED: Unified Compute Platform CI | HYPERCONVERGED: Unified Compute Platform HC | RACK SCALE: Unified Compute Platform RS | Cisco and Hitachi Adaptive Solutions for CI
AIOps
EverFlex: Purchase, Lease, Consumption Pricing
Management & Orchestration: Administrator Software, Analyzer Software, Automator Software
VIRTUAL STORAGE SOFTWARE: Software-Defined Storage