DCP-109C
Today's certifications:
▪ Lenovo DC Sales Professional
▪ Lenovo DC Storage Professional
▪ Lenovo DC Tech Sales Professional
▪ Lenovo Big Data Professional
▪ Lenovo DC Cloud Professional
▪ Lenovo Networking Professional
The Lenovo Professional Certification Program is a way for skilled IT professionals to demonstrate their proficiency in the latest Lenovo Data Center products
and solutions. Certifications provide independent validation of Business Partner skills that can contribute towards career growth and professional advancement.
The program also provides a level of trust and assurance in the ability to successfully support customer needs.
Visit lenovopartner.com for more details on the Lenovo Professional Certification Program (Programme & Training / Programme Overview).
Sections
1. Product & Portfolio Overview 25%
2. Lenovo Value Proposition and Differentiators 18%
3. Positioning 18%
4. Business conversations 27%
5. Market Opportunity for Lenovo 5%
6. Services 7%
2019 Lenovo. All rights reserved. EM 8
www.pearsonvue.com/lenovo
www.certmetrics.com/lenovo
[Chart: growth of the digital universe over the next 7 years versus the IT budget]
But the types of data being generated are different from the traditional structured data that today’s
solutions are optimized to store and manage. Additionally, the data is coming from more varied sources
as we add more sensor devices.
Our collective challenge is to take the data generated by individuals, businesses, and machines and
extract value that leads to business growth.
Lower TCO
60% savings on IT infrastructure³
1. For All Flash Array price / performance optimized category (<$20,000). SPC-1 v3 results published end of July (DS4200)
2. ThinkAgile HX Series – 80% faster time to value.
3. IDC http://news.lenovo.com/article_display.cfm?article_id=2098 (HX Series w/ Nutanix)
D1224
2U rack mount
24x 2.5" HDD
Up to 8 total units chained
Max 192 drives – 2.94 PB capacity
D3284
5U rack mount
84x 3.5" HDD
Up to 4 total units chained
Max 336 drives – 4.0 PB capacity
D1212
2U rack mount
12x 3.5" HDD
Up to 8 total units chained
Max 96 drives – 1.15 PB capacity
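The chaining limits above reduce to simple multiplication of drive bays per enclosure by the maximum number of chained units; a quick illustrative check (not Lenovo tooling):

```python
# Maximum drive counts for chained expansion enclosures: bays per unit
# multiplied by the maximum number of units that can be chained.
def max_drives(bays_per_unit, max_units):
    return bays_per_unit * max_units

assert max_drives(24, 8) == 192   # D1224: 24 bays, up to 8 chained units
assert max_drives(84, 4) == 336   # D3284: 84 bays, up to 4 chained units
assert max_drives(12, 8) == 96    # D1212: 12 bays, up to 8 chained units
```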
Performance Oriented
• 12 Gbps SAS interface connectivity
• Dual ESMs and redundant power supply units for high availability
[Slide: storage partnership positioning. The Lenovo/NetApp joint venture offers single-partner focus, unique market positioning, and collaborative GTM across entry and midrange block, file, and high-end storage; competing vendors' China joint ventures bring conflicting GTM.]
Lenovo Benefits
• Fills Midrange and AFA gap with industry leader
• PRC Localized IP offerings
• Gain Storage Seller expertise and tools
Lenovo Performance Block
• Entry to high performance
• AFA and Hybrid Block
• Lenovo xClarity support
• MFG & Supported by Lenovo
Lenovo AFA & Midrange Unified
• ONTAP Data Management
• Cloud Tiering
• Lenovo xClarity support
• MFG & Supported by Lenovo
High-end Resale
• A800, A700, FAS9000
• MFG by NetApp
[Chart: DE Series price vs. performance, roughly 100K to 1M IOPS]
➢ DE2000H: entry point
➢ DE4000H / DE4000F (PB 1-4): mixed workloads, backup and recovery
➢ DE6000H / DE6000F (PB 5-8): most cost effective, streaming data
(H = hybrid, F = all flash)
The Lenovo ThinkSystem DE Series includes all-flash and hybrid models.
All-flash models offer maximum performance, and hybrid models offer the most cost-effective solution for a variety of workloads.
Workloads and recommended platforms:
▪ Backup & Recovery: DE2000H, DE4000H
▪ Video Surveillance: DE2000H, DE4000H
▪ High Performance Computing: DE4000H, DE4000F, DE6000H, DE6000F
▪ Big Data, Analytics: DE4000H, DE4000F, DE6000F
As you continue your conversation with your customers, think about customer needs that are ideal for
Lenovo DE Series solutions. DE Series systems can help customers who need higher performance, who
are concerned with cost and reliability, and who need a solution that is easy to use.
The four areas where you can position the DE Series are data protection, physical and cyber security
(including video surveillance), technical computing, and big data analytics applications.
- Maximum bandwidth for the money, high density, high scalability, and low-latency streaming, which increases the ability to edit online and offline
- High I/O
- Drive failure prediction to maximize uptime
- Encryption for compliance
[Diagram: controller rear view showing host channels 1 & 2 and 3 & 4, iSCSI host ports 1 and 2, drive expansion ports EXP1/EXP2, and ID/Diag indicators]
DE-Series expansions
▪ DE600S: 4U, 60 disks
▪ DE240S: 2U, 24 disks
▪ DE120S: 2U, 12 disks
All use a 12-Gbps SAS architecture.
[Diagram: drive slot layout. Slots 0-23 across the shelf; a 224C shelf shown populated with 1200 GB drives and a 212C shelf with 3.0 TB drives.]
DE-SERIES CONTROLLERS
The DE-Series product portfolio consists of three families of systems, which are defined by their
controllers. The controllers determine the number of disks that the storage system can support. The
DE2000H controllers support up to 96 disks. The DE4000H controllers support up to 192 disks. The
DE6000H controllers support up to 480 disks.
[1] The DE2000H systems represent the entry-point family of storage systems for customers who want to
maximize the price/performance ratio and capacity mix of a storage system.
[2] The DE4000H systems optimize performance for mixed workloads, with outstanding low latency.
[3] The DE6000H systems offer excellent performance. They support raw data throughput rates of up to 12
gigabytes per second. These systems are targeted at high-performance computing markets, big data, and
virtual desktop infrastructures, although they work equally well in general computing environments.
The DE4000H and DE6000H offer all-flash and hybrid configuration options. These options are highly reliable
and cost-effective.
Expansions
DE600S:
▪ 60 x 3.5" or 2.5" drives
▪ Highest throughput
▪ Largest capacity
▪ NL-SAS, SAS, SSD
DE240S:
▪ 24 x 2.5" drives
▪ Highest throughput
▪ Largest capacity
▪ SAS, SSD
DE120S:
▪ 12 x 3.5" or 2.5" drives
▪ Lowest entry point
▪ NL-SAS, SSD
Base controller ports, left to right:
▪ Dual 1-GbE management ports (RJ45)
▪ Serial port (mini USB)
▪ USB port (factory use only)
▪ Dual 12-Gbps SAS drive expansion ports
The DE2000 supports only one HIC, which is either 2-port SAS or 2-port iSCSI (1/10 Gb).
On the left side, you see the base host interface ports. These include either two optical FC/iSCSI ports or
two RJ-45 iSCSI baseboard ports, for host connection.
The dual Ethernet management ports provide out-of-band system management access.
To the right of the base host ports are two serial ports. The mini-USB port is used for direct connections to
the internal shell operating system of the controller and enables advanced troubleshooting and
configuration. The USB port is for factory use only.
On the far right, two 12-gigabit-per-second expansion SAS ports support the addition of expansions.
Because these are 12-gigabit-per-second ports, they require a SAS-3 to SAS-2 converter cable to
connect to the 6-gigabit-per-second expansions.
You can create space for more host ports by using an add-on HIC. You can select from 12-gigabit-per-
second SAS ports, 16-gigabit-per-second FC ports, a 10-gigabit-per-second optical iSCSI card, or 10-
gigabit-per-second copper iSCSI ports.
The controller status LEDs on the controller canister define different controller base features, such as
cache active, attention, and heartbeat.
Controller memory: 16 GB DDR4 per controller (hybrid), 64 GB DDR4 per controller (all-flash).
DE6000H controllers use an 8-core CPU. You can order the DE6000H controller with 16 gigabytes of
native cache.
The DE6000H system supports in-band management access. It has two 12-gigabit-per-second wide-port
SAS drive expansion ports for redundant drive expansion paths.
The DE6000H does not come with dual 10 Gb iSCSI optical or dual 16 Gb FC ports.
You can also order the appropriate HIC when you order the controllers.
▪ Encryption requires an additional feature key; all other features are bundled with Lenovo SAN Unified Manager
Dynamic Disk Pools (DDP) technology is an innovative method of: (1) greatly simplifying storage
management; (2) making the addition or loss of drives a nonevent; and (3) significantly reducing the time
to recover from a drive loss compared with traditional RAID (minutes versus days).
DE-Series supports hybrid systems with mixed flash and rotating disk. SSD cache is a feature that is
designed to accelerate HDD data access by caching highly read data on SSD automatically.
Snapshot copies and views allow multiple recovery points or the use of production data for testing and
development.
Thin provisioning is used for capacity-optimized configurations and eliminates guessing on how much
capacity that some volume is really going to need.
Mirroring and replication support both synchronous and asynchronous mirroring and replication.
Encryption is AES-256 with a local key manager, which saves the (often significant) cost of an external key
manager and is now an included feature of Lenovo SAN Unified Manager.
Embedded on DE-Series; requires no installation.
Lenovo SAN System Manager is a modern, browser-based, on-box tool that enables
you to manage and monitor the DE Series controllers through an intuitive web interface.
Lenovo SAN System Manager provides automated workflows and intelligent
provisioning defaults. It has enhanced performance monitoring and tuning actions.
You can manage role-based access control, audit log, and other security features. The
left navigation pane has five main pages: Home, Storage, Hardware, Settings, and
Support.
To manage more than one DE-Series system, you can use the SAN Unified Manager application. It
offers two management windows for system monitoring and management.
The Enterprise Management window, or EMW is the initial landing page that you see when the SAN
Unified Manager opens. Use this interface to manage the overall DE-Series storage environment. It
enables you to upgrade multiple systems simultaneously from one interface.
[1]The Array Management window, or AMW is opened on a per-storage array basis. You can manage all
storage-related functionality from the UI, but DE-Series systems also support a fully functional command
line.
RAID 0 rotates logical blocks for a given volume space across a set of disks with no space allocated for
data protection or redundancy.
RAID 1 is similar to RAID 0, but maintains a copy of all data on a mirror set of disks. RAID 1 is sometimes
called RAID 10 if it uses more than two disks at a time.
RAID 5 allocates space for parity information. This parity information can be used to recover data if
hardware fails.
RAID 6 is similar to RAID 5, but RAID 6 allocates two spaces for parity. This additional space is called the
“Q” value on DE-Series storage systems.
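The parity idea behind RAID 5 can be sketched with XOR; this is a generic illustration of single-parity recovery, not DE-Series firmware:

```python
# Illustrative sketch of RAID 5-style parity: the parity block is the XOR of
# the data blocks, so any single lost block can be rebuilt from the rest.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_blocks(data)

# Lose data[1]; rebuild it from the surviving blocks plus the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"bbbb"
```

RAID 6 extends this scheme with a second, independently computed parity (the "Q" value), so two simultaneous failures can be survived.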
When a drive fails, one of the hot spares is picked, and the failed drive's data is reconstructed onto this hot
spare drive. This causes an I/O bottleneck while the data is sequentially rebuilt. Access to
the logical drive with the failure is significantly diminished during this time.
24-disk pool
D-piece: 512 MB
D-stripe: 4 GB
Dynamic data pool configuration: RAID 6 (8+2)
10 d-pieces: 1 d-stripe
1 d-stripe: 8 data d-pieces, 1 parity d-piece, 1 Q parity d-piece
Each d-piece is a contiguous 512-MB section of a disk. Within the disk pool, 10 d-pieces are written to 10
different disks, as selected by the controllers. Together, [1]10 associated d-pieces make up a 4-
GB d-stripe.
[2]Each d-stripe uses a RAID 6 (8+2) configuration, in which 8 of the pieces contain user data, 1 piece
contains parity information calculated from the data segments, and 1 piece contains the second parity, or
Q value, defined by RAID 6.
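The geometry above can be checked with simple arithmetic; note that the 4-GB d-stripe figure counts only the 8 data d-pieces, not the two parity pieces:

```python
# D-stripe geometry from the text: 10 d-pieces of 512 MB each form one d-stripe;
# RAID 6 (8+2) means 8 data pieces plus one P parity and one Q parity piece.
D_PIECE_MB = 512
PIECES_PER_STRIPE = 10
DATA_PIECES = 8

raw_per_stripe_gb = PIECES_PER_STRIPE * D_PIECE_MB / 1024
usable_per_stripe_gb = DATA_PIECES * D_PIECE_MB / 1024

assert raw_per_stripe_gb == 5.0     # total space the d-stripe occupies
assert usable_per_stripe_gb == 4.0  # matches the 4-GB d-stripe in the text
```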
In this illustration, each color represents a d-piece written to a single disk. If you look for
orange pieces, you see that an orange d-piece has been written to each of the 10 disks
called out with orange arrows. [3]The 10 d-pieces make up the d-stripe. The 10 disks
are chosen pseudo-randomly by an algorithm that the controllers run, and this
“randomness” gives more protection if a disk fails in the disk pool. Sometimes you may
also hear a d-stripe referred to as a “mini RAID group.”
If one of the disks [1]fails, the other d-pieces of the d-stripes are used to recreate each d-piece from the
failed disk on another disk. In effect, multiple RAID 6 pieces are affected by the failed disk, but each of
those RAID 6 pieces is affected independently, which makes simultaneous rebuilding possible.
[2]Preservation capacity on other disks of the disk pool is used to write the reconstructed data back into
the pool. Because multiple d-pieces can be reconstructed simultaneously to the preservation capacity on
multiple disks, the reconstruction process is much faster in disk pools than it is in volume groups.
Flexible:
Add any* number of disks for more capacity.
The system automatically rebalances data for optimal performance.
Collaborative:
All disks in the pool sustain the workload, which is perfect for virtual mixed workloads or fast reconstruction.
With Dynamic Disk Pools, you can add or lose disks without impact, reconfiguration, or headaches.
Balanced: An algorithm randomly spreads data across all disks, which balances the workload and any
necessary rebuilds.
Easy: There are no idle spares to manage. Active spare capacity is spread across all disks.
Flexible: Add any number of disks for more capacity. The system automatically rebalances data for
optimal performance.
Collaborative: All disks in the pool sustain the workload, which is perfect for virtual mixed workloads or
fast reconstruction.
With DDP technology, you can add or lose disks without affecting the system, without performing
reconfiguration, and without headaches.
Data protection: hot spares (volume groups) vs. preservation capacity with no idle disks (disk pools)
Snapshot images: ✓ (both)
Traditional volume groups can still be valuable in DE-Series storage systems. Volume groups offer
superior I/O performance if they are properly configured. In contrast, disk pools offer extra data reliability
and flexibility. The cost of the advantages of DDP technology is the performance impact of the overhead
required to determine which 10 disks to use and then divide data into d-pieces. If a particular application
needs the absolute highest level of I/O performance, a volume group is probably the most appropriate
choice.
In contrast, disk pools are much more useful for general-purpose data applications, for which data
availability and flexibility are more important.
As storage systems grow and use more disks, the recovery advantage of disk pools becomes even more
valuable. With more disks, there is a greater chance of multiple-disk failures. Because DDP distributes
preservation capacity, it can rebuild data from failed disks much faster than volume groups, while
exposing the system to much lower risk of multiple failures that lead to a catastrophic loss of data.
Also, some users might not want to “waste” capacity on hot spares, but cannot afford the risk of multiple
disk failures. Use of disk pools eliminates this difficulty by spreading preservation capacity across all disks
in the disk pool.
If you need to use thin provisioning, then you must build your volumes in disk pools; thin provisioning is
not available for volume groups.
Benefits
▪ Simplify Storage and Data Management
o Automate storage provisioning
o Eliminate impact of administrator uncertainty when sizing logical drives
▪ Improve Storage Cost Efficiency
o Gain 35% - 40% improvement in storage utilization and efficiency
o Defer new storage capacity acquisition
o Reduce physical footprint and power and cooling requirements
[Diagram: thin provisioning savings; Data Vol. A and Data Vol. B draw from shared available storage]
Thin provisioning is a capability some competitors have had for a while, and it is now being introduced on
our products. The thin provisioning feature is provided as part of the base feature set with this release.
There is no charge to use thin provisioning.
As most of you are aware, thin provisioning decouples physical storage allocation from the logical
provisioning of storage. Less physical storage can be deployed than what is logically presented to the
application. This allows storage administrators to more easily manage their customers' storage needs.
Used with DDP, it allows logical drive growth to be managed much more smoothly and easily.
Some of the benefits found in the industry with thin provisioning include increased storage efficiency
(around a 35-40% improvement in storage utilization), reduced total capacity and power of
online storage, and the ability to automate provisioning of storage.
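A toy model of that decoupling (hypothetical class, for illustration only; not the DE-Series implementation):

```python
# Hypothetical sketch of thin provisioning: a volume advertises a large logical
# size but consumes physical capacity only as blocks are actually written.
class ThinVolume:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb
        self.written = set()           # block numbers that have been written

    def write(self, block):
        self.written.add(block)

    def physical_gb(self, block_gb=1):
        return len(self.written) * block_gb

vol = ThinVolume(logical_gb=100)   # the application sees 100 GB
for b in range(20):
    vol.write(b)                    # only 20 GB actually written
assert vol.physical_gb() == 20      # physical consumption lags logical size
assert vol.logical_gb == 100
```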
[Diagram: drive media tiers: FC and SAS versus near-line SAS and SATA]
Here you can see how the SSD Cache feature differs from automated tiering approaches that rely on
physical data migration.
The SSD Cache feature is data-driven, real-time, and self-managing. It provides real-time assessment of
workload priorities. SSD Cache also optimizes I/O requests for cost and performance without the need for
complex data classification or excessive data movement.
Several competitors use the approach in the right pane. This approach requires a type of automated data
migration in which data blocks are physically moved between media tiers. Because resources must be
used to move the data, this tiering method requires additional I/O and CPU overhead, and results in
delays. SSD Cache promotes data dynamically and in real time.
Traditional automatic tiering systems have the advantage of supporting write-intensive applications. If you
know in advance which data needs to be promoted and moved, you can expect some write-performance
benefit. However, if you don’t know what data is critical and high-use, you may accidentally place it on
media to be tiered.
SSD Cache is data-dependent, so ensure that you understand the problem that you are trying to solve
before you select which volumes to use with SSD Cache.
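The read-driven promotion described above can be sketched with a simple read-count threshold (hypothetical model; the actual SSD Cache promotion algorithm is not specified here):

```python
# Minimal sketch of a read cache that promotes "hot" blocks to SSD after
# repeated reads, with no physical tier migration of cold data.
from collections import Counter

class SSDCache:
    def __init__(self, promote_after=2):
        self.reads = Counter()
        self.cached = set()
        self.promote_after = promote_after

    def read(self, block):
        hit = block in self.cached      # served from SSD if already promoted
        self.reads[block] += 1
        if self.reads[block] >= self.promote_after:
            self.cached.add(block)      # copy the hot block onto SSD
        return hit

cache = SSDCache()
assert cache.read("blk7") is False   # first read comes from HDD
assert cache.read("blk7") is False   # second read promotes the block
assert cache.read("blk7") is True    # now served from the SSD cache
```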
Many DE-Series features provide enterprise-class protection, both disaster recovery and business
continuity.
[1]Dynamic Disk Pools (or DDP) technology provides unprecedented worry-free data availability and
maintains data performance when customers add disks or when disk failures occur.
[2]Snapshot images provide shorter RPOs and faster recovery with checkpoint-restart capabilities.
[3]Fibre Channel-based and IP-based mirroring provide cost-effective, enterprise-class disaster recovery
for your data by mirroring it to a remote storage system. Replication also preserves your network
infrastructure.
[4]The Volume Copy feature enables you to create a point-in-time clone of a volume on the same storage
system. This feature performs a byte-by-byte copy from the source volume to the target volume.
The Snapshot feature enables you to create [1]storage-based logical images. You can use these [2] local
point-in-time virtual images of a volume for testing, backup, and recovery operations. For example,
Snapshot images enable you to quickly roll back to a known-good dataset to reverse the effects of
viruses, data corruption, and accidental deletions.
[3]The Snapshot feature uses one data repository for all of the Snapshot images that are associated with
a base volume. Therefore, when a base volume is written to, the Snapshot image feature requires only
one write operation instead of multiple, sequential writes. To use storage capacity more efficiently, the
feature combines Snapshot images into Snapshot groups, each of which uses a single repository.
[4]Because a Snapshot image only saves the changed data for a base volume, it is not directly accessible
to hosts for read/write operations. However, you can convert a Snapshot image into a Snapshot volume
to give hosts read/write access to it.
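The changed-blocks-only behavior can be modeled in a few lines (a toy sketch, not the actual Snapshot implementation):

```python
# Toy model of space-efficient snapshots: a snapshot records only the
# original contents of blocks that change after it is taken.
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.snapshots = []

    def snapshot(self):
        snap = {}                       # will hold pre-change copies of blocks
        self.snapshots.append(snap)
        return snap

    def write(self, key, data):
        for snap in self.snapshots:
            if key not in snap:         # preserve the original only once
                snap[key] = self.blocks.get(key)
        self.blocks[key] = data

vol = Volume({0: "A", 1: "B"})
snap = vol.snapshot()
vol.write(0, "A2")
assert vol.blocks[0] == "A2"
assert snap[0] == "A"        # snapshot keeps only the changed block
assert 1 not in snap         # unchanged blocks are never copied
```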
[5]Microsoft Volume Shadow Copy Service and Virtual Disk Service provide external storage
management, data protection, and compatibility with backup applications.
The Volume Copy feature enables you to create a point-in-time, full clone of a volume by performing a
[1]byte-by-byte copy from a source volume to a target volume. When the copy is complete, the target
volume will be identical to the source volume. Because the target volume is a real volume, rather than a
logical one, it can be used as part of a disaster-recovery solution. [2] Both volumes must be on the same
storage system. The copy pair is an association between the source volume and the target volume for a
single Volume Copy operation.
[3]If a volume is re-copied from the source to the same target volume, the data in the target volume will be
overwritten. Therefore, if you need a second clone of the source volume, use a different target volume for
the copy function.
Mirroring pairs
▪ Two mirroring protocols: FC and IP
[Diagram: Volume B mirrored between two storage systems]
The mirroring features provide [1]storage-based data replication, which enables you to replicate data
volumes from one storage system to another either [2]synchronously or asynchronously using [3]either
Fibre Channel or IP.
Mirroring features maintain a copy of data that is physically distant from the site where the data is used. If
a disaster occurs at the primary site, such as a massive power outage or a flood, the data can be quickly
accessed from the remote location. Accessing the data from a remote storage system is much faster than
uploading off-site tape backups. Also, the data that was in use at the time of the disaster does not differ at
the remote site as much as it might from a tape backup that is several days old.
DE-Series block-replication technology provides many benefits, such as:
▪ block-level updates, which reduce bandwidth and time requirements by replicating only the blocks that have changed
▪ crash-consistent data that is maintained at a disaster recovery site
▪ the ability to test disaster recovery plans without affecting production and replication
▪ replication between dissimilar DE-Series storage systems
▪ the use of a standard IP or Fibre Channel network for replication
The DE-Series storage systems offer two types of mirroring, which mirror data in different ways to support
different needs.
The synchronous mirroring feature is used for online, real-time data replication between remote storage
arrays. Any new data that is written to the local (or primary) system is immediately transferred to the
remote (or secondary) system. The connection that links the local and remote storage systems must be
fast, so that network latency does not reduce local I/O performance.
[1] The asynchronous mirroring feature provides a controller-level, firmware-based mechanism for data
replication between primary and secondary sites. Asynchronous mirroring transfers data to the remote
site only at set intervals, so local I/O is not affected nearly as much by slow network connections.
Because local I/O is not affected by network latency, iSCSI is a viable connection alternative with
asynchronous mirroring.
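The difference between the two modes can be sketched in a toy model (hypothetical names, not product code): synchronous replicates on every write, asynchronous only at set intervals.

```python
# Toy contrast of synchronous vs. asynchronous mirroring.
class MirroredVolume:
    def __init__(self, synchronous):
        self.synchronous = synchronous
        self.local, self.remote = [], []
        self.pending = []

    def write(self, data):
        self.local.append(data)
        if self.synchronous:
            self.remote.append(data)      # transferred immediately
        else:
            self.pending.append(data)     # queued until the next interval

    def interval_sync(self):              # asynchronous transfer at interval
        self.remote.extend(self.pending)
        self.pending.clear()

sync_vol = MirroredVolume(synchronous=True)
sync_vol.write("x")
assert sync_vol.remote == ["x"]           # remote is always current

async_vol = MirroredVolume(synchronous=False)
async_vol.write("x")
assert async_vol.remote == []             # remote lags until the interval
async_vol.interval_sync()
assert async_vol.remote == ["x"]
```

This is why synchronous mirroring needs a fast link (every local write waits on the transfer), while asynchronous tolerates slower links such as iSCSI.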
The DE-Series product lines continue to grow and expand, offering your customers more storage choices.
Modularity is a key design criterion for the DE-Series product lines. The DE-Series Hybrid line offers three
controllers and three sizes of expansions (12, 24, or 60 disks), multiple disk types, and a variety of host
connections—from iSCSI to FC. The DE-Series All Flash line offers two controller models in a 24-drive
enclosure. Additional expansions can be added.
Both DE-Series storage systems can be customized: You can adjust cache-block sizes, RAID levels, and
segment sizes, and choose between traditional RAID volume groups or self-monitoring and self-repairing
dynamic disk pools. By configuring individual volume settings or volume group settings, you can optimize
performance for streaming sequential, high-bandwidth workloads or for random-transaction performance.
DE-Series storage systems all have six nines (99.9999%) in their reliability ratings.
DE Series vs. DM Series
▪ Protocol: block (DE) vs. unified block and file (DM)
▪ Advanced data management: provided upstack by the app, OS, or file system* (DE) vs. needed in the storage layer (DM)
▪ Workload characteristics: dedicated, sequential, high bandwidth (DE) vs. shared, concurrent, IOPS-intensive random I/O, transactional, mixed I/O (DM)
▪ Performance requirements: extreme low latency, scale-up IOPS (DE) vs. scale-out IOPS, consistent ultralow latency (DM)
▪ Max capacity: 5.76 PB (DE) vs. 12 PB (DM)
PERFORMANCE
Customers can, with the click of a button, set their own performance rules for
individual workloads
EFFICIENCY
Thanks to advanced data reduction technologies, which work across all of the
unified workloads, customers see an average of 5:1 space savings
FUTURE-PROOF STORAGE
Your customer’s investment is protected because they can always scale a single
array up or scale out by clustering. They can also move cold data to the cloud for
cost-effective storage.
[Chart: DM Series positioning by price and performance/scalability]
All Flash (DM5000F, DM7000F*):
▪ Maximum performance
▪ Latency-sensitive apps
▪ Mission-critical workloads
Hybrid (DM3000H, DM7000H*):
▪ Cost effective
Business Critical / Consolidation: PB 4-8 ($25k-$250k)
These are traditional complete AFA offerings that do not offer the ability to configure with any HDD
whatsoever. The DM7000F is aimed more at large enterprise customers, while the DM5000F is more for
midsize businesses.
Accelerate every application without disruption: customers will experience superior performance while
still running enhanced features such as deduplication and compression.
Dramatically improve data center economics: they will save 11x more in power costs than if they were
running a traditional hard drive data center.
Enhanced availability due to the response times and low latency of all-flash storage.
Customers will experience faster ROI when using flash. They will see a 6-month payback on their
investment, based on a comparison of AFF with HDD-based systems.
New Features
• Inventory Monitoring
• Call Home Feature
• Firmware Updates
• Events
• Service
ONTAP is a proven leader in storage software in the marketplace today. This is the software running
the upcoming DM Series.
The majority of storage vendors tend to use operating systems such as Microsoft Windows Server or Linux
with their storage hardware. ONTAP is proprietary software that provides advantages that the other
operating systems do not.
The DM Series with ONTAP is one of only two providers of scale-up and scale-out unified all-flash
storage, the only other competitor being EMC Isilon.
Customers can expect a great increase in data storage efficiency when using ONTAP because of its
superior data reduction capabilities, providing the customer an average 5:1 reduction rate.
ONTAP also provides data mobility and flexibility, allowing you to manage your storage across various
platforms, whether that is a traditional on-prem data center or tiered to the cloud.
Next we will go over the features of ONTAP and how they will benefit your customer.
Also, customers can now get similar high performance for both FC SAN and IP SAN.
ONTAP®
[Diagram: scale up individual controllers; scale out by adding controllers; extend to ONTAP Cloud]
▪ Scale out by intermixing your choice of flash and hybrid nodes – up to 12 nodes
▪ Upgrade hardware/software or scale up without disrupting users – single array up to
140 PB overall capacity
▪ Scale to the public cloud by tiering cold data
One of the groundbreaking features of the DM Series using ONTAP is its ability to scale up and out as a
unified all-flash offering.
Customers can scale out utilizing both flash and hybrid nodes, allowing flexible options for how they want
to expand. If they need to scale up, they can do so without disrupting users, allowing the business to
continue running without any planned downtime.
Use Cases (testing, development, and analytics; on-box data recovery; compliance)
▪ Disaster recovery testing
▪ Backup and archive
▪ Application development and testing
▪ Analytics and reporting
Lenovo SnapMirror provides a cost-effective, easy-to-use unified replication solution across the Data
Fabric. SnapLock software is a WORM compliance solution that delivers high-performance disk-based
data permanence for hybrid and solid-state drive (SSD) deployments.
ONTAP also offers FlexGroup technology, which creates a massively scalable, high-performance NAS
container that enables your customers to provide storage for today’s high-tech industry applications.
FlexGroup technology supports computation-intensive workloads and data repositories that require a
massive NAS container with high performance and resiliency. This capability is important for enterprises
such as oil and gas, media and entertainment, electronic design automation, or EDA, and high-tech.
FlexGroup technology delivers linear scale for performance and capacity, up to 20 petabytes and 400
billion files.
FlexGroup technology delivers high performance and predictable, consistently low latency.
FlexGroup technology also uses ONTAP nondisruptive operations to enable resiliency for DM clusters.
NetApp Volume Encryption, also called NVE, secures data with software-based, data-at-rest encryption.
[1]Keys are kept in the onboard key manager, or customers can use an external key manager. ONTAP
9.3 NVE external key management is FIPS [fips] 140-2 compliant.
[2]NVE enables granular encryption of any volume and any [3]disk on All Flash DM systems. Granular
encryption [4]lowers costs by eliminating the need to purchase special hardware self-encrypting drives.
You can use this feature on software-defined storage running ONTAP Select software.
[5]You can use NVE with ONTAP data reduction technologies, [6]such as compression, deduplication,
and compaction, to maintain storage efficiency for encrypted data.
[7] Future ONTAP software enhancements automatically update the encryption algorithms, which future-
proofs the encryption solution.
Flash Cache:
▪ Improved response time for repeated, random reads
▪ Cache for all volumes that are on the controller
Flash Pool:
▪ Improved response time for repeated, random reads and overwrites
▪ Cache for all volumes that are on the aggregate
ONTAP Virtual Storage Tier provides two flash acceleration methods to improve the performance of DM
storage systems.
Flash Cache technology uses expansion modules to provide controller-level flash acceleration. Flash
Cache technology is an ideal option for multiple heterogeneous workloads that require reduced storage
latency for repeated random reads, such as file services. Flash Cache is simple to use, because all of the
volumes on the controller and on aggregates that use hard disks are automatically accelerated. Flash
Cache can be used for multiple heterogeneous workloads and to reduce storage latency for random
reads.
Flash Pool technology uses both hard disks and SSDs in a hybrid aggregate to provide storage-level flash
acceleration. Flash Pool technology is an ideal option for workloads that require acceleration of repeated
random reads and random overwrites, such as database and transactional applications. Because the
Flash Pool feature works at the storage level, rather than in the expansion slot of a controller, the cache
remains available even during storage failover or giveback. Like Flash Cache, the Flash Pool feature is
simple to use, because acceleration is automatically provided to volumes that are on the Flash Pool
aggregate.
Storage quality of service (QoS) can be used to deliver consistent performance by monitoring and
managing application workloads. The storage QoS feature can be configured to prevent user workloads
or tenants from affecting each other. The feature can be configured to isolate and throttle resource-
intensive workloads. The feature can also enable critical applications to achieve consistent performance
expectations. QoS policies are created to monitor, isolate, and limit workloads of storage objects such as
volumes, LUNs, files, and SVMs. Policies are throughput limits that can be defined in terms of IOPS or
megabytes per second (MBps).
For All Flash DM, the QoS policy is expanded to include minimum thresholds in addition to maximum limits.
Adaptive QoS (AQoS) enables IOPS per volume to automatically increase or decrease based on volume
capacity changes. You can choose from three policies (Extreme, Performance, and Value) to ensure the
most efficient use of storage resources.
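The throughput ceilings described above can be illustrated with a token bucket, the classic rate-limiting technique. This is a hypothetical sketch of the concept behind a QoS maximum in IOPS, not ONTAP's actual internal implementation:

```python
import time

class QosThrottle:
    """Token-bucket limiter illustrating a QoS ceiling in IOPS.

    Illustrative only: ONTAP's real QoS engine is internal to the OS;
    this just demonstrates the throttling idea behind a policy maximum.
    """

    def __init__(self, max_iops):
        self.max_iops = max_iops       # policy maximum (tokens added per second)
        self.tokens = float(max_iops)  # start with a full bucket
        self.last = time.monotonic()

    def try_io(self):
        """Return True if one I/O may proceed under the limit."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at one second's worth.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller must queue or retry the I/O

throttle = QosThrottle(max_iops=500)
allowed = sum(1 for _ in range(1000) if throttle.try_io())
print(allowed)  # roughly 500 of the 1000 burst I/Os pass immediately
```

A minimum threshold works the other way around: the scheduler guarantees a workload at least that many tokens per second before other workloads are served.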
DM Series + ONTAP storage efficiency features:
▪ Cloning
▪ Inline deduplication
▪ Inline adaptive compression
▪ Inline data compaction (new)
1 PB of storage capacity with 15 TB SSDs in a 4U HA-pair chassis.
E-Series and EF-Series are now focused on four specific workloads/vertical industries. We are developing reference architectures, best practice guides, and reseller/integrator partnerships in the following:
These are verticals where the E/EF value proposition of price/performance, reliability, and simplicity is key. E/EF becomes the storage building block for these vertical solutions.
These verticals are narrow in focus but extremely deep in opportunity ($3B to $13B market opportunities). There is no overlap with other portfolio products, so the portfolio conflict/confusion present when E/EF was presented as a general-purpose SAN has been removed.
▪ Deduplication: elimination of duplicate data blocks to reduce the amount of physical storage
▪ Volume-level
▪ Postprocess example:
▪ File A is ~20KB, using five blocks
▪ File B is ~12KB, using three blocks
(Slide diagram: Files A and B in a volume share duplicate blocks; the duplicates are freed in the aggregate.)
Deduplication eliminates duplicate data blocks, at a volume level, to reduce the amount of physical
storage that is required. When inline deduplication is used, duplicate blocks are eliminated while they are
in main memory, before they are written to disk. When postprocess is used, the blocks are written to disk
first and duplicates are later freed at a scheduled time.
In this example, postprocess deduplication has been enabled on a volume that contains two files. File A is a document of approximately 20KB that uses five 4KB (kilobyte) blocks. File B is another document of approximately 12KB that uses three 4KB blocks. The data in the blocks has been simplified on the slide, using four characters, and the blocks are color coded to make the duplicates easy to identify.
In file A, the first and fourth blocks contain duplicate data, so one of the blocks can be eliminated. The second block in file B also contains the same data, so it too can be eliminated. Deduplication eliminates duplicate blocks within the volume, regardless of the file.
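The mechanism can be sketched in a few lines: fingerprint each block, store each unique block once, and turn duplicates into references. This is an illustrative model of the concept, not ONTAP's implementation:

```python
import hashlib

def dedup(blocks):
    """Volume-level postprocess deduplication sketch (illustrative only).

    Each unique block is stored once; duplicates become references to the
    first physical block that held the data, regardless of which file
    wrote it.
    """
    seen = {}      # fingerprint -> physical block index
    physical = []  # unique blocks actually kept on "disk"
    refs = []      # per logical block: index into `physical`
    for data in blocks:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in seen:
            seen[fp] = len(physical)
            physical.append(data)
        refs.append(seen[fp])
    return physical, refs

# File A (five blocks) and file B (three blocks) from the slide example:
file_a = [b"abcd", b"eabc", b"deaa", b"abcd", b"eaaa"]
file_b = [b"bcde", b"abcd", b"eabc"]
physical, refs = dedup(file_a + file_b)
print(len(file_a + file_b), len(physical))  # 8 logical blocks, 5 unique
```

Note how the first and fourth blocks of file A, and the second block of file B, all resolve to the same physical block, matching the slide.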
Beginning with ONTAP 9.2, you can perform cross-volume sharing in volumes that belong to the same aggregate by using aggregate-level inline deduplication. Aggregate-level inline deduplication is enabled by default on all newly created volumes on All Flash DM (AFF) systems running ONTAP 9.2 or greater. Cross-volume sharing is not supported on Flash Pool and HDD systems.
When cross-volume sharing is enabled on an aggregate, volumes that belong to the same aggregate can share blocks and deduplication savings. A cross-volume shared block is owned by the FlexVol volume that first wrote the block.
Beginning with ONTAP 9.3, you can schedule background cross-volume deduplication jobs on AFF systems. Cross-volume background deduplication provides additional incremental deduplication savings. Additionally, you can automatically schedule background deduplication jobs with Automatic Deduplication Schedule (ADS). ADS automatically schedules background deduplication jobs for all newly created volumes with a new automatic policy that is predefined on all AFF systems.
▪ Compression: compression of redundant data blocks to reduce the amount of physical storage
▪ Volume-level
▪ Example:
▪ File A is ~20KB, using five blocks
▪ File B is ~12KB, using three blocks
(Slide diagram: the eight blocks of Files A and B are combined into a compression group and reduced to compressed blocks.)
Data compression compresses redundant data blocks, at a volume level, to reduce the amount of physical storage that is required. When inline data compression is used, compression is done in main memory, before blocks are written to disk. When postprocess is used, the blocks are written to disk first, and the data is compressed at a scheduled time.
This example starts exactly where the previous example started, except that postprocess data compression is enabled.
Data compression first combines several blocks into compression groups. In this example, the 32KB compression group is made up of eight 4KB (kilobyte) blocks. The data compression algorithm identifies redundant patterns, which can be compressed, and continues to find and compress further redundancies. After everything has been compressed, all that remains on disk are the fully compressed blocks.
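The group-then-compress step can be sketched as follows. This uses zlib purely for illustration (ONTAP uses its own adaptive algorithms); the point is that combining eight 4KB blocks into one 32KB group and compressing the group can cut the physical block count sharply when the data is redundant:

```python
import zlib

BLOCK = 4096  # 4KB physical block size

def compress_group(blocks):
    """Sketch of group-based compression (illustrative; zlib stands in
    for ONTAP's actual compression algorithm)."""
    group = b"".join(blocks)           # combine the blocks into one group
    compressed = zlib.compress(group)
    # Count how many 4KB physical blocks the compressed group needs.
    needed = -(-len(compressed) // BLOCK)  # ceiling division
    return compressed, needed

# Eight 4KB blocks of highly redundant data, as in the slide example:
blocks = [b"abcd" * 1024 for _ in range(8)]
_, needed = compress_group(blocks)
print(needed)  # far fewer than the original 8 blocks
```

Incompressible data would yield little or no saving, which is why adaptive compression decides per group whether storing the compressed form is worthwhile.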
Data compaction takes I/Os that normally consume a 4KB block on physical storage and packs multiple such I/Os into one physical 4KB block.
This increases space savings for very small I/Os and files, less than 4KB, that have a lot of free space. To increase efficiency, data compaction is done after inline adaptive compression and inline deduplication.
Compaction is enabled by default for All Flash DM systems shipped with ONTAP 9. Optionally, a policy can be configured for Flash Pool and HDD-only aggregates.
Example (compression and compaction combined):
▪ Writes from hosts or clients: Volume A: 8KB (50% compressible) and 8KB (80% compressible); Volume B: 8KB (80% compressible) and two 4KB writes (55% compressible); Vol C: three 1KB writes
▪ Without compression: 11 4KB blocks
▪ After inline adaptive compression: 8 4KB blocks
▪ After inline data compaction: 4 4KB blocks
▪ Data compression and compaction offer additional space savings for very small I/Os and files less than 2KB, or for larger I/Os that are highly compressible or have extensive “white space.”
▪ Without compression, the data shown would take 11 4KB blocks on physical storage. After inline adaptive compression, only 8 4KB blocks are required.
▪ Inline data compaction further consolidates the data into four blocks.
▪ Combined savings from compression and compaction provide two to four times the space savings that are achieved from inline compression alone.
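The compaction step above amounts to a bin-packing problem: fit sub-4KB fragments into shared 4KB physical blocks. A minimal first-fit sketch (illustrative only; the fragment sizes below are hypothetical, not the exact values from the slide):

```python
BLOCK = 4096  # physical block size in bytes

def compact(ios):
    """Inline data compaction sketch: pack multiple sub-4KB I/Os into
    shared 4KB physical blocks using first-fit (illustrative only)."""
    blocks = []  # each entry: bytes already used in that physical block
    for size in ios:
        for i, used in enumerate(blocks):
            if used + size <= BLOCK:   # fragment fits in an existing block
                blocks[i] += size
                break
        else:
            blocks.append(size)        # open a new physical block
    return len(blocks)

# Eight compressed fragments that would otherwise each take a 4KB block:
fragments = [2048, 820, 820, 820, 1844, 1844, 1024, 1024]
print(compact(fragments))  # 3 blocks instead of 8
```

Because compaction runs after compression and deduplication, the fragments it packs are already as small as the earlier stages can make them.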
Backup and recovery:
▪ Tape vaulting
▪ Tape backup
▪ D2D/D2C backup
Before we get into the particulars of how we accomplish this vision, it’s important to review some of the
nomenclature being used under the umbrella of data protection, and in doing that we should review the
key metrics that apply specifically to DP: RPO and RTO.
Recovery point objective (RPO) refers to how much data loss your business application can tolerate. Recovery time objective (RTO) refers to how quickly you can restart a failed application. Together, they measure or define acceptable business risk. Typically, backup and recovery (B&R) deals with hours, disaster recovery (DR) often deals with minutes, and what we call business continuity is measured in seconds, often with no data loss at all.
▪ Active-active controllers
▪ Continuous availability: add MetroCluster™
Let's define high availability, continuous availability, and disaster recovery, and how to position products in these three areas.
▪ High availability (HA) refers to eliminating the business impact and downtime due to a hardware
failure—the ONTAP operating system provides this and more
• RAID-TEC and RAID DP technology
• Active-active controllers
▪ Continuous availability extends HA by providing protection from external events such as power,
cooling, and network failures as well as natural disasters including hurricanes, floods, and
earthquakes, eliminating planned and unplanned downtime and data loss. Adding MetroCluster to the
infrastructure adds continuous availability.
▪ Disaster recovery provides site resiliency, protecting against failures that affect an entire data center
and regional disasters; adding SnapMirror enables disaster recovery.
This is an advantage of a unified storage architecture: All of these capabilities are built in, with no need to
purchase multiple, separate solutions to achieve the same data protection goals.
ONTAP data protection portfolio (from local backup to continuous availability):
▪ Local backup: Snapshot technology
▪ Remote backup: SnapVault software
▪ Disaster recovery: asynchronous SnapMirror software
▪ Continuous availability: MetroCluster software
ONTAP software offers a complete portfolio of data protection solutions, which include Snapshot copies
for local backups, SnapVault replication for operational backups, asynchronous SnapMirror for disaster
recovery, and MetroCluster software for continuous availability.
When considering the overall data protection portfolio, choose when to use synchronous SnapMirror
instead of MetroCluster software. The difference is in the administrative overhead. MetroCluster software
can be deployed within a campus or between sites up to 100 kilometers apart, to protect against single-
component failures automatically, and it can manage automated site failover. SnapMirror software can
protect against regional failures by moving production to the disaster recovery site with a small outage
window. SnapMirror solutions require a failover procedure.
Lenovo Snapshot copies: Snapshot copies are automatically scheduled point-in-time copies that take up
no space and incur no performance overhead when created. Over time, Snapshot copies consume
minimal storage space, because only changes to the active file system are written. Individual files and
directories can be easily recovered from any Snapshot copy, and the entire volume can be restored back
to any Snapshot state in seconds.
Lenovo SnapRestore: SnapRestore technology rapidly restores single files, directories, or entire LUNs
and volumes from any Snapshot copy backup.
Lenovo SnapMirror: SnapMirror technology provides asynchronous replication of volumes, independent of
protocol, either within the cluster or to another ONTAP system for data protection and disaster recovery.
SnapMirror for Storage Virtual Machines, also called Storage Virtual Machine Disaster Recovery (SVM
DR) replicates both data and Storage Virtual Machine configuration settings.
Lenovo SnapVault: SnapVault is a disk-to-disk backup product that is a core feature of ONTAP software.
SnapVault software includes application-aware backup solutions that are tailored for application-specific
protection provided by the Lenovo SnapManager suite of products. SnapVault software provides cost-
effective, long-term backups of disk-based data. Volumes can be copied for space-efficient, read-only,
disk-to-disk backup either within the cluster or to another ONTAP system. SnapVault software, when
used with version-independent SnapMirror software, enables a single destination volume to serve as both
a backup and disaster recovery copy.
▪ Scalable SnapCenter Server architecture
▪ Common GUI: centralized point of management
▪ Role-based access control (RBAC)
▪ Lightweight application, database, and OS plug-ins
▪ Designed to support multiple storage platforms
(Slide diagram: SnapCenter Servers use management and data paths between the plug-ins on each host and Lenovo Storage Systems.)
SnapCenter has been developed to help meet this vision and to meet the data protection challenge.
• On the left, we are showing the SnapCenter plug-ins that are installed on each host by using the
SnapCenter Server:
– These lightweight application, database, and OS plug-ins offer role-specific functions and workflows.
• SnapCenter Server and plug-ins talk to storage. SnapCenter software is designed for multiplatform support, meaning that eventually it might work with more than just ONTAP-based storage systems.
• Along with multiplatform support, SnapCenter is also designed to support multiple hypervisors.
▪ Flexible deployments
▪ Replicates between flash, disk, cloud, and software-defined storage
▪ Is OS version-independent
▪ Built for the enterprise
▪ Lets you tune your RPO to meet your business requirements
▪ Is ideal for data center, remote office or branch office (ROBO), and cloud environments
▪ Native data format on secondary storage enables data multiuse
(Slide diagram: replication across data center, ROBO/colo, and cloud endpoints on flash, disk, and cloud.)
• SnapMirror delivers seamless array-based data replication to rapidly and efficiently transport data
between storage endpoints.
• The unified replication engine streamlines backup and disaster recovery workflows with a common I/O
engine and baseline volume—dramatically reducing network and storage requirements.
• SnapMirror is a logical engine, so you can replicate data between models of controllers, with different
media (HDD, SSD), or replicate to instances in the cloud or even to software-defined storage on your
choice of hardware and hypervisor.
• SnapMirror has no distance limitation and is ideal for distributed enterprises that have data center,
remote or branch office, and cloud deployments.
• The native data format of SnapMirror means that you can use secondary data as primary data for
other business uses, like software development, testing disaster recovery, and data analytics and
reporting. You can simply use FlexClone technology to rapidly create thin clones for other uses.
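The snapshot-based replication idea behind this can be sketched in a few lines: take a baseline snapshot shared by source and destination, then ship only the blocks that changed since it. This is a conceptual model of asynchronous, delta-based mirroring, not the actual SnapMirror engine:

```python
def snapshot(volume):
    """Take a point-in-time copy of a volume (here a dict: block_id -> data)."""
    return dict(volume)

def delta(baseline, current):
    """Blocks changed or added since the baseline snapshot."""
    return {bid: data for bid, data in current.items()
            if baseline.get(bid) != data}

def replicate(dest, changes):
    """Apply only the changed blocks to the destination volume."""
    dest.update(changes)

# Asynchronous mirror update: ship only the blocks that changed.
src = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
base = snapshot(src)   # baseline common to source and destination
dst = dict(base)       # destination starts from the same baseline
src[1] = b"BBBB"       # overwrite an existing block
src[3] = b"dddd"       # write a new block
changes = delta(base, src)
replicate(dst, changes)
print(sorted(changes), dst == src)  # [1, 3] True
```

Only two blocks cross the wire instead of the whole volume, which is the "dramatically reduced network and storage requirements" point above.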
• SnapMirror: Snapshot copies are deleted when not required for replication
• SnapVault: Snapshot copies are retained and deleted on a specified schedule
• With synchronous replication the recovery point objective (RPO) is equal to 0, so you have zero data
loss.
• Configuration changes are automatically passed to the remote node so the administrator doesn’t have
to manage the nodes at each site.
• Operational procedures are extremely streamlined for ease of use. In case of a switchover, nothing needs to be checked, nothing needs to be recovered, nobody needs to be informed, and there is no need to research how lost data affects the business or its users.
• Both NAS and SAN are supported. For NAS environments, the failover is transparent for planned
maintenance and unplanned events without a disruption of service. There is a minimal outage for SAN.
• Supports 2-, 4-, and 8-node configurations, including mixed controller models in the cluster. Select which aggregates to mirror and which not to mirror for greater efficiency.
• New with ONTAP 9.3 is synchronous replication over IP networks. IP network support is currently available only with a 4-node configuration.
• MetroCluster interoperates with all the great features of ONTAP: deduplication, compression, flash,
and SnapMirror.
• MetroCluster makes sure of continuous business operations and the availability of your critical storage
assets.
• The first all-flash array vendor to offer synchronous replication; almost all others do not, and no others support NAS.
SnapLock is a WORM offering that helps customers retain data to meet different compliance regulations.
Write once read many (WORM) describes a data storage device in which information, after it has been written,
cannot be modified. With this write protection, the data cannot be tampered with after it has been written to the
device.
• It can be enabled by using a license. It is the software layer that enables the ONTAP WAFL file
system to provide WORM capabilities. This feature eliminates the requirement of special WORM
devices, because now spinning media can be used to provide WORM capabilities.
• The solution has been certified by Cohasset for the SEC, but we comply with other regulations, too.
• The license is cluster level.
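The WORM behavior described above can be modeled simply: once a record is committed, it can be read but never modified or deleted before its retention time expires. This is a hypothetical sketch of the SnapLock concept, not its implementation:

```python
import time

class WormStore:
    """Illustrative WORM (write once, read many) store with retention."""

    def __init__(self):
        self._records = {}  # name -> (data, retain_until timestamp)

    def commit(self, name, data, retention_secs):
        """Write a record once; rewrites are refused."""
        if name in self._records:
            raise PermissionError("WORM: record already committed")
        self._records[name] = (bytes(data), time.time() + retention_secs)

    def read(self, name):
        """Reads are always allowed."""
        return self._records[name][0]

    def delete(self, name):
        """Deletion is refused until the retention period expires."""
        _, retain_until = self._records[name]
        if time.time() < retain_until:
            raise PermissionError("WORM: retention period has not expired")
        del self._records[name]

store = WormStore()
store.commit("audit.log", b"entry", retention_secs=3600)
print(store.read("audit.log"))  # b'entry'
```

In SnapLock the enforcement sits below the file system in ONTAP WAFL, so it applies regardless of which protocol or client wrote the data.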
Active IQ:
- Inventory
- Health
- Upgrade
- Configuration
Customers need additional insights to help increase the operational efficiency of the ONTAP environment. Active IQ now delivers an intelligent portfolio of services with predictive analytics and proactive, self-healing care. Active IQ offers analytics in ONTAP software-based systems. Features enable customers to predict when they will need capacity upgrades, understand how to address risks, and model the benefits of an upgrade.
As much as 92 percent of users that were surveyed in Q1 2018 state that Active IQ compares favorably
or very favorably to other solutions in the market. Five hundred respondents from 49 countries and across
all industries and company sizes were surveyed.
Cloud Volumes ONTAP offers software-defined storage in the cloud and enables enterprise data management with cloud economics. A software-only storage subscription for AWS and Azure, Cloud Volumes ONTAP runs ONTAP software, offering your customers control of their cloud data with the power of an enterprise storage software solution.
Cloud Volumes ONTAP offers a pay-as-you-use flexible cost structure, support for SSD-based and HDD-
based storage services, and a high-availability, or HA, option for Cloud Volumes for AWS. Cloud Volumes
ONTAP is ideal for DevOps and cloud disaster recovery situations.
Cloud Volumes ONTAP enables your customers to use the cloud in new ways by enhancing data
protection, minimizing the cloud footprint, and enabling easy data movement.
The need for storage capacities grows more quickly than storage budgets grow. Data is also dynamic:
Data that is hot today might be cold tomorrow.
FabricPool is a new Data Fabric technology that enables automated tiering of Lenovo Snapshot and
secondary data to low-cost object storage tiers either on-premises or off-premises. Unlike manual tiering
solutions, the FabricPool feature reduces TCO by automating the tiering of data to lower the cost of
storage. FabricPool technology delivers the benefits of cloud economics by tiering to the public cloud on
AWS and tiering to the private cloud on Lenovo StorageGRID Webscale.
The FabricPool feature is transparent to applications and enables enterprises to take advantage of cloud
economics without sacrificing performance or having to rearchitect solutions to leverage storage
efficiency. ONTAP supports FabricPool technology on All Flash DM systems and all SSD aggregates on
DM systems.
FabricPool uses composite aggregates to combine flash and cloud into one storage pool. Hot data stays
on flash, and cold data moves to the cloud. This process is nondisruptive to users and applications.
FabricPool also automatically tracks data properties and makes data available when needed. This
process helps with cost optimization.
The primary purpose of FabricPool technology is to reduce storage footprints and associated costs.
Active data remains on high-performance SSDs, and inactive data is tiered to low-cost object storage
while preserving ONTAP data efficiencies.
FabricPool has three primary use cases:
• Reclaim space on primary storage.
• Shrink secondary storage.
• Move volumes to external capacity tiers.
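The hot/cold decision at the heart of FabricPool can be sketched as a simple recency test: blocks untouched for a cooling period move to the object-store capacity tier, and everything else stays on flash. The 31-day cooling period below is an assumption for illustration, and the code is a conceptual model, not the real tiering engine:

```python
import time

COOLING_PERIOD = 31 * 24 * 3600  # assumed: 31 days without access marks a block cold

def tier(blocks, now, performance, capacity):
    """Illustrative FabricPool-style tiering pass over a composite
    aggregate: cold blocks go to object storage, hot blocks stay on SSD."""
    for block_id, last_access in blocks.items():
        if now - last_access > COOLING_PERIOD:
            capacity.add(block_id)     # cold: tier out to the object store
        else:
            performance.add(block_id)  # hot: keep on the flash tier

now = time.time()
blocks = {
    "db-index": now - 60,                  # accessed a minute ago
    "old-snapshot": now - 90 * 24 * 3600,  # untouched for 90 days
}
perf, cap = set(), set()
tier(blocks, now, perf, cap)
print(sorted(perf), sorted(cap))  # ['db-index'] ['old-snapshot']
```

Because the decision is per block and automatic, applications see one storage pool; only the physical placement (and its cost) changes.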
(Illustration: SnapCenter and Insight.) This illustration is used throughout the course to help you visualize data management software with respect to where it is typically used.
ONTAP software enables customers to simplify data management, accelerate and protect their data, and
future-proof their environments across the hybrid cloud.
Use the points on this slide to inform your customers about the key innovations in ONTAP software and
how the software can help modernize their storage environments.
Be sure to tell your customers how ONTAP software can benefit them. With ONTAP, they are not limited to only one type of storage deployment. They can:
▪ Satisfy their storage needs with a unified block and file solution
▪ Simplify their transition to a cloud-ready data center
▪ Modernize their infrastructure with flash and cloud
▪ Deploy emerging applications with enterprise-grade data services in under 10 minutes
▪ Radically change the economics of the data center
▪ Freely move data to where it runs optimally: flash, disk, or the cloud
▪ Scale to their business needs with scale-up and scale-out capabilities on unified all-flash storage
▪ Manage their data and storage from anywhere at any time by using the ThinkSystem Storage Manager
* Foundation Service will be highly recommended with no exceptions
** Advanced Service offered only through Special bid
▪ Foundation Service: 3-year and 5-year Next Business Day Response
▪ Essential Service: 3-year and 5-year 24x7 4-Hour Response, Technician Installed
• All of the services above provide combined hardware maintenance and software support, including Auto Support and updates, for Lenovo hardware and eligible software products.
• Foundation Service will be highly recommended with no exceptions.
• Due to the complexity of capacity-based pricing, we will offer Advanced and YDYD only as a Special bid.
*These Services might be only available in certain locations. Service areas may be found at www.lenovolocator.com. Contact Lenovo or a service provider
for details on availability.
• Customers calling in for support (either Premier Support or Preconfigured Support Levels) will follow the normal support process.
▪ 87% of storage arrays will be flash by 2020
▪ #1 storage protocol in the future
▪ 68% say IT is more complex than 2 years ago
Modernize Storage: Optimize IT with NVMe-ready Fibre Channel to drive business innovation.
Automate Operations: Power IT with simple and open automation to increase productivity.
Brocade is actively working to address these challenges and to help eliminate the complexity as customers look at a modern storage infrastructure. There are two areas:
1. Making sure that our Gen6 products are optimized to be NVMe-ready, to help customers drive business innovation.
2. With the operating systems that come on our Gen6 products, implementing automation capabilities that power IT with simple and open automated tools to increase productivity.
Customers’ critical applications and data require purpose-built networking for storage, and it’s all delivered with Gen6 Fibre Channel:
• Accelerating application response time with flash and delivering better performance
• Maintaining always-on business operations, giving customers the high availability that they need
• Making it easy to keep up with data growth through exceptional scalability, while also providing the agility to adapt and optimize to meet business requirements
(Diagram: ThinkSystem FC SAN: Lenovo server & HBA, FC switch or director, Lenovo storage.)
(Chart: FCP versus NVMe comparison of IOPS, latency, and MB/sec.)
ThinkSystem DB610S
• Entry Top-of-Rack Switch
• Ideal for 1-16 servers
• 1U, up to 24 x 32G SFP+ ports
• Single fixed power supply

ThinkSystem DB620S
• Mid-range Top-of-Rack Switch
• Ideal for up to 40 servers
• 1U, up to 48 x 32G SFP+ ports
• Optional 4 x 128G QSFP ports
• 2 Tbps total bandwidth
• Redundant hot-swap power supplies

ThinkSystem DB400D/DB800D
• Director class, for mid-size/large enterprises
• Ideal for 140+ servers
• DB400D: 9U, four blade slots, up to 256 x 32Gb SFP+ ports & 16 x 128Gbps ICL ports
• DB800D: 14U, eight blade slots, up to 512 x 32Gb SFP+ ports & 32 x 128Gbps ICL ports
• 99.999% availability: redundant power supplies, fans, CR & CP blades, WWN cards
Switch:
▪ Ideal for small SANs
▪ Fixed number of ports and limited scalability (8-96 ports)
▪ Limited redundancy (potentially power and cooling)
▪ Potential for many ports dedicated to Inter-Switch Links
▪ Good performance

Director:
▪ Ideal for larger SANs
▪ More scalability: leverage blades for up to hundreds of ports
▪ Maximum redundancy: high availability in each component
▪ Lower percentage of ports dedicated to Inter-Switch Links
▪ Best performance: fewer hops and more bandwidth
Simple deployment: install in three easy steps and deploy best practices in a single click.
Integrated Routing Software for DB400D/DB800D (Part # 7S0C000KWW)* includes 1 year of support:
▪ Carve up fabrics and provide more control of access and utilization.
▪ Same license for both 4-slot and 8-slot chassis; the license allows up to 128 ports with IR.
▪ Additional 2 years of support: Part # 7S0C000SWW; additional 4 years of support: Part # 7S0C000ZWW.
* Includes 1 year of support for this software feature.
Tape use cases: backup and recovery, secure data retention, remote archive, and regulatory compliance.
▪ As data ages, the desire is to move it to lower-cost storage.
▪ TS4300
▪ Replacement for the TS3100/TS3200, which are being withdrawn from market
▪ 3U base unit that can be used standalone or rack-mounted with up to six expansions
▪ High scalability to help reduce the costs of data center space for storage
▪ Supports LTO6 / LTO7 / LTO8
Tape offerings from Lenovo:
• Form factor: 1U | 1U | 3U
• Max # drives: 1 | 1 | 3 to 21
• Max. capacity (native / compressed): LTO8 12TB / 30TB, LTO7 6TB / 15TB, LTO6 2.5TB / 6.25TB | LTO8 108TB / 270TB, LTO7 54TB / 135TB, LTO6 22.5TB / 56.25TB | LTO8 384TB native
▪ Modular library
▪ Up to 3.2 PB native capacity (LTO 8 cartridges): 1 base unit + 6 expansion units
▪ 32 to 272 LTO storage slots
▪ Up to 5 I/O slots per module
▪ Up to 21 LTO 8, 7 & 6 hot-swappable tape drives (combination of HH & FH drives)
Base library + 6 expansion modules:
• Cartridges: 272
• HH tape drives: 21
• FH tape drives: 7
• HH / FH tape drive mixture range: 19/1 to 7/7
Sections
1. Product & Portfolio Overview 25%
2. Lenovo Value Proposition and Differentiators 18%
3. Positioning 18%
4. Business conversations 27%
5. Market Opportunity for Lenovo 5%
7. Services 7%
www.pearsonvue.com/lenovo