STORAGE
MANAGING THE INFORMATION THAT DRIVES THE ENTERPRISE
APRIL 2015, VOL. 14, NO. 2

COVER STORY
VVOLs redefining storage
VMware's VVOLs promise easier configuration and provisioning of storage for virtual machines, but some storage vendors might struggle to keep up.

ALSO IN THIS ISSUE
EDITOR'S NOTE / CASTAGNA: DNA set to replace disks and flash
STORAGE REVOLUTION / TOIGO: IBM's surprising storage story unfolds on a mainframe
SNAPSHOT 1: VM storage shopping still often focuses on better backup
CLOUD: Vendors improve integration of storage arrays and cloud
SNAPSHOT 2: Virtual server backup buyers eye VM-specific apps and cost
QUALITY AWARDS: Hitachi's enterprise arrays: People's choice once again
HOT SPOTS / BUFFINGTON: Four-letter word for long-term retention: tape
READ-WRITE / MATCHETT: Make sure all-flash is good before eschewing hybrid
About us

EDITOR'S LETTER
RICH CASTAGNA

Storage is in my DNA
Storage is moving out of the realm of magnetics and solid-state electronics and into the world of biology.


FORGET ABOUT FLASH. Magnetic media? That's so 20th century. The cloud? Well, that's just spinning disks, tape and chips you can't see but sort of trust that they're there. All of that pales in comparison to the latest and greatest storage medium: DNA storage. Yes, good old deoxyribonucleic acid might be the answer to our data storage prayers.

Even before scientists have finished fully plotting the genomes and exploring all the other mysteries locked up in nucleotides, a handful of those wizards have figured out a new use for our favorite building block of life.

It seems those double helixes can do more than just carry the genes for blue eyes, an aptitude for calculus or the ability to consistently hit jump shots from three-point range. Those twisty little rascals can store our data, too. And when I say our data, I mean the kinds of data we've been wading through the past few years: big data, Internet of Things data, Web-scale data, mobile data, social data. You get the idea.

DNA researchers have known for a couple of years that it's possible to store a bunch of bits and bytes as DNA, but it's an iffy proposition and the integrity of the data is questionable. Kind of like all that stuff you have stored on a pile of floppy disks in the back of the closet. (Sadly, I realize that at least half of my readers won't have any idea what a floppy disk is.)

But a group of researchers in Zurich, Switzerland, recently managed to work out some of the kinks in the double helix and have developed processes to store data as DNA in a reliable manner for the long term, as in thousands, or even millions, of years. I'm pretty sure that's even better than LTO tape and longer than what the Health Insurance Portability and Accountability Act requires.

It's fascinating stuff, to be sure, but I have trouble picturing what all this looks like. Is it something you can only see under a microscope? Could you back all your data up to DNA and then inject it into someone, like that surly backup admin with the strange hair and a Megadeth T-shirt? And instead of some complex and costly process to replicate your data remotely, could you just put him on a plane and send him to France?

The possibilities are endless and unquestionably intriguing, and the age of DNA storage could usher in a whole new world of data storage screw-ups, redefining
the meaning of user error in ways we never imagined. (I picture CSI: Silicon Valley special agents about to crack a case, reading the DNA report and exclaiming, "Hey, this is a PowerPoint presentation!" Meanwhile, on the other side of town, a CEO is trying to explain a slide showing some perp's saliva.)

All kidding aside, there is a serious element to this stuff. There has to be; I can't think of a single joke with DNA as the punch line. But even without mucking about with genomes and double helixes, storage media is still managing to grow to proportions that strain our ability to manage them. HGST, the WD subsidiary, introduced 10 TB-capacity helium-filled disk drives, and most array vendors now support nearline drives up to 8 TB.

That's not the only media that's growing: Oracle's T10000D tape offers a native (uncompressed) capacity of 8.5 TB, while IBM's TS1150 tape drive features uncompressed capacity of up to 10 TB.

Solid-state is also experiencing a growth spurt, with big-capacity flash such as SanDisk's 4 TB SAS solid-state drives and PCI Express flash products like Intel's 2 TB SSD DC P3700 or HGST's 2.2 TB FlashMax III.

The issue isn't how much data we can store; we're going to keep getting more and more capacity until those DNA-based products are on the shelves of Best Buy. The real issue is how to manage all that data, know what we have
and where it is, and ensure that it is in a retrievable form. If we ever plan to turn all that stuff into a useful resource for in-depth data analysis, we're going to have to overcome all those management and administrative hurdles, even if all our company's data ends up strolling the Champs-Élysées wearing a Megadeth T-shirt and a beret.
RICH CASTAGNA is TechTarget's VP of Editorial/Storage Media Group.


STORAGE REVOLUTION
JON TOIGO

There may be a mainframe in your future
IBM's flashy new z13 is impressive, but the storage part of the story is yet to be told.


AT THE JAZZ at Lincoln Center's Frederick P. Rose Hall on a brisk winter day in New York earlier this year, I had the pleasure of attending a launch event for the latest IBM z Systems mainframe, the z13.

I know, I know. For many of you, the first reaction is to stifle a yawn, turn the page and read the next column or article in this e-zine. Mainframes aren't on your radar; they're just too complex, too expensive to own and operate, or just too old school.

On the other hand, from what I could see, the event was packed, brimming with IT professionals from financial services, manufacturing, retail, healthcare and even some of those cutting-edge Internet outfits. Some attendees (probably most) were current mainframe users, but there were a lot of talking points aimed at both newbies and former mainframers. In fact, IBM did its best to frame its value case for the z13 against the backdrop of mobility, not big iron.

AN AGILE MAINFRAME

Sure, there were a couple of gear-head presentations covering hardware speeds and feeds (the IBM z13 processor unit consists of 3.99 billion transistors fabricated on 22 nm CMOS silicon, with six, seven or eight cores delivering 5 GHz, and is purchased in single-chip modules), but the lion's share of the agenda consisted of discussions by IBMers and customers regarding the agility of the platform and the simplicity with which resources could be assigned and reassigned to workloads, making it a great "cloud in a box" offering. Even the most jaded folks found their interest piqued by a discussion of how the z13 could blend transactional processing with inline business analytics. Doing so could pave the way for the big data crowd to begin delivering on their promises, including intelligent interpolation of multiple data variables to enable near real-time interactions with consumers.

With the IBM z13 in the mix, the ubiquitous mobile device (smartphone, phablet and so on) could be leveraged to support in-store sales: more effectively targeting customers with ads and coupons for face creams, toothpastes
and boneless pork chops as soon as they walk into the store. IBM was bragging about new software for making the connections among sales, inventory management, customer loyalty cards and buying histories, and other components of smart marketing. The company also made a point to discuss its newly minted relationship with Apple, whose popular gear and apps provided the client side of the equation. This is not a topic one might expect to hear at a mainframe event.

IBM z13 USES KVM TO SPAWN VMs

With the z13, IBM also embraced KVM, the increasingly popular hypervisor technology, and bragged that you could stand up 8,000 virtual machines inside its mainframe at a fraction of the cost per machine of an x86 "Tinkertoy" implementation using VMware or Microsoft hypervisors.

For those who were already doing clustered x86 boxes and Hadoop, Big Blue was providing a means to connect all that infrastructure to the mainframe, easy peasy. That way, you could keep all your massively parallel clusters and MapR while consolidating your transactional work (and many production servers) into the blue box.

I had to admit that I found myself wanting one of these mainframes again. With the cabinet doors opened, it was evident that everything about the kit was modular and familiar. Processor unit (PU) chips feature 64 MB of shared L3 cache for use by all cores, plus they include a generous 960 MB of L4 cache and provide communications between drawers of PUs and their storage controllers. Everything else looked pretty familiar from the x86 server world. Specialty cards (motherboards, actually) provided connections to PCI busses, FICON and whatever other I/O interconnects one might need. Despite the somewhat exotic internal plumbing, with zLinux as an operating system, it dawned on me that most contemporary server admins would not find the environment or the platform that difficult to grasp.

WHITHER STORAGE?

What was missing from the presentations I saw was any discussion of storage outside of the memory components. IBM has a lot of stories to tell about storage architecture, from Tier-1 arrays with onboard hardware controllers and bloated software functionality, to high-performance JBODs with a hardware-based virtualization/compression uber controller (the SAN Volume Controller), to other fabric and network-based solutions leveraging FICON and Ethernet. But there was zero discussion of these elements in the launch-day presentations. I suspect they're saving up their storage discussion for their Edge conference in Las Vegas this spring.

Unfortunately, storage is where the proverbial rubber meets the road in all IT architectures these days. Virtualizing workloads with hypervisor computing, spreading workloads over massively clustered compute platforms and divvying up processing activities per a MapReduce scheme are interesting forks in the once-monolithic computing architecture that had organized the IT universe into one app/one CPU technology stacks for so many years. But moving to either of these architectures creates
major disruptions in how we do storage.

With virtualization, we get the I/O blender problem that quickly reduces flash and disk storage to jumbles of random I/O rubble. The preferred solutions of the VMwares and the Microsofts are to deploy proprietary storage stacks that work only with workloads and data from the single hypervisor stack.

With Hadoop, we introduce a huge data mirroring and synchronization challenge, even as we assume that everyone has a limitless budget to simply toss any failed node and replace it with another. One wonders whether the architecture developed for supercomputing behind particle colliders is really optimized for general business workloads or budgetary realities.

Some observers might argue that IBM did a great intro of z13 by focusing on the efficiencies of processor caching. There may even be some truth to the idea that data in a mobile/big data world hasn't got time to be committed to spinning rust or flashy RAM; it's constantly in motion, so conventional storage ideas do not apply. I doubt that legal eagles or auditors would agree, however. In truth, what the IBM z13 does do is return our thinking to data processing, because of its attention to combining transaction processing with business analytics. In the end, it's this focus (data processing, not information technology) where the critical innovations are required. However, the
underlying hardware platform cannot be ignored. It must be rock-solid, easy to manage and well-balanced in terms of performance, capacity and cost.

I am anxiously awaiting IBM Edge to get the rest of the z13 story.
JON WILLIAM TOIGO is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.


VMWARE VVOLs

VVOLs are here. Are you ready?
VVOLs are forcing major changes in storage products, but the transition may be difficult for many IT shops.
BY ARUN TANEJA

THE CHANT "VVOLs are coming" has been ringing since August 2011, when VMware first publicly presented the concept in a technical session at VMworld. Finally, after nearly four years, VMware announced the general availability of Virtual Volumes (VVOLs) at VMware Partner Exchange 2015 in conjunction with vSphere 6 and vSphere APIs for Storage Awareness (VASA) 2.0. A number of storage array vendors simultaneously announced VVOL support in select products. So what are VMware VVOLs, how do they work and what benefits do they bring to users?

Simply stated, VVOLs enable the provisioning, monitoring and management of application storage at a virtual machine (VM) level of granularity in the storage arrays that support them. Before VMware came into the world of computing, applications enjoyed a 1:1 relationship with a LUN or volume that was carved out of a storage array. The performance, capacity and data services (compression, caching, thin provisioning, snapshots, cloning, replication, deduplication, encryption and so on) were defined precisely but statically for that LUN. The application running on the physical server had access to all the services available to the LUN.

When VMware abstracted the compute side with the hypervisor, one could run multiple applications in the form of VMs on a single physical server. Storage remained essentially the same as before. While VMware spoke VMs, storage continued to speak LUNs and volumes.

The resulting mismatch wreaked havoc for a decade. The only way to make storage play nicely with VMware was to have a single LUN support a fairly large number of VMs. If an application started performing poorly, there was no way to find the exact cause since the storage performance data was only available at the LUN level. The lack of VM-level visibility made it difficult, if not impossible, to isolate the issue and deal with it.

HOW DO VMWARE VVOLs WORK?

VVOLs are designed to solve this fundamental problem by eliminating the architectural mismatch between storage and VMs. The technology enables precise, policy-based allocation of storage resources to a VM. These resources may include type, amount and availability of storage, as well as data services such as deduplication, snapshots, replication and so on. These resources can also be modified on the fly as requirements for applications change. To fully grasp the ins and outs of VMware VVOLs, it is important to understand the concepts outlined below.

Anatomy of a VM

A VIRTUAL MACHINE today consists of a swap file, a config file and at least one VMDK file. Each snapshot of the VMDK produces another VMDK file. In the new world of VVOLs, each of these files is represented by a VVOL. As a result, each VM produces a minimum of three VVOLs or more, depending on the number of VMDKs in the VM.

For a mission-critical VM that is snapshotted every 15 minutes and stores one week's worth of snapshots, the number of VVOLs quickly approaches 675 per VM, assuming only one VMDK per VM. One can see how quickly the number of VVOLs adds up.

To get VM-level management without VVOLs, you would have to create 675 LUNs. Given the 256 LUN limit for each VMware host, and the fact that most existing storage arrays have an internal LUN limit, this is impossible.

VVOLs are designed to get past these limits but, more importantly, because they are created on demand with automated management, they enable the creation of Web-scale infrastructures.
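
The sidebar's arithmetic is easy to check with a few lines of Python. This is a minimal sketch; the constants simply restate the example above (one VMDK, a snapshot every 15 minutes, one week of retention) and nothing here comes from a VMware API:

```python
# VVOL count for the sidebar's example VM: a swap file, a config file
# and one VMDK give three base VVOLs; every snapshot adds another.
BASE_VVOLS = 3                      # swap + config + one VMDK
SNAPSHOTS_PER_HOUR = 4              # one snapshot every 15 minutes
RETENTION_HOURS = 24 * 7            # keep one week's worth of snapshots

snapshot_vvols = SNAPSHOTS_PER_HOUR * RETENTION_HOURS   # 672
total_vvols = BASE_VVOLS + snapshot_vvols               # 675

# Without VVOLs, each of these would need its own LUN, far beyond
# the 256-LUN limit per VMware host cited in the sidebar.
print(total_vvols)            # 675
print(total_vvols > 256)      # True
```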

STORAGE CONTAINERS AND VIRTUAL DATA STORES

A NAS or SAN storage array is initially split into a handful of storage containers, each representing a different set of capacities and capabilities (classes of service). Storage containers are a logical construct and are typically created by a storage administrator. They are presented to vSphere as virtual data stores, thus requiring no changes on the vSphere side. VVOLs live inside the storage containers.

VVOLs, sometimes called virtual disks, define a new virtual disk container that is independent of the underlying physical storage representation (LUN, file system or object). In other words, regardless of the type of physical storage attached (except for DAS), storage is presented to vSphere in the abstracted format of a VVOL. It is the smallest unit of storage resource allocation that can be assigned to a VM (see "Anatomy of a VM" above). It is also the smallest unit of measurement for the management of a storage array. This means resources can be provisioned at a VM level, and all monitoring and management can be performed at the VM level.

STORAGE POLICY-BASED MANAGEMENT AND POLICY-DRIVEN CONTROL PLANE

The policy-driven control plane acts as a bridge between applications and the storage infrastructure. It is responsible for mapping the VM to a storage container that is capable of meeting the policy.

In this software-defined storage model, the VM administrator uses the storage policy-based management (SPBM) interface in vSphere to define a set of policies that can be applied individually to each VM. These policies define the type of resources to be delivered by the storage arrays. For instance, a "platinum" policy may use flash resources and the best data protection, capacity optimization and disaster recovery capabilities of the available storage arrays, whereas a "gold" policy may use lesser resources.
Since all VVOLs and VMs are provisioned and managed automatically via policy using SPBM, the VMware infrastructure can scale to thousands or tens of thousands of VMs without increasing costs. Contrast this with how difficult it is to upgrade or downgrade a VM that is tied to a LUN.
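
To make the mapping concrete, here is a deliberately simplified sketch of policy-based placement. The policy names ("platinum," "gold"), the capability fields and the place_vm helper are illustrative assumptions drawn from the example above, not VMware's SPBM API:

```python
# Hypothetical SPBM-style placement: a policy lists required capabilities,
# and the control plane maps the VM to a container that advertises them.
policies = {
    "platinum": {"media": "flash",  "replication": True},
    "gold":     {"media": "hybrid", "replication": True},
}

storage_containers = [
    {"name": "container-1", "media": "flash",  "replication": True},
    {"name": "container-2", "media": "hybrid", "replication": True},
]

def place_vm(vm: str, policy_name: str) -> str:
    """Return the first storage container satisfying the VM's policy."""
    required = policies[policy_name]
    for container in storage_containers:
        if all(container.get(key) == value for key, value in required.items()):
            return f"{vm} -> {container['name']} ({policy_name})"
    raise LookupError(f"no container satisfies policy {policy_name!r}")

print(place_vm("oracle-db", "platinum"))   # oracle-db -> container-1 (platinum)
```

Upgrading or downgrading a VM then amounts to re-running placement under a different policy name, which is the on-the-fly modification the article describes.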

In addition to allocating the appropriate storage services to the VM, the control plane is also responsible for
ongoing monitoring of these VMs to ensure each VM
continues to get the resources assigned to it by the policy.

VIRTUAL DATA PLANE

The virtual data plane abstracts all the storage services available on an array, so they can be delivered (or not) to individual VMs. Historically, a VM sitting inside a given LUN received whatever capabilities and services were available to that LUN. For instance, a VM that did not need to be replicated to another site was replicated whether or not the LUN was set up with that service.

These abstracted services are made available to the control plane for consumption. These resources do not have to come solely from external storage arrays; they may come from Virtual SAN (VSAN), from vSphere itself or
from third parties. The control plane decides which services are to be made available to a given VM, based on the policy associated with that VM. VVOLs are VMware's implementation of the virtual data plane for external storage arrays, whereas VSAN provides x86 hypervisor-converged storage.

[Figure: How VVOLs work. Data services are made available to virtual machines (VMs) via policy: the policy-driven control plane (SPBM) sits above the virtual data plane (replication, dedupe, snapshot), which presents virtual data stores backed either by VSAN on x86 servers with DAS or by VVOLs on NAS/SAN.]

PROTOCOL ENDPOINTS

The communication between an ESXi host and the storage arrays is handled by Protocol Endpoints (PEs). This is a transport mechanism that connects VMs to their VVOLs on demand. One PE can connect to a very large number of VVOLs and does not suffer from the configuration limit of LUNs (a VMware host can only connect to 256 LUNs).

In a Network File System (NFS) storage array environment, the PE is discoverable as an NFS server mount point and each VMDK produces its own VVOL; in addition, each VVOL is inside its own storage container.

VASA PROVIDER

The VASA provider is software, typically implemented in the storage array, that tells the ESXi host and vCenter what capacities and capabilities are available in the storage array.

It is through the VASA provider that the storage communicates if it has flash, different types of hard disk drives, caching, snapshots, compression, deduplication, replication, encryption, cloning and other capabilities.



The topology information is also communicated this way. Is it a Fibre Channel array? If so, how many ports? Does it have multipathing?

All this information is used in the creation of policies and virtual disks. If the storage array has built-in quality of service (QoS) support, VASA would inform vSphere and the ESXi hosts of its availability. VASA 2.0 is required for VVOL support.

[Figure: Classes of service. Each VM is mapped to the appropriate storage container based on policies set by the storage administrator: from vSphere, SPBM provisions VVOLs on a VVOL-based NAS/SAN divided into storage containers 1 through 4.]

OVERALL BENEFITS OF VVOLs

By now, it should be evident that VVOLs represent a major shift in the way storage is provisioned and managed in a VMware environment. The concept of a LUN does not disappear, but storage administrators don't have to deal with them anymore. All external storage becomes abstracted, as do all storage services. Applications become associated with the right type of storage and only those services that are needed for that VM. All monitoring and management becomes VM-centric, and resources are not wasted as they are in the LUN world. Performance management is more precise, and issues can be pinpointed more easily.

As applications' needs change over time, resources can be added and subtracted automatically and non-disruptively. Also, no changes are needed to applications, and no forklift upgrades are required to get into the world of VVOLs. Customers can continue to run existing applications as they switch to VVOLs. The two environments can coexist and be managed from a common vCenter console. However, VVOLs do require vSphere 6 and VASA 2.0, and the storage array also must support VASA 2.0.



CAN EXISTING STORAGE ARRAYS SUPPORT VVOLs?

A commonly asked question is "Will my existing storage array support VMware VVOLs?" The short answer is no. But if the question is "Can the existing storage architecture be modified to support VVOLs?" the answer is yes. A non-trivial amount of engineering is required, especially if the architecture is 15 to 20 years old.

This is why storage products that support VVOLs are just coming to market, and only a few models at a time. EMC is starting with VNXe and VMAX3, and will add new models and products over time. Hewlett-Packard is starting with 3PAR models. I expect each storage array vendor to have a phased strategy of support, given the magnitude of the task.

It is important to remember that simply supporting the provisioning of VVOLs is not enough. One may be able to provision VVOLs, but only those data services that are supported will apply.

Getting into VVOLs will not be a simple matter of a one-time upgrade. Users need to understand the full picture of what models and services are supported to decide when to upgrade their infrastructure and in what order.

WHAT ABOUT VENDORS THAT HAVE VM-CENTRIC PRODUCTS?

NexGen Storage, Nutanix Inc., Scale Computing, SimpliVity, Tintri and several other vendors have already implemented VM-centricity and have been shipping and supporting products for several years. Do they lose all their advantage now that VVOLs are out? Does VVOL level the playing field? The short answer is "No way." All their data services are VM-centric already. It will take other vendors a minimum of one year to get all their models and data services supported for VVOLs.

Another factor to keep in mind is that many of the players listed above have implemented extremely strong QoS features. And, lest we forget, VVOLs do not give you automatic QoS for applications. The underlying storage array must implement it. If it does, then QoS can be surfaced via VASA to vSphere and be attached to the policies.

There is a general misunderstanding in the marketplace that VVOLs inherently deliver QoS. This is not true. Very few storage arrays today have sophisticated QoS functionality, and adding it is non-trivial. There are exceptions, of course. This means those vendors with high-quality QoS will continue to enjoy competitive advantages for the foreseeable future.

SUMMARY

VVOLs are forcing serious changes in storage products across the whole industry. The benefits VMware VVOLs bring are numerous and unquestionable. But change is hard, and the next 18 months will be difficult for most IT shops as they struggle to understand VVOLs and determine how to bring about this change without upsetting their ongoing operations.
ARUN TANEJA is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.



Snapshot 1

Storage purchases to support virtual servers lean toward backup

[Survey infographic, three charts:

Why is your company buying storage, backup or management tools? (Multiple selections permitted.) Expanding virtualization led the reasons at 63%, ahead of new apps, sites and other demands; technology that has improved since the initial deployment (32%); old systems not working well (16%); and other reasons.

How much backup capacity will your organization be purchasing? Responses ranged from 1 TB to 10 TB (1%) up to more than 1 petabyte, with an average of 165 TB of new backup capacity to be added.

Backup looms large in buying plans. What will your planned purchase address? (Multiple selections permitted.) Storage for new VMs (72%), backup for new VMs (62%), storage for existing VMs (61%) and backup for existing VMs (56%).]


CLOUD USE CASES

Cloud storage strategies, use cases explained
Primary storage vendors tighten integration with cloud storage providers while users extend operations into the cloud.
BY GEORGE CRUMP

CLOUD COMPUTE AND cloud storage have evolved from overhyped concepts to legitimate options for IT professionals addressing real business challenges. In addition, traditional on-premises primary storage vendors are delivering practical integration with cloud storage providers.

Architecturally, primary storage providers use several approaches to extend their storage systems' reach into the cloud:

- Place matching systems on-premises and in the cloud provider's data center.
- Enable the primary storage system to send data to a dissimilar secondary storage system, including cloud storage.
- Create a virtual instance of the storage system in the cloud provider's environment. This uses the raw storage capabilities of the cloud provider, but management is performed with the storage vendor's software.

Each of these architecture decisions can enable various cloud storage use cases, which are explained below.



USE CASE 1: CLOUD AS DR

The most common use case for cloud storage is as a disaster recovery target. Many vendors now provide the ability to replicate data to a cloud provider to protect against a data center outage. In most cases, this capability uses a virtualized instance of the vendor's storage software running in the cloud. Replication software essentially sees another storage system in the cloud and replicates data to it. The other option is to set up an identical storage array in a cloud data center. Unlike the virtual instance, this hardware can be dedicated to the subscribing organization.

It is important to note that both of these use cases simply provide data movement from point A to point B. It does not mean that the applications and networking needed to access those applications are properly architected. In theory, both options could provide that complete functionality, but that is a process the organization needs to work through with the provider.

If an organization wants to completely fail over to the cloud, it will need to work with the cloud provider to ensure that appropriate network conversions are handled automatically in the transition. Additionally, it must be sure the provider has the available compute resources and the ability to start that application in their cloud when needed. Finally, the organization should ensure the provider can deliver the appropriate quality of service while the application is running in its cloud.

Protecting the cloud

AS MORE DATA is created in the cloud, protecting that data becomes critical. One way to do this is to copy data to another cloud provider. Or secondary copies could reside on-premises. For example, it might make sense to copy archive data to an on-premises tape device as well as the cloud. If you are developing an application in the cloud, it may make sense to use cloud-to-cloud backup software. There are a lot of options available to ensure data that lives in the cloud is protected. Don't make the mistake of assuming your data is safe because it's in the cloud.

USE CASE 2: CLOUD AS A TIER

Many storage systems today have the ability to move data from one tier of storage to another based on user-defined criteria. This is often the movement of data between high-performance flash storage and slower, less-expensive hard disk drive storage.

By integrating a cloud storage connection, the tiering software can also move data to the cloud. The movement between disk and cloud is seamless, other than the latency of the Internet connection. This approach may be preferable for archiving data, since users can set policies around data age, usage and so on.

In most cases, the performance difference between cloud and disk is negligible for a couple of reasons. First, many organizations have invested in high-speed Internet
bandwidth and, second, access would be limited to occasional retrieval of discrete files. As a result, the time lag
in transfer would hardly be noticeable in most situations.
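
As a rough illustration of such a tiering policy (the thresholds below are hypothetical knobs, not any vendor's defaults), the decision reduces to a rule over each file's age and recent usage:

```python
# Hypothetical tiering rule in the spirit described above: old data that
# is rarely accessed becomes a candidate for movement to the cloud tier.
MAX_AGE_DAYS = 180         # illustrative "data age" policy setting
MAX_RECENT_ACCESSES = 2    # illustrative "usage" policy setting

def should_tier_to_cloud(age_days: int, accesses_last_90_days: int) -> bool:
    """True when a file qualifies for movement from disk to cloud."""
    return age_days > MAX_AGE_DAYS and accesses_last_90_days <= MAX_RECENT_ACCESSES

print(should_tier_to_cloud(age_days=200, accesses_last_90_days=1))   # True
print(should_tier_to_cloud(age_days=30,  accesses_last_90_days=25))  # False
```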

USE CASE 3: CLOUD AS AN APPLICATION INCUBATOR

One of the most expensive aspects of developing a new application is the cost of the hardware associated with that development. Typically, the storage system selected for development is actually the system that will be used during production, so it must be fully scaled to the performance and capacity demands of that application in its production state.

The cloud may be a better place to start application development. Compute and storage can be purchased as the process develops instead of all upfront. Also, the application can be stress-tested in the cloud prior to bringing the application into production. Finally, developing in the cloud allows the purchase of the on-premises equipment to be far more accurate. The key ingredient is how to move this application from the cloud to on-premises storage for actual production use.

Some primary storage vendors accomplish this with virtualized cloud versions of their storage systems. This allows the application to be developed and tested on the exact same storage software that will be used in production. Using a virtualized version of the storage system makes transfer to the on-premises physical system far easier. The application and its data can be replicated or migrated between the two systems.

Most organizations will never reach a point where they are comfortable running critical applications in the cloud. This hybrid approach allows organizations to use the cloud when it makes sense, while continuing to maintain a data center for predictable performance and high security.

USE CASE 4: CLOUD AS PRIMARY STORAGE

Storage system vendors now have the ability to host unstructured data in the cloud. This hybrid approach uses an on-premises appliance that caches data locally, while the primary copy of data is located in the cloud.

An increasing number of vendors offer the ability to host block storage in the cloud. These appliances use sophisticated capabilities to make sure that frequently accessed data is kept on-premises. The result is a relatively small but active segment of the data set stored entirely on flash storage. Applications like SharePoint and Exchange are particularly well-suited for this. The cores of these semi-structured environments are relatively small, but attachments increase the size of the store. Also, both applications have APIs that allow vendors to safely move older attachments to the cloud.
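
A toy sketch of that caching behavior (illustrative only; real appliances use far more sophisticated heuristics than this plain least-recently-used cache):

```python
from collections import OrderedDict

# The cloud holds the primary copy; the appliance keeps only the hot
# working set on local flash and evicts the least recently used object.
class CacheAppliance:
    def __init__(self, capacity: int, cloud: dict):
        self.capacity = capacity
        self.cloud = cloud            # authoritative (primary) copy
        self.local = OrderedDict()    # on-premises flash cache

    def read(self, key: str) -> bytes:
        if key in self.local:                 # hit: serve from flash
            self.local.move_to_end(key)
            return self.local[key]
        data = self.cloud[key]                # miss: fetch from the cloud
        self.local[key] = data
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)    # evict least recently used
        return data

appliance = CacheAppliance(capacity=100, cloud={"old-attachment": b"..."})
appliance.read("old-attachment")   # first read pulls from the cloud
appliance.read("old-attachment")   # subsequent reads are served locally
```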
Today, most storage systems and IT professionals treat the on-premises data center as the hub of the data universe, with the cloud being a spoke. In the future, this model may be turned around so that the cloud is the hub. This will be especially true for organizations with a highly dispersed workforce.
GEORGE CRUMP is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.



Snapshot 2

Virtual server specialization and cost are key for VM backup buyers

[Survey infographic, two charts:

Which backup solutions will your organization be considering for purchase? (Multiple selections permitted.) VM-specific backup software topped the list at 40%, followed by disk backup (32%); private cloud backup and traditional backup software with a VM backup option (24% and 22%); traditional agent-based backup software (18%); CDP or near-CDP and public cloud backup (17% each); and tape backup (14%).

The 10 most important considerations when purchasing backup solutions. (Multiple selections permitted.) Cost (81%), scalability (63%), capacity (58%), ease of implementation/management (44%), compatibility with VMs (42%), data reduction/deduplication (40%), recovery time objective requirements (37%), compatibility with physical servers (36%), vendor support/post-sale (33%) and recovery point objective requirements (28%).]


QUALITY AWARDS: ENTERPRISE ARRAYS

Hitachi is back on top of the enterprise array hill
In a very close contest, Hitachi lands in the enterprise storage array winner's circle for the fifth time.
BY RICH CASTAGNA

WHILE OUR LATEST Quality Awards user satisfaction survey indicates that enterprise-array users are pleased with the storage systems they're using, it's clear they're not quite as enthusiastic as they have been in recent years.

Hitachi Data Systems' VSP/USP series of big storage iron earned top honors as our survey respondents gave it the highest overall grade for service and reliability. Hitachi isn't a stranger to the enterprise array Quality Awards winner's circle, having prevailed among some pretty tough competition four times on previous surveys. The other four finalists, in order of finish, were a familiar cast: EMC, Hewlett-Packard (HP), NetApp and IBM.

Despite the generally lower scores compared to other years, all the product lines fared well, and most of the ratings categories were very closely contested.

It's not possible to say precisely why the scores tilted lower this time, but the more modest results might indicate that users have higher expectations of their primary storage systems, especially with beefier midrange hybrid and all-flash systems so readily available. While those flashier arrays generally can't match the overall capabilities or capacities of the more traditional enterprise arrays, they can outperform them handily.

Overall rankings: Hitachi narrowly edges EMC

HITACHI'S OVERALL AVERAGE score of 6.26 earned it a first-place finish by a relatively small margin over EMC (6.19). But the distance between EMC and the third- and fourth-place vendors was even narrower, with HP netting an overall 6.16 and NetApp scoring 6.14. Indeed, while the scores might have been lower than what we've seen recently, the distance between first and last was measured by a mere 0.30 points, the skinniest margin we've ever seen.

Hitachi built its leading score by coming out on top in three of the five rating categories, with wins for sales-force competence, initial product quality and product reliability. NetApp and HP divvied up the other two categories, with HP leading the group for product features and NetApp on top for technical support.

The entire group's average overall score was 6.14. You'd have to go back seven years to find a lower group tally. Last year, every product managed to score higher than 6.00 in each of the ratings categories. This time around, only two vendors (Hitachi and EMC) earned that distinction, although a couple of others came quite close.

KEY STATS
- 0.07: The difference between winner Hitachi's and second-place EMC's scores, carrying on a tradition of close finishes among enterprise arrays.
- This year's scores were the lowest in all five categories going back at least seven years.
- This is the fourth time Hitachi has won the Quality Award for enterprise arrays, breaking a tie with three-time winner NetApp.

Enterprise storage arrays: Overall rankings (on a 1.00 to 8.00 scale)
Hitachi 6.26; EMC 6.19; HP 6.16; NetApp 6.14; IBM 5.96

Sales-force competence: Hitachi's sales reps stand out

USER-VENDOR RELATIONSHIPS all start with the courting ritual known as the sales process. But what happens during this getting-to-know-you phase can set the tone for an ongoing rapport. Among our gang of five enterprise storage array vendors, Hitachi received the best mark in the sales-force competence rating category, which lays out some of the most effective storage sales support techniques. It earned that 6.33 grade by posting the top scores on four of the six category statements. Hitachi's best marks came for having flexible sales reps (6.61) and reps that understand customers' businesses (6.44). It also prevailed for reps who are easy to negotiate with (6.28) and keeping customers' interests foremost (6.16).

Second-place EMC (6.20) registered the highest scores on the two remaining rating statements: a 6.54 for "The vendor's sales support team is knowledgeable" and a 6.45 on the statement "My sales rep is knowledgeable about my industry."

HP posted a category tally of 6.07, followed fairly closely by NetApp (5.94) and IBM (5.85). From a group perspective, the best overall average for all five vendors was on the knowledgeable sales force statement, with the group posting an average of 6.37. The group's weakest showing (a 5.85) was for "My sales rep keeps my interests foremost."

KEY STATS
- This is the fourth time Hitachi has won the sales-force competence category. In seven of 10 surveys, the overall winner also won for sales-force competence.
- There was only one statement that had all the vendors exceeding 6.00: having knowledgeable sales support teams.
- 6.84: For enterprise arrays, Hitachi achieved this score for sales-force competence two years ago.

Enterprise storage arrays: Sales-force competence (on a 1.00 to 8.00 scale)
Hitachi 6.33; EMC 6.20; HP 6.07; NetApp 5.94; IBM 5.85

Initial product quality: Out of the box, enterprise vendors avoid major snags

ALL FIVE FINALISTS scored well in the initial product quality category, an indication their users were able to get up and running quickly without encountering major snags. Hitachi prevailed in this category with a 6.34 rating, but all five vendors were fairly closely bunched. Once again, Hitachi fashioned its victory by leading the pack on four of the category's six rating statements. The vendor flexed its muscles with nearly identical scores on the statements "I am satisfied with the level of professional services this product requires" (6.51) and "This product was easy to get up and running" (6.50).

HP (6.22) and NetApp (6.20) posted consistent scores across all statements to finish second and third, respectively. But the top dogs for the other two statements were EMC, with a category-high 6.60 for products that install without defects, and IBM's 6.29 for products that are easy to use (just barely nudging out NetApp with its 6.28).

As a group, the best across-the-board average was a 6.39 for products that install without defects, which suggests very effective quality assurance. At the other end of the spectrum, the group's lowest average of 5.98 came on the statement "The product requires very little vendor intervention," which might suggest that a little too much user handholding is still required.

KEY STATS
- Only one vendor (NetApp) racked up 6.00+ scores for all six category statements.
- This is the fifth time Hitachi has won the initial product quality category; it earned top honors in each of those Quality Awards surveys.
- 6.20: The group's overall average score for initial product quality is the lowest we've seen in seven years.

Enterprise storage arrays: Initial product quality (on a 1.00 to 8.00 scale)
Hitachi 6.34; HP 6.22; NetApp 6.20; EMC 6.12; IBM 6.11

Product features: HP ekes out win, but array features approaching parity

FOR YEARS, INDUSTRY experts have been citing feature parity among the relatively small crop of enterprise array vendors. The results of our product features rating category provide strong evidence that parity is at hand; to call the finish in this rating category a close race would be an understatement. HP's 6.22 put it at the front of the pack, but only by the narrowest of margins, with a trifling 0.02 points separating the top four finishers.

It's about as close to a statistical dead heat among four vendors as we've ever seen, but HP did come out on top for three of the category's seven statements, with second-place NetApp nabbing two and Hitachi the final pair.

HP's best score (and the best score for the full category) was an impressive 6.60 picked up for the key statement "Overall, this product's features meet my needs." Its other high grades were for mirroring features (6.36) and management features (6.29).

As it often does, NetApp earned the top mark (6.42) for its snapshot features, along with a solid 6.21 for remote replication features. Hitachi led the group on the final two statements, with a 6.52 for product scalability and a 6.31 for interoperability with other vendors' products, which was likely due to its virtualization capabilities.

KEY STATS
- 5.82: While Hitachi fared well for arrays that are interoperable with other vendors' products, as a whole, the group notched its lowest average for that statement.
- EMC didn't score highest on any statement, but earned extremely consistent marks to finish in a tie for second in the category.
- 9 of 10: In all past enterprise storage array surveys, the overall winner also scored best for features, but not this year.

Enterprise storage arrays: Product features (on a 1.00 to 8.00 scale)
HP 6.22; EMC 6.21; NetApp 6.21; Hitachi 6.20; IBM 5.95

Product reliability: Hitachi's patch management bolsters reliability

THE TRUE VALUE of an enterprise array is measured over time in terms of its product reliability. All of the vendors' product lines represented in this survey earned enviable grades for their dependability. Hitachi returned to the forefront in this category, with a 6.44 that gave it a small lead over second-place EMC (6.35). But then the tally sheet tightens, with HP trailing EMC by only 0.05 points and NetApp lagging HP by a similar margin. IBM rounded out the group with a still solid 6.05. These results should be good news for any company in the market for an enterprise-class array.

Hitachi and EMC each had top tallies for two statements, and HP triumphed on the final statement. Hitachi's best marks were for patches that can be applied non-disruptively (6.64) and for requiring few unplanned patches (6.52). EMC's leading scores came for products that experience very little downtime (6.55) and products that meet service-level requirements (6.45).

Third-place HP snared the final statement with a 6.26 rating for "Vendor provides comprehensive upgrade guidance." NetApp's best showing was on the very little downtime statement (6.46), while IBM's 6.16 for meeting service-level requirements was its highest grade.

KEY STATS
- 1 of 25: Out of 25 statement scores for all five vendors in this category, only one was below 6.00 (a 5.96).
- 7.00: The highest product reliability category score ever for enterprise arrays was earned by Hitachi two years ago.
- 6.41: The best average score for all five finalists in this category was for products that experience very little downtime.

Enterprise storage arrays: Product reliability (on a 1.00 to 8.00 scale)
Hitachi 6.44; EMC 6.35; HP 6.30; NetApp 6.25; IBM 6.05

Technical support: NetApp's support teams deliver as promised

RESPONSIVE, TIMELY TECHNICAL support can help outweigh a multitude of other product or service shortcomings. Users tend to have high expectations when it comes to support and can be critical judges of how quickly and well that support is delivered. As in past surveys, the scores in the technical support rating category are lower than in other categories. But NetApp managed to muster a very respectable 6.09 category score by outdistancing the competition on four of the eight rating statements.

NetApp's best marks came for delivering support as contractually specified (6.64), having knowledgeable third-party partners (6.16), and providing adequate documentation and other supporting materials (6.06).

EMC (6.05) and Hitachi (6.01) came in second and third, divvying up the remaining four statements. EMC's leading marks came for having knowledgeable support personnel (6.30) and for "Support issues rarely require escalation" (5.94). Hitachi set the pace for resolving problems in a timely manner (6.18) and for taking ownership of problems (6.11).

HP's and IBM's best results came on the same statement, with HP notching a 6.36 and IBM earning a 6.21 for delivering support as contractually specified.

KEY STATS
- 5.71: Vendor training is disappointing. The group's lowest average statement score was for "The vendor provides adequate training."
- This is the third time NetApp has won the enterprise array technical support rating category.
- 5.99: The finalists' overall average for technical support is the lowest we've seen in seven years.

Enterprise storage arrays: Technical support (on a 1.00 to 8.00 scale)
NetApp 6.09; EMC 6.05; Hitachi 6.01; HP 5.99; IBM 5.82

Would you buy this product again?

ON EACH Quality Awards survey, in addition to the five groups of category rating statements, we also pose a general question, asking if users would buy the same product again given what they know now. The results are often surprising, and can run against the grain of the category scores. But with close races in each of the rating categories in this survey, the results of the "buy again" question are not so surprising this time around, with all of the vendors receiving positive endorsements from their users.
RICH CASTAGNA

Enterprise storage arrays: Would you buy this product again?

HP       91%
EMC      90%
NetApp   87%
Hitachi  87%
IBM      83%

PRODUCTS IN THE SURVEY

The following products were included in the 10th Quality Awards for enterprise storage arrays survey; the number of responses for finalists is shown in parentheses.

- EMC VMAX or VMAXe (145)
- Fujitsu Eternus DX8400/DX8700 or DX200 S3/DX500 S3/DX600 S3*
- Hewlett-Packard (HP) XP Series or HP 3PAR StoreServ 7000/HP 3PAR StoreServ 10000 (129)
- Hitachi Data Systems VSP/VSP G1000/USP/USP V Series (46)
- IBM DS8000 Series or XIV Storage System (81)
- NEC D8 Series*
- NetApp FAS6000/V6000 or FAS8000 Series (97)

*Too few responses to qualify as a finalist

ABOUT THE QUALITY AWARDS

The Storage magazine and SearchStorage Quality Awards are designed to identify and recognize products that have proven their quality and reliability in
actual use. The results are derived from a survey of qualified readers who assess products in five main categories: sales-force competence, initial product
quality, product features, product reliability and technical support. Our methodology incorporates statistically valid polling that eliminates market share as
a factor. Our objective is to identify the most reliable products on the market regardless of vendor name, reputation or size. Products were rated on a scale of
1.00 to 8.00, where 8.00 is the best score. A total of 305 respondents provided 505 product evaluations.
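For readers who want to see the arithmetic, here is a minimal Python sketch of how a category score could be computed as the plain mean of a product's rating-statement scores. The survey's exact methodology and weighting are not published here, and only the first three statement values below come from the article; the other five are made up purely to illustrate the calculation.

    # Minimal sketch: a category score as the mean of eight statement
    # scores on the 1.00-8.00 scale. Equal weighting is an assumption;
    # the survey's actual method may differ.
    def category_score(statement_scores):
        return round(sum(statement_scores) / len(statement_scores), 2)

    # 6.64, 6.16 and 6.06 are NetApp's reported statement scores; the
    # remaining five values are hypothetical fillers.
    netapp_support = [6.64, 6.16, 6.06, 5.95, 6.02, 5.88, 6.10, 5.91]
    print(category_score(netapp_support))  # -> 6.09 with these values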


HOT SPOTS
JASON BUFFINGTON

Why cloud won't kill tape for retention

Cloud is often sold as a replacement for tape, but has major limitations when it comes to long-term retention.


WHEN DEVELOPING A data protection plan, IT teams should take a hybrid approach to on-site and off-site protection that uses disk, cloud and tape in almost every scenario. Sure, some organizations can go without tape due to a lack of long-term retention requirements, while others will never be able to use the cloud for one security concern or another. But for the rest of us, a hybrid plan should likely include all three.
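To make the shape of such a plan concrete, here is a minimal sketch of a three-tier protection policy expressed as data. The tier targets and retention periods are hypothetical, not a recommendation for any particular product or service.

    # Hypothetical hybrid plan: disk for fast restores, cloud for
    # off-site copies, tape for long-term retention. Values illustrative.
    HYBRID_PLAN = [
        {"tier": "primary",   "medium": "deduplicated disk",
         "purpose": "fast operational restores",      "retention_days": 30},
        {"tier": "secondary", "medium": "cloud",
         "purpose": "off-site copies and DR",         "retention_days": 365},
        {"tier": "tertiary",  "medium": "WORM tape",
         "purpose": "long-term, auditable retention", "retention_days": 7 * 365},
    ]

    for copy in HYBRID_PLAN:
        print(f'{copy["tier"]}: {copy["medium"]} for {copy["purpose"]} '
              f'({copy["retention_days"]} days)')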
To be clear, I am a huge fan of cloud-based data protection, especially for endpoint devices and remote offices. I'm an even bigger fan of using it for disaster recovery as a service as a superset of backup as a service. Almost every data protection activity should start with disk (preferably deduplicated disk), but there are reasons why the tertiary copy should be on tape, as much as some cloud companies want to put the last nail in tape's coffin. One major reason is chain of custody.

If you are in an audited environment that requires long-term retention, it is important to be able to prove that data on a seven-year-old tape is, in fact, seven years old and has not been modified. There are some very compelling extended-hold capabilities in disk target systems, as well as tape cartridges, so you can be assured that the data was written once even if it is later read many times (WORM). If you have WORM-enabled tape cartridges or disk volumes, you can write to them all you want, but your options to modify the data are typically relegated to "delete volume."
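The behavior described above is easy to model. The toy class below captures WORM semantics: append and read freely, never modify, and delete only at whole-volume granularity. It is a sketch of the concept, not any vendor's actual interface.

    # Toy model of WORM (write once, read many) semantics.
    # Not any vendor's API; for illustration only.
    class WormVolume:
        def __init__(self):
            self._records = []
            self._deleted = False

        def write(self, data: bytes):
            if self._deleted:
                raise IOError("volume has been deleted")
            self._records.append(data)   # appending new data is allowed

        def read(self, index: int) -> bytes:
            return self._records[index]  # read as many times as you like

        def modify(self, index: int, data: bytes):
            # the defining constraint: existing data can never be changed
            raise PermissionError("WORM volume: data cannot be modified")

        def delete_volume(self):
            # the one blunt "modification" typically available
            self._records.clear()
            self._deleted = True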
How do you assure an auditor that the data within your cloud service has not been tampered with?

You can't, at least not today. Perhaps the closest thing to that is if your service provider offers a virtualized disk target system via their cloud service, and that target system has an extended-hold feature. Depending on the implementation and the amicability of the auditor, that might pass, but there is another problem.

Most folks, once they clear away fear, uncertainty and doubt about tapes being fragile and slow, just want out of the tape management burden. If you don't want to manage your tape inventory, handle off-site tape storage or perform periodic maintenance on tape drives, there are some great services for that.


And if you outsource the tape management for a few years and then decide to change managed services or bring the tapes back on-site, you can do that. The tapes, in their pristine original state, are returned to be stored however you like throughout their lifecycle.

What do you do if you want to switch clouds after three years, but your data has to be stored for seven?

You can't pull back three years of iterations from a cloud provider without an inordinate amount of effort and expense (with most products). So you will likely have cloud lock-in: if you are going to stop using tape for long-term retention, you must commit to that cloud provider for the seven-year term so the data remains pristine within the cloud storage. If you later choose to change cloud providers, you will either invalidate the older copies or have to leave the old data with the original cloud provider until it ages out, while protecting your newer data with the new cloud provider. Since much of the data set will be similar to what it was last week with the old provider, this will likely double your storage footprint until the older data ages out of the original provider.
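Some quick, hypothetical arithmetic shows the effect. Assuming a steady 100 TB protected data set, a seven-year retention requirement and a provider switch after year three, the combined footprint roughly doubles for years at a time:

    # Back-of-the-envelope sketch of the lock-in math. All numbers are
    # assumptions: 100 TB data set, 7-year retention, switch in year 3.
    data_tb, retention_years, switch_year = 100, 7, 3

    for year in range(1, 11):
        # the old provider keeps copies until the last ones age out,
        # i.e. retention_years after the switch
        old = data_tb if year <= switch_year + retention_years else 0
        new = data_tb if year > switch_year else 0
        print(f"year {year:2d}: old={old} TB, new={new} TB, total={old + new} TB")

    # Years 4-10 print 200 TB: double the footprint until the old
    # provider's copies age out.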
There are some exceptions. If the service provider can export your entire data set into a secure disk appliance or other repository that can be assured to be unscathed, you have more options. The same is true if your cloud service provider stores your data on tapes for long-term retention. If you believe your organization can store long-term data more effectively in the cloud than on tapes, be sure your auditors and legal department agree with your retention strategy before you throw those tape drives away.
JASON BUFFINGTON is a senior analyst at Enterprise Strategy Group. He focuses primarily on data protection, as well as Windows Server infrastructure, management and virtualization. He blogs at CentralizedBackup.com and tweets: @Jbuff.


READ/WRITE
MIKE MATCHETT

Check storage
setup before
going all-flash
With all-flash, hybrid and converged
hybrid arrays to choose from, it makes
sense to take a big-picture approach.


ALL THE KEY storage vendors have now announced their entry into the all-flash storage array market, with most having offered hybrids (solid-state drive-pumped traditional arrays) for years. As silicon storage gets cheaper and denser, it seems inevitable that data centers will migrate from spinning disks to faster, better and cheaper options, with non-volatile memory poised to be the long-term winner. But the storage skirmish today seems to be heading toward the total cost of ownership end of things, where two key questions must be answered:

- Are hybrid arrays a better choice to handle mixed workloads through advanced QoS and auto-tiering features?

- How much performance is needed, and how many workloads in the data center have data with varying quality of service (QoS) requirements or data that ages out?

All-flash proponents argue that cost and capacity will continue to drop for flash compared to hard disk drives (HDDs), and that no workload is left wanting with the ability of all-flash to service all I/Os at top performance. Yet we see a new category of hybrids on the market that are designed for flash-level performance and then fold in multiple tiers of colder storage. The argument there is that data isn't all the same and its value changes over its lifetime. Why store older, un-accessed data on a top tier when there are cheaper, capacity-oriented tiers available?

THERE'S HYBRID AND THEN THERE'S HYBRID

It's misleading to lump together hybrids that are traditional arrays with solid-state drives (SSDs) added and the new hybrids that might be one step evolved past all-flash arrays. And it can get even more confusing when the old arrays get stuffed with nothing but flash and are positioned as all-flash products. To differentiate, some industry wags like to use the term "flash-first" to describe newer-generation products purpose-built for flash speeds. That still could cause some confusion when considering both hybrids and all-flash designs. It may be more accurate to call the flash-first hybrids "flash-converged."


By being flash-converged, you can expect to buy one of these new hybrids with nothing but flash inside and get all-flash performance.

We aren't totally convinced that the future data center will have just a two-tier system with flash on top backed by tape (or a remote cold cloud), but a hot-cold storage future is entirely possible as intermediate tiers of storage get, well, dis-intermediated. We've all predicted the demise of 15K HDDs for a while; can all the other HDDs be far behind as QoS controls get more sophisticated in handling the automatic mixing of hot and cold to create any temperature of storage you might need?

JUST WHAT IS TRADITIONAL STORAGE?

This brings us to the issue of what traditional storage really is these days. Everyone compares their shiny new products to so-called traditional storage, yet we think that what once was traditional storage has changed significantly and is about to change even more.

One of the biggest changes isn't necessarily due to the ability to drop in SSDs in place of HDDs, but rather to being able to use the growing bounty of computing power available. CPU chip capabilities continue to advance as fast as flash. More built-in processing power (more cores supporting more threads, faster execution pipelines, and upcoming features like chip-level encryption support) means more inline and online storage features can be delivered in software. For example, it's now possible for most storage vendors to provide inline deduplication based on software processing.
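For a sense of what that means in practice, here is a minimal sketch of inline deduplication done purely in software: fingerprint each block on the write path and store only blocks never seen before. Real arrays add collision handling, compression and far more careful metadata management, so treat this as a concept demo only.

    # Minimal inline-dedupe sketch: hash each fixed-size block and keep
    # only unique blocks. Concept demo, not production code.
    import hashlib

    BLOCK_SIZE = 4096
    block_store = {}  # fingerprint -> unique block bytes

    def write_stream(data: bytes) -> list:
        """Dedupe inline; return the fingerprint 'recipe' for later reads."""
        recipe = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in block_store:   # only new blocks consume capacity
                block_store[fp] = block
            recipe.append(fp)
        return recipe

    def read_stream(recipe: list) -> bytes:
        return b"".join(block_store[fp] for fp in recipe)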
This has led to the rise of software-defined storage (SDS), which is really an acknowledgement that all of what a storage array does can be run as a program and, in turn, be dynamically programmable. While some vendors still productively leverage custom ASICs (HP 3PAR, SimpliVity), many array controllers have long been mostly software. While SDS vendors sell just the software part and leave the infrastructure up to users, many SDS purchasers still end up buying a pre-loaded SDS appliance that doesn't look much different from traditional storage when it comes off the pallet.

Still, ambitious new SDS providers have brought some benefits to the storage market. We see improvements offered in QoS at fine-grained levels, dynamic online configuration and partitioning, inline storage features and broader capacity efficiencies. SDS can enable faster refresh cycles to accommodate new technologies and provide increasingly intelligent storage-side analytics.


TIERING ISN'T A BAD WORD

This brings me back to the key hybrid feature of auto-tiering. Tiering is evolving quickly from being based on relatively simple data-aging or recent-access algorithms working with large chunks of data, to being based on fine-grained, small-chunk analyses of access and usage patterns over varying time intervals, the stated or required QoS of the data, competing workloads and the increasingly dynamic makeup of available storage resources.
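Here is a sketch of what such a placement decision might look like, with invented thresholds standing in for the far richer analytics real products apply:

    # Toy auto-tiering decision: place a chunk by stated QoS and recent
    # access, not by age alone. All thresholds are invented.
    def place_chunk(accesses_last_day: int, days_since_access: int,
                    qos: str) -> str:
        if qos == "premium":              # stated QoS can pin data up-tier
            return "flash"
        if accesses_last_day >= 10:       # hot data earns the top tier
            return "flash"
        if days_since_access <= 30:       # warm data sits on capacity disk
            return "capacity disk"
        return "cold tier (cloud or tape)"

    print(place_chunk(50, 0, "standard"))   # hot -> flash
    print(place_chunk(0, 400, "standard"))  # aged out -> cold tier
    print(place_chunk(0, 400, "premium"))   # pinned by QoS -> flash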
All-flash proponents might talk about how they're becoming cost-efficient (per capacity) enough to handle more mixed workloads with differing requirements. At the same time, flash-converged hybrids are getting better at delivering targeted QoS, including pinning top-end workloads in flash. The all-flash array camp counters that any effort put into determining QoS is a waste of Opex when every workload can get consistent flash performance. Still, a large percentage of data quickly moves down the value chain, with much of it never or almost never accessed after a short active lifetime.
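The capacity-cost side of that argument is simple to sketch. With entirely hypothetical per-terabyte prices, a hybrid wins whenever the hot fraction of data is small; as flash prices fall, the gap narrows:

    # Rough capacity-cost comparison. Prices are hypothetical, chosen
    # only to show how the hot-data fraction drives the trade-off.
    FLASH_PER_TB = 2000   # assumed $/TB
    HDD_PER_TB = 300      # assumed $/TB

    def capacity_cost(total_tb, hot_fraction):
        all_flash = total_tb * FLASH_PER_TB
        hybrid = (total_tb * hot_fraction * FLASH_PER_TB
                  + total_tb * (1 - hot_fraction) * HDD_PER_TB)
        return all_flash, hybrid

    flash_cost, hybrid_cost = capacity_cost(500, 0.2)  # 500 TB, 20% hot
    print(f"all-flash: ${flash_cost:,.0f}  hybrid: ${hybrid_cost:,.0f}")
    # -> all-flash: $1,000,000  hybrid: $320,000 under these assumptions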
We're definitely approaching a watershed moment in storage. Big vendors like EMC, Hewlett-Packard, IBM and NetApp have hedged their bets with traditional hybrid, all-flash and flash-converged hybrid options, while smaller players like Kaminario, Nimble Storage, Pure Storage and Violin Memory each promote a specific vision. Either way, we suspect the future traditional storage array will provide for a wide range of workloads without much manual storage administration, and the lowest TCO option will eventually dominate.
MIKE MATCHETT is a senior analyst and consultant at Taneja Group.


TechTarget Storage Media Group


STORAGE MAGAZINE
VP EDITORIAL/STORAGE MEDIA GROUP Rich Castagna
EXECUTIVE EDITOR Andrew Burton
SENIOR MANAGING EDITOR Ed Hannan
ASSOCIATE EDITORIAL DIRECTOR Ellen O'Brien
CONTRIBUTING EDITORS James Damoulakis, Steve Duplessie, Jacob Gsoedl
DIRECTOR OF ONLINE DESIGN Linda Koury

STORAGE DECISIONS TECHTARGET CONFERENCES
EDITORIAL EXPERT COMMUNITY COORDINATOR Kaitlin Herbert

SEARCHSTORAGE.COM / SEARCHCLOUDSTORAGE.COM / SEARCHVIRTUALSTORAGE.COM
ASSOCIATE EDITORIAL DIRECTOR Ellen O'Brien
SENIOR NEWS DIRECTOR Dave Raffo
SENIOR NEWS WRITER Sonia R. Lelii
SENIOR WRITER Carol Sliwa
STAFF WRITER Garry Kranz
SITE EDITOR Sarah Wilson
ASSISTANT SITE EDITOR Erin Sullivan

SEARCHDATABACKUP.COM / SEARCHDISASTERRECOVERY.COM / SEARCHSMBSTORAGE.COM / SEARCHSOLIDSTATESTORAGE.COM
EXECUTIVE EDITOR Andrew Burton
SENIOR MANAGING EDITOR Ed Hannan
STAFF WRITER Garry Kranz

SUBSCRIPTIONS
www.SearchStorage.com

STORAGE MAGAZINE
275 Grove Street, Newton, MA 02466
editor@storagemagazine.com

TECHTARGET INC.
275 Grove Street, Newton, MA 02466
www.techtarget.com

© 2015 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.

COVER IMAGE AND PAGE 8: FANDIJKI/ISTOCK


Stay connected! Follow @SearchStorageTT today.
