
What is Converged Infrastructure (CI)?

Converged infrastructure (CI) is an approach to data center management that relies on a specific vendor
and the vendor's partners to provide pre-configured bundles of hardware and software in a single chassis.
A CI essentially turns disparate data center components into an appliance form factor that can be centrally
managed. The goal of a converged infrastructure is to minimize compatibility issues and simplify the
management of servers, storage systems and network devices while reducing costs for cabling, cooling,
power and floor space. Converged infrastructure is sometimes called unified computing, datacenter-in-a-box or infrastructure-in-a-box.
As the CI market continues to grow, vendors are differentiating their products by bundling virtualization software with their CI offerings. Such an approach is sometimes marketed as being a "hyper-converged infrastructure" solution. (The prefix "hyper" refers to the hypervisor in the virtualization layer.) Vendors may also provide additional functionality for cloud bursting or disaster recovery, providing administrators with the ability to completely manage both physical and virtual infrastructures in a federated manner.
What is vCenter, ESXi and vSphere?
VMware vCenter Server, formerly known as VirtualCenter, is the centralized management tool for the vSphere suite. VMware vCenter Server allows for the management of multiple ESXi servers and virtual machines (VMs) from different ESXi servers through a single console application.
VMware Inc. is a software company that develops many suites of software products, especially for providing various virtualization solutions. There are many cloud products, data center products, desktop products and so on.
vSphere is a software suite that falls under the data center products. vSphere is like the Microsoft Office suite, which contains many applications such as MS Word, MS Excel, MS Access and so on. Like Microsoft Office, vSphere is also a software suite that has many software components, such as vCenter, ESXi, the vSphere client and so on. The combination of all these software components is vSphere. vSphere is not a particular piece of software that you can install and use; it is just a package name that contains other sub-components.
ESXi, the vSphere client and vCenter are components of vSphere. The ESXi server is the most important part of vSphere. ESXi is the virtualization server; it is a type 1 hypervisor. All the virtual machines or guest OSes are installed on the ESXi server. To install, manage and access the virtual servers that sit on top of the ESXi server, you need another part of the vSphere suite, called the vSphere client or vCenter. The vSphere client allows administrators to connect to ESXi servers and access or manage virtual machines. The vSphere client is installed on the client machine (e.g. the administrator's laptop) and is used from that machine to connect to the ESXi server and perform management tasks. So what is vCenter, and why do we need it? Try cloning an existing virtual machine using just the vSphere client without a vCenter server.
vCenter Server is similar to the vSphere client, but it is a server with more power. vCenter Server is installed on a Windows Server or Linux server. VMware vCenter Server is a centralized management application that lets you manage virtual machines and ESXi hosts centrally. The vSphere client is used to access vCenter Server and ultimately manage the ESXi servers. vCenter Server is required for enterprises to have enterprise features like vMotion, VMware High Availability, VMware Update Manager and VMware Distributed Resource Scheduler (DRS). For example, you can easily clone an existing virtual machine in vCenter Server. So vCenter is another important part of the vSphere package. You have to buy a vCenter license separately.

The diagram above shows the vSphere suite in a more descriptive way. vSphere is a product suite, and ESXi is a hypervisor installed on a physical machine. The vSphere client is installed on a laptop or desktop PC and is used to access the ESXi server to install and manage virtual machines on that server. vCenter Server is installed as a virtual machine on top of an ESXi server. vCenter Server is the vSphere component mostly used in large environments where there are many ESXi servers and dozens of virtual machines. The vCenter server is also accessed by the vSphere client for management purposes. So, in small environments the vSphere client is used to access the ESXi server directly, while in larger environments the vSphere client is used to access vCenter Server, which ultimately manages the ESXi servers.
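To make this relationship concrete, here is a minimal sketch using pyvmomi, VMware's open-source Python SDK for the vSphere API. The hostname and credentials are placeholders for your environment; the same call pattern works against a standalone ESXi host or a vCenter Server, and pointing it at vCenter is what exposes the multi-host features described above.

```python
# Minimal inventory listing sketch using pyvmomi (pip install pyvmomi).
# Hostname, user and password below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host, user, pwd):
    # Skip certificate verification (acceptable only for lab/testing use).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # A container view walks the inventory for all VirtualMachine objects.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            print(vm.name, vm.runtime.powerState)
        view.DestroyView()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    # Point this at an ESXi host for direct management, or at vCenter Server
    # to see VMs across all managed hosts.
    list_vms("vcenter.example.com", "administrator@vsphere.local", "password")
```
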
What is a cluster?
A cluster is a group of hosts. When a host is added to a cluster, the host's resources become part of the
cluster's resources. The cluster manages the resources of all hosts within it. Clusters enable the vSphere
High Availability (HA) and vSphere Distributed Resource Scheduler (DRS) solutions.
A cluster acts and can be managed as a single entity. It represents the aggregate computing and memory
resources of a group of physical x86 servers sharing the same network and storage arrays. For example, if
the group contains eight servers with four dual-core CPUs each running at 4GHz and 32GB of memory,
the cluster has an aggregate 256GHz of computing power and 256GB of memory available for running
virtual machines.
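A quick way to sanity-check that arithmetic is to compute it directly. The sketch below simply re-derives the aggregate figures from the example values in the text (these are illustrative sizes, not a real inventory).

```python
# Re-derive the aggregate cluster resources from the example above.
hosts = 8
cpus_per_host = 4          # physical CPUs per server
cores_per_cpu = 2          # dual-core CPUs
ghz_per_core = 4.0         # each core runs at 4GHz
ram_gb_per_host = 32

aggregate_ghz = hosts * cpus_per_host * cores_per_cpu * ghz_per_core
aggregate_ram_gb = hosts * ram_gb_per_host

print(f"Aggregate compute: {aggregate_ghz:.0f} GHz")  # 256 GHz
print(f"Aggregate memory:  {aggregate_ram_gb} GB")    # 256 GB
```
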
What is a Microsoft cluster?

Microsoft Cluster Services (MSCS) has been around since the days of Windows NT 4, providing high
availability to tier 1 applications such as MS Exchange and MS SQL Server. With the release of Server
2008, Microsoft Cluster Services was renamed Windows Server Failover Clustering (WSFC), with
several enhancements.
This post will focus on the design choices when MSCS/WSFC is implemented with VMware vSphere. This is not intended
as a step-by-step install guide for MSCS/WSFC.
The first step in determining whether or not you have a requirement for MSCS/WSFC (and this is true for
every application/service you are virtualising) is to assess and define the application availability
requirements. This is ordinarily defined as an understanding of the impact downtime or loss of service availability
would have on the tier 1 application. How would this impact stakeholders, application owners and, most
importantly, customers? I'd also like to add that just because the application has always been clustered
doesn't mean you have to continue down that path. VMware vSphere offers several high availability
solutions which can be used collectively to support applications where unplanned downtime must be
minimal. All options should be carefully considered to understand the impact of a decision on the
application, operation and administration of the environment.
Difference between Microsoft and VMware clusters?
The comparisons below are based on:
Licensing
Virtualization Scalability
VM Portability, High Availability and Disaster Recovery
Storage
Networking
Guest Operating Systems

Licensing: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

# of Physical CPUs per License
Microsoft: Each Datacenter Edition license covers up to 2 physical CPUs per Host; additional licenses can be stacked if more than 2 physical CPUs are present.
VMware: A vSphere 5.5 Enterprise Plus license must be purchased for each physical CPU.
Notes: This difference in CPU licensing is one of the factors that can contribute to increased licensing costs. In addition, a minimum of one license of vCenter Server 5.5 is required for vSphere deployments.

# of Managed OSEs per License
Microsoft: Unlimited. VMware: Unlimited.
Notes: Both solutions provide the ability to manage an unlimited number of Operating System Environments per licensed Host.

# of Windows Server VM Licenses per Host
Microsoft: Unlimited.
VMware: None included - Windows Server VM licenses must still be purchased separately. In environments virtualizing Windows Server workloads, this can contribute to a higher overall cost when virtualizing with VMware. VMware does include licenses for an unlimited number of VMs running SUSE Linux Enterprise Server per Host.

Includes Anti-virus / Anti-malware Protection
Microsoft: Yes - System Center Endpoint Protection agents are included for both Host and VMs with System Center 2012 R2.
VMware: Yes - Includes vShield Endpoint Protection, which deploys as an EPSEC thin agent in each VM plus a separate virtual appliance.

Includes full SQL Database Server licenses for management databases
Microsoft: Yes - Includes all needed database server licensing to manage up to 1,000 hosts and 25,000 VMs per management server.
VMware: No - Must purchase additional database server licenses to scale beyond managing 100 hosts and 3,000 VMs with the vCenter Server Appliance.
Notes: VMware licensing includes an internal vPostgres database that supports managing up to 100 hosts and 3,000 VMs via the vCenter Server Appliance. See VMware vSphere 5.5 Configuration Maximums for details.

Includes licensing for Enterprise Operations Monitoring and Management of hosts, guest VMs and application workloads running within VMs
Microsoft: Yes - Included in System Center 2012 R2.
VMware: No - Operations Monitoring and Management requires a separate license for vCenter Operations Manager or an upgrade to vSphere with Operations Management.

Includes licensing for Private Cloud Management capabilities (pooled resources, self-service, delegation, automation, elasticity, chargeback/showback)
Microsoft: Yes - Included in System Center 2012 R2.
VMware: No - Private Cloud Management capabilities require the additional cost of the VMware vCloud Suite.

Includes management tools for provisioning and managing VDI solutions for virtualized Windows desktops
Microsoft: Yes - Included in the RDS role of Windows Server 2012.
VMware: No - VDI management requires the additional cost of VMware Horizon View.

Includes web-based management console
Microsoft: Yes - Included in System Center 2012 App Controller using web browsers supporting Silverlight 5, plus the free Windows Azure Pack for multi-tenant self-service VM management using web browsers supporting HTML5/JavaScript.
VMware: Yes - Included in the vSphere Web Client using IE 8/9/10, Firefox and Chrome.
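To illustrate the CPU-licensing difference summarized above, here is a rough license-count sketch. The host and CPU counts are hypothetical, and real purchasing decisions should be checked against current licensing terms rather than this simplified arithmetic.

```python
# Rough license-count sketch based on the licensing rules summarized above.
# Host and CPU counts are hypothetical example values.
import math

hosts = 4
physical_cpus_per_host = 4

# Microsoft: each Datacenter Edition license covers up to 2 physical CPUs per
# host; licenses stack for hosts with more CPUs.
ms_datacenter_licenses = hosts * math.ceil(physical_cpus_per_host / 2)

# VMware: one vSphere 5.5 Enterprise Plus license per physical CPU, plus at
# least one vCenter Server 5.5 license for the deployment.
vsphere_cpu_licenses = hosts * physical_cpus_per_host
vcenter_licenses = 1

print(f"Microsoft Datacenter licenses:     {ms_datacenter_licenses}")  # 8
print(f"vSphere Enterprise Plus licenses:  {vsphere_cpu_licenses}")    # 16
print(f"vCenter Server licenses:           {vcenter_licenses}")
```
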
Virtualization Scalability: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

Maximum # of Logical Processors per Host
Microsoft: 320. VMware: 320.
Notes: With vSphere 5.5 Enterprise Plus, VMware has caught up to Microsoft in terms of the maximum number of logical processors supported per Host.

Maximum Physical RAM per Host
Microsoft: 4TB. VMware: 4TB.
Notes: With vSphere 5.5 Enterprise Plus, VMware has caught up to Microsoft in terms of the maximum physical RAM supported per Host.

Maximum Active VMs per Host
Microsoft: 1,024. VMware: 512.

Maximum Virtual CPUs per VM
Microsoft: 64. VMware: 64.
Notes: When using VMware FT, only 1 virtual CPU per VM can be used.

Hot-Adjust Virtual CPU Resources to VM
Microsoft: Yes - Hyper-V provides the ability to increase and decrease Virtual Machine limits for processor resources on running VMs.
VMware: Yes - Can Hot-Add virtual CPUs for running VMs on selected guest operating systems and adjust Limits/Shares for CPU resources.
Notes: The VMware Hot-Add CPU feature requires a supported guest operating system; check the VMware Compatibility Guide for details. The VMware Hot-Add CPU feature is not supported when using VMware FT.

Maximum Virtual RAM per VM
Microsoft: 1TB. VMware: 1TB.
Notes: When using VMware FT, only 64GB of virtual RAM per VM can be used.

Hot-Add Virtual RAM to VM
Microsoft: Yes (Dynamic Memory). VMware: Yes.
Notes: Requires a supported guest operating system.

Dynamic Memory Management
Microsoft: Yes (Dynamic Memory).
VMware: Yes (Memory Ballooning). Note that memory overcommit is not supported for VMs that are configured as an MSCS VM Guest Cluster.
Notes: VMware vSphere 5.5 also supports another memory technique, Transparent Page Sharing (TPS). While TPS was useful in the past on legacy server hardware platforms and operating systems, it is no longer effective in many environments because modern servers and operating systems support Large Memory Pages (LMP) for improved memory performance.

Guest NUMA Support
Microsoft: Yes. VMware: Yes.
Notes: NUMA = Non-Uniform Memory Access. Guest NUMA support is particularly important for scalability when virtualizing large multi-vCPU VMs on Hosts with a large number of physical processors.

Maximum # of Physical Hosts per Cluster
Microsoft: 64. VMware: 32.

Maximum # of VMs per Cluster
Microsoft: 8,000. VMware: 4,000.

Virtual Machine Snapshots
Microsoft: Yes - Up to 50 snapshots per VM are supported.
VMware: Yes - Up to 32 snapshots per VM chain are supported, but VMware only recommends 2-to-3. In addition, VM snapshots are not supported for VMs using an iSCSI initiator.

Integrated Application Load Balancing for Scaling-Out Application Tiers
Microsoft: Yes - via System Center 2012 R2 VMM.
VMware: No - Requires additional purchase of vCloud Network and Security (vCNS) or the vCloud Suite.

Bare metal deployment of new hypervisor hosts and clusters
Microsoft: Yes - via System Center 2012 R2 VMM.
VMware: Yes - VMware Auto Deploy and Host Profiles support bare metal deployment of new hosts into an existing cluster, but do not support bare metal deployment of new clusters.

Bare metal deployment of new storage hosts and clusters
Microsoft: Yes - via System Center 2012 R2 VMM.
VMware: No.

Manage GPU Virtualization for Advanced VDI Graphics
Microsoft: Yes - Server GPUs can be virtualized and pooled across VDI VMs via RemoteFX and the native VDI management features in the RDS role.
VMware: Yes - via the vDGA and vSGA features, but requires separate purchase of VMware Horizon View to manage VDI desktop pools.

Virtualization of USB devices
Microsoft: Yes - Client USB devices can be passed to VMs via Remote Desktop connections. Direct redirection of USB storage from the Host is possible with Windows-to-Go certified devices. Direct redirection of other USB devices is possible with third-party solutions.
VMware: Yes - via USB pass-through support.

Virtualization of Serial Ports
Microsoft: Yes - Virtual machine serial ports can be connected to named pipes on a host. Named pipes can then be connected to physical serial ports on a host using the free PipeToCom tool. Live Migration of VMs using virtualized serial ports can be provided via 3rd party software, such as Serial over Ethernet and Network Serial Port, or 3rd party hardware, such as the Digi PortServer TS and Lantronix UDS1100.
VMware: Yes - Virtual machine serial ports can be connected to named pipes, files or physical serial ports on a host. vMotion of VMs using virtualized serial ports can be supported when using 3rd party virtual serial port concentrators, such as the Avocent ACS v6000.
Notes: The ability to perform Virtual Machine Live Migration (or vMotion) for VMs with virtualized serial ports requires a third-party option on both solutions compared.

Minimum Disk Footprint while still providing management of multiple virtualization hosts and guest VMs
Microsoft: ~800KB - Micro-kernelized hypervisor (Ring -1), plus ~5GB for drivers and management (Parent Partition - Ring 0 + 3).
VMware: ~155MB - Monolithic hypervisor with drivers (Ring -1 + 0), plus ~4GB for management (vCenter Server Appliance - Ring 3).
Notes: Microsoft and VMware each use different approaches for hypervisor architecture, and each approach offers different advantages. Microsoft Hyper-V uses a modern micro-kernelized hypervisor architecture, which minimizes the components needed within the hypervisor running in Ring -1 while still providing strong scalability, performance, VM security, virtual disk security and broad device driver compatibility. VMware vSphere uses a larger, classic monolithic hypervisor approach, which incorporates additional code, such as device drivers, into the hypervisor. This approach can make device driver compatibility an issue in some cases, but offers increased compatibility with legacy server hardware that does not support Intel VT / AMD-V hardware-assisted virtualization. See "When it comes to hypervisors, does size really matter?" for a more detailed real-world comparison. Frequently, patch management comes up when discussing disk footprints; see "Orchestrating Patch Management" for more details on this area.

Boot from Flash
Microsoft: Yes - Supported via Windows-to-Go devices.
VMware: Yes.

Boot from SAN
Microsoft: Yes - Can leverage the included iSCSI Target Server or 3rd party iSCSI / FC storage arrays using software or hardware boot providers.
VMware: Yes - Can leverage 3rd party iSCSI / FC storage arrays using software or hardware boot providers.

VM Portability, High Availability and Disaster Recovery: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

Live Migration of running VMs
Microsoft: Yes - Unlimited concurrent Live VM Migrations, with the flexibility to cap at a maximum limit that is appropriate for your datacenter architecture. Particularly useful when using RDMA-enabled NICs.
VMware: Yes - but limited to 4 concurrent vMotions per host when using 1GbE network adapters and 8 concurrent vMotions per host when using 10GbE network adapters.

Live Migration of running VMs without shared storage between hosts
Microsoft: Yes - Supported via Shared Nothing Live Migration.
VMware: Yes - Supported via Enhanced vMotion.

Live Migration using compression of VM memory state
Microsoft: Yes - Supported via Compressed Live Migration, providing up to a 2X increase in Live Migration speeds.
VMware: No.

Live Migration over RDMA-enabled network adapters
Microsoft: Yes - Supported via SMB-Direct Live Migration, providing up to a 10X increase in Live Migration speeds with low CPU utilization.
VMware: No.

Live Migration of VMs clustered with Windows Server Failover Clustering (MSCS Guest Cluster)
Microsoft: Yes - by configuring relaxed monitoring of MSCS VM Guest Clusters.
VMware: No - based on the documented vSphere MSCS setup limitations.

Highly Available VMs
Microsoft: Yes - Highly available VMs can be configured on a Hyper-V Host cluster. If the application running inside the VM is cluster aware, a VM Guest Cluster can also be configured via MSCS for faster application failover times.
VMware: Yes - Supported by VMware HA, but with the limitations listed above when using MSCS VM Guest Clusters.

Failover Prioritization of Highly Available VMs
Microsoft: Yes - Supported by clustered priority settings on each highly available VM.
VMware: Yes.

Affinity Rules for Highly Available VMs
Microsoft: Yes - Supported by preferred cluster resource owners and anti-affinity VM placement rules.
VMware: Yes.

Cluster-Aware Updating for Orchestrated Patch Management of Hosts
Microsoft: Yes - Supported via the included Cluster-Aware Updating (CAU) role service.
VMware: Yes - Supported by vSphere 5.5 Update Manager, but if using the vCenter Server Appliance, a separate 64-bit Windows OS license is needed for the Update Manager server. If supporting more than 5 hosts and 50 VMs, a separate SQL database server is also needed.

Guest OS Application Monitoring for Highly Available VMs
Microsoft: Yes.
VMware: Yes - Provided by vSphere App HA, but limited to only the following applications: Apache Tomcat, IIS, SQL Server, Apache HTTP Server, SharePoint and SpringSource tc Runtime.

VM Guest Clustering via Shared Virtual Hard Disk files
Microsoft: Yes - Provided via native Shared VHDX support for VM Guest Clusters.
VMware: Yes - But only single-host VM Guest Clustering is supported via shared VMDK files. For VM Guest Clusters that extend across multiple hosts, RDM must be used instead.

Maximum # of Nodes per VM Guest Cluster
Microsoft: 64. VMware: 5 - as documented in the VMware Guidelines for Supported MSCS Configurations.

Intelligent Placement of new VM workloads
Microsoft: Yes - Provided via Intelligent Placement in System Center 2012 R2.
VMware: Yes - Provided via vSphere DRS, but without the ability to intelligently place fault tolerant VMs using VMware FT.

Automated Load Balancing of VM Workloads across Hosts
Microsoft: Yes - Provided via Dynamic Optimization in System Center 2012 R2.
VMware: Yes - Provided via vSphere DRS, but without the ability to load-balance VM Guest Clusters using MSCS.

Power Optimization of Hosts when load-balancing VMs
Microsoft: Yes - Provided via Power Optimization in System Center 2012 R2.
VMware: Yes - Provided via Distributed Power Management (DPM) within a vSphere DRS cluster, with the same limitations listed above for Automated Load Balancing.

Fault Tolerant VMs
Microsoft: No - The vast majority of application availability needs can be supported via Highly Available VMs and VM Guest Clustering on a more cost-effective and more flexible basis than software-based fault tolerance solutions. If required for specific business applications, hardware-based fault tolerance server solutions can be leveraged where needed.
VMware: Yes - Supported via VMware FT, but there are a large number of limitations when using VMware FT, including no support for the following: VM snapshots, Storage vMotion, VM backups via vSphere Data Protection, Virtual SAN, multi-vCPU VMs, and more than 64GB of vRAM per VM.
Notes: Software-based fault tolerance solutions, such as VMware FT, generally have significant limitations. If applications require more comprehensive fault tolerance than provided via Highly Available VMs and VM Guest Clustering, hardware-based fault tolerance server solutions offer an alternative choice without the limits imposed by software-based fault tolerance solutions.

Backup VMs and Applications
Microsoft: Yes - Provided via the included System Center 2012 R2 Data Protection Manager with support for disk-to-disk, tape and cloud backups.
VMware: Yes - Only supports disk-to-disk backup of VMs via vSphere Data Protection. Application-level backup integration requires the separately purchased vSphere Data Protection Advanced.

Site-to-Site Asynchronous VM Replication
Microsoft: Yes - Provided via Hyper-V Replica with 30-second, 5-minute or 15-minute replication intervals. Minimum RPO = 30 seconds.
VMware: Yes - Provided via vSphere Replication with a minimum replication interval of 15 minutes. Minimum RPO = 15 minutes.
Notes: Hyper-V Replica also supports extended replication across three sites for added protection. In the VMware solution, orchestrated failover of site-to-site replication can be provided via the separately licensed VMware SRM. In the Microsoft solution, orchestrated failover of site-to-site replication can be provided via the included PowerShell at no additional cost; alternatively, a GUI interface for orchestrating failover can be provided via the separately licensed Windows Azure HRM service.
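As a back-of-the-envelope illustration of the concurrent-migration limits listed above, the sketch below estimates how long it might take to drain a host for maintenance. The VM count and per-migration time are made-up assumptions, not measured figures; only the concurrency caps (4 on 1GbE, 8 on 10GbE) come from the comparison above.

```python
# Rough host-evacuation estimate under a concurrent-migration cap.
# vms_to_move and minutes_per_migration are hypothetical values.
import math

def evacuation_minutes(vms_to_move, concurrent_limit, minutes_per_migration):
    # Migrations run in "waves" of at most concurrent_limit VMs each.
    waves = math.ceil(vms_to_move / concurrent_limit)
    return waves * minutes_per_migration

for label, limit in (("vMotion over 1GbE", 4), ("vMotion over 10GbE", 8)):
    est = evacuation_minutes(vms_to_move=40, concurrent_limit=limit,
                             minutes_per_migration=3)
    print(f"{label}: ~{est} minutes to drain 40 VMs")
```
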

Storage: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

Maximum # of Virtual SCSI Hard Disks per VM
Microsoft: 256 (Virtual SCSI). VMware: 60 (PVSCSI) / 120 (Virtual SATA).

Maximum Size per Virtual Hard Disk
Microsoft: 64TB. VMware: 62TB.
Notes: vSphere 5.5 support for 62TB VMDK files is limited to VMFS5 and NFS datastores only. In vSphere 5.5, VMFS3 datastores are still limited to 2TB VMDK files. In vSphere 5.5, Hot-Expand, VMware FT, Virtual Flash Read Cache and Virtual SAN are not supported with 62TB VMDK files.

Native 4K Disk Support
Microsoft: Yes - Hyper-V provides support for both 512e and 4K large-sector-size disks to help ensure compatibility with emerging innovations in storage hardware.
VMware: No.

Boot VM from Virtual SCSI disks
Microsoft: Yes (Generation 2 VMs). VMware: Yes.

Hot-Add Virtual SCSI VM Storage for running VMs
Microsoft: Yes. VMware: Yes.

Hot-Expand Virtual SCSI Hard Disks for running VMs
Microsoft: Yes. VMware: Yes - but not supported with the new 62TB VMDK files.

Hot-Shrink Virtual SCSI Hard Disks for running VMs
Microsoft: Yes. VMware: No.

Storage Quality of Service
Microsoft: Yes (Storage QoS). VMware: Yes (Storage IO Control).
Notes: In VMware vSphere 5.5, Storage IO Control is not supported for RDM disks. In Windows Server 2012 R2, Storage QoS is not supported for pass-through disks.

Virtual Fibre Channel to VMs
Microsoft: Yes (4 virtual FC NPIV ports per VM).
VMware: Yes (4 virtual FC NPIV ports per VM) - but not supported when using VM Guest Clusters with MSCS.
Notes: vSphere 5.5 Enterprise Plus also includes a software initiator for FCoE support for VMs. While not included in-box in Windows Server 2012 R2, a no-cost ISV solution is available to provide FCoE support for Hyper-V VMs.

Live Migrate Virtual Storage for running VMs
Microsoft: Yes - Unlimited concurrent live storage migrations, with the flexibility to cap at a maximum limit that is appropriate for your datacenter architecture.
VMware: Yes - but only up to 2 concurrent Storage vMotion operations per host and only up to 8 concurrent Storage vMotion operations per datastore. Storage vMotion is also not supported for MSCS VM Guest Clusters.

Flash-based Read Cache
Microsoft: Yes - Using SSDs in Tiered Storage Spaces, limited to 160 physical disks and 480TB total capacity.
VMware: Yes - but only up to 400GB of cache per virtual disk and 2TB cumulative cache per host for all virtual disks.
Notes: See this article for additional challenges and considerations when implementing flash-based read caching on VMware.

Flash-based Write-back Cache
Microsoft: Yes - Using SSDs in Storage Spaces for write-back cache.
VMware: No.

SAN-like Storage Virtualization using commodity hard disks
Microsoft: Yes - Included in Windows Server 2012 R2 Storage Spaces.
VMware: No.
Notes: VMware provides Virtual SAN, which is included as an experimental feature in vSphere 5.5. You can test and experiment with Virtual SAN, but VMware does not expect it to be used in a production environment.

Automated Tiered Storage between SSD and HDD using commodity hard disks
Microsoft: Yes - Included in Windows Server 2012 R2 Storage Spaces.
VMware: No.
Notes: As above, VMware's Virtual SAN is included only as an experimental feature in vSphere 5.5 and is not expected to be used in a production environment.

Can consume storage via iSCSI, NFS, Fibre Channel and SMB 3.0
Microsoft: Yes. VMware: Yes - except no support for SMB 3.0.

Can present storage via iSCSI, NFS and SMB 3.0
Microsoft: Yes - Available via the included iSCSI Target Server, NFS Server and Scale-Out SMB 3.0 Server support. All roles can be clustered for high availability.
VMware: No.
Notes: VMware provides the vSphere Storage Appliance as a separately licensed product to deliver the ability to present NFS storage.

Storage Multipathing
Microsoft: Yes - via MPIO and SMB Multichannel. VMware: Yes - via VAMP.

SAN Offload Capability
Microsoft: Yes - via ODX. VMware: Yes - via VAAI.

Thin Provisioning and Trim Storage
Microsoft: Yes - Available via Storage Spaces thin provisioning and NTFS trim notifications.
VMware: Yes - but trim operations must be manually processed by running the esxcli vmfs unmap command to reclaim disk space.

Storage Encryption
Microsoft: Yes - via BitLocker. VMware: No.

Deduplication of storage used by running VMs
Microsoft: Yes - Available via the included Data Deduplication role service. VMware: No.

Provision VM Storage based on Storage Classifications
Microsoft: Yes - via Storage Classifications in System Center 2012 R2.
VMware: Yes - via Storage Policies, formerly called Storage Profiles, in vCenter Server 5.5.

Dynamically balance and re-balance storage load based on demand
Microsoft: Yes - Storage IO load balancing and re-balancing is automatically handled on demand by both the SMB 3.0 Scale-Out File Server and Automated Storage Tiers in Storage Spaces.
VMware: Yes - Performed via Storage DRS, but limited in load-balancing frequency. The default DRS load-balance interval only runs at 8-hour intervals and can be adjusted to run load-balancing only as often as every 1 hour.
Notes: Microsoft and VMware use different approaches for storage load balancing. Microsoft's approach is to provide granular, on-the-fly load balancing at an IO level across SSD and HDD for better granularity. VMware's approach is to provide storage load balancing at a VM level and use Storage vMotion to live migrate running VMs between storage locations periodically in an attempt to distribute storage loads for running VMs.

Integrated Provisioning and Management of Shared Storage
Microsoft: Yes - System Center 2012 R2 VMM includes storage provisioning and management of SAN zoning, LUNs and clustered storage servers.
VMware: No - Provisioning and management of shared storage is available through some 3rd party storage vendors who offer plug-ins to vCenter Server 5.5.

Networking: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter Editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

Distributed Switches across Hosts
Microsoft: Yes - Supported by Logical Switches in System Center 2012 R2.
VMware: Yes.

Extensible Virtual Switches
Microsoft: Yes - Several partners offer extensions today, such as Cisco, NEC, InMon and 5nine. Windows Server 2012 R2 offers new support for co-existence of Network Virtualization and switch extensions.
VMware: Replaceable, not extensible - The VMware virtual switch is replaceable, but not incrementally extensible with multiple 3rd party solutions concurrently.

NIC Teaming
Microsoft: Yes - Up to 32 NICs per NIC team. Windows Server 2012 R2 provides a new Dynamic load-balancing mode using flowlets to provide efficient load balancing even between a small number of hosts.
VMware: Yes - Up to 32 NICs per Link Aggregation Group.

Private VLANs (PVLAN)
Microsoft: Yes. VMware: Yes.

ARP Spoofing Protection
Microsoft: Yes. VMware: No - Requires additional purchase of vCloud Network and Security (vCNS) or the vCloud Suite.

DHCP Snooping Protection
Microsoft: Yes. VMware: No - Requires additional purchase of vCNS or the vCloud Suite.

Router Advertisement Guard Protection
Microsoft: Yes. VMware: No - Requires additional purchase of vCNS or the vCloud Suite.

Virtual Port ACLs
Microsoft: Yes - Windows Server 2012 R2 adds support for Extended ACLs that include protocol, source/destination ports, state, timeout and Isolation ID.
VMware: Yes - via the new Traffic Filtering and Marking policies in vSphere 5.5 distributed switches.

Trunk Mode to VMs
Microsoft: Yes. VMware: Yes.

Port Monitoring
Microsoft: Yes. VMware: Yes.

Port Mirroring
Microsoft: Yes. VMware: Yes.

Dynamic Virtual Machine Queue
Microsoft: Yes. VMware: Yes.

IPsec Task Offload
Microsoft: Yes. VMware: No.

Single Root IO Virtualization (SR-IOV)
Microsoft: Yes. VMware: Yes - SR-IOV is supported by vSphere 5.5 Enterprise Plus, but without support for vMotion, Highly Available VMs or VMware FT when using SR-IOV.

Virtual Receive Side Scaling (Virtual RSS)
Microsoft: Yes. VMware: Yes (VMXNET3).

Network Quality of Service
Microsoft: Yes. VMware: Yes.

Network Virtualization / Software-Defined Networking (SDN)
Microsoft: Yes - Provided via Hyper-V Network Virtualization based on the NVGRE protocol and the in-box site-to-site NVGRE gateway.
VMware: No - Requires additional purchase of VMware NSX.

Integrated Network Management of both Virtual and Physical Network components
Microsoft: Yes - System Center 2012 R2 VMM supports integrated management of virtual networks, Top-of-Rack (ToR) switches and integrated IP Address Management.
VMware: No.

What is a virtual machine?


A virtual machine (VM) shares physical hardware resources with other users but isolates the operating
system or application to avoid changing the end-user experience. Specialized software called
a hypervisor emulates the PC client or server's CPU, memory, hard disk, network and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate
multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run
Linux and Windows server operating systems on the same underlying physical host. Virtualization saves
costs by reducing the need for physical hardware systems. Virtual machines more efficiently use
hardware, which lowers the quantities of hardware and associated maintenance costs, and reduces power
and cooling demand. They also ease management because virtual hardware does not fail. Administrators
can take advantage of virtual environments to simplify backups, disaster recovery, new deployments and
basic system administration tasks.
Virtual machines do not require specialized hypervisor-specific hardware. Virtualization does however
require more bandwidth, storage and processing capacity than a traditional server or desktop if the
physical hardware is going to host multiple running virtual machines. VMs can easily move, be copied
and reassigned between host servers to optimize hardware resource utilization. Because VMs on a
physical host can consume unequal resource quantities (one may hog the available physical storage while
another stores little), IT professionals must balance VMs with available resources.
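To make that balancing point concrete, here is a minimal sketch that compares what a set of VMs has been allocated against what a host physically has. The inventory below is made up purely for illustration; in practice these figures would come from your monitoring or management tooling.

```python
# Compare VM allocations against a host's physical capacity and report
# simple overcommit ratios. All values below are hypothetical examples.
host = {"cores": 16, "ram_gb": 128, "storage_gb": 2000}
vms = [
    {"name": "web01", "vcpus": 4, "ram_gb": 16, "disk_gb": 100},
    {"name": "db01",  "vcpus": 8, "ram_gb": 64, "disk_gb": 800},
    {"name": "app01", "vcpus": 4, "ram_gb": 32, "disk_gb": 200},
]

checks = (("vcpus", "cores"), ("ram_gb", "ram_gb"), ("disk_gb", "storage_gb"))
for vm_key, host_key in checks:
    allocated = sum(vm[vm_key] for vm in vms)
    ratio = allocated / host[host_key]
    print(f"{vm_key}: allocated {allocated} of {host[host_key]} "
          f"(overcommit ratio {ratio:.2f})")
```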