
9916 Brooklet Drive

Houston, Texas 77099


Phone 832-327-0316
www.safinatechnolgies.com

SAN vs NAS - What Is the Difference?


From NAS-SAN.com

At first glance NAS and SAN might seem almost identical, and in fact many times either will work in a given
situation. After all, both NAS and SAN generally use RAID arrays connected to a network, which are then backed
up onto tape. However, there are differences -- important differences -- that can seriously affect the way your
data is used. For a quick introduction to the technology, take a look at the diagrams below.

Wires and Protocols


Most people focus on the wires, but the difference in protocols is
actually the more important factor. For instance, one common
argument is that SCSI is faster than Ethernet and is therefore better.
Why? Mainly, people will say the TCP/IP overhead cuts the efficiency
of data transfer, so a Gigabit Ethernet link gives you throughputs of
600-800 Mbps rather than the full 1000 Mbps.

But consider this: the next version of SCSI (release date uncertain) will double the
speed; the next version of Ethernet (available in beta now) will multiply
the speed by a factor of 10. Which will be faster, even with overhead?
It's something to consider.
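The arithmetic behind that comparison can be sketched in a few lines. The figures below are illustrative round numbers (roughly 70% Ethernet efficiency, a 320 MB/s SCSI bus), not benchmarks:

```python
# Back-of-the-envelope throughput comparison (illustrative figures only).

def effective_mbps(raw_mbps, efficiency):
    """Usable throughput after protocol overhead."""
    return raw_mbps * efficiency

# Gigabit Ethernet at ~70% efficiency after TCP/IP overhead:
gige = effective_mbps(1000, 0.7)       # ~700 Mbps, in the 600-800 range above

# The next-generation links described in the text:
scsi_next = 2 * 2560                   # doubling a 320 MB/s (2560 Mbps) SCSI bus
ten_gige = effective_mbps(10000, 0.7)  # 10x Ethernet, still paying TCP/IP overhead

print(ten_gige > scsi_next)            # prints True: overhead loses to the 10x jump
```

Even after conceding the overhead, a tenfold jump in raw speed outruns a doubling, which is the article's point.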

The Wires
--NAS uses TCP/IP Networks: Ethernet, FDDI, ATM (perhaps TCP/IP
over Fibre Channel someday)
--SAN uses Fibre Channel
--Both NAS and SAN can be accessed through a VPN for security

The Protocols
--NAS uses TCP/IP and NFS/CIFS/HTTP
--SAN uses Encapsulated SCSI

More Differences
NAS: Almost any machine that can connect to the LAN (or is interconnected to the LAN through a WAN) can use
the NFS, CIFS or HTTP protocol to connect to a NAS and share files.
SAN: Only server-class devices speaking SCSI over Fibre Channel can connect to the SAN. The Fibre Channel of
the SAN has a distance limit of around 10 km at best.

NAS: A NAS identifies data by file name and byte offset, transfers file data or file metadata (the file's owner,
permissions, creation date, etc.), and handles security, user authentication and file locking.
SAN: A SAN addresses data by disk block number and transfers raw disk blocks.

NAS: A NAS allows greater sharing of information, especially between disparate operating systems such as Unix
and NT.
SAN: File sharing is operating-system dependent and does not exist in many operating systems.

NAS: The file system is managed by the NAS head unit.
SAN: The file system is managed by the servers.

NAS: Backups and mirrors (utilizing features like NetApp's Snapshots) are done on files, not blocks, for a savings
in bandwidth and time. A Snapshot can be tiny compared to its source volume.
SAN: Backups and mirrors require a block-by-block copy, even if blocks are empty. A mirror machine must be
equal to or greater in capacity compared to the source volume.
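The addressing difference above (file name and byte offset versus raw disk block number) can be made concrete with a toy sketch. The class and method names here are invented for illustration, not real protocol code:

```python
# Toy contrast of the two access models (illustrative only).

class ToySAN:
    """Block access: the client names a disk block number and gets raw bytes."""
    def __init__(self, blocks=8, block_size=4):
        self.disk = [bytearray(block_size) for _ in range(blocks)]
    def read_block(self, lba):
        return bytes(self.disk[lba])
    def write_block(self, lba, data):
        self.disk[lba][:] = data

class ToyNAS:
    """File access: the client names a file and byte offset; the filer owns
    the metadata (owner, permissions) and the on-disk layout."""
    def __init__(self):
        self.files = {}   # name -> file contents
        self.meta = {}    # name -> {"owner": ..., "mode": ...}
    def write(self, name, offset, data, owner="alice"):
        buf = self.files.setdefault(name, bytearray())
        self.meta.setdefault(name, {"owner": owner, "mode": 0o644})
        buf[offset:offset + len(data)] = data
    def read(self, name, offset, length):
        return bytes(self.files[name][offset:offset + length])

san = ToySAN()
san.write_block(3, b"dat0")            # caller must know the block layout itself
nas = ToyNAS()
nas.write("report.txt", 0, b"hello")   # caller only knows names and offsets
```

The SAN side knows nothing about files or owners; the NAS side never exposes block numbers at all.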

Comparing SAN and NAS


From SMBT IT Journal

One of the greatest confusions that I have seen in recent years is that between NAS and SAN. Understanding
what each is will go a long way towards understanding where they are useful and appropriate.

Our first task is to strip away the marketing terms and move on to technical ones. NAS stands for Network
Attached Storage but doesn’t mean exactly that, and SAN stands for Storage Area Network but is generally used
to refer to a SAN device, not the network itself. In its most proper form, a SAN is any network dedicated to
storage traffic, but in the real world, that’s not how the term is normally used. In this case we are here to talk
about NAS and SAN devices and how they compare, so we will not use the definition that refers to the network
rather than the device. In reality, both NAS and SAN are marketing terms and are a bit soft around the edges
because of it. They are precise enough to use in a normal technical conversation, as long as all parties know what
they mean, but when discussing their meaning we should strip away the cool-sounding names and stick to the most
technical descriptions. Both terms, when used in marketing, imply a certain technology that has been
“appliancized,” which makes the use of the terms unnecessarily complicated but no more useful.

So our first task is to define what these two names mean in a device context. Both devices are storage servers,
plain and simple, just two different ways of exposing that storage to the outside world.

The simpler of the two is the SAN, which is properly a block storage device. Any device that exposes its storage
externally as a block device falls into this category, and such devices are interchangeable apart from how they are
used. Block storage devices include external hard drives, DAS (Direct Attached Storage) and SAN. All of these
are actually the same thing. We call it an external hard drive when we attach it to a desktop. We call it a DAS
when we attach it to a server. We call it a SAN when we add some form of networking, generally a switch,
between the device and the final device that is consuming the storage. There is no technological difference
between these devices. A traditional SAN can be directly attached to a desktop and used like an external hard
drive. An external hard drive can be hooked to a switch and used by multiple devices on a network. The interface
between the storage device and the system using it is the block. Common protocols for block storage include
iSCSI, Fibre Channel, SAS, eSATA, USB, Thunderbolt, IEEE 1394 (aka FireWire), Fibre Channel over Ethernet
(FCoE) and ATA over Ethernet (AoE). A device attaching to a block storage device will always see the storage
presented as a disk drive, nothing more.

A NAS, also known as a “filer”, is a file storage device. This means that it exposes its storage as a network
filesystem. So any device attaching to this storage does not see a disk drive but instead sees a mountable
filesystem. When a NAS is not packaged as an appliance, we simply call it a file server and nearly all computing
devices from desktops to servers have some degree of this functionality included in them. Common protocols
for file storage devices include NFS, SMB / CIFS and AFP. There are many others, however, and technically
there are special case file storage protocols such as FTP and HTTP that should qualify as well. As an extreme
example, a traditional web server is a very specialized form of file storage device.

What separates block storage and file storage devices is the type of interface that they present to the outside
world, or to think of it another way, where the division between server device and client device happens within
the storage stack.

It has become extremely common today for storage devices to offer both block storage and file storage from
the same device. Systems that do this are called unified storage. With unified storage, whether the device is
behaving as a block storage or file storage device (SAN or NAS in the common parlance), or both, depends on
the behavior that you configure for the device, not on what you purchase. This is important as it drives home the
point that this is purely a protocol or interface distinction, not one of size, capability, reliability, performance,
features, etc.

Both types of devices have the option, but not the requirement, of providing extended features beneath the
“demarcation point” at which they hand off the storage to the outside. Both may, or may not, provide RAID,
logical volume management, monitoring, etc. File storage (NAS) may also provide file system features such as
Windows NTFS ACLs.

The key advantage of block storage is that the systems that attach to it are given an opportunity to manipulate
the storage as if it were a traditional disk drive. This means that RAID and logical volume management, which
may already have been done inside the “black box” of the storage device, can now be done again, if desired,
at a higher level. The client devices are not aware of what kind of device they are seeing, only that it appears as a
disk drive. So you can choose to trust it (assume that it has RAID of an adequate level, for example) or you can
combine multiple block storage devices together into RAID just as if they were regular, local disks. This is
extremely uncommon but is an interesting option, and there are products that are designed to be used in this
way.

More commonly, logical volume management such as Linux LVM, Solaris ZFS or Windows Dynamic Disks is
applied on top of the exposed block storage from the device, and then, on top of that, a filesystem is
employed. This is important to remember: with block storage devices the filesystem is created and managed by
the client device, not by the storage device. The storage device is blissfully unaware of how the block storage
that it is presenting is used and allows the end user to use it however they see fit, with total control. This extends
even to the point that you can chain block storage devices together, with one providing the storage to the next
to be combined, perhaps, into RAID groups – block storage devices can be layered, more or less, indefinitely.
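This client-side layering can be sketched with a toy RAID-1 mirror built by the client across two exposed block devices. The class names are invented for illustration; a real setup would use mdadm, LVM or similar:

```python
# Sketch of client-managed layering over exposed block storage (illustrative).

class BlockDevice:
    """What any block storage device looks like to a client: numbered blocks."""
    def __init__(self, blocks=16, block_size=4):
        self.blocks = [bytes(block_size) for _ in range(blocks)]
    def read(self, lba):
        return self.blocks[lba]
    def write(self, lba, data):
        self.blocks[lba] = bytes(data)

class Mirror(BlockDevice):
    """RAID-1 done by the client, above the storage devices: writes go to
    both legs, reads can be served from either. The mirror is itself a
    block device, so layers can be stacked indefinitely."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def read(self, lba):
        return self.left.read(lba)   # could just as well read from self.right
    def write(self, lba, data):
        self.left.write(lba, data)
        self.right.write(lba, data)

a, b = BlockDevice(), BlockDevice()  # e.g. two iSCSI LUNs from different SANs
md = Mirror(a, b)                    # the client-managed mirrored volume
md.write(0, b"meta")                 # both legs now hold the data
```

Neither underlying device knows it is half of a mirror; only the client does, which is exactly the "total control" described above.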

A file storage device, by contrast, contains the entire block portion of the storage stack, so any RAID,
logical volume management and monitoring must be handled by the file storage device itself. Then, on top of the
block storage, a filesystem is applied. Commonly this would be Linux’s EXT4, FreeBSD’s and Solaris’ ZFS, or
Windows’ NTFS, but other filesystems such as WAFL, XFS, JFS, BtrFS, UFS and more are certainly possible. On
this filesystem, data is stored. To then share this data with the outside world, a network file system (also
known as a distributed file system) is used, which provides a file system interface that is network enabled – NFS,
SMB and AFP being the most common, but, as in any protocol family, there are numerous special-case and
exotic possibilities.

A remote device wanting to use storage on the file storage device sees it over the network the same as it
would see a local filesystem and is able to mount it in an identical manner. This makes file storage especially
easy and obvious for end consumers to use, as it is very natural in every respect. We use network file systems
every day in normal desktop computing. When we “map a drive” in Windows, for example, we are using a
network file system.

One critical distinction between block storage and file storage must not be glossed over: while both can
potentially sit on a network and allow multiple client machines to attach to them, only file storage devices have
the ability to arbitrate that access.

Block storage appears as a disk drive. If you simply attach a disk drive to two or more computers at once, you
can imagine what will happen – each will know nothing of the others and will be unaware of new files being
created or existing ones changing, and the systems will rapidly begin to overwrite each other. If your file system
is read-only on all nodes, this is not a problem. But if any system is writing or changing the data, the others will
have problems. This generally results in data corruption very quickly, typically on the order of minutes. To see
this in extreme action, imagine having two or three client systems all believe that they have exclusive access to
a disk drive and all defragment it at the same time. All data on the drive would be scrambled in seconds.
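A toy simulation of that failure mode shows how quickly it happens. The classes here are hypothetical stand-ins for two real hosts, each mounting the same block device with an ordinary (non-clustered) filesystem:

```python
# Two writers on one unarbitrated block device, each with its own private
# (and instantly stale) view of the allocation state (illustrative only).

disk = [b"----"] * 4                       # the shared block device: dumb blocks

class NaiveClient:
    """Each client tracks free blocks locally, knowing nothing of the other."""
    def __init__(self, disk):
        self.disk = disk
        self.free = list(range(len(disk)))  # private free list, never synced
    def create_file(self, data):
        lba = self.free.pop(0)              # both clients will pick block 0
        self.disk[lba] = data
        return lba

c1, c2 = NaiveClient(disk), NaiveClient(disk)
loc1 = c1.create_file(b"aaaa")
loc2 = c2.create_file(b"bbbb")             # silently overwrites client 1's file
print(disk[loc1])                          # prints b'bbbb': client 1's data is gone
```

Client 1 still believes its file lives intact at block 0; nothing on the block device can tell it otherwise.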

A file storage device, on the other hand, has natural arbitration, as the network file system handles the
communications for access to the real file system, and filesystems, by their nature, are naturally multi-user. So
if one system attached to a file storage device makes a change, all systems are immediately aware of the change
and will not “step on each other’s toes.” Even if they attempt to, the file storage device’s filesystem
arbitrates access, has the final say, and does not let this happen. This makes sharing data easy and
transparent to end users. (I use the term “end users” here to include system administrators.)
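That server-side arbitration can be sketched in miniature: a single allocator on the filer serializes every change, so clients cannot collide. This is illustrative toy code, not a real NFS or SMB implementation:

```python
# One filesystem instance on the filer arbitrates all access (sketch).
import threading

class Filer:
    def __init__(self, blocks=4):
        self.disk = [b"----"] * blocks
        self.free = list(range(blocks))
        self.lock = threading.Lock()   # the filer, not the clients, arbitrates
    def create_file(self, data):
        with self.lock:                # allocation is serialized server-side
            lba = self.free.pop(0)
            self.disk[lba] = data
            return lba

filer = Filer()
loc1 = filer.create_file(b"aaaa")      # every client talks to the same
loc2 = filer.create_file(b"bbbb")      # allocator, so blocks never collide
```

Because there is exactly one authoritative view of the filesystem, a second writer gets the next free block rather than clobbering the first.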

This does not mean that there is no means of sharing storage from a block device, but the arbitration cannot
be handled by the block storage device itself. Block storage devices can be made “shareable” by using what is
known as a clustered file system. These file systems originated back when server clusters shared
storage resources by having two servers attached via a SCSI controller on either end of a single SCSI cable,
with the shared drives attached in the middle of the cable. The only means by which the servers could
communicate was through the file system itself, and so special clustered file systems were developed that allowed
communication between the devices, alerting each to changes made by the other, through the file
system itself. This actually works surprisingly well, but clustered file systems are relatively uncommon, with Red
Hat’s GFS and Oracle’s OCFS being among the best known in the traditional server world and VMware’s
much newer VMFS having become extremely well known through its use for virtualization storage. Normal users,
including system administrators, may not have access to clustered file systems or may have needs that do not
allow their use. It is also important to note that the arbitration is handled through trust, not through enforcement
as with a file storage device. With a file storage device, the device itself handles the access arbitration and
there is no way around it. With block storage devices using a clustered file system, any device that attaches to
the storage can ignore the clustered file system and simply bypass the passive arbitration – this is so simple that
it could easily happen accidentally. It can happen by mounting the filesystem with the wrong file system type
specified, through a drive misbehaving, or through malicious action. So access security is critical at the
network level to protect block-level storage.

The underlying concept being exposed here is that block storage devices are dumb devices (think of a glorified
disk drive) and file storage devices are smart devices (think of a traditional server). File storage devices must
contain a full working “computer” with CPU, memory, storage, filesystem and networking. Block storage devices
may contain these things but need not. At their simplest, block storage devices can be nothing more than a disk
drive with a USB or Ethernet adapter attached. It is actually not uncommon for them to be nothing more than
a RAID controller with Ethernet or Fibre Channel adapters attached.

In both cases, block storage devices and file storage devices, we can scale down to trivially simple devices or
can scale up to massive “mainframe class” ultra-high-availability systems. Both can be either fast or slow. One
is not better or worse, one is not higher or lower, one is not more or less enterprise – they are different and serve
generally different purposes. And there are advanced features that either may or may not contain. The challenge
comes in knowing which is right for which job.

I like to think of block storage protocols as being a “standard out” stream, much like on a command line. So the
base level of any storage “pipeline” is always a block device and numerous block devices or transformations can
exist with each being piped one to another as long as the output remains a block storage protocol. We only
terminate the chain when we apply a file system. In this way hardware RAID, network RAID, logical volume
management, etc. can be applied in multiple combinations as needed. Block storage is truly not just blocks of
data but building blocks of storage systems.

One very interesting point is that, since block storage devices can be chained and since file storage
devices must accept block storage as their “input,” it is actually quite common for a block storage device (SAN)
to be used as the backing storage for a file storage device (NAS), especially in high-end systems. They can
coexist within a single chassis or work cooperatively on the network.
