
Open Systems SnapVault


Open Systems SnapVault (OSSV) is a disk-to-disk backup and recovery solution that protects data residing on non-NetApp storage systems and platforms. This agent-based solution transfers data directly from an OSSV host to a NetApp secondary storage system in the form of block-level incremental backups. These backups are captured as Snapshot copies on the NetApp secondary system. The advantage is fast, reliable, space-optimized backups centralized on NetApp technology.

What makes OSSV so special is its ability to work at the block level rather than the file level. When an OSSV backup takes place, the client queries the remote system's filesystem and looks for changed files. Once a list of changed files has been created, the OSSV client then does a block-level comparison using block checksums to determine which blocks have changed within those files. A copy of those blocks is then created and sent to NetApp storage for backup. This is an incredibly efficient way of performing backups that requires very little storage space and very little bandwidth between the source and destination.
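The change-detection idea can be sketched in a few lines of Python. This is an illustration of the general technique only, not OSSV's internals; the 4 KB block size, the use of MD5, and all function names here are assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, for illustration only


def block_checksums(data: bytes) -> list[str]:
    """Checksum each fixed-size block of a file's contents."""
    return [
        hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]


def changed_blocks(baseline_sums: list[str], new_data: bytes) -> list[int]:
    """Return the indices of blocks that differ from the stored baseline.

    Only these blocks would need to be copied to secondary storage.
    """
    new_sums = block_checksums(new_data)
    return [
        i for i, s in enumerate(new_sums)
        if i >= len(baseline_sums) or baseline_sums[i] != s
    ]
```

Only the blocks returned by `changed_blocks` need to cross the wire, which is where the bandwidth savings come from.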

OSSV for Data Migration?

The OSSV methodology can also be used for data migrations, for example to migrate Windows share data on a standalone file server to a NetApp filer. In most situations, professionals use native Windows tools (e.g., robocopy) to migrate this data: a baseline copy is generated, additional incrementals run up until the cutover time (outage window), client access is removed, one last incremental update runs, DNS changes are made (if necessary), and client access is restored. This process is simple but time consuming, depending on the size of the CIFS share and the rate of change of the data between incremental updates. Traditional tools like robocopy are file-level utilities. They look for files that have changed based on date/time, the archive bit, etc. If a file has changed (even slightly), the whole file is flagged to be transferred. Imagine having a number of large files (1 GB+) that change between each interval; each update could take far longer than one would like.

Advantage of OSSV over traditional copy tools:

OSSV edges out other tools when it comes to incremental backups, because an OSSV incremental backup copies only block-level changes. OSSV still has to run through the entire directory/file tree on the source looking for changed files; the difference comes in the data transfer time.

OSSV will ONLY send changed blocks to the NetApp. If you are migrating a number of large files, this method will considerably cut down on data transfer time.
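A back-of-the-envelope calculation makes the difference concrete. The numbers below are hypothetical (100 files of 1 GiB each, with 1% of each file's blocks changed between updates):

```python
GIB = 1024 ** 3

num_files = 100            # hypothetical share full of large files
file_size = 1 * GIB
changed_fraction = 0.01    # assume 1% of each file's blocks changed

# A file-level tool resends every flagged file in full.
file_level_bytes = num_files * file_size

# A block-level tool resends only the changed blocks.
block_level_bytes = int(num_files * file_size * changed_fraction)

print(f"file-level transfer : {file_level_bytes / GIB:.0f} GiB")
print(f"block-level transfer: {block_level_bytes / GIB:.0f} GiB")
```

Under these assumptions the incremental shrinks from 100 GiB to 1 GiB of transfer; real savings depend entirely on your change rate.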

The basic components that make up the OSSV architecture are as follows:

OSSV host (Windows/UNIX based)

OSSV agent software
NetApp Host Agent
TCP/IP network
NetApp storage system
DFM (OnCommand Core): optional, in case you want to manage backup and
restore from a central location.

Let's run through this process step-by-step:

1. Download and install the OSSV client on the source host (an existing Windows file server
containing data that you want to move to CIFS or NFS shares on a NetApp). See the Open Systems
SnapVault Installation and Administration Guide for further information. Please see the last page for
reference material.

2. During OSSV installation, be sure to add the destination NetApp to the QSM Access List in order to
allow SnapVault to gain access to the source data. SnapVault is a pull replication technology. The
baseline transfer and all updates are initiated by the destination NetApp. If you forget to perform
this step during install, you can go back within the OSSV Configurator on the host and edit the
settings there. The QSM Access List is under the SnapVault tab of the OSSV Configurator.
3. Verify that the following licenses exist on the destination NetApp FAS array: sv_ontap_sec and one of the primary licenses, depending on the source OS (sv_windows_pri, sv_linux_pri, sv_unix_pri, etc.).

4. On the destination NetApp, create a new volume with enough space to hold the data to be migrated.

5. Make sure SnapVault is enabled on the destination NetApp:

snapvault status; options snapvault.enable on

6. Kick-off a baseline transfer using the following command:

snapvault start -S [source_hostname]:[source_path] [dest_path]


snapvault start -S fileserver:C:\myshares\ /vol/fileshares/myshares

Note how you need to specify the name of a new qtree in the destination path. This qtree is created
as part of the SnapVault initialization. There is no need to create this ahead of time.

7. Check status of the new SnapVault relationship and wait for the baseline to complete:

snapvault status

8. You'll probably want to continue running incremental updates to the destination NetApp prior to
performing the cutover. This can be done manually using the following command. You could also use
the NetApp PowerShell Toolkit to create a PowerShell script and a subsequent Windows Scheduled
Task to run it automatically (a topic of discussion for a later date).

snapvault update [destination_path]


snapvault update /vol/fileshares/myshares
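As a sketch of what such automation could look like (in Python rather than the PowerShell Toolkit), the script below builds and runs the update over SSH. SSH access to the filer and the hostname are assumptions taken from this article's examples; adapt them to your environment:

```python
import subprocess

FILER = "fas01"                          # hypothetical destination filer
DEST_PATH = "/vol/fileshares/myshares"   # destination qtree from the example above


def build_update_cmd(filer: str, dest_path: str) -> list[str]:
    """Build the ssh command line that triggers an incremental SnapVault update."""
    return ["ssh", filer, "snapvault", "update", dest_path]


def run_update(filer: str, dest_path: str) -> int:
    """Run the update on the destination filer and return the ssh exit code."""
    return subprocess.call(build_update_cmd(filer, dest_path))
```

Calling `run_update(FILER, DEST_PATH)` from a scheduled job (Task Scheduler or cron) gives you hands-off incrementals between the baseline and the cutover.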

9. Schedule a maintenance window to perform the cutover from the Windows file server to the
NetApp. The length of the maintenance window should be determined by how long your
incremental SnapVault updates are taking, plus additional time for prep, check, and test tasks.

10. At the time of the maintenance window, start by removing access to the source file server in
order to prevent any data changes from being made. On a Windows file server, your best bet is to
just stop the Server service. Once you have removed user access, kick off a SnapVault update on the
destination NetApp (see step 8) and let it complete. Once completed, rename the source Windows
file server's hostname. The NetApp will be assuming this hostname so that users will not have to
change any UNC paths in order to access their shares. If you're changing the hostname of a
Windows file server that is joined to an Active Directory domain, a reboot will be necessary for the
change to take effect. Once the old file server has been rebooted, go into your internal DNS and create
a new CNAME record. The new CNAME record should use the hostname of the old Windows file
server and point to the hostname of the NetApp. (NOTE: This assumes that the destination NetApp
was previously configured for CIFS and joined to your Active Directory domain.)

11. Validate that you can access shares on the NetApp using the old hostname. You may need to
perform a DNS flush on your workstation so that the old hostname resolves to the IP
address of the NetApp.

12. Once you have validated access, you'll need to convert the SnapVault destination volume on the
NetApp to a read/writable volume. SnapVault destinations are always read-only and cannot be made
read/writable without additional configuration. This process is documented in the NetApp KB article
referenced at the end of this article.

Converting a SnapVault destination to a read/writable volume can take anywhere from a few
minutes to upwards of an hour, depending on the model of controller performing the operation;
the bulk of the conversion time is spent in the snapmirror break step.

13. Bring over your CIFS shares. Depending on the number of shares that exist on the Windows
file server, you can migrate the shares either manually or via scripts. One approach is to do a net
share dump on the Windows file server, copy that output into an Excel spreadsheet, and then use
that data to build cifs shares -add commands for the NetApp console. Validate that all shares have
been copied over and that you can access all of them on the NetApp. You can use cifs sessions to
validate your connectivity to the NetApp.
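The spreadsheet step can also be scripted. The sketch below is a hypothetical helper: it assumes a simplified two-column net share listing (share name, then drive path) and the /vol/fileshares volume from the earlier example; real net share output has headers and footers you would trim first:

```python
def parse_net_share(listing: str) -> list[tuple[str, str]]:
    """Extract (share_name, windows_path) pairs from a simplified net share dump."""
    shares = []
    for line in listing.splitlines():
        parts = line.split()
        # Heuristic: the second column is a local drive path like C:\myshares\finance
        if len(parts) >= 2 and ":\\" in parts[1]:
            shares.append((parts[0], parts[1]))
    return shares


def to_cifs_add(shares: list[tuple[str, str]],
                vol_base: str = "/vol/fileshares") -> list[str]:
    """Emit one 7-Mode `cifs shares -add` command per Windows share.

    Assumes each share's top-level folder name was kept during the migration.
    """
    cmds = []
    for name, win_path in shares:
        folder = win_path.rstrip("\\").split("\\")[-1]
        cmds.append(f"cifs shares -add {name} {vol_base}/{folder}")
    return cmds
```

You can paste the generated commands straight into the NetApp console, then spot-check the result with cifs shares.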

Backup & Restore are based on pull technology.

SnapVault is a pull replication technology: the baseline transfer and all updates are initiated by
the destination NetApp. SnapVault uses TCP port 10566 for data transfer. Network connections are
always initiated by the destination system; that is, SnapVault pulls data rather than pushing it.

How does OSSV actually transfer data from primary to secondary system?

Data is moved over a TCP/IP network using TCP port 10566. The communications protocol is QSM
(based on Qtree SnapMirror). This is not to be confused with the NDMP protocol: NDMP (TCP port
10000) is used by NDMP-based management applications (e.g., DFM/OnCommand Core) for
management and control of the SnapVault primary and secondary systems. Actual data transfer
happens over TCP port 10566.
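When troubleshooting connectivity, a plain TCP connect test against these ports can save a lot of time. The function below is generic Python, not a NetApp tool:

```python
import socket

NDMP_PORT = 10000  # management/control (DFM to primary/secondary)
QSM_PORT = 10566   # actual data transfer


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, run `port_open("filer01", QSM_PORT)` from the OSSV host before a restore; a False result points at a firewall, or, as described later in this article, a blocked interface on the filer.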
Where can I locate QSM TCP port 10566 during BACKUP & RESTORE?

This port is open at the point of data retrieval. For example:

During BACKUP: 10566 is open on the OSSV host (the filer pulls from it)

During RESTORE: 10566 is open on the filer (the OSSV host pulls from it)

During idle state, the OSSV host (win2k8ossv) shows its listener ports in the LISTENING state in netstat output; the DFM server host likewise has its own set of listeners.
What exactly happens during OSSV RESTORE?

When you kick off a restore from the DFM server and run netstat -abnp tcp, you can see the DFM
server talking to both the OSSV host and the filer on NDMP port 10000 via DFPM (the Protection
Manager module).

Note: Both NDMP port 10000 connections remain active until the restore is complete.




1. DFM server sends restore request to Filer.

NDMP connection (restore request) accepted from (DFM)



2. OSSV host connects to Filer and establishes connection.

QSM Server connected to machine on port 10566 (Filer)



3. OSSV host begins pulling the data across from the filer.

fas01> netstat -a

Active TCP connections (including servers)

Local Address  Remote Address  Swind    Send-Q   Rwind    Recv-Q  State
[FILER]        [OSSV]          4202496  2089152  7340880  0

win2k8ossv:C:\luck\jre-7u6-windows-x64.exe: Source - Transferring (OSSV_host)

4. Data is fully restored to the OSSV host.

Restored data from /vol/ossv_new/Cxxluck/netapp.snapdrive.linux_5_1.rpm to

C:/luck/netapp.snapdrive.linux_5_1.rpm via default interface

5. Releases the snapshot used by the restore.

Released snapshot used by the restore of

6. Finally, releases the relationship used by the restore.

Released relationship used by the restore of


7. Restore Ends.

Successfully restored path using SnapVault Restore

What exactly happens during OSSV BACKUP?

When you kick off a backup from the DFM server and run netstat -abnp tcp, you can see the DFM
server talking to the filer and the OSSV host on NDMP port 10000 via DFPM.

Note: Once the backup relationship is established, DFM is no longer talking to the OSSV host, and
you can actually see port 10000 no longer listening on the OSSV host after a few seconds or minutes.
At times, you may not see this port established on the OSSV host during backup at all.

1. DFM server sends a backup request to the filer and also talks to the OSSV host to create a baseline transfer.





2. Filer connects to the OSSV host on QSM interface and establishes connection.



3. Filer initiates the backup by pulling the data from OSSV host as shown in the netstat output below.

fas01> netstat -a

Active TCP connections (including servers)

Local Address  Remote Address  Swind  Send-Q  Rwind  Recv-Q  State
[FILER]        [OSSV-HOST]     65280  0       27     7340853

Issue faced during RESTORE

OSSV is fairly easy to set up, and one should expect to perform backup and restore without any
issues as long as ports 10000 and 10566 are open at the firewalls.

At my customer site, backup worked smoothly, but every time we tried to do a restore it just
wouldn't cooperate. The following error was seen:


Connection had exception. Failed to connect to filer

I cracked my head on this for a week (almost to the point of frustration), but with no luck. I finally
decided to run netstat -abnp tcp on the OSSV host while running a restore, and that is where I
discovered the following:


TCP [OSSV]:52202 [FILER]:10566 SYN_SENT


Basically, the TCP connection was never established; the host never received a SYN-ACK packet back.

This was also captured in the pktt trace I ran between the OSSV host and the filer. At this point I
knew SYN packets were being sent but ignored at the filer end, so I finally decided to open a case
with NetApp.

NetApp Support discovered that the SnapVault interface was listed in the snapmirror blocked
interface list on the filer. Removing the interface from this list solved the issue.

Filer> options interface.blocked.snapmirror

Because the interface appeared in this option's list, SYN packets on port 10566 were ignored.

As a side note, I think packet tracing is an invaluable tool when it comes to dealing with issues on a
TCP/IP network; it can almost always tell you what's wrong, if not how to fix it. I guess you learn
these skills with experience.
Document reference:

Open Systems SnapVault 3.0.1 Installation and Administration Guide (Part No. 215-05638_C0)

OSSV Best Practices Guide

OnCommand Core 5.1 (DFM): Guide to Common Provisioning and Data Protection Workflows for 7-Mode (Part No. 210-05421_A0)

Note: You need a NetApp Support account to access these resources.

How to release a SnapVault relationship and make the secondary qtree writable (useful when you
are doing a data migration using OSSV rather than robocopy or other CIFS copy tools)


-Prepared by

Ashwin Pawar