
Best Practices for using Storage Lifecycle Policies in NetBackup 6.5.3 and 6.5.4

This document provides a list of best practices and guidelines for using Storage Lifecycle Policies (SLPs) in your NetBackup 6.5.3 or 6.5.4 environment.

Contents
General Guidelines for both NetBackup 6.5.3 and 6.5.4
    Upgrade your system to the latest release
    Introduce SLPs into the environment gradually
    Plan for duplication time
    Avoid backlog
    Monitor SLP progress
    To reduce backlog: Deactivate processing or deactivate SLPs
    If using storage unit groups with Media Server Load Balancing, be conservative
    If using storage unit groups with Inline Copy, be conservative
    Be conservative with the number of SLPs you create
    Tune the size and frequency of your SLP duplication jobs
    Strategies specific to tape devices (including Virtual Tape Library)
        Considerations for VTL staging
        Minimize contention for tape drives
        Provide an appropriate number of virtual tape drives for duplication
        Do not share virtual tape drives
        Minimize contention for media (real or virtual tapes)
        Preserve multiplexing
        Use Inline Copy to make multiple copies from a tape device (including VTL)
        Large duplication jobs are more efficient for tape
        Environments with very large images and many very small images
    Reduce disk contention by disabling duplication during the backup window
    Do not edit an active SLP with in-process images
    Large duplication jobs are more efficient (regardless of the type of media)
    Duplicating many very small images can be a performance issue
    SLPs should not share tapes
Guidelines for using NetBackup 6.5.4
    Placement of VTL devices in a hierarchical duplication configuration
    10,000 fragment limit for images fetched in each session
    New 6.5.4 touch file RB_DO_NOT_USE_SHARED_RESERVATIONS
    Tune the Resource Broker configuration file
Guidelines for using NetBackup 6.5.3
    Tune for SLP service restart
    1,000 fragment limit for images fetched in each session
    Effects of the RB_USE_SHARED_RESERVATIONS touch file
    Tune the Resource Broker configuration file


General Guidelines for both NetBackup 6.5.3 and 6.5.4


The practices listed in this section are general guidelines for tuning Storage Lifecycle Policies and apply to both NetBackup 6.5.3 and 6.5.4.

For additional, version-specific information, see Guidelines for using NetBackup 6.5.4 and Guidelines for using NetBackup 6.5.3 later in this document.

Upgrade your system to the latest release


Each NetBackup release has improved Storage Lifecycle Policies, so consider upgrading to the latest version as soon as possible. The Getting Started section of the NetBackup technical support web site contains a Late Breaking News link for each release, which lists the updates available to install to keep all Storage Lifecycle Policy and Resource Broker bundles current. Subscribe to these pages to stay informed of updates and new information.

Introduce SLPs into the environment gradually


Gradual implementation of plans to move more data through your environment lets you tune the environment as your data protection requirements grow, and lets you learn how SLPs work best at your site. Do not immediately begin to duplicate all of your corporate data. Adding duplication jobs asks more of the backup and storage infrastructure, so consider factors such as whether additional hardware is needed. To help determine how duplication and SLPs in general will impact your resources, apply SLPs to a small set of backups and observe how much hardware use increases.
NOTE: Because the cost of managing tapes held in off-site storage is relatively high, users have traditionally made just one duplicate copy of a subset of their backups to send off-site. With recent trends toward newer disk technologies (including Virtual Tape Libraries) and SLPs, users now want to make multiple duplicate copies of all of their corporate data. Doing so with no prior experience with these technologies can lead to difficulties. Symantec strongly advises that you move into these new strategies slowly. For example, if you do not currently duplicate a set of backups using Vault, perhaps it does not need to be duplicated with SLPs. Always consider the additional stress that duplication may place on the rest of your environment, so that you make the best decisions.

Plan for duplication time


Duplicating data generally takes longer than backing it up. Duplication also consumes twice the storage-device bandwidth that a backup consumes, because a duplication job must read from one storage device and write to another.

Make sure that you have enough hardware. The bandwidth available for duplication (the number of tape drive pairs, for example) needs to be large enough to accommodate the amount of data that needs to be duplicated.

Duplication also taxes the NetBackup Resource Broker (nbrb) twice as much as backups, because duplication needs resources both to read and to write. If nbrb is overtaxed, it can slow the rate at which all types of new jobs acquire resources and begin to move data.

Hardware problems happen. Plan to maintain some daily or weekly slack time to allow for hardware issues. Allowing for additional time is a best practice for all types of duplication activity and is not specific to Storage Lifecycle Policies.

Avoid backlog
Backlog refers to the effect of duplications not keeping up with backups. When duplications fall behind, unfinished images build up to create a backlog. As more backups are performed, the duplication backlog can increase rapidly. There are two reasons that an increasing duplication backlog is a serious problem:

1. SLPs duplicate your oldest images first. While an old backlog exists, your newest backups will not be duplicated. This can cause you to miss your Service Level Agreements.

2. To reduce (and ultimately eliminate) a backlog, your duplications must be able to catch up: they must process more images than the new backups that are coming in. If duplications got behind in the first place, they may not have had sufficient hardware resources to keep up. To turn this around so that the duplications can catch up, you may need to add even more hardware or processing power than you would have needed to stay balanced in the first place.

The key to avoiding backlog is to ensure that images are duplicated as fast as new backup images are coming in, over a strategic period of time. The strategic period differs from site to site, but might be 24 hours, 3 days, or 1 week. If you are backing up 500 GB of data daily, you need to be duplicating at least 500 GB of data daily. The backlog should always be decreasing. Do not add more data to the Storage Lifecycle Policy configuration until you reach this goal.

Consider the following questions to prepare for and avoid backlog:

- Under normal operations, how soon should backups be fully duplicated? What are your Service Level Agreements? Determine a metric that works for the environment.
- Is the duplication environment (including hardware, networks, servers, I/O bandwidth, and so on) capable of meeting your business requirements?
- If your SLPs are configured to use duplication to make more than one copy, do your throughput estimates and resource planning account for all of those duplications?


- Do you have enough storage to allow for downtime in your environment if there are problems?
- Have you planned for the additional time it will take to recover if a backlog situation does occur? After making changes to address the backlog, additional time will be needed for duplications to catch up with backups.
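As a rough capacity check, you can estimate the number of duplication drive pairs required from your daily duplication volume, per-pair throughput, and duplication window. The numbers below (500 GB per day, 120 MB/sec per drive pair, an 8-hour window) are illustrative assumptions only; substitute measured values from your own environment:

#!/bin/sh
# Back-of-the-envelope duplication capacity check (all values are assumptions).
daily_gb=500        # data to duplicate per day; must be >= data backed up per day
pair_mb_sec=120     # sustained throughput of one read/write drive pair
window_hours=8      # hours per day available for duplication
gb_per_pair=$((pair_mb_sec * 3600 * window_hours / 1024))
pairs_needed=$(( (daily_gb + gb_per_pair - 1) / gb_per_pair ))   # round up
echo "One drive pair can move ~${gb_per_pair} GB per window"
echo "Drive pairs needed: ${pairs_needed}"

If the result leaves no headroom for the daily or weekly slack time discussed above, plan for more drive pairs.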

Monitor SLP progress


If you monitor the progress of the Storage Lifecycle Policies using the Storage Lifecycle Policy utility command, nbstlutil, you can foresee potential problems before a backlog builds up and gets out of hand. For 6.5.4 and subsequent versions, use the nbstlutil command to display all unfinished copies:
nbstlutil stlilist -image_incomplete -U

The list is sorted by age. The image at the top of the list is the oldest, so the backup time of that image is the age of the backlog. In most configurations that image should not be more than 24 hours old. It should certainly not be older than your RPO (Recovery Point Objective) time. For 6.5.4 and subsequent versions, you can use nbstlutil or bpimagelist to count incomplete images (not incomplete copies):
nbstlutil stlilist -image_incomplete -U | grep Image | wc -l
bpimagelist -stl_incomplete | grep Image | wc -l

For 6.5.3 and previous versions, you can use bpimagelist to monitor for backlog:
bpimagelist -stl_incomplete | grep Image | wc -l
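The commands above lend themselves to a simple scheduled check. The following sketch assumes a UNIX master server, NetBackup commands under /usr/openv/netbackup/bin/admincmd, a working mail command, and a hypothetical threshold of 100 incomplete images; adjust all of these for your site:

#!/bin/sh
# Sketch: warn when the SLP backlog (incomplete images) exceeds a threshold.
ADMINCMD=/usr/openv/netbackup/bin/admincmd   # assumed location; verify locally
THRESHOLD=100                                # site-specific; pick your own metric
COUNT=`$ADMINCMD/nbstlutil stlilist -image_incomplete -U | grep Image | wc -l`
if [ "$COUNT" -gt "$THRESHOLD" ]; then
    echo "SLP backlog warning: $COUNT incomplete images (threshold $THRESHOLD)" \
        | mail -s "SLP backlog warning" backup-admins@example.com
fi

Run it from cron at an interval that matches the metric you chose above (for example, hourly).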

To reduce backlog: Deactivate processing or deactivate SLPs


SLPs process the oldest images first. Until the backlog is cleared up, your new backups will not be duplicated. The easiest way to get out of the backlog is to use nbstlutil to make inactive (or perhaps even cancel) Storage Lifecycle Policy processing of old images.

If you choose to make the old images inactive, then later (when your newest backups are completely duplicated) you can make the inactive images active again and SLPs will resume processing them. If you choose to cancel the images, SLPs will never process them again, so the unfinished copies will never be made. If you have hundreds or thousands of images to deactivate or cancel, a script can help; a sketch appears at the end of this section.


Commands to deactivate or cancel processing:


nbstlutil inactive -backupid xxxxxx
nbstlutil cancel -backupid xxxxxx

NOTE: If you choose to cancel processing, be aware that images with Capacity Managed or Expire After Duplication retentions are subject to immediate expiration, before other copies can be made. Canceling the processing reduces the backlog quickly, but it may not be the best option in your environment.

The nbstlutil commands above only tell the Storage Lifecycle Manager service (nbstserv) to stop processing the images (to stop creating new duplication jobs for them). They do not affect duplication jobs that are already active or queued. Consequently, you also need to cancel the queued duplication jobs. (At your own discretion, you could also cancel the active duplication jobs.)

An alternative to inactivating or canceling a list of images is to temporarily disable SLPs that are not urgent, to allow urgent SLPs that use the same hardware resources to catch up and complete. TechNote 307360 describes how to write and schedule scripts to disable and enable duplications at the SLP and storage destination levels. Again, you will also need to cancel queued (and perhaps active) duplication jobs.

If using storage unit groups with Media Server Load Balancing, be conservative
Using the Media Server Load Balancing option on storage unit groups can negatively affect Resource Broker performance. The more storage units and the more media servers represented by the storage unit group, the more the Resource Broker needs to search to find the best resource matches. As long as the Resource Broker search fails to find resources for a job, the search is run for every job using the storage unit group, every time through the Resource Broker work list. The Resource Broker continues to run this search for each evaluation cycle and for each such job until resources are found.

Be conservative with storage unit groups by limiting the number of storage units and the number of media servers represented by the storage unit group. If you add a media server or a storage unit to the storage unit group, pay attention to how it affects performance. This is best practice for all backup and duplication operations and is not specific to Storage Lifecycle Policies.

If using storage unit groups with Inline Copy, be conservative


SLPs with multiple duplication destinations look for opportunities to use Inline Copy for the duplication jobs. Using storage unit groups for multiple duplication destinations that are candidates for Inline Copy can negatively affect Resource Broker performance. An Inline Copy request needs to search for the following:

- The best common media server (that is, a server with access to make all copies).
- Adequate storage for each copy.

As long as the Resource Broker search fails to find resources for a job, the search is run for every such job each time through the Resource Broker work list. If there are many jobs in the work list that need to find such resources, the performance of the Resource Broker suffers.

Be conservative with storage unit groups by limiting their use on destinations that are candidates for Inline Copy within a single SLP. If you must use them this way, limit the number of storage units and the number of media servers represented by the storage unit group. If you add a media server or a storage unit, pay attention to how it affects Resource Broker performance. This is best practice for all backup and duplication operations and is not specific to Storage Lifecycle Policies.

Be conservative with the number of SLPs you create


Examine your plan and look for ways to reduce the number of SLPs. Perhaps other backup policies could share an existing SLP. This is consistent with general best practices to be conservative with the number of Backup Policies, Storage Unit Groups, etc. that you configure. Minimizing complexity is always a best practice.

Tune the size and frequency of your SLP duplication jobs


Do you want your duplication jobs to be large (as recommended for VTL staging)? Or do you want your duplication jobs to happen quite soon after the backup completes?

LIFECYCLE_PARAMETERS file entries

Use the following entries in the LIFECYCLE_PARAMETERS file to tune the size and frequency of duplication jobs. Changes to these parameters take effect without restarting any NetBackup services:
MIN_GB_SIZE_PER_DUPLICATION_JOB 180

Adjusting this value, specified in gigabytes, lets you affect the number and size of duplication jobs. If the MIN_GB_SIZE_PER_DUPLICATION_JOB setting is small, you create more, smaller duplication jobs. If it is large, you create fewer, larger duplication jobs.

Note: For low-volume times of day or other low-volume conditions, the MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB timeout allows smaller duplication jobs to run before the MIN_GB_SIZE_PER_DUPLICATION_JOB target size is reached.

Default value: 8 GB.

MAX_GB_SIZE_PER_DUPLICATION_JOB 250

This entry controls the maximum size of a duplication job. (When a single image is larger than the maximum size, that one image will be put into its own duplication job.) Consider your tolerance for long-running duplication jobs. Can you tolerate a job that will run for 8 hours, consuming tape drives the entire time? Four hours? One hour? Then calculate the amount of data that would be duplicated in that amount of time. Remember that the larger the duplication job is, the more efficient the duplication job will be. Default value: 25 GB.
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB 30

Often there will be low-volume times of day or low-volume SLPs. If new backups are small or are not appearing as quickly as during typical high-volume times, adjusting this value can improve duplication drive utilization during low-volume times or can help you to achieve your SLAs. Reducing this value allows duplication jobs to be submitted that do not meet the minimum size criterion. Default value: 30 minutes. A setting of 30 to 60 minutes works successfully for most environments.
DUPLICATION_SESSION_INTERVAL_MINUTES 5

This parameter indicates how frequently nbstserv checks whether enough backups have completed and decides whether it is time to submit one or more duplication jobs. Default: 5 minutes. A setting of 15 to 30 minutes works successfully for most environments.
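Putting these entries together, a LIFECYCLE_PARAMETERS file tuned for fairly large duplication jobs might look like the following. The values simply combine the examples and recommendations above and are starting points, not universal settings; the path shown is the customary UNIX location (on Windows, use the equivalent directory under the install path):

# /usr/openv/netbackup/db/config/LIFECYCLE_PARAMETERS (location is the usual
# UNIX default; verify for your platform)
MIN_GB_SIZE_PER_DUPLICATION_JOB 180
MAX_GB_SIZE_PER_DUPLICATION_JOB 250
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB 30
DUPLICATION_SESSION_INTERVAL_MINUTES 15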

Strategies specific to tape devices (including Virtual Tape Library)


The following sections address tapes and tape libraries. Remember that NetBackup treats a Virtual Tape Library in exactly the same way that it treats a physical tape library; to NetBackup there is no difference between physical and virtual tape.

Considerations for VTL staging

VTL staging refers to the practice of using a VTL as an interim location for backup images. In this scenario, the backup images are written to a VTL and subsequently duplicated from the VTL to some other storage device. The following best practices are useful if you are using VTL staging.

Minimize contention for tape drives

Backlogs can come about because of inefficient duplication, which can be caused by tape drive contention. The following methods can reduce this problem:

Symantec TechNote 327648

July 29, 2009

- Assign a job priority to NetBackup policies.
  o To give backup jobs preferential access to drives, assign a higher Job priority to the backup policies and a lower priority to the Duplication job priority in the SLP.
  o A backup Job priority greater than the Duplication job priority makes it more likely that duplication jobs will not get tape drives while backup jobs are waiting in the queue.

- Reserve some tape drives specifically for backup jobs and some specifically for duplication jobs. The following guidance applies if you are not using the Shared Storage Option:
  o Tape drives for the backup jobs: If you need duplication jobs to run concurrently with backup jobs, define a storage unit for your backup policies that uses only a subset of the tape drives in the device. To do this, ensure that the storage unit's setting for Maximum concurrent drives is less than the total number of drives in the library. For instance, if your tape device has 25 drives, you might set Maximum concurrent drives for your backup storage unit to 12, thereby reserving the other 13 drives for duplication (or restore) jobs. (Suppose these 13 drives are for 12 duplication jobs to read images and for 1 restore job to read images.)
  o Tape drives for duplication jobs: Define a destination storage unit for your duplication jobs that matches, in number, the number of read drives reserved for duplication at the original device. To follow the previous example, this duplication storage unit would have a Maximum concurrent drives of 12, to match the 12 read drives reserved for duplication on the source device. For duplication jobs it is important to keep a 1:1 ratio of read virtual drives to write physical drives.

Keep in mind that media contention can still make this inefficient. To reduce media contention, be sure to follow the other guidelines in this document, including tuning the SLP environment for very large duplication jobs (via the LIFECYCLE_PARAMETERS file).

Provide an appropriate number of virtual tape drives for duplication

When you are duplicating from VTL to real tape, configure at least twice as many tape drives in your VTL as there are physical drives in the destination device. For example, if you have 5 physical tape drives that will write the duplicate images, configure at least 10 VTL drives: use 5 for backup and 5 for duplication.

The number of duplication drive pairs should be greater than or equal to the number of tape drives used during the backup window. For example, if 5 VTL drives are used for backup, there should be at least 5 drive pairs (5 VTL drives and 5 physical drives) available for duplication. If fewer drive pairs are available for duplication, the duplication will take more time than the original backups required, and a backlog of unfinished images could accumulate. If more drive pairs are available for duplication, the duplication can be done in less time than the original backups took. (This is subject to the performance of the hardware involved and to SLP duplication batching.)

Do not share virtual tape drives

Because NetBackup does not differentiate between physical and virtual tape drives, it is possible to configure the drives in a VTL to behave in exactly the same way as physical tape drives from a NetBackup perspective, including using the Shared Storage Option to share virtual tape drives between media servers. While such sharing is possible with virtual tape drives, it is not efficient and can lead to drive contention. When using VTLs, create as many virtual tape drives as you need for all media servers; do not create a smaller number and share them between media servers.

Minimize contention for media (real or virtual tapes)

When multiple jobs (backup jobs and duplication jobs) need to access the same tape, the contention for the tapes can cause huge inefficiencies in your duplication environment. There are many ways to reduce the contention for media. In general, you want to reduce the number of jobs that need to access any single tape:

- Preserve multiplexing: Though it has other benefits, this feature allows one single duplication job to read all multiplexed images from a tape. (See Preserve multiplexing below.)
- Inline Copy: If you need to make multiple copies from the media, you can reduce the number of jobs that need to read the source image by using Inline Copy. In this way only one duplication job reads the source image once, then creates multiple copies. (See Use Inline Copy to make multiple copies from a tape device (including VTL) below.)
- Large duplication jobs: Configuring duplication jobs to be large enough to read all images from a tape or a set of tapes reduces the number of jobs needing access to any single tape. (See Large duplication jobs are more efficient for tape below.)
- Be very careful if configuring an SLP to use hierarchical duplication with a VTL as the middle device. (See Placement of VTL devices in a hierarchical duplication configuration.)
- Avoid configuring SLPs such that they share tapes. (See SLPs should not share tapes.)

These strategies are discussed below or in subsequent sections of this document.

Preserve multiplexing

If backups are multiplexed on tape, Symantec strongly recommends that the Preserve multiplexing setting is enabled in the Change Destination window for the subsequent duplication destinations.

Preserve multiplexing allows the duplication job to read multiple images from the tape while making only one pass over the tape. (Without Preserve multiplexing, the duplication job must read the tape multiple times so as to copy each image individually.) Preserve multiplexing significantly improves performance of duplication jobs. On a VTL, the impact of multiplexed images on restore performance is negligible relative to the gains in duplication performance.

Use Inline Copy to make multiple copies from a tape device (including VTL)

If you want to make more than one copy from a tape device, use Inline Copy. Inline Copy allows a single duplication job to read the source image once, and then to make multiple subsequent copies. In this way, one duplication job reads the source image, rather than multiple duplication jobs each reading the same source image to make a single copy at a time. By having one duplication job create multiple copies, you reduce the number of times the media needs to be read. This can cut the contention for the media in half, or better.

Large duplication jobs are more efficient for tape

Duplicating images from tape works best if you tune Storage Lifecycle Policies to create duplication jobs that are as large as your business requirements can tolerate. (The ability to do this in 6.5.3 is extremely limited.) The MIN_GB_SIZE_PER_DUPLICATION_JOB value should be large enough to capture as many tapes as you can tolerate for each duplication job. (The larger the duplication job, the longer it consumes tape drives; that may be a limiting effect given your other business requirements.)

Large duplication jobs are more efficient when duplicating from tape for the following reasons:

1. If the original images are multiplexed on the virtual tapes, the duplication job must be large enough for Preserve multiplexing to work. Suppose your images are 10 GB and are multiplexed at a level of 4 (so that four images are interleaved on the tape at a time). Suppose MIN_GB_SIZE_PER_DUPLICATION_JOB is set to 8 GB (the default). The SLP will put only one 10 GB image into each duplication job, so Preserve multiplexing will not be able to do what it is supposed to do; the duplication job will have to reread the tape for each image. MIN_GB_SIZE_PER_DUPLICATION_JOB will need to be set to at least 40 GB (and would be better at 80 GB or more) for Preserve multiplexing to work well.

2. If the duplication job is allowed to be too small, then multiple duplication jobs will require the same tape(s). Suppose there are an average of 500 images on a tape, and suppose the duplication jobs are allowed to be small enough to pick up only about 50 images at a time. For each tape, the SLP will create 10 duplication jobs that all need the same tape. Contention for media causes various inefficiencies.
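To illustrate the first point with concrete settings: for 10 GB images multiplexed at level 4, the arithmetic above (image size times multiplexing level, with room to spare) suggests entries like the following. These numbers simply restate the example; derive your own from your image sizes and multiplexing level:

# For 10 GB images at MPX level 4: 10 GB x 4 = 40 GB minimum for Preserve
# multiplexing to be effective; 80 GB or more is better.
MIN_GB_SIZE_PER_DUPLICATION_JOB 80
MAX_GB_SIZE_PER_DUPLICATION_JOB 250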


In release 6.5.4 and above, a large duplication job that reads from tape will choose a batch of images that reside on a common media set, which may span many tapes. This is more efficient because the larger the duplication jobs, the fewer jobs will be contending with each other for the same media.

Environments with very large images and many very small images

Though a large MIN_GB_SIZE_PER_DUPLICATION_JOB works well for large images, it can be a problem if you also have many very small images. It may take a long time to accumulate enough small images to reach a large MIN_GB_SIZE_PER_DUPLICATION_JOB. To mitigate this effect, you can keep the timeout that is set in MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB small enough for the small images to be duplicated sufficiently soon. However, the small timeout (to accommodate your small images) may negate the large MIN_GB_SIZE_PER_DUPLICATION_JOB setting (which was intended to accommodate your large images). It may not be possible to tune this perfectly if you have lots of large images and lots of very small images.

For instance, suppose you are multiplexing your large backups to tape (or virtual tape) and have decided that you would like to put 2 TB of backup data into each duplication job to allow Preserve multiplexing to optimize the reading of a set of images. Suppose it takes approximately 6 hours for one tape drive to write 2 TB of multiplexed backup data. To do this, you set MIN_GB_SIZE_PER_DUPLICATION_JOB to 2000. Suppose that you also have lots of very small images occurring throughout the day: backups of database transaction logs that need to be duplicated within 30 minutes. If you set MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB to 30 minutes, you will accommodate your small images. However, this timeout means that you will not be able to accommodate 2 TB of the large backup images in each duplication job, so the duplication of your large backups may not be optimized. (You may experience more rereading of the source tapes due to the effects of duplicating smaller subsets of a long stream of large multiplexed backups.)

If you have both large and very small images, consider the tradeoffs and choose a reasonable compromise with these settings. Remember: If you choose to allow a short timeout so as to duplicate small images sooner, then the duplication of your large multiplexed images may be less efficient.

Recommendation: Set MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB to twice the value of DUPLICATION_SESSION_INTERVAL_MINUTES. Experience has shown that DUPLICATION_SESSION_INTERVAL_MINUTES tends to perform well at 15 to 30 minutes. A good place to start would be to try one of the following, then modify these as you see fit for your environment:
DUPLICATION_SESSION_INTERVAL_MINUTES 15
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB 30


Or:
DUPLICATION_SESSION_INTERVAL_MINUTES 30
MAX_MINUTES_TIL_FORCE_SMALL_DUPLICATION_JOB 60

Reduce disk contention by disabling duplication during the backup window


If you are staging your backups to a disk device using SLPs, and if you can afford to, you may want to use nbstlutil to disable duplications during the backup window. On disk, concurrent reads (duplication jobs) and writes (backup jobs) on the primary disk pool (copy 1) reduce disk efficiency and speed.

In a future release, this problem can be avoided by using the new Maximum I/O streams per volume option on the Disk Pool. In earlier versions, it can be avoided with scheduled scripts that disable SLP duplication during the backup window. TechNote 307360 describes how to write and schedule scripts to disable and enable SLP duplication requests globally or at the SLP or storage destination levels. Duplication jobs that are already queued will not be affected by this mechanism, so you may also want your script to cancel queued duplication jobs. (A sketch of this scheduling approach appears below.)

Note: When there are a lot of SLP-incomplete images in the EMM work list, it can take a long time to disable or enable duplication.
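As an illustration of the scheduling idea (not the TechNote 307360 scripts themselves), the following crontab entries disable duplication for a hypothetical SLP named SLP_Disk_Staging at the start of a 6 p.m. to 6 a.m. backup window and re-enable it afterward. The -lifecycle option, command path, and SLP name are assumptions; confirm the exact nbstlutil syntax for your release before relying on this:

# Hypothetical crontab entries on the master server (times, path, and SLP
# name are examples only).
# Disable SLP duplication at 18:00, before the backup window opens:
0 18 * * * /usr/openv/netbackup/bin/admincmd/nbstlutil inactive -lifecycle SLP_Disk_Staging
# Re-enable SLP duplication at 06:00, after the backup window closes:
0 6 * * * /usr/openv/netbackup/bin/admincmd/nbstlutil active -lifecycle SLP_Disk_Staging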

Do not edit an active SLP with in-process images


Do not edit a Storage Lifecycle Policy while it has images that are in process. If a destination is deleted, images could be lost prematurely. If you remove a storage unit from the NetBackup configuration and from the SLP while SLP images are in process, duplication for the images created prior to the change will not work: the SLP still wants to use the old storage unit for those images, but the storage unit no longer exists. You could end up with an unmanageable backlog. Other changes to a Storage Lifecycle Policy may create other unexpected behaviors. For more information, see TechNote 323746.

Instead of editing a Storage Lifecycle Policy, you can perform either of the following procedures:

- Create a new Storage Lifecycle Policy with the changed configuration. Change applicable backup policies to point to the new SLP. The images that were in process under the old SLP configuration will continue to be processed (unless you cancel them manually).
- Deactivate the SLP and the backup policies that use the SLP (see TechNote 307360). Let all duplications from this SLP complete, or cancel them manually. Change the SLP. Reactivate the SLP and the backup policies.


Large duplication jobs are more efficient (regardless of the type of media)
Whether you are copying images from disk or VTL, if duplication jobs are allowed to be large, there will be fewer of them and the job queue processed by the Resource Broker will be smaller. This allows the Resource Broker to be more efficient. If duplication jobs are allowed to be small, the excessive number of duplication jobs in the queue will negatively impact the ability of all types of queued jobs to acquire resources quickly. The earlier section Tune the size and frequency of your SLP duplication jobs describes how to configure SLPs for large duplication jobs.

Duplicating many very small images can be a performance issue


Duplicating many very small images can cause performance problems because of the overhead involved in acquiring resources for each image in the duplication job. (Small is considered less than 100 MB.) In a future release, this overhead will be significantly reduced. This applies to all types of duplication activity and is not specific to Storage Lifecycle Policies. (Vault and disk storage units that do staging also exhibit this behavior; their performance in this regard will also improve in a future release.)

Small images usually come from the following sources:

- Backups of database transaction logs
- Differential incremental backups
- Deduplicated (compressed) images

SLPs should not share tapes


Avoid letting two or more Storage Lifecycle Policies share a set of tapes. Doing so can cause media contention between duplication jobs and result in idle tape drives. The following are viable alternatives:

- Use a different volume pool for each SLP.
- Use a different retention for each SLP, and do not allow mixed retentions on media. (Note that when the Expire after Duplication option is enabled, the SLP uses a retention level of 0.)
NOTE: These alternatives avoid contention between SLPs, but there will still be some contention between duplication jobs created by a single SLP. This issue is being addressed in a future release. In earlier versions this issue can be mitigated by configuring SLPs to create large duplication jobs, as described elsewhere in this document.


Guidelines for using NetBackup 6.5.4


The practices listed in this section apply only to environments running NetBackup 6.5.4.

Placement of VTL devices in a hierarchical duplication configuration


Be very careful when using a Virtual Tape Library for the middle device in a hierarchical duplication configuration. Configuring your SLP this way can cause excessive media contention. For example, if you are duplicating from VTL_1 to VTL_2 to Long Term Storage, VTL_2 is impacted by both incoming and outgoing duplication jobs, and both contend for tape resources. You can mitigate the effect with a script that uses nbstlutil to disable/enable duplications to individual destinations. (See TechNote 307360.)

10,000 fragment limit for images fetched in each session


In 6.5.3, the number of images that could be included in an nbstserv session was limited to 1,000 fragments in EMM (set in the LIFECYCLE_PARAMETERS file with the parameter MAX_IMAGES_FETCHED_SESSION 1000). In 6.5.4 and subsequent releases, this limit is 10,000 and does not need to be changed. If MAX_IMAGES_FETCHED_SESSION is defined in the LIFECYCLE_PARAMETERS file, remove the entry when you upgrade to 6.5.4.

New 6.5.4 touch file RB_DO_NOT_USE_SHARED_RESERVATIONS


In 6.5.4, the Resource Broker uses shared reservations by default. Note that this is a change from 6.5.3, in which shared reservations were not used by default. If tape drive utilization degrades after upgrading from 6.5.3 to 6.5.4, consider disabling shared reservations by creating the RB_DO_NOT_USE_SHARED_RESERVATIONS touch file, which is new in release 6.5.4. Create the touch file at the following location:
/usr/openv/netbackup/db/config/RB_DO_NOT_USE_SHARED_RESERVATIONS
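For example, on a UNIX master server you would create the (empty) touch file as follows; on Windows, create an empty file of the same name in the equivalent directory under the install path:

touch /usr/openv/netbackup/db/config/RB_DO_NOT_USE_SHARED_RESERVATIONS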

The RB_USE_SHARED_RESERVATIONS touch file that was used in release 6.5.3 to change the default is no longer used in 6.5.4. (See Effects of the RB_USE_SHARED_RESERVATIONS touch file.)

When shared reservations are enabled, the Resource Broker allows more than one duplication job that requires the same source tape to go active at the same time. However, a single tape can only be actively read by one duplication job at a time, so an active duplication job may need to wait until the current job that is reading the tape completes. If the job that has to wait for the source tape has already allocated a tape drive for writing, then the write drive sits idle until the source tape is available.

Shared reservations can increase or decrease tape drive utilization, depending on the batching of the images across the duplication jobs:

- If duplication jobs are large (could run for hours) and require multiple source tapes, then shared reservations may improve overall performance even though a write drive may sit idle now and then. (A duplication job could copy images from many tapes before having to wait for a tape, rather than having to wait for the tape before even beginning to copy the other tapes.)
- If duplication jobs are small and require only portions of a single source tape, then shared reservations could cause an excessive amount of idle time on write drives.
- If your SLPs share tapes (see SLPs should not share tapes), then shared reservations could increase or decrease performance, depending on the nature of the sharing.

In a future release, SLPs will perform image batching differently to reduce contention between duplication jobs.

Tune the Resource Broker configuration file


The NetBackup Resource Broker (nbrb) queues resource requests as they come in from the NetBackup job manager, so that it can accumulate the resources needed for a job to run, assign resources to jobs in an intelligent order, and respond to the job manager with the reasons why queued jobs are waiting. The nbrb evaluation cycle refers to the processing of the resource request queue for queued jobs. On a heavily loaded system, it may take 10 minutes or more to process the entire request queue.

Prior to release 6.5.4, the default behavior was to process the entire request queue in one uninterrupted pass, during which changes in resource availability and new resource requests were not considered. As of 6.5.4, the default behavior is for nbrb to interrupt the evaluation cycle every 3 minutes (the frequency can be modified) so that it can consider changes in resource availability. The changes that can be considered (or not) before the entire queue is processed are as follows:

- Releasing resources: Resources (tape drives, tapes, disk volumes, counted resources, and so on) may have been given up by completed jobs, but are not released until the start of the next evaluation cycle. When a resource is given up by a completed job, the Resource Broker marks the resource as ToBeReleased. The actual release is done at the start of the evaluation cycle. Until then, other jobs that could have used the resource are not able to, because the resource is still effectively in use. For 6.5.4 and later releases: By default, resources are released every 3 minutes.

- Unloading tape drives: Tape drives that were previously released may have, by this time, exceeded their media unload delay setting. Though these drives are considered to be available resources for pending jobs, such a drive can only be used if another job can also make use of the tape that is still mounted in it. If the pending jobs in the queue cannot use the loaded drive (because the volume pool is different or the retention level does not match), the drive remains loaded and unusable by the remaining jobs until the start of the next evaluation cycle. For 6.5.4 and later releases: By default, drives are unloaded every 3 minutes. The default behavior can be altered using DO_INTERMITTENT_UNLOADS, as described later in this section.


- New resource requests: New job requests that have arrived are not considered until the start of the next evaluation cycle. Even if these jobs have a high priority, they are not considered until all lower-priority jobs have had a chance to acquire resources.

The following settings in the Resource Broker configuration file (nbrb.conf) allow you to alter the behavior of the resource request queue:
SECONDS_FOR_EVAL_LOOP_RELEASE
DO_INTERMITTENT_UNLOADS

These settings are discussed below:


SECONDS_FOR_EVAL_LOOP_RELEASE

This entry controls the time interval after which nbrb interrupts its evaluation cycle and releases resources that have been given up by completed jobs. (It optionally unloads drives that have exceeded their media unload delay.) Only after the resources are released can they be assigned to other jobs in the queue. Experience has shown that changing this entry is effective only when the evaluation cycle times are greater than 5 minutes. You can determine the length of the evaluation cycle by looking at the nbrb logs. Keep the following points in mind as you tune this value:

- There is a cost to interrupting the evaluation cycle.
- Under certain load conditions, a 1-minute setting with a 2-minute evaluation cycle can work. Experiment with the values and reach a number that suits your environment best.
- A value of 3 minutes works best for the majority of users, and it is the same as the default media unload delay.
- For release 6.5.4, the default value is 180 (3 minutes). Do not set this value lower than 120 (2 minutes).
DO_INTERMITTENT_UNLOADS

The default value is 1. This value should always be set to 1, unless you are directed otherwise by Symantec technical support to address a customer environment-specific need.


This value affects what happens when nbrb interrupts its evaluation cycle because the SECONDS_FOR_EVAL_LOOP_RELEASE time has expired. When the value is set to 1, nbrb initiates unloads of drives that have exceeded the media unload delay.
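A minimal nbrb.conf reflecting the 6.5.4 defaults discussed above might look like the following sketch. The file location shown is the customary one and the entry syntax is an assumption; verify both against your installation before editing:

# nbrb.conf sketch for 6.5.4 (location typically /usr/openv/var/global/nbrb.conf;
# verify for your platform)
SECONDS_FOR_EVAL_LOOP_RELEASE = 180
DO_INTERMITTENT_UNLOADS = 1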


Guidelines for using NetBackup 6.5.3


The practices listed in this section apply only to environments running NetBackup 6.5.3.

Tune for SLP service restart


When you restart the Storage Lifecycle Manager service (nbstserv), you can shorten the time before duplications resume by changing the following entry in the LIFECYCLE_PARAMETERS file:
DUPLICATION_SESSION_INTERVAL_MINUTES 1

Change the default value from 5 minutes to 1 minute. Make this change after shutting down nbstserv and before restarting it. (A sketch of the process appears below.) A setting of 1 minimizes the time for pending and new duplications to start following a restart of the services. After duplication resumes, reset this parameter to its normal running value. The default is 5 minutes; a setting of 15 minutes works well for most sites.
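The following outline illustrates the restart sequence on a UNIX master server. The LIFECYCLE_PARAMETERS path is the customary location and the service-control steps are placeholders; use your site's standard method for stopping and starting nbstserv:

# Sketch of the 6.5.3 restart sequence (paths and steps are assumptions):
# 1. Stop nbstserv using your usual service-control method.
# 2. Edit /usr/openv/netbackup/db/config/LIFECYCLE_PARAMETERS and set:
#        DUPLICATION_SESSION_INTERVAL_MINUTES 1
# 3. Restart nbstserv; pending and new duplications start quickly.
# 4. After duplication resumes, restore the normal running value, for example:
#        DUPLICATION_SESSION_INTERVAL_MINUTES 15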
NOTE: Adjusting DUPLICATION_SESSION_INTERVAL_MINUTES when restarting nbstserv is no longer necessary with 6.5.4 and future versions.

1,000 fragment limit for images fetched in each session


In 6.5.3, the number of images that can be included in an nbstserv session is limited to 1,000 fragments in EMM. The following setting in the LIFECYCLE_PARAMETERS file determines the value:
MAX_IMAGES_FETCHED_SESSION 1000

Limiting the number of fragments effectively limits the number of images processed with each session. If a single image has more than 1000 fragments, it will be the only image processed by an nbstserv session. Symantec does not recommend using a lower value for this setting.

Effects of the RB_USE_SHARED_RESERVATIONS touch file


In 6.5.3, shared reservations are not used by default. The RB_USE_SHARED_RESERVATIONS touch file is used in 6.5.3 to change the default. Create this touch file only if told to do so by Symantec technical support. Create the touch file at the following location:
/usr/openv/netbackup/db/config/RB_USE_SHARED_RESERVATIONS

Note that in 6.5.4 the default behavior changes: in 6.5.4 the Resource Broker uses shared reservations by default. (See New 6.5.4 touch file RB_DO_NOT_USE_SHARED_RESERVATIONS.)


When shared reservations are enabled, the Resource Broker allows more than one duplication job that requires the same source tape to go active at the same time. However, a single tape can only be actively read by one duplication job at a time, so an active duplication job may need to wait until the current job that is reading the tape completes. If the job that has to wait for the source tape has already allocated a tape drive for writing, then the write drive sits idle until the source tape is available.

Shared reservations can increase or decrease tape drive utilization, depending on the batching of the images across the duplication jobs:

- If duplication jobs are large (could run for hours) and require multiple source tapes, then shared reservations may improve overall performance even though a write drive may sit idle now and then. (A duplication job could copy images from many tapes before having to wait for a tape, rather than having to wait for the tape before even beginning to copy the other tapes.)
- If duplication jobs are small and require only portions of a single source tape, then shared reservations could cause an excessive amount of idle time on write drives.
- If your SLPs share tapes (see SLPs should not share tapes), then shared reservations could increase or decrease performance, depending on the nature of the sharing.

In a future release, SLPs will perform image batching differently to reduce contention between duplication jobs.

Tune the Resource Broker configuration file


The NetBackup Resource Broker (nbrb) queues resource requests as they come in from the NetBackup job manager, so that it can accumulate the resources needed for a job to run, assign resources to jobs in an intelligent order, and respond to the job manager with the reasons why queued jobs are waiting. The nbrb evaluation cycle refers to the processing of the resource request queue for queued jobs. On a heavily loaded system, it may take 10 minutes or more to process the entire request queue. The changes that can be considered (or not) before the entire queue is processed are as follows:

- Releasing resources: Resources (tape drives, tapes, disk volumes, counted resources, and so on) may have been given up by completed jobs, but are not released until the start of the next evaluation cycle. When a resource is given up by a completed job, the Resource Broker marks the resource as ToBeReleased. The actual release is done at the start of the evaluation cycle. Until then, other jobs that could have used the resource are not able to, because the resource is still effectively in use. For 6.5.3 and all earlier releases: By default, resources are not released until the entire queue is processed.

- Unloading tape drives: Tape drives that were previously released may have, by this time, exceeded their media unload delay setting. Though these drives are considered to be available resources for pending jobs, such a drive can only be used if another job can also make use of the tape that is still mounted in it.

If the pending jobs in the queue cannot use the loaded drive (because the volume pool is different or the retention level does not match), the drive remains loaded and unusable by the remaining jobs until the start of the next evaluation cycle. For 6.5.3 and all earlier releases: By default, drives are not unloaded until the entire queue is processed. The default behavior can be altered using DO_INTERMITTENT_UNLOADS, as described later in this section.

- New resource requests: New job requests that have arrived are not considered until the start of the next evaluation cycle. Even if these jobs have a high priority, they are not considered until all lower-priority jobs have had a chance to acquire resources. For 6.5.3 and all earlier releases, new resource requests are ignored until the entire queue is processed.

The following settings in the Resource Broker configuration file (nbrb.conf) allow you to alter the behavior of the resource request queue:
SECONDS_FOR_EVAL_LOOP_RELEASE
DO_INTERMITTENT_UNLOADS

These settings are discussed below:


SECONDS_FOR_EVAL_LOOP_RELEASE

This entry controls the time interval after which nbrb interrupts its evaluation cycle and releases resources that have been given up by completed jobs. (It optionally unloads drives that have exceeded their media unload delay.) Only after the resources are released can they be assigned to other jobs in the queue. Experience has shown that changing this entry is effective only when the evaluation cycle times are greater than 5 minutes. You can determine the length of the evaluation cycle by looking at the nbrb logs. Keep the following points in mind as you tune this value:

- There is a cost to interrupting the evaluation cycle.
- Under certain load conditions, a 1-minute setting with a 2-minute evaluation cycle can work. Experiment with the values and reach a number that suits your environment best.
- A value of 3 minutes works best for the majority of users, and it is the same as the default media unload delay.
- For release 6.5.3, the default behavior is to complete the evaluation cycle without interruption. Setting this value causes the evaluation cycle to be interrupted at the specified frequency.


- Do not set this value lower than 120 (2 minutes).


DO_INTERMITTENT_UNLOADS

This value should always be set to 1 unless Symantec technical support directs otherwise to address an environment-specific need. Note that this parameter is not set by default in 6.5.3 and needs to be added to the nbrb.conf file.

This value affects what happens when nbrb interrupts its evaluation cycle because the SECONDS_FOR_EVAL_LOOP_RELEASE time has expired. When the value is set to 1, nbrb initiates unloads of drives that have exceeded the media unload delay.
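For 6.5.3, where neither entry is present by default, the additions to nbrb.conf might look like the following sketch. The file location and entry syntax are assumptions; verify both against your installation, and remember that DO_INTERMITTENT_UNLOADS has no effect unless SECONDS_FOR_EVAL_LOOP_RELEASE causes the cycle to be interrupted:

# nbrb.conf additions for 6.5.3 (sketch; verify location and syntax locally)
# Interrupt the evaluation cycle every 180 seconds; do not go below 120.
SECONDS_FOR_EVAL_LOOP_RELEASE = 180
# Unload drives that have exceeded the media unload delay at each interruption.
DO_INTERMITTENT_UNLOADS = 1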

