
FAST VP configuration parameters and Unisphere alert configuration
FAST VP configuration parameters
There are multiple settings that affect the behavior of FAST VP:
• FAST VP Performance & Data Movement Mode
• Workload Analysis Period
• Initial Analysis Period
• Relocation Rate
• Pool Reserved Capacity
• VP Allocation by FAST Policy
• Over-subscription limits

The following sections describe each of these settings, its effect on FAST VP behavior, and the possible and default values.
Performance time window
• Use the default: performance metrics are collected 24 hours a day, every day.
Data movement time window
• Allow FAST VP to perform movements 24 hours a day, every day.
• It is recommended that both the monitoring and data movement windows be configured to be always open, so that FAST VP can use the most recent analysis and metrics to optimize data placement.
• There is rarely a need to restrict FAST VP from analyzing or moving data within particular time windows.
• FAST VP is generally intelligent enough to differentiate between typical daytime transactional workloads and nightly backup workloads and batch jobs.

Workload Analysis Period (WAP)
• A longer WAP factors less-recent host activity into FAST VP promotion/demotion scores. A shorter WAP allows FAST VP to react to changes more quickly, but may lead to greater amounts of data being moved between tiers.
• Use the default WAP (168 hours). A WAP of less than one week will result in more FC pool free space and more data demoted to SATA.
Initial Analysis Period (IAP)
• At the initial deployment of FAST VP, it may make sense to set the IAP to 168 hours (one week).
• During steady state, the IAP can be reduced to 24 hours (one day).
FAST VP Relocation Rate (FRR)
• For the initial deployment of FAST VP, start with a more conservative value for the relocation rate, perhaps 7 or 8.
• Once the volume of data movement between tiers has settled down, the FRR can be set to a more aggressive level, such as 5. Do NOT set an FRR lower than 5 on 5876 code, as this causes excessive DA utilization.
VP Allocation by FAST Policy
• As a best practice, it is recommended that VP Allocation by FAST Policy be enabled.
• With this feature enabled, FAST VP attempts to allocate new writes in the most appropriate tier first, based on available performance metrics. If no performance metrics are available, the allocation is attempted in the pool the device is bound to.
• If the FC pool fills up, Allocation by FAST Policy allows new host allocations to "spill over" into the other tiers in the FAST policy.
Pool Reserved Capacity (PRC)
• For individual pools with bound thin devices, set the PRC based on the lowest allocation warning level for that thin pool.
• Set the PRC for the EFD and SATA pools to 1%.
• Set the PRC for the FC pool to 10%.
• For example, if a warning is triggered when a thin pool reaches an allocation of 80 percent of its capacity, then the PRC should be set to 20 percent. This ensures that the remaining 20 percent of the pool is used only for new host-generated allocations, and not for FAST VP data movements.
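The PRC rule above is simple arithmetic; as a minimal sketch (the function name and inputs are illustrative, not part of any EMC tooling):

```python
# Hypothetical helper: derive the Pool Reserved Capacity (PRC) from a
# thin pool's allocation warning level, per the guidance above.
def pool_reserved_capacity(allocation_warning_pct: float) -> float:
    """PRC is the slice of the pool kept free of FAST VP movements.

    If a warning fires at 80% allocated, the remaining 20% should be
    reserved for new host-generated allocations, so PRC = 100 - 80.
    """
    if not 0 < allocation_warning_pct <= 100:
        raise ValueError("warning level must be in (0, 100]")
    return 100.0 - allocation_warning_pct

print(pool_reserved_capacity(80))  # -> 20.0
```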
Over-subscription limits
• After determining the capacity available to FAST VP, an over-subscription limit can be calculated for the pool devices that are going to be bound to it.
• To ensure the configured capacity of the array is not oversubscribed, the limit can be calculated by dividing the available capacity of all the pools by the capacity of the thin pool being used for binding. This value is then multiplied by 100 to get a percentage.
• For example, consider a configuration with a 1 TB EFD pool, a 5 TB FC pool, and a 10 TB SATA pool. The total available capacity is 16 TB. If all thin devices are bound to FC, the over-subscription limit could be set to 320%: (16/5)*100. This value can be set higher if the intention is to initially oversubscribe the configured physical capacity of the array, and then add storage on an as-needed basis.
• For thin pools where devices will never be bound, the subscription limit can be set to 0 percent. This prevents any accidental binding or migration to that pool.
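The calculation above can be sketched as follows (an illustrative helper, not an EMC API; pool capacities are plain numbers in TB):

```python
# Illustrative over-subscription limit, following the rule above:
# (total available capacity of all pools / capacity of the pool that
# devices are bound to) * 100.
def oversubscription_limit_pct(bound_pool_tb: float, all_pools_tb: list) -> float:
    if bound_pool_tb <= 0:
        # Pools that will never have devices bound get a 0% limit.
        return 0.0
    return sum(all_pools_tb) / bound_pool_tb * 100.0

# The worked example: 1 TB EFD + 5 TB FC + 10 TB SATA, all bound to FC.
print(oversubscription_limit_pct(5, [1, 5, 10]))  # -> 320.0
```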
Recommendation
• Collect a performance and capacity baseline to analyze FAST VP behavior.
• Check FAST VP compliance reports to determine tier usage and performance requirements for the associated SGs.
• Adjust tier percentages as required to maintain a healthy balance between FC pool consumption and SATA pool spindle utilization.
• Increase the EFD capacity where EFD makes up less than 4% of the total.
• Avoid binding thin devices for any application to the EFD pool; if such binding is required, increase the EFD capacity.
• If thin devices are bound to the EFD or SATA pool and the pool is highly utilized, set the PRC to 20%.
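The EFD-share check in the recommendations above can be expressed as a quick sanity script (the function, tier names, and example capacities are hypothetical):

```python
# Hypothetical check of the tier-balance recommendation: flag an EFD
# tier that makes up less than 4% of total usable capacity.
def efd_share_pct(tier_tb: dict) -> float:
    total = sum(tier_tb.values())
    return tier_tb.get("EFD", 0.0) / total * 100.0

# Example capacities in TB (assumed, for illustration only).
tiers = {"EFD": 0.5, "FC": 5.0, "SATA": 10.0}
share = efd_share_pct(tiers)
if share < 4.0:
    print(f"EFD is only {share:.1f}% of capacity; consider adding EFD")
```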

Alert monitoring
• We have configured first and second thresholds to generate alerts and send them to Netcool, which creates incident tickets.
• Some of these thresholds are a bit low for what we would expect to see in the industry. Warnings can be removed from ticket generation.
• We need to choose settings that generate tickets for abnormalities, not for normal VMAX/server/application operations.
• We should reconsider monitoring SG response time, because it depends on the server/application I/O profile.
• For example, in certain situations it can be perfectly normal for a back-end director or FE port to be above 75% utilized during normal business operations. This utilization percentage depends very much on the workload, and it is important to set these thresholds so that abnormalities generate tickets, not normal VMAX operations.
• We should also consider requiring the condition to persist for a certain amount of time before activating the alert, for example "% busy is higher than 85% for more than 15 minutes" or similar. Using this logic when creating custom alerts will help prevent unnecessary alerts.
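The sustained-threshold idea above can be sketched in a few lines (a toy model, not Unisphere's alert engine; samples are assumed to be one-minute %busy readings):

```python
# Minimal sketch of "sustained threshold" alerting: only raise an alert
# when %busy stays above the threshold for the whole window, e.g.
# > 85% for 15 consecutive one-minute samples.
def sustained_breach(samples, threshold=85.0, window=15):
    """Return True if the last `window` samples all exceed `threshold`."""
    if len(samples) < window:
        return False
    return all(s > threshold for s in samples[-window:])

busy = [60.0] * 10 + [90.0] * 15   # a 15-minute sustained spike
print(sustained_breach(busy))       # -> True
```

A brief spike that recovers within the window never trips the check, which is exactly the noise-suppression behavior the bullet describes.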
FAST VP Capacity Report Before and After
