
In the Hybrid Cloud Broker project, we used the Netuitive tool for data analysis. It had many limitations. Many discovery tools, such as BMC Patrol, are available in the market. These tools report host resource information, such as CPU and memory utilization, at regular intervals. Netuitive read this data and, based on the configured policies, generated alarms. It also provided forecasted data.
The major limitation of forecasting in this type of tool is that the prediction is valid only if there is no change in the application deployment environment. That is, say an application is running on a host: resource data is captured and fed into Netuitive, which forecasts from it. If the application is moved to a new host, all of those predictions become meaningless, and so does the historical data. We then have to start collecting data from the beginning again. (Netuitive supported only a static environment.)
In a cloud environment it is common for applications to be dynamically moved from one ESX server to another, so we felt that the predictions made by the currently available tools do not suit the cloud environment. This motivated us to come up with our own methodology to solve the problem.
Even though applications are dynamically migrated from one host to another, we used a calibration component to calibrate the data, and then analyzed the calibrated data using statistical methods: Hotelling's T² statistic (used for checking the accuracy of the prediction) and the MYT (Mason-Young-Tracy) decomposition method, which breaks the T² value down to identify which variables caused a signal.
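As an illustration, here is a minimal sketch (in Python, not necessarily what the project used) of how Hotelling's T² can score a new interval of calibrated metrics against a baseline sample. The metric values, the alpha level, and the control-limit formula for a single future observation are standard textbook choices, not taken from the project; when T² exceeds the limit, the MYT decomposition can then be applied to attribute the signal to individual metrics.

```python
import numpy as np
from scipy import stats

def hotelling_t2(baseline, new_obs):
    """Hotelling's T^2 score of a new observation against a baseline sample.

    baseline: (n, p) array of calibrated metrics (e.g. CPU, memory, net I/O)
    new_obs:  (p,) vector for the interval being checked
    Returns (t2, ucl); ucl is the upper control limit at alpha = 0.01.
    """
    n, p = baseline.shape
    mean = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)       # p x p sample covariance
    diff = new_obs - mean
    t2 = diff @ np.linalg.inv(cov) @ diff      # squared Mahalanobis distance

    # Standard control limit for a future observation, via the F distribution.
    alpha = 0.01
    f_crit = stats.f.ppf(1 - alpha, p, n - p)
    ucl = p * (n + 1) * (n - 1) / (n * (n - p)) * f_crit
    return t2, ucl

# Illustrative usage: 100 baseline intervals of 3 metrics (synthetic data).
rng = np.random.default_rng(0)
baseline = rng.normal([50, 60, 30], [5, 6, 4], size=(100, 3))
t2, ucl = hotelling_t2(baseline, np.array([72.0, 85.0, 45.0]))
print(f"T2 = {t2:.1f}, UCL = {ucl:.1f}, alarm = {t2 > ucl}")
```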
There are many tools available to discover workload metrics such as CPU, memory, and disk I/O. Our application analyzes changes in the pattern. As part of the analysis, we looked into the trend, seasonal, and cyclical components of the pattern. (Later in the session, if needed, we will explain how we computed the trend, seasonal, and cyclical components.)
Pattern analysis helped to detect misbehavior and provide an early warning before a problem occurs, so that we can take preventive action.
If you look at the architecture diagram, it mainly has two components: a calibration component and a fault detection component. The calibration component computes the base values of CPU, memory, and network I/O utilization. (It takes hardware configuration, such as cores and sockets, into account; we have a lookup table.) The base data is computed for a hypothetical scenario in which the application executes on the base machine.
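A minimal sketch of what such a calibration step could look like; the lookup table contents, the base machine specification, and the linear capacity-scaling rule are illustrative assumptions, not the project's actual table or formula.

```python
# Hypothetical hardware lookup table: host -> cores, sockets, clock (GHz).
# Values are illustrative; the real project used its own lookup table.
HW_TABLE = {
    "host-a": {"cores": 8,  "sockets": 2, "ghz": 2.4},
    "host-b": {"cores": 16, "sockets": 2, "ghz": 3.0},
}
BASE = {"cores": 4, "sockets": 1, "ghz": 2.0}   # assumed base machine

def calibrate_cpu(util_pct, host):
    """Rescale CPU utilization as if the workload ran on the base machine.

    Assumes utilization scales inversely with total compute capacity
    (cores * sockets * clock) -- a simplifying assumption, not a claim
    about the project's actual calibration formula.
    """
    hw = HW_TABLE[host]
    cap = hw["cores"] * hw["sockets"] * hw["ghz"]
    base_cap = BASE["cores"] * BASE["sockets"] * BASE["ghz"]
    return util_pct * cap / base_cap

# 20% utilization on the larger host-b maps to a much higher base figure,
# so readings stay comparable after the application migrates.
print(calibrate_cpu(20.0, "host-b"))   # 20 * 96 / 8 = 240.0
```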
Forecasting is done using the historical data: from it we compute the trend, seasonal, and cyclical components. We used a multiplicative model for prediction, in which the forecast is the product of the trend, seasonal, and cyclical components and an error component (Y = T × S × C × E). The steps are listed below, with a code sketch after the list:
- Decide the season length (L), and compute the centered moving average over that length.
- Remove the seasonal and irregular movements from the original data (the centered moving average does this), isolating the trend and cyclical components.
- Compute the seasonal factors by studying the pattern displayed over the season length (L).
- Remove the seasonal factors and the cyclical component to compute the trend.
- Determine the cyclical component by calculating the difference between the actual values and the trend.
- Calculate the error component after separating the trend, seasonal, and cyclical components from the actual data.
- Forecast the trend, seasonal, and cyclical components independently, and then aggregate them using the multiplicative model to compute the final forecast.
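The steps above can be sketched roughly as follows; the linear trend fit, the edge handling of the moving average, and the naive carry-forward of the last cyclical value are simplifying assumptions of this sketch, not the project's exact method.

```python
import numpy as np

def multiplicative_forecast(y, L, h):
    """Decompose y = T * S * C * E and forecast h steps ahead (sketch).

    y: 1-D series of resource readings, L: season length, h: horizon.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n)

    # Step 1: centered moving average over one season (2xL MA when L is
    # even); averaging the season away leaves trend * cyclical (TC).
    w = (np.r_[0.5, np.ones(L - 1), 0.5] / L) if L % 2 == 0 else np.ones(L) / L
    pad = len(w) // 2
    cma = np.convolve(np.pad(y, pad, mode="edge"), w, mode="valid")

    # Steps 2-3: y / CMA isolates seasonal * error; averaging the ratios
    # at each position within the season gives the seasonal factors S.
    ratio = y / cma
    s = np.array([ratio[i::L].mean() for i in range(L)])
    s *= L / s.sum()                        # normalize factors to mean 1.0
    S = s[t % L]

    # Step 4: deseasonalize and fit a straight line for the trend T
    # (a linear trend is an assumption of this sketch).
    slope, intercept = np.polyfit(t, y / S, 1)
    T = intercept + slope * t

    # Step 5: cyclical component C = smoothed series relative to the trend.
    C = cma / T

    # Step 6: error component E = residual after removing T, S, and C.
    E = y / (T * S * C)                     # kept for inspecting accuracy

    # Step 7: forecast each component and recombine multiplicatively.
    tf = np.arange(n, n + h)
    T_f = intercept + slope * tf            # extrapolate the trend line
    S_f = s[tf % L]                         # seasonal pattern repeats
    C_f = C[-1]                             # naive carry-forward of cycle
    return T_f * S_f * C_f

# Illustrative usage: two weeks of hourly CPU with a daily (L = 24) season.
rng = np.random.default_rng(1)
t = np.arange(24 * 14)
y = ((40 + 0.05 * t)
     * (1 + 0.3 * np.sin(2 * np.pi * t / 24))
     * rng.normal(1, 0.02, t.size))
print(multiplicative_forecast(y, L=24, h=6))
```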
