
General types of noise

• Mechanical noise
• Electronic noise
• Thermal noise
• Intermodulation noise
• Impulse noise
• Transit-time noise.
Signal Processing

• Signal processing focuses on analysing, modifying, and synthesizing signals such as sound, images, and scientific measurements.
Signal processing
• Signal processing is the analysis, interpretation, and manipulation of signals such as sound, images, time-varying measurement values, and sensor data.
• Examples include biological data such as electrocardiograms, control system signals, and telecommunication transmission signals such as radio signals, among many others.
Need of Signal Processing

• When a signal is transmitted from one point to another, there is every possibility of contamination or deformation of the signal by external or internal noise.
• To retrieve the original signal at the receiver, suitable filters have to be used; that is, the signal is processed to recover the pure signal.
Commonly used signal analysis techniques (Data Science)
Statistical analysis

• Statistical analysis is the collection and interpretation of data in order to uncover patterns and trends. It is a component of data analytics.
• Statistical analysis can be used in situations like gathering research interpretations, statistical modeling, or designing surveys and studies.
Statistical analysis

• When an organization wants to organize its data and predict future trends based on that information, statistical analysis lets the data be examined as a whole, as well as broken down into individual samples.
Statistical analysis
Five steps are taken during the process, including:

• Describe the nature of the data to be analyzed.


• Explore the relation of the data to the underlying population.
• Create a model to summarize understanding of how the data relates
to the underlying population.
• Prove (or disprove) the validity of the model.
• Employ predictive analytics to anticipate future trends.
Statistical analysis
• These five steps are basic, yet effective, for reaching accurate data-driven conclusions.
Magnitude analysis
• Scale analysis (or order-of-magnitude analysis) is a powerful tool used in
the mathematical sciences for the simplification of equations with many
terms.
• First the approximate magnitude of individual terms in the equations is
determined. Then some negligibly small terms may be ignored.

[Note: the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs.]
• Consider, for example, the momentum equation of the Navier–Stokes equations in the vertical coordinate direction of the atmosphere.
• Here R is the Earth radius, Ω is the frequency of rotation of the Earth, g is gravitational acceleration, φ is latitude, ρ is the density of air and ν is the kinematic viscosity of air (we can neglect turbulence in the free atmosphere).
• At synoptic scale we can expect horizontal velocities of about U = 10¹ m·s⁻¹ and vertical velocities of about W = 10⁻² m·s⁻¹. The horizontal scale is L = 10⁶ m and the vertical scale is H = 10⁴ m. The typical time scale is T = L/U = 10⁵ s. Pressure differences in the troposphere are ΔP = 10⁴ Pa and the density of air is ρ = 10⁰ kg·m⁻³. Other physical properties are approximately:
• R = 6.378 × 10⁶ m; Ω = 7.292 × 10⁻⁵ rad·s⁻¹; ν = 1.46 × 10⁻⁵ m²·s⁻¹; g = 9.81 m·s⁻². These values give estimates of the different terms in the equation.
• Now we can introduce these scales and their values into equation (A1):
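Equation (A1) itself is not reproduced here, but the order-of-magnitude bookkeeping can be sketched from the scales listed above. The short Python sketch below prints the approximate power of ten of a few representative terms; the term forms are assumptions taken from a standard synoptic-scale analysis of the vertical momentum equation, not from the missing equation.

```python
import math

# Scales stated above (synoptic scale, SI units)
U = 1e1        # horizontal velocity scale, m/s
W = 1e-2       # vertical velocity scale, m/s
L = 1e6        # horizontal length scale, m
H = 1e4        # vertical length scale, m
dP = 1e4       # pressure difference scale, Pa
rho = 1.0      # air density scale, kg/m^3
R = 6.378e6    # Earth radius, m
Omega = 7.292e-5   # Earth rotation rate, rad/s
g = 9.81           # gravitational acceleration, m/s^2

# Order-of-magnitude estimates of representative terms (term forms assumed
# from standard scale analysis, not taken from the missing equation (A1)).
terms = {
    "vertical acceleration  U*W/L":      U * W / L,
    "curvature term         U^2/R":      U ** 2 / R,
    "Coriolis term          2*Omega*U":  2 * Omega * U,
    "pressure gradient      dP/(rho*H)": dP / (rho * H),
    "gravity                g":          g,
}

for name, value in terms.items():
    print(f"{name:40s} ~ 10^{round(math.log10(value)):+d} m/s^2")
```

The largest terms dominate the balance, and the much smaller terms can then be neglected, which is exactly the simplification that scale analysis is after.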
Standard deviation
• Standard deviation is a method of statistical analysis that measures
the spread of data around the mean.
• When you’re dealing with a high standard deviation, this points to
data that’s spread widely from the mean.
• Similarly, a low deviation shows that most data lies close to the mean, which can also be called the expected value of the set.
• Standard deviation is mainly used when you need to determine the
dispersion of data points (whether or not they’re clustered).
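A minimal Python illustration of the point above: two hypothetical samples share the same mean, but their standard deviations reveal very different spreads around it (the data values are made up for the example).

```python
import statistics

# Hypothetical data sets: same mean, different spread
tight = [9.8, 10.1, 9.9, 10.2, 10.0]    # low spread around the mean
spread = [2.0, 18.0, 5.0, 15.0, 10.0]   # high spread around the mean

for name, data in [("tight", tight), ("spread", spread)]:
    mean = statistics.mean(data)
    sd = statistics.stdev(data)          # sample standard deviation
    print(f"{name}: mean = {mean:.2f}, standard deviation = {sd:.2f}")
```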
Probability density
• A probability density function (PDF) is a statistical expression that defines a probability distribution (the likelihood of an outcome) for a continuous random variable (e.g., the return of a stock or ETF), as opposed to a discrete random variable.
• The difference with a discrete random variable is that, for a discrete variable, you can identify an exact probability for each value of the variable.

• The normal distribution is a common example of a PDF, forming the well-known bell curve shape.
• Example: In finance, traders and investors use PDFs to understand how price returns are distributed in order to evaluate their risk and expected return profile.
• They are typically depicted on a graph, with a normal bell curve indicating neutral market risk, and a bell at
either end indicating greater or lesser risk/reward.
• When the PDF is graphically represented, the area under the curve over an interval indicates the probability that the variable falls within that interval.
• More precisely, since the absolute likelihood of a continuous random variable taking on any specific value is zero, due to the infinite set of possible values available, the value of a PDF is used to determine the likelihood of the random variable falling within a specific range of values.
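As a small illustration of the last point, the sketch below uses a normal distribution (hypothetical mean and standard deviation chosen to look like daily price returns) and computes the probability of the variable falling within a range as the area under the PDF, via `scipy.stats.norm`.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical daily returns modelled as normal: mean 0.05%, sd 1.2%
mu, sigma = 0.0005, 0.012

# Evaluate the PDF on a grid (a density, not a probability)
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 2001)
pdf = norm.pdf(x, loc=mu, scale=sigma)

# Probability that the return falls between -1% and +1% = area under the curve
p_range = norm.cdf(0.01, mu, sigma) - norm.cdf(-0.01, mu, sigma)
print(f"P(-1% < return < +1%) = {p_range:.3f}")

# Rough numerical check of the same area from the sampled PDF
mask = (x >= -0.01) & (x <= 0.01)
print(f"Riemann-sum estimate   = {np.sum(pdf[mask]) * (x[1] - x[0]):.3f}")
```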
Skewness
• Skewness is a measure of the asymmetry or distortion of a symmetric distribution.
• It measures the deviation of the given distribution of a random variable from a symmetric distribution, such as the normal distribution.
• A normal distribution is without any skewness, as it is symmetrical on
both sides.
• Hence, a curve is regarded as skewed if it is shifted towards the right
or the left.
Skewness
Types of Skewness
Positive Skewness
• If the given distribution is shifted to the left, with its tail on the right side, it is a positively skewed distribution. It is also called a right-skewed distribution.
• As the name suggests, a positively skewed distribution has a skewness value of more than zero. Since the skew of the given distribution is to the right, the mean value is greater than the median and moves towards the right, and the mode occurs at the highest frequency of the distribution. A tail refers to the tapering of the curve away from the data points on the other side.
Negative Skewness

• If the given distribution is shifted to the right, with its tail on the left side, it is a negatively skewed distribution.
• It is also called a left-skewed distribution. The skewness value of any distribution showing a negative skew is always less than zero.
• The skewness of the given distribution is on the left; hence, the mean value is less than the median and moves towards the left, and the mode occurs at the highest frequency of the distribution.
Measuring Skewness
• Skewness can be measured using several methods; however, Pearson
mode skewness and Pearson median skewness are the two frequently
used methods.
• The Pearson mode skewness is used when a strong mode is exhibited
by the sample data.
• If the data includes multiple modes or a weak mode, Pearson’s
median skewness is used.
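A small sketch of the two measures on a made-up sample, assuming the usual definitions: Pearson mode skewness = (mean − mode)/standard deviation, and Pearson median skewness = 3(mean − median)/standard deviation.

```python
import statistics

# Made-up sample with a right tail; both Pearson measures are computed for comparison
data = [2, 3, 3, 4, 5, 6, 7, 9, 12, 15]

mean = statistics.mean(data)
median = statistics.median(data)
mode = statistics.mode(data)
sd = statistics.stdev(data)

# Pearson mode skewness:   (mean - mode) / standard deviation
# Pearson median skewness: 3 * (mean - median) / standard deviation
print("mode skewness  :", (mean - mode) / sd)
print("median skewness:", 3 * (mean - median) / sd)
```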
Extreme values
• The Extreme Value Theorem guarantees both a maximum and
minimum value for a function under certain conditions.
• Using the Extreme Value Theorem
• Let us learn how to use the theorem with the help of an example.
• Consider the function f(x) = x³ − 27x + 2. Find the maximum and minimum values of f(x) on [0, 4] using the extreme value theorem.

• Solution: Since f(x) = x³ − 27x + 2 is differentiable, it is continuous. Since [0, 4] is closed and bounded, we can apply the extreme value theorem. Differentiate f(x) = x³ − 27x + 2.
• f'(x) = 3x² − 27
• Setting f'(x) = 0, we have
⇒ 3x² − 27 = 0
⇒ 3x² = 27
⇒ x² = 27/3 = 9
⇒ x = −3, 3
• So x = −3 and x = 3 are the critical points of f, but only x = 3 lies in the interval [0, 4]. Now we find the value of f(x) at this critical point and at the endpoints of the interval.
• f(3) = (3)³ − 27(3) + 2 = 27 − 81 + 2 = −52
• f(0) = (0)³ − 27(0) + 2 = 2
• f(4) = (4)³ − 27(4) + 2 = 64 − 108 + 2 = −42
• So the minimum value of f(x) on [0, 4] is −52 (at x = 3) and its maximum value on [0, 4] is 2 (at x = 0).
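A quick numerical check of this worked example (the function and interval are taken from the example above):

```python
# Evaluate f at the endpoints of [0, 4] and at the critical point x = 3
def f(x):
    return x ** 3 - 27 * x + 2

candidates = [0, 3, 4]                 # endpoints plus the critical point inside [0, 4]
values = {x: f(x) for x in candidates}
print(values)                          # {0: 2, 3: -52, 4: -42}
print("max:", max(values.values()), "min:", min(values.values()))
```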
Mean
• The mean, more commonly referred to as the average, is a basic quantity used in statistical analysis.
• To calculate the mean, you add up a list of numbers and then divide the sum by the number of items on the list.
• This method allows you to determine the overall trend of a data set, as well as to obtain a fast and concise view of the data.
• The statistical mean identifies the central point of the data being processed. The result is referred to as the mean of the data provided.
Regression

• Regression describes the relationship between a dependent variable (the data you're looking to measure) and an independent variable (the data used to predict the dependent variable).
• It can also be explained as how one variable affects another, or how changes in one variable trigger changes in another; essentially cause and effect. It implies that the outcome is dependent on one or more variables.
• The line used in regression analysis graphs and charts signifies whether the relationships between the variables are strong or weak, in addition to showing trends over a specified period of time.
Regression
y = a + bx

a refers to the y-intercept, the value of y when x = 0
x is the independent variable
y is the dependent variable
b refers to the slope, or rise over run
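A minimal least-squares fit of y = a + bx using NumPy, with made-up data, to show how the intercept a and slope b are estimated in practice:

```python
import numpy as np

# Made-up data: x is the independent variable, y the dependent variable
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])

# Least-squares fit of y = a + b*x (np.polyfit returns the highest degree first)
b, a = np.polyfit(x, y, deg=1)
print(f"intercept a = {a:.2f}, slope b = {b:.2f}")
print("prediction at x = 7:", a + b * 7)
```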
Hypothesis testing

• In statistical analysis, hypothesis testing (often carried out with t-tests, hence sometimes called “T testing”) is key to testing two sets of random variables within the data set.
• This method is all about testing whether a certain argument or conclusion is true for the data set.
• It allows the data to be compared against various hypotheses and assumptions.
• It can also assist in forecasting how decisions made could affect the analysis.
Hypothesis testing
• In statistics, a hypothesis test determines some quantity under a
given assumption.
• The result of the test interprets whether the assumption holds or
whether the assumption has been violated.
• This assumption is referred to as the null hypothesis, or hypothesis 0 (H0). Any other hypothesis that would be in violation of hypothesis 0 is called the alternative hypothesis, or hypothesis 1 (H1).
Hypothesis testing
• The result of a statistical hypothesis test is interpreted through a quantity referred to as the p-value: the probability of obtaining data at least as extreme as that observed, assuming the null hypothesis is true.
• A small p-value is evidence against the null hypothesis, while a large p-value means the data are consistent with it.
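As a sketch of how such a test might look in practice, the snippet below runs a two-sample t-test with SciPy on two made-up groups of noise-level readings; the group means, sizes and the 5% significance threshold are assumptions chosen only for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical noise levels (dB) measured under two conditions
group_a = rng.normal(loc=68.0, scale=2.0, size=30)
group_b = rng.normal(loc=70.0, scale=2.0, size=30)

# Null hypothesis H0: the two groups have the same mean
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p-value = {p_value:.4f}")

# Reject H0 at the 5% significance level if p < 0.05
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```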
Sample size determination

• When it comes to analyzing data for statistical analysis, sometimes the dataset is simply too large, making it difficult to collect accurate data for each element of the dataset. When this is the case, most go the route of analyzing a sample (a smaller subset) of the data, which is called sample size determination.
• To do this correctly, you'll need to determine the right sample size to be accurate. If the sample size is too small, you won't have valid results at the end of your analysis.
Sample size determination
• To come to this conclusion, you'll use one of the many data sampling
methods. You could do this by sending out a survey to your
customers, and then use the simple random sampling method to
choose the customer data to be analyzed at random.
• On the other hand, a sample size that is too large can result in wasted
time and money. To determine the sample size, you may examine
aspects like cost, time, or the convenience of collecting data.
Time Domain analysis
• A time domain analysis is an analysis of physical signals, mathematical functions, or time series of economic or environmental data, in reference to time.

Examples of time domain data
• Time domain refers to the variation of the amplitude of a signal with time.
• For example, consider a typical electrocardiogram (ECG). If the doctor maps the heartbeat against time, say the recording is done for 20 minutes, we call it a time domain signal.
Correlation
• Correlation refers to a process for establishing the relationships between two variables. A simple way to get a general idea about whether or not two variables are related is to plot them on a scatter plot.
• While there are many measures of association for variables measured at the ordinal or higher level of measurement, correlation is the most commonly used approach.
Correlation
• Correlation is a statistical measure (expressed as a number) that
describes the size and direction of a relationship between two or
more variables.
• A positive correlation means that both variables change in the same
direction.
• A negative correlation means that the variables change in opposite
directions.
Positive correlation/ Negative correlation
The four types of correlation
coefficients are given by:
• Pearson Correlation Coefficient
• Linear Correlation Coefficient
• Sample Correlation Coefficient
• Population Correlation Coefficient
Correlation
• A correlation coefficient quite close to 0, but either positive or negative,
implies little or no relationship between the two variables. A correlation
coefficient close to plus 1 means a positive relationship between the two
variables, with increases in one of the variables being associated with
increases in the other variable.
• A correlation coefficient close to -1 indicates a negative relationship between
two variables, with an increase in one of the variables being associated with
a decrease in the other variable. A correlation coefficient can be produced
for ordinal, interval or ratio level variables, but has little meaning for
variables which are measured on a scale which is no more than nominal.
• For ordinal scales, the correlation coefficient can be calculated by using
Spearman’s rho. For interval or ratio level scales, the most commonly used
correlation coefficient is Pearson’s r, ordinarily referred to as simply the
correlation coefficient.
Pearson Correlation Coefficient

• The most common formula is the Pearson correlation coefficient, used for linear dependency between data sets. The value of the coefficient lies between −1 and +1. When the coefficient is zero, the data are considered unrelated. A value of +1 means the data are perfectly positively correlated, and −1 indicates a perfect negative correlation.
Pearson Correlation Coefficient
Linear Correlation Coefficient Formula

• The formula for the linear correlation coefficient is given by:

r = [n Σxy − (Σx)(Σy)] / √([n Σx² − (Σx)²] [n Σy² − (Σy)²])
Sample Correlation Coefficient Formula

• The formula is given by:


• r_xy = S_xy / (S_x · S_y)

Where
• S_x and S_y are the sample standard deviations
• S_xy is the sample covariance.
Population Correlation Coefficient Formula

• The population correlation coefficient uses σ_x and σ_y as the population standard deviations and σ_xy as the population covariance.

ρ_xy = σ_xy / (σ_x · σ_y)
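A short NumPy check that the sample formula above and the built-in routine agree (the paired observations are made up for the example):

```python
import numpy as np

# Made-up paired observations
x = np.array([12, 10, 15, 8, 16, 14, 9, 11], dtype=float)
y = np.array([17, 16, 22, 14, 23, 20, 15, 17], dtype=float)

# Pearson r from the sample covariance and sample standard deviations
sxy = np.cov(x, y, ddof=1)[0, 1]
r = sxy / (np.std(x, ddof=1) * np.std(y, ddof=1))

print(f"r (manual)      = {r:.3f}")
print(f"r (np.corrcoef) = {np.corrcoef(x, y)[0, 1]:.3f}")
```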
Correlation Example
Years of Education and Age of Entry to Labour Force: Table 1 gives the number of years of formal education (X) and the age of entry into the labour force (Y) for 12 males from the Regina Labour Force Survey. Both variables are measured in years, a ratio level of measurement and the highest level of measurement. All of the males are aged close to 30, so most of these males are likely to have completed their formal education.

Table 1. Years of Education and Age of Entry into Labour Force for 12 Regina Males
• Since most males enter the labour force soon after they leave formal
schooling, a close relationship between these two variables is expected.
By looking through the table, it can be seen that those respondents who
obtained more years of schooling generally entered the labour force at an
older age. The mean years of schooling are x̄ = 12.4 years and the mean
age of entry into the labour force is ȳ= 17.8, a difference of 5.4 years.

• This difference roughly reflects the age of entry into formal schooling, that is, age five or six. It can be seen, though, that the relationship between years of schooling and age of entry into the labour force is not perfect. Respondent 11, for example, has only 8 years of schooling but did not enter the labour force until the age of 18. In contrast, respondent 5 has 20 years of schooling but entered the labour force at the age of 18. The scatter diagram provides a quick way of examining the relationship between X and Y.
Covariance
• Covariance is a statistical term that refers to a systematic relationship between two random variables, in which a change in one variable is reflected by a change in the other.
• Analysis of covariance (ANCOVA) is a method for comparing sets of data that consist of two variables (treatment and effect, with the effect variable being called the variate), when a third variable (called the covariate) exists that can be measured but not controlled and that has a definite effect on the variable of interest.
• A positive covariance states that two assets move together and give returns in the same direction, while a negative covariance means the returns move in opposite directions.
• Covariance is usually measured by analyzing standard deviations from the expected return, or it can be obtained by multiplying the correlation between the two variables by the standard deviation of each variable.
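A brief NumPy sketch of positive and negative covariance, and of the stated relation between covariance, correlation and the standard deviations (the asset returns are made up for the example):

```python
import numpy as np

# Made-up daily returns of three assets
a = np.array([0.5, 1.2, -0.3, 0.8, 1.5])
b = np.array([0.6, 1.0, -0.2, 0.7, 1.4])    # moves with a    -> positive covariance
c = np.array([-0.4, -1.1, 0.2, -0.6, -1.3]) # moves against a -> negative covariance

print("cov(a, b) =", np.cov(a, b, ddof=1)[0, 1])
print("cov(a, c) =", np.cov(a, c, ddof=1)[0, 1])

# Covariance equals the correlation times the two standard deviations
corr_ab = np.corrcoef(a, b)[0, 1]
print("corr * sa * sb =", corr_ab * np.std(a, ddof=1) * np.std(b, ddof=1))
```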
Impulse response
• In signal processing, the impulse
response, or impulse response
function (IRF), of a dynamic
system is its output when
presented with a brief input
signal, called an impulse.
• More generally, an impulse
response refers to the reaction of
any dynamic system in response to
some external change.
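A minimal sketch of the idea for a discrete-time system: feed a unit impulse into a simple first-order filter (the coefficients are hypothetical, chosen only for illustration) and the output is, by definition, its impulse response.

```python
import numpy as np
from scipy.signal import lfilter

# First-order low-pass system y[n] = 0.2*x[n] + 0.8*y[n-1] (hypothetical coefficients)
b, a = [0.2], [1.0, -0.8]

impulse = np.zeros(20)
impulse[0] = 1.0                   # brief input signal: a unit impulse

h = lfilter(b, a, impulse)         # system output = impulse response
print(np.round(h[:6], 4))          # decaying exponential: 0.2, 0.16, 0.128, ...
```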
Time weighting – Fast -Slow
• If the letter is F, S or I, this represents the time weighting, with F =
fast, S = slow, I =impulse.
• Time weighting is applied so that levels measured are easier to read
on a sound level meter.
• The time weighting damps sudden changes in level, thus creating a
smoother display.
Graphs of fast, slow, and impulse time weightings applied so that sound levels measured are easier to read on a sound level meter
Time weighting

• The graph indicates how this works. In this example, the input signal
suddenly increases from 50 dB to 80 dB, stays there for 6 seconds, then
drops back suddenly to the initial level.
• A slow measurement (yellow line) will take approximately 5 seconds (attack
time) to reach 80 dB and around 6 seconds (decay time) to drop back down
to 50 dB.
• S is appropriate when measuring a signal that fluctuates a lot.
Time weighting
• A fast measurement (green line) is quicker to react. It will take
approximately 0.6 seconds to reach 80 dB and just under 1 second to
drop back down to 50 dB.
• F may be more suitable where the signal is less impulsive.
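The behaviour described above can be sketched as an exponential (running) average of the squared sound pressure. The snippet below assumes the conventional time constants of 125 ms for F and 1 s for S, and a made-up 50 dB → 80 dB → 50 dB test signal; it illustrates the principle and is not a standard-compliant meter.

```python
import numpy as np

P0 = 2e-5          # reference sound pressure, Pa
fs = 8000          # sample rate, Hz (hypothetical)

def time_weighted_level(p, tau):
    """Exponentially time-weighted sound level (dB) of a pressure signal p (Pa)."""
    alpha = 1.0 - np.exp(-1.0 / (fs * tau))      # per-sample smoothing factor
    ms = P0 ** 2                                 # start from 0 dB (brief start-up transient)
    out = np.empty_like(p)
    for i, pi in enumerate(p):
        ms = (1 - alpha) * ms + alpha * pi ** 2  # running mean-square pressure
        out[i] = 10 * np.log10(ms / P0 ** 2)
    return out

# Made-up input: 50 dB, jumping to 80 dB for 6 s, then back to 50 dB
t = np.arange(0, 12, 1 / fs)
level = np.where((t > 2) & (t < 8), 80.0, 50.0)
p = np.sqrt(2) * P0 * 10 ** (level / 20) * np.sin(2 * np.pi * 1000 * t)

fast = time_weighted_level(p, tau=0.125)   # F: 125 ms time constant, reacts quickly
slow = time_weighted_level(p, tau=1.0)     # S: 1 s time constant, smoother display
```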
Decision tree analysis
• Decision tree analysis involves visually outlining the potential
outcomes and consequences of a complex decision.
• These trees are particularly helpful for analyzing quantitative data and
making a decision based on numbers.

Analyses similar in spirit to decision tree analysis are used in NVH, including:

• Time weighting analysis
• Frequency weighting analysis
Time weighting
• The decision to use fast or slow is often reached by what is prescribed
in a standard or a law.
• However, the following can be used as a guideline: The slow
characteristic is mainly used in situations where the reading with the
fast response fluctuates too much (more than about 4 dB) to give a
reasonably well-defined value.
• Modern digital displays largely overcome the problem of fluctuating
analogue meters by indicating the maximum R.M.S. value for the
preceding second
Time weighting

• An impulse measurement (blue line) will take approximately 0.3 seconds to reach 80 dB and over 9 seconds to drop back down to 50 dB.
• The impulse response, I, can be used in situations where there are sharp impulsive noises to be measured, such as fireworks or gunshots.
Leq
• Leq = equivalent. Equivalent values are a form of time weighting that is easier to read on a display than the instantaneous sound level.
L AT or L eq : Equivalent continuous sound level
• Leq = Equivalent values are a form of time weighting that is easier to
read on a display than the instantaneous sound level.
• If you look at these graphs of sound level over time, the area under
the blue curve represents the energy.
• The horizontal red line, drawn to represent the same area as under the blue curve, gives us the Leq. That is the equivalent value, or average of the energy, over the entire graph.
L AT or L eq : Equivalent continuous sound level
• LAeq is not always a straight line. If the LAeq is plotted as the
equivalent from the beginning of the graph to each of the
measurement points, the plot is shown in the second graph.
L AT or L eq : Equivalent continuous sound level
• Sound exposure level, in decibels, is not much used in industrial measurement. Instead, the time-averaged value is used.
• This time-averaged sound level, usually called the 'equivalent continuous sound level', has the formal symbol L AT as described in the definitions of IEC 61672-1, where many correct formal symbols and their common abbreviations are given. These mainly follow the formal ISO acoustic definitions.
• However, for mainly historical reasons, L AT is commonly referred to as Leq.
L AT or L eq : Equivalent continuous sound level
• Formally, L AT is 10 times the base-10 logarithm of the ratio of the mean-square A-weighted sound pressure during a stated time interval to the square of the reference sound pressure, and there is no time constant involved.
• To measure L AT, an integrating-averaging meter is needed; in concept this takes the sound exposure, divides it by time and then takes the logarithm of the result.
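A minimal illustration of the energy averaging behind Leq, using made-up one-second A-weighted level samples (real meters integrate the squared pressure directly, as described above; this sketch works from level samples instead):

```python
import numpy as np

# Hypothetical A-weighted levels (dB), one sample per second
levels_db = np.array([62.0, 65.0, 71.0, 80.0, 78.0, 66.0, 63.0, 61.0])

# Leq (L AT): energy average of the levels, i.e. 10*log10 of the mean of 10^(L/10)
leq = 10 * np.log10(np.mean(10 ** (levels_db / 10)))
print(f"Leq = {leq:.1f} dB")

# Note: this is higher than the arithmetic mean of the levels,
# because the loudest seconds dominate the energy average.
print(f"arithmetic mean = {levels_db.mean():.1f} dB")
```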
L AT or L eq : Equivalent continuous sound level
• The analysis of the regression equations calculated on an hourly basis shows that the correlation coefficients between L10 and Leq do not show any significant variation through the 24 h of the day (from 0.95 to 0.99): the corresponding equation parameters and standard deviations show only minor changes through the 24 h period.
• However, the correlation coefficients between L90 and Leq range from 0.65 during the night to 0.90 during the day.
Lmax and Lmin

• If the words max or min appear in the label, this simply represents
the maximum or minimum value measured over a certain period of
time.
Frequency domain
Difference between time-domain and frequency domain
• As stated earlier, a time-domain graph displays the changes in a signal
over a span of time, and frequency domain displays how much of the
signal exists within a given frequency band concerning a range of
frequencies.
Frequency domain method
• In engineering and statistics, frequency domain is a term used to
describe the analysis of mathematical functions or signals with
respect to frequency, rather than time.
• The frequency domain (FD) method converts the signal from the time
domain to the frequency domain by a fast Fourier transform (FFT),
while the time domain (TD) method calculates peak-to-peak value of
the pulse waveform directly from the time samples.
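A small NumPy sketch of exactly this conversion: a made-up time-domain signal containing 50 Hz and 120 Hz components is transformed with an FFT, and the two components appear as peaks in the frequency domain.

```python
import numpy as np

fs = 1000                              # sample rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)          # 1 second of data

# Time-domain signal: 50 Hz and 120 Hz components plus a little noise
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Convert to the frequency domain with an FFT
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
amplitude = 2 * np.abs(X) / t.size     # single-sided amplitude spectrum

# The two dominant bins sit at 50 Hz and 120 Hz
peaks = freqs[np.argsort(amplitude)[-2:]]
print(sorted(peaks))
```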
Frequency or Spectral analysis
• A basic noise or vibration meter would thus provide a single root-
mean-square level of the time history measured over a wide
frequency band which is defined by the limits of the meter itself.
• The single root-mean-square levels of the noise or vibration signals generally represent the cumulative total of many single-frequency waves, since the time histories can be synthesised by adding single-frequency (sine) waves together using Fourier analysis procedures.
• Quite often, it is desirable for the measurement signal to be
converted from the time to the frequency domain, so that the various
frequency components can be identified, and this involves frequency
or spectral analysis.
Frequency domain
Example:
• The frequency domain allows a representation of the qualitative behavior of a system, as well as characteristics of the way the system responds to changes in bandwidth, gain, phase shift, harmonics, etc.
• A discipline in which the frequency domain is used for graphical representation is noise and vibration.
• NVH engineers often display a signal in the frequency domain in order to better understand the shape and character of the signal.
Frequency analysis
• Vibration is an oscillating motion about an equilibrium, so most vibration analysis looks to determine the rate of that oscillation, or the frequency.
• The number of times a complete motion cycle occurs during a period of one second is the vibration's frequency and is measured in hertz (Hz).
Frequency domain
Frequency analysis
• Quantitatively describing the characteristics of a set of data is called descriptive statistics; frequency analysis is part of it.
• In statistics, frequency is the number of times an event occurs.
• Frequency analysis is an important area of statistics that deals with the number of occurrences (frequency) and analyzes measures of central tendency, dispersion, percentiles, etc.
• "Pattern approved" sound level meters typically offer noise
measurements with A, C and Z frequency weighting
Applications of Frequency Domain

• For example, audible sound sources exist in the range of 20–20,000 Hz, and some frequencies are harder for the human ear to withstand.
• The frequency 3,400 Hz is a harsh frequency (the sound of babies crying), and the human ear is specifically tuned to respond viscerally to that sound.
• An NVH engineer may reduce the strength of that frequency in the frequency domain using a sound equalizer (a cancellation technique).
• By displaying the audio signal in the frequency domain, an engineer can boost and reduce signals to make the sounds more pleasant for the human ear.
Applications of Frequency Domain
• Humans can detect sounds in a frequency range from about 20 Hz to
20 kHz. (Human infants can actually hear frequencies slightly higher
than 20 kHz, but lose some high-frequency sensitivity as they mature;
the upper limit in average adults is often closer to 15–17 kHz.)
Instrument for frequency domain analysis
• Signals can be measured in the laboratory in either domain, but using
different instruments.
• While the oscilloscope is the most common instrument for time
domain measurements, the spectrum analyzer is the most common
instrument for frequency domain measurements.
Frequency analysis

• Three types of measures are used in frequency analysis: measures of central tendency, measures of dispersion, and percentile values.
• The most popular measures of dispersion used for frequency analysis are Standard Deviation, Variance and Range.
Measures of Central Tendency
• It is a single measure that tries to describe the set of data through a
value that represents the central position within that data set.
• Most popular measures of central tendency used for frequency
analysis are Mean, Median and Mode.
• While the mean is the average value of the data set
• The median is the middle observation (observation which has an
equal number of values lying above and below it) in the data set.
• Mode is the value that occurs the most number of times in a data
set.
Measures of Dispersion
• These reflect the spread or variability of data within a data set. Most
popular measures of dispersion used for frequency analysis are
Standard Deviation, Variance and Range.
Percentile Values
• A percentile value shows what percent of values in a data set fall below a certain value.
• Frequency Analysis commonly uses percentile values like Quartiles,
Deciles, Percentiles, etc.
• While the 10th percentile value shows that 10% of the observations
fall below it in a data set, it is also called the 1st Decile (where the
data set is divided into 10 Deciles at intervals of 10% each).
• Similarly the 25th, 50th and 75th percentiles are also called the 1st,
2nd and 3rd Quartile respectively (where the data set is divided into 4
Quartiles at intervals of 25% each).
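The three groups of measures can be computed directly from a made-up data set; the sketch below uses NumPy and the standard-library statistics module.

```python
import numpy as np
from statistics import mode

# Made-up data set of measured values
data = np.array([12, 15, 15, 17, 18, 19, 21, 22, 25, 30])

# Measures of central tendency
print("mean  :", data.mean())
print("median:", np.median(data))
print("mode  :", mode(data.tolist()))

# Measures of dispersion
print("std   :", data.std(ddof=1))
print("var   :", data.var(ddof=1))
print("range :", data.max() - data.min())

# Percentile values: quartiles and the 1st decile
print("Q1, Q2, Q3:", np.percentile(data, [25, 50, 75]))
print("1st decile:", np.percentile(data, 10))
```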
Options for Vibration Analysis Programming
• There are a lot of different ways to analyze shock and vibration data. You can use standalone software packages or, my
preference, you can develop your own analysis program or algorithm that does exactly what you need. After all, in the
world of shock and vibration testing, each application is a little different.
• Even within the world of computational programs there are many different options, each with their own benefits and
drawbacks. So in this post we'll discuss 7 well known computing platforms that work great for analyzing shock and
vibration data:

• LabVIEW
• MATLAB
• Python
• enDAQ Cloud
• GNU Octave
• Scilab
• FreeMat
Vibration Analysis Software Packages
Free Standalone Software Options
• 1. VibrationData ToolBox
• 2. enDAQ Lab
• 3. enDAQ Cloud

Analysis Software Products


• 4. DADiSP
• 5. DPlot
• 6. m + p International
• 7. VibrationVIEW
• 8. Bruel and Kjaer
• 9. ProAnalyst
• 10. FEMtools
Reference
• https://blog.endaq.com/vibration-analysis-fft-psd-and-
spectrogram#:~:text=Vibration%20is%20an%20oscillating%20motion,
measured%20in%20hertz%20(Hz).
Campbell diagram
• A Campbell diagram plot represents a system's response spectrum as
a function of its oscillation regime. It is named for Wilfred Campbell,
who introduced the concept.
• It is also called an interference diagram
• The Campbell diagram is an overall or bird's-eye view of regional
vibration excitation that can occur on an operating system.
• The Campbell diagram can be generated from machine design criteria
or from machine operating data.
Campbell diagram
• A Campbell diagram represents the vibration frequencies of a system at
various operating RPMs.
• A traditional Campbell diagram uses an equation of motion to express the external force caused by the rotational frequency as a periodic function.
• This function is mapped on a graph, which allows you to analyze the vibration characteristics of a system.
• A Campbell diagram can also be produced by test or simulation of a complicated rotor system for which an equation of motion cannot be written. To create such a diagram, increase the RPM against time and record the system response at each RPM.
• Then, partition the data and input it into a fast Fourier transform (FFT) algorithm in order to display the frequency response on a 3D graph.
Plotting the Campbell Diagram

• There are many ways to plot Campbell diagrams, but waterfall diagrams are the most popular.
• This type of diagram expresses the FFT data in a 3D space. Each point in the 3D space is mapped based on the RPM, frequency, and amplitude inputted into the algorithm.
• The constant band method or the constant ratio method is used to draw the waterfall diagrams.
Order tracking by constant band: draws the diagram based on the rotational speed and frequency
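A minimal sketch of how such data might be assembled: for each RPM step a block of synthetic (made-up) response data is transformed with an FFT, and the rows stacked against RPM form the waterfall from which a Campbell diagram is read. Real order tracking and plotting are omitted.

```python
import numpy as np

fs = 2048                                   # sample rate, Hz (hypothetical)
rpms = np.arange(600, 3001, 300)            # RPM sweep steps

spectra = []
for rpm in rpms:
    f_rot = rpm / 60.0                      # rotational frequency, Hz
    t = np.arange(0, 1.0, 1 / fs)
    # Synthetic response: 1st and 2nd engine orders plus a little noise
    x = np.sin(2 * np.pi * f_rot * t) + 0.4 * np.sin(2 * np.pi * 2 * f_rot * t)
    x += 0.05 * np.random.default_rng(int(rpm)).standard_normal(t.size)
    spectra.append(2 * np.abs(np.fft.rfft(x)) / t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
waterfall = np.array(spectra)               # shape: (n_rpm_steps, n_freq_bins)
# Each row is the spectrum at one RPM; plotting rows against freqs and rpms
# gives the 3D waterfall from which the Campbell diagram is read.
print(waterfall.shape)
```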
Campbell Diagram
• Step 1 Creating the Data for the Campbell Diagram
• Step 2 Opening the Campbell Diagram Dialog Window
• Step 3 Inserting Data into the Dialog Window
• Step 4 Draw Data
• Step 5 Setting the Time Zone
• Step 6 Configuring the Frame Settings
• Step 7 Configuring the FFT Settings
• Step 8 Configuring the Plot Settings
• Step 9 Drawing and Modifying the Campbell Diagram
https://functionbay.com/documentation/onlinehelp/default.ht
m#!Documents/campbelldiagram3d.htm
Campbell Diagram
Cascade diagrams
• Cascade charts can be used as an alternative to a stacked bar chart to
show segmentation by categories, products or regions.
• Cascade chart, also known as a waterfall chart, shows how each bar
relates to other bars and as how it contributes to the total.
• Example : Use a cascade chart to walk your audience through the line
items on a financial statement or to explain changes in a key measure
between time periods.
Coherence function.
• The coherence function is defined as a measure of the causal relationship between two signals, in the presence of other signals.
• Coherence in statistics is an indication of the quality of the information, either within a single data set, or between similar but not identical data sets.
• Fully coherent data are logically consistent and can be reliably combined for analysis.
Coherence calculation

• The coherence is the magnitude of the trial-averaged cross-spectrum between the two signals at frequency index j, divided by the magnitude of the trial-averaged power spectrum of each signal at frequency index j.
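As an illustration, SciPy's `signal.coherence` computes a closely related quantity, the magnitude-squared coherence estimated with Welch averaging. In the made-up example below, two signals share a 60 Hz component but have independent noise, so the coherence is high near 60 Hz and low elsewhere.

```python
import numpy as np
from scipy.signal import coherence

fs = 1024
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)

# Signal y shares a 60 Hz component with x but has independent noise
common = np.sin(2 * np.pi * 60 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = 0.8 * common + 0.5 * rng.standard_normal(t.size)

# scipy's coherence() returns the magnitude-squared coherence (Welch averaging)
f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
print(f"coherence near 60 Hz: {Cxy[np.argmin(np.abs(f - 60))]:.2f}")
```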
Correlation functions

• Coherence measures the degree of linear dependency of two signals by testing for similar frequency components.
• Correlation is another measure of the relationship between two signals. A correlation coefficient is used to evaluate similarity.
Correlation functions
• Correlation functions are a useful indicator of dependencies as a
function of distance in time or space.
• They can be used to assess the distance required between sample
points for the values to be effectively uncorrelated.
• In addition, they can form the basis of rules for interpolating values
at points for which there are no observations.
Case study
• Statistical Analysis of Noise Levels in Urban Areas
• Environmental noise measurements have been carried out during recent years at different cities and locations in Spain.
• The noise levels have been continuously sampled over 24 h periods using a noise level analyzer. The data contained a total of 4200 measurement hours.
• All of the information has been used to investigate the time patterns of the noise levels under a wide range of different conditions.
• This case aims to study the relationships between several noise descriptors in urban areas.
• A-weighted noise levels have been measured continuously over 24 h periods in 50 different selected locations of seven Spanish cities.
• In all cases, the instantaneous sound levels were sampled every 0.1 s, resulting in a total count of 36,000 samples per hour.
• All hourly values of L1, L10, L50, L90, L99 and Leq were obtained through complete 24 h periods.
• Relationships have been derived to link the statistical indices, Lx and Leq, with the standard deviation, d, of the noise level distribution.
Correlations
• The precise determination of noise level distributions and percentile noise level values Lx is usually based on the use of quite sophisticated and expensive instruments (tape recorders, statistical analyzers, etc.).
• The use of modern integrating sound level meters provides only the
values of Leq for a given time period. Therefore, it is interesting to
investigate the Lx-Leq relationships in order to obtain information on
the main features of instantaneous sound level distributions under
the different experimental conditions that are usually found in urban
areas.
CONCLUSIONS

• A tentative extrapolation of the results obtained in this investigation suggests that, in order to reduce the economic cost of general-purpose noise surveys in urban areas (for example, those related to the measurement of the noise map of a city or the prediction of the general annoyance produced by the noise on a community), the complete sampling schedules can be substituted by much simpler short-time measurement techniques using an appropriate noise descriptor (Leq), without a serious loss of any relevant information.
• The use of the equations given in this paper (or other similar) can afford a
sufficient basis for predicting the noise descriptors and noise level
distributions actually observed in most conditions usually found in urban
areas.
