
# Week 2: Chapter 3, Assessing and Presenting Experimental Data

Dr. Belal Gharaibeh 6/3/2011

- Theory is also "derived" for a physical system; it is like "data," but we call it a model (e.g., Newton's law).
- A measurement should NOT be compared to a theory to assess its quality. Ask yourself: how good are the data?
- Determine the quality of the measured data before using them to make an engineering decision.
- Only then can we check whether the data are "good" by comparing them to theory-derived results.

- We are trying to measure the "actual value" of the physical quantity being measured; that true value is our standard.
- The error is defined as the difference between the measured value and the true physical value of the quantity.
- The true value is something we can never know exactly, because we have to measure it, and the process of measurement itself introduces errors.
- Instead we should ask, "What is the error of the data?" We can estimate the possible amount of error.
- Example: 95% of readings from one flowmeter will have an error of less than 1 L/s, so we say with 95% certainty (19 times out of 20) that the meter has an error of 1 L/s or less. Equivalently, the reading has an accuracy of 1 L/s at odds of 19 to 1.
- A theory can then be compared with the data within this accuracy.

## Types of error

- Error: ε = x_m − x_true
- We want to minimize error at the experiment-design step, but we also need to estimate a bound on ε.
- This bound is of the form |ε| ≤ u, where u is the uncertainty, estimated at odds of n:1: only one measurement in n will have an error whose magnitude is greater than u.
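The bound |ε| ≤ u can be illustrated with a short sketch. The function name, the coverage factor of 2 (a common approximation for 95% confidence under normally distributed random errors), and the sample readings below are illustrative assumptions, not from the text.

```python
# Sketch: estimating a ~95% (19:1 odds) uncertainty bound u
# from repeated flowmeter readings. Assumes purely random errors.
import statistics

def uncertainty_bound(readings, coverage_factor=2.0):
    """Return (mean, u), where u ~ coverage_factor * sample std dev.

    At ~95% confidence, roughly 1 reading in 20 is expected to deviate
    from the true value by more than u (normal-distribution assumption).
    """
    mean = statistics.mean(readings)
    u = coverage_factor * statistics.stdev(readings)
    return mean, u

# Illustrative repeated flow readings, in L/s
readings = [10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7, 10.1, 9.9, 10.0]
mean, u = uncertainty_bound(readings)
print(f"flow = {mean:.2f} +/- {u:.2f} L/s (95% confidence)")
```

Note that this only captures precision (random) error; any bias in the meter shifts the mean and is invisible to this calculation.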

- The specific cause of an error varies from experiment to experiment, or even within the same experiment.
- There are two general classes of error (see figures):
- Bias error: also called systematic error; these occur the same way each time a measurement is made. For example, if a scale reads 5% high, then every time you measure with it the reading will be 5% higher than the true value.

- Precision error: also called random error; these differ for each successive measurement but have an average value of zero. For example, mechanical friction or vibration may cause the reading to fluctuate about the true value. If enough measurements are taken, the precision error becomes clear: readings cluster about the true value, so we can use statistical analysis to estimate the size of the error.
- Bias errors cannot be treated using statistical analysis because they are fixed and do not show a distribution. They are estimated by comparing the instrument to a more accurate standard, or from knowledge of how the instrument was calibrated.
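The distinction can be simulated. In the sketch below, the true value, bias, and noise level are invented numbers for illustration: averaging many readings removes the zero-mean precision error but leaves the bias untouched, which is why bias must be found by comparison against a standard.

```python
# Sketch: bias vs. precision error in simulated readings.
# TRUE_VALUE, BIAS, and NOISE_SD are illustrative assumptions.
import random
import statistics

random.seed(42)

TRUE_VALUE = 50.0   # the (normally unknowable) true value
BIAS = 2.5          # fixed systematic error: every reading shifted +2.5
NOISE_SD = 1.0      # std dev of the random (precision) error, mean zero

readings = [TRUE_VALUE + BIAS + random.gauss(0.0, NOISE_SD)
            for _ in range(10_000)]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)

# Averaging removes precision error (zero mean) but NOT the bias:
print(f"mean reading  = {mean:.2f}  (true value {TRUE_VALUE}, bias {BIAS})")
print(f"spread (1 sd) = {spread:.2f}")
```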

## Classification of errors

- Bias or systematic errors:
  - Calibration errors (the most common; see figure): calibration is adjusting the equipment so it reads the measured values correctly. These errors are constant and can be removed by calibration, provided they do not change with time. Two common forms:
    - Zero-offset error: all readings are offset by a constant amount (x_offset).
    - Scale error: the slope of the output relative to the input changes, so all readings change by a fixed percentage (see figure).
  - Certain recurring human errors: for example, a person who reads high values every time.
  - Certain errors caused by defective equipment: equipment sometimes has "built-in" errors resulting from incorrect design, manufacturing, or maintenance.
  - Loading error: the effect of the measurement procedure on the system being tested. The measuring process changes the characteristics of both the source of the measured quantity and the measuring system. For example, the sound pressure level sensed by a microphone is not the same as the level that would exist in the room without the microphone present.
  - Limitations of system resolution: for example, an instrument designed to measure a constant speed will not register changes in speed, so it should not be used for that purpose.

## Classification of errors (continued)

- Precision or random errors:
  - Certain human errors: a human is inconsistent in taking the reading.
  - Errors caused by disturbances to the equipment.
  - Errors caused by fluctuating experimental conditions: usually from outside interference such as vibration or temperature.
  - Errors from variation in the system being measured: this variation comes from the design or manufacturing process; for example, in making light bulbs, no two bulbs are exactly the same. Such errors are NOT measurement errors, but they look like precision errors and can be estimated with similar statistical methods.
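Correcting zero-offset and scale errors amounts to inverting the instrument's calibration curve. In the sketch below, the offset and gain values are illustrative assumptions, as if they had been found by comparing the instrument against a more accurate standard.

```python
# Sketch: removing zero-offset and scale (gain) bias errors.
# OFFSET and GAIN are illustrative calibration results, not real data.

OFFSET = 0.8   # instrument reads 0.8 units high at zero input
GAIN   = 1.05  # instrument output slope is 5% too steep

def raw_reading(true_value):
    """Simulate an instrument with both bias errors present."""
    return GAIN * true_value + OFFSET

def corrected_reading(raw):
    """Invert the calibration curve: true = (raw - offset) / gain."""
    return (raw - OFFSET) / GAIN

for true in (0.0, 10.0, 20.0):
    raw = raw_reading(true)
    print(f"true={true:5.1f}  raw={raw:6.2f}  "
          f"corrected={corrected_reading(raw):5.1f}")
```

This only works because the errors are constant; if the offset or gain drifts with time or temperature, the correction must be redone (see calibration drift below).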

## Classification of errors (continued)

- Illegitimate errors: mistakes made before, during, or after the measurements.
  - Mistakes during an experiment: for example, the operator is not trained to use the instrument.
  - Calculation errors after an experiment.

## Classification of errors (continued)

- Errors that are sometimes bias errors and sometimes precision errors:
  - Instrument backlash, friction, and hysteresis (path dependence; see figure): an example is friction in the scale indicator of an instrument. The reading is low when the measured variable is increasing and high when the measured variable is decreasing.
  - Errors from calibration drift and variation in test or environmental conditions (see figure): these occur when the response varies with time, usually because of sensitivity to temperature and humidity; for a single test this is a bias error. If the test time is long, conditions fluctuate during the test, causing a different calibration error each time you make a long test; this is a precision error.
  - Errors from variations in procedure or definition among experiments: when the experiment is done with more than one instrument or by different people, each run has a different bias, which appears as precision error across all the tests.
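Hysteresis can be modeled as a reading that depends on the direction of approach. The friction-like lag of ±0.2 units below is an invented number purely for illustration.

```python
# Sketch: hysteresis (path dependence) in an instrument reading.
# LAG is an illustrative friction-like offset, not a real spec.

LAG = 0.2

def reading(true_value, increasing):
    """Reading lags the true value: low while the measured variable is
    increasing, high while it is decreasing."""
    return true_value - LAG if increasing else true_value + LAG

up = [reading(x, increasing=True) for x in (1.0, 2.0, 3.0)]
down = [reading(x, increasing=False) for x in (3.0, 2.0, 1.0)]
print("increasing:", up)    # each reading LAG low
print("decreasing:", down)  # each reading LAG high
```

Whether this shows up as bias or precision error depends on use: always approaching from the same direction gives a fixed (bias) offset, while mixed directions scatter the readings.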

## Terms used in rating instrument performance

- Accuracy: the difference between the measured and true values. The manufacturer will specify a maximum error as the accuracy, but ask: at what odds?
- Precision: the difference between the instrument's reported values during repeated measurements of the same quantity.
- Resolution: the smallest increment or change in the measured value that can be determined from the instrument's readout scale.
- Sensitivity: the change in an instrument's output per unit change in the measured quantity.
- Reading error: errors made when reading a number from the display scale; for example, the display might round or truncate numbers.
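Sensitivity and resolution are easy to confuse, so a numeric sketch may help. The 4-20 mA pressure sensor over 0-100 kPa and the 0.01 mA display step below are hypothetical figures chosen for illustration.

```python
# Sketch: sensitivity vs. resolution for a hypothetical sensor.
# A pressure sensor outputs 4-20 mA over 0-100 kPa, read by a
# display that resolves 0.01 mA. All numbers are assumptions.

SPAN_OUT = 20.0 - 4.0    # output span, mA
SPAN_IN = 100.0 - 0.0    # input span, kPa
DISPLAY_STEP = 0.01      # smallest readable output change, mA

sensitivity = SPAN_OUT / SPAN_IN         # output change per unit input
resolution = DISPLAY_STEP / sensitivity  # smallest detectable input change

print(f"sensitivity = {sensitivity:.3f} mA/kPa")
print(f"resolution  = {resolution:.4f} kPa")
```

Note the trade-off this exposes: higher sensitivity (steeper output slope) gives finer resolution for the same display, but says nothing about accuracy, since a sensitive instrument can still be badly biased.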