
MECHANICAL DEPARTMENT

L. D. COLLEGE OF ENGINEERING
AHMEDABAD.

CERTIFICATE

THIS IS TO CERTIFY THAT WITH


ENROLLMENT NO. OF M.E. IN I.C. ENGINE & AUTOMOBILE
SEMESTER 2ND HAS SATISFACTORILY COMPLETED THE TERM WORK IN
SUBJECT EXPERIMENTAL TECHNIQUES AND INSTRUMENTATIONS IN
THERMAL SYSTEMS (SUBJECT CODE: 3722105) FOR ACADEMIC YEAR 2021-
22.

TERM DATE:

FACULTY SIGNATURE HEAD OF DEPT.

INTERNAL EXAMINER EXTERNAL EXAMINER



L D COLLEGE OF ENGINEERING
MECHANICAL ENGINEERING DEPARTMENT
INDEX
SR. NO | TITLE | PAGE NUMBER | DATE OF ASSESSMENT | SIGN OF FACULTY | REMARK

1 | To calibrate and measure temperature using thermocouple. | 1 to 6
2 | To carry out calibration of pressure measuring devices: pressure gauge, U-tube manometer. | 7 to 10
3 | To measure the thermal conductivity of any fluid. | 11 to 17
4 | To carry out calibration of flow measuring devices: orifice meter and rotameter. | 18 to 23
5 | To measure the direct and diffuse solar radiation using pyranometer and pyrheliometer. | 24 to 34
6 | To carry out exhaust gas analysis with gas chromatograph. | 35 to 38
7 | To study and familiarize with data logging and acquisition systems. | 39 to 47
8 | To study various electronic controls used in thermal measurement. | 48 to 51
9 | To study and compare various advanced measurement techniques. | 52 to 56
10 | To perform an experiment with any thermal system and carry out uncertainty analysis for the same. | 57 to 59



EXPERIMENT: - 1

AIM: To calibrate and measure temperature using thermocouple.

1. Thermocouple
➢ Principle:
Thermocouples, used to measure temperature, are composed of two dissimilar
metals that produce a small voltage when joined together; one end of the
thermocouple joins each metal. A thermocouple thermometer then reads the voltage
produced. Thermocouples can be manufactured from a range of metals and
typically register temperatures between roughly −200 and 2,600 degrees Celsius (°C).
Depending on the types of metals in the thermocouple, the specific temperature
range will vary.

➢ Construction:
In order to achieve accurate readings from a thermocouple, it is essential to
calibrate the device accordingly.
Typically, thermocouples are standardized by using 0 degrees C as a reference
point, and many devices can adjust to compensate for the varying temperatures at
thermocouple junctions.
Sheathed thermocouples are available with three different junctions: grounded,
ungrounded and exposed.
Grounded junctions feature wire junctions that are attached to the inner probe
wall, enabling effective heat transfer from the outside of the probe wall to the
junction.
Ungrounded probes feature unattached wire junctions, which promotes electrical
isolation.
Exposed junctions feature a junction that extends beyond the sheath, enabling a
quick response time but limiting their use to non-corrosive and non-pressurized
environments.

Figure 1.1 Thermocouple circuit

➢ Working:
The working principle of the thermocouple is based on three effects, discovered
by Seebeck, Peltier and Thomson. They are as follows:

1) Seebeck effect: The Seebeck effect states that when two different or unlike
metals are joined together at two junctions, an electromotive force (emf) is
generated across the two junctions. The amount of emf generated is different for
different combinations of the metals.
2) Peltier effect: As per the Peltier effect, when two dissimilar metals are joined
together to form two junctions, emf is generated within the circuit due to the
different temperatures of the two junctions of the circuit.
3) Thomson effect: As per the Thomson effect, when two unlike metals are joined
together forming two junctions, the potential exists within the circuit due to
temperature gradient along the entire length of the conductors within the circuit.

➢ Calibration:
To calibrate a thermocouple, various types of measuring equipment,
standards, and procedures must be in place.
First, a control temperature must be established that is stable and provides a
constant temperature; it must be uniform and cover a large enough area that the
thermocouple can adequately be inserted into it (such as an ice bath). Sources of
controlled temperatures are called fixed points.
A fixed-point cell is composed of a metal sample within a graphite crucible,
with a thermometer well submerged in the metal sample. When this metal
sample reaches the freezing point, it maintains a very stable temperature.
The freezing point occurs when a material is at the transition between its solid
and liquid phases. A reference junction temperature must also be established;
typically, 0 degrees C is used. A measuring instrument, such as a Fluke 702
calibrator, can be used to measure thermocouple output.
A simple calibration process can be done by following a few basic
instructions. A basic calibration process involves heating water to 30 degrees C
in a thermo bath.
Next, the thermocouple is turned on and each of the two multimeter leads is
attached to one end of the thermocouple; at this point, the multimeter should
register about one microvolt. One junction of the thermocouple is then placed
into the thermo bath.
The voltage can be recorded once the multimeter reading becomes stable. The
water temperature is increased to 35 degrees C, and again the voltage is
recorded. This process is repeated by increasing the temperature by five-degree
increments and recording the voltage, until 60 degrees C is reached.
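To illustrate how the recorded temperature-voltage pairs become a calibration curve, the following Python sketch fits a straight line to hypothetical readings; the numbers and the linear model are assumptions for demonstration, not measured values:

```python
import numpy as np

# Hypothetical calibration data: bath temperature (°C) vs. thermocouple output (mV),
# recorded in 5 °C steps from 30 °C to 60 °C as described in the procedure.
temps_c = np.array([30, 35, 40, 45, 50, 55, 60])
emf_mv = np.array([1.20, 1.41, 1.61, 1.82, 2.02, 2.23, 2.44])  # assumed readings

# Fit a first-order (linear) calibration: emf = a*T + b.
a, b = np.polyfit(temps_c, emf_mv, 1)
print(f"Sensitivity: {a * 1000:.1f} uV/°C, offset: {b:.3f} mV")

def voltage_to_temp(emf_reading_mv):
    """Invert the linear fit: return temperature (°C) for a measured emf (mV)."""
    return (emf_reading_mv - b) / a

print(f"2.10 mV corresponds to {voltage_to_temp(2.10):.1f} °C")
```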

➢ Applications of thermocouple:
1. Thermocouples are the most popular type of temperature sensors.
2. They are used as hospital thermometers, and in diagnostic testing for vehicle
engines.
3. Some gas appliances such as boilers, water heaters, and ovens use them as
safety features; if the pilot light is out, the thermocouple stops the gas valve
from operating.
4. They are also used as an aid in milk pasteurization, and as food thermometers.
5. In industry, they are valuable as probes and sensors.

2. RTD (Resistance Temperature Detector)


➢ Principle:
A Resistance Thermometer or Resistance Temperature Detector (RTD) is a device
used to determine temperature by measuring the resistance of a pure
electrical wire. This wire is referred to as a temperature sensor. When temperature
must be measured with high accuracy, the RTD is often the preferred solution in
industry. It has good linear characteristics over a wide range of temperature.

➢ Construction and Working:


The construction is typically such that the wire is wound in a coil on a
notched mica cross frame to achieve small size; this improves the thermal
conductivity, decreases the response time and gives a high rate of heat transfer.
In industrial RTDs, the coil is protected by a stainless-steel sheath
or a protective tube, so that the physical strain on the wire is negligible as it
expands and increases in length with the temperature change. If the strain on the
wire increases, the tension increases, and the resistance of the wire changes, which
is undesirable. The resistance of the wire should not change due to anything
other than the temperature.
The sheath also makes RTD maintenance possible while the plant is in operation. Mica
is placed in between the steel sheath and the resistance wire for better electrical
insulation. To keep the strain in the resistance wire low, it should be carefully
wound over the mica sheet.

➢ Working
RTDs are readily available in the market, but we must know how to use
them and how to build the signal-conditioning circuitry, so that lead-wire errors
and other calibration errors can be minimized. In an RTD, the change in
resistance is very small with respect to temperature, so the RTD value
is measured using a bridge circuit. By supplying a constant electric current
to the bridge circuit and measuring the resulting voltage drop across the resistor,
the RTD resistance can be calculated, and thereby the temperature can be
determined. The temperature is obtained by converting the RTD resistance
value using a calibration expression. The different modules of RTD are shown in
the figures below; a short resistance-to-temperature conversion sketch follows Figure 1.2.

Figure 1.2 Two wire RTD bridge
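To make the final conversion step concrete, here is a minimal Python sketch that turns a measured Pt100 resistance into temperature using the standard Callendar-Van Dusen calibration expression with IEC 60751 coefficients; the example resistance value is assumed:

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for a Pt100 (R0 = 100 ohm at 0 °C).
R0 = 100.0
A = 3.9083e-3
B = -5.775e-7

def pt100_temperature(resistance_ohm):
    """For T >= 0 °C: R = R0 * (1 + A*T + B*T^2),
    solved for T with the quadratic formula."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - resistance_ohm / R0))) / (2 * B)

# Example: a bridge measurement yields 119.40 ohm (assumed value).
print(f"{pt100_temperature(119.40):.2f} °C")  # ~50 °C
```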


➢ Calibration:
Fixed point calibration
• It is used for the highest-accuracy calibrations by national metrology
laboratories. It uses the triple point, freezing point or melting point of pure
substances such as water, zinc, tin, and argon to generate a known and
repeatable temperature.

• These cells allow the user to reproduce actual conditions of the ITS-
90 temperature scale. Fixed-point calibrations provide extremely accurate
calibrations (within ±0.001 °C). A common fixed-point calibration method for
industrial-grade probes is the ice bath.

Figure 1.3 Three-wire RTD bridge
Figure 1.4 Four-wire RTD bridge

• The equipment is inexpensive, easy to use, and can accommodate several
sensors at once. The ice point is designated as a secondary standard because its
accuracy is ±0.005 °C (±0.009 °F), compared to ±0.001 °C (±0.0018 °F) for
primary fixed points.

➢ Comparison calibrations
• Comparison calibration is commonly used with secondary SPRTs and industrial RTDs.
• The thermometers being calibrated are compared to calibrated thermometers
by means of a bath whose temperature is uniformly stable.
• Unlike fixed-point calibrations, comparisons can be made at any temperature
between −100 °C and 500 °C (−148 °F to 932 °F). This method might be more
cost-effective, since several sensors can be calibrated simultaneously with
automated equipment.
• These electrically heated and well-stirred baths use oils and molten salts as the
medium for the various calibration temperatures.

➢ Advantages:
• The RTD can be easily installed and replaced.
• It is available in a wide range of sizes and resistance values.
• The RTD can be used to measure differential temperature.
• They are suitable for remote indication.
• Stability is maintained over long periods of time.
• No temperature compensation is necessary.

➢ Disadvantages:
• The RTD requires a more complex measurement circuit.
• It is affected by shock and vibration.
• A bridge circuit with a power supply is needed.
• Slower response time than a thermocouple.
• Large bulb size.
• Possibility of self-heating.
• Higher initial cost.
• Sensitivity is low.

EXPERIMENT: - 2

AIM: To carry out calibration of pressure measuring devices: pressure gauge and
U-tube manometer.

1) Calibration of pressure gauge


Device: dead weight tester
A dead weight tester (DWT) is a device used to produce and measure pressure,
commonly used for pressure gauge calibration.

➢ Construction:
The construction of a DWT is basically in the form of an oil-filled
chamber fitted with a cylinder-piston combination above it.
It also features a plunger with a handle and a weighting platform or pan
attached to the top of the piston, used to apply varying degrees of pressure to the
oil in the chamber. In addition, it also has a reservoir to collect displaced oil,
an adjusting piston and a port where the pressure gauge is connected during
calibration tests.

Figure 2.1 Pressure gauge calibrating device

➢ Working:
Using a dead weight tester, pressure gauges are calibrated through the
application of known weights to the DWT’s piston, the cross-sectional area of
which is also known. This creates a sample of known pressure, which is then
introduced to the pressure gauge being tested to observe its response.
Set up the device being tested on a firm, stable and level surface, and follow
these 7 steps for pressure gauge calibration:

1. Check whether the test device is reading zero, by connecting it to the test port
on the DWT. If it isn’t, correct the error before moving on to the next step.
2. Note the cross-sectional area of the piston and rotate the handle of the adjusting
piston until its rod comes out fully. Fill oil into the reservoir up to its halfway
level.
3. Open the oil reservoir’s shutoff valve and let the DWT fill completely with oil
by manually lifting the vertical piston to its maximum position. Do this gently
to avoid air bubbles.
4. Close the shutoff valve and place the first known weight on the platform of the
vertical piston.
5. Turn the handle of the adjusting piston to ensure that both it and the sample
weight are supported by the oil in the chamber.
6. Spin the vertical piston to make sure it is floating freely and allow the system
to stabilize for a few moments.
7. After the system has stabilized, make note of the sample weight, DWT reading
and reading on the pressure gauge being tested, as well as error.
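Since the DWT generates pressure as the applied weight divided by the piston's cross-sectional area, the following sketch (all numeric values assumed for illustration) shows the reference-pressure arithmetic behind step 7:

```python
G = 9.80665  # standard gravitational acceleration, m/s^2

def dwt_pressure_pa(mass_kg, piston_area_m2):
    """Reference pressure generated by a dead weight tester: P = m*g/A."""
    return mass_kg * G / piston_area_m2

# Assumed example: 10 kg total load on a piston of 1 cm^2 (1e-4 m^2).
p = dwt_pressure_pa(10.0, 1e-4)
print(f"{p / 1e5:.3f} bar")  # ~9.807 bar
```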

➢ Calibration:
• To start with, check that the calibrator or standard you’re using has been
calibrated in accordance with the manufacturer’s recommendations. If it is
already out of calibration, the results of the procedure would be unreliable.
• Connect the pressure gauge that is to be calibrated to the pressure source. Make
sure there is a block valve to isolate the pressure source from the rest of the
system and a bleeding valve for releasing pressure.
• Set the pointer so that it reads zero on the pressure scale.
• Apply the maximum pressure the gauge can measure and make adjustments till
the gauge being calibrated indicates the right pressure.
• Isolate the pressure source and completely depressurize the system using the
bleed valve.
• Verify that the gauge reads zero, or adjust it as needed.
• Repeat steps 4 to 6 till both the readings are accurate.
• If the gauge includes a linearizing adjustment, adjust the pressure source to 50%
of the maximum pressure the gauge can measure and check the reading.

• Check if the gauge readings are correct at zero, 50%, and maximum pressure,
and adjust it each time till all of them are accurate. This step requires a lot of
care and patience.
• After all the readings are correct, write down the gauge’s readings at the applied
pressures onto a calibration sheet.
• If you are performing a bench calibration and need to issue a calibration
certification, draw a graph plotting the increasing and decreasing applied
pressures against the gauge readings.
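For the calibration sheet and graph described in the last two steps, a sketch along these lines (all readings are assumed values) computes the error and hysteresis at each test point:

```python
# Applied reference pressures (bar) and gauge readings on the rising and
# falling traverses -- all values assumed for illustration.
applied = [0.0, 2.5, 5.0, 7.5, 10.0]
rising  = [0.0, 2.6, 5.1, 7.6, 10.0]
falling = [0.1, 2.7, 5.2, 7.7, 10.0]

for p, up, down in zip(applied, rising, falling):
    error_up = up - p          # gauge error on the increasing traverse
    hysteresis = down - up     # difference between falling and rising readings
    print(f"applied {p:5.1f} bar  error {error_up:+.2f} bar  hysteresis {hysteresis:+.2f} bar")
```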

2) Calibration of inclined u-tube manometer


➢ Construction and working
The set-up consists of an inclined tube manometer, an Askania manometer
and a calibration arrangement consisting of a large glass bottle and connections.
Among the manometers available, the inclined tube manometer is commonly
used owing to its greater sensitivity in comparison to the vertical U-tube
manometer. The inclined tube manometer comprises an inclined glass tube of
6 mm bore serving as one limb of the U-tube manometer. The other limb is
essentially a large-diameter vessel. A device for varying the inclination of the
tube may also be fitted to the instrument. Owing to the large diameter
ratio between the two limbs, the change in the liquid level in the vessel is
negligible in comparison to that in the inclined tube. The pressure difference is thus
directly given by the change in the liquid level in the inclined tube. The inclined
tube observation is, however, correlated to the vertical scale reading by
multiplying with a suitable factor (a sketch of this conversion follows).

Figure 2.2 Calibration of inclined U-tube manometer
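Here is a minimal sketch of that conversion factor, assuming water as the manometer liquid and a known inclination angle:

```python
import math

RHO = 1000.0   # manometer liquid density, kg/m^3 (assumed: water)
G = 9.80665    # gravitational acceleration, m/s^2

def inclined_dp_pa(scale_reading_m, incline_deg):
    """Pressure difference for a reading L along the inclined tube:
    the vertical head is h = L*sin(theta), so dp = rho*g*L*sin(theta)."""
    return RHO * G * scale_reading_m * math.sin(math.radians(incline_deg))

# Assumed example: 120 mm along a tube inclined at 15 degrees.
print(f"{inclined_dp_pa(0.120, 15):.1f} Pa")
```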

➢ CALIBRATION:
• Place the Askania manometer and the inclined tube manometer on a firm
surface and level them. Take the initial readings of these instruments.
• Blow air into the glass bottle through the tube provided and tighten the
stopcock.
• Take the observation of the pressure from the Askania manometer as well as
the inclined tube.
• By releasing a little pressure from the glass bottle, take another set of
observations.
• Repeat the previous step to get about 10-12 sets of pressure values.
• Plot the calibration graph of the inclined tube readings against the Askania
manometer readings, without adjusting for the vertical units. Fit a straight line
passing through these points.

EXPERIMENT: - 3

AIM: - To measure the thermal conductivity of any fluid.

➢ Abstract:
Determining the physical properties of substances is an important subject in
many advanced engineering applications. The physical properties of fluids
(liquids and gases), such as thermal conductivity, play an important role in the
design of a wide variety of engineering applications, such as heat exchangers. In
this article, the authors describe an undergraduate junior-level heat transfer
experiment designed for students in order to determine the thermal conductivity
of fluids. Details of the experimental apparatus, testing procedure, data reduction
and sample results are presented. One of the objectives of this experiment is to
strengthen and reinforce some of the heat transfer concepts, such as conduction,
covered in the classroom lectures. The experimental set-up is simple, the
procedure is straightforward and students’ feedback has been very positive.

➢ Experimental apparatus:
The experimental apparatus, shown schematically in Figure 3.1, consists of
two parts, namely the test module and control panel. These two components are
elaborated on below.

➢ Test module:
The test module is a plug and jacket assembly that consists of a cylindrical
heated plug and cylindrical water-cooled jacket. The fluid (liquid or gas), whose
thermal conductivity is to be measured, fills a small radial clearance between the
heated plug and the water-cooled jacket. It should be noted that the clearance is
made small in size so as to prevent natural convection in the fluid.
The cylindrical plug is made of aluminium (to reduce thermal inertia and
temperature variation) with a built-in cylindrical heating element and temperature
sensor (thermocouple). The temperature sensor is inserted into the plug close to
its external surface. The plug also has ports for the introduction and venting of the
fluid (liquid or gas) whose thermal conductivity is to be measured.

The plug is placed in the middle of the cylindrical water jacket. The water
jacket is constructed from brass and has a water inlet and drain connections. A
thermocouple is also fitted to the inner sleeve of the water jacket.

Figure 3.1 Schematics of the experimental apparatus and instrumentation

➢ Control Panel:
The test module is connected to the control panel (a small console) by flexible
cables for the voltage supplied to the heating element. The control panel includes
all the necessary electrical wiring with variable transformer, power transducer,
temperature controller/indicator, digital displays for temperature, analogue meter
for voltage and a thermocouple selector switch.

➢ Calibration:
Before utilising the unit in order to measure the thermal conductivity of a fluid
(liquid or gas), the unit must be calibrated. This is because not all the power input
is transferred by conduction through the test fluid; some energy (incidental heat
transfer) will be lost to the surroundings and some will be radiated across the
annulus. In this calibration process, students generate a curve that characterises
this incidental heat loss. The incidental heat transfers in the unit are determined
by using air (whose thermal conductivity is well known and documented) in the
radial space.
Procedure:
➢ The following is a brief summary of the procedure to carry out the calibration of
the unit:
1. Set up the equipment and make the necessary connections;
2. Pass water through the jacket at about 3 litre per minute;
3. Connect the small flexible tubes to the charging and vent unions;
4. Close off the tubing with a pure air sample trapped in the device;
5. Switch on the electrical supply;
6. Adjust the variable transformer to give about 10V;
7. At intervals, check the temperature of the plug, T1, and jacket, T2, and when
they are stable, record their values and also the voltage;
8. Repeat steps 6 and 7 for 20V, 30V, 40V, 50V and 60V.

➢ Calculations:
The calculations are determined as follows:
1) Find the thermal conductivity of the air, k_air, at the average temperature,
Tavg = (T1 + T2)/2. Temperature-dependent thermal conductivity values for air
are found in any heat transfer textbook, such as Incropera and DeWitt, as
well as Özisik.

2) Calculate the rate of heat conducted through the air lamina, Qc, from
Fourier's Law, i.e.:
Qc = kA (ΔT/Δr) … (1)
where the area is A = 0.0133 m², the radial clearance is Δr = 0.34 mm, and
the temperature difference is ΔT = T1 − T2.
3) Calculate the rate of electrical heat input, Qe, from:
Qe = V²/R … (2)
where V is the voltage and R is the resistance of the element, R = 54.8 Ω.
4) Find the incidental heat transfer, Qi, (loss, radiation, etc). The incidental
heat transfer is the difference between the electrical heat input and the heat
conducted through the fluid in the radial clearance, i.e.:
Qi = Qe − Qc … (3)
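A short sketch of the calibration arithmetic in steps 1)-4), using the constants quoted above; the temperatures, voltage and thermal conductivity of air are assumed sample values:

```python
AREA = 0.0133      # m^2, lamina area (from the text)
DR = 0.34e-3       # m, radial clearance (from the text)
R_ELEMENT = 54.8   # ohm, heater resistance (from the text)

def incidental_heat(t1_c, t2_c, volts, k_air):
    """Return (Qe, Qc, Qi) in watts for one calibration point."""
    qc = k_air * AREA * (t1_c - t2_c) / DR   # Eq. (1): conduction through air
    qe = volts ** 2 / R_ELEMENT              # Eq. (2): electrical input
    return qe, qc, qe - qc                   # Eq. (3): incidental loss

# Assumed sample point: T1 = 40 °C, T2 = 31 °C, 30 V, k_air ~ 0.0269 W/mK near 35 °C.
qe, qc, qi = incidental_heat(40.0, 31.0, 30.0, 0.0269)
print(f"Qe = {qe:.2f} W, Qc = {qc:.2f} W, Qi = {qi:.2f} W")
```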

➢ Results:
From the measured data and the results obtained above, a calibration curve of
the incidental heat transfer, Qi, against the average temperature, Tavg, can be
generated. Figure 3.2 presents the calibration curve of the unit that was obtained by
one of the laboratory student groups. As can be seen from the figure, the incidental
heat transfer increases linearly as the average temperature increases.

Figure 3.2 The calibration curve

➢ Determination of the thermal conductivity of a fluid:
Once the calibration curve is obtained and the unit is cleaned and reassembled,
students can then introduce the fluid (liquid or gas) to be tested into the radial
clearance. It should be noted that it is important to ensure that no bubbles exist if
the test fluid is liquid. Water is then passed through the jacket and the variable
transformer is adjusted to the desired voltage. The voltage value is chosen to give
a reasonable temperature difference and heat transfer rate. When stable, the plug
and jacket temperatures, as well as the voltage, are recorded.
The incidental heat transfer rate, Qi, is then found from Figure 3.2 at the average
temperature, Tavg. Once the rate of the incidental heat transfer is determined, the
rate of heat conducted through the test fluid (liquid or gas) is then found from
Equation (3) as Qc = Qe − Qi, and the thermal conductivity of the test fluid
(liquid/gas) can be calculated from Equation (1), i.e. k = QcΔr/(AΔT). It is
recommended that this procedure is repeated at another voltage value in order to
ensure the consistency of the measurements.
In addition, an uncertainty analysis of the measured thermal conductivity of
the test fluid is performed. The calibration process can be used to estimate the
uncertainty in the heat conduction across the sample. The conduction across the
sample is found from Fourier’s Law, which, for this situation, is given in Eq. 1 as
follows:
Qc = kA(ΔT/Δr) … (4)
The application of the standard uncertainty procedure, as described in Ref. [6],
to Eq. (4) yields:
(UQc/Qc)² = (Uk/k)² + (UA/A)² + (UΔr/Δr)² + 2(UT/ΔT)² … (5)
where U represents the uncertainty in the quantity indicated by the subscript, and
the uncertainties in the temperature measurements are assumed equal, i.e. UT1 ≈ UT2
≈ UT. The radial clearance and the area are provided by the manufacturer and
assumed to be very accurate, so that UΔr ≈ UA ≈ 0.
The thermal conductivity of the air at a given temperature is assumed to be known
within 2.5%, and the uncertainties in the temperature measurements are assumed
to be less than 1 °C. Thus, the uncertainty in the heat conduction across the sample
is estimated to be less than 4%.
The uncertainty in the thermal conductivity can be found with a similar procedure.
First, Eq. (4) is rearranged to solve for the thermal conductivity, i.e.:
k = QcΔr/(AΔT) … (6)
and the procedure outlined in Ref. [6] yields:
(Uk/k)² = (UQc/Qc)² + (UΔr/Δr)² + (UA/A)² + 2(UT/ΔT)² … (7)
As previously, the radial dimension and the area are assumed to be very
accurate, so that UΔr ≈ UA ≈ 0, and the uncertainties in the temperature
measurements are assumed to be equal, UT1 ≈ UT2 ≈ UT, so that Eq. (7) can be
simplified to:
(Uk/k)² = (UQc/Qc)² + 2(UT/ΔT)² … (8)

The uncertainty in the heat conduction across the sample was estimated to be 4% and
the level of uncertainty in the temperature measurement was conservatively
estimated to be 1 °C, so that the relative uncertainty in the thermal conductivity is
less than 5% for typical experimental conditions.
The uncertainty in the measured results is estimated (at the 95% confidence
level) according to the procedure outlined by Moffat.
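A sketch of the propagation in Eq. (8), with the uncertainty levels quoted above and an assumed temperature difference:

```python
import math

def k_relative_uncertainty(u_qc_rel, u_t_c, delta_t_c):
    """Eq. (8): (U_k/k)^2 = (U_Qc/Qc)^2 + 2*(U_T/dT)^2."""
    return math.sqrt(u_qc_rel ** 2 + 2 * (u_t_c / delta_t_c) ** 2)

# 4% uncertainty in Qc, 1 °C per thermocouple, assumed dT = 50 °C.
print(f"{100 * k_relative_uncertainty(0.04, 1.0, 50.0):.1f} %")  # ~4.9 %
```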

➢ Conclusion:
A heat transfer laboratory experiment in which undergraduate mechanical
engineering students measure the thermal conductivity of a liquid or a gas is
presented in this article and includes the procedures and relevant calculations.
In this experiment, students perform the calibration of the experimental apparatus
and then employ the apparatus to determine the thermal conductivity of a liquid
or a gas. This kind of experience serves to enhance the level of understanding of
the transfer of thermal energy by undergraduate mechanical engineering students,
while also exposing them to several important concepts involved in heat transfer.

EXPERIMENT: - 4

AIM: To carry out calibration of flow measuring devices: orifice meter and
rotameter.

1) Orifice meter
➢ Principle:
When an orifice plate is placed in a pipe carrying the fluid whose rate of flow
is to be measured, the orifice plate causes a pressure drop which varies with the
flow rate. This pressure drop is measured using a differential pressure sensor and,
when calibrated, this pressure drop becomes a measure of the flow rate. The flow
rate is given by:

Qa = Cd·A1·A2·√(2(P1 − P2)/ρ) / √(A1² − A2²)

Where,
Qa = flow rate
Cd = discharge coefficient
A1 = cross-sectional area of the pipe
A2 = cross-sectional area of the orifice
P1, P2 = static pressures upstream and downstream of the orifice
ρ = density of the fluid
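A minimal sketch of this flow-rate formula, assuming water and sample dimensions:

```python
import math

def orifice_flow_m3s(cd, d_pipe_m, d_orifice_m, dp_pa, rho=1000.0):
    """Volumetric flow through an orifice meter:
    Qa = Cd*A1*A2*sqrt(2*dp/rho) / sqrt(A1^2 - A2^2)."""
    a1 = math.pi * d_pipe_m ** 2 / 4
    a2 = math.pi * d_orifice_m ** 2 / 4
    return cd * a1 * a2 * math.sqrt(2 * dp_pa / rho) / math.sqrt(a1 ** 2 - a2 ** 2)

# Assumed example: 50 mm pipe, 25 mm orifice, 5 kPa drop, Cd = 0.62, water.
q = orifice_flow_m3s(0.62, 0.050, 0.025, 5000.0)
print(f"{q * 1000:.2f} L/s")  # ~0.99 L/s
```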

➢ Construction:
The main parts of an orifice meter are as follows:
A stainless-steel orifice plate which is held between the flanges of a pipe
carrying the fluid whose flow rate is being measured.
It should be noted that for a certain distance before and after the orifice plate
fitted between the flanges, the pipe carrying the fluid should be straight in order
to maintain an undisturbed, fully developed flow.
Openings are provided at two places, 1 and 2, for attaching a differential pressure
sensor (U-tube manometer, differential pressure gauge, etc.) as shown in the
diagram.

Figure 4.1 Orifice meter

➢ Working:
The detail of the fluid movement inside the pipe and orifice plate has to be
understood. The fluid having uniform cross section of flow converges into the
orifice plate’s opening in its upstream. When the fluid comes out of the orifice
plate’s opening, its cross section is minimum and uniform for a particular distance
and then the cross section of the fluid starts diverging in the downstream.
At the upstream of the orifice, before the converging of the fluid takes place, the
pressure of the fluid (P1) is maximum. As the fluid starts converging, to enter the
orifice opening its pressure drops. When the fluid comes out of the orifice opening,
its pressure is minimum (P2) and this minimum pressure remains constant over the
minimum cross-sectional area of fluid flow downstream. This minimum cross-
sectional area of the fluid, obtained downstream from the orifice edge, is
called the VENA CONTRACTA.
The differential pressure sensor attached between points 1 and 2 records the
pressure difference (P1 – P2) between
these two points which becomes an
indication of the flow rate of the fluid
through the pipe when calibrated.

➢ Material of construction:
The Orifice plates in the Orifice
meter, in general, are made up of stainless
steel of varying grades.

Figure 4.2 Different shapes of orifice plates

➢ Shape & Size of Orifice meter:
Orifice meters are built in different forms depending upon the application
specific requirement, the shape, size and location of holes on the Orifice Plate
describes the Orifice Meter Specifications as per the following:
• Concentric Orifice Plate
• Eccentric Orifice Plate
• Segment Orifice Plate
• Quadrant Edge Orifice Plate

➢ Applications of Orifice Meter:


• The concentric orifice plate is used to measure flow rates of pure fluids and has
a wide applicability as it has been standardized.
• The eccentric and segmental orifice plates are used to measure flow rates of
fluids containing suspended materials such as solids, oil mixed with water and
wet steam.

➢ Advantages of Orifice Meter:


• It is a very cheap and easy method to measure flow rate.
• It has predictable characteristics and occupies less space.
• Can be used to measure flow rates in large pipes.

➢ Limitations of Orifice Meter:


• The vena contracta length depends on the roughness of the inner wall of the pipe
and the sharpness of the orifice plate. In certain cases, it becomes difficult to tap
the minimum pressure (P2) due to this factor.
• Pressure recovery downstream is poor; the overall loss varies from 40% to
90% of the differential pressure.
• Straightening vanes are required upstream to obtain uniform flow conditions.
• It gets clogged when fluids with suspended solids flow through it.
• The orifice plate corrodes and, due to this, inaccuracy creeps in over time.
Moreover, the orifice plate has low physical strength.
• The coefficient of discharge is low.

2) Rotameter
➢ Principle:
The rotameter's operating principle is based on a float of given density
establishing an equilibrium position where, for a given flow rate, the upward force
of the flowing fluid equals the downward force of gravity. It does this, for example,
by rising in the tapered tube with an increase in flow until the increased annular
area around it creates a new equilibrium position. By design, the rotameter operates
in accordance with the formula for all variable-area meters, directly relating flow
rate to the area available for flow.

➢ Construction and Working:


Rotameters are the most widely used type of variable-area (VA) flowmeter. In
these devices, the falling and rising action of a float in a tapered tube provides a
measure of flow rate (see Figure 4.3). Rotameters are known as gravity-type
flowmeters because they are based on the opposition between the downward force
of gravity and the upward force of the flowing fluid. When the flow is constant, the
float stays in one position that can be related to the volumetric flow rate. That
position is indicated on a graduated scale.
Note that to keep the full force of gravity in effect, this dynamic balancing act
requires a vertical measuring tube.
Other forms of gravity-type VA meters may incorporate a piston or vane that
responds to flow in a manner similar to the float's behavior. All these devices can
be used to measure the flow rates of most liquids, gases, and steam. There are also
similar types that balance the fluid flow with a spring rather than gravitational
force. These do not require vertical mounting, but corrosive or erosive fluids can
damage the spring and lead to reduced accuracy.

Figure 4.3 Rotameter
The term rotameter derives from early versions of the floats, which had slots to
help stabilize and center them and which caused them to rotate. Today's floats take
a variety of shapes, including a spherical configuration used primarily in purge

meters. The materials of construction include stainless steel, glass, metal, and
plastic.
The tapered tube's gradually increasing diameter provides a related increase in
the annular area around the float, and is designed in accordance with the basic
equation for volumetric flow rate:
Q = kA(gh)^0.5
where:
Q = volumetric flow rate, e.g., gallons per minute
k = a constant
A = annular area between the float and the tube wall
g = gravitational acceleration
h = pressure drop (head) across the float

With h being constant in a VA meter, we have A as a direct function of flow rate Q.


Thus, the rotameter designer can determine the tube taper so that the height of the
float in the tube is a measure of flow rate.
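To illustrate how the tube taper turns float height into flow rate under this equation, here is a sketch in which the taper geometry, head and meter constant are all assumed values:

```python
import math

K = 0.6           # meter constant (assumed)
G = 9.80665       # gravitational acceleration, m/s^2
D_FLOAT = 0.010   # float diameter, m (assumed)
TAPER = 0.02      # increase in tube bore per metre of height, m/m (assumed)
HEAD = 0.02       # constant pressure head across the float, m (assumed)

def rotameter_flow_m3s(height_m):
    """Q = K*A*sqrt(g*h): the annular area A grows with float height because
    the tube bore widens linearly, so height maps directly to flow rate."""
    d_tube = D_FLOAT + TAPER * height_m
    a_annulus = math.pi / 4 * (d_tube ** 2 - D_FLOAT ** 2)
    return K * a_annulus * math.sqrt(G * HEAD)

# Assumed example: float riding 100 mm up the tube.
print(f"{rotameter_flow_m3s(0.100) * 60000:.2f} L/min")  # ~0.55 L/min
```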

➢ Rotameter Design Components


The two basic components of every rotameter are the tapered metering tube and
the float. Tube sizes vary from 1/16 to 4 in., with a 1/8–2 in. range being the most
common. Of course, each model has limitations as to capacity, temperature,
pressure, and, in the case of liquids, viscosity. Rotameters can be glass tube, metal
tube or plastic tube.

➢ Calibration:
Linear scale graduations can be an arbitrary 0%–100% for the meter range.
Calibration can be direct reading in terms of a specific gas or liquid, or a graph that
plots meter readings vs. flow rates in terms of the fluid being measured. Such
graphs make it easy to adapt a meter to handle fluids other than those for which it
was bought; changeover is simply a matter of having a different conversion chart
designed for the new fluid.

➢ Advantages:
• The cost of a rotameter is low.
• It provides a linear scale.
• It has good accuracy for low and medium flow rates.
• The pressure loss is nearly constant and small.
• It can be used with corrosive fluids.

➢ Disadvantages:
• When an opaque fluid is used, the float may not be visible.
• It does not perform well in pulsating service.
• Glass tube types are subject to breakage.
• It must be installed in a vertical position only.

➢ Applications:
• The rotameter is used in process industries.
• It is used for monitoring gas and water flow in plants or labs.
• It is used for monitoring filtration loading.

EXPERIMENT: - 5
AIM: - To measure the direct and diffuse solar radiation using
pyranometer and pyrheliometer

Solar radiation is a term used to describe visible and near-visible (ultraviolet
and near-infrared) radiation emitted from the sun. The different regions are
described by their wavelength range within the broad band range of 0.20 to 4.0
µm (microns). Terrestrial radiation is a term used to describe infrared radiation
emitted from the atmosphere. The following is a list of the components of solar
and terrestrial radiation and their approximate wavelength ranges.
• Ultraviolet: 0.20 – 0.39 µm
• Visible: 0.39 – 0.78 µm
• Near-Infrared: 0.78 – 4.00 µm
• Infrared: 4.00 – 100.00 µm

Approximately 99% of solar, or shortwave, radiation at the earth’s surface is


contained in the region from 0.3 to 3.0 µm while most of terrestrial, or longwave,
radiation is contained in the region from 3.5 to 50 µm.
Outside the earth’s atmosphere, solar radiation has an intensity of approximately
1370 watts/meter². This is the value at mean earth-sun distance at the top of the
atmosphere and is referred to as the Solar Constant. On the surface of the earth on
a clear day, at noon, the direct beam radiation will be approximately 1000
watts/meter² for many locations. While the availability of energy is affected by
location (including latitude and elevation), season, and time of day, the biggest
factors affecting the available energy are cloud cover and other meteorological
conditions, which vary with location and time.

1) Ultraviolet Measurements
For the measurement of sun and sky ultraviolet radiation in the wavelength
interval 0.295 to 0.385 µm, which is particularly important in environmental,
biological, and pollution studies, the Total Ultraviolet Radiometer was developed.
This instrument utilizes a photoelectric cell protected by a quartz window. A
specially designed Teflon diffuser not only reduces the radiant flux to acceptable
levels but also provides close adherence to the Lambert cosine law. An
encapsulated narrow bandpass (interference) filter limits the spectral response of
the photocell to the wavelength interval 0.295-0.385 µm.

2) Shortwave Measurements: Direct, Diffuse and Global


As solar radiation passes through the earth’s atmosphere, some of it is
absorbed or scattered by air molecules, water vapor, aerosols, and clouds. The
solar radiation that passes through directly to the earth’s surface is called Direct
Normal Irradiance (DNI). The radiation that has been scattered out of the direct
beam is called Diffuse Irradiance. The direct component of sunlight and the
diffuse component of skylight falling together on a horizontal surface make up
Global Irradiance. The three components have a geometrical relationship:
Global = Diffuse + Direct × cos(θz), where θz is the solar zenith angle.
Direct radiation is best measured by use of a pyrheliometer, which measures
radiation at normal incidence. The Normal Incidence Pyrheliometer consists of a
wire wound thermopile at the base of a tube with a viewing angle of approximately
5º which limits the radiation that the thermopile receives to direct solar radiation
only.
The pyrheliometer is mounted on a Solar Tracker or an Automatic Solar
Tracker for continuous readings. Diffuse radiation can either be derived from the
direct radiation and the global radiation or measured by shading a pyranometer
from the direct radiation so that the thermopile is only receiving the diffuse
radiation. Eppley has developed a Shade Disk Adaptation Kit that mounts on the
SMT, which allows the diffuse and direct to be measured at the same time. Eppley
also manufactures the Shadow Band Stand, for diffuse measurements at sites where
there is no power available to operate an Automatic Tracker.
Global radiation is measured by a pyranometer. The modern pyranometer
manufactured by the Eppley Laboratory, using wire wound plated thermopiles,
can be one of three models: the Standard Precision Pyranometer , the Global
Precision Pyranometer, and the Black & White Pyranometer.
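A small sketch of the geometrical relationship stated above, useful for cross-checking the three measurements (all irradiance values assumed):

```python
import math

def global_from_components(dni_wm2, diffuse_wm2, zenith_deg):
    """Global horizontal irradiance: GHI = Diffuse + DNI*cos(zenith angle)."""
    return diffuse_wm2 + dni_wm2 * math.cos(math.radians(zenith_deg))

# Assumed clear-sky example: DNI 900 W/m^2, diffuse 100 W/m^2, sun 30° from zenith.
print(f"{global_from_components(900, 100, 30):.0f} W/m^2")  # ~879 W/m^2
```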

3) Longwave (Infrared) Measurements


The Precision Infrared Radiometer was a development of the PSP
Pyranometer (forerunner to the SPP Pyranometer) and continues to be the industry
standard for precise measurement of incoming or outgoing longwave radiation.
This thermopile detector is used to measure the “net radiation” of the PIR and a
case thermistor is used to determine the outgoing radiation from the case. A dome

thermistor is also included if one wishes to measure the dome temperature as
compared to the case temperature to make any “corrections” to the final result.

4) Albedo/Reflection Measurements
Albedo is the ratio of the reflected shortwave to the incoming shortwave
radiation. It is measured with two pyranometers, one facing upward and one
inverted, typically mounted back to back; this arrangement allows for better
calibration results and prevents the cold junctions of the two sensors from
affecting each other.

5) Net Radiation Measurements


Net radiation is the balance of four individual measurements: Incoming
Shortwave minus Reflected Shortwave, plus Incoming Longwave minus Outgoing
Longwave.

6) Sunshine Duration Measurements


Sunshine duration is typically defined as the amount of time that the Direct
Normal Irradiance (DNI) is greater than 120 Wm-2. This can be determined by
using the data collected from the sNIP.
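A sketch of deriving sunshine duration from logged DNI samples using this 120 Wm-2 threshold; the sample series and logging interval are assumed:

```python
# Assumed DNI samples (W/m^2) logged once per minute.
dni_series = [80, 95, 130, 250, 400, 380, 115, 90]
SAMPLE_MINUTES = 1
THRESHOLD = 120  # WMO sunshine threshold, W/m^2

sunshine_min = sum(SAMPLE_MINUTES for dni in dni_series if dni > THRESHOLD)
print(f"Sunshine duration: {sunshine_min} minutes")  # 4 minutes here
```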

❖ DEVICE
1. Pyranometer
A pyranometer is used to measure the energy from the sun. When levelled in the
horizontal plane, this is called the Global Shortwave Irradiance (GLOBAL) and
when positioned in a plane of a PV Array, it is called the Total Irradiance in the
plane of array (TPA). Inverted, a pyranometer is used to measure the Reflected or
Albedo Irradiance (ALBEDO). A pyranometer can also be shaded from the direct
beam of the sun to measure the Diffuse Shortwave Irradiance (DIFFUSE).

➢ Construction and Working of Pyranometer


Based on the design of the distinguished PSP Pyranometer, the SPP has a faster
response time, a reduced night-time thermal offset, an improved cosine response
and a better temperature dependence. This makes the SPP the ideal instrument for
high quality network measurements and as a transfer standard for calibration of
other pyranometers. A thermistor is included for measuring instrument
temperature.

➢ Specification of Pyranometer
• Application Network Measurements (Global)
• Classification Secondary Standard / High Quality
• Traceability World Radiation Reference (WRR)
• Spectral Range 295-2800 nm
• Output 0-10 mV analog
• Sensitivity approx. 8 μV / Wm-2
• Impedance approx. 700 Ω
• 95% Response Time 5 seconds
• Zero Offset a) 5 Wm-2
• Zero Offset b) 2 Wm-2
• Non-Stability 0.5%
• Non-Linearity 0.5%
• Directional Response 10 Wm-2
• Operating Temperature -50°C to +80°C
• Temperature Response 0.5% (-30°C to +50°C)
• Tilt Response 0.5%
• Calibration Uncertainty* < 1%
• Measurement Uncertainty*
• Single Point < 10 Wm-2
• Hourly Average approx. 2%
• Daily Average approx. 1%

Figure 5.1 Standard Precision Pyranometer (Eppley Laboratory)

2. Pyrheliometer
A pyrheliometer mounted on a solar tracker is used to measure the Direct
Beam Solar Irradiance (DNI) from the sun. Historically, the preferred field of
view for pyrheliometers was based on a 10:1 ratio, which equated to approximately
5.7°. To officially be considered a Secondary Standard, the pyrheliometer in
question must be calibrated with WRR traceability through a Primary Standard
Pyrheliometer such as the AHF Cavity Radiometer. EPLAB calibrations are
typically performed against a Secondary Standard Pyrheliometer.

➢ Specification
• Application Standard/Network Measurements
• Classification Secondary Standard* / High Quality
• Traceability World Radiation Reference (WRR)
• Spectral Range 250-3500 nm
• Field of View 5º
• Output 0-10 mV analog
• Sensitivity approx. 8 μV / Wm-2
• Impedance approx. 200 Ω
• 95% Response Time 5 seconds
• Zero Offset 1 Wm-2
• Non-Stability 0.5%
• Non-Linearity 0.2%
• Spectral Selectivity 0.5%
• Temperature Response 0.5%
• Calibration Uncertainty** < 1%
• Measurement Uncertainty**
• Single Point < 5 Wm-2
• Hourly Average approx. 1%
• Daily Average approx. 1%

Figure 5.2 Pyrheliometer (Eppley Laboratory)

➢ Calibrations
All calibrations at Eppley are performed according to internationally accepted
techniques and procedures with traceability to the proper World Standards.
Pyrheliometers (sNIP, NIP) are compared on Eppley’s Research Building Roof
Platform according to procedures described in ISO 9059 and Technical
Procedure, TP04 of The Quality Assurance Manual on Calibrations and are
traceable to the World Radiation Reference (WRR) through comparisons with
AHF standard self-calibrating cavity pyrheliometers which participate at the
International Pyrheliometric Comparisons (IPC).
Pyranometers are compared in Eppley’s Integrating Hemisphere according to
procedures described in ISO 9847 and Technical Procedure, TP01 of The Quality
Assurance Manual on Calibrations and are traceable to the World Radiation
Reference (WRR) through comparisons with AHF standard self-calibrating cavity
pyrheliometers which participate at the International Pyrheliometric Comparisons
(IPC).
Pyrgeometers (PIR) are compared in a Blackbody Calibration System according to
Technical Procedure of The Quality Assurance Manual on Calibrations and are
traceable to the International Practical Temperature Scale (IPTS) and to the World
Infrared Standard Group (WISG).
Total Ultraviolet Radiometers (TUVR) are compared according to procedures
described in Technical Procedure of The Quality Assurance Manual on Calibrations
and are traceable to the National Institute of Standards and Technology (NIST).
Eppley recommends a minimum calibration cycle of five (5) years but
encourages annual calibrations for the highest measurement accuracy.

➢ Applications
• Meteorology: Climate Study and Long-Term Monitoring / Modelling
The Earth’s radiation budget is a critical component of our weather and
climate, atmospheric circulation and ocean currents. Therefore, reliable and
accurate long-term measurements of shortwave and longwave irradiance are
essential for detecting climate change trends. Measurements are made from grassy
plains, rain forests, deserts, remote mountains, polar and equatorial regions, on
aircraft and balloons, and on ships and ocean buoys. Universities and government
institutions on every continent, often working in cooperation with other
institutions, create networks of stations that measure and study accurate, reliable,
long-term data sets of solar and atmospheric conditions.

• Solar Power: Site Selection, Predictions and Performance Testing


There is tremendous interest in decreasing the world’s reliance on fossil fuel
energy and increasing production of Alternative “Green” Energy such as Solar
Power. Photovoltaic (PV) and Concentrating Solar Power (CSP) are two
rapidly growing industries worldwide to meet these objectives. Accurate solar
measurements are used for determining the best site selections for the plants,
for predicting solar inputs and for testing the performance of the plants based
on the inputs.
In general, a PV site will mount a Pyranometer in the plane of the array to
measure the “Total Irradiance in the plane of array” (TPA) and the CSP site
will use the Pyrheliometer on a tracker to measure Direct Normal Irradiance

(DNI). Often though, the researchers will prefer to install a complete solar
monitoring station to measure Global, Diffuse and Direct (and TPA).

• Reference Cells:
Solar Reference (PV) Cells, made of the same materials used in PV panels,
are common for evaluating the performance of PV. However, different designs
and constructions of Reference Cells result in different performance results due
to temperature and spectral selectivity. Therefore, the SPP Pyranometer is used
as a thermopile-based standard against which different Reference Cells can be
compared, with traceability to the World Radiation Reference (WRR).

• Material Testing:
Testing of materials and systems of all types is performed with solar, UV
and infrared measurements playing a critical role. These tests vary widely over
many industries. Examples include colour or material degradation due to UV
exposure; performance testing on heating & cooling (AC) systems in buildings,
automobiles, military vehicles, and aircraft; reflectance tests of low angled
roofs or paving materials, improving bottling for soda, milk and other liquids.
These tests can be done outdoors using the sun as the source or in
Solar/Temperature Chambers in the lab and allow for repeating tests in multiple
locations.

ISO 9060 Pyranometer Classification
SPP: Standard Precision Pyranometer
GPP: Global Precision Pyranometer
PSP: Precision Spectral Pyranometer

Parameter              Secondary Standard   First Class    Second Class
Response time          < 15 s               < 30 s         < 60 s
Zero Offset-A          + 7 Wm-2             + 7 Wm-2       + 7 Wm-2
Zero Offset-B          ± 2 Wm-2             ± 2 Wm-2       ± 2 Wm-2
Non-stability          ± 0.8%               ± 1.5%         ± 3%
Non-linearity          ± 0.5%               ± 1%           ± 3%
Directional Response   ± 10 Wm-2            ± 20 Wm-2      ± 20 Wm-2
Spectral selectivity   ± 3%                 ± 5%           ± 10%
Temperature response   ± 2%                 ± 4%           ± 8%
Tilt response          ± 0.5%               ± 2%           ± 5%

Table 5.1 ISO 9060 Standard
Response Time:
Characterized by the time during which the instrument reaches 95% of the final
value. This is tested by capping the instrument in full sun and timing the drop to zero.

SPP: < 5 sec.


GPP: < 5 sec.

Figure 5.3 Response time

Zero Off-Set A:
Test (a) is for cases when the net thermal radiant flux density is 200 Wm-2,
such as when the instrument is at 30°C and the sky temperature is -10°C. Eppley
performs this test in its Blackbody Calibration System and by monitoring
night-time offsets.

SPP ± 2 Wm-2
GPP ± 2 Wm-2
PSP ± 2 Wm-2
8-48 ± 2 Wm-2

Figure 5.4 Response with Temperature Drop

Zero Off-Set B:
Test (b) is the result of a 5-degree change in temperature over one hour and is
performed in Eppley’s temperature chamber.

SPP < 5 Wm-2
GPP < 5 Wm-2
PSP < 7 Wm-2
8-48 < 1 Wm-2

Figure 5.5 Night-time Offset


Non-Stability:
The change in sensitivity per year is primarily due to UV degradation of the
Black Optical Lacquer on the thermopile. The simplest method of determining
this is through observational data.

SPP average 0.2% per year (since 2012)
GPP average 0.2% per year (since 2013 - limited sample)
PSP average less than 1% per year
8-48 less than 0.5% per year

Table 5.2 Non-Stability
Non-Linearity:
Deviation of sensitivity from low (100 Wm-2) to high (1000 Wm-2) intensity
is tested on the Eppley High Intensity Lamp Bench.

SPP ± 0.5%
GPP ± 0.5%
PSP ± 0.5%
8-48 ± 1.0%

Table 5.3 Non-Linearity

Spectral:
Eppley has independently tested the Schott Glass WG295 hemispheres as
well as the Black Optical Lacquer to assure uniform spectral transmittance from 0.3
to 2.8 microns.

Less than 2% from 350 to 1500 nm.

Figure 5.6 Spectral Selectivity of Eppley pyranometers

Temperature:
Temperature Dependence Tests are performed in Eppley’s Temperature
Chambers. Note that while the tests are often -30°C to +50°C, these are not the
operational limits of the instruments. These instruments can be used in hotter (or
colder) climates but you may wish to contact Eppley for a special temperature
dependence test in these extreme climate areas.

SPP ± 0.5%
GPP ± 0.5%
PSP ± 1.0%

Figure 5.7 Temperature Dependence of Eppley Pyranometers

EXPERIMENT: - 6
AIM: To carry out exhaust gas analysis with a gas chromatograph

• Principle of chromatography:
A gas chromatograph (GC) is an analytical instrument that measures the
content of various components in a sample. The analysis performed by a gas
chromatograph is called gas chromatography. Principle of gas chromatography:
The sample solution injected into the instrument enters a gas stream which
transports the sample into a separation tube known as the "column." (Helium or
nitrogen is used as the so-called carrier gas.) The various components are
separated inside the column. The detector measures the quantity of the
components that exit the column. To measure a sample with an unknown
concentration, a standard sample with known concentration is injected into the
instrument. The standard sample peak retention time (appearance time) and area
are compared to the test sample to calculate the concentration.
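A sketch of the external-standard concentration calculation just described, with assumed peak areas from the two injections:

```python
def concentration_external_standard(area_sample, area_standard, conc_standard):
    """Single-point external standard: C_sample = C_std * (A_sample / A_std),
    assuming the detector responds linearly over this range."""
    return conc_standard * (area_sample / area_standard)

# Assumed example: a standard of 2.0 %vol CO2 gives peak area 15400 counts;
# the exhaust sample's CO2 peak area is 11050 counts.
print(f"{concentration_external_standard(11050, 15400, 2.0):.2f} %vol")  # ~1.44
```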

Figure 6.1 Working of a gas chromatograph

• Gas chromatograph construction and working:
Main parts & their functions:
(1) Chromatograph Oven:
The analytical components of the gas chromatograph (the columns, valves,
and detectors) are enclosed in a heated oven compartment. The performance and
response of the chromatograph columns and the detectors are very susceptible to
changes in temperature, so the oven is designed to insulate these components from
the effects of ambient temperature changes and maintain a very stable temperature
internally. A gas chromatograph’s performance is directly correlated to the
temperature stability of the columns and detector, so the temperature control is
typically better than ±0.5 °F (±0.3 °C).

(2) Chromatograph Columns:


The heart of the gas chromatograph is the chromatograph columns. The
columns separate the gas mixture into its individual components using some
physical characteristic. In the case of most hydrocarbon applications, “boiling
point” columns are used and separate the components by their individual boiling
points; however, other applications may use molecular size (molecular sieve
columns) or polarity differences to achieve the separation.
Chromatograph columns are constructed by packing a tube with column
packing material. The material is held in place by sintered metal filters at either
end of the tube. The packing material consists of very small support material that
has a very thin coating of liquid solvent. This is called the stationary phase.
The sample gas is carried through the columns by the carrier gas. The combination
of the carrier gas and the sample gas is called the moving phase. The carrier gas
is a gas which is not a component of interest (it is not being measured) and acts
as a background gas that permits the easy detection of the components being
measured. Typically, helium is used for hydrocarbon applications; however,
hydrogen, argon, and nitrogen are also used, depending on the application.
As the gas sample moves through the column, components with lower boiling
points move more quickly than the components with higher boiling points, so
they elute first. The speed at which this separation occurs is dependent on the
temperature of the column. The length of the column determines the amount of
separation of the components.

(3) Detectors:
After the components have been separated by the chromatograph columns,
they then pass over the detector. The most common detector used for
hydrocarbon gas measurements is the thermal conductivity detector (TCD).
The TCD uses two thermistors whose resistance drops as their temperature rises.
The thermistors are connected on either side of a Wheatstone bridge with a
constant-current power supply. As the carrier gas passes over the thermistors, it
removes heat from the thermistor bead, depending on the carrier gas’s thermal
conductivity. Helium is a commonly used carrier gas because it has a very large
thermal conductivity and, therefore, will reduce the temperature of the
thermistor bead considerably.
On the reference side of the detector, only pure carrier gas will pass over the
thermistor bead, so the temperature and resistance will remain relatively constant.
On the measure side of the detector, the carrier gas and each component, in
order of elution from the columns, pass over the thermistor bead, removing heat
from the bead depending on their thermal conductivity. When only carrier
gas is passing over the detector bead, the temperature of the bead will be similar to
the reference detector (any difference is compensated for using the bridge
balance).
However, the gas components will have different thermal conductivities than
the carrier gas. As the component flows across the thermistor bead, less heat is
removed from the bead, so the temperature of the thermistor increases, reducing
the resistance. This change in resistance imbalances the electrical bridge and
results in a milli-voltage output. The amount of difference and, therefore, the
output signal is dependent on the thermal conductivity and the concentration of
the component.
The detector output will then be amplified and passed to the gas
chromatograph controller for processing.

(4) Gas Chromatograph Controller:


The controller may be remote from the oven or integral, depending on the
design and application, and performs several functions, including:
■ Control the valve timing and oven temperature
■ Control stream selection
■ Store and analyze the detector output
■ Calculate composition from the detector output
■ Calculate physical properties from the composition (e.g. BTU, Specific
Gravity)

➢ Applications of Gas Chromatography:
• GC analysis is used to calculate the content of a chemical product, for example in
assuring the quality of products in the chemical industry, or in measuring toxic
substances in soil, air or water.
• Gas chromatography is used in the analysis of: air-borne pollutants,
performance-enhancing drugs in athletes' urine samples, oil spills, and essential
oils in perfume preparation.

EXPERIMENT: - 7
AIM: - To study and familiarize with data logging and acquisition systems.

➢ Introduction
1) Data Logging System:
Temperature and relative humidity levels can affect various types of
measurements recorded in many fields. Hence, temperature and humidity must be
maintained within certain limits [1] to achieve repeatable results, reduce the cost
of tedious corrections and meet regulatory and correctness requirements. It has
been found that chart recorders cannot record temperature and humidity
accurately enough to meet quality and regulatory requirements. Chart recorders
are difficult to calibrate and to maintain; many are prone to sensor drift, which
tends to get worse over time and may not be fully correctable. As chart recorders
use moving parts, they gradually deteriorate and require an increasing amount of
maintenance and calibration to keep them accurate. Data loggers use digital
technologies, such as advanced microprocessors, solid-state sensors and fully
featured software, which maximize accuracy. As there are no moving parts to wear
out, and with powerful software compensation, data loggers can deliver greater
accuracy over longer periods of time. Due to their small size and portability, they
can also be moved closer to the critical areas where calibrations take place,
providing greater accuracy for each calibration.

2) Data acquisition systems:


Data acquisition systems have evolved over time from electromechanical
recorders containing typically from one to four channels to all-electronic systems
capable of measuring hundreds of variables simultaneously. Early systems used
paper charts and rolls or magnetic tape to permanently record the signals, but since
the advent of computers, particularly personal computers, the amount of data and
the speed with which they could be collected increased dramatically. However,
many of the classical data-collection systems still exist and are used regularly.
A data logger is an electronic device that automatically records, scans and
retrieves data with high speed and great efficiency during a test or
measurement, at any part of the plant, with time [2]. The type of information
recorded is determined by the user, i.e. whether temperature, relative humidity,
voltage or pulses are to be recorded; therefore it can automatically measure the
electrical output from any type of transducer and log the value. A data logger
works with sensors to convert physical phenomena and stimuli into electronic
signals such as voltage or current. These electronic signals are then converted into
binary data, which is easily analyzed by software and stored for post-process
analysis. Data loggers are based on a digital processor. A data logger records data
over time, in relation to location, either with built-in instruments and sensors
or via external instruments and sensors. A data logger can automatically collect
data on a 24-hour basis; this is the primary and most important benefit of using
data loggers.

1) Operation of Data Logger


The defining function of a data logger is to take sensor measurements and store the
data for future use. However, a data logging application rarely requires only data
acquisition and storage; the ability to analyze and present the data, to determine
results and make decisions based on the logged data, is also needed. A complete data
logging application generally requires most of the elements/components
illustrated below.
i. Experiment: The various parameters whose values are to be recorded from a
particular environment or object are given as input to the sensors in the experiment
part.
ii. Sensors: The inputs from various sources are given to the data logger through
various sensors that measure parameters such as temperature and humidity,
where electrical signals are converted to temperature and humidity values.
iii. User Interface: The interface for interaction with the software and sensors is
provided, and analysis using the implemented algorithm is carried out for storage of the data.
iv. Software: It displays the information recorded from the sensors and also
maintains the data for long-term storage.
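A minimal Python sketch of this read-convert-store cycle is given below. The read_sensor() function and the linear calibration are hypothetical stand-ins for real hardware and a real sensor curve, so this is an illustration of the flow, not a driver for any particular logger.

```python
import csv
import time
import random

def read_sensor():
    # Placeholder for real hardware: returns a raw sensor voltage
    # (simulated here with small random noise around 2.5 V).
    return 2.5 + random.uniform(-0.05, 0.05)

def volts_to_celsius(v):
    # Assumed linear calibration of 100 mV per degree C, for illustration.
    return v / 0.1

# Log one sample per second to a CSV file for later analysis.
with open("temperature_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "temperature_C"])
    for _ in range(10):
        v = read_sensor()
        writer.writerow([time.time(), round(volts_to_celsius(v), 3)])
        time.sleep(1)
```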

➢ What are the different types of data loggers?


The four main types of data loggers are: stand-alone USB data loggers,
Bluetooth Low Energy (BLE)-enabled data loggers, web-based systems, and
wireless sensors (data nodes).
• USB Loggers
Short-term trend logging with manual offload

• Bluetooth (BLE) Loggers
Wireless data access via mobile devices
• Web-based Systems
Long-range wireless internet access
• Wireless Sensors
Short-range centralized data collection

1. USB data loggers are compact, reusable, and portable, and offer low cost and
easy setup and deployment. Internal-sensor models are used for monitoring at
the logger location, while external-sensor models (with flexible input channels
for a range of external sensors) can be used for monitoring at some distance
from the logger. USB loggers communicate with a computer via a USB
interface, but for greater convenience, a data shuttle device can be used to
offload data from the logger for transport back to a computer.
2. BLE-enabled loggers are also compact, reusable, portable, easy to set up and
deploy, and offer the added benefit of being able to measure and transmit data
wirelessly to mobile devices over a 100-foot range. These loggers are
particularly useful in applications where deployments are in hard-to-reach or
limited-access areas. Without having to disturb the logger, you can use a cell
phone or tablet to view data in graphs, check the operational status of loggers,
share data files, and store data in the cloud.

Figure 7.1 USB data logger

3. Web-based data logging systems enable remote, around-the-clock, internet-
based access to data via cellular, Wi-Fi, or Ethernet communications. These
systems can be configured with a variety of external plug-in sensors and
transmit collected data to a secure web server for access.
4. Wireless sensors, or data nodes, transmit real-time data from dozens of points
to a central computer or gateway, eliminating the need to manually retrieve and
offload data from individual data loggers.

2) DIGITAL DATA ACQUISITION SYSTEM


In general, analog DASs are used for measurement systems with wide
bandwidth, but their accuracy is lower. Digital DASs, which offer high accuracy
and low per-channel cost for narrow-bandwidth (slowly varying) signals, are
therefore used where precision matters. The figure shows the block diagram of a
digital data acquisition system. The functions of a digital data acquisition system
include handling analog signals, making the measurement, converting and
handling digital data, and internal programming and control.

Figure 7.2 Block diagram of a digital data-acquisition system

Here, the transducer translates physical parameters into electrical signals
acceptable to the acquisition system. The physical parameters include
temperature, pressure, acceleration, weight, displacement, velocity, etc.
Electrical quantities such as voltage, resistance and frequency may be
measured directly. The signal conditioner includes the supporting circuitry for the
transducer. This circuit may provide excitation power, balancing circuits and
calibration elements; an example is a strain-gauge bridge balance and
power supply unit. The scanner or multiplexer accepts multiple analog inputs and
sequentially connects them to one measuring instrument. The signal converter
translates the analog signal into a form acceptable to the analog-to-digital
converter; an example is an amplifier used to amplify the low-level voltages
generated by thermocouples or strain gauges.
The analog-to-digital converter (ADC) converts the analog voltage to its
equivalent digital form. The output of the ADC may be displayed visually and is
also available as voltage outputs in discrete steps for further processing or
recording on a digital recorder. The auxiliary section contains instruments for
system programming and digital data processing, such as linearizing and limit
comparison. These functions may be performed by individual instruments or by a
digital computer. The digital recorder records digital information on punched
cards, perforated paper tape, magnetic tape, typewritten pages or a combination
of these media. The digital recorder may be preceded by a coupling unit that
translates the digital information into the proper form for entry into the particular
digital recorder selected.
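As an illustration of the ADC stage, the following sketch converts raw converter counts to volts. It assumes an ideal single-ended 12-bit ADC with a 0 to 5 V full-scale range; both values are illustrative and not tied to any particular instrument.

```python
def adc_counts_to_volts(counts, n_bits=12, v_ref=5.0):
    """Convert a raw ADC reading to volts.

    Assumes an ideal ADC spanning 0..v_ref; real converters add offset
    and gain errors that calibration must remove.
    """
    full_scale = (1 << n_bits) - 1   # 4095 counts for a 12-bit ADC
    return counts * v_ref / full_scale

# A mid-scale reading of 2048 counts corresponds to roughly 2.5 V:
print(adc_counts_to_volts(2048))   # ~2.5 V
```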

Figure 7.3 Digital Data Acquisition System

➢ Types of Data Acquisition Systems


Data Acquisition Systems, often abbreviated to DAS or DAQ, are systems
designed to measure and track some form of physical system, and convert this data
into a form that can be viewed and manipulated on a computer.
The design and implementation of DAS is a complicated field. The first DAS
were designed by IBM back in the 1960s, and were huge assemblages of computers
and hardware. As the field has developed, more generic systems have become
available, and accordingly it is now possible to measure and analyze almost any
form of physical system.

DAS are now used in many different fields, from industrial production to scientific
experiments, and the type of system used is different depending on each
application.
In general, however, a DAS can be broken into three components: the
sensors used to collect data from the physical system, the circuitry used to pass
this data to a computer, and the computer system on which the data can be viewed
and analyzed.

Figure 7.4 Data Acquisition Systems

If you are setting up a DAS, these are also the three factors that should be
considered. Time spent thinking about exactly which data you need to collect, and
how you want to work with the data once it is collected, can save significant time
and money further down the line.
Let’s take a look at some of the most common options in all three of these areas.

➢ Sensors
The design of any DAS must start with the physical system which is being
measured. With the range of sensors available today, it is possible to measure
almost any physical property of the system you are interested in. Careful
consideration must therefore be given to exactly the type of data you need to
collect. It might be nice to be able to track the temperature of your industrial
printer, for instance, but you need to think about whether this information will
actually be useful to you.

Examples of common phenomena measured by DAS are temperature,
light intensity, gas pressure, fluid flow, and force.
For each variable to be measured, there exists a particular type of sensor.
Sensors, in this sense, are essentially transducers, transforming physical energy
into electrical energy. For instance, a basic pressure sensor will be activated and
driven by the pressure it is measuring, and pass this information as an electronic
signal to the DAS.
For this reason, it is important to recognize that it is not possible to measure
every variable you want without affecting the system itself. This is because any
sensor will affect the system it is designed to measure, and remove energy from it.
This is especially important if the system being measured works on small
tolerances, because the addition of even a small sensor to these systems can drain
too much energy from them for effective operation.
In short, though there is likely a sensor available to measure almost any aspect
of your systems, it is not always wise to try and measure every variable. Instead,
think carefully about the data you actually need, and use the minimum number of
sensors that will achieve this.

➢ Signal Processing
Typically, DAS use dedicated hardware to pass signals from sensors to the
computer systems that will collect and analyze the data. Converting a messy,
sometimes noisy, signal from a physical system into a format that can be used and
manipulated on a computer can be a tricky business.
One of the first obstacles to be overcome in this regard is signal strength. As
outlined above, sensors are typically designed to take the smallest possible amount
of energy from the system they are being used to measure. In practice, this
also means that the signal they output is of very low intensity, and must be
amplified to be of any use.
It is therefore critical to use an amplifier that is able to amplify the signal
cleanly. A noisy amplifier will ultimately warp and color the data collected, which
in some cases can render it useless.
Another thing to think about when designing a DAS is the type of signal used
to pass data between the various parts of your system. Most sensors output a
single-ended analog signal. Whilst this type of signal is good at capturing the raw
state of the system being measured, it is also very susceptible to noise and
distortion. A common fix for this problem is to convert the signal coming from
the sensors into a differential signal, which is much more stable and easier to
work with.
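The numerical sketch below illustrates why the differential scheme helps, under the simplifying assumption that the noise picked up along the cable is purely common-mode, i.e. identical on both wires; real cabling only approximates this.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)          # the quantity we care about
noise = 0.5 * rng.standard_normal(t.size)   # noise picked up along the cable

# Single-ended: the noise rides directly on the one signal wire.
single_ended = signal + noise

# Differential: the same (common-mode) noise appears on both wires,
# so subtracting the pair recovers the signal and cancels the noise.
wire_plus = signal / 2 + noise
wire_minus = -signal / 2 + noise
differential = wire_plus - wire_minus

print(np.std(single_ended - signal))   # large residual error
print(np.std(differential - signal))   # effectively zero: noise cancelled
```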

➢ Computer Hardware and Software


Once the signal has been amplified and cleaned up, it must be fed into a
computerized system for collection and analysis. Nowadays, most DAS use
standard PC hardware, meaning that if components of the system fail, they can be
easily replaced with off-the-shelf items.
First and foremost, the signal must be converted into a digital format that the
computer understands. Typically, this is done using the pre-existing ports on a PC,
such as the parallel ports, USB, etc. Another approach is to use cards connected to
slots in the motherboard. With this second approach, a common problem is that
the number of ports on a PCI card is too few to accept all of the inputs needed. To
work around this problem, a breakout box is used to combine multiple signals into
a single input.
DAS cards often contain multiple components that are able to perform signal
processing before passing the signal to the software. In the most advanced cards,
these functions are accessible via a bus by a microcontroller, though some cheaper
systems use hard-wired logic. For both types of card, proprietary device drivers
are often needed.
The next stage in processing the signal is to pass it to software. Nowadays, a
vast variety of different software solutions are available for use with DAQ, and the
choice of which to use depends on the type of data being collected and how it needs
to be processed. Typically, however, such systems are based on commonly
understood programming languages such as C++ or MATLAB, providing a large
scope for customization.
The data acquisition market has continued to change, however, and DAQ
cards are increasingly an obsolete form of DAQ. Newer devices that
allow for more robust features and capabilities, while not being limited by fixed
cards, are now starting to dominate the market.

➢ Advantages of DAQifi Devices
DAQ cards typically output data using a dedicated hard link, and in years past
this often meant having a separate PC workstation for every data acquisition
process. Not only did this mean extra expense in terms of hardware, it often meant
that bringing data from several processes together was a manual, painful business.
DAQifi cards send the collected data over a WiFi network — either an existing one,
or one generated by the device itself — to custom software.
What this means in practice is that a single PC, tablet, or even smart phone can
be used to aggregate all the data being collected, bringing it all together for easy
analysis and manipulation. This also means that the computer being used to collect
and manipulate data does not need any additional hardware to be used for this
purpose.
In addition, DAQifi devices represent better value than many DAQ card
solutions. This is because DAQ cards are often made to be used to collect one type
of data only, and in many cases, this means that a bank of cards must be used in
order to collect even quite basic data. The flexibility of DAQifi devices makes
them cheaper to implement in many situations.
This is especially true in situations where portability is paramount. The fact
that DAQifi devices run on their own power makes them ideal for situations where
having a dedicated PC workstation is simply impossible. This is the case in many
industrial processes, where the environment is not conducive to the health of
computer hardware, and also in situations where the system under study is
inherently mobile, such as in automotive engineering.
Lastly, the user interface which comes as standard on DAQifi devices means
that using them is incredibly simple in comparison to many DAQ card solutions.
Often, even in high-end scientific applications, all that is needed from a data
acquisition system is for it to feed data to a centralized device, in a format which
is easy to work with, for later analysis.
This is exactly what DAQifi devices achieve, and it is therefore not surprising
that they are eclipsing DAQ card solutions in many situations.

EXPERIMENT: - 8

AIM: - To study various electronic controls used in thermal measurement.

➢ Introduction
Temperature measurement in today’s industrial environment encompasses a
wide variety of needs and applications. To meet this wide array of needs the process
controls industry has developed a large number of sensors and devices to handle this
demand. In this experiment you will have an opportunity to understand the concepts
and uses of many of the common transducers, and actually run an experiment using
a selection of these devices. Temperature is a very critical and widely measured
variable for most mechanical engineers.
Many processes must have either a monitored or a controlled temperature. This can
range from the simple monitoring of the water temperature of an engine or load
device to something as complex as the temperature of a weld in a laser welding
application. More difficult measurements, such as the temperature of smoke-stack
gas from a power generating station or blast furnace, or the exhaust gas of a rocket,
may also need to be monitored. Much more common are the temperatures of fluids
in processes or process support applications, or the temperatures of solid objects
such as metal plates, bearings and shafts in a piece of machinery.

1. PWM Control:
The PWM or Pulse Width Modulation control is used to control higher-end
devices. The PWM signal is a square-wave output of fixed frequency that varies
the ON duration of the signal, i.e. the duty cycle. It is typically a low-level DC
voltage signal in the range of 0 to 5 volts or 0 to 24 volts, though it can also be
realized as a current output such as 4 to 20 milliamps. In each of these cases the
minimum value represents the OFF state and the high value represents the ON
state of the signal.
This type of a signal is normally used to control valves or positioners. Typically,
the base frequency of this type of control is in the range of a few hundred hertz, but
can be as high as ten or twenty thousand hertz. This frequency is dependent on the
particular controller and the needs of the device under control. The on percentage of
the PWM signal generates the desired valve opening, closing or position.
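The sketch below shows the timing logic of a PWM output as just described. A real controller generates this waveform in hardware; the software function here is purely illustrative.

```python
def pwm_state(t, frequency_hz, duty_fraction):
    """Return True (ON) or False (OFF) for a PWM wave at time t.

    The output is ON for the first duty_fraction of every period,
    e.g. duty_fraction=0.25 keeps a valve driven 25% of the time.
    """
    period = 1.0 / frequency_hz
    phase = (t % period) / period
    return phase < duty_fraction

# A 200 Hz signal at 30% duty cycle (period 5 ms), sampled at a few instants:
for t in (0.0000, 0.0010, 0.0020, 0.0040):
    print(t, pwm_state(t, 200.0, 0.30))   # ON, ON, OFF, OFF
```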
Figure 8.1 PWM controller

➢ Analog Output:
The analog output control method uses a variable analog signal, such as a 0 to
10-volt DC signal, a -10 to +10-volt signal or a current signal (0 to 20 mA or 4 to
20 mA), as the control output. This signal is generated by the controller and, as
with PWM control, its level is proportional to the controller's command signal. As
an example, if the controller was generating a 0 to 10-volt control signal, a 25%
output would be 2.5 volts, and a 50% control output would be 5 volts. This signal
is very commonly used in a 4-20 milliamp output configuration, since a signal
below 4 milliamps indicates a line failure and a definite control action can be taken
to put the system into a fail-safe mode. This signal output is always a very
low-power signal, and additional power amplification is required at the control
device end to make an actual control move.
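A short sketch of the 4-20 mA scaling just described; the function names are illustrative, and the fault check simply encodes the below-4 mA line-failure convention mentioned above.

```python
def percent_to_milliamps(percent):
    """Map a 0-100% controller output onto a 4-20 mA current loop."""
    if not 0.0 <= percent <= 100.0:
        raise ValueError("command out of range")
    return 4.0 + (percent / 100.0) * 16.0

def loop_fault(milliamps):
    # A reading below 4 mA indicates a broken line: fail safe.
    return milliamps < 4.0

print(percent_to_milliamps(25.0))   # 8.0 mA
print(percent_to_milliamps(50.0))   # 12.0 mA
print(loop_fault(2.0))              # True: line failure detected
```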
➢ Relay Output:
The relay output control generally consists of a form C or form A relay contact.
The relay contact generally has a current rating of ten amps or less, and often less
than one amp. This type of control is the least expensive of the control outputs and
is only useful in an ON/OFF controller. The cycle time from ON to OFF usually
needs to be longer than five seconds to prevent premature failure of the relay.
There are two ways in which the relay contact can be shown; the graphic below
shows both methods for both a form A and a form C contact.

➢ DC Pulse output:
This method of control output generates a low-power DC signal. The
low-power signal is fed to a control device that can turn this low-power switching
signal into either a high-power signal or an actual control value. For instance, a
pulse output signal used for ON/OFF control, wired to a solid-state relay,
can allow a single controller to drive hundreds of thousands of watts of heating
capacity. If the same signal is used in a PWM system, it can be used to control the
position of valves the size of small cars. The signal itself tells the control device what
to do, and the control device uses additional power to amplify this signal into a
physical change.

➢ SSR Output:
The solid-state relay output is an AC semiconductor version of a form A contact;
that is, it is either ON or OFF. The solid-state relay output will switch ONLY
alternating-current loads and will typically be limited to a maximum current of 5
amps. If larger currents are required, an external SSR is recommended. One caution
to note: solid-state relays will switch only an alternating-current load, and will only
turn off as the voltage on the line side of the relay crosses zero, which happens
only twice in each cycle. For this reason, setting an ON/OFF time of less than 1/60th
of a second will produce unexpected results. It also means that if you select a longer
time and are using a PWM method of control, the pulse-width time (T2) will always
change in roughly 16-millisecond increments. This holds even if you are using a DC
pulse-width system to control an external SSR. In general, it is a good idea to set the
T1 time of any PWM or ON/OFF system driving an SSR to not less than one second.

2. Proportional control:
The most basic control algorithm for any device is to measure a
command signal and subtract a feedback signal from it, creating an error signal. This
error signal is amplified by a certain amount, known as the GAIN. As the
feedback signal deviates farther from the command signal, the error × gain signal
grows proportionally larger. This is the signal that generates the control output. In
the case of ON/OFF control, when the proportional signal grows higher than a
specified limit, the output is turned off; when the signal falls below a certain
amount, the output is turned on. This is a typical control method for a heater system.
Using proportional control with a PWM or analog signal makes for a more efficient
system. In this control mode, the amount of deviation from the set point changes the
pulse width or analog output: the larger the error signal, the more the output signal
is changed. This is the essence of proportional control. The output is changed
in proportion to the error signal.
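The following sketch implements the error × gain logic described above for a hypothetical heater whose power command is clamped to 0-100%; the gain and temperature values are illustrative, not tuned settings.

```python
def proportional_output(setpoint, feedback, gain, out_min=0.0, out_max=100.0):
    """Classic proportional control: the output grows with error * gain.

    The result is clamped to the actuator range (here 0-100% heater power).
    """
    error = setpoint - feedback
    return max(out_min, min(out_max, gain * error))

# With gain 10, a 3-degree error commands 30% power; a 15-degree
# error saturates the heater at 100%.
print(proportional_output(80.0, 77.0, 10.0))   # 30.0
print(proportional_output(80.0, 65.0, 10.0))   # 100.0
```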

3. PD (Proportional – Derivative control):
If you change the output signal quickly in response to a smaller change in the error
signal, the system will hold the temperature somewhat better. The problem
is that with this method the control has a tendency to overshoot, i.e. to raise the
temperature higher than desired, because it heats faster to get to the set point faster.
The rate of change of the feedback signal is known as the derivative of the signal.
If the feedback signal changes too quickly, there is a chance the system will
overshoot the desired value. By taking the rate of change of the signal into account,
the control output can be slowed down to reduce this: the derivative of the feedback
is subtracted from the proportional term. The new control algorithm looks
something like:
output = (command − feedback) × PropGain − d(feedback)/dt × DGain

4. PID (Proportional – Integral – Derivative):


The PID control takes the PD control one step farther. Since the PD controller
can actually settle at a set point different from the desired one (due to the
derivative action, if the proportional gain is too low), an additional element is
needed to make sure that it gets there. The derivative action only works while the
feedback is changing; if the proportional gain is not high enough, the
system will happily settle some place near, but not at, the desired control point. An
integral is a sum over time, in this case the sum of the errors over a period of
time. If the system has settled at a point below the set point, for instance, there will
be some remaining error signal (command − feedback). Even if this error is small,
since the integral is a sum over time, the integral value will begin building and over
time grow larger, eventually driving the output enough to remove the remaining
offset.

Figure 8.2 PID controller
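A minimal discrete-time PID sketch combining the three terms discussed above; following the PD discussion, the derivative is taken on the feedback signal rather than on the error. The gains and sample time are illustrative, not tuned values.

```python
class PID:
    """Discrete PID: P on the error, I on the accumulated error,
    D on the feedback signal (as described above)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_feedback = None

    def update(self, setpoint, feedback):
        error = setpoint - feedback
        self.integral += error * self.dt          # sum of errors over time
        if self.prev_feedback is None:
            d_feedback = 0.0                      # no history on first call
        else:
            d_feedback = (feedback - self.prev_feedback) / self.dt
        self.prev_feedback = feedback
        return self.kp * error + self.ki * self.integral - self.kd * d_feedback

pid = PID(kp=8.0, ki=0.5, kd=2.0, dt=1.0)
print(pid.update(setpoint=80.0, feedback=75.0))   # initial corrective output
```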

EXPERIMENT: - 9

AIM: To study and compare various advanced measurement techniques.

1. Interferometers
Interferometers are investigative tools used in many fields of science and
engineering. They are called interferometers because they work by merging two
or more sources of light to create an interference pattern, which can be measured
and analyzed; hence 'Interfere-meter'. The interference patterns generated by
interferometers contain information about the object or phenomenon being
studied. They are often used to make very small measurements that are not
achievable any other way. This is why they are so powerful for detecting
gravitational waves: LIGO's interferometers are designed to measure a distance
1/10,000th the width of a proton!
Widely used today, interferometers were actually invented in the late 19th
century by Albert Michelson. The Michelson Interferometer was used in 1887 in
the "Michelson-Morley Experiment", which set out to prove or disprove the
existence of "Luminiferous Aether", a substance at the time thought to permeate
the Universe. All modern interferometers have evolved from this first one since it
demonstrated how the properties of light can be used to make the tiniest of
measurements. The invention of lasers has enabled interferometers to make the
smallest conceivable measurements, like those required by LIGO.

➢ Construction
Remarkably, the basic structure of LIGO's interferometers differs little from
the interferometer that Michelson designed over 125 years ago, but with some
added features, described in LIGO's Interferometer. Because of their wide
application, interferometers come in a variety of shapes and sizes. They are used
to measure everything from the smallest variations on the surface of a microscopic
organism, to the structure of enormous expanses of gas and dust in the distant
Universe, and now, to detect gravitational waves. Despite their different designs
and the various ways in which they are used, all interferometers have one thing in
common: they superimpose beams of light to generate an interference pattern. The
basic configuration of a Michelson laser interferometer is shown in the figure
below. It consists of a laser, a beam splitter, a series of mirrors, and a
photodetector (the black dot) that records the interference pattern.

Figure 9.1 Interferometers


➢ Working
In a Michelson interferometer, a laser beam passes through a 'beam splitter'
that splits the original single beam into two separate beams. The beam splitter
allows half of the light to pass through, while reflecting the other half 90-degrees
from the first. Each beam then travels down an arm of the interferometer. At the
end of each arm is a mirror. This mirror reflects each beam back along its initial
path toward the beam splitter where, now coming from the opposite direction, the
two beams are recombined into a single beam. As they meet up, their waves
interfere with each other before traveling to a photodetector that measures the
brightness of the recombined beam as it returns. LIGO's interferometers are set
up so that, as long as the arms don't change length while the beams are traveling,
when the two beams recombine, their light waves cancel each other out
(destructively interfere) and no light reaches the photodetector.

But what happens if the distance traveled by the lasers does change while they
are making their way through the interferometer? If one arm gets longer than the
other, one laser beam has to travel farther than the other and it takes longer to
return to the beam splitter. Though the beams entered the interferometer at the
same time, they don't return to the beam splitter at the same time, so their light
waves will be offset when they recombine. This changes the nature of the
interference they experience. Rather than totally destructively interfering,
resulting in no light coming out of the interferometer, a little light will 'leak' out
and be seen by the photodetector. If the arms change length over a period of time
(say with the passage of a gravitational wave), the pattern of light coming out of
the interferometer will also change in-step with the movement of the arms.
Basically, a flicker of light emerges. In an interferometer, any change in light
intensity indicates that something happened to change the distance traveled by
one or both laser beams. Critically, the shape of the interference pattern emerging
from the interferometer over a period of time can be used to calculate precisely
how much change in length occurred over that period. LIGO looks for very
specific characteristics (how the interference pattern changes over time) to
determine if it has caught the passage of a gravitational wave.
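The relationship between arm-length difference and detected brightness can be sketched numerically for an idealized Michelson interferometer. The cos-squared response below follows from simple two-beam interference, and the 1064 nm wavelength (that of LIGO's laser) is used only as an example value.

```python
import math

def output_intensity(delta_L, wavelength=1064e-9, i0=1.0):
    """Idealized Michelson output: the two beams acquire a round-trip
    path difference of 2*delta_L, giving a phase difference
    phi = 2*pi*(2*delta_L)/wavelength and intensity I = I0*cos^2(phi/2)."""
    phi = 2 * math.pi * (2 * delta_L) / wavelength
    return i0 * math.cos(phi / 2) ** 2

# Equal arms: fully constructive recombination. A quarter-wavelength
# change in one arm flips the output to a dark fringe.
print(output_intensity(0.0))           # 1.0 (bright)
print(output_intensity(1064e-9 / 4))   # ~0.0 (dark)
```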

➢ What is an Interference Pattern?


To better understand how interferometers work, it helps to understand more
about 'interference'. If you have ever thrown stones into a flat, glassy pond or pool
and watched what happened, you already know about interference. When the
stones hit the water, they generate concentric waves that move away from each
stone's point of entry. Where two or more of those concentric waves intersect,
they interfere with each other, becoming larger or smaller at the points of
intersection or completely canceling each other out. The visible pattern occurring
where waves intersect constitutes an "interference" pattern.

➢ Interference patterns in water


Interference patterns in water. The "interference" occurs in the regions where
the expanding circular waves from the different sources intersect. (Credit:
Wikimedia commons)

➢ Constructive and destructive interference

Figure 9.2 Interference in water


When the peaks of two waves meet, they add up; when the peaks of
one wave meet the valleys of another, they cancel out. The basics of interference
are simple to understand. The figure shows two specific kinds of interference:
total constructive interference and total destructive interference. In total
constructive interference, the peak of one wave merges with the peak of another
wave, and they add together to 'construct' a larger wave. In total destructive
interference, the peak of one wave meets the valley of an identical wave, and they
totally cancel each other out (they 'destroy' each other).

Figure 9.3 Interference

In nature, the peaks and valleys of one wave will not always perfectly meet
the peaks or valleys of another wave like the illustration shows. Regardless of how
they merge, the height of the wave resulting from the interference always equals
the sum of the heights of the merging waves. When the waves don't meet up
perfectly, partial constructive or destructive interference occurs. The animation
below illustrates this effect. If you watch closely, you will see that the black wave
goes through a full range of heights from twice as high and deep (where total
constructive interference occurs) to flat (where total destructive interference
occurs) as the red and blue waves pass 'through' each other (interfere). In this
example, the black wave is the interference pattern! Note how it continues to
change as long as the red and blue waves continue to interact.

EXPERIMENT: - 10

AIM: To perform an experiment with a thermal system and carry out uncertainty analysis for the same.

➢ Introduction:
The choice of the appropriate installation method plays an important role in
accurate temperature measurement. In a cryogenic and high-vacuum
environment, due to poor contact between the cryogenic temperature sensor and
the surroundings it is installed in and intended to measure, the self-heating caused
by the sensor measuring current brings about a temperature difference and
creates a potential temperature measurement error. The self-heating temperature
difference is directly proportional to the thermal resistance of a mounted sensor,
which means that a lower installation thermal resistance is advantageous for
obtaining better measurement results.
A measurement model for the installation thermal resistance of the sensor is built
in terms of the two-current method, which is commonly used to measure the
self-heating effect. A cryostat that can provide variable temperature for accurate
temperature measurement and control experiments was designed and manufactured.
This cryostat can reach 3 K in a few hours and the sample temperature can reach
as high as 20 K. Based on the experimental results, the measurement uncertainty
of the thermal resistance is also analyzed and calculated. To obtain the best
measurement results in the cryostat, the thermal resistances of sensors with two
installation methods are measured and compared.

➢ Experimental Setup
In order to reach cryogenic temperatures, a new cryostat was designed, as
shown in the figure. This cryostat has a simple structure consisting of a two-stage
GM cryocooler (Cryogenics of America, Inc. RDK-415D), the cryostat wall, a
radiation shield, a thermal damper, a sample holder, Cernox temperature sensors,
a temperature controller and other measuring instruments. All the measurement
devices used for acquiring data were connected by IEEE-488 cables and controlled
by a personal computer using a program written in LabVIEW software.

The radiation shield was made of copper with a thickness of 1 mm. It was
connected to the first stage of the cryocooler by a flange. The sample holder and
heat sink are made of oxygen-free high-conductivity copper (OFHC) in order to
obtain good heat conduction. The thermal damper, with a thickness of 1.2 mm and
a diameter of 45 mm, is made of PTFE and located between the sample holder and
the cold head to reduce temperature fluctuations.

Figure 10.3 Cryostat

Two temperature sensors are attached to the bottom of the oxygen-free copper
block and their temperatures are automatically controlled by the temperature
controller. Two Rhodium-Iron RTDs are used to monitor the temperature of the
cold head and the sample holder during the test, because the Cernox RTDs used
are calibrated only from 4 K to 40 K. The wires from the measurement instrument
(Fluke 1594A) to the temperature sensors are twisted to reduce the inductive
effect.
All measurements were performed with a DC bridge, a Fluke 1594A
(uncertainty 0.8 ppm), in combination with a 10,000-ohm standard resistor
(uncertainty 1.5 ppm) placed in a temperature-controlled bath. The thermal
resistances of two different mounting methods (VGE-7031 Varnish, Apiezon N
Grease) were measured at cryogenic temperatures (4.2 K, 6 K, 8 K, 10 K and
14 K) on a Cernox temperature sensor (Cernox-1050-SD SN 70210) by the
two-current method. The measurement uncertainty of the thermal resistance is
also analyzed and calculated by the present theory. It is worth mentioning that an
increase in the sensor excitation current results in a decreasing temperature
measurement standard deviation. A larger measurement current also leads to a
more obvious temperature difference between the two measurement currents I1
and I2. Nevertheless, a larger excitation current dissipates more power in the
temperature sensor, raising its temperature above that of its mounted
environment. Choosing an appropriate measurement current to balance the
standard deviation against the self-heating effect is therefore a significant
problem. In our experiment, we chose 35 μA as the measurement current for
4.2 K and 6 K, and 65 μA for 8 K, 10 K and 14 K.
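A sketch of the two-current extrapolation and a first-order uncertainty estimate is given below. The numbers are purely illustrative and are not data from this experiment, and the simple propagation formula assumes uncorrelated temperature uncertainties and negligible uncertainty in the dissipated powers.

```python
import math

def thermal_resistance(t1, p1, t2, p2):
    """Two-current method: the self-heating offset is dT = R_th * P,
    so two (power, apparent temperature) points give the slope R_th."""
    return (t2 - t1) / (p2 - p1)

def resistance_uncertainty(u_t1, u_t2, p1, p2):
    # First-order propagation through R_th = (T2 - T1) / (P2 - P1),
    # treating the two temperature uncertainties as uncorrelated.
    return math.sqrt(u_t1**2 + u_t2**2) / abs(p2 - p1)

# Illustrative values only: apparent temperatures 4.2000 K at 0.1 uW
# of dissipated power and 4.2060 K at 0.4 uW, each known to 50 uK.
r = thermal_resistance(4.2000, 0.1e-6, 4.2060, 0.4e-6)
u = resistance_uncertainty(50e-6, 50e-6, 0.1e-6, 0.4e-6)
print(r)   # ~20000 K/W
print(u)   # ~236 K/W, i.e. about 1.2% relative uncertainty
```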

➢ Conclusion
Cernox thermometer self-heating is a significant factor in high-accuracy
cryogenic temperature measurement that cannot be eliminated. The temperature
difference caused by self-heating depends on the thermal resistance between the
sensor and its environment. In this paper, the thermal resistance of the Cernox
temperature sensor and its surroundings from 4.2 K to 14 K is calculated by the
two-current method. The results show that the thermal resistances of the two
mounting methods (VGE-7031 Varnish and Apiezon N Grease) are roughly
equivalent. It can be observed that as the temperature increases, the effective
thermal resistance gradually decreases. The uncertainty of the thermal resistance
is also analyzed in this paper; it decreases with increasing temperature, while the
relative uncertainty is roughly the same at each temperature and less than 3%.

