
Methodology of nutritional survey

 Nutritional Assessment
 Nutritional assessment systems
 Methods used in nutritional assessment
 The design of nutritional assessment systems

Good Nutrition Essential for Health


Good nutrition is critical for the well-being of any society and to each individual within that
society. The variety, quality, and quantity of available food and the patterns of food
consumption can profoundly affect health.
Scurvy, for example, was among the first diseases recognized as being caused by a
nutritional deficiency. One of the earliest descriptions of scurvy was made in 1250 by the
French writer Joinville who observed it among the troops of Louis IX at the siege of Cairo.
When Vasco da Gama sailed to the East Indies around the Cape of Good Hope in 1497, more
than 60% of his crew died of scurvy. In 1747, James Lind, a British naval surgeon, conducted
the first controlled human dietary experiment showing that consumption of citrus fruits cured
scurvy.

Deficiency Diseases Once Common


During the nineteenth century and the first half of the twentieth century, scurvy and other
deficiency diseases such as rickets, pellagra, beriberi, xerophthalmia, and goiter (caused by lack
of adequate dietary vitamin D, niacin, thiamin, vitamin A, and iodine, respectively) were
commonly seen in the United States and throughout the world and posed a significant threat to
human health.
During the same time, infectious disease was the major cause of death throughout the world.
Nutritional deficiencies and infectious disease remain serious problems in many developing
countries and even among certain population groups in America and other developed countries.
Sanitation measures, improved health care, vaccine development, and mass immunization
programs have dramatically reduced the incidence of infectious disease in developed nations.
An abundant food supply, fortification of some foods with important trace nutrients, enrichment
to replace certain nutrients lost in food processing, and better methods of determining the
nutrient content of foods have made nutrient deficiency diseases relatively uncommon in
developed nations. Among certain groups, however, deficiencies of certain nutrients remain a
problem.

Chronic Disease Now Epidemic

Despite the many advances of nutritional science, nutrition-related diseases not only continue to
exist but result in a heavy toll of disease and death. In recent decades, however, they have taken
a form different from the nutrient-deficiency diseases common in the early 1900s. Diseases of
dietary excess and imbalance now rank among the leading causes of illness and death in
America and play a prominent role in the epidemic of chronic disease that Western nations are
currently experiencing. In 1988, 5 of the 10 leading causes of death—coronary heart disease
(heart attack), certain cancers, stroke, diabetes mellitus, and atherosclerosis—were linked with
diet, and excessive alcohol consumption played a major role in several of the other leading
causes (see the table below).
Eleven leading causes of death, United States, 1990

Rank   Cause of death                              Number    Percent of total deaths
1*     Heart diseases                              725,010   33.5
2*     Cancers                                     506,000   23.4
3*     Strokes                                     145,340    6.7
4**    Accidents                                    93,550    4.3
5      Chronic obstructive lung diseases            88,980    4.1
6      Pneumonia and influenza                      78,640    3.6
7*     Diabetes mellitus                            48,840    2.3
8**    Suicide                                      30,780    1.4
9**    Homicide                                     25,700    1.2
10**   Chronic liver disease and cirrhosis          25,600    1.2
11     Human immunodeficiency virus infection       24,120    1.1

Source: National Center for Health Statistics, Monthly Vital Statistics Report, Vol. 39, No. 13, 1991.
* Causes of death in which diet plays a part.
** Causes of death in which excessive alcohol consumption plays a part.

The Surgeon General's Report on Nutrition and Health points out that, although these diseases
are caused by a combination of dietary and nondietary factors and the exact proportion
attributable to diet is uncertain, "it is now clear that diet contributes in substantial ways to the
development of these diseases and that modification of diet can contribute to their prevention. . . .
The Report's main conclusion is that overconsumption of certain dietary components is now a
major concern for Americans. While many food factors are involved, chief among them is the
disproportionate consumption of foods high in fats, often at the expense of foods high in
complex carbohydrates and fiber that may be more conducive to health."
The continuing presence of nutrition-related disease makes it essential that health
professionals be able to determine the nutritional status of individuals. This helps identify
persons who might benefit from nutritional intervention to improve their health, and indicates
which interventions would be appropriate. Nutritional assessment is the key to determining a
person's nutritional status.
Nutritional Assessment
Nutritional assessment is an attempt to evaluate the nutritional status of individuals or
populations through measurements of food and nutrient intake and evaluation of nutrition-
related health indicators. The U.S. Department of Health and Human Services defines nutritional
assessment as "the measurement of indicators of dietary status and nutrition-related health status
to identify the possible occurrence, nature, and extent of impaired nutritional status," which can
range from deficiency to toxicity.

Nutritional assessment systems

Nutritional assessment systems can take one of three forms: surveys, surveillance, or screening
(WHO, 1976a).

Nutrition surveys

The nutritional status of a selected population group may be assessed by means of a cross-
sectional survey. This nutrition survey may establish baseline nutritional data and/or ascertain
the overall nutritional status of the population. Cross-sectional nutrition surveys can also identify
and describe population subgroups 'at risk' of chronic malnutrition. They are less likely to
identify acute malnutrition or to provide information on the possible causes of malnutrition. The
information obtained on the extent of existing nutritional problems can be used to allocate
resources to those population subgroups in need, and to formulate policies to improve the
overall nutrition of the population.

Nutrition surveillance

The characteristic feature of surveillance is the continuous monitoring of the nutritional status of
selected population groups. Surveillance studies therefore differ from nutrition surveys because
the data are collected, analyzed, and utilized for an extended period of time. Sometimes only
data for specific at-risk subpopulation groups, identified in earlier nutrition surveys, are
collected. Unlike cross-sectional nutrition surveys, surveillance studies identify the possible
causes of malnutrition, and hence can be used to formulate and initiate intervention measures at
the population or subpopulation level. Nutrition surveillance can be carried out on selected
individuals, in which case the term 'monitoring', rather than surveillance, is used.
Nutrition screening
The identification of malnourished individuals requiring intervention can be accomplished by
nutrition screening. This involves a comparison of an individual's measurements with
predetermined risk levels or "cutoff" points. Screening can be carried out at the individual level
as well as on specific subpopulations considered to be at risk. Screening programs are usually
less comprehensive than surveys or surveillance studies.
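
As an illustration only, the following short Python sketch shows the basic screening logic described above: each individual's measurements are compared with predetermined cutoff points, and anyone falling on the 'at risk' side of a cutoff is flagged for further assessment. The measurement names and cutoff values are hypothetical and are not drawn from any reference standard.

def screen(individuals, cutoffs):
    """Flag individuals whose measurements fall below the given cutoff points."""
    flagged = []
    for person in individuals:
        at_risk = [name for name, cutoff in cutoffs.items()
                   if person.get(name) is not None and person[name] < cutoff]
        if at_risk:
            flagged.append((person["id"], at_risk))
    return flagged

# Example use with made-up measurements and cutoffs (not reference standards):
people = [
    {"id": "A", "arm_circumference_cm": 13.1, "serum_albumin_g_dl": 3.9},
    {"id": "B", "arm_circumference_cm": 15.2, "serum_albumin_g_dl": 3.2},
]
print(screen(people, {"arm_circumference_cm": 14.0, "serum_albumin_g_dl": 3.5}))
# -> [('A', ['arm_circumference_cm']), ('B', ['serum_albumin_g_dl'])]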
Each of these three types of nutritional assessment systems has been adopted in clinical
medicine to assess the nutritional status of hospitalized patients. Today, nutritional assessment is
often performed on patients with acute traumatic injury, on those undergoing surgery, on
chronically ill medical patients, and on elderly patients. Screening can initially be carried out to
identify those patients requiring nutritional management, after which a more detailed and
comprehensive baseline nutritional assessment of the individual may take place. This
assessment will clarify and expand the nutritional diagnosis, and will establish the severity of
the malnutrition. Finally, a nutritional monitoring system is often initiated to follow the
response of the patient to nutritional therapy.

Methods used in nutritional assessment


Nutritional assessment systems utilize a variety of methods to characterize each stage in the
development of a nutritional deficiency state (Table 1). The methods are based on a series of
dietary, laboratory, anthropometric, and clinical measurements, used either alone or, more
effectively, in combination.
Dietary methods
The first stage of a nutritional deficiency is identified by dietary assessment methods. During
this stage, the dietary intake of one or more nutrients is inadequate, either because of a primary
deficiency (low levels in the diet), or because of a secondary deficiency. In the latter case,
dietary intakes may appear to meet nutritional needs, but conditioning factors (such as certain
drugs, dietary components, or disease states) interfere with the ingestion, absorption, transport,
utilization, or excretion of the nutrient(s).

Table 1: Generalized scheme for the development of a nutritional deficiency.

Stage   Depletion Stage                                    Method(s) Used
1       Dietary inadequacy                                 Dietary
2       Decreased level in reserve tissue store            Biochemical
3       Decreased level in body fluids                     Biochemical
4       Decreased functional level in tissues              Anthropometric/Biochemical
5       Decreased activity in nutrient-dependent enzyme    Biochemical
6       Functional change                                  Behavioral/Physiological
7       Clinical symptoms                                  Clinical
8       Anatomical sign                                    Clinical
Laboratory methods
Several stages in the development of a nutritional deficiency state can be identified by laboratory
methods. In primary and/or secondary deficiencies, the tissue stores become gradually depleted
of the nutrient(s). As a result of this depletion, reductions may occur in the levels of nutrients or
in the levels of their metabolic products in certain body fluids and tissues, and/or in the activity
of some nutrient-dependent enzymes. This depletion may be detected by biochemical tests,
and/or by tests that measure physiological or behavioral functions dependent on specific
nutrients (Solomons, 1985). Examples of such functions include dark adaptation (vitamin A),
taste acuity (zinc), capillary fragility (vitamin C), and cognitive function (iron). Functional tests
provide a measure of the biological importance of a given nutrient because they assess the
functional consequences of the nutritional deficiency (Solomons and Allen, 1983). In general,
physiological functional tests are not suitable for field surveys; they are too invasive, and often
require elaborate equipment.

Anthropometric methods
Measurements of the physical dimensions and gross composition of the body are used in
anthropometric methods (Jelliffe, 1966). The measurements vary with age and degree of
nutrition and, as a result, are particularly useful in circumstances where chronic imbalances of
protein and energy are likely. They can be used, in some cases, to detect moderate as well as
severe degrees of malnutrition.
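
For illustration, anthropometric measurements are often interpreted by expressing them relative to a reference distribution, for example as a z-score. The sketch below assumes hypothetical reference values (they are not taken from any published growth reference) and simply shows the arithmetic involved.

def z_score(measurement, reference_median, reference_sd):
    """Number of standard deviations the measurement lies from the reference median."""
    return (measurement - reference_median) / reference_sd

# Example: a child's weight compared with a hypothetical reference for that age and sex.
weight_kg = 9.5
print(round(z_score(weight_kg, reference_median=11.0, reference_sd=1.2), 2))  # -> -1.25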
Clinical methods
A medical history and a physical examination are clinical methods used to detect signs (i.e.
observations made by a qualified examiner) and symptoms (i.e. manifestations reported by the
patient) associated with malnutrition. These signs and symptoms are often nonspecific, and only
develop during the advanced stages of nutritional depletion; for this reason, diagnosis of a
nutritional deficiency should not rely exclusively on clinical methods (McGanity, 1974).
Nutritional assessment methods may also involve the collection of information on variables
known to affect the nutritional status of a population, including relevant economic and
sociodemographic data, cultural practices, food habits, food beliefs, and food prices.

The design of nutritional assessment systems


The design of the nutritional assessment system is critical if time and resources are to be used
effectively. The assessment system used, and the type and number of measurements selected,
will depend on a variety of factors.
Study objectives
The general design of the assessment system and the measurements or indices selected should be
dictated by the study objectives. For example, if information on the current overall nutritional
status at the national level is needed to estimate the prevalence of malnutrition, a comprehensive
baseline cross-sectional nutrition survey, involving all four methods of nutritional assessment,
may be necessary. Alternatively, to evaluate the impact of nutrition intervention on a specific
high-risk group, a nutritional surveillance system, designed to monitor acute (i.e. short-term)
changes in nutritional status, may be employed. To identify malnourished individuals, a
screening system using indices which reflect both past and present nutriture, may be selected.

Sampling protocols
It is important that sampling protocols be designed and rigorously adhered to throughout the
study to avoid systematic bias in the sample selection, and to ensure that the sample is random
and representative of the target population. Extrapolation of the results to the population at large
will then be valid. Self-selection (i.e. survey participation by consent) produces unrepresentative
samples and, sometimes, systematic bias; for example, only those with a higher level of
education may volunteer to participate. In these circumstances, it is essential to identify the
probable direction and magnitude of the bias arising from the sample design and the nonresponse
rate (Elwood, 1983).
The sample design may necessitate the use of random number tables or generators to produce
a randomly selected representative sample. Alternatively, stratified and/or multi-stage random
sampling may be required. In stratified sampling, the target population is divided into a number
of categories or strata (e.g. urban and rural populations, different ethnic groups, various
geographical areas or administrative regions) from each of which is drawn a separate random
sample. In multi-stage random sampling, a number of levels of sampling are defined, from each
of which is drawn a random sample.
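
As a rough illustration of these two designs, the Python sketch below draws a separate simple random sample from each stratum of a sampling frame, and then shows a two-stage selection in which clusters are sampled first and units within clusters second. The frame, stratum names, and sample sizes are invented for the example.

import random

def stratified_sample(frame, n_per_stratum, seed=0):
    """Draw a separate simple random sample from each stratum of the frame."""
    rng = random.Random(seed)
    return {stratum: rng.sample(units, n_per_stratum)
            for stratum, units in frame.items()}

def two_stage_sample(clusters, n_clusters, n_per_cluster, seed=0):
    """Stage 1: randomly select clusters; stage 2: randomly select units within each."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)
    return {c: rng.sample(clusters[c], n_per_cluster) for c in chosen}

# Example use with an invented frame of household identifiers:
frame = {"urban": [f"u{i}" for i in range(100)], "rural": [f"r{i}" for i in range(100)]}
print(stratified_sample(frame, n_per_stratum=3))
print(two_stage_sample({"area1": list(range(50)), "area2": list(range(50)),
                        "area3": list(range(50))}, n_clusters=2, n_per_cluster=3))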

For example, in the Nutrition Canada National Survey, the sample was selected in three stages.
The first stage involved the selection of 80 enumeration areas reflecting the different regions,
population types (40 metropolitan, 24 urban, and 16 rural), income levels (low and other), and
seasons (January to May and June to December). The second stage of sampling involved the
selection of a random sample of households from within each enumeration area, and the third
stage involved the selection of persons from within households. The latter were randomly
selected from each of 10 age-sex categories (Health and Welfare Canada, 1973).

Validity
Validity is important in the design of nutritional assessment systems because it describes the
adequacy with which any measurement or index reflects the nutritional parameter of interest.
For example, if information on the long-term nutritional status of an individual is required, the
dietary measurement should provide a valid reflection of the true 'usual' nutrient intake, rather
than the intake over a single day (Block, 1982). Similarly, the biochemical index should be a
valid measure of the total body content of a nutrient or the size of the tissue store most sensitive
to deficiency, and not reflect the recent dietary intake (Solomons, 1985). The design of the
nutritional assessment system must include consideration of the validity of all the indices
selected.
Precision
The degree to which repeated measurements of the same variable give the same value is a
measure of precision also referred to as reproducibility or reliability. The study design should
always include some replicate observations (repeated but independent measurements on the
same subject/sample). In this way, the precision of each measurement procedure can be
calculated and expressed as the coefficient of variation (CV%).
CV% = standard deviation x 100% / mean
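
For illustration, the sketch below computes CV% from a small set of replicate measurements; the replicate values are invented.

import statistics

def cv_percent(replicates):
    """CV% = (standard deviation / mean) x 100, from repeated independent measurements."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Example: three replicate hemoglobin measurements (g/dL) on the same sample (invented values).
print(round(cv_percent([13.2, 13.5, 13.1]), 1))  # about 1.6 with these values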
The precision of a measurement is a function of the random measurement errors, and, in
certain cases, true variability in the measurement that occurs over time. For example, the
nutrient intakes of an individual vary over time (intra-subject variation) and this results in
uncertainty in the estimation of usual nutrient intake. This variation characterizes the true 'usual
intake' of an individual. Unfortunately, intra-subject variation cannot be distinguished
statistically from the random measurement errors, irrespective of the design of the nutritional
assessment system.
Random measurement errors
Random measurement errors, produced by an insensitive instrument or by variations in the
measuring and recording technique, reduce the precision of a measurement by increasing the
variability about the mean. They do not influence the mean or median value. Random measurement errors
may occur when the same examiner repeats the measurements (within- or intra-examiner error)
or when several different examiners repeat the same measurement (between- or inter-examiner
error). Unfortunately, such errors produce measurements which are imprecise in an
unpredictable way, resulting in less certain conclusions. Random measurement errors can be
minimized by incorporating standardized measurement techniques and the use of trained
personnel in the nutritional assessment system, but can never be entirely eliminated.
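
The following small simulation (not taken from the text) illustrates this point: adding larger random errors to repeated measurements of the same true value inflates the standard deviation while the mean stays essentially unchanged.

import random
import statistics

random.seed(1)
true_value = 50.0
small_error = [true_value + random.gauss(0, 0.5) for _ in range(1000)]
large_error = [true_value + random.gauss(0, 2.0) for _ in range(1000)]

for label, data in (("small random error", small_error), ("large random error", large_error)):
    print(label, "mean ~", round(statistics.mean(data), 2),
          "SD ~", round(statistics.stdev(data), 2))
# Both means stay close to the true value of 50; only the standard deviation grows.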

Accuracy
Accuracy is a term best used in a restricted statistical sense to describe the extent to which the
measurement is close to the true value. It therefore follows that a measurement can be precise,
but, at the same time, inaccurate—a situation which occurs when there is a systematic bias in the
measurements. Accurate measurements, however, necessitate high precision. The design of any
biochemical assessment protocol should include the use of reference materials, with certified
values for the nutrient of interest, to control the accuracy of the analytical methods. The control
of accuracy in other assessment methods is difficult and is discussed in more detail in later
chapters. It is preferable to avoid statements such as: 'Weight for age is an "accurate" reflection
of acute protein-energy malnutrition.' The term 'valid' is more appropriate here, and confusion
with the well-accepted statistical usage of 'accurate' is avoided.
Systematic measurement errors or bias
Unfortunately, systematic measurement errors may arise in any nutritional assessment method.
Such errors reduce the accuracy of a measurement by altering the mean or median value. They
have no effect on the variance, and hence do not alter the precision of the measurement (Himes,
1987). Examples of measurement bias include dietary scales that consistently over- or
underestimate weight, skinfold calipers that systematically over- or underestimate skinfold
thickness, and a biochemical method for assaying vitamin C in foods that systematically
underestimates the vitamin C content because only the reduced form of vitamin C (L-ascorbic acid) is measured.
Interviewer and respondent bias may also reduce the accuracy of dietary assessment results
(Anderson, 1986). For example, social desirability bias by respondents may occur if they
consistently underestimate their alcohol consumption in a food-frequency questionnaire or
dietary record method. Bias is important as it cannot be removed by subsequent statistical
analysis (NRC, 1986). Consequently, care must be taken to reduce and if possible eliminate all
sources of systematic errors in the design of the nutritional assessment system, by careful
attention to the equipment and methods selected. This is particularly important in cross-sectional
surveys, where the absolute values of the parameters are often compared with reference data.
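
By contrast with random error, the following small simulation (again, invented for illustration) shows a constant bias shifting the mean of repeated measurements while leaving their standard deviation, and hence their precision, unchanged.

import random
import statistics

random.seed(2)
true_value = 50.0
unbiased = [true_value + random.gauss(0, 1.0) for _ in range(1000)]
biased = [x + 3.0 for x in unbiased]  # e.g. a scale that always reads 3 units too high

print("unbiased mean ~", round(statistics.mean(unbiased), 2),
      "SD ~", round(statistics.stdev(unbiased), 2))
print("biased   mean ~", round(statistics.mean(biased), 2),
      "SD ~", round(statistics.stdev(biased), 2))
# The bias shifts the mean by about 3 units but the standard deviation is unchanged.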
Sensitivity
The sensitivity of an index refers to the extent to which it reflects nutritional status, or predicts
changes in nutriture. Sensitive indices show large changes as a result of only small changes in
nutritional status, and, as a result, have the ability to identify and classify those persons within a
population who are genuinely malnourished. An indicator with 100% sensitivity correctly
identifies all those individuals who are genuinely malnourished: no malnourished persons are
classified as 'well' (i.e. there are no false negatives). Numerically, sensitivity (Se) is the
proportion of individuals with malnutrition who have positive tests (true positives divided by the
sum of true positives and false negatives) (Table 2). Unfortunately, the term 'sensitivity' is also
used to describe the ability of an analytical method to detect the substance of interest.
Specificity
The specificity of an index refers to the ability of the index to identify and classify those persons
who are genuinely well-nourished. If an indicator has 100% specificity, all genuinely well-
nourished individuals will be correctly identified: no well-nourished individuals will be
classified as 'ill' (i.e. there are no false positives). Numerically, specificity (Sp) is the proportion
of individuals without malnutrition who have negative tests (true negatives divided by the sum
of true negatives and false positives) (Table 2).

The specificity (and sensitivity) of an index depend both on the extent of the random errors
associated with the raw measurement and on the influence of non-nutritional factors such as
diurnal variation and the effects of disease (Habicht et al., 1979). If the former is large, the index
will be imprecise and the specificity (and sensitivity) will be reduced. Similarly, if the latter is
important, the specificity (and sensitivity) will also be reduced. In these circumstances, the index
will show a change which is not associated with a nutritional effect, and misclassification may
occur; individuals may be designated 'at risk' when they are actually unaffected (false positives),
or individuals may be designated 'not at risk’ when they are truly affected by the condition (false
negatives). The ideal index has a low number of both false positives (high specificity) and false
negatives (high sensitivity).

Table 2. Numerical definitions of sensitivity, specificity, predictive value, and prevalence for a
single index used to assess malnutrition in a sample group.

                         The True Situation
                         Malnutrition present    No malnutrition
Positive Test Result     True positive (TP)      False positive (FP)
Negative Test Result     False negative (FN)     True negative (TN)

Sensitivity (Se) = TP / (TP + FN)
Specificity (Sp) = TN / (TN + FP)
Predictive value (V) = (TP + TN) / (TP + FP + TN + FN)
Positive predictive value (V+) = TP / (TP + FP)
Negative predictive value (V-) = TN / (TN + FN)
Prevalence (P) = (TP + FN) / (TP + FP + TN + FN)
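
As an illustration of the definitions in Table 2, the Python sketch below computes sensitivity, specificity, the predictive values, and prevalence from the four cell counts of the two-by-two table; the counts used in the example are invented.

def diagnostic_summary(tp, fp, fn, tn):
    """Compute the Table 2 quantities from the cell counts of the two-by-two table."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),           # malnourished individuals with a positive test
        "specificity": tn / (tn + fp),           # well-nourished individuals with a negative test
        "predictive value": (tp + tn) / total,   # proportion of all tests that are correct
        "positive predictive value": tp / (tp + fp),
        "negative predictive value": tn / (tn + fn),
        "prevalence": (tp + fn) / total,
    }

# Example with invented counts:
for name, value in diagnostic_summary(tp=40, fp=20, fn=10, tn=130).items():
    print(f"{name}: {value:.2f}")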
The choice of a cutoff point to differentiate between malnourished and well-nourished states
for a particular index critically affects both the sensitivity and specificity. In cases where lower
values of the index are associated with malnutrition, reducing the cutoff point increases
specificity and decreases sensitivity for a given index. Table 3 shows the influence of a change
in cutoff point on sensitivity and specificity. For example, when the cutoff point for mid-upper-
arm circumference is reduced from < 14.0 cm to < 12.5 cm, the specificity in predicting
malnutrition (based on weight for height <60% of median) increases from 82.7% to 98.0%,
whereas the sensitivity falls from 90.4% to 55.8%. Similarly, when the cutoff point for total iron
binding capacity (TIBC) is lowered from <310 µg/dL to < 270 µg/dL, the specificity in
predicting postoperative sepsis increases from 68% to 87%, but the sensitivity falls from 55% to
30%.
The term 'specificity' is also used in analytical work to describe the degree to which a particular
laboratory procedure measures only the component of interest. 'Analytical specificity' should be
used to clearly distinguish this more specialized use of the term.

Table 3. Influence of change in the cutoff point on the sensitivity and specificity of mid-upper-
arm circumference and total iron binding capacity (TIBC) as predictors of outcome.

Parameter          Cutoff     Sensitivity   Specificity   Author
Arm circ. (cm)     < 14.0     90.4%         82.7%         Trowbridge & Staehling (1980)
                   < 12.5     55.8%         98.0%
TIBC (µg/dL)       < 310      55%           68%           Bozzetti et al. (1985)
                   < 270      30%           87%
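
The trade-off in Table 3 can also be illustrated with a small simulation (invented data, not the studies cited above): when low values of an index indicate malnutrition, lowering the cutoff flags fewer people, so specificity rises while sensitivity falls.

import random

random.seed(3)
malnourished = [random.gauss(12.0, 1.5) for _ in range(200)]    # true cases: lower index values
well_nourished = [random.gauss(15.5, 1.5) for _ in range(800)]  # true non-cases

for cutoff in (14.0, 12.5):
    sensitivity = sum(x < cutoff for x in malnourished) / len(malnourished)
    specificity = sum(x >= cutoff for x in well_nourished) / len(well_nourished)
    print(f"cutoff < {cutoff}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
# Lowering the cutoff raises specificity at the expense of sensitivity.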
Prevalence
The number of persons with malnutrition or disease during a given time period is measured by
the prevalence. Numerically, the actual prevalence (P) is the proportion of individuals who
really are malnourished or infected with the disease in question (the sum of true positives and
false negatives) divided by the sample population (the sum of true positives, false positives, true
negatives and false negatives) (Table 2). Prevalence influences the predictive value of a
nutritional index more than any other factor.
Predictive value
The predictive value can be defined as the likelihood that an index correctly predicts the
presence or absence of malnutrition or disease (Galen and Gambino, 1975). Numerically, the
predictive value of a test is the proportion of all tests that are true (the sum of the true positives
and true negatives divided by the total number of tests) (Table 2). The predictive value can be
further subdivided into the positive predictive value and the negative predictive value. The
positive predictive value of a test is the proportion of positive tests that are true (the true
positives divided by the sum of the true positives and false positives). The negative predictive
value of a test is the proportion of negative tests that are true (the true negatives divided by the
sum of the true negatives and false negatives).

The predictive value of any index is not constant but depends on the sensitivity, specificity, and
the prevalence of malnutrition or disease. Table 4 shows the influence of prevalence on the
positive predictive value of an index when the sensitivity and specificity are constant. When the
prevalence of malnutrition is low, even very sensitive and specific tests have a relatively low
positive predictive value. Conversely, when the prevalence of malnutrition is high, indices with
rather low sensitivity and specificity may have a relatively high positive predictive value.
Determination of predictive value is the best test of the usefulness of any index of nutritional
status. An acceptable predictive value for any index depends on the number of false-negative
and false-positive results that are considered tolerable, taking into account the prevalence of the
disease or malnutrition, its severity, the cost of the test, and, where appropriate, the availability
and advantages of treatment. In general, the highest predictive value is achieved when
specificity is high, irrespective of sensitivity (Habicht, 1980).

Table 4. Influence of disease prevalence on the predictive value of a test with sensitivity and
specificity of 95%.

Predictive value     Prevalence
                     0.1%    1%      10%     20%     30%     40%
Positive             0.02    0.16    0.68    0.83    0.89    0.93
Negative             1.00    1.00    0.99    0.99    0.98    0.97
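
The pattern in Table 4 follows directly from the numerical definitions given earlier; for illustration, the sketch below computes the positive and negative predictive values from sensitivity, specificity, and prevalence, and with sensitivity and specificity both set to 0.95 it reproduces the tabulated values.

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def negative_predictive_value(sensitivity, specificity, prevalence):
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

for p in (0.001, 0.01, 0.10, 0.20, 0.30, 0.40):
    print(f"prevalence {p:>5.1%}: "
          f"positive {positive_predictive_value(0.95, 0.95, p):.2f}, "
          f"negative {negative_predictive_value(0.95, 0.95, p):.2f}")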

Additional factors
There are many other factors affecting the design of nutritional assessment systems. These
include respondent burden, equipment and personnel requirements, and field survey and data
processing costs. The methods selected should aim at keeping the respondent burden to a
minimum, thus reducing the nonresponse rate, and avoiding bias in the sample selection. In the
early United Kingdom National Food Survey, for example, the inventory method was used to
assess household food consumption; the method was found to have too high a respondent
burden, and was abandoned in 1951 in favor of the food accounts method. Alternative methods
for minimizing the nonresponse rate include the offering of material rewards and the provision
of incentives such as regular medical checkups, feedback information, social visits, and
telephone follow-up.
The requirements for equipment and personnel should also be taken into account when
designing a nutritional assessment system. Measurements requiring elaborate equipment and
highly trained technicians may be impractical in a field survey setting; instead, the
measurements selected should be relatively noninvasive, and easy to perform accurately and
precisely, using unskilled but trained assistants. The ease with which equipment can be
maintained and calibrated must also be considered.
The field survey and data processing costs are also important factors. In surveillance systems,
the resources available may dictate the number of malnourished individuals who can
subsequently be treated in an intervention program. In such circumstances, the cutoff point for
the index can be manipulated to select only the number of individuals who can be treated.
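
As a rough illustration of this last point, the sketch below chooses a cutoff so that a screening index (assumed here to indicate greater need at lower values) selects no more individuals than an intervention program can treat; the index values and capacity are invented.

def cutoff_for_capacity(index_values, capacity):
    """Return a cutoff such that at most `capacity` individuals fall below it."""
    ranked = sorted(index_values)
    if capacity <= 0:
        return ranked[0]                 # nobody falls below the lowest value
    if capacity >= len(ranked):
        return ranked[-1] + 1.0          # everybody is selected
    return ranked[capacity - 1] + 1e-9   # just above the capacity-th lowest value

# Example with invented index values and room to treat only three individuals:
values = [11.8, 13.4, 12.1, 14.9, 12.7, 15.3, 13.0]
cutoff = cutoff_for_capacity(values, capacity=3)
print(round(cutoff, 2), sorted(v for v in values if v < cutoff))  # -> 12.7 [11.8, 12.1, 12.7]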
Table 5: The concept of reference values and the relationship of recommended terms.

REFERENCE INDIVIDUALS compose a REFERENCE POPULATION, from which is selected a
REFERENCE SAMPLE GROUP, on which are determined REFERENCE VALUES, on which is
observed a REFERENCE DISTRIBUTION, from which are calculated REFERENCE LIMITS,
that may define REFERENCE INTERVALS. An observed value for an individual may be
compared with the reference values.
