👉
important/must know
📕
book
📑
previous trans
I. INTRODUCTION

● 📑 Unnecessary diagnostic tests are costly and detrimental to patient care.
○ Diagnostic tests should be requested only when they will give valuable information and may change the way the patient will be treated.
● 📑 Medicine deals with uncertainty. The need for diagnostic certainty depends on the penalty for being wrong.
○ The diagnosis should be double checked if the harmful effect of an error in treatment is grave.
● The physician establishes his/her own upper and lower thresholds for either accepting or discarding the presence of the disease, depending on his/her assessment.
○ Upper threshold - accept the diagnosis and start treatment.
○ Lower threshold - discard the possibility of the disease.
○ In between - continue testing with diagnostic tests.

B. ALTERNATIVE ACTIONS AFTER CLINICAL HISTORY AND PHYSICAL EXAMINATION

● 📑 Do nothing.
○ Very low certainty that the patient has the disease.
○ Give some advice, health education, and reassurance that the patient is healthy, because the certainty of the disease is very low.
● 📑 Obtain additional diagnostic information.
○ Certainty is enough to consider the presence of disease but not enough to consider treatment right away.
○ Request diagnostic tests to increase the probability of disease.
● 📑 Treat without waiting for more information.
○ Certainty of the disease is high enough to consider treatment.
○ In most cases, empiric treatment is started.

Figure 1. Establishing Diagnostic Thresholds
(Dr. Regal's Powerpoint Presentation)

● 📑 Overall steps in using diagnostic thresholds for decision making:
○ Establish the upper and lower testing thresholds.
TWG #13: Cledera, Co, Cobarrubias, Contreras, Convento, Cooper TEG #14: Coronacion, Cortez, D., Cortez, J, Cruz, A., Cruz, I., Cruz, K. 1
○ Establish the probability that the patient has the disease.
○ Use the likelihood ratio to determine if the test will change the management of the patient's condition.

II. CLINICAL SCENARIO

A. SAMPLE CASE ON THE PROBABILITY OF APPENDICITIS

Figure 2. Probability of Appendicitis
(Dr. Regal's Powerpoint Presentation)

● When dealing with abdominal pain, the possibility of appendicitis is at 10%.
○ Because there is no other information that can bolster the suspicion.
● After you dig into the history, the pain is now localized to the right lower quadrant, which increases the possibility of appendicitis to 40%.
○ Because any other condition involving the gallbladder, ureter, right ovary (in females), lower GI, etc. may present as pain in the RLQ.
● After performing the physical examination, tenderness in the RLQ is elicited, which increases the probability of appendicitis to 60%.
● Are you willing now to start treatment with just a 60% probability, knowing that the treatment is invasive, may entail some complications, and the disease is a serious one?
○ There is a need to acquire more information through diagnostic tests.
○ Another option is to wait for signs and symptoms.
■ Not always a good option, because the disease may have already advanced.
○ Upper threshold (80%) - before starting to operate on the patient.
○ Lower threshold (10%) - discard the diagnosis of appendicitis.
● 👉 Upper and lower thresholds are established by the physician.
● Since the probability is at 60%, there is a need to continue testing by requesting diagnostic tests like ultrasound.
● Ultrasound - can help establish the diagnosis of appendicitis.
○ Findings for (+) appendicitis: increased appendix diameter, presence of fluid around the RLQ, non-compressible appendix.
○ Positive - the probability will be increased to the established upper threshold.
○ Negative - discard the possibility of appendicitis.

B. SAMPLE CASE ON THE PROBABILITY OF DIABETES MELLITUS

Figure 3. Probability of Diabetes Mellitus.
(Dr. Regal's powerpoint lecture)

● Suspicion of Diabetes Mellitus (50% probability) with symptoms of polyuria and polydipsia.
● At what probability are you confirming the diagnosis and starting the treatment?
○ Not as high as 80%.
○ If it is simple DM without other complications, treatment would be oral medication and it would not be dangerous.
○ A lower upper threshold at which you can accept the diagnosis of DM and start treatment.
○ 70% upper threshold, as opposed to the 80% in the previous case (Acute Appendicitis).
○ 20% lower threshold - not as low as 10%, because if you misdiagnose DM the patient will not die from it immediately, as compared to the previous case.
■ If the possibility of DM is only at 20%, then you can start considering another disease to explain the symptoms of polyuria and polydipsia.
● The only difference from the first example is that the upper and lower thresholds are established at different values, depending on the medical condition, its severity, and the experience of the physician.
● If the probability falls between 20% and 70% (in this example, 50%), then you can continue testing.
● In this example, a workup with HbA1c (glycosylated hemoglobin) was done with a cutoff of 6.1%, but here the result was 6.5% (a positive result).
○ If it is normal, can it bring down the possibility of Diabetes below 20%?

III. SENSITIVITY AND SPECIFICITY

Figure 4. Sensitivity and Specificity
(Dr. Regal's powerpoint lecture)

● These are characteristics of a test that determine whether it can push the probability beyond a threshold (with a positive or negative result).
● Sensitivity
○ The ability of the diagnostic test to identify who is positive among the patients who really have the disease (Boxes A and C).
○ The test was able to identify the true positives in Box A.
○ Formula: Sensitivity = a / (a + c)
● Specificity
○ A test that turned negative can identify those who do not have the disease (Boxes B and D).
○ True Negative - among all those who do not have the disease, the test turned out negative (Box D).
○ Formula: Specificity = d / (b + d)
● Rarely will a test be 100% sensitive and 100% specific.
○ Due to false positives and false negatives.
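The sensitivity and specificity formulas above can be sketched in code. This is a minimal illustration; the function names and the 2x2 counts are made up, not from the lecture:

```python
# 2x2 table convention from Figure 4: a = true positives, b = false positives,
# c = false negatives, d = true negatives. The counts below are illustrative.

def sensitivity(a, c):
    """Fraction of people who truly have the disease that the test flags."""
    return a / (a + c)

def specificity(b, d):
    """Fraction of people who truly do not have the disease that test negative."""
    return d / (b + d)

a, b, c, d = 90, 20, 10, 180
print(f"sensitivity = {sensitivity(a, c):.0%}")  # sensitivity = 90%
print(f"specificity = {specificity(b, d):.0%}")  # specificity = 90%
```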
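The threshold framework from the Introduction, as applied in the two sample cases, can be sketched as a small decision rule. This is an illustrative sketch; the function name and action strings are ours, not from the lecture:

```python
# Threshold logic: accept above the upper threshold, discard below the lower
# threshold, continue testing in between. The physician sets the thresholds
# per condition (80%/10% for appendicitis, 70%/20% for DM in the trans).

def decide(probability, lower, upper):
    if probability >= upper:
        return "accept diagnosis and start treatment"
    if probability <= lower:
        return "discard the possibility of the disease"
    return "continue testing with diagnostic tests"

# Appendicitis after history and PE: 60% probability, thresholds 10%/80%
print(decide(0.60, 0.10, 0.80))  # continue testing with diagnostic tests
```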
Figure 5. Cut-off Points for Non-Diabetics and Diabetics.
(Dr. Regal's powerpoint lecture)

● Upper graph: non-diabetics and their blood sugar levels.
● Lower graph: diabetics and their blood sugar levels.
● Cutoff at 80 (normal).
○ We will be able to identify all the diabetics as having high blood sugar.
○ As seen in the table on the right side (green table of Figure 5), all 28 diabetics were identified.
○ In the upper graph, however, many non-diabetics are mislabeled as having high blood sugar.
■ Among the non-diabetics, 181 out of 220 are labeled as having high blood sugar = low specificity (18%).
● Not a good diagnostic test, with 100% sensitivity (high) but 18% specificity (low).
● Cutoff at 200.
○ Able to identify the non-diabetics as having normal blood sugar.
■ All 220 non-diabetics are now identified as having low blood sugar → specificity of 100%.
■ Mislabeled many diabetics as having normal blood sugar: 17 out of 28 → very low sensitivity.
● At an intermediate cutoff:
○ Identified 21 out of 28 diabetics as having high blood sugar.
○ Mislabeled 23 out of 220 non-diabetics as having high blood sugar (specificity of 90%).
● Cutoff at 120 (blue).
○ Sensitivity rises to 86%: identified more diabetics as having high blood sugar (24 out of 28) than the earlier cutoff (21 out of 28).
○ Mislabeled more non-diabetics as having high blood sugar: 40 out of 220 (previously 23 out of 220).
○ Net result = sensitivity of 86% (higher) and specificity of 82% (lower).
● Cutoff at 110 (red).
○ Drastic increase in the identification of diabetics as having high blood sugar = 96% sensitivity (27 out of 28).
○ Continues to increase the mislabeling of non-diabetics as having high blood sugar: 74 out of 220 = 66% specificity.
○ An increase in sensitivity means a decrease in specificity.
● Cutoff at 100 (purple).
○ Sensitivity remains the same.
○ Specificity continues to decline.
● Which cutoff should we adopt?

Figure 7. Receiver Operating Characteristic Curve.
(Dr. Regal's powerpoint lecture)

● As we increase the sensitivity (true-positive rate), the curve does not go straight up; it eventually bends, and as it bends to the right, 1 - Specificity (the false-positive rate) increases.
● Specificity runs along the top axis from 0 on the right to 100 on the left. As you follow the curve from right to left (increasing specificity), it does not go straight left but drops downward, and 1 - Sensitivity (the false-negative rate) increases.
● There is always a tradeoff: we cannot attain 100% in both, and at some point you will incur increasing false positives and false negatives.
● The best cutoff is at the "shoulder" of the curve: the highest true-positive rate with the lowest false-positive rate.
○ If we look at the values, it is somewhere around 120; that is the shoulder of the curve.
■ Interpretation of the ROC
■ Establishing the cut-offs of different diagnostic tests
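The "shoulder of the curve" can be made explicit with Youden's J statistic (J = sensitivity + specificity - 1), a standard index that is not named in the lecture but selects the same cutoff here. A sketch using the approximate sensitivity/specificity pairs quoted above:

```python
# Approximate (sensitivity, specificity) pairs quoted in the trans for each
# blood sugar cutoff. At 200, only 11 of 28 diabetics are flagged (17 missed).
pairs = {
    80:  (1.00, 0.18),
    110: (0.96, 0.66),
    120: (0.86, 0.82),
    200: (0.39, 1.00),
}

def youden(sens, spec):
    # J rewards high sensitivity and high specificity equally
    return sens + spec - 1

best = max(pairs, key=lambda cutoff: youden(*pairs[cutoff]))
print(best)  # 120 -- the shoulder of the ROC curve in Figure 7
```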
IV. PREDICTIVE VALUE

● Now we can compute the predictive values:
○ Positive: 350 / (350 + 1900) = 16%
○ Negative: 7600 / (150 + 7600) = 98%
● Why is it that we have fairly moderate sensitivity and specificity, and yet the (+) PV is very low?
○ One reason is that we have a low prevalence of the disease.
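The computation above as a sketch, using the counts from the worked example (350 true positives, 1900 false positives, 150 false negatives, 7600 true negatives):

```python
# Predictive values from the 2x2 counts used in the example above.
tp, fp, fn, tn = 350, 1900, 150, 7600

ppv = tp / (tp + fp)  # P(disease | positive test)
npv = tn / (fn + tn)  # P(no disease | negative test)

print(f"PPV = {ppv:.0%}")  # PPV = 16%
print(f"NPV = {npv:.0%}")  # NPV = 98%
```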
● To explain further, consider the probability of breast cancer. We have two groups of subjects: women with no palpable breast lump but with a (+) mammogram, whose probability of breast cancer before testing is 13%; and women with a palpable breast lump, whose probability of having breast cancer is higher, at 38%.
● For the group of women with no palpable breast mass but (+) mammograms, let's focus on the purple table below.
○ We have 1000 subjects and the prevalence of the disease is 130. The prevalence is the same as the probability of the disease before the test, so we call it the pre-test probability, or prevalence.
● Now we look at the red table. We also have 1000 subjects, but the prevalence of the disease is 380, and that prevalence is likewise the same as the probability before testing (pre-test probability).

👉 NOTE:
● Pre-test probability = Prevalence
● Post-test probability = Predictive Value
○ The post-test probability (predictive value) is affected by the pre-test probability (prevalence).

👉 Predictive values are affected by the pre-test probability and by specificity.
● The higher the specificity, the higher the predictive value.
■ The false positives are already minimal.
● In the example below (Figure 13), there are two tables that look similar but differ in terms of specificity.
○ Both have 10,000 subjects, 10% prevalence, and 100% sensitivity.
○ Green = 70% specificity, 27% PV (1000/3700)
○ Purple = 95% specificity, 69% PV (1000/1450)
○ The purple table, with its higher specificity, shows a higher value for those without the disease (8550/9000) compared to the green table (6300/9000).
○ With a negative result and a minimal value in Box C, most of the values will fall in Box D (true negatives); therefore, when the test is negative, one can rule out the possibility of the disease.
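The Figure 13 comparison can be reproduced directly: fix the number of subjects, the prevalence, and the sensitivity, and vary only the specificity. A sketch (the function name is ours):

```python
# PPV as a function of specificity at fixed prevalence and sensitivity,
# matching the Figure 13 tables: 10,000 subjects, 10% prevalence, 100% sens.

def ppv(n, prevalence, sens, spec):
    diseased = n * prevalence
    tp = sens * diseased                 # true positives
    fp = (1 - spec) * (n - diseased)     # false positives
    return tp / (tp + fp)

print(f"green  (70% spec): PPV = {ppv(10_000, 0.10, 1.00, 0.70):.0%}")  # 27%
print(f"purple (95% spec): PPV = {ppv(10_000, 0.10, 1.00, 0.95):.0%}")  # 69%
```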
Figure 14. SpPIn and SnNOut mnemonic to determine the sensitivity and specificity of tests.
(Dr. Regal's powerpoint lecture)

V. LIKELIHOOD RATIO (LR)
● A characteristic not affected by prevalence.
● Compares the values of those with the disease against those without the disease.
○ The statement above applies to both positive and negative results.
● The likelihood ratios can be computed as:
○ LR (+) = Sensitivity / (1 - Specificity)
○ LR (-) = (1 - Sensitivity) / Specificity
● 📑 A likelihood ratio of 1:
○ The post-test probability is similar to the pre-test probability.
● 📑 A likelihood ratio of greater than 1:
○ Increases the chance that the disease is present.
● The greater the likelihood ratio, the greater the chance that the disease is present.

👉 NICE TO KNOW: COMPARING POSITIVE & NEGATIVE LR
● The positive likelihood ratio (+LR) gives the change in the odds of having a diagnosis in patients with a positive test. The change is in the form of a ratio, usually greater than 1.
○ For example, a +LR of 10 would indicate a 10-fold increase in the odds of having a particular condition in a patient with a positive test result.
● NOTE: The larger the +LR, the more informative the test. On the other hand, a +LR of 1.0 means the test is useless, because the odds of having the condition have not changed after the test (a 1-fold increase in the odds means the odds have not changed).
● The negative likelihood ratio (-LR) gives the change in the odds of having a diagnosis in patients with a negative test. The change is in the form of a ratio, usually less than 1.
○ For example, a -LR of 0.1 would indicate a 10-fold decrease in the odds of having a condition in a patient with a negative test result. A -LR of 0.05 would be a 20-fold decrease in the odds of the condition.
● NOTE: The smaller the -LR, the more informative the test. Of course, a -LR of 1.0 still means the test is useless, because the odds of having the condition have not changed after the test (a 1-fold decrease in the odds means the odds have not changed).
(https://www.uws.edu/wp-content/uploads/2013/10/Likelihood_Ratios.pdf)

Figure 16. Recall for figuring out and computing sensitivity, specificity, PPV, and NPV.
(https://www.researchgate.net/publication/49650721_Sensitivity_specificity_predictive_values_and_likelihood_ratios)
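The two LR formulas above can be sketched as follows. The sensitivity/specificity values are made up for illustration, not from the lecture:

```python
def lr_positive(sens, spec):
    # How much a positive result multiplies the odds of disease
    return sens / (1 - spec)

def lr_negative(sens, spec):
    # How much a negative result multiplies the odds of disease
    return (1 - sens) / spec

sens, spec = 0.90, 0.95
print(round(lr_positive(sens, spec), 1))  # 18.0 -> large, often conclusive shift
print(round(lr_negative(sens, spec), 2))  # 0.11 -> moderate shift (0.1-0.2 band)
```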
VI. 3 QUESTIONS ABOUT DIAGNOSTIC TESTS
1. REPRESENTATIVE
● There are journals that study only those with the disease and those without the disease.
● But when you would like to see the accuracy of a diagnostic test, how will the test fare if you have varying degrees of the disease?
○ If the population in the journal has mild to severe symptoms, early and late presentations, and both treated and untreated patients, it is better literature than a study comparing only those with the disease and those without it.

2. ASCERTAINMENT

Figure 18. "Gold Standard"
(Dr. Regal's powerpoint lecture)

● 📑 Ascertainment means that whether the diagnostic test is negative or positive, you performed the "gold standard" and compared the test result against it.
● 📑 E.g., in a study of fine needle aspiration biopsy of a thyroid nodule, whether the result is positive or negative for malignancy, you have to take out the thyroid nodule, examine it histopathologically, and compare that with the test result.
● That is what is meant when we ask, "Was the reference standard ascertained/performed regardless of the diagnostic test result?"
● 📑 In some studies, the accuracy of a diagnostic test is examined retrospectively (a chart review of actual practice).
● 📑 Verification Bias
○ In actual practice, physicians request the reference standard based on the initial result of the diagnostic test.
○ The reference standard is used to verify the initial finding, i.e., when it is positive.
○ When this happens, most of the available data will be positive for the diagnostic test and will likely be positive on the reference standard.
○ This will inflate the apparent accuracy of the test.
● 📑 To avoid verification bias:
○ The study must show that the reference standard was done regardless of the result of the diagnostic test being evaluated.
○ Check the methods section.

3. MEASUREMENT
● 📑 Was there an independent and blind comparison with a reference standard?
● 📑 There are two elements in this guide question:
○ Use of a reference standard
■ The reference standard for a diagnostic test is a test that gives the information nearest to the "truth."
■ Therefore, the accuracy of the test should be compared against the reference standard.
■ If the diagnostic test approximates the standard, then it is accurate.
● 📑 Thus, the questions you should answer are the following:
○ Whether there was a comparison with the reference standard, and whether the reference standard used is acceptable in your setting.
○ Whether the reader of the reference standard was blinded to the result of the diagnostic test being evaluated.
● For example, the pathologist who will read the specimen should not know the result of the test, because if the pathologist knew that the result was positive, he might over-read or over-interpret the specimen.
● If the pathologist knew that the test was negative, he might under-read or under-interpret the specimen because of that influence.
● There should be a blind comparison.

4. REPLICATION
● 📑 Were the methods for performing the test described in sufficient detail to permit replication?
● 📑 This is necessary so that the reader will be able to duplicate the test in his/her own setting and get the same valid result.
● 📑 The description should include the preparation of the patient, such as:
○ Diet
○ Drugs to avoid
○ Precautions
○ Ideal conditions for performing the diagnostic test
○ A step-by-step description of how the diagnostic test is done and interpreted.

B. DOES THIS (VALID) EVIDENCE SHOW THAT THIS TEST CAN ACCURATELY DISTINGUISH PATIENTS WHO DO AND DO NOT HAVE A SPECIFIC DISORDER?
● Next is to look at the results and whether they can help resolve the dilemma. Will the result put us at the other end of the spectrum?
● This question talks about:
○ Sensitivity, specificity, and likelihood ratios.
○ Is this test useful at all?
○ Can the test rule in or rule out (the presence or absence of) the disease?
1. SENSITIVITY; SPECIFICITY; PREDICTIVE VALUES; LIKELIHOOD RATIOS

Figure 19. Sensitivity, specificity, predictive values, likelihood ratios
(Dr. Regal's powerpoint lecture)

● Let's say we have two pieces of literature: one dealing with our dilemma with appendicitis, and the other dealing with diabetes.
● The journals in the figure above cited their results as sensitivity and specificity.
● If the same journal gives you the predictive value (the post-test probability), or the raw data from which you can compute the positive and negative predictive values (NPV), your problems are over, because you can simply check whether the predictive value went beyond the upper threshold that you established.
○ For example, in appendicitis, the upper threshold that we established is 80%.
○ So if the journal says the predictive value is above 80%, then we accept the diagnosis that the patient has a case of appendicitis.
○ If the lower threshold that we established is 10% and the predictive value for a negative test is around 9%, then it is below the lower threshold we established, and therefore we can discard the possibility of appendicitis.
○ The same is true for diabetes, where the sensitivity and specificity are 81%.
○ If the predictive value and the post-test probability are mentioned, then you can apply the predictive values in your decision.
● If the journal gives only sensitivity and specificity:
○ "How do we now push the probability before testing to a probability after testing, surpassing our upper threshold (in this case, with a positive test)?"
○ The significant values of the likelihood ratio for a positive test are 5 and 10.
○ Meanwhile, for a negative test they are 0.2 and 0.1.
● Compute the likelihood ratio using sensitivity and specificity:
○ For a positive test: likelihood ratio (+) = sensitivity / (1 - specificity)
○ For a negative test: likelihood ratio (-) = (1 - sensitivity) / specificity
● How do we utilize the likelihood ratio, which is expressed as a ratio, if our probability is expressed as a percentage?
○ Convert the percentage into a ratio, or odds: the probability of having the disease against the probability of not having the disease.
○ Formula: Pre-test odds = probability / (1 - probability)
○ In the appendicitis case, the probability of having appendicitis is 60%, while the probability of not having appendicitis is 40%. Using the formula above, the pre-test odds are 60/40.
○ Formula: Post-test odds = Pre-test odds × LR (+)
○ It is easier to express probability as a percentage, so we convert the odds back into a percentage.
○ Formula: Post-test probability = Odds / (1 + Odds)
○ We end up with the post-test probability (positive predictive value) because we are dealing with a positive test.
● From the pre-test probability, we determined the post-test probability. Our question now is: "Did this surpass the upper threshold?"
● The same process is done for a negative test.
○ Formula: Post-test odds = Pre-test odds × LR (-)
○ Convert the odds into a probability/percentage with the formula Post-test probability = Odds / (1 + Odds).
○ We end up with the post-test probability (negative predictive value) because we are now dealing with a negative test.

CASE 1: ACCURACY OF ULTRASOUND IN DIAGNOSING APPENDICITIS
Sensitivity: 37%
Specificity: 87%
● Computation:
● Step 1: Compute the likelihood ratios.
○ LR (+) = 0.37 / (1 - 0.87) = 2.9
○ LR (-) = (1 - 0.37) / 0.87 = 0.72
● Step 2: Convert the pre-test probability to pre-test odds.
○ 60% → 0.6 / (1 - 0.6) = 1.5
● Step 3: Convert the pre-test odds to post-test odds.
○ If (+LR): 1.5 × 2.9 = 4.35
○ If (-LR): 1.5 × 0.72 = 1.08
● Step 4: Convert the post-test odds to post-test probabilities.
○ If (+LR): 4.35 / (1 + 4.35) = 81%
○ If (-LR): 1.08 / (1 + 1.08) = 52%
● Step 5: Interpret.
○ If the result comes back (+) and the post-test probability > upper threshold: ACCEPT THE DIAGNOSIS AND START TREATMENT.
○ In this case, the post-test probability of 81% is greater than the upper threshold of 80%; therefore, we accept and treat the diagnosis.
○ If the result comes back (-) and the post-test probability < lower threshold: DON'T TREAT.
○ However, in the given, the post-test probability of 52% for a negative test result was greater than the lower threshold of 10%; hence we DO NOT DISCARD THE DIAGNOSIS.

CASE 2: DIAGNOSING DIABETES MELLITUS
● Step 5: Interpret.
○ If the result comes back (+) and the post-test probability > upper threshold: ACCEPT THE DIAGNOSIS AND START TREATMENT.
○ In this case, the post-test probability of 81% is greater than the upper threshold of 70%; therefore, we accept and treat the diagnosis of DM.
○ If the result comes back (-) and the post-test probability < lower threshold: DON'T TREAT.
○ In this case, since the post-test probability of 19% is less than the lower threshold of 20%, we can forget about the diagnosis of DM and think of other conditions that will explain the signs and symptoms of the patient. Continue testing with more accurate tests (i.e., CT scan).

VII. SENSITIVITY, SPECIFICITY, PREDICTIVE VALUES AND LIKELIHOOD RATIOS
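The five steps of Case 1 can be chained end to end in code. A sketch using the Case 1 numbers (the helper function name is ours; the 37% sensitivity is taken from the LR computation above):

```python
# Steps 2-4: probability -> odds, apply the LR, odds -> probability.
def post_test_probability(pretest_prob, lr):
    pretest_odds = pretest_prob / (1 - pretest_prob)  # Step 2
    posttest_odds = pretest_odds * lr                 # Step 3
    return posttest_odds / (1 + posttest_odds)        # Step 4

sens, spec = 0.37, 0.87
lr_pos = sens / (1 - spec)        # Step 1: ~2.9
lr_neg = (1 - sens) / spec        # Step 1: ~0.72

p_pos = post_test_probability(0.60, lr_pos)  # ~81%
p_neg = post_test_probability(0.60, lr_neg)  # ~52%

# Step 5: interpret against the 80% upper / 10% lower thresholds.
print(f"(+): {p_pos:.0%} ->", "accept and treat" if p_pos > 0.80 else "continue testing")
print(f"(-): {p_neg:.0%} ->", "discard" if p_neg < 0.10 else "do not discard")
```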
📑 Table 2. Evaluating LR Values

LR               | INTERPRETATION
>10 or <0.1      | Generate large, and often conclusive, changes from pre- to post-test probability
5-10 and 0.1-0.2 | Generate moderate shifts in pre- to post-test probability
2-5 and 0.2-0.5  | Generate small (but sometimes important) changes in probability
1-2 and 0.5-1    | Alter probability to a small (and rarely important) degree

Figure 24. Table of Specificity, Sensitivity, Likelihood Ratio and Predictive Values.
(Dr. Regal's powerpoint lecture)

VIII. REFERENCES
● Regal, H. (2021). Appraising and Applying Results of Studies on Diagnosis [Lecture].
● Paulino, A. (2020). 2023 Trans on Clinical Decision on Diagnostic Tests.
● https://www.uws.edu/wp-content/uploads/2013/10/Likelihood_Ratios.pdf

IX. REVIEW QUESTIONS
1. What are the other terms for pre-test and post-test probabilities?
2. T or F: The higher the specificity, the higher the predictive value.

Answers:
1. Prevalence and Predictive Value
2. T

X. FREEDOM WALL

XI. APPENDIX

Table 3. Summary of Formulas