QUANTIFYING EDGE-BLENDED DISPLAY QUALITY: CORRELATION WITH OBSERVER JUDGMENTS

Dr. Charles J. Lloyd
Optical Engineer
BARCO Xenia

Presented at the IMAGE 2002 Conference
Scottsdale, Arizona, 8-12 July 2002
 
Abstract
This paper reports the results of a side-by-side evaluation of nine candidate metrics for quantifying edge blend quality in multi-channel display systems. The metrics tested include eight variations of contemporary luminance difference and slope-based metrics and a new approach based on a just-noticeable-difference area (JNDA) analysis. The performance of each candidate metric was evaluated by applying each metric to a set of 135 edge blended images which had been degraded using blend zone perturbations typical of those found in multi-channel flight simulators. The pool of degraded images was evaluated by 16 observers who produced ratings of edge blend quality for each image. The performance of each metric was evaluated by calculating the correlation between the metrics and the ratings of quality. The correlation between the JNDA metric and the quality ratings was 0.84, and the JNDA metric clearly outperformed the competing candidate metrics.
 
Introduction
Multi-channel Flight Simulators
Examination of images in the typical modern multi-channel flight simulator reveals the opportunity for a noticeable improvement in image quality if the projectors could be precisely aligned in the blend regions on a more frequent basis. A review of the adjustments a training center technician must make in order to keep the blend zones in good shape shows that blend zone quality is sensitive to numerous parameters in addition to the “blend zone” controls provided by the projector system. These parameters include the relative spatial shifting of one channel to another, the relative luminance (and color) of adjoining channels, the relative illuminance (shading) across each channel, and the differences in the black levels and gamma functions of the CRTs within the projectors. Discussions with the technicians responsible for keeping these complex display systems aligned make readily apparent the potential benefit of automating this tedious process.

A significant obstacle in the way of automating the blend zone adjustment task is the lack of a valid metric for measuring blend zone quality. Most projector and display system specifications acknowledge the need to control edge blend quality in that they attempt to put a specification on it. Often these specifications simply place limits on the maximum luminance difference (e.g., 5%) across the blend zone. While easy to measure using a common hand-held luminance meter, this simplistic metric of blend zone quality has the obvious problem that it does not correlate well with the quality judgments of human observers. For example, the visual science literature shows that a 1% change in luminance over a visual angle of a few arc minutes is highly visible, whereas a significantly larger change (e.g., 10%) can be invisible if the change is made over an angle of many degrees and the change is “smooth.”
Candidate Edge Blend Metrics
Prior to this evaluation a number of recent specifications for multi-channel simulator display systems were examined and approximately 15 persons from several companies within the simulation industry were questioned in order to assemble a list of candidate edge blend metrics for testing.

For all of the edge blend metrics identified, the common hand-held luminance meter with a 1 deg circular aperture (e.g., Minolta LS-100 or Minolta CS-100) is used. For each of these metrics up to five measurement points are defined. Figure 1 shows the typical locations of these measurement points relative to the blend zone.

[Figure 1 schematic: measurement point A in the left channel, points B, C, and D across the blend zone, and point E in the right channel.]

Fig. 1. Measurement locations for the common metrics of edge blend quality applied to multi-channel simulator displays.
 
The most commonly specified edge blend metric for moving base flight simulators appears to be the luminance difference metric calculated using three measurement points. With this metric the technician measures luminance at points A, C, and E and then calculates the maximum difference between the three measurements. In this paper this metric is referred to as “MaxDiff3.”

A simple variation on this metric, identified here as “MaxDiff3Mean,” uses the maximum difference between each measurement and the mean of the three measurements.

A variation of the three-point difference metric, proposed by a manufacturer of projection displays, uses five measurement points rather than three. A second unique attribute of this metric is that it uses the CIELUV method for calculating color differences (ΔE*), which allows it to respond to color as well as luminance differences. For this evaluation only luminance differences were introduced into the set of test images; thus, the Δu* and Δv* components of the CIELUV-based metric are zero and the metric simplifies to ΔE* being equal to the maximum difference in luminance among the five measurement points. In this evaluation this metric is referred to as “MaxDiff5.”

A simple variation on the “MaxDiff5” metric is the “MaxDiff5Mean” metric, defined here as the maximum difference between the mean of the five measurement points and each measurement point.

Two additional variations on the metrics described above were included as candidate metrics for testing. These metrics were calculated using the standard deviations rather than the maximum differences of either three or five measurement points and are thus identified as the “StdDev3” and “StdDev5” metrics.

One display system specification for a commercial flight simulator called out an edge blend metric based on luminance slope rather than luminance difference. This specification states that the “luminance rate of change shall not exceed 0.15 fL per one deg change of angle.” This specification does not define the measurement points but implies that the measurements are to be made in the neighborhood of any “objectionable variations” in luminance.

Two variations of slope-based metrics were included as candidate metrics for this evaluation. For both of these metrics five measurement points were used. The “MaxSlope” metric was calculated as the maximum absolute value of slope across the blend zone, whereas the “StdDevSlope” metric was calculated as the standard deviation in slope across the blend zone.

Difficulties with existing metrics. From a visual scientist’s point of view the most obvious problem with the existing metrics of edge blend quality is the fact that these metrics respond only to the magnitude of luminance changes while largely ignoring the spatial extent over which these changes occur. This attribute of the contemporary metrics flies in the face of the results of dozens of evaluations of human contrast sensitivity, which show that the visibility of low contrast stimuli depends heavily on the angular size or “spatial frequency” of the stimuli.

For example, the data of Campbell and Robson (1968), Blakemore and Campbell (1969), Farrell and Booth (1984), and others show that humans have a maximum sensitivity to contrast at a spatial frequency of about 2 to 3 cyc/deg. Their data show that at this frequency contrast can be detected at a modulation of about 0.0025, which corresponds to a contrast ratio of approximately 1.005. What this means is that people can detect a luminance change as small as 0.5% if that change spans an angular distance of about ¼ deg or less.

On the other hand, these same contrast threshold functions show that a modulation of approximately 0.025 is required to see luminance changes at a spatial frequency of 0.1 cyc/deg. Thus, for luminance changes that span 5 or more deg, a luminance ratio of up to 5% can be invisible if the luminance varies smoothly.

Using a mathematical model for contrast threshold provided by Infante (1991), a modulation of 0.22 is required to see a smoothly varying (cosine) grating at 0.01 cyc/deg at 6 fL, which corresponds to the spatial extent of a 50 deg wide image.
In other words, a luminance ratio of greater than 1.57 can be invisible if the luminance varies smoothly over an angular distance comparable to the width of a flight simulator channel. This result is entirely consistent with common specifications for relative illuminance, which allow the illuminance at the corner of a display channel to fall to as low as 50 or 60% of the illuminance at the center of the channel.

It is clear from evaluations of human contrast sensitivity that allowable contrast varies over a large range, depending on the spatial frequency (spatial extent or smoothness) of the luminance transition. For this reason any attempt to quantify edge blend quality that does not simultaneously consider contrast and spatial distribution seems doomed to having a poor correlation with quality.

Perhaps the most detrimental aspect of metrics which are uncorrelated with quality is the (unnecessary) arguments that display systems vendors and customers find themselves having over the specifications and measurements. Today it is far too easy to produce a display that is clearly acceptable to the customer but does not pass the specification. Similarly, it is too easy to produce a display that meets the specification but one that even the vendor would not consider acceptable.
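For reference, the contemporary difference- and deviation-based candidates described above reduce to a few lines of arithmetic over the three or five photometer readings. The sketch below is ours, not the paper's: helper names are hypothetical, readings are assumed to be in fL at points A through E, angular positions in degrees, and population standard deviation is assumed (the paper does not state which form was used):

```python
from statistics import mean, pstdev

def max_diff(readings):
    """Maximum pairwise luminance difference (MaxDiff3 / MaxDiff5)."""
    return max(readings) - min(readings)

def max_diff_mean(readings):
    """Maximum difference from the mean (MaxDiff3Mean / MaxDiff5Mean)."""
    m = mean(readings)
    return max(abs(r - m) for r in readings)

def std_dev(readings):
    """Standard deviation of the readings (StdDev3 / StdDev5)."""
    return pstdev(readings)

def slopes(readings, angles_deg):
    """Luminance slope (fL per deg) between adjacent measurement points."""
    return [(readings[i + 1] - readings[i]) / (angles_deg[i + 1] - angles_deg[i])
            for i in range(len(readings) - 1)]

def max_slope(readings, angles_deg):
    """MaxSlope: maximum absolute value of slope across the blend zone."""
    return max(abs(s) for s in slopes(readings, angles_deg))

def std_dev_slope(readings, angles_deg):
    """StdDevSlope: standard deviation of slope across the blend zone."""
    return pstdev(slopes(readings, angles_deg))

# Example: five readings (fL) at points A-E, spaced 1 deg apart.
L = [6.0, 6.3, 5.8, 6.1, 6.0]
angles = [0.0, 1.0, 2.0, 3.0, 4.0]
print(max_diff(L))           # MaxDiff5
print(max_diff(L[::2]))      # MaxDiff3, using points A, C, E only
print(max_slope(L, angles))  # compare against the 0.15 fL/deg limit
```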
JNDA Metric
In response to the need for an improved metric of edge blend quality this author has developed the “just noticeable difference area” (JNDA) metric, which quantifies uniformity by considering contrast as a function of spatial frequency. Furthermore, this metric contains an explicit model of human contrast sensitivity and uses these data to “perceptually weight” the metric so that it more accurately represents the human observer.

The JNDA metric evaluated here is based on the design of the computational human visual system (HVS) model proposed by Lloyd (1990), which in turn was based largely on the “just noticeable difference” (JND) evaluation approach developed by Carlson and Cohen (1980). The HVS model is patterned after the multi-channel models described by Quick (1974), Wilson and Bergen (1979), and Watson and Robson (1981), which model the overall response of human observers as the responses of a number of independent spatial frequency channels.
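The general idea of perceptually weighting contrast by spatial frequency can be illustrated with a minimal single-channel sketch. This is emphatically not the JNDA metric or the Lloyd (1990) HVS model: it substitutes the Mannos and Sakrison (1974) contrast sensitivity approximation for the paper's model, uses one frequency channel rather than many, and all function names are ours. It shows only why frequency weighting makes an abrupt blend artifact score worse than a smooth one of equal magnitude:

```python
import numpy as np

def csf_mannos_sakrison(f_cyc_per_deg):
    """Contrast sensitivity vs spatial frequency, using the Mannos &
    Sakrison (1974) approximation (an assumed stand-in for the paper's
    CSF model): A(f) = 2.6 (0.0192 + 0.114 f) exp(-(0.114 f)^1.1)."""
    f = np.asarray(f_cyc_per_deg, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

def perceptually_weighted_error(profile, deg_per_sample):
    """Weight the Fourier components of a luminance profile (normalized
    to its mean) by the CSF and sum the result, giving a crude
    single-channel visibility score, not the JNDA metric itself."""
    profile = np.asarray(profile, dtype=float)
    contrast = profile / profile.mean() - 1.0             # modulation about the mean
    spectrum = np.abs(np.fft.rfft(contrast)) / len(contrast)
    freqs = np.fft.rfftfreq(len(contrast), d=deg_per_sample)  # in cyc/deg
    return float(np.sum(spectrum * csf_mannos_sakrison(freqs)))

# A 5% luminance step and a 5% linear ramp, both spanning 10 deg of
# visual angle: the abrupt step carries more perceptually weighted
# contrast than the smooth ramp, matching the observations above.
n, span_deg = 512, 10.0
step = np.where(np.arange(n) < n // 2, 1.0, 1.05)
ramp = np.linspace(1.0, 1.05, n)
print(perceptually_weighted_error(step, span_deg / n))
print(perceptually_weighted_error(ramp, span_deg / n))
```

The design point this sketch captures is the one the paper argues for: a metric must see both the magnitude of a luminance change and the spatial frequency band it occupies before it can track human judgments.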
