
1. What is acuity?

Ans. Acuity: Visual acuity testing systems are based on having the patient view
a chart of optotypes, colors, or patterns to assess the clarity and sharpness of
their vision. Visual acuity tests, which are a standard part of most exam
lanes, can take a variety of forms. The visual acuity test determines how small
the letters on a standardized chart (Snellen chart) or a handheld card can be read
from 20 feet (6 meters) away. Special charts are used when testing at distances
shorter than 20 feet (6 meters). Some Snellen charts are video monitors that display
letters or images.

How it is measured: Visual acuity is usually recorded as:


 "Uncorrected," which is without glasses or contact lenses
 "Best corrected," which is with the best possible glasses or contact lens
prescription
For uncorrected visual acuity, you will be asked to remove your glasses or contact
lenses and stand or sit 20 feet (6 meters) from the eye chart. You will keep both
eyes open, then cover one eye with your palm, a piece of paper, or a small paddle
while reading aloud the smallest line of letters you can make out on the chart. For
people who cannot read, such as young children, numbers, lines, or pictures are used
instead. If you cannot make out any of the letters, numbers, or pictures, the examiner
will usually hold up a number of fingers and record the greatest distance at which you
can correctly identify how many are held up.
Example: a result of “20/20” —normal visual acuity—means you read the line
that those with normal vision can read. Visual acuity decreases as the bottom
number gets larger. A result of 20/40 means you can see at 20 feet what those with
normal vision can see from 40 feet away.
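A minimal sketch of how a Snellen fraction maps to decimal acuity and to the related
logMAR scale (logMAR is a standard conversion not discussed above; the function name
and example values are illustrative), assuming Python:

    import math

    def snellen_scores(numerator, denominator):
        # Convert a Snellen fraction such as 20/40 to decimal acuity and logMAR.
        decimal_acuity = numerator / denominator       # 20/40 -> 0.5
        logmar = math.log10(denominator / numerator)   # 20/40 -> ~0.30
        return decimal_acuity, logmar

    print(snellen_scores(20, 20))  # (1.0, 0.0): normal visual acuity
    print(snellen_scores(20, 40))  # (0.5, ~0.301): sees at 20 ft what normal vision sees at 40 ft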
Acuity Tool (in nursing): The tool uses both clinical patient characteristics and workload
indicators to score patients from 1 to 4 by acuity level. This approach gives
nurses the ability to score their patients and report to the charge nurse so that RN
assignments for the oncoming shift are quantifiable and equitable.

Visual acuity is the most widely used and widely accepted measure of visual
function. Visual acuity is important because it measures central corneal clarity,
central lens clarity, central macular function, and optic nerve conduction all at the
same time.
Types of Visual Acuity: In the visual system, acuity refers to the ability to
discriminate fine details of the visual scene. There are three types of visual acuity:
1. SPATIAL ACUITY: the ability to resolve two points in space. It is a function of
location: acuity in the periphery is relatively constant with brightness and is much
lower than at the fovea (due to the differences in the distribution of rods and cones).
It is also a function of brightness: as brightness increases, the ability to resolve a
gap at the fovea increases.
2. TEMPORAL ACUITY: the ability to distinguish visual events in time.
3. SPECTRAL ACUITY: the ability to distinguish differences in the wavelength of
the stimuli.
2. What is Contrast?
Ans: Contrast: Contrast is the difference in luminance or color that makes
an object (or its representation in an image or display) distinguishable. In real-world
visual perception, contrast is determined by the difference in color and brightness
between an object and other objects in the same field of view. The human visual
system is more sensitive to contrast than to absolute luminance; we can perceive
the world similarly despite significant changes in illumination throughout the
day or from location to location. The maximum contrast of an image is given by its
contrast ratio or dynamic range.

How filters create contrast in machine vision applications:


Contrast is critical to imaging results. Only by making a feature "pop" relative to
the larger image field in which it is located can machine vision software identify
the feature optimally. While sensor selection, lensing, and lighting are all
important factors in creating effective contrast in machine vision solutions, the
effective selection and application of filters can provide additional leverage for
many applications. We provide a first look at machine vision filter concepts and
benefits, which are frequently overlooked or misunderstood.
Figure: before and after applying filters.

There are numerous definitions of contrast. Some have color, while others do
not. Travnikova bemoans, "Such a variety of contrast concepts is extremely
inconvenient. It complicates the solution of many applied problems and makes
comparing the results published by different authors difficult."

In various contexts, different definitions of contrast are used. The formulas here
are applied to luminance contrast as an example, but they can also be applied to
other physical quantities. In many cases, contrast definitions represent a type of
ratio.

The reasoning is that a small difference is insignificant if the average luminance
is high, whereas the same small difference is significant if the average luminance
is low (see Weber–Fechner law). Some common definitions are provided below.

Weber contrast is defined as (I_a − I_b) / I_b, with I_a and I_b representing the
luminance of the features and the background, respectively. The measure is also
referred to as the Weber fraction, since it is the term that is constant in Weber's
law. Weber contrast is commonly used in cases where small features are present on a
large uniform background, i.e., where the average luminance is approximately equal to
the background luminance.
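A minimal numeric sketch of this definition (the luminance values are invented for
illustration), in Python:

    def weber_contrast(feature_luminance, background_luminance):
        # Weber contrast: (I_a - I_b) / I_b, suited to small features on a large uniform background.
        return (feature_luminance - background_luminance) / background_luminance

    # Example: a dark letter (20 cd/m^2) on a bright page (100 cd/m^2)
    print(weber_contrast(20.0, 100.0))  # -0.8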
Figure: a photograph of a leaf with several colors; the bottom image has an 11%
saturation boost and around a 10% increase in contrast.

Contrast Sensitivity: The ability to distinguish between luminances of different
levels in a static image is measured by contrast sensitivity. Contrast sensitivity
varies between individuals, peaking around the age of 20 and at angular
frequencies of about 2-5 cycles per degree. Furthermore, it can deteriorate with
age and due to other factors such as cataracts and diabetic retinopathy.

Figure: an image in which the contrast amplitude depends only on the vertical
coordinate and the spatial frequency depends only on the horizontal coordinate; at
medium frequencies, less contrast is needed than at high or low frequencies to detect
the sinusoidal fluctuation.

Contrast sensitivity and visual acuity: Visual acuity is a commonly used
parameter to assess overall vision. Despite normal visual acuity, decreased
contrast sensitivity may result in decreased visual function. For example, some
people with glaucoma may have 20/20 vision on acuity exams but struggle with
daily activities like driving at night.
Figure: log-log plot of spatial contrast sensitivity functions for luminance and
chromatic contrast. The graph demonstrates the relationship between contrast
sensitivity and spatial frequency; the target-like images are representative of the
center-surround organization of neurons, with peripheral inhibition at low,
intermediate, and high spatial frequencies. Used with permission from Brian Wandell, PhD.

The main purpose of contrast in writing is to underline ideas and explain their
meanings, so readers can easily follow a story or argument. Through opposite and
contrasting ideas, writers make their arguments stronger and more memorable for
readers because of the emphasis placed on them. There are two types of contrast
effect: a positive contrast effect, in which something is viewed as better than it
usually would be when compared to things that are worse, and a negative contrast
effect, in which something is viewed as worse than it usually would be when compared
to something better.
3. Write an algorithm for Image Formation.
Ans. The widely used algorithms in this context include denoising, region
growing, edge detection, etc. Contrast equalization is often performed in
image processing, and contrast-limited adaptive histogram equalization
(CLAHE) is a very popular preprocessing method for it.
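For example, a minimal CLAHE preprocessing sketch using OpenCV (the file names and
the clip-limit/tile-size parameters are illustrative choices, not values prescribed
here), in Python:

    import cv2

    # CLAHE operates on a single-channel image, so load the input as grayscale.
    gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

    # Equalize contrast locally in 8x8 tiles, clipping the histogram to limit noise amplification.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)

    cv2.imwrite("clahe_output.png", equalized)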
An object and light rays are the two things necessary for the formation of an image.
Image formation is the analog-to-digital conversion of an image by capturing devices
such as cameras, using 2D sampling and quantization techniques. In general, we see a
2D representation of the 3D world, and the analog image is formed in the same way;
image formation is essentially a conversion from the 3D world of the analog scene to
the 2D world of the digital image. A frame grabber or digitizer is typically used to
sample and quantize the analog signal.
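A toy sketch of 2D sampling and quantization with NumPy (the sampling step and
number of gray levels are arbitrary illustrative choices):

    import numpy as np

    def sample_and_quantize(analog, step=4, levels=16):
        # Spatial sampling: keep every `step`-th pixel in each direction.
        sampled = analog[::step, ::step]
        # Quantization: map the 0-255 range down to `levels` gray values.
        quantized = np.floor(sampled / 256.0 * levels)
        return (quantized * (256 // levels)).astype(np.uint8)

    # A synthetic gradient standing in for the continuous scene.
    scene = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
    digital = sample_and_quantize(scene)
    print(digital.shape)  # (64, 64)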

Figure: light reflection. At the surface of the apple, light is reflected in all
directions, and two of the rays hit the eyes of two observers.
Figure: projection on the retina. The object in front of the eye is projected onto
the retina.

The two parts of the image formation process:
• The geometry of image formation, which determines where in the image plane the
projection of a point in the scene will be located.
• The physics of light, which determines the brightness of a point in the image
plane as a function of illumination and surface properties.
• A simple model:
- The scene is illuminated by a single source.
- The scene reflects radiation towards the camera.
- The camera senses it via chemicals on film.
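To make the geometric part concrete, a minimal perspective-projection sketch under
the usual pinhole-camera assumption (x = f*X/Z, y = f*Y/Z); the focal length and
point coordinates below are made-up values:

    def project_point(X, Y, Z, f):
        # Pinhole projection of a 3D scene point onto the image plane at focal length f.
        return f * X / Z, f * Y / Z

    # A point 0.5 m to the right and 2 m in front of the camera, imaged with f = 35 mm.
    print(project_point(0.5, 0.0, 2.0, 0.035))  # (0.00875, 0.0)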

4. What is Pattern matching?

Ans. Pattern Matching: Pattern matching in computer vision refers to a set of
computational techniques that enable the localization of a template pattern in
a sample image or signal. Such a template pattern can be a specific facial feature,
an object of known characteristics, or a speech pattern such as a word. Pattern
matching is also used to determine whether source files of high-level languages are
syntactically correct, and to find and replace a matching pattern in text or code
with other text/code. Any application that supports search functionality uses
pattern matching in one way or another.
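A minimal sketch of template-pattern localization, assuming OpenCV and hypothetical
file names scene.png and template.png (normalized cross-correlation is only one of
several matching scores available):

    import cv2

    image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    # Slide the template over the image and score every location.
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)

    print("best match at", max_loc, "with score", max_val)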

Types of pattern matching algorithms in Machine Vision:


1. Supervised Algorithms: A supervised approach to pattern recognition is
known as classification. For identifying patterns, these algorithms
employ a two-stage methodology. The first stage is model
development/construction, and the second stage is prediction for new or
unknown objects.
2. Unsupervised Algorithms: In contrast to supervised algorithms for
pattern recognition, which use training and testing sets, these algorithms
employ a group-by approach. They look for patterns in the data and
group them based on similarities in features like dimension to make a
prediction. Assume we have a basket full of various fruits such as apples,
oranges, pears, and cherries. We assume we don't know what the fruits'
names are. We leave the data unlabeled. Now imagine we are in a
situation where someone approaches us and asks us to identify a new
fruit that has been added to the basket.
The user interacts with the system by providing an image or video as input. To
find similar patterns, the machine compares it against thousands, if not millions,
of images stored in its database. The essential features are extracted using an
algorithm designed primarily to group similar-looking objects and patterns; this is
the computer vision part of the task. Consider cancer detection as an example.
A small clustering sketch follows below.
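As an illustration of the group-by idea (the fruit feature values below are invented,
and scikit-learn's KMeans is just one possible clustering tool), a minimal
unsupervised sketch in Python:

    import numpy as np
    from sklearn.cluster import KMeans

    # Invented (weight in grams, diameter in cm) measurements for unlabeled fruit.
    fruit_features = np.array([
        [150, 7.5], [160, 7.8], [140, 7.2],   # apple-like
        [130, 6.5], [125, 6.8],               # orange-like
        [8, 2.0], [9, 2.1], [7, 1.9],         # cherry-like
    ])

    # Group the unlabeled samples into 3 clusters purely from feature similarity.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(fruit_features)
    print(kmeans.labels_)

    # A new fruit added to the basket is assigned to the nearest existing cluster.
    print(kmeans.predict([[145, 7.4]]))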
Importance of Pattern Matching:
Pattern matching has many useful applications, such as:

• Natural Language Processing: Applications like spelling and grammar
checkers, spam detectors, translation, and sentiment analysis tools heavily
depend on pattern recognition methods. Regular expressions are helpful for
identifying complex text patterns in natural language processing (a small
sketch follows this list).
• Image processing, segmentation, and analysis: Pattern matching is used to
give machines the human-like recognition intelligence required in image processing.
• Computer vision: Pattern matching is used to extract meaningful features
from given image/video samples and is applied in computer vision for various
applications like biological and biomedical imaging. Tumor identification is
the classical example.
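As referenced in the Natural Language Processing item above, a small
regular-expression sketch (the sample text and patterns are invented examples), in Python:

    import re

    # Toy grammar-style check: flag doubled words such as "used used".
    text = "Pattern matching is used used to find and replace a matching pattern."
    for match in re.finditer(r"\b(\w+)\s+\1\b", text, flags=re.IGNORECASE):
        print("repeated word:", match.group(1))

    # Find-and-replace with a pattern: mask anything that looks like an email address.
    print(re.sub(r"[\w.]+@[\w.]+", "<email>", "Contact alice@example.com for details."))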

5. Use the attached image to explain Image formation and Representation. (You
may utilize the algorithm in question 3)
Image Formation: Geometric primitives and transformations are essential
in modeling any image formation process because they project 3-D
geometric features into 2-D features. Beyond geometric features, image formation
also depends on discrete color and intensity values; it must account for the
lighting in the environment, the camera optics, sensor properties, and so on. As a
result, when discussing image formation in computer vision, this answer
concentrates on photometric image formation.
Photometric Image Formation: A simple description of image formation is as follows.
Light from a source is reflected by a particular surface; a part of that reflected
light passes through the image plane and reaches the sensor plane via the optics.
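One common simplification of the photometric step is diffuse (Lambertian) reflection;
the sketch below assumes that model, with invented albedo and light-direction values,
so it is an illustration rather than a formula prescribed here:

    import numpy as np

    def lambertian_intensity(normal, light_dir, albedo=0.8, light_intensity=1.0):
        # Diffuse shading: I = albedo * light_intensity * max(0, N . L), with unit vectors N and L.
        n = np.asarray(normal, dtype=float)
        l = np.asarray(light_dir, dtype=float)
        n /= np.linalg.norm(n)
        l /= np.linalg.norm(l)
        return albedo * light_intensity * max(0.0, float(np.dot(n, l)))

    # A surface facing straight up, lit from 45 degrees above the horizon.
    print(lambertian_intensity([0, 0, 1], [0, 1, 1]))  # about 0.57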

IMAGE REPRESENTATION:
Image representation can be roughly divided according to data organization into four
levels.
The boundaries between individual levels are inexact, and more detailed divisions are
also proposed in the literature.
If the image is to be processed using a computer it will be digitized first, after which it
may be represented by a rectangular matrix with elements corresponding to the
brightness at appropriate image locations.
More probably, it will be presented in color, implying (usually) three channels: red, green
and blue.
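To make the matrix representation concrete, a tiny NumPy sketch of a color image
stored as a rectangular array with one brightness value per channel (the 4x4 size
and the single red pixel are arbitrary):

    import numpy as np

    # Rows x columns x (red, green, blue) channels, with 0-255 brightness values.
    image = np.zeros((4, 4, 3), dtype=np.uint8)
    image[0, 0] = [255, 0, 0]   # top-left pixel is pure red

    print(image.shape)   # (4, 4, 3)
    print(image[0, 0])   # brightness of the R, G, B channels at pixel (0, 0)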

Four possible levels of image representation suitable for image analysis problems in
which objects have to be detected and classified. Representations are depicted as
shaded ovals.

The above figure is an unusual image representation; the point is that a lot of a
priori knowledge is used by humans to interpret images.
Low-level image processing and high-level computer vision differ in the data used.
Low-level data are comprised of original images represented by matrices composed of
brightness (or similar) values, while high-level data originate in images as well,
but only those data which are relevant to high-level goals are extracted, reducing
the data quantity considerably. High-level data represent knowledge about the image
content—for example, object size and shape.

Many low-level image processing methods were proposed in the 1970s or earlier;
research is trying to find more efficient and more general algorithms and is
implementing them on more technologically sophisticated equipment. In particular,
parallel machines (including GPUs) are being used to ease the computational load.

High-level vision tries to extract and order image processing steps using all available
knowledge—image understanding is the heart of the method, in which feedback from
the high level to the low level is used.

SECOND STEP:
Image segmentation is the next step, in which the computer tries to separate objects
from the image background and from each other.
Total and partial segmentation may be distinguished; total segmentation is possible only
for very simple tasks, an example being the recognition of dark non-touching objects
from a light background.

Object description and classification in a totally segmented image are also understood
as part of low-level image processing.

The above image represents several 3D computer vision tasks from the user's point of
view on the upper line (filled); algorithmic components on different hierarchical
levels support them in a bottom-up fashion.

• Human vision is natural and seems easy; computer mimicry of this is difficult.
• We might hope to examine pictures, or sequences of pictures, for quantitative and
qualitative analysis.
• ‘High’ and ‘low’ levels of computer vision can be identified.
• Processing moves from digital manipulation, through pre-processing, segmentation,
and recognition to understanding—but these processes may be simultaneous and
co-operative.
• An understanding of the notions of heuristics, a priori knowledge, syntax, and
semantics is necessary.
• A knowledge of the research literature is necessary to stay up to date with the
topic.
• Developments in electronic publishing and the Internet are making access to vision
simpler.

IMAGE REPRESENTATION MODELS:
Mathematical models are often used to describe images and other signals.
A signal is a function depending on some variable with physical meaning; it can be
one-dimensional (e.g., dependent on time), two-dimensional (e.g., an image dependent
on two co-ordinates in a plane), three-dimensional (e.g., describing a volumetric
object in space), or higher-dimensional.

The quality of a digital image grows in proportion to the spatial, spectral, radiometric,
and time resolutions.
The spatial resolution is given by the proximity of image samples in the image plane;
spectral resolution is given by the bandwidth of the light frequencies captured by the
sensor; radiometric resolution corresponds to the number of distinguishable gray-levels;
and time resolution is given by the interval between time samples at which images are
captured.
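A tiny numeric illustration of the radiometric and time resolutions described above
(the bit depths and frame interval are arbitrary examples):

    # Radiometric resolution: b bits per pixel give 2**b distinguishable gray levels.
    for bits in (1, 8, 12):
        print(bits, "bits ->", 2 ** bits, "gray levels")

    # Time resolution: a new image every 40 ms corresponds to 25 frames per second.
    print(1.0 / 0.040, "frames per second")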

Images f(x, y) can be treated as deterministic functions or as realizations of
stochastic processes.
