
1. What is Colour?

Color is the characteristic of visual perception described through color categories, and it is a
very important cue for object recognition. This perception of color derives from the stimulation
of photoreceptor cells, in particular the cone cells in the human eye and other vertebrate eyes, by
electromagnetic radiation. Color categories and physical specifications of color are associated
with objects through the wavelength of the light that is reflected from them. This reflection is
governed by the object's physical properties, such as its light absorption and emission spectra.

In the human eye, there are three types of wavelength-sensitive cone cells (n = 3). These
cells collect the color information from the incoming signal, and the human visual system converts it
into the color we see. When we consider the color of an object, an essential part of color detection is
the illumination. Since the color signal is originally light reflected (or radiated, or transmitted)
from an object, the color of the illumination also affects the detected color of the object. A
schematic drawing of the detection of object color is shown in Fig. 1.

Fig. 1. The light source, colored objects, and the human visual system are needed to generate the perception of color

Fig. 2. (a) A set of color spectra (x-axis: wavelength from 380 to 730 nm; y-axis: reflectance factor) and (b) the
corresponding colors
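
As a rough illustration of the interaction just described, the color signal reaching a detector can be modeled, wavelength by wavelength, as the product of the illuminant's spectral power and the object's reflectance factor (the kind of reflectance spectra shown in Fig. 2). The spectra in the Python sketch below are invented sample values, not measured data.

import numpy as np

# Wavelength grid over the visible range (nm), matching the 380-730 nm span of Fig. 2.
wavelengths = np.arange(380, 731, 10)

# Hypothetical illuminant: slightly bluish light (more power at short wavelengths).
illuminant = 1.0 + 0.5 * (730 - wavelengths) / 350.0

# Hypothetical surface reflectance: a reddish object reflecting long wavelengths best.
reflectance = 0.1 + 0.8 * (wavelengths - 380) / 350.0

# The color signal that actually reaches the detection system is their elementwise product.
color_signal = illuminant * reflectance
print(color_signal[:5])  # values at 380, 390, 400, 410, 420 nm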
The credit for the discovery of the nature of light as a spectrum of wavelengths is given to
Isaac Newton, along with the idea that colors are formed as a combination of different component rays,
which are immaterial by nature. Newton also presented colors in a color circle. In his scheme, there
were seven basic colors: violet, indigo, blue, green, yellow, orange, and red. In the spectral
approach to color, as shown in Fig. 2, the wavelength scale is linear and continuous, bounded by
ultraviolet (UV) at the short-wavelength end and infrared (IR) at the long-wavelength end. At first glance, the circle form does
not seem natural for this physical signal. However, when the human perception of the different wave
bands is considered, the circle turns out to be a good way to represent colors.

An important phase in the development of color science came in 1801. At that time, theories of
human color vision were being developed, and the physician Thomas Young proposed that there are three
different types of color-sensitive cells in the human retina. In that model, these cells are sensitive
to red, green, and violet, and all other colors are mixtures of these principal pure colors. The
German physicist Hermann von Helmholtz studied this model further. He also provided the first
estimates of spectral sensitivity curves for the retinal cells. In the mid-1870s, the German physiologist
Ewald Hering presented his theory of human color vision. His theory was based on four
fundamental colors: red, yellow, green, and blue. This idea is the basis for the opponent color
theory, in which red-green and blue-yellow form opponent color pairs. These ideas became the basis
for models of human color vision, for the trichromatic color theories, and for the standards of
representing colors in a three-dimensional space.

As mentioned above, the basis of color is light, a physical signal of electromagnetic
radiation. This radiation is detected by some detection system. If the system is human vision,
then we consider traditional color. If we do not restrict the detection system, we consider the
physical signal, the color spectrum. Kuehni separates these approaches into color and spectral color.

In the spectral approach, color means the color signal originating from the object and reaching
the color detection system. In traditional color science, black and white, and the gray levels in
between, are called achromatic light. This means that they differ from each other only by radiant
intensity, or luminous intensity in photometric terms. Other light is chromatic. Hence, one may
say that black, white, and gray are not colors. This is a meaningful description only if we have a
fixed, well-defined detection system. In the traditional color approach, human color
vision is considered to be based on such a fixed detection system.
2. Color Perception
Perception of color begins with specialized retinal cells containing pigments with different
spectral sensitivities, known as cone cells. In humans, there are three types of cones sensitive to
three different spectra, resulting in trichromatic color vision. The cones are conventionally
labeled according to the ordering of the wavelengths of the peaks of their spectral sensitivities:
short (S), medium (M), and long (L) cone types. The spectral sensitivity of S-cones peaks at
approximately 420–440 nm, that of M-cones at 534–555 nm, and that of L-cones at 564–580 nm (as
shown in Fig. 3).

Fig. 3. Spectral sensitivity of the S-cone, M-cone, and L-cone. Combined results from various authors using
different methods
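
As a minimal sketch of how these sensitivities turn a spectrum into three numbers, the cone responses can be approximated by weighting the incoming color signal with each sensitivity curve and summing over wavelength. The Gaussian curves below are crude stand-ins, with peaks chosen inside the ranges quoted above; real cone fundamentals have different shapes.

import numpy as np

wavelengths = np.arange(380.0, 731.0, 1.0)  # nm, 1-nm steps
step = 1.0

def gaussian_sensitivity(peak_nm, width_nm=40.0):
    # Crude Gaussian stand-in for a cone's spectral sensitivity curve.
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

# Peaks placed inside the stated ranges: S 420-440 nm, M 534-555 nm, L 564-580 nm.
s_sens = gaussian_sensitivity(430.0)
m_sens = gaussian_sensitivity(545.0)
l_sens = gaussian_sensitivity(570.0)

def cone_responses(color_signal):
    # Approximate each cone response as a Riemann sum of signal x sensitivity.
    return np.array([
        np.sum(color_signal * s_sens) * step,
        np.sum(color_signal * m_sens) * step,
        np.sum(color_signal * l_sens) * step,
    ])

# Example: an equal-energy (flat) spectrum excites all three cone types.
print(cone_responses(np.ones_like(wavelengths)))  # S, M, L responses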

2.1 Trichromatic Theory


The trichromatic theory was first proposed by Thomas Young in 1802 and was explored further by
Helmholtz in 1866. This theory is primarily based on color-mixing experiments and suggests that a
combination of three channels explains color discrimination functions (a sketch of such a three-primary
match is given after the list below). Evidence for the trichromatic theory includes:
a. Identification of the spectral sensitivities of two cone pigments by Rushton's retinal
densitometry.
b. Identification of three cone pigments by microspectrometry.
c. Identification of the genetic code for L, M, and S cones.
d. Color matching functions.
e. Isolating photoreceptors and measuring their physiological responses as a function of
wavelength.
f. Spectral sensitivity measurements (Wald-Marre spectral sensitivity functions and
Stiles' π-mechanisms).
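
As an informal illustration of point (d), a color-matching experiment can be idealized as a small linear-algebra problem: if each of three primaries produces a known triplet of cone responses, matching a test light amounts to solving a 3 x 3 system for the primary intensities. All numbers in the sketch are invented; a real experiment would use measured cone fundamentals.

import numpy as np

# P[:, j] holds the (S, M, L) cone responses produced by one unit of primary j (R, G, B).
P = np.array([
    [0.02, 0.13, 0.97],   # S-cone responses to the R, G, B primaries
    [0.35, 0.85, 0.18],   # M-cone responses
    [0.95, 0.63, 0.02],   # L-cone responses
])

# Cone responses evoked by some test light to be matched (again, made-up values).
test_light = np.array([0.40, 0.70, 0.80])

# Primary intensities that reproduce the same three cone responses.
weights = np.linalg.solve(P, test_light)
print(weights)  # a negative weight means that primary must be added to the test side instead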

However, the trichromatic theory fails to account for the four unique colors (red, green, yellow,
and blue) and fails to explain why dichromats can perceive white and yellow. It also fails to
fully explain color discrimination functions and opponent color percepts.

2.2 Opponent Color Theory

The opponent color theory was first proposed by Hering in 1872. At the time, this theory
rivaled the well-accepted trichromatic theory, which explains the trichromacy of vision and
predicts color matches. Hering's opponent color theory suggests that there are three channels,
red-green, blue-yellow, and black-white, with each responding in an antagonistic way. That is,
either red or green is perceived, and never greenish-red. Hering, however, never challenged the
initial stages of processing expressed by the trichromatic theory. He simply argued that any color
vision theory should explain our perception, that is, color opponency as revealed by colored
afterimages.

Hurvich and Jameson provided quantitative data for color opponency. Using hue-cancellation
paradigms, they isolated the psychophysical color-opponent channels. The V(λ) function was used
for brightness discrimination to describe the perception of blackness and whiteness.
Fig.4 . Hurvich and Jameson experiment using blue or yellow AND red or green to match all wavelengths of
the visible spectrum.

Therefore, by adjusting the amount of blue or yellow AND red or green, any sample
wavelength can be matched (shown in Fig. 4). Complementary wavelengths can be used to cancel
each other for all wavelengths except the four unique hues (blue, green, yellow, and red). Other
evidence supporting the opponent color theory includes:

a. Electrical recordings of horizontal cells from fish retina show blue-yellow and
red-green opponent processes.
b. Electrical recordings from the lateral geniculate nucleus show opponent color
processes.
c. Electrical recordings of ganglion cells from primate retinas show opponent color
processes.

2.3 Stage Theory


These findings have led to the modern model of normal color vision, which incorporates both the
trichromatic theory and the opponent color theory into two stages (shown in Fig. 5).

Fig.5 . Model for normal human color vision.

The first stage can be considered the receptor stage, which consists of the three
photopigments (blue, green, and red cones). The second is the neural processing stage, where
color opponency occurs; it operates at a post-receptoral level, as early as the horizontal cell level.
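
A compact sketch of this two-stage idea, assuming the common textbook simplification that the opponent signals are plain sums and differences of the cone outputs (quantitative models use different weights):

def stage_model(l, m, s):
    # Stage 1: the receptor stage supplies the L, M, S cone responses.
    # Stage 2: post-receptoral neural processing recombines them into opponent channels.
    red_green = l - m            # positive: reddish, negative: greenish
    blue_yellow = s - (l + m)    # positive: bluish, negative: yellowish
    black_white = l + m          # achromatic (luminance-like) channel
    return red_green, blue_yellow, black_white

# Example: a light exciting mainly L-cones comes out reddish and yellowish.
print(stage_model(l=0.9, m=0.4, s=0.1))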

2.4 Subjectivity of Color Perception

Nothing categorically distinguishes the visible spectrum of electromagnetic radiation from
invisible portions of the broader spectrum. In this sense, color is not a property of
electromagnetic radiation, but a feature of visual perception by an observer. Furthermore, there is
an arbitrary mapping between wavelengths of light in the visual spectrum and human
experiences of color. Although most people are assumed to have the same mapping, the
philosopher John Locke recognized that alternatives are possible, and described one such
hypothetical case with the "inverted spectrum" thought experiment. For example, someone with
an inverted spectrum might experience green while seeing 'red' (700 nm) light, and experience
red while seeing 'green' (530 nm) light. Synesthesia (or ideasthesia) provides some atypical but
illuminating examples of subjective color experience triggered by input that is not even light,
such as sounds or shapes. The possibility of a clean dissociation between color experience and
properties of the world reveals that color is a subjective psychological phenomenon.
Perception of color depends heavily on the context in which the perceived object is
presented. For example, a white page under blue, pink, or purple light will reflect mostly blue,
pink, or purple light to the eye, respectively; the brain, however, compensates for the effect of
lighting (based on the color shift of surrounding objects) and is more likely to interpret the page
as white under all three conditions, a phenomenon known as color constancy.
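
A crude computational analogue of this compensation, and not a claim about the brain's actual mechanism, is von Kries-style white balancing: estimate the illuminant from the image and divide it out, so a page lit by bluish light is mapped back toward neutral. The gray-world estimate used below is one simple, imperfect heuristic.

import numpy as np

def gray_world_white_balance(image):
    # image: H x W x 3 array of RGB values in [0, 1].
    # Assume the average scene color "should be" gray and rescale each channel accordingly.
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(image * gains, 0.0, 1.0)

# A "white page" under bluish light: the blue channel dominates everywhere.
page = np.ones((4, 4, 3)) * np.array([0.6, 0.7, 1.0])
print(gray_world_white_balance(page)[0, 0])  # roughly [0.77, 0.77, 0.77], i.e. near neutral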

3. Spatial Sampling
Spatial sampling is the process of collecting observations in a two-dimensional framework.
Careful attention is paid to the quantity of the samples, dictated by the budget at hand, and the
location of the samples. A sampling scheme is generally designed to maximize the probability of
capturing the spatial variation of the variable under study. Once initial samples have been
collected and the variation of the variable documented, additional measurements can be taken at other locations.
This approach is known as second-phase sampling, and various optimization criteria have
recently been proposed to determine the optimal locations of these new observations.

3.1 One-Dimensional Sampling


Pioneering research on sampling was devoted to one-dimensional problems (see, e.g.,
Cochran 1946; Madow 1946, 1953; Madow and Madow 1949). Cochran documented the
efficiency associated with random sampling, systematic sampling, and stratified sampling. A
random sampling scheme (Fig. 6a) allocates n sample points randomly within a population of
interest. Each location is equally likely to be selected. In a systematic random sampling scheme (Fig. 6b), the
population is partitioned into a prespecified number of intervals.
Fig. 6. One-dimensional sampling schemes for n = 10. The x-axis is partitioned into 10 intervals for cases (b) and (c). The
random sampling locations have been generated using the MATLAB rand function

For each interval, a number of samples are collected, and the total of all samples is of size
n. In a systematic sampling scheme (Fig. 6c), the population of interest is divided into n
intervals of similar size. The first element is chosen within the first interval, starting at the origin,
and the remaining n-1 elements are aligned according to the same, fixed interval. A discussion of
these configurations applied to the field of natural resources can be found in Stevens and Olsen (2004).
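
The three one-dimensional schemes can be sketched in a few lines; the snippet below is a Python/NumPy analogue of the MATLAB rand calls mentioned in the caption of Fig. 6, with n = 10 as in the figure.

import numpy as np

rng = np.random.default_rng(0)
n = 10                      # number of sample points, as in Fig. 6
interval = 1.0 / n          # width of each of the n equal intervals on [0, 1)

# (a) Simple random sampling: n locations drawn uniformly over the whole domain.
random_sample = rng.random(n)

# (b) Systematic random sampling: one random point inside each of the n intervals.
systematic_random = np.arange(n) * interval + rng.random(n) * interval

# (c) Systematic sampling: a random start in the first interval, then a fixed spacing of 1/n.
start = rng.random() * interval
systematic = start + np.arange(n) * interval

print(np.sort(random_sample))
print(systematic_random)
print(systematic)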

3.2 Two-Dimensional Sampling

A simple random sampling design (Fig. 7a) randomly selects m sample points in a study
region, generally denoted D, where each location has an equal opportunity to be sampled. In a
systematic sampling design (illustrated in Fig. 7b–d), the study region is discretized
into m intervals of equal size. The first element is randomly or purposively chosen within the
first interval, and so are the other points in the remaining intervals. If the first sample is chosen at
random, the resulting scheme is called systematic random sampling.

Fig. 7. Two-dimensional sampling schemes for n = 100. In figures (b), (c), and (d), both the x- and y-axes have been
divided into 10 intervals. Points were randomly generated using the MATLAB rand function

When the first sample point is not chosen at random, the resulting configuration is called
regular systematic sampling. Centric systematic sampling occurs when the first point is chosen
in the center of the first interval, resulting in a checkerboard configuration. The most common
regular geometric configurations are the equilateral triangular grid, the rectangular (square) grid,
and the hexagonal grid (Cressie, 1991). The benefits of a systematic approach reside in a good
spreading of observations across D, guaranteeing maximized sampling coverage and
preventing sampling clustering and redundancy. This design, however, presents two
drawbacks (a sketch of these two-dimensional schemes is given after the list below):

a. The distribution of separating distances in D is not represented well, because many
pairs of points are separated by the same distances.

b. If the spatial process shows evidence of recurrence (periodicities), there is a risk that
the variation of the variable will remain uncaptured, because the systematic design
coincides in frequency with a regular pattern in the landscape (Overton and Stehman
1993).
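
The sketch referenced above mirrors the two-dimensional designs of Fig. 7, again using Python/NumPy in place of the MATLAB rand calls mentioned in the caption (a 10 x 10 layout, so n = 100 as in the figure).

import numpy as np

rng = np.random.default_rng(0)
k = 10                                   # intervals per axis, so n = k * k = 100 points
cell = 1.0 / k                           # side length of each grid cell in the unit-square region D

# (a) Simple random sampling over the study region D.
simple_random = rng.random((k * k, 2))

# Lower-left corners of the k x k grid cells, used by the systematic designs below.
corners = np.array([(i * cell, j * cell) for i in range(k) for j in range(k)])

# (b) Systematic random sampling: one random point inside every grid cell.
systematic_random = corners + rng.random((k * k, 2)) * cell

# (c)/(d) Regular systematic sampling: the same offset repeated in every cell;
# choosing the cell center gives centric systematic sampling (a checkerboard-like layout).
centric = corners + cell / 2.0

print(simple_random.shape, systematic_random.shape, centric.shape)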

4. Pixel and Resolution


a. Pixel
In digital imaging, a pixel is a physical point in a raster image, or the smallest
addressable element in an all points addressable display device; so it is the smallest
controllable element of a picture represented on the screen. Each pixel is a sample of an
original image; more samples typically provide more accurate representations of the original.
The intensity of each pixel is variable. In color imaging systems, a color is typically
represented by three or four component intensities such as red, green, and blue, or cyan,
magenta, yellow, and black. In some contexts (such as descriptions of camera sensors), pixel
refers to a single scalar element of a multi-component representation (called a photosite in
the camera sensor context, although sensel is sometimes used), while in yet other contexts it
may refer to the set of component intensities for a spatial position.
A pixel is generally thought of as the smallest single component of a digital image.
However, the definition is highly context-sensitive. For example, there can be "printed
pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or
pixels on a display device, or pixels in a digital camera (photosensor elements). This list is
not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and
spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per
line, or spaced 10 pixels apart.
b. Resolution
Resolution refers to the number of pixels on a display or in a camera sensor (specifically,
in a digital image). A higher resolution means more pixels, and more pixels provide the
ability to display more visual information (resulting in greater clarity and more detail).

Fig. 8. Example of high and low resolution

Resolution does not refer to the physical size of the display, camera sensor, or image. For
example, two displays with the same resolution can have different physical dimensions.
Hence the importance of another parameter, pixel density, which is
measured in pixels per inch (ppi). Since a smaller display of the same resolution has
more pixels per inch, the image it provides should be clearer and more detailed (although
graphics will be physically smaller). There are two kinds of resolution. The first refers to
the pixel count, which is the number of pixels that form a digital graphic.
The other concerns the distribution of the total number of pixels that construct a
digital graphic, commonly referred to as pixel density.
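
A small worked example of the pixel-density point (the display sizes below are invented for illustration): two screens can share the same pixel count yet differ greatly in pixels per inch.

import math

def pixel_density_ppi(width_px, height_px, diagonal_inches):
    # Pixels per inch: diagonal pixel count divided by the diagonal size in inches.
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_inches

# The same 1920 x 1080 resolution on a 5.5-inch phone and a 24-inch monitor (hypothetical sizes).
print(round(pixel_density_ppi(1920, 1080, 5.5)))   # ~401 ppi: denser, sharper, physically smaller
print(round(pixel_density_ppi(1920, 1080, 24.0)))  # ~92 ppi: same pixel count, far less dense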

5. Color Quantization
In computer graphics, color quantization or color image quantization is quantization
applied to color spaces; it is a process that reduces the number of distinct colors used in an
image, usually with the intention that the new image should be as visually similar as possible
to the original image. Computer algorithms to perform color quantization on bitmaps have
been studied since the 1970s. Color quantization is critical for displaying images with many
colors on devices that can only display a limited number of colors, usually due to memory
limitations, and enables efficient compression of certain types of images.
The name "color quantization" is primarily used in computer graphics research literature;
in applications, terms such as optimized palette generation, optimal palette generation, or
decreasing color depth are used. Some of these are misleading, as the palettes generated by
standard algorithms are not necessarily the best possible.
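
A minimal sketch of the idea, using naive uniform quantization that snaps each channel to a small set of evenly spaced levels; practical palette generators such as median cut, octree, or k-means choose the reduced palette adaptively instead.

import numpy as np

def uniform_quantize(image, levels_per_channel=4):
    # image: H x W x 3 uint8 array. Snap each channel to evenly spaced levels,
    # leaving a palette of at most levels_per_channel**3 distinct colors.
    step = 255.0 / (levels_per_channel - 1)
    quantized = np.round(image.astype(np.float64) / step) * step
    return quantized.astype(np.uint8)

# Random test "image": 4 levels per channel leaves at most 4**3 = 64 distinct colors.
img = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
reduced = uniform_quantize(img, levels_per_channel=4)
print(len(np.unique(reduced.reshape(-1, 3), axis=0)))  # at most 64 distinct colors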

6. Color Depth

Color depth, also known as bit depth, is either the number of bits used to indicate the
color of a single pixel in a bitmapped image or video framebuffer, or the number of bits
used for each color component of a single pixel. For
consumer video standards, such as High Efficiency Video Coding (H.265), the bit depth
specifies the number of bits used for each color component. When referring to a pixel, the
concept can be defined as bits per pixel (bpp), which specifies the number of bits used. When
referring to a color component, the concept can be defined as bits per component, bits per
channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per
color channel or bits per sample (bps).
Fig. 9. Color and depth images from a real scene: (a) color image from Kinect 1; (b) color image from
Kinect 2; (c) raw depth map from Kinect 1; (d) raw depth map from Kinect 2.

Color depth is only one aspect of color representation, expressing the precision with
which colors can be expressed; the other aspect is how broad a range of colors can be
expressed (the gamut). The definition of both color precision and gamut is accomplished with
a color encoding specification which assigns a digital code value to a location in a color
space.
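
A quick arithmetic sketch of the bits-per-pixel relationship described above: with b bits per channel and three color channels, a pixel uses 3b bits and can represent 2^(3b) distinct colors.

def colors_representable(bits_per_channel, channels=3):
    # Returns (bits per pixel, number of distinct representable colors).
    bits_per_pixel = bits_per_channel * channels
    return bits_per_pixel, 2 ** bits_per_pixel

print(colors_representable(8))   # (24, 16777216): classic 24-bit "true color"
print(colors_representable(10))  # (30, 1073741824): 10 bits per channel, as in some HEVC profiles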
