
F.4 CIT – Graphics

Introduction
Computer graphics can originate in digital or analogue form. We can use
computer software to draw a picture; such a picture is already in digital
format. Very often, however, the images come from the real world. In that case,
the brilliant pictures that we see are composed of different colours, which are
actually light waves of different frequencies. A light wave is analogue, so it has
to be digitized before the computer can process it.

Figure 1. Visible light spectrum


Once a digital image is formed, it is usually shown on the screen as a
2D array of pixels. A pixel is the smallest logical unit of a single colour in a
graphical image on a computer screen.

Digitization of Light
1. Sampling
Usually, we use a digital camera to capture a moment and a scanner to scan a
picture, a hand-written document or even a 3D object! Many digital cameras and
most scanners contain CCD (Charge-Coupled Device) arrays. A CCD
converts a light signal into an electrical signal for digitization.
The density of the CCDs determines the quality of the output image signal, and
is closely related to the device resolution. The device resolution of a scanner is
the number of dots (i.e. pixels on screen) that can be produced per unit
length, measured in dpi (dots per inch). Since the electrical signal from the
CCD is a sample of the original image, the device resolution can be
used as a measure of the sampling rate.
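The relation between device resolution and the number of samples can be sketched as follows; the 6 × 4 inch photograph and the 300 dpi setting are made-up figures for illustration:

```python
# Sketch: how device resolution (dpi) sets the number of samples taken.
def samples(length_in_inches, dpi):
    """Number of dots (pixels) produced along one axis."""
    return round(length_in_inches * dpi)

# A hypothetical 6 x 4 inch photograph scanned at 300 dpi:
width_px = samples(6, 300)
height_px = samples(4, 300)
print(f"{width_px} x {height_px} pixels")  # 1800 x 1200 pixels
```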

More about: Resolution


For a monitor, however, resolution means the number of pixels that
can be shown along the horizontal and vertical axes of the monitor. For example, a
screen resolution of 800×600 implies that there are 800 pixels on each horizontal
line and 600 pixels on each vertical line.
The screen resolution can be adjusted manually, and it is directly
proportional to the device resolution.
For example, a 17-inch monitor with a device resolution of 72 dpi gives a
screen resolution of about 832×624.
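The proportionality can be checked with a small sketch. The viewable-area figures below are assumptions chosen to be consistent with the 17-inch example, not manufacturer specifications:

```python
# Sketch of the direct proportion between device resolution and screen
# resolution: pixels along an axis = viewable inches x dpi.
viewable_width_in = 11.56   # assumed horizontal viewable size in inches
viewable_height_in = 8.67   # assumed vertical viewable size in inches
dpi = 72                    # device resolution

horizontal_px = round(viewable_width_in * dpi)
vertical_px = round(viewable_height_in * dpi)
print(f"screen resolution: {horizontal_px} x {vertical_px}")  # 832 x 624
```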

2. Quantization
The sample size determines the number of colours that can be distinguished. In
computer graphics, the sample size is called the colour depth (or bit depth). More
precisely, colour depth is the number of bits used to store the
colour of one pixel.
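Since each bit doubles the number of representable values, a colour depth of n bits gives 2ⁿ distinct colours. A minimal sketch:

```python
# Sketch: the number of distinct colours a given colour depth can represent.
def colour_count(bit_depth):
    # n bits can hold 2 ** n distinct values
    return 2 ** bit_depth

for depth in (1, 8, 16, 24):
    print(f"{depth:>2} bits -> {colour_count(depth):,} colours")
```

This reproduces the figures in the colour-depth table below: 1 bit gives 2 colours, 8 bits give 256, 16 bits give 65,536 and 24 bits give 16,777,216.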

Representation of Colours
The number of colours that can be used to store an image depends on the colour depth
and the colour model used.
Common Colour Models:
Black and White:
It uses 0 and 1 to represent white and black dots respectively. Brightness of a
colour is represented by the density of black and white dots.

Figure 2. Black and White Image (original image, B&W, B&W using pattern)
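The simplest way to reduce greyscale samples to black and white dots is a fixed threshold; a minimal sketch, using the 0 = white, 1 = black convention above (real converters use dithering patterns to simulate brightness, as in Figure 2):

```python
# Sketch: reducing greyscale samples (0-255, 0 = black) to black (1) and
# white (0) dots with a simple threshold. The threshold value 128 is an
# arbitrary choice for illustration.
def to_black_and_white(grey_row, threshold=128):
    # 1 = black dot, 0 = white dot, matching the convention above
    return [1 if g < threshold else 0 for g in grey_row]

print(to_black_and_white([0, 64, 128, 200, 255]))  # [1, 1, 0, 0, 0]
```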


Greyscale:
It normally uses 8 bits to store the shades of grey from white (255) to black
(0).

Figure 3. Greyscale Image (original image, greyscale image)


RGB:
It usually uses 24 bits to store the colour of one dot: 8 bits for the red level,
8 bits for the green level and 8 bits for the blue level. The colours are usually
stored in three 2D arrays. Each 2D array stores the matrix of one primary
colour and is called a channel.

Figure 4. The three channels of a colourful image (original image, red channel, blue channel, green channel)
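The three-channel layout can be sketched with a tiny 2 × 2 image; the pixel values are made up for illustration:

```python
# Sketch: a tiny 2 x 2 image stored as three 2D arrays (channels), one
# per primary colour.
red   = [[255, 0], [10, 200]]
green = [[0, 255], [10, 100]]
blue  = [[0, 0], [240, 50]]

# The colour of a pixel is read from the same position in each channel:
row, col = 1, 0
print((red[row][col], green[row][col], blue[row][col]))  # (10, 10, 240)
```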


Sometimes we use 32 bits for the RGB model instead of 24 bits. The extra 8
bits form a channel that stores the degree of transparency, called
the alpha channel.
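The four 8-bit channels of the 32-bit model can be packed into a single integer with shifts and masks; a sketch, where the channel order (alpha in the top byte) is an assumption — real pixel formats vary (ARGB, RGBA, BGRA, ...):

```python
# Sketch: packing four 8-bit channels into one 32-bit pixel value and
# recovering them with bit shifts and masks.
def pack_rgba(r, g, b, a):
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_rgba(pixel):
    return ((pixel >> 16) & 0xFF,   # red level
            (pixel >> 8) & 0xFF,    # green level
            pixel & 0xFF,           # blue level
            (pixel >> 24) & 0xFF)   # alpha (transparency)

pixel = pack_rgba(200, 100, 50, 255)
print(unpack_rgba(pixel))  # (200, 100, 50, 255)
```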
CMYK:
It is a subtractive colour model used mainly in printing: a colour is
specified by the levels of cyan, magenta, yellow and key (black) ink.
Colour Depth   Colour Model      No. of Colours   Remark
1 bit          Black and White   2
8 bit          Greyscale         256
8 bit          Indexed Colours   256
16 bit         Greyscale        65,536
16 bit         RGB              65,536            High Colour
24 bit         RGB              16,777,216        True Colour
32 bit         RGB              16,777,216        True Colour with alpha channel

Common Image File Types
