
Data representation

Data types

Text, numbers, pictures, audio, and video.
How are all these types of data represented?

Data is usually a mix of these types.

A uniform representation of all data types is used: input data is
transformed into this uniform representation when it enters the
computer, and it is used and stored in that form. This uniform
representation, or universal format, is called a bit pattern.
Bit

Binary Digit = Bit

The smallest unit of data that can be stored in a computer; it can be
either 0 or 1 (zero or one).
One bit represents the state of a device that can take one of two
states, for example an electrical switch (on or off).
Computers today use various two-state devices to store data.
A single bit cannot solve the data representation problem on its own:
if each piece of data (such as a character) could be represented by a
single 1 or 0, only one bit would be needed. However, we also need to
store larger numbers, text, graphics and other types of data.
This is where bit patterns come in.
To represent the different types of data, a bit pattern is used: a
sequence, or as it is sometimes called, a string of bits.
Example:
1000101010111111
This means that to store one 16-bit pattern you need 16 electronic
switches; to store 1,000 patterns of 16 bits each you need 16,000
bits, and so on.
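As a quick check of that arithmetic, a minimal Python sketch (the
variable names below are illustrative, not from the original text):

    # Total number of bits needed to store many fixed-length patterns.
    bits_per_pattern = 16      # one 16-bit pattern needs 16 switches
    number_of_patterns = 1000  # how many such patterns we want to keep
    total_bits = bits_per_pattern * number_of_patterns
    print(total_bits)          # 16000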
It is the responsibility of input/output devices or programs to
interpret a bit pattern as a number, text, or some other type of data.
Data is encoded when it enters the computer and decoded when
presented to the user.
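The same bit pattern can be interpreted in different ways depending on
which program reads it. A minimal Python sketch of this idea, using
the 16-bit pattern from the example above (the interpretations chosen
are illustrative):

    # One 16-bit pattern, read in two different ways.
    pattern = "1000101010111111"

    # Interpreted as a single unsigned integer:
    as_number = int(pattern, 2)         # 35519

    # Interpreted as two separate 8-bit values (bytes):
    first_byte = int(pattern[:8], 2)    # 138
    second_byte = int(pattern[8:], 2)   # 191

    print(as_number, first_byte, second_byte)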
Byte

A bit pattern with a length of 8 bits is called a BYTE.


This term is also used to measure the size of memory or other storage
devices.
A piece of text in any language is a sequence of symbols used to
represent an idea in that language (e.g. A, B, C, ..., Z, 0, 1, 2, 3, ..., 9).
Each symbol (from a human language) can be represented with a bit
pattern (machine language).
Codes

Different sets of bit patterns have been designed to represent text
symbols. Such a set is known as a code, and the process of
representing symbols with it is called coding (or encoding).
ASCII:
American Standard Code for Information Interchange.
This code uses seven bits per symbol, which means 128 (2^7) different
symbols can be defined. To make the size of each pattern 1 byte
(8 bits), ASCII bit patterns are extended with one extra zero on the
left (this 8-bit version is often called extended ASCII). Each pattern
then fits easily in one byte of memory.
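For illustration, a minimal Python sketch that prints a character's
ASCII code and its 8-bit pattern (the chosen character is just an
example):

    # Show the ASCII code and the 8-bit pattern of a character.
    symbol = "A"
    code = ord(symbol)             # 65 for "A"
    pattern = format(code, "08b")  # 7-bit code padded with a zero on the left
    print(symbol, code, pattern)   # A 65 01000001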
EBCDIC:
Extended Binary Coded Decimal Interchange Code, developed by IBM in
the early days of computing.
It uses eight-bit patterns, so it can represent up to 256 (2^8) symbols.
Unicode:
Faced with the need for a higher-capacity code, a coalition of
hardware and software manufacturers developed a code that uses 16 bits
and can represent up to 65,536 (2^16) symbols. Different sections of
the code are assigned to the symbols of different languages of the
world.
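A minimal Python sketch of this idea, printing the numeric code point
that Unicode assigns to symbols from different languages (the
characters chosen are just examples):

    # Unicode assigns each symbol a numeric code point.
    for symbol in ("A", "é", "Ω", "字"):
        print(symbol, hex(ord(symbol)))
    # A 0x41, é 0xe9, Ω 0x3a9, 字 0x5b57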
ISO:
The International Organization for Standardization has designed a code
that uses 32-bit patterns. This code can represent up to 4,294,967,296
(2^32) symbols, definitely enough to represent any symbol in the world
today.
Numbers

In a computer, numbers are represented using the binary (base-2)
system. In this system, a bit pattern (a sequence of zeros and ones)
represents a number.
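For example, a minimal Python sketch of converting a decimal number to
its bit pattern and back:

    # Convert a decimal number to a bit pattern and back again.
    number = 41
    pattern = format(number, "b")  # "101001"
    back = int(pattern, 2)         # 41 again
    print(number, pattern, back)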
Pictures

Pictures are represented in a computer by one of two methods: bitmap
graphics or vector graphics. In bitmap graphics the image is divided
into a grid of pixels and each pixel is stored as a bit pattern; in
vector graphics the image is decomposed into lines and curves that are
described mathematically.
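As an illustration of the bitmap method, a minimal Python sketch of a
tiny black-and-white image stored as bit patterns (the image and its
size are made up for this example):

    # A 4x4 black-and-white bitmap: 1 = black pixel, 0 = white pixel.
    image = [
        "0110",
        "1001",
        "1001",
        "0110",
    ]
    # Stored as one bit pattern per row; 16 bits in total for the picture.
    bits = "".join(image)
    print(bits, len(bits))  # 0110100110010110 16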
Audio

Audio is by nature analog information: it is continuous, not discrete.
To be stored in a computer it must first be converted into bit
patterns.

Video

Video is a representation of images (called frames) over time.

A movie is a series of frames displayed one after another to create
the illusion of movement. Each image or frame is converted into a
series of bit patterns and stored. The combination of these images
represents the video.
Hexadecimal notation

The bit pattern was designed to represent data as it is stored inside
a computer. However, bit patterns are difficult for people to
manipulate: writing a long series of 0s and 1s is tedious and prone to
error.
Hexadecimal notation is base 16. This means there are 16 symbols
(hexadecimal digits): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.
The importance of hexadecimal notation becomes apparent when
converting a bit pattern to this notation.
Each hexadecimal digit can represent four bits and four bits can be
represented by one hexadecimal digit.
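For example, a minimal Python sketch of converting the 16-bit pattern
used earlier into hexadecimal, four bits at a time:

    # Convert a bit pattern to hexadecimal, one hex digit per 4 bits.
    pattern = "1000101010111111"
    hex_digits = "".join(
        format(int(pattern[i:i + 4], 2), "X")  # each 4-bit group -> one hex digit
        for i in range(0, len(pattern), 4)
    )
    print(hex_digits)  # 8ABF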
