
16 August 2022

BCD (Binary Coded Decimal)

• BCD represents each denary digit as a 4-bit binary value.


• It is used because it is more convenient and more accurate for exact decimal values than certain other representations (eg: floating point).
• It can be used to represent values in a formatted manner. For example, the date in the format
DD/MM/YYYY can be represented in BCD for storing in the CMOS.
• For example, the value 23 can be represented in BCD as follows:
  2 = 0010 and 3 = 0011, so 23 = 0010 0011
• The denary value 95 can be converted into BCD as:
  9 = 1001 and 5 = 0101, so 95 = 1001 0101
• The BCD value 10001001 can be converted to denary as:
  1000 = 8 and 1001 = 9, so 10001001 = 89

• Recall that each denary digit is taken as a group of 4 bits to form a BCD value. So the following
conversion is impossible: 11001010 (as BCD)
• This is because, when grouped into 4s (1100 and 1010), each pattern exceeds the denary value 9,
so it is an invalid BCD value. However, it is a valid binary value that can be converted into
hexadecimal or denary.
• Despite its convenience and ease of use, BCD takes up more space than the same value stored in
pure binary.
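The conversions above can be sketched in Python (the function names are my own, not from the notes):

```python
def denary_to_bcd(n: int) -> str:
    """Encode each denary digit of n as a 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

def bcd_to_denary(bits: str) -> int:
    """Decode a BCD bit string back to denary; reject invalid groups."""
    bits = bits.replace(" ", "")
    digits = [int(bits[i:i + 4], 2) for i in range(0, len(bits), 4)]
    if any(d > 9 for d in digits):
        raise ValueError("invalid BCD: a 4-bit group exceeds 9")
    return int("".join(str(d) for d in digits))

print(denary_to_bcd(95))          # 1001 0101
print(bcd_to_denary("10001001"))  # 89
```

Passing 11001010 to `bcd_to_denary` raises an error, matching the point above: its groups 1100 and 1010 exceed 9.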
Character representation
• A character is a letter, digit or symbol that is represented in a computer.
• By default a computer system can represent a specific set of characters. This is known as a
character set.
• In the early days, different manufacturers had their own standards. Consider the following
example.

[Diagram: a Commodore 64 computer (serial port) connected to an EPSON LX-80 printer (parallel port)]

• The Commodore computer (European) had a character set that was not fully compatible with the
EPSON printer.
• To resolve this problem, the user had to connect the printer to the computer using a hardware
interface.
• This interface served two purposes:
◦ It provided a physical connection from the computer's serial port to the printer's parallel
port.
◦ It converts the Commodore’s character to an equivalent EPSON code so that the correct
character is printed.
• Thus there was an increased cost for users to work with various devices.
• As a result, computer hardware and software manufacturers started to set some standards so that
there would be greater compatibility.
• Among these standards are:
◦ ASCII
◦ Unicode

ASCII (American Standard Code for Information Interchange)

• ASCII was available in two modes.


◦ Standard ASCII – used 7 bits to represent each character. So the MSB was always 0.
◦ Extended ASCII – used 8 bits to represent each character.
• Following the standard made it easier to exchange files and programs between systems.
• Hardware manufacturers had an easier job since they didn’t have to worry about incompatibilities.
• However, the ASCII character set had to be chosen at the time a computer was formatted or prepared for
first-time use. It was difficult or impossible to switch between languages later.
• For example, a computer set for English using a European character set could not correctly read or
recognise data from a different language such as Arabic.
• So when setting up the computer, the user had to specify a code page for the base language. And with
8-bit extended ASCII, a maximum of 256 characters could be represented (2⁸ = 256).
• The characters are assigned a numeric value in ASCII. This is done considering the order of letters
in the alphabet.
Character   ASCII (Denary)   ASCII (Binary)
NULL        0                00000000
ENTER       13               00001101
ESC         27               00011011
SPACE       32               00100000
0           48               00110000
1           49               00110001
A           65               01000001
B           66               01000010
C           67               01000011
a           97               01100001
b           98               01100010

• The first 32 characters (codes 0 to 31) in ASCII are known as non-printable characters. That is, they
have control significance but are not printed on screen or paper.
• When a computer sorts data, it uses the character codes behind them.
• Note that NULL is character 0 and the symbol/digit 0 is character 48. They’re not the same.
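The points above can be checked with Python's built-in `ord` and `chr`, which map between characters and their codes; string sorting compares those codes:

```python
# ord gives the character code; chr goes the other way.
print(ord("A"), ord("a"), ord("0"))  # 65 97 48
print(chr(66), chr(98))              # B b

# Sorting compares codes, so uppercase (65-90) sorts before lowercase (97-122).
print(sorted(["banana", "Apple", "apple"]))  # ['Apple', 'apple', 'banana']

# NULL (code 0) and the digit character '0' (code 48) are not the same.
print(ord("\0"), ord("0"))           # 0 48
```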

Eg: The letter a is 97 in ASCII. Show the letter d in binary.
d is 3 letters after a, so d = 97 + 3 = 100, which is 01100100 in binary.
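The same offset arithmetic can be checked in Python:

```python
# d is 3 letters after a, so its code is 97 + 3 = 100.
code = 97 + 3
print(code, format(code, "08b"))  # 100 01100100
print(chr(code))                  # d
```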
