
A HIGH PERFORMANCE BINARY TO BCD CONVERTER
ABSTRACT

Decimal data processing applications have grown exponentially in recent years, thereby increasing the need for hardware support for decimal arithmetic. Binary to BCD conversion forms the basic building block of decimal digit multipliers.
CHAPTER 1

INTRODUCTION

Decimal arithmetic is receiving significant attention in commercial, business and internet-based applications, so providing hardware support in this direction has become necessary. Recognition of this need has led to specifications for decimal floating-point arithmetic being added to the draft revision of the IEEE-P754 standard. Decimal arithmetic operations are generally slow and complex, and their hardware occupies more area; they are typically implemented using iterative approaches or lookup-table-based reduction schemes. This has motivated improvements in BCD architectures, to enable faster and more compact arithmetic.

BCD is a decimal representation of a number directly coded in binary, digit by digit.

For example, the number (9321)10 = (1001 0011 0010 0001)BCD. Each digit of the decimal number is coded in binary and the results are concatenated to form the BCD representation of the decimal number. As any BCD digit lies in [0, 9] or [0000, 1001], multiplying two BCD digits can result in numbers in [0, 81]. All possible products can be represented in a 7-bit binary number, (81)10 or (1010001)2 being the highest. In BCD multiplication, where 4-bit binary multipliers are used to multiply two BCD numbers X and Y with digits Xi and Yj respectively, a partial product Pij is generated of the form (p6p5p4p3p2p1p0)2. Conversion of Pij from binary to a BCD number BiCj, where π(Xi, Yj) = 10·Bi + Cj, needs fast and efficient BCD converters. Binary to BCD conversion is generally inefficient if the binary number is very large. Hence the conversion can be done in parallel for every partial product after each BCD digit is multiplied, as shown in Figure 1, and the resulting BCD numbers after conversion can be added using BCD adders. Another alternative would be to compress the partial products of all binary terms in parallel and then convert the result to BCD, as done in some existing designs.
Fig. 1: Illustration of binary to BCD conversion in BCD multiplication
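For intuition, the 7-bit binary to two-digit BCD conversion applied to each partial product can be expressed behaviourally with a simple divide/modulo by ten. The sketch below is only a reference model under assumed module and signal names (it is not the optimized converter proposed later), but it is convenient as a golden model when verifying a faster implementation in a test bench:

// Behavioural reference model (assumed names):
// converts a 7-bit partial product into two packed BCD digits.
module bin7_to_bcd_ref(
    input  [6:0] bin,   // binary value in the range 0..81
    output [7:0] bcd    // {tens digit, units digit} in packed BCD
);
    wire [6:0] tens = bin / 7'd10;  // higher significant BCD digit
    wire [6:0] ones = bin % 7'd10;  // lower significant BCD digit

    assign bcd = {tens[3:0], ones[3:0]};
endmodule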
Binary coded decimal

In computing and electronic systems, binary-coded decimal (BCD) is a class


of binary encodings of decimal numbers where each decimal digit is represented by a
fixed number of bits, usually four or eight, although other sizes (such as six bits) have
been used historically. Special bit patterns are sometimes used for a sign or for other
indications (e.g., error or overflow).

In byte-oriented systems (i.e. most modern computers), the


term uncompressed BCD usually implies a full byte for each digit (often including a
sign), whereas packed BCD typically encodes two decimal digits within a single byte by
taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise 4-bit encoding may, however, vary for technical reasons; see Excess-3, for instance.

BCD's main virtue is its more accurate representation and rounding of decimal
quantities as well as an ease of conversion into human-readable representations, in
comparison to binary positional systems. BCD's principal drawbacks are a small
increase in the complexity of the circuits needed to implement basic arithmetic and a
slightly less dense storage.

BCD was used in many early decimal computers. Although BCD is not as widely
used as in the past, decimal fixed-point and floating-point formats are still important
and continue to be used in financial, commercial, and industrial computing, where
subtle conversion and fractional rounding errors that are inherent in floating point binary
representations cannot be tolerated.

BCD takes advantage of the fact that any one decimal numeral can be
represented by a four bit pattern. The most obvious way of encoding digits is "natural
BCD" (NBCD), where each decimal digit is represented by its corresponding four-bit
binary value, as shown in the following table. This is also called "8421" encoding.

Decimal digit    BCD (8421)

0                0000
1                0001
2                0010
3                0011
4                0100
5                0101
6                0110
7                0111
8                1000
9                1001

Other encodings are also used, including so-called "4221" and "7421" — named
after the weighting used for the bits — and "excess-3". For example, the BCD digit 6,
'0110'b in 8421 notation, is '1100'b in 4221 (two encodings are possible), '0110'b in 7421,
and '1001'b (6+3=9) in excess-3.

As most computers deal with data in 8-bit bytes, it is possible to use one of the
following methods to encode a BCD number:

• Uncompressed: each numeral is encoded into one byte, with four bits representing the numeral and the remaining bits having no significance.

• Packed: two numerals are encoded into a single byte, with one numeral in the least significant nibble (bits 0 through 3) and the other numeral in the most significant nibble (bits 4 through 7), as illustrated in the sketch below.
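As a small illustration of the packed format (module and signal names here are ours, for illustration only), packing two decimal digits into one byte and recovering them again is just nibble concatenation and slicing:

// Packed BCD: two digits per byte (illustrative sketch, assumed names).
module bcd_pack_unpack(
    input  [3:0] tens, units,    // each digit assumed to be in 0..9
    output [7:0] packed_byte,    // tens in bits 7..4, units in bits 3..0
    output [3:0] hi_digit,       // digit recovered from the upper nibble
    output [3:0] lo_digit        // digit recovered from the lower nibble
);
    assign packed_byte = {tens, units};
    assign hi_digit    = packed_byte[7:4];
    assign lo_digit    = packed_byte[3:0];
endmodule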

Binary Coded Decimal (BCD) Systems

The BCD system is employed by computer systems to encode decimal numbers into an equivalent binary form. This is generally accomplished by encoding each digit of the decimal number into its equivalent binary sequence. The main advantage of the BCD system is that converting decimal numbers to and from the encoded form is fast and simple compared with conversion to and from the pure binary system.
4-Bit Binary Coded Decimal (BCD) Systems

The 4-bit BCD system is usually employed by the computer systems to represent and
process numerical data only. In the 4-bit BCD system, each digit of the decimal number
is encoded to its corresponding 4-bit binary sequence. The two most popular 4-bit BCD
systems are:

• Weighted 4-bit BCD code

• Excess-3 (XS-3) BCD code

Weighted 4-Bit BCD Code

The weighted 4-bit BCD code is more commonly known as the 8421 weighted code. It is called a weighted code because it encodes decimal digits into binary by taking positional weighting into consideration. In this code, each decimal digit is encoded into a 4-bit binary number in which the bits from left to right have the weights 8, 4, 2, and 1, respectively.

Excess-3 BCD Code

The Excess-3 (XS-3) BCD code does not take positional weights into consideration while converting decimal numbers to the 4-bit BCD system; therefore it is a non-weighted BCD code. The function of the XS-3 code is to transform decimal numbers into their corresponding 4-bit codes. In this code, the decimal number is transformed by first adding 3 to each digit of the number and then converting the excess digits so obtained into their corresponding 8421 BCD code. The XS-3 code is therefore closely related to the 8421 BCD code in its operation.
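A minimal sketch of this digit-wise encoding (the module name is ours): each decimal digit is simply offset by 3 and then written in ordinary 8421 form.

// Excess-3 encoder for a single decimal digit (illustrative sketch).
module xs3_encoder(
    input  [3:0] digit,  // decimal digit, assumed to be in 0..9
    output [3:0] xs3     // Excess-3 code = digit + 3
);
    assign xs3 = digit + 4'd3;   // e.g. 6 (0110) becomes 9 (1001) in XS-3
endmodule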
Binary System
The binary system uses base 2 to represent different values and is therefore also known as the base-2 system. As this system uses base 2, only two symbols are available for representing values: 0 and 1, which are also known as bits in computer terminology. Using the binary system, computer systems can store and process every type of data in terms of 0s and 1s only.
The following are some of the technical terms used in the binary system:
• Bit: the smallest unit of information used in a computer system. It can have either the value 0 or 1. The name is derived from the words BInary digiT.
• Nibble: a combination of 4 bits.
• Byte: a combination of 8 bits.
• Word: a combination of 16 bits.
• Double word: a combination of 32 bits.
• Kilobyte (KB): used to represent 1024 bytes of information.
• Megabyte (MB): used to represent 1024 KB of information.
• Gigabyte (GB): used to represent 1024 MB of information.

We can determine the weight associated with each bit in a given binary number in a similar manner to the decimal system. In the binary system, the weight of any bit can be determined by raising 2 to a power equal to the position of the bit in the number. Consider, for example, the binary number 1011.101.
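Expanding this example bit by bit gives

\[
1011.101_2 = 1\times 2^{3} + 0\times 2^{2} + 1\times 2^{1} + 1\times 2^{0} + 1\times 2^{-1} + 0\times 2^{-2} + 1\times 2^{-3} = 11.625_{10}
\]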

BINARY CODE

A binary code represents text or computer processor instructions using


the binary number system's two binary digits, 0 and 1. A binary code assigns a bit string
to each symbol or instruction. For example, a binary string of eight binary digits (bits)
can represent any of 256 possible values and can therefore correspond to a variety of
different symbols, letters or instructions.

In computing and telecommunication, binary codes are used for various methods
of encoding data, such as character strings, into bit strings. Those methods may use
fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or
other character is represented by a bit string of the same length; that bit string,
interpreted as a binary number, is usually displayed in code tables
in octal, decimal or hexadecimal notation. There are many character sets and
many character encodings for them.

A bit string, interpreted as a binary number, can be translated into a decimal


number. For example, the lower case a, if represented by the bit string 01100001 (as it
is in the standard ASCII code), can also be represented as the decimal number 97.

BCD IN ELECTRONICS

BCD is very common in electronic systems where a numeric value is to be


displayed, especially in systems consisting solely of digital logic, and not containing a
microprocessor. By utilizing BCD, the manipulation of numerical data for display can be
greatly simplified by treating each digit as a separate single sub-circuit. This matches
much more closely the physical reality of display hardware—a designer might choose to
use a series of separate identical seven-segment displays to build a metering circuit, for
example. If the numeric quantity were stored and manipulated as pure binary,
interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary.

The same argument applies when hardware of this type uses an embedded
microcontroller or other small processor. Often, smaller code results when representing
numbers internally in BCD format, since a conversion from or to binary representation
can be expensive on such limited processors. For these applications, some small
processors feature BCD arithmetic modes, which assist when writing routines that
manipulate BCD quantities.
LITERATURE SURVEY

1. Arazi, B., and Naccache, D.: 'Binary-to-decimal conversion based on the divisibility of 2^8 − 1 by 5', Electron. Lett., 1992.
This paper notes that the BCD-digit multiplier can serve as the key building block of a decimal multiplier, irrespective of the degree of parallelism. A BCD-digit multiplier produces a two-BCD-digit product from two input BCD digits, and a novel design for the latter is provided, showing some advantages in BCD multiplier implementations.

2. Schmookler, M.: 'High-speed binary-to-decimal conversion', IEEE Trans. Comput.,


1968
This note describes several methods of performing fast, efficient, binary-to-decimal
conversion. With a modest amount of circuitry, an order of magnitude speed
improvement is obtained. This achievement offers a unique advantage to general-
purpose computers requiring special hardware to translate between binary and
decimal numbering systems.
3. Rhyne, V.T.: 'Serial binary-to-decimal and decimal-to-binary conversion', IEEE Trans. Comput., 1970
Over ten years ago, Couleur described a serial binary/decimal conversion algorithm,
the BIDEC method. This was a two-step process involving a shift followed by a parallel
modification of the data being converted. With the integrated-circuit J-K flip-flop, the
implementation of this two-step process requires an excessive amount of control logic.
This paper presents a one-step conversion algorithm that is suitable for binary-to-
decimal and decimal-to-binary conversion. A general design procedure for formulating
the conversion registers as a present-state/next-state counter problem is given, along
with several examples of the application of the one-step algorithm. The advantages of
this new algorithm include low cost, faster operation, and hardware modularity.

4. Jaberipur, G.; Kaivani, A.: 'Binary-coded decimal digit multipliers', IET Computers and Digital Techniques, Volume 1, Issue 4, July 2007.
This paper describes how, with the growing popularity of decimal computer arithmetic in scientific, commercial, financial and Internet-based applications, hardware realization of decimal arithmetic algorithms is gaining more importance. Hardware
decimal arithmetic units now serve as an integral part of some recently
commercialized general purpose processors, where complex decimal arithmetic
operations, such as multiplication, have been realized by rather slow iterative
hardware algorithms. However, with the rapid advances in very large scale
integration (VLSI) technology, semi- and fully parallel hardware decimal
multiplication units are expected to evolve soon. The dominant representation for
decimal digits is the binary-coded decimal (BCD) encoding. The BCD-digit multiplier
can serve as the key building block of a decimal multiplier, irrespective of the degree
of parallelism. A BCD-digit multiplier produces a two-BCD digit product from two
input BCD digits. We provide a novel design for the latter, showing some advantages
in BCD multiplier implementations.
5. James, R.K.; Shahana, T.K.; Jacob, K.P.; Sasi, S.: 'Decimal multiplication using compact BCD multiplier', International Conference on Electronic Design (ICED 2008), 1-3 Dec. 2008.
This paper describes how decimal multiplication is an integral part of financial, commercial, and internet-based computations. The basic building block of a decimal multiplier is a single-digit multiplier. It accepts two Binary Coded Decimal (BCD) inputs and gives a product in the range [0, 81] represented by two BCD digits. A novel design for single-digit decimal multiplication that reduces the critical path delay and area is proposed in this research. Out of the possible 256 combinations for the 8-bit input, only one hundred combinations are valid BCD inputs. Of the hundred valid combinations, only four combinations require 4 × 4 multiplication, 64 combinations need 3 × 3 multiplication, and the remaining 32 combinations use either 3 × 4 or 4 × 3
multiplication. The proposed design makes use of this property. This design leads to
more regular VLSI implementation, and does not require special registers for storing
easy multiples. This is a fully parallel multiplier utilizing only combinational logic, and is
extended to a Hex/Decimal multiplier that gives either a decimal output or a binary
output. The accumulation of partial products generated using single digit multipliers is
done by an array of multi-operand BCD adders for an (n-digit × n-digit) multiplication.

CHAPTER
PROPOSED ALGORITHM

The main objective of the proposed algorithm is to perform highly efficient fixed-bit binary to BCD conversion in terms of delay, power and area. As mentioned earlier, most
of the recently proposed multipliers use 7-bit binary to 8-bit/2-digit BCD converters. The
proposed algorithm has been specifically designed for such converters.
Let p6p5p4p3p2p1p0 be the seven binary bits to be converted into two BCD digits.
To convert these binary bits into 2-digit BCD we split the binary number into two parts,
the first part contains the lower significant bits (LSBs) p3, p2, p1 and p0 while the
second part contains the remaining higher significant bits (HSBs) p6, p5 and p4. The
lower significant part (LSBs) has the same weight as that of a BCD digit and can be
directly used to represent a BCD digit. The only exception arises when p3p2p1p0
exceeds (1001)2 or (9)10. To convert the LSBs into a valid BCD number we check
whether p3p2p1p0 exceeds (1001)2, and if it does, we add (0110)2 to it. This procedure
of adding (0110)2 whenever the number exceeds (1001)2 is called correction in BCD
arithmetic. The carry obtained from this procedure is added to the higher significant
BCD digit calculated from the HSBs of the original binary number. The HSBs not only
contribute to the higher significant BCD digit but also to the lower significant BCD digit.
These contributions of HSBs towards the lower significant digit are added after BCD
correction. The resulting sum is then checked for the case (1001)2 and correction is
done if needed to obtain the final lower significant BCD digit.
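The correction rule just described (add (0110)2 whenever a 4-bit value exceeds (1001)2) can be written compactly as a small combinational block. The sketch below is a behavioural illustration under our own names, not the gate-level BCD Correction block of the proposed architecture:

// BCD digit correction: if the 4-bit value exceeds 9, add 6 and signal a carry
// into the next decimal digit (behavioural sketch only).
module bcd_digit_correct(
    input  [3:0] raw,     // uncorrected 4-bit value, 0..15
    output [3:0] digit,   // corrected BCD digit, 0..9
    output       carry    // carry into the higher significant digit
);
    assign carry = (raw > 4'd9);
    assign digit = carry ? (raw + 4'd6) : raw;  // the sum wraps modulo 16, leaving the low digit
endmodule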
A possible carry from the above operation is added to the higher significant digit, resulting in the final higher significant BCD digit. When two BCD digits are multiplied, only six combinations of p6, p5 and p4 (HSBs) are possible: 000, 001, 010, 011, 100 and 101. Each of these combinations has a different contribution towards the lower and higher significant BCD digits. This contribution can easily be calculated by evaluating the weight of each pattern, which is p6×2^6 + p5×2^5 + p4×2^4. The contribution of each of these patterns towards the lower and higher BCD digits is shown in the table.

Table: Contribution of the HSBs towards the BCD digits
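Since the table itself is not reproduced here, the contributions implied by these weights follow directly from 16 = 1×10 + 6, 32 = 3×10 + 2, 48 = 4×10 + 8, 64 = 6×10 + 4 and 80 = 8×10 + 0. The behavioural sketch below (our own rendering, not the optimized logic of the architecture) lists them explicitly:

// Contribution of the HSBs {p6,p5,p4} towards the higher and lower BCD digits.
module hsb_contribution(
    input      [2:0] hsb,           // {p6, p5, p4}
    output reg [3:0] high_contrib,  // added to the higher significant digit
    output reg [3:0] low_contrib    // added to the lower significant digit
);
    always @(*) begin
        case (hsb)
            3'b000: {high_contrib, low_contrib} = {4'd0, 4'd0};  // weight  0
            3'b001: {high_contrib, low_contrib} = {4'd1, 4'd6};  // weight 16
            3'b010: {high_contrib, low_contrib} = {4'd3, 4'd2};  // weight 32
            3'b011: {high_contrib, low_contrib} = {4'd4, 4'd8};  // weight 48
            3'b100: {high_contrib, low_contrib} = {4'd6, 4'd4};  // weight 64
            3'b101: {high_contrib, low_contrib} = {4'd8, 4'd0};  // weight 80
            default: {high_contrib, low_contrib} = 8'd0;         // 110/111 never occur (product <= 81)
        endcase
    end
endmodule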


The figure below shows an example of the algorithm for the number (111111)2 or (63)10 or (0110 0011)BCD.
Figure :Proposed Algorithm for BCD conversion
PROPOSED ARCHITECTURE

In designing the architecture, maximum use has been made of the fact that only a limited and small number of outcomes is possible for the conversion, in order to reduce delay, power and area. The figure shows the proposed architecture.
Figure: Proposed Architecture

p6p5p4p3p2p1p0 are the binary bits to be converted into the BCD bits z7z6z5z4z3z2z1z0. p6, p5 and p4 are the HSBs, while p3, p2, p1 and p0 are the LSBs. z0 is the same as p0, and hence no operation is performed on p0. {p3, p2, p1} are used to check whether the LSBs are greater than (1001)2 using equation (1), and are sent to the BCD Correction block.

C1 = p3·(p2 + p1)    (1)

Whenever C1 is high, the BCD Correction block adds 011 to the input bits. The figure shows the implementation of the BCD Correction block.

Figure: BCD Correction


In parallel, the HSBs along with p3 are fed to a simple logic block known as the Contribution Generator, which produces the higher significant BCD digit. The logic implemented by the Contribution Generator is as follows:

t3 = p6·p4    (2)
t2 = p5·(p4 + p3) + p6·p4'    (3)
t1 = (p5 + p6)·p4'    (4)
t0 = p6'·p5'·p4 + p5·p4'    (5)

C1 is the carry from the lower significant digit, so it is added to the higher significant digit t3t2t1t0. It is found that very few cases lead to propagation of the incoming carry from t1 to t2. Hence, we take advantage of this situation and implement {t3, t2} in combinational logic, removing the need to add C1 to these terms and thus saving hardware and complexity. A 2-bit One Adder, as shown in the figure, is used to add C1 to t0 and t1. There is a possibility of carry generation when the contributions of the HSBs are added to the corrected LSBs (a3, a2 and a1). This carry is calculated beforehand by a Carry Generator block using C1 and the input bits p6 to p1. The logic implemented by the Carry Generator is given by the equation below:

C2 = C1'·(p4·(p3 + p2) + p3·p5) + p6·p3 + p4·p3·p1    (6)
Figure: 2-bit One Adder
C2 is also added to the result of the first 2-bit One Adder using another 2-bit One Adder, and the final higher significant digit is obtained. {t3, t2} are equal to z7 and z6 respectively and are directly available from the Contribution Generator block.

Contribution of HSBs towards lower significant BCD digit is fixed and unique and
is known once HSBs are known. We have implemented four distinct adder units which
add only specified values to the inputs in parallel according to the contributions in Table.
The different adder blocks, +1, +2, +3 and +4 (shown in Figure) add 001, 010, 011 and
100 to the input bits respectively. Adder blocks take the corrected LSBs (a3, a2, a1) as
inputs and add specific numbers to them. The appropriate result is then obtained
through a multiplexer whose selection bits are p6, p5 and p4 (HSBs). The result from
the multiplexer is then fed to the BCD Correction block, which takes C2 as input to decide whether correction has to be done or not. The results obtained from the BCD Correction block are z3, z2 and z1, which,
along with z0, form the final lower significant BCD digit.
Figure: +1, +2, +3, +4 adder blocks

CHAPTER

VLSI TECHNOLOGY

Gone are the days when huge computers made of vacuum tubes sat humming in
entire dedicated rooms and could do about 360 multiplications of 10 digit numbers in a
second. Though they were heralded as the fastest computing machines of that time,
they surely don't stand a chance when compared to modern-day machines. Modern-day computers are getting smaller, faster, cheaper and more power-efficient with every passing second. But what drove this change? The whole domain of computing ushered into a new dawn of electronic miniaturization with the advent of the semiconductor transistor by Bardeen (1947-48) and then the bipolar transistor by Shockley (1949) at Bell Laboratories.
Fig.: A comparison: the first planar IC (1961) and an Intel Nehalem quad-core die

Since the invention of the first IC (Integrated Circuit) in the form of a Flip Flop by
Jack Kilby in 1958, our ability to pack more and more transistors onto a single chip has
doubled roughly every 18 months, in accordance with Moore's Law. Such
exponential development had never been seen in any other field and it still continues to
be a major area of research work.

HISTORY & EVOLUTION OF VLSI

The development of microelectronics spans a time which is even less than the average life expectancy of a human, and yet it has seen as many as four generations. The early 60s saw the low-density fabrication processes classified under Small Scale Integration (SSI), in which the transistor count was limited to about 10. This rapidly gave way to Medium Scale Integration (MSI) in the late 60s, when around 100 transistors could be placed on a single chip.

It was the time when the cost of research began to decline and private firms
started entering the competition in contrast to the earlier years where the main burden
was borne by the military. Transistor-Transistor logic (TTL) offering higher integration
densities outlasted other IC families like ECL and became the basis of the first
integrated circuit revolution. It was the production of this family that gave impetus to
semiconductor giants like Texas Instruments, Fairchild and National Semiconductors.
The early seventies marked the growth of the transistor count to about 1000 per chip, which came to be called Large Scale Integration (LSI).

By the mid-eighties, the transistor count on a single chip had grown far beyond this, and hence came the age of Very Large Scale Integration, or VLSI. Though many improvements have been made and the transistor count is still rising, further names of generations like ULSI are generally avoided. It was during this time that TTL lost the battle to the MOS family, owing to the same problems that had pushed vacuum tubes into obsolescence: power dissipation and the limit it imposed on the number of gates that could be placed on a single die.

The second age of the integrated circuit revolution started with the introduction of the first microprocessor, the 4004, by Intel in 1971, followed by the 8080 in 1974. Today many
companies like Texas Instruments, Infineon, Alliance Semiconductors, Cadence,
Synopsys, Celox Networks, Cisco, Micron Tech, National Semiconductors, ST
Microelectronics, Qualcomm, Lucent, Mentor Graphics, Analog Devices, Intel, Philips,
Motorola and many other firms have been established and are dedicated to the various
fields in "VLSI" like Programmable Logic Devices, Hardware Descriptive Languages,
Design tools, Embedded Systems etc.

CHALLENGES

As microprocessors become more complex due to technology scaling,


microprocessor designers have encountered several challenges which force them to
think beyond the design plane, and look ahead to post-silicon:

Power usage/Heat dissipation – As threshold voltages have ceased to scale with


advancing process technology, dynamic power dissipation has not scaled
proportionally. Maintaining logic complexity when scaling the design down only means
that the power dissipation per area will go up. This has given rise to techniques such as
dynamic voltage and frequency scaling (DVFS) to minimize overall power.

Process variation – As photolithography techniques tend closer to the


fundamental laws of optics, achieving high accuracy in doping concentrations and
etched wires is becoming more difficult and prone to errors due to variation. Designers
now must simulate across multiple fabrication process corners before a chip is certified
ready for production.

Stricter design rules – Due to lithography and etch issues with scaling, design
rules for layout have become increasingly stringent. Designers must keep ever more of
these rules in mind while laying out custom circuits. The overhead for custom design is
now reaching a tipping point, with many design houses opting to switch to electronic
design automation (EDA) tools to automate their design process.

Timing/design closure – As clock frequencies tend to scale up, designers are finding
it more difficult to distribute and maintain low clock skew between these high frequency
clocks across the entire chip. This has led to a rising interest in multicore and
multiprocessor architectures, since an overall speedup can be obtained by lowering the
clock frequency and distributing processing.

CHAPTER

HARDWARE DESCRIPTION LANGUAGE

Introduction of HDL’s

Hardware description languages (HDLs) were first used mainly to describe logic equations to be realized in programmable logic devices (PLDs). In the 1990s, HDL usage by digital system designers accelerated as PLDs, CPLDs, and FPGAs became inexpensive and commonplace. Designers turned to HDLs as a means to design individual modules within a system-on-chip. The important innovations in HDLs occurred in the mid-1980s with the development of VHDL and Verilog HDL, which became popular. There are several steps in an HDL-based design process, often called the design flow. These steps are applicable to any HDL-based design process and are shown in the figure.

Figure : Steps in an HDL Based Design Flow

In any design, specifications are written first. Specifications describe the


functionality, interface and overall architecture of the digital circuit to be designed. The
next step is the actual writing of HDL code for modules, their interfaces and their
internal details. After the code has been written, we have to compile it; this step is
known as compilation. Here the HDL compiler analyzes the code for syntax errors and
also checks it for compatibility with other modules on which it relies.

The most satisfying step is simulation or verification. The HDL simulator allows us to define and apply inputs to the design and to observe its outputs without ever having to build the physical circuit. There are at least two dimensions to verification. In timing verification, we verify that the circuit operation, including estimated delays, meets the setup, hold and other timing requirements of sequential devices such as flip-flops. In functional verification, we study the circuit's logical operation independently of timing considerations; gate delays and other timing parameters are considered to be zero.

After the verification step, the back-end stage is carried out in three basic steps. The first is synthesis, which converts the HDL description into a set of primitives or components that can be assembled in the target technology; it may generate a list of gates and a netlist that specifies how they are interconnected.

In the fitter step, a fitter maps the synthesized components onto the available device resources. This may mean selecting microcells, or laying down individual gates in a pattern and finding ways to connect them within the physical constraints of the FPGA or ASIC die; this is called the place-and-route process.

The final step is post-fitting verification of the fitted circuit. It is only at this stage that the actual circuit delays due to wire lengths, electrical loading, and other factors can be calculated with reasonable precision.
HDL Tool Suites

An HDL tool suite really comprises several different tools with their own names and purposes:

• A text editor allows you to write, edit and save an HDL program. It often contains HDL-specific features, such as recognizing specific file-name extensions and recognizing HDL reserved words and comments and displaying them in different colors.
• The compiler is responsible for parsing the HDL program, finding syntax errors and figuring out what the program really says.
• A synthesizer or synthesis tool targets the design to a specific hardware technology, such as an FPGA or ASIC.
• The simulator runs the specified input sequence on the described hardware and determines the values of the hardware's internal signals and its outputs over a specified period of time.
• The output of the simulator can include waveforms to be viewed using the waveform editor.
• A schematic viewer may create a schematic diagram corresponding to an HDL program, based on the intermediate-language output of the compiler.
• A translator targets the compiler's intermediate-language output to a real device such as an FPGA or CPLD.
• A timing analyser calculates the delays through some or all of the signal paths in the final chip and produces a report showing the worst-case paths and their delays.
VHDL
VHDL Hardware Description Language

"VHDL" stands for "VHSIC hardware description language", and "VHSIC" in turn stands for Very High Speed Integrated Circuit.

VHDL Advantages

The key advantage of VHDL, when used for systems design, is that it allows the
behavior of the required system to be described (modelled) and verified (simulated)
before synthesis tools translate the design into real hardware (gates and wires).

Another benefit is that VHDL allows the description of a concurrent system.


VHDL is a dataflow language, unlike procedural computing languages such as BASIC, C,
and assembly code, which all run sequentially, one instruction at a time.

A VHDL project is multipurpose: once created, a calculation block can be used in many other projects, and many structural and functional block parameters can be tuned (capacity parameters, memory size, element base, block composition and interconnection structure).

A VHDL project is portable: a computing device project created for one element base can be ported to another element base, for example VLSI with various technologies. Concurrency, timing and clocking can all be modelled.

VHDL handles asynchronous as well as synchronous sequential-circuit


structures. The logical operation and timing behaviour of a design can be simulated.

VHDL allows for various design methodologies, both top-down and bottom-up, and is very flexible in its approach to describing hardware.
VHDL History and Features

In the mid-1980s, the U.S. Department of Defense (DoD) and the IEEE sponsored the development of a highly capable hardware description language called VHDL; the standard was extended in 1993 and again in 2002. Some of the features of VHDL are:

• Packages are used to provide a collection of common declarations, constants, and/or subprograms to entities and architectures.
• Generics provide a method for communicating information to an architecture from the external environment. They are passed through the entity construct.
• Ports provide the mechanism for a device to communicate with its environment. A port declaration defines the names, types, directions and possible default values for the signals in a component's interface.
• A configuration is an instruction used to bind component instances to design entities. In it, we specify which real entity interface and corresponding architecture body should be used for any component instance.
• A bus is a group of signals or a particular method of communication.
• A driver is a source for a signal, in that it provides values to be applied to the signal.
• An attribute is additional information attached to a VHDL object.
VHDL Structure

The VHDL structure or model is shown in the figure. A single component model is composed of one entity and one or more architectures. The entity represents the interface specification (I/O) of the component. It defines the component's external view, sometimes referred to as its "pins". The architecture(s) describe(s) the internal implementation of an entity.
Figure: VHDL Structure

Types of Architectures

There are three general types of architectures. A VHDL Model can be created at
different abstraction levels (behavioral, dataflow, structural), according to a refinement of the starting specification.

Dataflow Modeling

Several additional concurrent statements allow VHDL to describe a circuit in


terms of the flow of data and operations on it within the circuit. This style is called a
dataflow description or dataflow design. Concurrent statements execute when data is
available on their inputs. These statements occur in any order within the architecture.
A common method is to use logic equations to develop a dataflow description.

Structural Modeling

A structural description can be created from previously described components. These gates can be pulled from a library of parts. A VHDL architecture that uses components
is often called a structural description or structural design, because it defines the
precise interconnection structure of signals and entities that realize the entity.

Behavioral Modeling

In a behavioral description, the functional and possibly the timing characteristics are described using VHDL concurrent statements and processes. A process is a collection of sequential statements that executes in parallel with other concurrent statements and processes. Using a process, one can specify a complex interaction of signals and events in a way that executes in essentially zero simulated time during simulation, and that gives rise to a synthesized combinational or sequential circuit that performs the modeled operation directly.

A VHDL process statement can be used anywhere that a concurrent statement can be used. A process statement is introduced by the keyword process. A VHDL process is always either running or suspended. The list of signals in the process definition is called the sensitivity list. All statements within a process execute in sequential order until the process is suspended by a wait statement.

VERILOG

Verilog HDL is a Hardware Description Language (HDL). A Hardware Description


Language is a language used to describe a digital system, for example, a computer or a
component of a computer. One may describe a digital system at several levels. For
example, an HDL might describe the layout of the wires, resistors and transistors on an
Integrated Circuit (IC) chip, i.e., the switch level or, it might describe the logical gates
and flip flops in a digital system, i.e., the gate level. An even higher level describes the
registers and the transfers of vectors of information between registers. This is called
the Register Transfer Level (RTL). Verilog supports all of these levels. However, this
handout focuses on only the portions of Verilog which support the RTL level.

Verilog is one of the two major Hardware Description Languages (HDL) used by
hardware designers in industry and academia. VHDL is the other one. The industry is
currently split on which is better. Many feel that Verilog is easier to learn and use than
VHDL. As one hardware designer puts it, "I hope the competition uses VHDL." VHDL was
made an IEEE Standard in 1987, while Verilog became an IEEE Standard (IEEE 1364) in 1995.

History of Verilog

Verilog was introduced in 1985 by Gateway Design System Corporation, now a


part of Cadence Design Systems, Inc.'s Systems Division. Until May, 1990, with the
formation of Open Verilog International (OVI), Verilog HDL was a proprietary language of
Cadence. Cadence was motivated to open the language to the Public Domain with the
expectation that the market for Verilog HDL-related software products would grow more
rapidly with broader acceptance of the language. Cadence realized that Verilog HDL
users wanted other software and service companies to embrace the language and
develop Verilog-supported design tools.

Verilog HDL allows a hardware designer to describe designs at a high level of


abstraction such as at the architectural or behavioral level as well as the lower
implementation levels (i.e., gate and switch levels) leading to Very Large Scale
Integration (VLSI) Integrated Circuits (IC) layouts and chip fabrication. A primary use of
HDLs is the simulation of designs before the designer must commit to fabrication. This
handout does not cover all of Verilog HDL but focuses on the use of Verilog HDL at the
architectural or behavioral levels. The handout emphasizes design at the Register
Transfer Level (RTL).

Importance of HDLs

HDLs have many advantages compared to traditional schematic-based design.

• Designs can be described at a very abstract level by use of HDLs. Designers can
write their RTL description without choosing a specific fabrication technology.
Logic synthesis tools can automatically convert the design to any fabrication
technology. If a new technology emerges, designers do not need to redesign their
circuit. They simply input the RTL description to the logic synthesis tool and
create a new gate-level netlist, using the new fabrication technology. The logic
synthesis tool will optimize the circuit in area and timing for the new technology.
• By describing designs in HDLs, functional verification of the design can be done
early in the design cycle. Since designers work at the RTL level, they can optimize
and modify the RTL description until it meets the desired functionality. Most
design bugs are eliminated at this point. This cuts down design cycle time
significantly because the probability of hitting a functional bug at a later time in
the gate-level netlist or physical layout is minimized.
• Designing with HDLs is analogous to computer programming. A textual
description with comments is an easier way to develop and debug circuits. This
also provides a concise representation of the design, compared to gate-level
schematics. Gate-level schematics are almost incomprehensible for very
complex designs.

HDL-based design is here to stay. With rapidly increasing complexities of digital


circuits and increasingly sophisticated EDA tools, HDLs are now the dominant method
for large digital designs. No digital circuit designer can afford to ignore HDL-based
design.
Chapter

XILINX ISE

The Xilinx ISE tools allow you to use schematics, hardware description languages (HDLs), and specially designed modules in a number of ways. Schematics are drawn by using symbols for components and lines for wires. The Xilinx tools are a suite of software tools used for the design of digital circuits implemented using a Xilinx Field Programmable Gate Array (FPGA) or Complex Programmable Logic Device (CPLD).

The design procedure consists of (a) design entry, (b) synthesis and implementation of the design, (c) functional simulation and (d) testing and verification. Digital designs can be entered in various ways using the above CAD tools: using a schematic, a hardware description language (HDL) such as Verilog or VHDL, or a combination of both. In this project we use only the design flow that involves Verilog HDL.

DESIGN ENTRY

Design entry is the first step in the ISE design flow. During design entry, you
create your source files based on your design objectives. You can create your top-level
design file using a Hardware Description Language (HDL), such as VHDL, Verilog, or
ABEL, or using a schematic. You can use multiple formats for the lower-level source files
in your design.

SYNTHESIS

After design entry and optional simulation, you run synthesis. During this step, VHDL, Verilog, or mixed-language designs become netlist files that are accepted as input to the implementation step.

IMPLEMENTATION

After synthesis, you run design implementation, which converts the logical design into a physical file format that can be downloaded to the selected target device. From Project Navigator, you can run the implementation process in one step, or you can run each of the implementation processes separately. Implementation processes vary depending on whether you are targeting a Field Programmable Gate Array (FPGA) or a Complex Programmable Logic Device (CPLD).

VERIFICATION

You can verify the functionality of your design at several points in the design
flow. You can use simulator software to verify the functionality and timing of your
design or a portion of your design. The simulator interprets VHDL or Verilog code into
circuit functionality and displays logical results of described HDL to determine correct
circuit operation. Simulation allows you to create and verify complex functions in a
relatively small amount of time. You can also run in-circuit verification after
programming your device.
DEVICE INSTALLATION

After generating a programming file, you configure your device. During


configuration, you generate configuration files and download the programming files
from a host computer to a Xilinx device.

ISim

Xilinx ISim, the ISE simulator, is a Hardware Description Language (HDL) simulator that enables you to perform functional and timing simulations for VHDL, Verilog and mixed VHDL/Verilog designs.

Simulation Using ISim

Now that you have a test bench in your project, you can perform behavioral simulation on the design using ISim. The ISE software has full integration with ISim: it enables ISim to create the work directory, compile the source files, load the design, and perform simulation based on the simulation properties.

To select ISim as your project simulator, do the following:

• In the Hierarchy pane of the Project Navigator Design panel, right-click the device line (xc3s100E-5tq114), and select "Design Properties".

• In the Design Properties dialog box, set the Simulator field to "ISim (VHDL/Verilog)".

Locating the Simulation Processes

The simulation processes in the ISE software enable you to run simulation on the design using ISim.
To locate the simulation processes, do the following:

• In the View pane of the Project Navigator Design panel, select "Simulation", and select "Behavioral" from the drop-down list.

• In the Hierarchy pane, select the test bench file (ex: stopwatch_tb).

• In the Processes pane, expand "ISim Simulator" to view the process hierarchy.

The following simulation processes are available:

Check Syntax: this process checks for syntax errors in the test bench.
Simulate Behavioral Model: this process starts the design simulation.

Specifying Simulation Properties

You will perform a behavioral simulation on the stopwatch design after you set
process properties for simulation.

The ISE software allows you to set several ISim properties in addition to the simulation netlist properties. To see the behavioral simulation properties and to modify the properties for this tutorial, do the following:

In the Hierarchy pane of the Project Navigator Design panel, select the test bench file, and then do the following:

• In the Processes pane, expand "ISim Simulator", right-click "Simulate Behavioral Model", and select "Process Properties".

• In the Process Properties dialog box, set the property display level to "Advanced". This global setting enables you to see all available properties.

• Change the Simulation Run Time to "2000 ns", and click "OK".
Performing Simulation

After the process properties have been set, you are ready to run ISim to simulate the design. To start the behavioral simulation, double-click "Simulate Behavioral Model". ISim creates the work directory, compiles the source files, loads the design, and performs simulation for the time specified.

CHAPTER

CONCLUSION

All the architectures have been described using Verilog HDL. Delay, power and
area values for the designs are obtained by synthesizing the Verilog HDL description.
The proposed converter is flexible and can be plugged into any homogeneous multiplication architecture to achieve better performance, irrespective of the method used to generate the binary partial products.

APPLICATIONS

1. The BIOS in many personal computers stores the date and time in BCD because
the MC6818 real-time clock chip used in the original IBM PC AT motherboard
provided the time encoded in BCD. This form is easily converted into ASCII for
display.

2. The Atari 8-bit family of computers used BCD to implement floating-point


algorithms. The MOS 6502 processor used has a BCD mode that affects the
addition and subtraction instructions.

3. Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year, stored as BCD, were misinterpreted as 16, causing an error in the unit's date and rendering most functions inoperable.

ADVANTAGES

 Many non-integral values, such as decimal 0.2, have an infinite place-value


representation in binary (.001100110011...) but have a finite place-value in binary-
coded decimal (0.0010). Consequently a system based on binary-coded decimal
representations of decimal fractions avoids errors representing and calculating such
values.

 Scaling by a factor of 10 (or a power of 10) is simple; this is useful when a decimal
scaling factor is needed to represent a non-integer quantity (e.g., in financial
calculations)

 Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal


does not require rounding.

 Alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact, shift.

 Conversion to a character form or for display (e.g., to a text-based format such


as XML, or to drive signals for a seven-segment display) is a simple per-digit
mapping, and can be done in linear (O(n)) time. Conversion from
pure binary involves relatively complex logic that spans digits, and for large numbers
no linear-time conversion algorithm is known (see Binary numeral system).

REFERENCES
[1] IEEE standard for floating-point arithmetic, IEEE SC, Oct. 2006, http://754r.ucbtest.org/drafts/754r.pdf

[2] Erle, M.A.; Schwarz, E.M.; Schulte, M.J., "Decimal multiplication with efficient partial product generation," 17th IEEE Symposium on Computer Arithmetic (ARITH-17), 27-29 June 2005, pp. 21-28.

[3] Erle, M.A.; Schulte, M.J., "Decimal multiplication via carry-save addition," Proceedings of the IEEE International Conference on Application-Specific Systems, Architectures, and Processors, 24-26 June 2003, pp. 348-358.

[4] Vazquez, A.; Antelo, E.; Montuschi, P., "A New Family of High-Performance Parallel Decimal Multipliers," 18th IEEE Symposium on Computer Arithmetic, 25-27 June 2007.

[5] Sreehari Veeramachaneni, M. Keerthi Krishna, L. Avinesh, P. Sreekanth Reddy, M.B. Srinivas, "Novel High-Speed 16-Digit BCD Adders Conforming to IEEE 754r Format," IEEE Computer Society Annual Symposium on VLSI (ISVLSI'07), pp. 343-350, Mar. 2007.

[6] James, R.K.; Shahana, T.K.; Jacob, K.P.; Sasi, S., "Decimal multiplication using compact BCD multiplier," International Conference on Electronic Design (ICED 2008), 1-3 Dec. 2008, pp. 1-6.

[7] Jaberipur, G.; Kaivani, A., "Binary-coded decimal digit multipliers," IET Computers and Digital Techniques, vol. 1, no. 4, July 2007, pp. 377-381.

[8] Veeramachaneni, S.; Srinivas, M.B., "Novel High-Speed Architecture for 32-Bit Binary Coded Decimal (BCD) Multiplier," International Symposium on Communications and Information Technologies (ISCIT 2008), 21-23 Oct. 2008, pp. 543-546.

[9] Jaberipur, G.; Kaivani, A., "Improving the Speed of Parallel Decimal Multiplication," IEEE Transactions on Computers, vol. 58, no. 11, Nov. 2009, pp. 1539-1552.

[10] Schmookler, M., "High-speed binary-to-decimal conversion," IEEE Trans. Comput., 1968, 17, (5), pp. 506-508.

[11] Rhyne, V.T., "Serial binary-to-decimal and decimal-to-binary conversion," IEEE Trans. Comput., 1970, 19, (9), pp. 808-812.

[12] Arazi, B.; Naccache, D., "Binary-to-decimal conversion based on the divisibility of 2^8 - 1 by 5," Electron. Lett., 1992, 28, (23), pp. 2151-2152.

SIMULATION RESULTS

RTL Schematic:
Internal Schematic:
Simulation:
RTL CODE:

RTL Code for top Module:

module final_binary_bcd_mul(
input [6:0] p,
output [7:0] z
);
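// Converts a 7-bit binary partial product p (0..81) into two packed BCD digits:
// z[7:4] is the higher significant (tens) digit, z[3:0] the lower (units) digit.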
//WIRE DECLARATION

wire c1,c2;
wire t0,t1,t2,t3;
wire [2:0] a,e,f,g,h;
wire [2:0] y;
wire o1,o2;
wire [2:0]s_1,s_2,s_3,s_4;
wire q1,q2,q3,q4,out1;
// CARRY GENERATOR (C2) AND CONTRIBUTION GENERATOR (t3..t0)

assign c2 = ((~c1)&(p[4]&(p[3]|p[2])))|(p[3]&p[5])|(p[6]&p[3])|(p[4]&p[3]&p[2]);

assign t0 = ((~p[6])&(~p[5])&p[4])|(p[5]&(~p[4]));
assign t1 = (p[5]|p[6])&(~p[4]);
assign t2 = (p[5]&(p[4]|p[3]))|(p[6]&(~p[4]));
assign t3 = p[6] & p[4];

//BCD CORRECTION

assign c1 = p[3]&(p[2]|p[1]);

// always@(*)
// begin
// if(c1==1'b1)
// adds = p[3:1] + 3'b011;
// else
// adds = p[3:1];
// end
//

//bcd_correction BCD(adds[0],adds[1],adds[2],c1,a[0],a[1],a[2]);

assign out1 = c1 & a[2];

//assign a = (c1==1'b1) ? (p[3:1]+3'b011) : p[3:1];


bcd_correction BCD(p[1],p[2],p[3],c1,a[0],a[1],a[2]);

//+1,+2,+3,+4 ADDER BLOCKS

add1_block add1(a[0],a[1],a[2],e[0],e[1],e[2]);
add2_block add2(a[0],a[1],a[2],f[0],f[1],f[2]);
add3_block add3(a[0],a[1],a[2],g[0],g[1],g[2]);
add4_block add4(a[0],a[1],a[2],h[0],h[1],h[2]);

//MULTIPLEXER ARRAY

// multiplexer_array mux(g,f,e,h,a,3'b0,3'b0,3'b0,p[6:4],y);
mux21 m1(e,h,p[4],s_1);
mux21 m2(f,g,p[4],s_2);
mux21 m3(s_2,s_1,p[5],s_3);

//xor u0(p[4],p[6],t_1);
//
// not u1(q2,q3);
// not u2(p[5],t_3);
// and u4(t,t_3,t_4);
assign q1 = p[4] ^ p[6];
assign q2 = ~ q1;
assign q3 = ~ p[5];
assign q4 = q2 & q3;

mux21 m4(s_3,a,q4,s_4);
//BCD CORRECTION CIRCUIT

//assign {z[3],z[2],z[1]} = (c2==1'b1) ? (s_4+3'b011) : s_4;

bcd_correction BCD1(s_4[0],s_4[1],s_4[2],c2,z[1],z[2],z[3]);

//TWO BIT ONE ADDER

twobit_one_adder bit2(t0,t1,c1,o1,o2);
twobit_one_adder bit1(o1,o2,c2,z[4],z[5]);

assign z[0] = p[0];


assign z[6] = t2;
assign z[7] = t3;

endmodule

RTL Code for BCD Correction:

module bcd_correction(
input s1,s2,s3,
input c,
output o1,o2,o3
);
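// Adds 3 to the 3-bit value {s3,s2,s1} when c is high (equivalent to adding 6 to the
// 4-bit BCD digit whose bit 0 is left untouched); the carry out of this 3-bit addition
// is intentionally discarded, since the digit-level carry is signalled separately (C1/C2).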

wire y1,y2,y3;

assign o1 = (s1&(~c))|((~s1)&c);
assign y1 = (s2&s1)|((~s2)&(~s1));
assign o2 = (s2&(~c))|(y1&c);
assign y2 = s1 | s2;
assign y3 = (s3&(~y2))|((~s3)&y2);
assign o3 = (s3&(~c))|(y3&c);

endmodule

RTL Code for Adder:

module add1_block(
input s1,s2,s3,
output o1,o2,o3
);
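// Adds 1 to the 3-bit value {s3,s2,s1}, i.e. adds 2 to the BCD digit built from bit
// positions 3..1; no carry out is produced here, since overflow into the higher digit
// is precomputed by the Carry Generator (C2). The add2/add3/add4 blocks are
// analogous, adding 2, 3 and 4 respectively.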

wire y;

assign o1 = ~s1;
assign o2 = (s2&(~s1)) | ((~s2)&s1);
assign y = s1&s2;
assign o3 = (s3&(~y)) | ((~s3)&y);

endmodule

RTL Code for MUX:

module mux21(
input [2:0] a,b,
input s,
output [2:0] y
);
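// 2-to-1 multiplexer for 3-bit buses: y = a when s = 0, y = b when s = 1.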
assign y = (s==1'b0) ? a : b;

endmodule

RTL Code for Adder2:

module twobit_one_adder(
input s1,s2,
input c,
output o1,o2
);
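// Adds the single carry bit c to the 2-bit value {s2,s1}; no carry out is generated,
// since propagation into t2/t3 is handled by the Contribution Generator logic.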
wire y1;
assign o1 = (s1&(~c))|((~s1)&c);
assign y1 = (s2&(~s1))|((~s2)&s1);
assign o2 = (s2&(~c))|(y1&c);

endmodule
