CHAPTER 1
INTRODUCTION
The challenge of verifying a large design is growing exponentially, creating a need for new methods that make functional verification easier. Several strategies have been proposed in recent years to achieve good functional verification with less effort; the most recent advancement towards this goal is verification methodologies. A methodology defines a skeleton over which designers can add flesh and skin according to their requirements to achieve functional verification.
i) Vedic Mathematics, derived from the Vedas, provides one-line, mental and superfast
methods along with quick cross-checking systems.
ii) Vedic Mathematics converts a tedious subject into a playful and blissful one which
students learn with smiles.
iii) Vedic Mathematics offers a new and entirely different approach to the study of
Mathematics based on pattern recognition. It allows for constant expression of a
student's creativity, and is found to be easier to learn.
iv) In this system, for any problem, there is always one general technique applicable to
all cases, together with a number of special-pattern methods. The element of choice and
flexibility at each stage keeps the mind lively and alert, develops clarity of thought
and intuition, and thereby promotes holistic development of the human brain.
v) Vedic Mathematics, with its special features, has the inbuilt potential to solve the
psychological problem of Mathematics anxiety.
The Sanskrit word “Veda” means “knowledge”. This gift that the Indians gave to the
world thousands of years ago is now employed in global silicon chip technology.
Vedic mathematics is used to simplify the complex calculations of conventional
mathematics. The Vedic formulae are claimed to be based on the natural principles on
which the human mind works, matching the way a person naturally calculates, and
hence Vedic mathematics provides techniques to operate easily on numbers of large
magnitude. It combines simple arithmetic rules with high speed and easy
implementation, making it viable for a range of computing applications. This is a very
interesting field and presents some effective algorithms which can be applied to
various branches of engineering such as computing and digital system design.
In this text, the words Sutra, aphorism and formula are used synonymously, as are
the words Upa-sutra, Sub-sutra, Sub-formula and corollary. For each Sutra we shall
give the literal meaning, contextual meaning, process, and different methods of
application along with examples, followed by explanations, further short-cuts,
algebraic proofs, and so on. What follows relates to a single formula or a group of
formulae related to the methods of Vedic Mathematics.
1.3 Urdhva-Tiryagbhyam
Urdhva-Tiryagbhyam is the basic sutra applicable to all cases of multiplication. It is
very short and compendious, consisting of only one compound word meaning
“vertically and crosswise”: digits of the multiplicand and multiplier are multiplied
vertically and crosswise, and the partial results are summed column by column. The
vertically-and-crosswise procedure is also known as the array multiplication
technique. Fig-1 represents the 6×6 multiplier using the vertically and crosswise
method.
Urdhva-Tiryagbhyam is the general formula applicable to all cases of multiplication,
and also to the division of a large number by another large number.
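As a sketch of the general vertically-and-crosswise idea (an illustrative reconstruction, not taken from the text), each column of the product is the sum of the cross products of digits whose positions add up to that column index, after which carries are propagated once:

```python
def urdhva_multiply(x, y, base=10):
    """Vertically-and-crosswise multiplication of two digit lists
    (least significant digit first). Illustrative sketch only."""
    n, m = len(x), len(y)
    # each column k collects all cross products x[i]*y[j] with i + j == k
    cols = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            cols[i + j] += x[i] * y[j]
    # single carry-propagation pass over the columns
    result, carry = [], 0
    for c in cols:
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result
```

For example, 12 × 13 with digit lists [2, 1] and [3, 1] produces the digits [6, 5, 1], i.e. 156. The same routine works for binary digits with base=2, which is the case realized in hardware.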
Vedic multipliers are based on the principles of the Vedic Sutras. In Sanskrit,
‘Veda’ stands for ‘knowledge’. Vedic mathematics is believed to have been
reconstructed from the Vedas by Swami Sri Bharati Krishna Tirthaji between the years
1911 and 1918. Vedic mathematics has been organized into sixteen different Sutras
which can be applied to any branch of mathematics, such as algebra, trigonometry,
geometry, mensuration, etc. Its methods turn complex calculations into simpler ones
because they are based on methods similar to the working of the human mind, thereby
making them easier. It has been observed that, being coherent and symmetrical, designs
based on these methods consume much less power and use lower chip area. Designs
based on Vedic Sutras have been used in many applications such as ALUs and MAC
units, and have shown better results.
CHAPTER 2
LITERATURE SURVEY
The 12x12 modules are realized from 4x4-bit multiplier modules. The proposed
method shows advantages such as power saving, configurability and self-reparability,
and can be extended to the DFT. A low-power multiplier based on the ancient Vedic
multiplication method has also been reported; there the 'Urdhva tiryakbhyam' and
'Nikhilam' sutras are used for multiplication. The multiplier based on this technique is
compared with modern multipliers to highlight the power and speed advantages of
Vedic multipliers. To check the Vedic multiplier, BIST (Built-In Self-Test) is
implemented, and the design is found fault-free. The outcomes are compared with
Booth's multiplier in terms of parameters such as power and time delay. The multiplier
is realized using VHDL and a Spartan 2G FPGA, and simulation results are presented
for power and time delay. Another reference presents a squarer-based
high-performance multiplier for which Vedic multipliers and two flexible
constant-coefficient multipliers are used; outcomes are stored in ROM, which raises
power consumption. A further proposed arrangement attains increased speed and
compact area compared to array multipliers; according to its authors, its only
drawback is an increase in dynamic power. The multiplier is a very essential part of
any processor and needs more hardware resources and processing time than
subtractors and adders.
About 8.72 percent of the instructions of every processor are multiplication-centered,
and a substantial amount of CPU time is spent on this operation. A comparative study
of diverse multipliers with respect to low power and high speed using the 'Urdhva
tiryakbhyam' algorithm has also been presented; it likewise proposes to use the
'Nikhilam' sutra for fewer iterations. Array, Wallace and Booth multipliers are
compared, with Vedic mathematical procedures used in all. The results show that the
Booth multiplier is superior in aspects such as speed, delay, area, complexity and
power consumption. The array multiplier needs more power and gives an optimal
number of components; its delay is larger than that of the Wallace tree multiplier. The
'Nikhilam' sutra needs fewer iterations to perform multiplication and proves to be less
complex than the 'Urdhva tiryakbhyam' algorithm. Additional work can be carried out
to minimize delay and improve speed. An efficiency comparison between a Karatsuba
multiplier using polynomial multiplication and a multiplier realizing the 'Nikhilam'
sutra has also been offered, which states that the Karatsuba multiplier displays a
speed advantage.
CHAPTER 3
EXISTING METHODS
Comparing the critical paths of a 4-bit conventional multiplier and a 4-bit Vedic
multiplier: for a 4-bit multiplier, 4 partial products are generated, named p0 to p3.
The Wallace tree multiplier uses 3:2 reduction, so the partial products are reduced
from 4 to 3. The critical-path delay is given by the sum of 3 full-adder sums, 2
full-adder carries and a half-adder carry. In the Vedic multiplier, as shown in Figure 3,
the 2 full-adder sums in the critical path are replaced by 3 half-adder sums; in terms of
XOR gates, Vedic-Wallace uses 3 XOR gates instead of 4, i.e., less carry-propagation
delay than the conventional method. Hence, Vedic-Wallace has a variable
improvement over DesignWare depending on the number of bits in the
multiplication.
In the 2x2 case, the least significant bits of the multiplicand and the multiplier are
multiplied vertically to give the first bit of the final product. Then the LSB of the
multiplicand is multiplied with the next higher bit of the multiplier and added to the
product of the LSB of the multiplier and the next higher bit of the multiplicand
(crosswise). The sum gives the second bit of the final product, and its carry is added
to the partial product obtained by multiplying the most significant bits, giving a sum
and a carry. This sum is the third bit and the carry becomes the fourth bit of the final
product.
s0 = a0·b0 (1)
c1 s1 = a1·b0 + a0·b1 (2)
c2 s2 = c1 + a1·b1 (3)
The final result is c2 s2 s1 s0. This multiplication method is applicable in all cases.
The 2x2-bit Vedic Wallace multiplier is implemented using four two-input AND
gates along with two half-adders. In the same way, 4-, 8-, 16-, 32- and N-bit
multipliers are designed with little modification.
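Equations (1)-(3) can be sketched in software as follows (an illustrative behavioral model only; the hardware realizes these steps with AND gates and half-adders):

```python
def vedic_2x2(a, b):
    """2x2 Urdhva-Tiryagbhyam multiplication, following eqs. (1)-(3).
    a, b: 2-bit integers; returns the 4-bit product c2 s2 s1 s0."""
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    s0 = a0 & b0                      # (1) vertical: product of LSBs
    t = a1 * b0 + a0 * b1             # (2) crosswise products
    s1, c1 = t & 1, t >> 1
    t = c1 + (a1 & b1)                # (3) vertical: MSBs plus carry
    s2, c2 = t & 1, t >> 1
    return (c2 << 3) | (s2 << 2) | (s1 << 1) | s0
```

For example, vedic_2x2(3, 3) returns 9, i.e. (1001)2.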
3.4 Vedic Wallace Multiplier for 4x4 Bit Module
The 4x4-bit Vedic Wallace multiplication unit is realized by incorporating four
2x2 multiplier modules. Its processing is depicted in block-diagram form in Figure 4.
3.4.1 Vedic Wallace Multiplier for 8x8 Bit Module
The 8x8 Vedic Wallace multiplier module is realized using four 4x4
multiplier modules. The processing of the 8x8 multiplier based on the Vedic Wallace
methodology is depicted in Figure 5. For example, multiplying two 8-bit operands
yields a 16-bit product.
Figure 3.5 Schematic Diagram of the 8x8-bit Vedic Wallace based multiplier
CHAPTER 4
PROBLEM STATEMENT
One of the primary features that help us determine the computational power of
a processor is the speed of its arithmetic unit. An important function of an arithmetic
block is multiplication because, in most mathematical computations, it forms the bulk
of the execution time. Thus, the development of a fast multiplier has been a key
research area for a long time. Some of the important algorithms proposed for fast
multiplication in literature are Array, Booth and Wallace multipliers. Vedic
Mathematics is a methodology of arithmetic rules that allows for more speed-efficient
implementations. This work presents a high-speed Vedic multiplier based on the
Urdhva Tiryagbhyam sutra of Vedic mathematics, incorporating a novel adder based
on the Quaternary Signed Digit (QSD) number system. Multipliers are the most
important units in high-speed arithmetic logic units, multiply-accumulate units, digital
signal processing units, etc. With increasingly tight constraints on delay, ever more
emphasis is being placed on the design of high-speed multipliers, and many
adjustments over standard multiplier architectures have been proposed to increase
speed.
CHAPTER 5
PROPOSED METHOD
5.1 INTRODUCTION
As discussed in the problem statement, the speed of the multiplier largely determines
the computational power of a processor, and algorithms such as the Array, Booth and
Wallace multipliers have long been studied for fast multiplication. Multiplication in
the Vedic methodology consists of three
steps: generation of partial products, reduction of partial products, and finally carry
propagate addition. Multiplier design based on Vedic mathematics has many
advantages as the partial products and sums are generated in one step, which reduces
the carry propagation from LSB to MSB. This feature helps in scaling the design for
larger inputs without proportionally increasing the propagation delay as all smaller
blocks of the design work concurrently. Previous references have compared the Vedic
multiplier with other multiplier architectures, namely Booth, Array and Wallace, on
the basis of delay and power consumption; the Vedic multiplier showed improvements
in both parameters. Thus, many implementations of multiplication algorithms based
on Vedic sutras have been reported in the literature, mostly based on the Urdhva
Tiryagbhyam and Nikhilam sutras. As the Nikhilam sutra is only efficient for inputs
that are close to a power of 10, this work presents a design that performs high-speed
multiplication based on the Urdhva Tiryagbhyam sutra, which is a generalized method
for all numbers. The final step, carry-propagate addition,
requires a fast adder scheme because it forms a part of the critical path. A variety of
adder schemes have been proposed in the literature to optimize the performance of
Vedic multipliers. An adder based on QSD shows an improvement in speed over other
state-of-the-art adders. Earlier implementations of the QSD adder were based on
Multi-Voltage or Multi-Valued Logic (MVL). The difficulty in applying quaternary
addition outside MVL is that the adder is only a small unit of the design, whose
outputs must be converted back to binary for further processing. However, QSD
addition can also be realized in standard binary logic, as is done in the design
presented here.
QSD is a radix-4 number system that provides faster arithmetic than binary
computation, as it eliminates the rippling of carries during addition. Every number in
QSD can be represented using digits from the set {-3, -2, -1, 0, 1, 2, 3}. Being a
higher-radix number system, it uses fewer gates and hence saves time and reduces
circuit complexity. Addition of two numbers in QSD involves two stages.
Stage 1: Generation of intermediate carry and sum. When two digits are added in the
QSD number system, the resulting sum ranges from -6 to +6. Sums with magnitude
higher than 3 are represented by two digits, the least significant digit being the
intermediate sum and the next digit the intermediate carry. Since every number in
QSD has multiple representations, the representation is chosen such that the
magnitude of the sum digit is at most 2 and the magnitude of the carry digit is at most
1, for the reason explained in the next stage.
Stage 2: Carry-free addition. Because the intermediate sum and carry have these
limits on their magnitude, the final result can be obtained directly by adding each sum
digit to the carry from the next lower significant digit, with no further carry
propagation.
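The two stages can be sketched as follows (an illustrative software model; the function and variable names are my own, not from the text):

```python
def qsd_decompose(s):
    """Stage 1: split a digitwise sum s in [-6, 6] into (carry, interim sum)
    with s == 4*carry + interim, |interim| <= 2 and |carry| <= 1."""
    if s > 2:
        return 1, s - 4
    if s < -2:
        return -1, s + 4
    return 0, s

def qsd_add(a, b):
    """Carry-free QSD addition of equal-length digit lists
    (digits in -3..3, least significant first)."""
    stage1 = [qsd_decompose(x + y) for x, y in zip(a, b)]
    interim = [d for _, d in stage1]
    carries = [c for c, _ in stage1]
    # Stage 2: add each interim digit to the carry from the digit below;
    # |interim| <= 2 and |carry| <= 1 guarantee no new carry is produced.
    result = [interim[0]]
    for i in range(1, len(a)):
        result.append(interim[i] + carries[i - 1])
    result.append(carries[-1])        # final carry becomes the top digit
    return result

def qsd_value(digits):
    """Numeric value of a QSD digit list (radix 4, LSB first)."""
    return sum(d * 4 ** i for i, d in enumerate(digits))
```

For example, qsd_add([3], [3]) yields the digits [2, 1], i.e. 2 + 4·1 = 6, with no carry rippling between digit positions.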
4x4 Multiplier
Table I shows all intermediate and final results involved in the multiplication of two
binary numbers, A = (1111)2 and B = (1001)2. The data flow in the proposed 4x4
multiplier is as follows:
1) A[1:0] and B[1:0], A[3:2] and B[1:0], A[1:0] and B[3:2], and A[3:2] and B[3:2]
are multiplied by 2x2 Vedic multipliers, giving outputs D0[3:0], D1[3:0], D2[3:0]
and D3[3:0] respectively.
2) D1[3:0] and D2[3:0] are added by the proposed 4-bit QSD adder, giving D4[3:0]
and a carry out.
3) D4[3:0] and {D3[1:0], D0[3:2]} are added by the second 4-bit QSD adder, giving
D5[3:0] and a carry out.
4) A half adder adds the carry outs of the two QSD adders; its output is fed to the
2-bit adder along with D3[3:2].
5) The result C, in binary, is obtained by concatenating the output of the 2-bit adder,
D5[3:0] and D0[1:0].
The proposed design can be extended to multiply both negative and positive integers
by adding a sign bit to both inputs. XOR logic can then compute the sign bit of the
final output, while the multiplication of the magnitudes proceeds simultaneously in a
similar manner to the example described above.
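The data flow above can be modeled end-to-end as a behavioral sketch; here ordinary binary addition stands in for the QSD adders of steps 2-4, so this shows the data flow only, not the adder implementation:

```python
def vedic_2x2(a, b):
    # 2x2 Urdhva-Tiryagbhyam block (eqs. (1)-(3) of Chapter 3)
    a0, a1, b0, b1 = a & 1, a >> 1, b & 1, b >> 1
    s0 = a0 & b0
    t = a1 * b0 + a0 * b1
    c2s2 = (t >> 1) + (a1 & b1)
    return (c2s2 << 2) | ((t & 1) << 1) | s0

def vedic_4x4(a, b):
    """Behavioral model of the proposed 4x4 multiplier data flow."""
    d0 = vedic_2x2(a & 3, b & 3)             # step 1: four 2x2 products
    d1 = vedic_2x2(a >> 2, b & 3)
    d2 = vedic_2x2(a & 3, b >> 2)
    d3 = vedic_2x2(a >> 2, b >> 2)
    t = d1 + d2                               # step 2: first 4-bit adder
    d4, c0 = t & 0xF, t >> 4
    t = d4 + (((d3 & 3) << 2) | (d0 >> 2))    # step 3: add {D3[1:0], D0[3:2]}
    d5, c1 = t & 0xF, t >> 4
    hi = (d3 >> 2) + c0 + c1                  # step 4: half adder + 2-bit adder
    return (hi << 6) | (d5 << 2) | (d0 & 3)   # step 5: concatenation
```

For A = (1111)2 = 15 and B = (1001)2 = 9 this reproduces the final result of Table I, 135 = (10000111)2.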
The 4x4 multiplier design can be scaled to multiply larger numbers as shown
in Fig. 4, where the design is scaled up for a 32 bit multiplier.
The intermediate sum lies in the range [0, 6], as the operands are unsigned
numbers. From [16], for quaternary addition to be carry-free beyond the first stage,
the intermediate sum cannot be greater than 2. To ensure this stipulation holds, the
two-digit (1,-1)4 representation of 3 must be chosen while adding. However, this
presents a blocking case when converting the final output string back into binary, as it
prevents us from simply concatenating the lower two bits of the quaternary output
digits to get the binary equivalent. For addition of unsigned numbers, if the (0,3)4
representation were used, direct concatenation of results would be possible, but the
addition would not always be carry-free after the initial stage. Thus, the concept of an
adjusting bit has been devised to resolve the dilemma of which representation of 3 to
use, so that both carry-free addition and concatenation of the output digits can be
realized in the same design. The solution is that the (0,3)4 representation of 3 must be
taken instead of the (1,-1)4 representation in some cases. But determining when such a
change is required before proceeding with the addition would increase the delay of the
design and be counter-productive. Thus, the (1,-1)4 representation of 3 is always
selected in Stage 1, satisfying the conditions for carry-free arithmetic, while the
necessary adjustments are made in Stage 2 whenever the (0,3)4 representation should
have been taken; the need for such an adjustment is determined via an adjusting bit.
Here Sn-2 is true if the (n-2)th intermediate sum digit is 3. This formula also covers
the problem of n consecutive 3's in a similar manner. The adjusting bit can be
predicted from the initial inputs to the adders and computed in parallel with Stage 1;
thus its effect on the delay of the adder is minimal. The above example is re-evaluated
with the modified formula: Input A = (X3X2X1)4 = (A8A7A6A5A4A3A2A1A0)2 =
(030)4; Input B = (Y3Y2Y1)4 = (B8B7B6B5B4B3B2B1B0)2 = (003)4. The adjusting
bit for the addition of Xn and Yn is Sn-1·(Sn-2 + …). As can be seen from the flow of
data shown in Table V, the modified formula gives the correct binary output after
concatenation. The proposed adder
works in two stages, as shown in Fig. 5.
1) In the first stage, as in Fig. 5(a), each pair of digits at the same position in the
quaternary representations of the two n-digit numbers A and B is added using a 2-bit
adder to generate a sum in the range [0, 6]. From this sum, the intermediate sum and
intermediate carry for the next stage are calculated in parallel using 2x1 multiplexers.
The logic for selecting the representation of sum and carry is explained in [16]. The
adjusting bit is also computed in parallel with the addition process; the inputs to the
adjusting-bit calculation block for each quaternary digit addition are the previous two
quaternary digits of A and B, signified by [n-2 : n-5].
2) The second stage has two modules, as shown in Fig. 5(b). One is a one-bit module
that performs the computation (A + B - C), where A is the LSB of the intermediate
sum, B is the carry from the previous quaternary digit addition and C is the adjusting
bit. The other module is a half adder which adds the carry from the (A + B - C)
module to the bit to the left of the least significant bit of the intermediate sum. In the
final concatenation, the sign bit is not used owing to the adjustments proposed in the
design; thus its final value is not computed.
5.7 EXTENSION
Figure 8: Schematic block diagram of the NxN-bit Vedic Wallace based multiplier
An ALU was designed to perform the arithmetic and logical operations for the
controller. The arithmetic operations performed are 32-bit addition, subtraction, and
multiplication. The logical operations performed are AND, OR, XOR, NAND, NOR,
XNOR, NOT and data buffer. For designing the ALU, a flexible design consisting of
smaller but more manageable blocks, some of which can be re-used, was followed [2].
The design covers a half-adder, 2-bit multiplier, 4-bit Brent-Kung adder, 4-bit
multiplier, 8-bit Brent-Kung adder, 8-bit multiplier, 8-bit full adder, 8-bit subtractor,
32-bit Brent-Kung adder, 32-bit multiplier, 32-bit full adder, 32-bit subtractor, 32-bit
arithmetic unit, logical unit and 32-bit ALU [7].
ARITHMETIC UNIT
·The arithmetic unit performs the following tasks: addition with carry, multiplication,
and subtraction.
·A half adder is used to build the 4-bit Brent-Kung adder and then a 4-bit multiplier.
·The 4-bit Brent-Kung adder is used to build the 8-bit Brent-Kung adder.
·The 8-bit multiplier is made by using the 4-bit multiplier and the 8-bit Brent-Kung
adder.
·8-bit subtractor.
·The 32-bit multiplier is made by using the 16-bit multiplier and the 16-bit Brent-
Kung adder.
·32-bit subtractor.
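The Brent-Kung adders named above compute carries with a prefix tree of generate/propagate signals. A behavioral sketch of that prefix structure (my own illustrative model, not the authors' RTL) is:

```python
def brent_kung_add(a, b, n=8):
    """Behavioral sketch of an n-bit Brent-Kung adder (n a power of two)."""
    g = [(a >> i) & (b >> i) & 1 for i in range(n)]    # generate bits
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(n)]  # propagate bits
    G, P = g[:], p[:]
    # up-sweep: build group (G, P) pairs at strides 1, 2, 4, ...
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            G[i] |= P[i] & G[i - d]
            P[i] &= P[i - d]
        d *= 2
    # down-sweep: fill in the remaining carry positions
    d = n // 4
    while d >= 1:
        for i in range(3 * d - 1, n, 2 * d):
            G[i] |= P[i] & G[i - d]
            P[i] &= P[i - d]
        d //= 2
    # carry into bit i is the group generate of bits 0..i-1 (carry-in = 0)
    carries = [0] + G
    s = 0
    for i in range(n):
        s |= (p[i] ^ carries[i]) << i
    return s | (carries[n] << n)      # include the carry-out as bit n
```

The two sweeps give the O(log n) depth that motivates composing the 8-bit and 32-bit adders from smaller Brent-Kung blocks.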
LOGICAL UNIT
For the design of the logical unit, the performance of logic circuits has been
analysed by employing commonly used logic gates and a multiplexer. The logic unit
performs operations such as logical AND, OR, XOR, NOT, NAND, NOR, XNOR,
and data buffering. The arithmetic unit and logical unit are then combined into the
arithmetic logic unit, whose schematic block diagram is shown in Figure 9. The
output of the ALU and logical unit is 64 bits. Table 1 shows the control word for the
ALU operations.
This section presents the lowest path delays for the various multipliers, together
with a quantitative and comparative analysis of the different Vedic-mathematics
approaches through various multiplier and adder designs and implementations.
Additionally, to validate the proposed Vedic-mathematics-based multiplier and adder
designs, the synthesis and simulation results have been compared with other popular
multiplier structures designed from different multiplication algorithms. Table 2 shows
the efficiency of the proposed Vedic Wallace multiplier at the 32-bit level; it shows
the lowest path delay in comparison to the others. Figure 10 shows the schematic
block diagram of the 32-bit RTL implementation using Vedic mathematics, whereas
the simulation results of the 32-bit Vedic Wallace multiplier and of the 32-bit
arithmetic logic unit under the control word are shown in Figure 11 and Figure 12,
respectively. Besides this, Table 3 summarizes the ALU with the different multipliers
and adders.
CHAPTER 6
GENERAL DOCUMENT
What is VLSI?
VLSI stands for "Very Large Scale Integration". This is the field which
involves packing more and more logic devices into smaller and smaller areas.
·Simply put, an integrated circuit is many transistors on one chip.
·VLSI involves the design/manufacturing of extremely small, complex circuitry
using modified semiconductor material.
·An integrated circuit (IC) may contain millions of transistors, each a few µm in
size.
·Applications are wide-ranging: most electronic logic devices.
These advantages of integrated circuits translate into advantages at the system level:
Smaller physical size. Smallness is often an advantage in itself: consider
portable televisions or handheld cellular telephones.
Lower power consumption. Replacing a handful of standard parts with a single
chip reduces total power consumption. Reducing power consumption has a
ripple effect on the rest of the system: a smaller, cheaper power supply can be
used; since less power consumption means less heat, a fan may no longer be
necessary; and a simpler cabinet with less electromagnetic shielding may be
feasible, too.
Reduced cost. Reducing the number of components, the power supply
requirements, cabinet costs, and so on, will inevitably reduce system cost. The
ripple effect of integration is such that the cost of a system built from custom
ICs can be less, even though the individual ICs cost more than the standard
parts they replace.
Understanding why integrated circuit technology has such profound influence on the
design of digital systems requires understanding both the technology of IC
manufacturing and the economics of ICs and digital systems.
Applications
Electronic system in cars.
Digital electronics control VCRs
Transaction processing system, ATM
Personal computers and Workstations
Medical electronic systems.
Applications of VLSI
Electronic systems now perform a wide variety of tasks in daily life. Electronic
systems in some cases have replaced mechanisms that operated mechanically,
hydraulically, or by other means; electronics are usually smaller, more flexible, and
easier to service. In other cases electronic systems have created totally new
applications. Electronic systems perform a variety of tasks, some of them visible,
some more hidden:
Personal entertainment systems such as portable MP3 players and DVD
players perform sophisticated algorithms with remarkably little energy.
Electronic systems in cars operate stereo systems and displays; they also
control fuel injection systems, adjust suspensions to varying terrain, and
perform the control functions required for anti-lock braking (ABS) systems.
Digital electronics compress and decompress video, even at high-definition
data rates, on-the-fly in consumer electronics.
Low-cost terminals for Web browsing still require sophisticated electronics,
despite their dedicated function.
Personal computers and workstations provide word-processing, financial
analysis, and games. Computers include both central processing units (CPUs)
and special-purpose hardware for disk access, faster screen display, etc.
6.2 ASIC
An Application-Specific Integrated Circuit (ASIC) is an integrated circuit (IC)
customized for a particular use, rather than intended for general-purpose use. For
example, a chip designed solely to run a cell phone is an ASIC. Intermediate between
ASICs and industry standard integrated circuits, like the 7400 or the 4000 series, are
application specific standard products (ASSPs).
As feature sizes have shrunk and design tools improved over the years, the maximum
complexity (and hence functionality) possible in an ASIC has grown from 5,000 gates
to over 100 million. Modern ASICs often include entire 32-bit processors, memory
blocks including ROM, RAM, EEPROM, Flash and other large building blocks. Such
an ASIC is often termed a SoC (system-on-a-chip). Designers of digital ASICs use a
hardware description language (HDL), such as Verilog or VHDL, to describe the
functionality of ASICs.
6.3 SOFTWARE
INTRODUCTION TO XILINX
Migrating Projects from Previous ISE Software Releases
When you open a project file from a previous release, the ISE® software prompts you
to migrate your project. If you click Backup and Migrate or Migrate Only, the
software automatically converts your project file to the current release. If you click
Cancel, the software does not convert your project and, instead, opens Project
Navigator with no project loaded.
Note: After you convert your project, you cannot open it in previous versions of the
ISE software, such as the ISE 11 software. However, you can optionally create a
backup of the original project as part of project migration, as described below.
To Migrate a Project
1. In the ISE 12 Project Navigator, select File > Open Project.
2. In the Open Project dialog box, select the .xise file to migrate.
Note You may need to change the extension in the Files of type field to display .npl
(ISE 5 and ISE 6 software) or .ise (ISE 7 through ISE 10 software) project files.
3. In the dialog box that appears, select Backup and Migrate or Migrate Only.
4. The ISE software automatically converts your project to an ISE 12 project.
Note If you chose to Backup and Migrate, a backup of the original project is created
at project_name_ise12migration.zip.
5. Implement the design using the new version of the software.
Note Implementation status is not maintained after migration.
Properties
For information on properties that have changed in the ISE 12 software, see ISE 11 to
ISE 12 Properties Conversion.
IP Modules:
If your design includes IP modules that were created using CORE Generator™
software or Xilinx® Platform Studio (XPS) and you need to modify these modules,
you may be required to update the core. However, if the core netlist is present and you
do not need to modify the core, updates are not required and the existing netlist is
used during implementation.
Obsolete Source File Types:
The ISE 12 software supports all of the source types that were supported in the ISE 11
software.
If you are working with projects from previous releases, state diagram source files
(.dia), ABEL source files (.abl), and test bench waveform source files (.tbw) are no
longer supported. For state diagram and ABEL source files, the software finds an
associated HDL file and adds it to the project, if possible. For test bench waveform
files, the software automatically converts the TBW file to an HDL test bench and adds
it to the project. To convert a TBW file after project migration, see Converting a
TBW File to an HDL Test Bench
Creating a Project
Project Navigator allows you to manage your FPGA and CPLD designs using an
ISE® project, which contains all the source files and settings specific to your design.
First, you must create a project and then, add source files, and set process properties.
After you create a project, you can run processes to implement, constrain, and analyze
your design. Project Navigator provides a wizard to help you create a project as
follows.
Note If you prefer, you can create a project using the New Project dialog box instead
of the New Project Wizard. To use the New Project dialog box, deselect the Use New
Project wizard option in the ISE General page of the Preferences dialog box.
To Create a Project
1. Select File > New Project to launch the New Project Wizard.
2. In the Create New Project page, set the name, location, and project type, and
click Next.
3. For EDIF or NGC/NGO projects only: In the Import EDIF/NGC Project page,
select the input and constraint file for the project, and click Next.
4. In the Project Settings page, set the device and project properties, and click
Next.
5. In the Project Summary page, review the information, and click Finish to
create the project
Project Navigator creates the project file (project_name.xise) in the directory you
specified. After you add source files to the project, the files appear in the Hierarchy
pane of the Design panel.
Project Navigator manages your project based on the design properties (top-level
module type, device type, synthesis tool, and language) you selected when you
created the project. It organizes all the parts of your design and keeps track of the
processes necessary to move the design from design entry through implementation to
programming the targeted Xilinx® device.
Note For information on changing design properties, see Changing Design Properties.
You can now perform any of the following:
• Create new source files for your project.
• Add existing source files to your project.
Copied projects are the same as other projects in both form and function. For
example, you can do the following with copied projects:
• Open the copied project using the File > Open Project menu command.
• View, modify, and implement the copied project.
• Use the Project Browser to view key summary data for the copied project and
then, open the copied project for further analysis and implementation, as described in
Using the Project Browser
Alternatively, you can create an archive of your project, which puts all of the project
contents into a ZIP file. Archived projects must be unzipped before being opened in
Project Navigator. For information on archiving, see Creating a Project Archive.
To Create a Copy of a Project
1. Select File > Copy Project.
2. In the Copy Project dialog box, enter the Name for the copy.
Note The name for the copy can be the same as the name for the project, as long as
you specify a different location.
3. Enter a directory Location to store the copied project.
4. Optionally, enter a Working directory.
By default, this is blank, and the working directory is the same as the project
directory. However, you can specify a working directory if you want to keep your
ISE® project file (.xise extension) separate from your working area.
5. Optionally, enter a Description for the copy.
The description can be useful in identifying key traits of the project for reference
later.
6. In the Source options area, do the following:
Select one of the following options:
• Keep sources in their current locations - to leave the design source files in
their existing location.
If you select this option, the copied project points to the files in their existing location.
If you edit the files in the copied project, the changes also appear in the original
project, because the source files are shared between the two projects.
• Copy sources to the new location - to make a copy of all the design source
files and place them in the specified Location directory.
If you select this option, the copied project points to the files in the specified
directory. If you edit the files in the copied project, the changes do not appear in the
original project, because the source files are not shared between the two projects.
Optionally, select Copy files from Macro Search Path directories to copy files from the directories you specify in the Macro Search Path property in the Translate Properties dialog box. All files from the specified directories are copied, not just the files used by the design.
Note: If you added a netlist source file directly to the project as described in Working
with Netlist-Based IP, the file is automatically copied as part of Copy Project because
it is a project source file. Adding netlist source files to the project is the preferred
method for incorporating netlist modules into your design, because the files are
managed automatically by Project Navigator.
Optionally, click Copy Additional Files to copy files that were not included in the original project. In the Copy Additional Files dialog box, use the Add Files and Remove Files buttons to update the list of additional files to copy. Additional files are copied to the copied project location after all other files are copied. To exclude generated files from the copy, such as implementation results and reports, select the option to exclude generated files from the copy.
Overview
Hardware description languages such as Verilog differ from software
programming languages because they include ways of describing the propagation of
time and signal dependencies (sensitivity). There are two assignment operators: a blocking assignment (=) and a non-blocking assignment (<=). The non-blocking assignment allows designers to describe a state-machine update without needing to declare and use temporary storage variables (in a general-purpose programming language, the old values would first have to be copied into temporary variables before the update). Because these concepts are part of Verilog's language semantics, designers could quickly write descriptions of large circuits in a relatively compact and concise form. At the time of its introduction (1984), Verilog represented a tremendous productivity improvement for circuit designers who were already using graphical schematic capture software and specially written software programs to document and simulate electronic circuits.
6.6 History
Beginning
Verilog was the first modern hardware description language to be invented. It was created by Phil Moorby and Prabhu Goel during the winter of 1983/1984 at Automated Integrated Design Systems (renamed Gateway Design Automation in 1985) as a hardware modeling language. Gateway Design Automation was purchased by Cadence Design Systems in 1990. Cadence now has full proprietary rights to Gateway's Verilog and to Verilog-XL, the HDL simulator that would become the de facto standard (of Verilog logic simulators) for the next decade. Originally, Verilog was intended only to describe circuits and to allow simulation; support for synthesis was added later.
Verilog-95
With the increasing success of VHDL at the time, Cadence decided to make
the language available for open standardization. Cadence transferred Verilog into the
public domain under the Open Verilog International (OVI) (now known as Accellera)
organization. Verilog was later submitted to IEEE and became IEEE Standard 1364-
1995, commonly referred to as Verilog-95. In the same time frame Cadence initiated
the creation of Verilog-A to put standards support behind its analog simulator Spectre.
Verilog-A was never intended to be a standalone language and is a subset of Verilog-
AMS which encompassed Verilog-95.
Verilog 2001
Verilog 2005
System Verilog
System Verilog is a superset of Verilog-2005, with many new features and
capabilities to aid design verification and design modeling. As of 2009, the System
Verilog and Verilog language standards were merged into System Verilog 2009
(IEEE Standard 1800-2009). The advent of hardware verification languages such as
OpenVera, and Verisity's e language encouraged the development of Superlog by Co-
Design Automation Inc. Co-Design Automation Inc was later purchased by Synopsys.
The foundations of Superlog and Vera were donated to Accellera, and this work later became the IEEE standard P1800-2005: System Verilog.
In the late 1990s, the Verilog Hardware Description Language (HDL) became
the most widely used language for describing hardware for simulation and synthesis.
However, the first two versions standardized by the IEEE (1364-1995 and 1364-2001)
had only simple constructs for creating tests. As design sizes outgrew the verification
capabilities of the language, commercial Hardware Verification Languages (HVL)
such as OpenVera and e were created. Companies that did not want to pay for these
tools instead spent hundreds of man-years creating their own custom tools. This
productivity crisis (along with a similar one on the design side) led to the creation of
Accellera, a consortium of EDA companies and users who wanted to create the next
generation of Verilog. The donation of the Open-Vera language formed the basis for
the HVL features of System Verilog. Accellera’s goal was met in November 2005
with the adoption of the IEEE standard P1800-2005 for System Verilog, IEEE (2005). The most valuable benefit of System Verilog is that it allows the user to
construct reliable, repeatable verification environments, in a consistent syntax, that
can be used across multiple projects
There are many other useful features, but these allow you to create test benches at a
higher level of abstraction than you are able to achieve with an HDL or a
programming language such as C.
Examples
module main;
  initial
    begin
      $display("Hello world!");
      $finish;
    end
endmodule
module toplevel(clock, reset);
  input clock;
  input reset;

  reg flop1;
  reg flop2;

  always @(posedge reset or posedge clock)
    if (reset)
      begin
        flop1 <= 0;
        flop2 <= 1;
      end
    else
      begin
        flop1 <= flop2;
        flop2 <= flop1;
      end
endmodule
The "<=" operator in Verilog is another aspect of its being a hardware description
language as opposed to a normal procedural language. This is known as a "non-
blocking" assignment. Its action is not registered until the next clock cycle. This means that the order of the assignments is irrelevant and will produce the same result: flop1 and flop2 will swap values every clock.
The other assignment operator, "=", is referred to as a blocking assignment. When "="
assignment is used, for the purposes of logic, the target variable is updated
immediately. In the above example, had the statements used the "=" blocking operator
instead of "<=", flop1 and flop2 would not have been swapped. Instead, as in
traditional programming, the compiler would understand to simply set flop1 equal to
flop2 (and subsequently ignore the redundant logic to set flop2 equal to flop1.)
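The contrast between the two operators can be sketched in a small self-contained testbench (a minimal illustration; the module and signal names here are assumptions, not from the original design):

```verilog
// Hypothetical testbench contrasting non-blocking (<=) with blocking (=)
// assignment; all names are illustrative.
module swap_demo;
  reg clk;
  reg flop1, flop2;

  initial begin
    clk   = 0;
    flop1 = 0;
    flop2 = 1;
    #40 $finish;
  end

  always #5 clk = ~clk;   // free-running clock

  // Non-blocking: both right-hand sides are sampled before either
  // target updates, so the two registers genuinely swap every clock.
  // Had blocking (=) assignments been used, flop1 = flop2 would take
  // effect first and both registers would end up holding flop2's value.
  always @(posedge clk) begin
    flop1 <= flop2;
    flop2 <= flop1;
  end
endmodule
```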
module Div20x (rst, clk, cet, cep, count, tc);
  parameter size = 5;
  parameter length = 20;
  input rst;                 // reset
  input clk;                 // clock
  input cet;                 // count enable
  input cep;                 // count enable
  output [size-1:0] count;
  output tc;                 // terminal count

  reg [size-1:0] count;      // register declared for use
                             // within an always
                             // (or initial) block
  always @(posedge clk or posedge rst)
    if (rst)                           // asynchronous reset
      count <= {size{1'b0}};
    else if (cet && cep)               // both enables asserted
      begin
        if (count == length-1)
          count <= {size{1'b0}};
        else
          count <= count + 1'b1;
      end

  assign tc = (cet && (count == length-1));
endmodule
...
reg a, b, c, d;
wire e;
...
always @(b or e)
begin
a = b & e;
b = a | b;
#5 c = b;
d =#6 c ^ e;
end
The always clause above illustrates the other method of use: it executes any time any of the entities in its sensitivity list change, i.e. whenever b or e changes. When one of these changes, a is immediately assigned a new value, and, due to the blocking assignment, b is assigned a new value afterward (taking into account the new value of a). After a delay of 5 time units, c is assigned the value of b, and the value of c ^ e is tucked away in an invisible store. Then, after 6 more time units, d is assigned the value that was tucked away.
Signals that are driven from within a process (an initial or always block) must be of
type reg. Signals that are driven from outside a process must be of type wire. The
keyword reg does not necessarily imply a hardware register.
Constants
The definition of constants in Verilog supports the addition of a width parameter. The basic syntax is:
<width in bits>'<base letter><number>
Examples:
12'h123 - Hexadecimal 123 (using 12 bits)
20'd44 - Decimal 44 (using 20 bits - 0 extension is automatic)
4'b1010 - Binary 1010 (using 4 bits)
6'o77 - Octal 77 (using 6 bits)
// A mux as a continuous assignment.
wire out;
assign out = sel ? a : b;

// The same mux described with a case statement.
reg out;
always @(a or b or sel)
  begin
    case(sel)
      1'b0: out = b;
      1'b1: out = a;
    endcase
  end

// The same mux as an if/else
// procedural structure.
reg out;
always @(a or b or sel)
  if (sel)
    out = a;
  else
    out = b;
The next interesting structure is a transparent latch; it will pass the input to the output
when the gate signal is set for "pass-through", and captures the input and stores it
upon transition of the gate signal to "hold". The output will remain stable regardless
of the input signal while the gate is set to "hold". In the example below the "pass-
through" level of the gate would be when the value of the if clause is true, i.e. gate =
1. This is read "if gate is true, the din is fed to latch_out continuously." Once the if
clause is false, the last value at latch_out will remain and is independent of the value
of din.
reg latch_out;
always @(gate or din)
  if (gate)
    latch_out <= din; // pass din through to latch_out while gate is high
The flip-flop is the next significant template; in Verilog, the D-flop is the simplest,
and it can be modeled as:
reg q;
always @(posedge clk)
  q <= d;
The significant thing to notice in the example is the use of the non-blocking assignment. A basic rule of thumb is to use <= when there is a posedge or negedge statement within the always clause.
A variant of the D-flop is one with an asynchronous reset; there is a convention that
the reset state will be the first if clause within the statement.
reg q;
always @(posedge clk or posedge reset)
if(reset)
q <=0;
else
q <= d;
The next variant is including both an asynchronous reset and asynchronous set
condition; again the convention comes into play, i.e. the reset term is followed by the
set term.
reg q;
always @(posedge clk or posedge set or posedge reset)
  if (reset)
    q <= 0;
  else
    if (set)
      q <= 1;
    else
      q <= d;
Note: If this model is used to model a Set/Reset flip flop then simulation errors can
result. Consider the following test sequence of events. 1) reset goes high 2) clk goes
high 3) set goes high 4) clk goes high again 5) reset goes low followed by 6) set going
low. Assume no setup and hold violations.
In this example the always @ statement would first execute when the rising
edge of reset occurs which would place q to a value of 0. The next time the always
block executes would be the rising edge of clk which again would keep q at a value of
0. The always block then executes when set goes high which because reset is high
forces q to remain at 0. This condition may or may not be correct depending on the
actual flip flop. However, this is not the main problem with this model. Notice that
when reset goes low, that set is still high. In a real flip flop this will cause the output
to go to a 1. However, in this model it will not occur because the always block is
triggered by rising edges of set and reset - not levels. A different approach may be
necessary for set/reset flip flops.
Note that there are no "initial" blocks mentioned in this description. There is a
split between FPGA and ASIC synthesis tools on this structure. FPGA tools allow
initial blocks where reg values are established instead of using a "reset" signal. ASIC
synthesis tools don't support such a statement. The reason is that an FPGA's initial
state is something that is downloaded into the memory tables of the FPGA. An ASIC
is an actual hardware implementation.
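The FPGA-style initialization described above can be sketched as follows (a minimal illustration; the module and register names are assumptions):

```verilog
// On FPGA targets, the initial value becomes part of the configuration
// bitstream, so no reset signal is needed; ASIC flows would instead
// require an explicit reset.
module init_demo(input clk, input d, output reg q);
  initial q = 1'b0;        // power-up value honored by FPGA tools

  always @(posedge clk)
    q <= d;
endmodule
```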
Initial Vs Always:
There are two separate ways of declaring a Verilog process. These are the
always and the initial keywords. The always keyword indicates a free-running
process. The initial keyword indicates a process that executes exactly once. Both
constructs begin execution at simulator time 0, and both execute until the end of the
block. Once an always block has reached its end, it is rescheduled (again). It is a
common misconception to believe that an initial block will execute before an always
block. In fact, it is better to think of the initial-block as a special-case of the always-
block, one which terminates after it completes for the first time.
//Examples:
initial
  begin
    a = 1;   // assign reg a at time 0
    #1;      // wait 1 time unit
    b = a;   // assign the value of a to b
  end

always @(b) // execute any time b changes
  begin
    if (a)
      c = b;
    else
      d = ~b;
  end // Done with this block, now return to the top (i.e. the @ event-control)

always @(posedge clock) // execute on every rising edge of clock
  a <= b;
These are the classic uses for these two keywords, but there are two significant additional uses. The most common of these is an always keyword without the @(...) sensitivity list. It is possible to use always as shown below:
always
  begin      // executes from time 0 and never stops
    clk = 0; // set clk to 0
    #1;      // wait 1 time unit
    clk = 1; // set clk to 1
    #1;      // wait 1 time unit
  end        // continue back at the top of the begin

The always keyword acts similar to the "C" construct while(1) {..} in the sense that it will execute forever.
The other interesting exception is the use of the initial keyword with the addition of the forever keyword, which repeats its begin/end block for the rest of the simulation:

initial forever // start at time 0 and repeat the begin/end forever
  begin
    clk = 0; // set clk to 0
    #1;      // wait 1 time unit
    clk = 1; // set clk to 1
    #1;      // wait 1 time unit
  end
Race Condition
The order of execution isn't always guaranteed within Verilog. This can best be
illustrated by a classic example. Consider the code snippet below:
initial
  a = 0;

initial
  b = a;

initial
  begin
    #1;
    $display("Value of a = %d, value of b = %d", a, b);
  end
What will be printed out for the values of a and b? Depending on the order of
execution of the initial blocks, it could be zero and zero, or alternately zero and some
other arbitrary uninitialized value. The $display statement will always execute after
both assignment blocks have completed, due to the #1 delay.
Operators
Note: These operators are not shown in order of precedence.
Bitwise        ~          NOT
               &          AND
               |          OR
               ^          XOR
               ~^ or ^~   XNOR
Logical        !          NOT
               &&         AND
               ||         OR
Reduction      &          AND
               ~&         NAND
               |          OR
               ~|         NOR
               ^          XOR
               ~^ or ^~   XNOR
Arithmetic     +          Addition
               -          Subtraction
               -          2's complement (unary)
               *          Multiplication
               /          Division
               **         Exponentiation (*Verilog-2001)
Concatenation  { , }      Concatenation
Conditional    ?:         Conditional
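A short sketch of how the bitwise and reduction forms of the same symbols differ (the module and signal names are illustrative assumptions):

```verilog
// Bitwise operators work element-wise on vectors; reduction operators
// collapse one vector to a single bit.
module op_demo;
  reg [3:0] x = 4'b1010;
  reg [3:0] y = 4'b0110;
  initial begin
    $display("%b", x & y);   // bitwise AND   -> 0010
    $display("%b", x | y);   // bitwise OR    -> 1110
    $display("%b", x ^ y);   // bitwise XOR   -> 1100
    $display("%b", &x);      // reduction AND -> 0 (not all bits set)
    $display("%b", |x);      // reduction OR  -> 1 (at least one bit set)
    $display("%b", ^x);      // reduction XOR -> 0 (even number of 1s)
    $display("%0d", 2 ** 3); // exponentiation -> 8 (Verilog-2001)
  end
endmodule
```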
System Tasks
System tasks are available to handle simple I/O, and various design measurement
functions. All system tasks are prefixed with $ to distinguish them from user tasks and
functions. This section presents a short list of the most often used tasks. It is by no
means a comprehensive list.
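As a brief sketch of the most frequently used tasks ($display, $monitor, $time, and $finish are all standard IEEE 1364 system tasks; the module and signal names here are assumptions):

```verilog
// $display prints once; $monitor reprints whenever a listed value
// changes; $time returns the current simulation time; $finish ends
// the simulation.
module tasks_demo;
  reg [3:0] count = 0;
  initial begin
    $display("simulation starts at time %0t", $time);
    $monitor("time=%0t count=%0d", $time, count);
    repeat (3) #10 count = count + 1;
    $finish;
  end
endmodule
```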
CHAPTER 7
RESULTS
ALU SIMULATION
Design summary
Time summary
CHAPTER 8
ADVANTAGES
CHAPTER 9
APPLICATIONS
• The main application of these multipliers is in systems where execution time is crucial.
• These multipliers are used in low-power systems.
• A major application of such systems is in digital image processing, as convolution plays an important role in many edge-detection and related algorithms.
• Speeding up convolution and de-convolution using a Hardware Description Language for design entry not only raises the level of abstraction, but also opens new possibilities for using programmable devices.
• Digital electronic control in VCRs
• Transaction processing systems, ATMs
• Personal computers and workstations
• Medical electronic systems
CHAPTER 10
CONCLUSION & FUTURE SCOPE
REFERENCES
[1]. Garima Rawat, Khyati Rathore, Siddharth Goyal, Shefali Kala and Poornima Mittal, (2015). “Design and Analysis of ALU: Vedic Mathematics”. IEEE Int. Conf. on Computing, Communication and Automation (ICCCA 2015), pp. 1372-1376.
[2]. Rahul Nimje and Sharda Mungale, (2014). “Design of arithmetic unit for high-speed performance using Vedic mathematics”. International Journal of Engineering Research and Applications, pp. 26-31.
[8]. Pushpalata Verma, (2012). “Design of 4x4 bit Vedic Multiplier using EDA Tool”. International Journal of Computer Applications, Vol. 48, No. 20.
[9]. Aniruddha Kanhe, Shishir Kumar Das and Ankit Kumar Singh, (2012). “Design and Implementation of Low Power Multiplier Using Vedic Multiplication Technique”.
[10]. Umesh Akare, T.V. More and R.S. Lonkar, (2012). “Performance Evaluation and Synthesis of Vedic Multiplier”. National Conference on Innovative Paradigms in Engineering & Technology (NCIPET-2012), Proceedings published by International Journal of Computer Applications (IJCA), pp. 20-23.
[11]. Anvesh Kumar and Ashish Raman, (2010). “Low Power ALU Design by Ancient Mathematics”. IEEE, 978-1-4244-5586-7/10.