Hardware-Software Codesign

張雲南, Department of Computer Science and Engineering, National Sun Yat-sen University

Fall 2001

Outline
• Hardware-Software Codesign
  – Motivation
  – Domain
  – Problem
• System Specification
• Partition Methodology
  – Partition strategies
  – Partition algorithms
• Design Quality Estimation
• Specification Refinement
• Co-synthesis

Motivation for Co-design
• A cooperative approach to the design of HW/SW systems is required.
  – Traditionally, HW and SW are developed independently and integrated later.
  – Very little interaction occurs before system integration.
• Co-design improves overall system performance, reliability, and cost-effectiveness.
• Co-design benefits the design of embedded systems, which contain HW/SW tailored for a particular application.
  – The trend toward system-on-chip.
  – Supports the growing complexity of embedded systems.
  – Takes advantage of advances in tools and technologies.

Motivations for Codesign (cont.)
• The importance of codesign in designing hardware/software systems:
  – Improves design quality, design cycle time, and cost
    • Reduces integration and test time
  – Supports the growing complexity of embedded systems
  – Takes advantage of advances in tools and technologies
    • Processor cores
    • High-level hardware synthesis capabilities
    • ASIC development

Driving Factors for Co-design
• Instruction-set processors (ISPs) available as cores in many design kits.
• Increasing capacity of field-programmable devices.
• Efficient C compilers for embedded processors.
• Hardware synthesis capability.
• System on silicon
  – Many transistors available in typical processes.

Co-design Applications
• Embedded systems
  – Consumer electronics
  – Telecommunications
  – Manufacturing control
  – Vehicles
• Instruction-set architectures
  – Application-specific instruction-set processors
    • The instructions of the CPU are also a target of design.
• Reconfigurable systems
  – A dynamic compiler provides the HW/SW trade-off.

Categories of Codesign Problems
• Codesign of embedded systems
  – Usually consist of sensors, controllers, and actuators
  – Are reactive systems
  – Usually have real-time constraints
  – Usually have dependability constraints
• Codesign of ISAs
  – Application-specific instruction-set processors (ASIPs)
  – Compiler and hardware optimization and trade-offs
• Codesign of reconfigurable systems
  – Systems that can be personalized after manufacture for a specific application
  – Reconfiguration can be accomplished before execution or concurrently with execution (called evolvable systems)

Classic Design & Co-Design
(figure: in classic design, HW and SW are developed separately; in co-design, HW and SW design proceed together with interaction between the two)

Current Hardware/Software Design Process
• Basic features of the current process:
  – The system is immediately partitioned into hardware and software components
  – Hardware and software are developed separately
  – A "hardware first" approach is often adopted
• Implications of these features:
  – HW/SW trade-offs are restricted
    • The impact of HW and SW on each other cannot be assessed easily
  – Late system integration
• Consequences of these features:
  – Poor-quality designs
  – Costly modifications
  – Schedule slippages

Incorrect Assumptions in the Current Hardware/Software Design Process
• Hardware and software can be acquired separately and independently, with successful and easy integration of the two later
• Hardware problems can be fixed with simple software modifications
• Once operational, software rarely needs modification or maintenance
• Valid and complete software requirements are easy to state and implement in code

Typical Co-design Process
(figure: a functional system description, given as FSMs, directed graphs, concurrent processes, or programming languages, is translated into a unified data/control-flow representation; HW/SW partitioning then drives software synthesis, interface synthesis, and hardware synthesis, followed by system integration; instruction-set-level HW/SW evaluation can trigger another HW/SW partition)

Co-design Process
• System specification
• HW/SW partitioning
  – Architectural assumptions: type of processor, interface style, etc.
  – Partitioning objectives: speedup, silicon size, cost, latency requirements, etc.
  – Partitioning strategies: high-level partitioning by hand, computer-aided partitioning techniques, etc.
• HW/SW synthesis
  – Operation scheduling in hardware
  – Instruction scheduling in compilers
  – Process scheduling in operating systems

Requirements for the Ideal Codesign Environment
• Unified, unbiased hardware/software representation
  – Supports uniform design and analysis techniques for hardware and software
  – Permits system evaluation in an integrated design environment
  – Allows easy migration of system tasks to either hardware or software
• Iterative partitioning techniques
  – Allow several different designs (HW/SW partitions) to be evaluated
  – Aid in determining the best implementation for a system
  – Partitioning applied to modules to best meet design criteria (functionality and performance goals)

Requirements for the Ideal Codesign Environment (cont.)
• Integrated modeling substrate
  – Supports evaluation at several stages of the design process
  – Supports step-wise development and integration of hardware and software
• Validation methodology
  – Ensures that the implemented system meets the initial system requirements

Cross-fertilization Between Hardware and Software Design
• Fast growth in both VLSI design and software engineering has raised awareness of similarities between the two
  – Hardware synthesis
  – Programmable logic
  – Description languages
• Explicit attempts have been made to "transfer technology" between the domains

Conventional Codesign Methodology
(figure: analysis of constraints and requirements plus system specs feed HW/SW partitioning; the hardware description goes through hardware synthesis and configuration to yield hardware components, while the software description goes through software generation and parameterization to yield software modules; interface synthesis produces the HW/SW interfaces; HW/SW integration and cosimulation deliver the integrated system, followed by system evaluation and design verification) © IEEE 1994 [Rozenblit94]

Key Issues in the Codesign Process
• Unified representation
• HW/SW partitioning
  – Partitioning algorithms
  – HW/SW estimation methods
• HW/SW co-synthesis
  – Interface synthesis
  – Refinement of specification

Unified HW/SW Representation
• Unified representation: a representation of a system that can be used to describe its functionality independent of its implementation in hardware or software
  – Allows hardware/software partitioning to be delayed until trade-offs can be made
  – Typically used at a high level in the design process
• Provides a simulation environment after partitioning is done, for both hardware and software designers to use to communicate
• Supports cross-fertilization between the hardware and software domains

Specification Characteristics
• Concurrency
  – Data-driven concurrency
  – Control-driven concurrency
• State transitions
• Hierarchy
• Programming constructs
• Communication
• Synchronization
• Exception handling
• Timing

Current Abstraction Mechanisms in Hardware Systems
• Abstraction
  – The level of detail contained within the system model
• A system can be modeled at the system, instruction-set, register-transfer, logic, or circuit level
• A model can describe a system in the behavioral, structural, or physical domain

Abstractions in Modeling: Hardware Systems

Level                       | Behavior                | Structure                            | Physical
PMS (System)                | Communicating processes | Processors, memories, switches (PMS) | Cabinets, cables
Instruction Set (Algorithm) | Input-output            | Memory, ports, processors            | Board, floorplan
Register-Transfer           | Register transfers      | ALUs, regs, muxes, buses             | ICs, macro cells
Logic                       | Logic equations         | Gates, flip-flops                    | Std. cell layout
Circuit                     | Network equations       | Transistors, connections             | Transistor layout

(The annotations "start here" and "work to here" mark the typical design path, from PMS-level behavior down to the physical layouts.)

© IEEE 1990 [McFarland90]

Current Abstraction Mechanisms for Software Systems
• Virtual machine
  – A software layer very close to the hardware that hides the hardware's details and provides an abstract and portable view to the application programmer
  – Attributes:
    • The developer can treat it as the real machine
    • A convenient set of instructions can be used by the developer to model the system
    • Certain design decisions are hidden from the programmer
    • Operating systems are often viewed as virtual machines

Abstractions for Software Systems
• Virtual machine hierarchy:
  – Application programs
  – Utility programs
  – Operating system
  – Monitor
  – Machine language
  – Microcode
  – Logic devices

Abstract Hardware-Software Model
• Uses a unified representation of the system to allow early performance analysis
(figure: the abstract HW/SW model supports general performance evaluation, identification of bottlenecks, evaluation of design alternatives, and evaluation of HW/SW trade-offs)

Examples of Unified HW/SW Representations
• Systems can be modeled at a high level as:
  – Data/control flow diagrams
  – Concurrent processes
  – Finite state machines
  – Object-oriented representations
  – Petri nets

Unified Representations (cont.)
• Data/control flow graphs
  – Graphs contain nodes corresponding to operations in either hardware or software
  – Often used in high-level hardware synthesis
  – Can easily model data flow, control steps, and concurrent operations because of their graphical nature
(figure: example with inputs X, Y, 4, and 5 feeding four + operations scheduled across control steps 1-3)

Unified Representations (cont.)
• Concurrent processes
  – Interactive processes executing concurrently with other processes in the system-level specification
  – Enable hardware and software modeling
• Finite state machines
  – Provide a mathematical foundation for verifying system correctness, simulation, hardware/software partitioning, and synthesis
  – Multiple communicating FSMs can be used to model reactive real-time systems

Unified Representations (cont.)
• Object-oriented representations
  – Use techniques previously applied to software to manage complexity and change in hardware modeling
  – Use C++ to describe hardware and display OO characteristics
  – Use OO concepts such as
    • Data abstraction
    • Information hiding
    • Inheritance
  – Use a building-block approach to gain OO benefits
    • Higher component reuse
    • Lower design cost
    • Faster system design process
    • Increased reliability

Unified Representations (cont.)
• Object-oriented representation example with three levels of abstraction:
(figure: a Register (Read, Write) and an ALU (Add, Sub, AND, Shift, Mult, Div, Load, Store) composed into a Processor)

Unified Representations (cont.)
• Petri nets: a system model consisting of places, transitions, tokens, arcs, and a marking
  – Places: equivalent to conditions; hold tokens
  – Transitions: associated with events; a "firing" of a transition indicates that some event has occurred
  – Tokens: represent information flow through the system
  – Marking: a particular placement of tokens within the places of a Petri net, representing the state of the net
(figure: input places feeding a transition that deposits a token in an output place)

Typical DSP Algorithm l Traditional DSP – Convolution/Correlation – Filtering (FIR.k ] – DCT y[n] = -Â ak y[n . IIR) • y[n] = x[n] * h[n] = N – Adaptive Filtering (Varying Coefficient) k = -• Â x[k ]h[n .k ] + Â bk x[n] k =1 k =0 M -1 (2n + 1)kp x[k ] = e(k )Â x[n] cos[ ] 2N n =0 Fall 2002 31 N -1 .
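The filtering equation above can be evaluated directly from the difference equation. A minimal Python sketch follows; the coefficient values are made-up examples, not taken from the slides:

  # Direct-form evaluation of y[n] = -sum_k a_k*y[n-k] + sum_k b_k*x[n-k]
  def iir(x, a, b):
      # a: feedback coefficients a_1..a_N; b: feedforward coefficients b_0..b_M
      y = []
      for n in range(len(x)):
          acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
          acc -= sum(a[k - 1] * y[n - k] for k in range(1, len(a) + 1) if n - k >= 0)
          y.append(acc)
      return y

  # FIR is the special case with no feedback (a = []):
  print(iir([1, 0, 0, 0], a=[], b=[0.5, 0.3, 0.2]))  # impulse response equals b
  print(iir([1, 0, 0, 0], a=[-0.9], b=[1.0]))        # y[n] = 0.9*y[n-1] + x[n]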

Specification of DSP Algorithms
• Example: y(n) = a·x(n) + b·x(n-1) + c·x(n-2)
• Graphical representation method 1: block diagram (datapath architecture)
  – Consists of functional blocks connected with directed edges, which represent data flow from an input block to an output block
(figure: x(n) passes through two delay blocks D to form x(n-1) and x(n-2); the three signals are scaled by a, b, c and summed into y(n))

Graphical Representation Method 2: Signal-Flow Graph
• SFG: a collection of nodes and directed edges
• Nodes: represent computations and/or tasks; a node sums all incoming signals
• Directed edge (j, k): denotes a linear transformation from the input signal at node j to the output signal at node k
• Linear SFGs can be transformed into different forms without changing the system function
  – Flow-graph reversal, or transposition, is one such transformation (note: only applicable to single-input single-output systems)

Signal-Flow Graph (cont.)
• Usually used to represent linear time-invariant DSP systems
• Example: (figure: SFG of the three-tap filter, with z^-1 delay edges and branch gains a, b, c from x(n) to y(n))

Graphical Representation Method 3: Data-Flow Graph
• DFG: nodes represent computations (or functions or subtasks), while the directed edges represent data paths (data communication between nodes); each edge has a nonnegative number of delays associated with it
• The DFG captures the data-driven property of DSP algorithms: any node can perform its computation whenever all its input data are available
(figure: DFG of the three-tap filter, with delay edges marked D)

Data-Flow Graph (cont.)
• Each edge describes a precedence constraint between two nodes in the DFG:
  – Intra-iteration precedence constraint: if the edge has zero delays
  – Inter-iteration precedence constraint: if the edge has one or more delays (a delay here represents an iteration delay)
• DFGs and block diagrams can be used to describe both linear single-rate and nonlinear multi-rate DSP systems
(figure: fine-grain DFG of the three-tap filter)

Examples of DFGs
• Nodes can be complex blocks (in coarse-grain DFGs), e.g. FFT → adaptive filtering → IFFT
• Nodes can describe expanders/decimators in multi-rate DFGs
  – Decimator (↓2): N samples in, N/2 samples out
  – Expander (↑2): N/2 samples in, N samples out

Graphical Representation Method 4: Dependence Graph
• A DG contains the computations for all iterations of an algorithm
• A DG does not contain delay elements
• Edges represent precedence constraints among nodes
• Mainly used for systolic array design

System Partitioning
• System functionality is implemented on system components
  – ASICs, processors, memories, buses
• Two design tasks:
  – Allocate system components or ASIC constraints
  – Partition functionality among components
• Constraints
  – Cost, performance, size, power
• Partitioning is a central system design task

Hardware/Software Partitioning
• Definition
  – The process of deciding, for each subsystem, whether the required functionality is more advantageously implemented in hardware or software
• Goal
  – To achieve a partition that gives the required performance within the overall system requirements (in size, weight, power, cost, etc.)
• This is a multivariate optimization problem that, when automated, is an NP-hard problem

HW/SW Partitioning Issues
• Partitioning into hardware and software affects overall system cost and performance
• Hardware implementation
  – Provides higher performance via hardware speeds and parallel execution of operations
  – Incurs the additional expense of fabricating ASICs
• Software implementation
  – May run on high-performance processors at low cost (due to high-volume production)
  – Incurs the high cost of developing and maintaining (complex) software

Structural vs. Functional Partitioning
• Structural: implement structure, then partition
  – Good for hardware (size and pin) estimation
  – Size/performance trade-offs are difficult
  – Suffers from the potentially large number of objects
  – Difficult for HW/SW trade-offs
• Functional: partition function, then implement
  – Enables better size/performance trade-offs
  – Uses fewer objects; better for algorithms/humans
  – Permits hardware/software solutions
  – But it is harder than graph partitioning

Partitioning Approaches
• Software-oriented partitioning: start with all functionality in software and move into hardware those portions that are time-critical and cannot be allocated to software
• Hardware-oriented partitioning: start with all functionality in hardware and move portions into software implementations

System Partitioning (Functional Partitioning)
• System partitioning in the context of hardware/software codesign is also referred to as functional partitioning
• Partitioning functional objects among system components is done as follows:
  – The system's functionality is described as a collection of indivisible functional objects
  – Each system component's functionality is implemented in either hardware or software
• An important advantage of functional partitioning is that it allows hardware/software solutions

Binding Software to Hardware
• Binding: assigning software to hardware components
• After parallel implementation of assigned modules, all design threads are joined for system integration
  – Early binding commits a design process to a certain course
  – Late binding, on the other hand, provides greater flexibility for last-minute changes

Hardware/Software System Architecture Trends
• Some operations are implemented in special-purpose hardware
  – Generally take the form of a coprocessor communicating with the CPU over its bus
    • Computation must be long enough to compensate for the communication overhead
  – May be implemented totally in hardware to avoid instruction interpretation overhead
    • Utilize high-level synthesis algorithms to generate a register-transfer implementation from a behavioral description
• Partitioning algorithms are closely related to the process scheduling model used for the software side of the implementation

HW/SW Partition: Formal Definition
• A hardware/software partition is defined using two sets H and S, where H ⊆ O, S ⊆ O, H ∪ S = O, H ∩ S = ∅
• Associated metrics:
  – Hsize(H) is the size of the hardware needed to implement the functions in H (e.g., number of transistors)
  – Performance(G) is the total execution time for the group of functions in G for a given partition {H, S}
  – Cons = (C1, ..., Cm) is a set of performance constraints, where Cj = {G, timecon} indicates the maximum execution time allowed for all the functions in group G, G ⊆ O

Performance-Satisfying Partition
• A performance-satisfying partition is one for which performance(Cj.G) ≤ Cj.timecon for all j = 1...m
• Given O and Cons, the hardware/software partitioning problem is to find a performance-satisfying partition {H, S} such that Hsize(H) is minimized
• The all-hardware size of O is defined as the size of an all-hardware partition (i.e., Hsize(O))
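These definitions translate almost directly into code. A minimal Python sketch follows; the per-function metrics (hardware size, HW time, SW time) are hypothetical placeholders:

  # Hypothetical metrics per function: (hardware size, HW exec time, SW exec time)
  FUNCS = {"f1": (100, 2, 10), "f2": (250, 1, 8), "f3": (80, 3, 12)}

  def hsize(H):
      # Hsize(H): total size of the hardware implementing the functions in H
      return sum(FUNCS[f][0] for f in H)

  def performance(G, H):
      # Performance(G): total execution time of group G under partition {H, S}
      return sum(FUNCS[f][1] if f in H else FUNCS[f][2] for f in G)

  def satisfies(H, cons):
      # performance(Cj.G) <= Cj.timecon must hold for all j = 1..m
      return all(performance(G, H) <= timecon for (G, timecon) in cons)

  cons = [({"f1", "f2"}, 12), ({"f3"}, 12)]
  H = {"f2"}                            # f2 in hardware; f1, f3 in software
  print(satisfies(H, cons), hsize(H))   # True 250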

Basic Partitioning Issues
• Specification abstraction level: input definition
  – Executable languages are becoming a requirement, although natural languages are common in practice
  – Just indicating the language is insufficient; the abstraction level indicates the amount of design already done
    • e.g., task DFG, tasks, CDFG, FSMD
• Granularity: specification size in each object
  – Fine granularity yields more possible designs
  – Coarse granularity is better for computation and designer interaction
    • e.g., tasks, procedures, statement blocks, statements
• Component allocation: types and numbers
  – e.g., ASICs, processors, memories, buses

Basic Partitioning Issues (cont.)
• Metrics and estimations: attributes of a "good" partition
  – e.g., speed, power, cost, size, pins, testability, reliability
  – Estimates are derived from a quick, rough implementation
  – Speed and accuracy are competing goals of estimation
• Objective and closeness functions
  – Combine multiple metric values
  – Closeness is used for grouping before a complete partition exists
  – Weighted sums are common, e.g. Objfct = k1·F(area, c) + k2·F(delay, c) + k3·F(power, c)
• Output: format and uses
  – e.g., a new specification, hints to a synthesis tool
• Flow of control and designer interaction

Issues in Partitioning (cont.)
(figure: a high-level abstraction is decomposed into functional objects, which are partitioned using metrics and estimations, partitioning algorithms, and objective and closeness functions, subject to component allocation, to produce the output)

Specification Abstraction Levels
• Task-level dataflow graph
  – A dataflow graph where each operation represents a task
• Task
  – Each task is described as a sequential program
• Arithmetic-level dataflow graph
  – A dataflow graph of arithmetic operations along with some control operations
  – The most common model used in partitioning techniques
• Finite state machine (FSM) with datapath
  – A finite state machine, with possibly complex expressions computed in a state or during a transition

Specification Abstraction Levels (cont.)
• Register transfers
  – The transfers between registers for each machine state are described
• Structure
  – A structural interconnection of physical components
  – Often called a netlist

Granularity Issues in Partitioning
• The granularity of the decomposition is a measure of the size of the specification in each object
• The specification is first decomposed into functional objects, which are then partitioned among system components
  – Coarse granularity means that each object contains a large amount of the specification
  – Fine granularity means that each object contains only a small amount of the specification
    • Many more objects
    • More possible partitions
    • Better optimizations can be achieved

System Component Allocation
• The process of choosing system component types from among those allowed, and selecting a number of each to use in a given design
• The set of selected components is called an allocation
  – Various allocations can be used to implement a specification, each differing primarily in monetary cost and performance
  – Allocation is typically done manually or in conjunction with a partitioning algorithm
• A partitioning technique must designate the types of system components to which functional objects can be mapped
  – ASICs, memories, etc.

Metrics and Estimations Issues
• A technique must define the attributes of a partition that determine its quality
  – Such attributes are called metrics
    • Examples include monetary cost, execution time, communication bit-rates, power consumption, area, pins, testability, reliability, program size, data size, and memory size
    • Closeness metrics are used to predict the benefit of grouping any two objects
• Need to compute a metric's value
  – Because all metrics are defined in terms of the structure (or software) that implements the functional objects, it is difficult to compute costs, as no such implementation exists during partitioning

Metrics in HW/SW Partitioning
• Two key metrics are used in hardware/software partitioning
  – Performance: generally improved by moving objects to hardware
  – Hardware size: generally improved by moving objects out of hardware

Computation of Metrics
• Two approaches to computing metrics:
  – Creating a detailed implementation
    • Produces accurate metric values
    • Impractical, as it requires too much time
  – Creating a rough implementation
    • Includes the major register-transfer components of a design
    • Skips details such as precise routing or optimized logic, which require much design time
    • Determining metric values from a rough implementation is called estimation

Estimation of Partitioning Metrics
• Deterministic estimation techniques
  – Can be used only with a fully specified model, with all data dependencies removed and all component costs known
  – Result in very good partitions
• Statistical estimation techniques
  – Used when the model is not fully specified
  – Based on the analysis of similar systems and certain design parameters
• Profiling techniques
  – Examine control flow and data flow within an architecture to determine computationally expensive parts, which are better realized in hardware

Objective and Closeness Functions
• Multiple metrics, such as cost, power, and performance, are weighed against one another
  – An expression combining multiple metric values into a single value that defines the quality of a partition is called an objective function
  – The value returned by such a function is called cost
  – Because many metrics may be of varying importance, a weighted-sum objective function is used
    • e.g., Objfct = k1·area + k2·delay + k3·power
  – Because constraints always exist on each design, they must be taken into account
    • e.g., Objfct = k1·F(area, area_constr) + k2·F(delay, delay_constr) + k3·F(power, power_constr)
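A sketch of such an objective function in Python. The choice of F (a normalized constraint-violation penalty) and all weights and constraint values are illustrative assumptions:

  def F(value, constr):
      # Penalty term: zero when the constraint is met, normalized excess otherwise
      return max(0.0, (value - constr) / constr)

  def objfct(area, delay, power, k1=1.0, k2=1.0, k3=1.0,
             area_constr=1000.0, delay_constr=50.0, power_constr=2.0):
      # Objfct = k1*F(area, c) + k2*F(delay, c) + k3*F(power, c)
      return (k1 * F(area, area_constr) +
              k2 * F(delay, delay_constr) +
              k3 * F(power, power_constr))

  print(objfct(area=1200, delay=40, power=2.5))  # 0.45: area and power violate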

Partitioning Algorithm Issues
• Given a set of functional objects and a set of system components, a partitioning algorithm searches for the best partition, which is the one with the lowest cost, as computed by an objective function
• While the best partition can be found through exhaustive search, this method is impractical because of the inordinate amount of computation and time required
• The essence of a partitioning algorithm is the manner in which it chooses the subset of all possible partitions to examine

Partitioning Algorithm Classes
• Constructive algorithms
  – Group objects into a complete partition
  – Use closeness metrics to group objects, hoping for a good partition
  – Spend computation time constructing a small number of partitions
• Iterative algorithms
  – Modify a complete partition in the hope that such modifications will improve it
  – Use an objective function to evaluate each partition
  – Yield more accurate evaluations than the closeness functions used by constructive algorithms
• In practice, a combination of constructive and iterative algorithms is often employed

Iterative Partitioning Algorithms
• The computation time in an iterative algorithm is spent evaluating large numbers of partitions
• Iterative algorithms differ from one another primarily in the ways they modify the partition and in how they accept or reject bad modifications
• The goal is to find the global minimum while performing as little computation as possible
(figure: cost curve with A and B as local minima and C as the global minimum)

Iterative Partitioning Algorithms (cont.)
• Greedy algorithms
  – Only accept moves that decrease cost
  – Can get trapped in local minima
• Hill-climbing algorithms
  – Allow moves in directions that increase cost (retracing), through the use of stochastic functions
  – Can escape local minima
  – e.g., simulated annealing

Typical Partitioning-System Configuration
(figure: typical partitioning-system configuration)

Basic Partitioning Algorithms
• Random mapping
  – Only used for the creation of the initial partition
• Clustering and multi-stage clustering
• Group migration (a.k.a. min-cut or Kernighan/Lin)
• Ratio cut
• Simulated annealing
• Genetic evolution
• Integer linear programming

Hierarchical Clustering
• A constructive algorithm based on closeness metrics to group objects
• Fundamental steps:
  – Group the closest objects
  – Recompute closenesses
  – Repeat until the termination condition is met
• A cluster tree maintains the history of merges
  – A cutline across the tree defines a partition

Hierarchical Clustering Algorithm

/* Initialize each object as a group */
for each oi loop
  pi = oi
  P = P ∪ pi
end loop
/* Compute closenesses between objects */
for each pi loop
  for each pj loop
    c(i,j) = ComputeCloseness(pi, pj)
  end loop
end loop
/* Merge closest objects and recompute closenesses */
while not Terminate(P) loop
  pi, pj = FindClosestObjects(P, C)
  P = P - pi - pj ∪ pij
  for each pk loop
    c(ij,k) = ComputeCloseness(pij, pk)
  end loop
end loop
return P
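A runnable Python rendering of this algorithm, using a made-up symmetric closeness table and a simple termination test (stop when no remaining pair is closer than a threshold). Taking the maximum closeness across cross-group pairs when recomputing is one plausible choice:

  import itertools

  def closeness(ci, cj, c):
      # Closeness between (possibly merged) groups: max over cross pairs
      return max(c[a][b] for a in ci for b in cj)

  def hierarchical_clustering(objects, c, threshold):
      P = [frozenset([o]) for o in objects]     # each object starts as its own group
      while len(P) > 1:
          pi, pj = max(itertools.combinations(P, 2),
                       key=lambda pair: closeness(pair[0], pair[1], c))
          if closeness(pi, pj, c) < threshold:
              break                             # Terminate(P): nothing close enough
          P = [p for p in P if p not in (pi, pj)] + [pi | pj]
      return P

  c = {"a": {"b": 0.9, "c": 0.1, "d": 0.2}, "b": {"a": 0.9, "c": 0.3, "d": 0.1},
       "c": {"a": 0.1, "b": 0.3, "d": 0.8}, "d": {"a": 0.2, "b": 0.1, "c": 0.8}}
  print(hierarchical_clustering("abcd", c, threshold=0.5))  # [{a,b}, {c,d}]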

Hierarchical Clustering Example
(figure: objects merged step by step into a cluster tree; cutlines across the tree define partitions)

Greedy Partitioning for HW/SW Partition
• A two-way partitioning algorithm between the HW and SW groups
• Suffers from the local-minimum problem

repeat
  P_orig = P
  for i in 1 to n loop
    if Objfct(Move(P, oi)) < Objfct(P) then
      P = Move(P, oi)
    end if
  end loop
until P = P_orig
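The same loop in runnable Python; Move flips one object between HW and SW, and the objective (hardware area plus weighted software time) is a stand-in with made-up numbers:

  def move(P, o):
      # Move(P, o): flip object o to the other group
      return dict(P, **{o: "HW" if P[o] == "SW" else "SW"})

  def greedy_partition(objects, objfct, P):
      while True:                               # repeat ... until P = P_orig
          P_orig = P
          for o in objects:
              if objfct(move(P, o)) < objfct(P):
                  P = move(P, o)
          if P == P_orig:
              return P

  AREA = {"f1": 50, "f2": 250}; SWTIME = {"f1": 10, "f2": 40}
  objfct = lambda P: sum(AREA[o] for o in P if P[o] == "HW") + \
                     10 * sum(SWTIME[o] for o in P if P[o] == "SW")
  print(greedy_partition(["f1", "f2"], objfct, {"f1": "SW", "f2": "SW"}))
  # {'f1': 'HW', 'f2': 'HW'}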

Multi-Stage Clustering
• Start hierarchical clustering with one metric and then continue with another metric
• Each clustering with a particular metric is called a stage

Group Migration
• Another iterative-improvement algorithm, extended from the two-way partitioning algorithm, that addresses its local-minimum problem
• The movement of objects between groups depends on whether it produces the greatest decrease or the smallest increase in cost
  – To prevent an infinite loop in the algorithm, each object can only be moved once per pass

Group Migration Algorithm

P = P_in
loop
  /* Initialize */
  prev_P = P
  prev_cost = Objfct(P)
  bestpart_cost = ∞
  for each oi loop oi.moved = false end loop
  /* Create a sequence of n moves */
  for i in 1 to n loop
    bestmove_cost = ∞
    for each oi with not oi.moved loop
      cost = Objfct(Move(P, oi))
      if cost < bestmove_cost then
        bestmove_cost = cost
        bestmove_obj = oi
      end if
    end loop
    P = Move(P, bestmove_obj)
    bestmove_obj.moved = true
    /* Save the best partition during the sequence */
    if bestmove_cost < bestpart_cost then
      bestpart_P = P
      bestpart_cost = bestmove_cost
    end if
  end loop
  /* Update P if a better cost was found, else exit */
  if bestpart_cost < prev_cost then
    P = bestpart_P
  else
    return prev_P
  end if
end loop
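A compact Python sketch of one pass structure of this algorithm, reusing the move and objfct definitions from the greedy sketch above (metric values again hypothetical):

  def group_migration(objects, objfct, P, move):
      while True:
          prev_P, prev_cost = dict(P), objfct(P)
          trial, moved = dict(P), set()
          best_P, best_cost = None, float("inf")
          for _ in objects:                     # a sequence of n moves
              # choose the not-yet-moved object whose move costs least
              o = min((x for x in objects if x not in moved),
                      key=lambda x: objfct(move(trial, x)))
              trial = move(trial, o)
              moved.add(o)
              if objfct(trial) < best_cost:     # remember best partition in sequence
                  best_P, best_cost = dict(trial), objfct(trial)
          if best_cost < prev_cost:
              P = best_P                        # accept, start another pass
          else:
              return prev_P                     # no pass-level improvement: stop

  print(group_migration(["f1", "f2"], objfct, {"f1": "SW", "f2": "SW"}, move))
  # {'f1': 'HW', 'f2': 'HW'}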

Ratio Cut
• A constructive algorithm that groups objects until a termination condition is met
• A new metric, ratio, is defined as:
  ratio = cut(P) / (size(p1) × size(p2))
  – cut(P): sum of the weights of the edges that cross between p1 and p2
  – size(pi): size of pi
• Based on this new metric, the partitioning algorithm tries to group objects to reduce the cutsize without grouping objects that are not close
• The ratio metric balances the competing goals of grouping objects to reduce the cutsize without grouping distant objects

Simulated Annealing
• An iterative algorithm modeled after the physical annealing process, used to avoid the local-minimum problem
• Overview:
  – Starts with an initial partition and temperature
  – Slowly decreases the temperature
  – For each temperature, generates random moves
  – Accepts any move that improves cost
  – Accepts some bad moves, less likely at low temperatures
• Results and complexity depend on the rate of temperature decrease

Simulated Annealing Algorithm

temp = initial temperature
cost = Objfct(P)
while not Frozen loop
  while not Equilibrium loop
    P_tentative = Move(P)
    cost_tentative = Objfct(P_tentative)
    Δcost = cost_tentative - cost
    if Accept(Δcost, temp) > Random(0, 1) then
      P = P_tentative
      cost = cost_tentative
    end if
  end loop
  temp = DecreaseTemp(temp)
end loop

where Accept(Δcost, temp) = min(1, e^(-Δcost/temp))
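A runnable Python version of this loop with concrete (and arbitrary) choices for the schedule: a geometric temperature decrease, a fixed number of moves per temperature, and a random single-object flip as Move:

  import math, random

  def simulated_annealing(P, objfct, temp=10.0, alpha=0.9, moves=50, frozen=0.01):
      def move(p):                              # Move(P): flip one random object
          o = random.choice(list(p))
          return dict(p, **{o: "HW" if p[o] == "SW" else "SW"})
      cost = objfct(P)
      while temp > frozen:                      # "frozen" once temperature is tiny
          for _ in range(moves):                # crude equilibrium condition
              P_t = move(P)
              d = objfct(P_t) - cost
              # Accept(d, temp) = min(1, e^(-d/temp)) > Random(0, 1)
              if d <= 0 or math.exp(-d / temp) > random.random():
                  P, cost = P_t, objfct(P_t)
          temp *= alpha                         # DecreaseTemp(temp)
      return P

  AREA = {"f1": 50, "f2": 250, "f3": 80}; SWTIME = {"f1": 10, "f2": 40, "f3": 25}
  objfct = lambda P: sum(AREA[o] for o in P if P[o] == "HW") + \
                     10 * sum(SWTIME[o] for o in P if P[o] == "SW")
  print(simulated_annealing({o: "SW" for o in AREA}, objfct))  # typically all-HW here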

Genetic Evolution
• Genetic algorithms treat a set of partitions as a generation, and create a new generation from the current one by imitating three evolution methods found in nature
• Three evolution methods:
  – Selection: a randomly selected partition survives
  – Crossover: a new partition is created from two randomly selected strong partitions
  – Mutation: a randomly selected partition is randomly modified
• Produce good results but suffer from long run times

Genetic Evolution Algorithm

/* Create the first generation with gen_size random partitions */
G = ∅
for i in 1 to gen_size loop
  G = G ∪ CreateRandomPart(O)
end loop
P_best = BestPart(G)
/* Evolve generations */
while not Terminate loop
  G = Select(G, num_sel) ∪ Cross(G, num_cross) ∪ Mutate(G, num_mutate)
  if Objfct(BestPart(G)) < Objfct(P_best) then
    P_best = BestPart(G)
  end if
end loop
/* Return the best partition in the final generation */
return P_best
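A small Python sketch of this scheme: selection keeps the stronger half of the generation, crossover mixes two surviving partitions gene-by-gene, and mutation flips one assignment. Population sizes, rates, and metric values are arbitrary assumptions:

  import random

  def genetic_partition(objects, objfct, gen_size=20, generations=50):
      rand = lambda: {o: random.choice(["HW", "SW"]) for o in objects}
      G = [rand() for _ in range(gen_size)]           # first generation
      best = min(G, key=objfct)
      for _ in range(generations):
          G.sort(key=objfct)
          survivors = G[:gen_size // 2]               # selection
          children = []
          while len(survivors) + len(children) < gen_size:
              a, b = random.sample(survivors, 2)      # crossover of two strong partitions
              child = {o: random.choice([a[o], b[o]]) for o in objects}
              m = random.choice(objects)              # mutation: flip one assignment
              child[m] = "HW" if child[m] == "SW" else "SW"
              children.append(child)
          G = survivors + children
          best = min(G + [best], key=objfct)
      return best

  AREA = {"f1": 50, "f2": 250, "f3": 80}; SWTIME = {"f1": 10, "f2": 40, "f3": 25}
  objfct = lambda P: sum(AREA[o] for o in P if P[o] == "HW") + \
                     10 * sum(SWTIME[o] for o in P if P[o] == "SW")
  print(genetic_partition(list(AREA), objfct))        # typically all-HW here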

Integer Linear Programming
• A linear program formulation consists of a set of variables, a set of linear inequalities, and a single linear function of the variables that serves as the objective function
  – An integer linear program is a linear program in which the variables can only hold integers
• For partitioning purposes, the variables are used to represent partitioning decisions or metric estimations
• Still an NP-hard problem that requires some heuristics
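As an illustration of the formulation (not taken from the slides), the sketch below states the minimal-hardware-size partitioning problem as a 0/1 ILP using the PuLP package (assumed installed); the metric numbers are hypothetical, and x_f = 1 places function f in hardware:

  from pulp import LpProblem, LpVariable, LpMinimize, lpSum

  funcs = {"f1": (100, 2, 10), "f2": (250, 1, 8), "f3": (80, 3, 12)}  # (area, hw_t, sw_t)
  x = {f: LpVariable("x_" + f, cat="Binary") for f in funcs}          # 1 = in hardware

  prob = LpProblem("hwsw_partition", LpMinimize)
  prob += lpSum(funcs[f][0] * x[f] for f in funcs)                    # minimize Hsize(H)
  for G, timecon in [({"f1", "f2"}, 12), ({"f3"}, 12)]:               # performance constraints
      prob += lpSum(funcs[f][1] * x[f] + funcs[f][2] * (1 - x[f]) for f in G) <= timecon

  prob.solve()
  print({f: int(x[f].value()) for f in funcs})   # {'f1': 1, 'f2': 0, 'f3': 0}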

Partition Example: the Yorktown Silicon Compiler
• The Yorktown Silicon Compiler uses a hierarchical clustering algorithm with the following closeness function:

  Closeness(pi, pj) = (Conn(i,j) / MaxConn(P))^k1 × (size_max / Min(size_i, size_j))^k2 × (size_max / (size_i + size_j))^k3

  where
  – Conn(i,j) = inputs(i,j) + wires(i,j)
  – inputs(i,j): number of common inputs shared
  – wires(i,j): number of output-to-input and input-to-output connections
  – MaxConn(P): maximum Conn over all pairs
  – size_i: estimated size of group pi
  – size_max: maximum group size allowed
  – k1, k2, k3: constants

YSC Partitioning Example
• YSC partitioning example: (a) input; (b) operations; (c) operation closeness values; (d) clusters formed with a 0.5 closeness threshold
• Example computation:
  Closeness(+, =) = ((8+0)/8) × (300/120) × (300/(120+140)) = 2.9
• The closeness values between all operations are shown in figure 6.6(c); the (+, <) and (-, <) pairs have closeness values of 0.9 and 0.8, and all other operation pairs have a closeness value of 0
• Figure 6.6(d) shows the result of hierarchical clustering with a closeness threshold of 0.5: the + and = operations form one cluster, and the < and - operations form a second cluster

YSC Partitioning with Similarities
(figure: (a) clusters formed with a 3.0 closeness threshold; (b) operation similarity table; (c) closeness values with similarities; (d) clusters formed)

Output Issues in Partitioning
• Any partitioning technique must define the representation format and potential use of its output
  – e.g., the format may be a list indicating which functional object is mapped to which system component
  – e.g., the output may be a revised specification
    • Containing structural objects for the system components
    • Defining a component's functionality using the functional objects mapped to it

Flow of Control and Designer Interaction
• The sequence in which decisions are made is variable, and any partitioning technique must specify the appropriate sequences
  – e.g., selection of granularity, closeness metrics, closeness functions
• Two classes of interaction:
  – Directives
    • Include possible actions the designer can perform manually, such as allocation, overriding estimations, etc.
  – Feedback
    • Describes the current design information available to the designer (e.g., graphs of wires between objects, histograms, etc.)

Comparing Partitions Using Cost Functions
• A cost function is a function Cost(H, S, Cons, I) which returns a natural number that summarizes the overall quality of a given partition
  – I contains any additional information that is not contained in H or S or Cons
  – A smaller cost function value is desired
• An iterative-improvement partitioning algorithm is defined as a procedure Part_Alg(H, S, Cons, I, Cost()) which returns a partition H', S' such that Cost(H', S', Cons, I) ≤ Cost(H, S, Cons, I)

Estimation
• Estimates allow
  – Evaluation of design quality
  – Design-space exploration
• Design model
  – Represents the degree of design detail computed
  – Simple vs. complex models
• Issues for estimation
  – Accuracy
  – Speed
  – Fidelity

Typical Estimation Model Example

Design model           | Additional task
Mem                    | Mem allocation
Mem+FU                 | FU allocation
Mem+FU+Reg             | Lifetime analysis
Mem+FU+Reg+Mux         | FU binding
Mem+FU+Reg+Mux+Wiring  | Floorplanning

(Moving down the table, accuracy and fidelity increase from low to high, while estimation speed drops from fast to slow.)

Accuracy vs. Speed
• Accuracy: difference between the estimated and actual values
  A = 1 - |E(D) - M(D)| / M(D)
  where E(D) and M(D) are the estimated and measured values of design D
• Speed: computation time spent to obtain the estimate
• Simplified estimation models yield fast estimators, but result in greater estimation error and less accuracy
(figure: estimation error vs. computation time, from simple models to the actual design)

Fidelity
• Estimates must predict quality metrics for different design alternatives
• Fidelity: the percentage of correct predictions for pairs of design implementations
• The higher the fidelity of the estimation, the more likely it is that correct decisions will be made based on estimates
• Definition of fidelity:
  F = 100 × (2 / (n(n-1))) × Σ_{i=1..n} Σ_{j=i+1..n} μ(i,j)
  where μ(i,j) = 1 if the estimates order designs i and j the same way the measurements do, and 0 otherwise
• Example: for design points A, B, C with
  (A, B): E(A) > E(B), but M(A) < M(B)
  (B, C): E(B) < E(C), but M(B) > M(C)
  (A, C): E(A) < E(C), and M(A) < M(C)
  only the (A, C) prediction is correct, so fidelity = 33%

Quality Metrics
• Performance metrics
  – Clock cycle, control steps, execution time, communication rates
• Cost metrics
  – Hardware: manufacturing cost (area), packaging cost (pins)
  – Software: program size, data memory size
• Other metrics
  – Power
  – Design for testability: controllability and observability
  – Design time
  – Time to market

Hardware Design Model
(figure: hardware design model)

Clock Cycle Metric
• The selection of a clock cycle before synthesis affects the practical execution time and the hardware resources
• A simple estimation of the clock cycle is based on the maximum-operator-delay method:
  clk(MOD) = max over all ti of delay(ti)
  – The estimation is simple but may lead to underutilization of the faster functional units
• Clock slack represents the portion of the clock cycle for which a functional unit is idle

Clock Cycle Estimation
(figure: clock-cycle estimation example)

Clock Slack and Utilization
• Slack: the portion of the clock cycle for which an FU is idle
  slack(clk, ti) = ⌈delay(ti) / clk⌉ × clk - delay(ti)
• Average slack: FU slack averaged over all operations
  ave_slack(clk) = Σ_ti [occur(ti) × slack(clk, ti)] / Σ_ti occur(ti)
• Clock utilization: the percentage of the clock cycle utilized for computations
  utilization(clk) = 1 - ave_slack(clk) / clk

4 ns utilization(65 ns) = 1 .Clock utilization 6x32 2x9 2 x 17 + ave_slack(65 ns)= 6+2+2 + = 24.0) = 62% Fall 2002 95 .(24.4 / 65.

Control Steps Estimation
• Operations in the specification are assigned to control steps
• The number of control steps reflects:
  – The execution time of the design
  – The complexity of the control unit
• Techniques used to estimate the number of control steps in a behavior specified as straight-line code:
  – Operator-use method
  – Scheduling

Operator-Use Method
• Easy to estimate the number of control steps given the resources of an implementation
• The number of control steps for each node can be calculated as:
  csteps(nj) = max over ti ∈ T of ( ⌈occur(ti) / num(ti)⌉ × clocks(ti) )
• The total number of control steps for behavior B is:
  csteps(B) = max over nj ∈ N of csteps(nj)
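A direct transcription of the operator-use formula; the occurrence counts, unit counts, and per-operation clock counts below are hypothetical:

  import math

  def csteps_node(occur, num, clocks):
      # csteps(nj) = max over op types t of ceil(occur(t)/num(t)) * clocks(t)
      return max(math.ceil(occur[t] / num[t]) * clocks[t] for t in occur)

  # e.g. 6 multiplications on 2 multipliers (2 cycles each),
  #      4 ALU operations on 1 ALU (1 cycle each)
  print(csteps_node({"mul": 6, "alu": 4}, {"mul": 2, "alu": 1}, {"mul": 2, "alu": 1}))  # 6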

Operator-Use Method Example
(figure: the differential-equation example)

Scheduling
• A scheduling technique is applied to the behavioral description in order to determine the number of control steps
• Resource-constrained vs. time-constrained scheduling
• Obtaining an estimate based on scheduling is quite expensive

Scheduling for DSP Algorithms
• Scheduling: assigning the nodes of a DFG to control times
• Resource allocation: assigning the nodes of a DFG to hardware (functional units)
• High-level synthesis
  – Resource-constrained synthesis
  – Time-constrained synthesis

Classification of Scheduling Algorithms
• Iterative/constructive scheduling algorithms
  – As-soon-as-possible (ASAP) scheduling
  – As-late-as-possible (ALAP) scheduling
  – List-scheduling algorithms
• Transformational scheduling algorithms
  – Force-directed scheduling
  – Iterative loop-based scheduling
  – Other heuristic scheduling algorithms

The DFG Used in the Examples
(figure)

As-Soon-As-Possible (ASAP) Scheduling
• Finds the minimum start time of each node
(figure: ASAP scheduling algorithm)

ASAP Scheduling (cont.)
(figure: the ASAP schedule for the 2nd-order differential equation)

As-Late-As-Possible (ALAP) Scheduling
• Finds the maximum start time of each node
(figure: ALAP scheduling algorithm)

ALAP Scheduling (cont.)
(figure: the ALAP schedule for the 2nd-order differential equation)
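A minimal Python sketch of both schedules for a toy DFG (node names and delays invented here). The difference between the ALAP and ASAP start times is each node's scheduling range (mobility), which the list scheduler discussed next can prioritize by:

  def asap(succs, delay):
      # succs: node -> list of successors; returns earliest start step of each node
      start = {}
      def visit(n):
          if n not in start:
              preds = [p for p in succs if n in succs[p]]
              start[n] = max((visit(p) + delay[p] for p in preds), default=0)
          return start[n]
      for n in succs:
          visit(n)
      return start

  def alap(succs, delay, deadline):
      # latest start times such that every node finishes by `deadline`
      start = {}
      def visit(n):
          if n not in start:
              limit = min((visit(s) for s in succs[n]), default=deadline)
              start[n] = limit - delay[n]
          return start[n]
      for n in succs:
          visit(n)
      return start

  g = {"*1": ["+1"], "*2": ["+1"], "+1": ["<1"], "<1": []}
  d = {"*1": 2, "*2": 2, "+1": 1, "<1": 1}
  print(asap(g, d))              # {'*1': 0, '*2': 0, '+1': 2, '<1': 3}
  print(alap(g, d, deadline=5))  # {'<1': 4, '+1': 3, '*1': 1, '*2': 1}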

List Scheduling (resource-constrained)
• A simple list-scheduling algorithm prioritizes nodes by decreasing criticalness (e.g., scheduling range)

Force-Directed Scheduling (time-constrained)
• A transformational algorithm

Force-Directed Scheduling (cont.)
• Figure (a) shows the time frames of the example DFG and the associated probabilities (obtained using ASAP and ALAP)
• Figure (b) shows the distribution graphs (DGs) for the 2nd-order differential equation

Force-Directed Scheduling: Self-Force

Self_Force(j) = Σ_{i=Sj..Lj} [DG(i) × x(i)]

• Example:
  Self_Force4(1) = Force4(1) + Force4(2)
                 = (DG_M(1)×x4(1)) + (DG_M(2)×x4(2))
                 = (2.833×(1-0.5)) + (2.333×(0-0.5)) = +0.25

Force-Directed Scheduling Example (cont.)

  Self_Force4(2) = Force4(1) + Force4(2)
                 = (DG_M(1)×x4(1)) + (DG_M(2)×x4(2))
                 = (2.833×(0-0.5)) + (2.333×(1-0.5)) = -0.25
  Succ_Force4(2) = Self_Force8(2) + Self_Force8(3)
                 = (DG_M(2)×x8(2)) + (DG_M(3)×x8(3))
                 = (2.333×(0-0.5)) + (0.833×(1-0.5)) = -0.75
  Force4(2) = Self_Force4(2) + Succ_Force4(2) = -0.25 - 0.75 = -1.00

Force-Directed Scheduling (another example)
(worked example: self-, predecessor-, and successor-forces computed for adders A1-A3 and multipliers M1, M2 at their candidate time steps)

Schedulers
• Critical-path scheduler
  – Based on the precedence graph (intra-iteration precedence constraints)
• Overlapping scheduler
  – Allows iterations to overlap
• Block scheduler
  – Allows iterations to overlap
  – Allows different iterations to be assigned to different processors

Overlapping Schedule
• Example: the minimum iteration period obtained from the critical-path scheduler is 8 t.u.
(figure: DFG with nodes A(2), B(2), C(2), D(2), E(1) and delay edges 2D and 3D; the overlapped schedule on processors P1-P4 achieves a shorter iteration period)

Block Schedule
• Example: the minimum iteration period obtained from the critical-path scheduler is 20 t.u.
(figure: DFG with nodes A(20), B(4), and others plus a 2D delay edge; the block schedule on processors P1-P3 assigns different iterations to different processors)

Branching in Behaviors
• Control steps may be shared across exclusive branches
  – Sharing schedule: fewer states, but a status register is required
  – Non-sharing schedule: more states, no status registers

Execution Time Estimation
• Execution time: the average start-to-finish time of a behavior
• Straight-line code behaviors:
  execution(B) = csteps(B) × clk
• Behaviors with branching:
  – Estimate the execution time of each basic block
  – Create a control flow graph from the basic blocks
  – Determine the branching probabilities
  – Formulate equations for node frequencies
  – Solve the set of equations
  execution(B) = Σ_{bi ∈ B} exectime(bi) × freq(bi)

Probability-Based Flow Analysis
(figure: control flow graph annotated with branching probabilities)

Probability-Based Flow Analysis (cont.)
• Flow equations (one per node, using the branching probabilities), e.g.:
  freq(S) = 1.0
  freq(v1) = 1.0 × freq(S)
  freq(v2) = 1.0 × freq(v1) + 0.9 × freq(v5)
  ...
• Solving the equations gives the node execution frequencies:
  freq(v1) = 1.0, freq(v2) = 10.0, freq(v3) = 5.0, freq(v4) = 5.0, freq(v5) = 10.0, freq(v6) = 1.0
• Can be used to estimate the number of accesses to variables, channels, or procedures
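The flow equations can be solved mechanically. A small Python sketch follows, fixed-point iterating the equations for a graph like the slide's; the edge probabilities (an even 0.5/0.5 branch and a 0.9 loop-back) are assumptions consistent with the frequencies quoted above:

  def node_frequencies(edges, src, iters=300):
      # edges: (u, v, p) means control flows u -> v with probability p;
      # solves freq(v) = sum over incoming edges of p * freq(u), freq(src) = 1
      nodes = {src} | {u for u, _, _ in edges} | {v for _, v, _ in edges}
      freq = {n: 0.0 for n in nodes}
      for _ in range(iters):                    # fixed-point iteration
          new = {n: (1.0 if n == src else 0.0) for n in nodes}
          for u, v, p in edges:
              new[v] += p * freq[u]
          freq = new
      return freq

  edges = [("S", "v1", 1.0), ("v1", "v2", 1.0), ("v2", "v3", 0.5), ("v2", "v4", 0.5),
           ("v3", "v5", 1.0), ("v4", "v5", 1.0), ("v5", "v2", 0.9), ("v5", "v6", 0.1)]
  f = node_frequencies(edges, "S")
  print({n: round(v, 1) for n, v in sorted(f.items())})
  # {'S': 1.0, 'v1': 1.0, 'v2': 10.0, 'v3': 5.0, 'v4': 5.0, 'v5': 10.0, 'v6': 1.0}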

Communication Rates
• Communication between concurrent behaviors (or processes) is usually represented as messages sent over an abstract channel
• Communication channels may be either explicitly specified in the description or created after system partitioning
• The average rate of a channel C, avgrate(C), is defined as the rate at which data is sent during the entire channel's lifetime
• The peak rate of a channel, peakrate(C), is defined as the rate at which data is sent in a single message transfer

Communication Rate Estimation
• The total behavior execution time consists of:
  – Computation time, comptime(B): the time required for behavior B to perform its internal computation (obtained by the flow-analysis method)
  – Communication time, commtime(B, C): the time spent by the behavior transferring data over the channel
    commtime(B, C) = access(B, C) × protdelay(C)
• Total bits transferred by the channel:
  total_bits(B, C) = access(B, C) × bits(C)
• Channel average rate:
  avgrate(C) = total_bits(B, C) / (comptime(B) + commtime(B, C))
• Channel peak rate:
  peakrate(C) = bits(C) / protdelay(C)

Communication Rates Example
• Average channel rate: rate of data transfer over the lifetime of the behavior
  avgrate(C) = 56 bits / 1000 ns = 56 Mb/s
• Peak channel rate: rate of data transfer of a single message
  peakrate(C) = 8 bits / 100 ns = 80 Mb/s

Area Estimation
• Two tasks:
  – Determining the number and type of components required
  – Estimating component size for a specific technology (FSMD, gate arrays, etc.)
• A behavior implemented as an FSMD (finite state machine with datapath)
  – Datapath components: registers, functional units, multiplexers/buses
  – Control unit: state register, control logic, next-state logic
• Area can be assessed from the following aspects:
  – Datapath component estimation
  – Control unit estimation
  – Layout area for a custom implementation

Clique Partitioning
• Commonly used for determining datapath components
• Let G = (V, E) be a graph, where V and E are the sets of vertices and edges
• A clique is a complete subgraph of G
• Clique partitioning
  – Divides the vertices into a minimal number of cliques
  – Each vertex is in exactly one clique
• One heuristic: maximum number of common neighbors
  – The two nodes with the maximum number of common neighbors are merged
  – Edges to the two nodes are replaced by edges to the merged node
  – The process repeats until no more nodes can be merged

Clique Partitioning Example
(figure)
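A runnable Python sketch of the max-common-neighbors heuristic described above; the compatibility graph is a made-up example in which an edge means two operations could share one unit:

  def clique_partition(vertices, edges):
      # Greedily merge an adjacent pair with the most common neighbors;
      # the merged supernode stays connected only to the pair's common neighbors.
      adj = {v: set() for v in vertices}
      members = {v: [v] for v in vertices}
      for a, b in edges:
          adj[a].add(b); adj[b].add(a)
      while True:
          pairs = sorted((a, b) for a in adj for b in adj[a] if a < b)
          if not pairs:                          # no mergeable pair left
              return list(members.values())
          a, b = max(pairs, key=lambda p: len(adj[p[0]] & adj[p[1]]))
          common = adj[a] & adj[b]
          new = a + b                            # label for the merged node
          members[new] = members.pop(a) + members.pop(b)
          for n in (adj.pop(a) | adj.pop(b)) - {a, b}:
              adj[n].discard(a); adj[n].discard(b)
          adj[new] = common
          for n in common:
              adj[n].add(new)

  print(clique_partition(["t1", "t2", "t3", "t4"],
                         [("t1", "t2"), ("t1", "t3"), ("t2", "t3"), ("t3", "t4")]))
  # [['t4'], ['t1', 't2', 't3']]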

Storage Unit Estimation
• Variables used in the behavior are mapped to storage units such as registers or memory
• Variables not used concurrently may be mapped to the same storage unit
• Variables with non-overlapping lifetimes have an edge between their vertices (in the compatibility graph)
• Lifetime analysis is popularly used in DSP synthesis in order to reduce the number of registers required

Register Minimization Technique
• Lifetime analysis is used for register minimization in DSP hardware
• A data sample or variable is live from the time it is produced through the time it is consumed; after that it is dead
• Linear lifetime chart: represents the lifetimes of the variables in a linear fashion
  – Note: the convention is that a variable is not live during the clock cycle in which it is produced, but is live during the clock cycle in which it is consumed
• Due to the periodic nature of DSP programs, the lifetime chart can be drawn for only one iteration to give an indication of the number of registers needed

In the example. Fall 2002 128 . # of live variables at cycle 7 ≥ N (=6) is the sum of the # of live variables due to the 0th iteration at cycles 7 and (7 .Lifetime Chart l For DSP programs with iteration period N – Let the # of live variables at time partitions n ≥ N be the # of live variables due to 0-th iteration at cycles n-kN for k ≥ 0. which is 2 + 1 = 3.1¥6) = 1.

Matrix Transpose Example

Input order: i|h|g|f|e|d|c|b|a → matrix transposer → output order: i|f|c|h|e|b|g|d|a

Sample | T_in | T_zlout | T_diff | T_out | Life
a      | 0    | 0       | 0      | 4     | 0→4
b      | 1    | 3       | 2      | 7     | 1→7
c      | 2    | 6       | 4      | 10    | 2→10
d      | 3    | 1       | -2     | 5     | 3→5
e      | 4    | 4       | 0      | 8     | 4→8
f      | 5    | 7       | 2      | 11    | 5→11
g      | 6    | 2       | -4     | 6     | 6→6
h      | 7    | 5       | -2     | 9     | 7→9
i      | 8    | 8       | 0      | 12    | 8→12

To make the system causal, a latency of 4 is added to the difference so that T_out is the actual output time.

19.1) represents the time partition i and all time instances {(Nl + i)} where l is any non-negative integer. the point marked i (0 £ i £ N . then time partition i = 3 represents time instances {3. For example : If N = 8. …}. • Note : Variable produced during time unit j and consumed during time unit k is shown to be alive from ‘j + 1’ to ‘k’.Circular Lifetime Chart n n Useful to represent the periodic nature of the DSP programs. n 130 . 11. In a circular lifetime chart of periodicity N. • The numbers in the bracket in the adjacent figure correspond to the # of live variables at each Fall 2002 time partition.

Forward-Backward Register Allocation Technique
• Note: hashing is done to avoid conflicts during backward allocation
(figure: forward-backward register allocation for the matrix-transpose example)

Steps of Register Allocation
• Determine the minimum number of registers using lifetime analysis
• Input each variable at the time step corresponding to the beginning of its lifetime. If multiple variables are input in a given cycle, they are allocated to multiple registers, with preference given to the variable with the longest lifetime
• Each variable is allocated in a forward manner until it is dead or it reaches the last register. In forward allocation, if register i holds the variable in the current cycle, then register i+1 holds the same variable in the next cycle. If register i+1 is not free, then use the first available forward register
• Variables that reach the last register and are still alive are allocated in a backward manner on a first-come first-served basis. Being periodic, the allocation repeats in each iteration, so hash out register Rj for cycle l+N if it holds a variable during cycle l
• Repeat the previous two steps until the allocation is complete

Functional-Unit and Interconnect-Unit Estimation
• Clique partitioning can be applied
• For determining the number of FUs required, construct a graph where:
  – Each operation in the behavior is represented by a vertex
  – An edge connects two vertices if the corresponding operations are assigned to different control steps and there exists an FU that can implement both operations
• For determining the number of interconnect units, construct a graph where:
  – Each connection between two units is represented by a vertex
  – An edge connects two vertices if the corresponding connections are not used in the same control step

Computing Datapath Area
• Bit-sliced datapath:
  L_bit = α × tr(DP)
  H_rt = (nets / nets_per_track) × β
  area(bit) = L_bit × (H_cell + H_rt)
  area(DP) = bitwidth(DP) × area(bit)
  where tr(DP) is the transistor count of the datapath and α, β are technology constants

Pin Estimation
• The number of wires at a behavior's boundary depends on:
  – Global data
  – Ports accessed
  – Communication channels used
  – Procedure calls

Software Estimation Models
• Processor-specific estimation model
  – The exact value of a metric is computed by compiling each behavior into the instruction set of the targeted processor using a specific compiler
  – Estimation can be made accurately from the reported timing and size information
  – The downside: it is hard to adapt an existing estimator to a new processor
• Generic estimation model
  – Each behavior is first mapped to generic instructions
  – Processor-specific technology files are then used to estimate the performance for the targeted processors

Software Estimation Models (cont.)
(figure: processor-specific vs. generic estimation flows)

Deriving Processor Technology Files

Generic instruction: dmem3 = dmem1 + dmem2

8086 instructions                     | clocks  | bytes
mov ax, word ptr[bp+offset1]          | (10)    | 3
add ax, word ptr[bp+offset2]          | (9+EA)  | 4
mov word ptr[bp+offset3], ax          | (10)    | 3

Technology file for the 8086: execution time 35 clocks, size 10 bytes

68020 instructions                    | clocks  | bytes
mov a6@(offset1), d0                  | (7)     | 2
add a6@(offset2), d0                  | (2+EA)  | 2
mov d0, a6@(offset3)                  | (5)     | 2

Technology file for the 68020: execution time 22 clocks, size 6 bytes

Software Estimation
• Program execution time
  – Create basic blocks and compile them into generic instructions
  – Estimate the execution time of each basic block
  – Perform probability-based flow analysis
  – Compute the execution time of the entire behavior:
    exectime(B) = δ × ( Σ exectime(bi) × freq(bi) )
    where δ accounts for compiler optimizations
• Program memory size: progsize(B) = Σ instr_size(g)
• Data memory size: datasize(B) = Σ datasize(d)

Refinement
• Refinement is used to reflect the condition after partitioning, once the interface between HW and SW is built
  – Refinement is the updating of the specification to reflect the mapping of variables
• Functional objects are grouped and mapped to system components
  – Functional objects: variables, behaviors, and channels
  – System components: memories, chips or processors, and buses
• Specification refinement is very important
  – Makes the specification consistent
  – Enables simulation of the specification
  – Generates input for synthesis, compilation, and verification tools

Refining Variable Groups
• The memory to which a group of variables is mapped is reflected and refined in the specification
• Variable folding: implementing each variable in a memory with a fixed word size
• Memory address translation
  – Assignment of addresses to each variable in the group
  – Update references to a variable by accesses to memory

Variable Folding Example
(figure)

Memory Address Translation Example

Original specification:
  variable V : IntArray (63 downto 0);
  variable J, K : integer := 0;
  ...
  V(K) := 3;
  X := V(36);
  V(J) := X;
  for J in 0 to 63 loop
    SUM := SUM + V(J);
  end loop;

Assigning addresses to V: V(63 downto 0) → MEM(163 downto 100)

Refined specification:
  variable MEM : IntArray (255 downto 0);
  variable J, K : integer := 0;
  ...
  MEM(K+100) := 3;
  X := MEM(136);
  MEM(J+100) := X;
  for J in 0 to 63 loop
    SUM := SUM + MEM(J+100);
  end loop;

Refined specification without offsets for index J:
  variable MEM : IntArray (255 downto 0);
  variable J : integer := 100;
  variable K : integer := 0;
  ...
  MEM(K+100) := 3;
  X := MEM(136);
  MEM(J) := X;
  for J in 100 to 163 loop
    SUM := SUM + MEM(J);
  end loop;

Channel Refinement
• Channels: virtual entities over which messages are transferred
• Bus: the physical medium that implements a group of channels
• A bus consists of:
  – Wires representing data and control lines
  – A protocol defining the sequence of assignments to the data and control lines
• Two refinement tasks:
  – Bus generation: determining the bus width (number of data lines)
  – Protocol generation: specifying the mechanism of transfer over the bus

Communication
• Shared-memory communication model
  – Persistent shared medium
  – Non-persistent shared medium
• Message-passing communication model
  – Channel
    • Unidirectional or bidirectional
    • Point-to-point or multi-way
    • Blocking or non-blocking
• Standard interface schemes
  – Memory-mapped, serial port, parallel port, self-timed, synchronous, blocking

Communication (cont.)
• Inter-process communication paradigms: (a) shared memory; (b) message passing

(a) Shared memory M:
  Process P: ... M := x; ...
  Process Q: ... y := M; ...

(b) Message passing over channel C:
  Process P: ... send(x); ...
  Process Q: ... receive(y); ...

Characterizing Communication Channels
• For a given behavior B that sends data over channel C:
  – Message size, bits(C): number of bits in each message
  – Accesses, access(B, C): number of times B transfers data over C
  – Average rate, avgrate(C): rate of data transfer of C over the lifetime of the behavior
  – Peak rate, peakrate(C): rate of transfer of a single message
• Example: bits(C) = 8 bits
  avgrate(C) = 24 bits / 400 ns = 60 Mbits/s
  peakrate(C) = 8 bits / 100 ns = 80 Mbits/s

Characterizing Buses
• For a given bus B:
  – Bus width, buswidth(B): number of data lines in B
  – Protocol delay, protdelay(B): delay for a single message transfer over the bus
  – Average rate, avgrate(B): rate of data transfer over the lifetime of the system
  – Peak rate, peakrate(B): maximum rate of transfer of data on the bus
    peakrate(B) = buswidth(B) / protdelay(B)

Determining Bus Rates
• Idle slots of a channel can be used for messages of other channels
• To ensure that channel average rates are unaffected by the bus:
  avgrate(B) = Σ_{C ∈ B} avgrate(C)
• Goal: to synthesize a bus that constantly transfers data for its channels:
  peakrate(B) = Σ_{C ∈ B} avgrate(C)

Constraints for Bus Generation
• Bus width: affects the number of pins on chip boundaries
• Channel average rates: affect the execution time of behaviors
• Channel peak rates: affect the time required for a single message transfer

Bus Generation Algorithm
• Compute the bus-width range: minwidth = 1, maxwidth = Max(bits(C))
• For currwidth in minwidth to maxwidth loop:
  – Compute the bus peak rate: peakrate(B) = currwidth ÷ protdelay(B)
  – Compute the channel average rates:
    commtime(B, C) = access(B, C) × ⌈bits(C) / currwidth⌉ × protdelay(B)
    avgrate(C) = access(B, C) × bits(C) / (comptime(B) + commtime(B, C))
  – If peakrate(B) ≥ Σ avgrate(C) then
      if bestcost > ComputeCost(currwidth) then
        bestcost = ComputeCost(currwidth)
        bestwidth = currwidth
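A Python sketch of this search. The channel parameters come from the bus generation example that follows; the protocol delay (one time unit per transfer) and the cost function (prefer the narrowest satisfying bus) are assumptions:

  import math

  def bus_generation(channels, protdelay, maxwidth, cost):
      best, bestcost = None, float("inf")
      for width in range(1, maxwidth + 1):
          peakrate = width / protdelay
          total_avgrate = 0.0
          for ch in channels:
              transfers = math.ceil(ch["bits"] / width)      # bus words per message
              commtime = ch["access"] * transfers * protdelay
              total_avgrate += ch["access"] * ch["bits"] / (ch["comptime"] + commtime)
          if peakrate >= total_avgrate and cost(width) < bestcost:
              best, bestcost = width, cost(width)
      return best

  chans = [{"bits": 23, "access": 128, "comptime": 515},   # CH1: 16 data + 7 addr bits
           {"bits": 23, "access": 128, "comptime": 129}]   # CH2
  print(bus_generation(chans, protdelay=1, maxwidth=23, cost=lambda w: w))  # 10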

Bus Generation Example
• Assume two behaviors accessing 16-bit data over two channels, with constraints specified for the channel peak rates:

Channel | Behavior | Variable accessed | bits(C)          | access(B, C) | comptime(B)
CH1     | P1       | V1                | 16 data + 7 addr | 128          | 515
CH2     | P2       | V2                | 16 data + 7 addr | 128          | 129

Protocol Generation
• A bus consists of several sets of wires:
  – Data lines: used for transferring message bits
  – Control lines: used for synchronization between behaviors
  – ID lines: used for identifying the channel currently active on the bus
• All channels mapped to a bus share these lines
• The number of data lines is determined by the bus generation algorithm
• Protocol generation consists of six steps

Protocol generation steps
l 1. Protocol selection
  – full handshake, half-handshake, etc.
l 2. ID assignment
  – N channels require ⌈log2(N)⌉ ID lines

  behavior P
    variable AD ;
  begin
    …
    X <= 32 ;
    …
    MEM(AD) := X + 7 ;
    …
  end ;

  behavior Q
    variable COUNT ;
  begin
    …
    MEM(60) := COUNT ;
    …
  end ;

  variable X : bit_vector(15 downto 0) ;
  variable MEM : bit_vector(63 downto 0, 15 downto 0) ;

l Channels CH0, CH1 (accessing X) and CH2, CH3 (accessing MEM) are mapped to bus B
Fall 2002 154
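The ⌈log2 N⌉ rule of step 2 is easy to mechanize; a minimal C helper (name illustrative):

  static int id_lines(int nchannels)
  {
      int lines = 0;
      while ((1 << lines) < nchannels)  /* smallest power of two ≥ N */
          lines++;
      return lines;                     /* e.g. 4 channels -> 2 ID lines */
  }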

Protocol generation steps
l 3. Bus structure and procedure definition
  – The structure of the bus (the data, control, ID lines) is defined in the specification.
l 4. Update variable references
  – References to a variable that has been assigned to another component must be updated.
l 5. Generate processes for variables
  – Extra behaviors should be created for those variables that have been sent across a channel.
Fall 2002 155

Protocol generation example

  type HandShakeBus is record
    START, DONE : bit ;
    ID   : bit_vector(1 downto 0) ;
    DATA : bit_vector(7 downto 0) ;
  end record ;

  signal B : HandShakeBus ;

  procedure ReceiveCH0( rxdata : out bit_vector) is
  begin
    for J in 1 to 2 loop
      wait until (B.START = '1') and (B.ID = "00") ;
      rxdata(8*J-1 downto 8*(J-1)) <= B.DATA ;
      B.DONE <= '1' ;
      wait until (B.START = '0') ;
      B.DONE <= '0' ;
    end loop ;
  end ReceiveCH0 ;

  procedure SendCH0( txdata : in bit_vector) is
  begin
    for J in 1 to 2 loop
      B.ID <= "00" ;
      B.DATA <= txdata(8*J-1 downto 8*(J-1)) ;
      B.START <= '1' ;
      wait until (B.DONE = '1') ;
      B.START <= '0' ;
      wait until (B.DONE = '0') ;
    end loop ;
  end SendCH0 ;

Fall 2002 156

Refined specification after protocol generation

  process P
    variable AD, Xtemp ;
  begin
    …
    SendCH0(32) ;
    …
    ReceiveCH1(Xtemp) ;
    SendCH2(AD, Xtemp+7) ;
    …
  end ;

  process Q
    variable COUNT ;
  begin
    …
    SendCH3(60, COUNT) ;
    …
  end ;

  process Xproc
    variable X ;
  begin
    wait on B.ID ;
    if (B.ID = "00") then receiveCH0(X) ;
    elsif (B.ID = "01") then sendCH1(X) ;
    end if ;
  end ;

  process MEMproc
    variable MEM : array(0 to 63) ;
  begin
    wait on B.ID ;
    if (B.ID = "10") then receiveCH2(MEM) ;
    elsif (B.ID = "11") then sendCH3(MEM) ;
    end if ;
  end ;

  bus B
Fall 2002 157

Resolving access conflicts
l System partitioning may result in concurrent accesses to a resource
  – Channels mapped to a bus may attempt data transfer simultaneously
  – Variables mapped to a memory may be accessed by behaviors simultaneously
l Arbiter needs to be generated to resolve such access conflicts
l Three tasks
  – Arbitration model selection
  – Arbitration scheme selection
  – Arbiter generation
Fall 2002 158

Arbitration models
[Figure: static vs. dynamic arbitration models]
Fall 2002 159

Arbitration schemes
l Arbitration schemes determine the priorities among a group of behaviors' accesses in order to resolve access conflicts.
l Fixed-priority scheme statically assigns a priority to each behavior, and the relative priorities of all behaviors do not change throughout the system's lifetime.
  – Fixed priority can also be pre-emptive.
  – It may lead to higher mean waiting time.
l Dynamic-priority scheme determines the priority of a behavior at run time.
  – Round-robin
  – First-come-first-served
Fall 2002 160
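A minimal C sketch of the round-robin scheme: the grant rotates, so the most recently served requester has lowest priority on the next cycle. Type and function names are illustrative.

  typedef struct {
      int last;        /* index of the last behavior granted */
      int n;           /* number of behaviors                */
  } rr_arbiter;

  /* request[i] != 0 means behavior i wants the bus; returns the
     granted index, or -1 if nobody is requesting */
  int rr_grant(rr_arbiter *a, const int *request)
  {
      for (int k = 1; k <= a->n; k++) {
          int i = (a->last + k) % a->n;   /* start just after last grant */
          if (request[i]) {
              a->last = i;
              return i;
          }
      }
      return -1;
  }

Initializing last to n-1 makes behavior 0 the first candidate; a fixed-priority scheme would simply scan from index 0 every time instead of rotating.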

Refinement of incompatible interfaces
l Three situations may arise when we bind functional objects to standard components:
l Neither behavior is bound to a standard component.
  – Communication between the two can be established by generating the bus and inserting the protocol into these objects.
l One behavior is bound to a standard component.
  – The behavior that is not associated with a standard component has to use the dual protocol of the other behavior.
l Both behaviors are bound to standard components.
  – An interface process has to be inserted between the two standard components to make the communication compatible.
Fall 2002 161

Effect of binding on interfaces
[Figure: interface refinement for each of the three binding situations]
Fall 2002 162

Protocol operations
l Protocols usually consist of five atomic operations:
  – waiting for an event on an input control line
  – assigning a value to an output control line
  – reading a value from an input data port
  – assigning a value to an output data port
  – waiting for a fixed time interval
l Protocol operations may be specified in one of three ways:
  – Finite state machines (FSMs)
  – Timing diagrams
  – Hardware description languages (HDLs)
Fall 2002 163
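A sketch of how these five atomic operations might be represented inside an interface-synthesis tool, in C. All type and field names are assumptions for illustration.

  typedef enum {
      OP_WAIT_CTRL,    /* wait for an event on an input control line */
      OP_SET_CTRL,     /* assign a value to an output control line   */
      OP_READ_DATA,    /* read a value from an input data port       */
      OP_WRITE_DATA,   /* assign a value to an output data port      */
      OP_DELAY         /* wait for a fixed time interval             */
  } op_kind;

  typedef struct {
      op_kind kind;
      const char *port;   /* control line or data port involved */
      double delay_ns;    /* used only by OP_DELAY              */
  } protocol_op;

  /* e.g. the first step of a handshake: wait for ARDYp to rise */
  static const protocol_op step1 = { OP_WAIT_CTRL, "ARDYp", 0.0 };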

Protocol specification: FSMs
l Protocol operations ordered by sequencing between states
l Constraints between events may be specified using timing arcs
l Conditional & repetitive event sequences require extra states, transitions
Fall 2002 164

Protocol specification: Timing diagrams
l Advantages:
  – Ease of comprehension, representation of timing constraints
l Disadvantages:
  – Lack of action language, not simulatable
  – Difficult to specify conditional and repetitive event sequences
Fall 2002 165

Protocol specification: HDLs
l Advantages:
  – Functionality can be verified by simulation
  – Easy to specify conditional and repetitive event sequences
l Disadvantages:
  – Cumbersome to represent timing constraints between events

Protocol Pa:
  port ADDRp : out bit_vector(7 downto 0) ;
  port DATAp : in  bit_vector(15 downto 0) ;
  port ARDYp : out bit ;
  port ARCVp : in  bit ;
  port DREQp : out bit ;
  port DRDYp : in  bit ;

  ADDRp <= AddrVar(7 downto 0) ;
  ARDYp <= '1' ;
  wait until (ARCVp = '1') ;
  ADDRp <= AddrVar(15 downto 8) ;
  DREQp <= '1' ;
  wait until (DRDYp = '1') ;
  DataVar <= DATAp ;

Protocol Pb:
  port MADDRp : in  bit_vector(15 downto 0) ;
  port MDATAp : out bit_vector(15 downto 0) ;
  port RDp    : in  bit ;

  wait until (RDp = '1') ;
  MAddrVar := MADDRp ;
  wait for 100 ns ;
  MDATAp <= MemVar(MAddrVar) ;

Fall 2002 166

Interface process generation
l Input: HDL description of two fixed but incompatible protocols
l Output: HDL process that translates one protocol to the other
  – i.e. responds to their control signals and sequences their data transfers
l Four steps required for generating the interface process (IP):
  – Creating relations
  – Partitioning relations into groups
  – Generating interface process statements
  – Interconnect optimization
Fall 2002 167

IP generation: creating relations
l Protocol represented as an ordered set of relations
l Relations are sequences of events/actions

Protocol Pa:
  ADDRp <= AddrVar(7 downto 0) ;
  ARDYp <= '1' ;
  wait until (ARCVp = '1') ;
  ADDRp <= AddrVar(15 downto 8) ;
  DREQp <= '1' ;
  wait until (DRDYp = '1') ;
  DataVar <= DATAp ;

Relations:
  A1 [ (true)        : ADDRp <= AddrVar(7 downto 0)
                       ARDYp <= '1' ]
  A2 [ (ARCVp = '1') : ADDRp <= AddrVar(15 downto 8)
                       DREQp <= '1' ]
  A3 [ (DRDYp = '1') : DataVar <= DATAp ]
Fall 2002 168

IP generation: partitioning relations
l Partition the set of relations from both protocols into groups
l Group represents a unit of data transfer

  Protocol Pa            Protocol Pb
  A1 (8 bits out)
  A2 (8 bits out)        B1 (16 bits in)
  A3 (16 bits in)        B2 (16 bits out)

  G1 = (A1 A2 B1)
  G2 = (B2 A3)
Fall 2002 169

IP generation: inverting protocol operations
l For each operation in a group, add its dual to the interface process
l Dual of an operation represents the complementary operation
l Temporary variable may be required to hold data values

  Atomic operation          Dual operation
  wait until (Cp = '1')     Cp <= '1'
  Cp <= '1'                 wait until (Cp = '1')
  var <= Dp                 Dp <= TempVar
  Dp <= var                 TempVar := Dp
  wait for 100 ns           wait for 100 ns

Interface Process:
  /* (group G1)' */
  wait until (ARDYp = '1') ;
  TempVar1(7 downto 0) := ADDRp ;
  ARCVp <= '1' ;
  wait until (DREQp = '1') ;
  TempVar1(15 downto 8) := ADDRp ;
  MADDRp <= TempVar1 ;
  RDp <= '1' ;
  /* (group G2)' */
  wait for 100 ns ;
  TempVar2 := MDATAp ;
  DATAp <= TempVar2 ;
  DRDYp <= '1' ;
Fall 2002 170
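Mechanically, the dual table is a simple mapping over operation kinds. A C sketch, reusing the protocol_op type sketched after the protocol-operations slide; the bookkeeping for temporary variables is elided here.

  protocol_op dual_of(protocol_op op)
  {
      protocol_op d = op;            /* same port, same delay */
      switch (op.kind) {
      case OP_WAIT_CTRL:  d.kind = OP_SET_CTRL;   break; /* wait -> assert     */
      case OP_SET_CTRL:   d.kind = OP_WAIT_CTRL;  break; /* assert -> wait     */
      case OP_READ_DATA:  d.kind = OP_WRITE_DATA; break; /* read -> drive      */
      case OP_WRITE_DATA: d.kind = OP_READ_DATA;  break; /* drive -> sample    */
      case OP_DELAY:                              break; /* delay is self-dual */
      }
      return d;
  }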

IP generation: interconnect optimization
l Certain ports of both protocols may be directly connected
l Advantages:
  – Bypassing the interface process reduces interconnect cost
  – Operations related to these ports can be eliminated from the interface process
Fall 2002 171

Transducer synthesis
l Input: Timing diagram description of two fixed protocols
l Output: Logic circuit description of transducer
l Steps for generating the logic circuit from timing diagrams:
  – Create event graphs for both protocols
  – Connect graphs based on data dependencies or explicitly specified ordering
  – Add templates for each output node in combined graph
  – Merge and connect templates
  – Satisfy min/max timing constraints
  – Optimize skeletal circuit
Fall 2002 172

Generating event graphs from timing diagrams
[Figure: event graphs derived from protocol timing diagrams]
Fall 2002 173

Deriving skeletal circuit from event graph
l Advantages:
  – Synthesizes logic for transducer circuit directly
  – Accounts for min/max timing constraints between events
l Disadvantages:
  – Cannot interface protocols with different data port sizes
  – Transducer not simulatable with timing diagram description of protocols
Fall 2002 174

Hardware/Software interface refinement
[Figure: hardware/software interface refinement flow]
Fall 2002 175

Tasks of hardware/software interfacing
l Data access (e.g., behavior accessing variable) refinement
l Control access (e.g., behavior starting behavior) refinement
l Select bus to satisfy data transfer rate and reduce interfacing cost
l Interface software/hardware components to standard buses
l Schedule software behaviors to satisfy data input/output rate
l Distribute variables to reduce ASIC cost and satisfy performance
Fall 2002 176

Cosynthesis
l Methodical approach to system implementations using automated synthesis-oriented techniques
l Methodology and performance constraints determine partitioning into hardware and software implementations
l The result is an "optimal" system that benefits from analysis of hardware/software design trade-offs
Fall 2002

Cosynthesis Approach to System Implementation
[Figure: behavioral specification and performance criteria drive a mixed implementation (system input/output, memory); performance vs. cost trade-off curve spans pure HW to pure SW, subject to constraints]
Fall 2002 © IEEE 1993 [Gupta93]

Co-synthesis
l Implementation of hardware and software components after partitioning
l Constraints and optimization criteria similar to those for partitioning
l Area and size traded off against performance
l Cost considerations
Fall 2002 179

Synthesis Flow
l HW synthesis of dedicated units
  – Based on research or commercial standard synthesis tools
l SW synthesis of dedicated units (processors)
  – Based on specialized compiling techniques
l Interface synthesis
  – Definition of HW/SW interface and synchronization
  – Drivers of peripheral devices
Fall 2002 180

Co-Synthesis: The POLIS Flow
l "ESTEREL" used as the functional specification language
Fall 2002 181

Hardware Design Methodology
l Hardware Design Process: Waterfall Model
  Hardware Requirements → Preliminary Hardware Design → Detailed Hardware Design → Fabrication → Testing
Fall 2002

Hardware Design Methodology (Cont.)
l Use of HDLs for modeling and simulation
l Use of lower-level synthesis tools to derive register transfer and lower-level designs
l Use of high-level hardware synthesis tools
  – Behavioral descriptions
  – System design constraints
l Introduction of synthesis for testability at all levels
Fall 2002

Hardware Synthesis
l Definition
  – The automatic design and implementation of hardware from a specification written in a hardware description language
l Goals/benefits
  – To quickly create and modify designs
  – To support a methodology that allows for consideration of multiple design alternatives
  – To remove from the designer the handling of the tedious details of VLSI design
  – To support the development of correct designs
Fall 2002

Hardware Synthesis Categories
l Algorithm synthesis
  – Synthesis from design requirements to control-flow behavior or abstract behavior
  – Largely a manual process
l Register-transfer synthesis
  – Also referred to as "high-level" or "behavioral" synthesis
  – Synthesis from abstract behavior, control-flow behavior, or register-transfer behavior (on one hand) to register-transfer structure (on the other)
l Logic synthesis
  – Synthesis from register-transfer structures or Boolean equations to gate-level logic (or physical implementations using a predefined cell or IC library)
Fall 2002

Hardware Synthesis Process Overview
[Flow: behavioral specification → behavioral simulation and behavioral synthesis → RTL (functional simulation; optional synthesis & test) → RTL synthesis → gate level (functional verification, gate-level simulation and analysis) → place and route and layout at the silicon vendor → silicon]
Fall 2002

Concurrent Design 3.. Specification Analysis 2. AppendDQ(root) Yes Class Hierarchy PopDQ Component Design Component Design Component Design . System Integration Specification Error? No Initialization: AddDH(root).HW Synthesis ICOS Architecture specs Performance specs Synthesis specs Specification Input Specification Analysis (1) 1. Yes Component Design (2) No DQempty? Yes design complete? No Simulation and Performance Evaluation Synthesis Rollback Yes rollback possible? No Error: Synthesis Impossible Output Best Architecture (3) Stop (1) Specification Analysis Phase (2) Concurr ent Design Phase (3) System Integration Phase Fall 2002 187 ..

Software Design Methodology
l Software Design Process: Waterfall Model
  Software Requirements → Software Design → Coding → Testing → Maintenance
Fall 2002

Software Design Methodology (Cont.)
l Software requirements include both
  – Analysis
  – Specification
l Design: 2 levels
  – System level - module specs
  – Detailed level - process design language (PDL) used
l Coding - in high-level language
  – C/C++
l Testing - several levels
  – Unit testing
  – Integration testing
  – System testing
  – Regression testing
  – Acceptance testing
l Maintenance
Fall 2002

Software Synthesis
l Definition: the automatic development of correct and efficient software from specifications and reusable components
l Goals/benefits
  – To increase software productivity
  – To lower development costs
  – To increase confidence that the software implementation satisfies the specification
  – To support the development of correct programs
Fall 2002

Why Use Software Synthesis?
l Software development is becoming the major cost driver in fielding a system
l To significantly improve both the design cycle time and life-cycle cost of embedded systems, a new software design methodology, including automated code generation, is necessary
l Synthesis supports a correct-by-construction philosophy
l Techniques support software reuse
Fall 2002

Why Software Synthesis?
l More software → high complexity → need for automatic design (synthesis)
l Eliminate human and logical errors
l Relatively immature synthesis techniques for software
l Code optimizations
  – size
  – efficiency
l Automatic code generation
Fall 2002 192

Software Synthesis Flow Diagram for Embedded System with Time Petri-Net
l Example: Vehicle Parking Management System
  System Software Specification → Modeling System Software with Time Petri Net → Task Scheduling → Code Generation → Code Execution on an Emulation Board → Feedback and Modification → Test Report
Fall 2002 193

Automatically Generate CODE
l Real-Time Embedded System Model? Set of concurrent tasks with memory and timing constraints!
l How to execute in an embedded system (e.g. 1 CPU, 100 KB Mem)? Task Scheduling!
l How to generate code? Map schedules to software code!
l Code optimizations? Minimize size, maximize efficiency!
Fall 2002 194
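One plausible way schedules map to software code is a cyclic executive: the scheduler's output becomes a fixed table of task invocations. A C sketch; the task names, the frame length, and the timer stub are assumptions, not from the slides.

  #include <stddef.h>

  typedef void (*task_fn)(void);

  static void sample_sensors(void) { /* task body elided */ }
  static void control_law(void)    { /* task body elided */ }
  static void update_display(void) { /* task body elided */ }

  /* Output of task scheduling: tasks listed in their scheduled order. */
  static const task_fn schedule[] = {
      sample_sensors, control_law, sample_sensors, update_display,
  };

  static void wait_for_timer_tick(void)
  {
      /* stub: on a real board this would block until the next
         minor-frame timer interrupt (e.g. every 10 ms) */
  }

  void run_forever(void)
  {
      for (;;)                                              /* major cycle */
          for (size_t i = 0; i < sizeof schedule / sizeof schedule[0]; i++) {
              wait_for_timer_tick();                        /* minor frame */
              schedule[i]();                                /* run the scheduled task */
          }
  }

Because the table is fixed at synthesis time, code size is known exactly and timing can be checked against the frame length, which is what makes this mapping attractive for memory- and time-constrained targets.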

Software Synthesis Categories
l Language compilers
  – ADA and C compilers
  – YACC - yet another compiler compiler
  – Visual Basic
l Domain-specific synthesis
  – Application generators from software libraries
Fall 2002

Software Synthesis Examples
l Mentor Graphics Concurrent Design Environment System
  – Uses object-oriented programming (written in C++)
  – Allows communication between hardware and software synthesis tools
l Index Technologies Excelerator and Cadre's Teamwork Toolsets
  – Provide an interface with COBOL and PL/1 code generators
l KnowledgeWare's IEW Gamma
  – Used in MIS applications
  – Can generate COBOL source code for system designers
l MCCI's Graph Translation Tool (GrTT)
  – Used by Lockheed Martin ATL
  – Can generate ADA from Processing Graph Method (PGM) graphs
Fall 2002

Reference
l Bassam Tabbara, Abdallah Tabbara, and Alberto Sangiovanni-Vincentelli, Function/Architecture Optimization and Co-Design for Embedded Systems, Kluwer Academic Publishers, 2000.
l Sanjaya Kumar, James H. Aylor, Barry W. Johnson, and Wm. A. Wulf, The Co-Design of Embedded Systems: A Unified Hardware/Software Representation, Kluwer Academic Publishers, 1996.
l Daniel D. Gajski, Frank Vahid, S. Narayan, and J. Gong, Specification and Design of Embedded Systems, Prentice Hall, 1994.
Fall 2002 197
