
White Paper

Understanding the Fundamentals of PCI Express

Scott Knowlton, Product Marketing Manager, Synopsys September 2007

From Parallel to Serial


PCI Express, or PCIe, is a high-performance, high-bandwidth serial communications interconnect standard that has been devised by the Peripheral Component Interconnect Special Interest Group (PCI-SIG) to replace bus-based communication architectures, such as PCI, PCI Extended (PCI-X), and the accelerated graphics port (AGP). The prime motivation for migrating to PCIe is to achieve the significantly enhanced system throughput, scalability and flexibility, with lower manufacturing costs, that traditional bus-based interconnects simply cannot deliver. The PCI Express standard is designed with the future in mind and continues to evolve to provide systems with increased throughput. The first generation of PCIe specified a throughput of 2.5 gigabits per second (Gbps), with the second generation specifying 5.0 Gbps, and the recently announced PCIe 3.0 standard supporting 8.0 Gbps. While the PCIe standard continues to leverage the latest technologies to deliver ever-increasing throughput, the use of a layered protocol eases migration from PCI to PCIe by maintaining driver software compatibility with existing PCI applications. Although originally targeted at computer expansion cards and graphics cards, PCIe is now also being extensively adopted for use in a broader range of applications, including networking, communications, storage, industrial and consumer electronics. The objective of this white paper is to equip the reader with a broad understanding of PCI Express and the design challenges essential to successful PCIe implementation.
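The relationship between the raw line rates quoted above and the data rate actually usable by an application can be sketched as follows. This is an illustrative calculation, not from the paper itself: it assumes the 8b/10b line coding the specification uses for the 2.5 and 5.0 Gbps generations, and the more efficient 128b/130b coding adopted for PCIe 3.0; the function names are mine.

```python
# Effective per-lane data rate for each PCIe generation, after removing
# line-code overhead: 8b/10b for 1.x/2.0, 128b/130b for 3.0.
ENCODING = {  # generation -> (payload bits, line bits) per code group
    "1.x": (8, 10),
    "2.0": (8, 10),
    "3.0": (128, 130),
}
RAW_GBPS = {"1.x": 2.5, "2.0": 5.0, "3.0": 8.0}

def effective_gbps(gen: str, lanes: int = 1) -> float:
    """Usable data rate in Gbps for a link of `lanes` lanes."""
    payload, line = ENCODING[gen]
    return RAW_GBPS[gen] * payload / line * lanes

for gen in RAW_GBPS:
    print(f"PCIe {gen} x1: {effective_gbps(gen):.2f} Gbps usable")
```

For example, a first-generation x1 link carries 2.5 Gbps on the wire but only 2.0 Gbps of data, which is why the move to 128b/130b mattered for PCIe 3.0.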

PCI Express Fundamentals

Topology


This section introduces the PCIe protocol fundamentals and the different components required to implement and support the PCIe protocol in today's systems. The objective here is to provide a working knowledge of PCIe, without delving into the detailed complexity of the PCIe protocol. The advantages of PCIe come at the cost of complexity: PCIe is a packet-based serial connectivity protocol that is estimated to be 10x more complex than PCI's parallel bus. This complexity is due in part to the requisite parallel-to-serial data conversion at gigahertz speeds, and to the move to a packet-based implementation. PCIe maintains the basic load-store architecture of PCI, including support for the split transactions that were added by PCI-X. In addition, it introduces a number of low-level message-passing primitives to manage the link (such as link-level flow control), to mimic the side-band wires of the traditional parallel bus, and to deliver higher levels of robustness and functionality. The specification defines many features that support both today's needs and future expandability, while maintaining software driver compatibility with PCI. Advanced features of PCI Express include: autonomous power management; advanced error reporting; end-to-end reliability via end-to-end cyclic redundancy checking (ECRC); hot-plug support; and quality of service (QoS) traffic classes. The topology of a simplified system, consisting of the four function types (root complex, switch, endpoint and bridge), is shown in figure 1. Each of the dotted lines represents a connection between two PCIe devices, and is called a link.

2007 Synopsys, Inc.

(Diagram: a CPU, graphics and memory attach to a chip set containing the root complex; PCIe links connect the root complex to a switch, to endpoints, and to a PCIe-to-PCI bridge that carries a legacy PCI bus.)

Figure 1. The Four PCIe Function Types

The root complex initializes the whole PCIe fabric and configures each of the links. It normally connects the central processing unit (CPU) to one or more of the other three function types: PCIe switches, PCIe endpoints and PCIe-to-PCI bridges. The PCIe switch routes data downstream to multiple PCIe ports, and from each of these individual ports upstream to a single root complex. PCIe switches may also route traffic flexibly from one downstream port to another (peer-to-peer), eliminating the restrictive tree structure required by traditional PCI systems. Endpoints normally reside in the end applications, connecting the application to the PCIe network in the system. The endpoint requests and completes PCIe transactions. Generally, there are more endpoints in the system than any other type of PCIe component. The bridge connects PCIe to other PCI bus standards, such as PCI/PCI-X, in systems that employ those bus architectures as well as PCIe.

PCIe Protocol Specification

The protocol as defined in the PCIe specification adheres to the Open Systems Interconnection (OSI) layering model. It is partitioned into five principal layers, as shown on the left side of figure 2. This section provides a general overview of the Mechanical and Physical Layers; subsequent sections will address the Link, Transaction and Application Layers.


(Diagram, left: the five protocol layers: Application, Transaction, Link, Physical and Mechanical, with the Physical Layer subdivided into Logical and Electrical sub-layers. Right: the hardware partitioning, with the Application, Transaction and Link layers above the Physical Interface for PCI Express (PIPE), which carries command, TxData, RxData, status and clock signals. Below the PIPE sit the state machines for the Link Training and Status State Machine (LTSSM) and lane-to-lane de-skew, the Physical Coding Sub-layer (PCS) with 8b/10b encode/decode, elastic buffer and receiver detection, and the Electrical Sub-block with analog buffers, SERDES and 10-bit interface driving the Rx and Tx signals of each lane.)

Figure 2. PCIe Specification Protocol Layers

The Mechanical Layer defines the mechanical environment such as connectors, card form factors, card detection and hot-plug requirements.

On the right side of figure 2, we expand the remaining layers to show more accurately how the lower layers are mapped to a physical hardware implementation. As shown, the Physical Layer is partitioned into two sub-layers: the Electrical Layer and the Logical Layer. A number of companies have defined and utilize an interface between the Electrical Layer and the Logical Layer, called the Physical Interface for PCI Express (PIPE). The PIPE interface enables designers to work to a standard interface, and to purchase components that will interoperate even when they come from different vendors. The Electrical Sub-layer of the Physical Layer implements the analog components, including the transceiver, the analog buffers, the serializer/deserializer (SerDes) and the 10-bit interface. The Physical Coding Sub-layer (PCS) encodes/decodes each 8-bit data byte to a 10-bit code. This coding feature not only checks for valid characters; it also limits the difference between the number of zeros and ones transmitted, thus maintaining a DC balance at both the transmitter and receiver, and significantly enhancing electromagnetic compatibility (EMC) and electrical signal performance. The other side of the PIPE interface in the Physical Layer contains the Link Training and Status State Machine (LTSSM), lane-to-lane de-skew, special sequence detection and generation, and so on.
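The DC-balance property described above can be made concrete with a small sketch. This is not the actual 8b/10b code tables, only a measurement of the quantity the code controls: the running disparity, the cumulative difference between ones and zeros on the wire. The function name is mine.

```python
def running_disparity(bits):
    """Track the running difference between ones and zeros transmitted.
    8b/10b code selection keeps this bounded; here we simply measure it
    for an arbitrary bit stream."""
    disparity = 0      # ones minus zeros so far
    worst = 0          # largest excursion seen
    for b in bits:
        disparity += 1 if b else -1
        worst = max(worst, abs(disparity))
    return disparity, worst

# An alternating pattern is perfectly DC-balanced...
print(running_disparity([1, 0, 1, 0, 1, 0, 1, 0, 1, 0]))
# ...whereas an uncoded run of ones drifts away from balance.
print(running_disparity([1, 1, 1, 1]))
```

A coded stream whose disparity stays near zero has no DC content for the channel to distort, which is exactly what 8b/10b guarantees by choosing between two encodings of each byte.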

In physical hardware, the layers from the serial pins to the PIPE interface are collectively called the PHY, and those from the PIPE interface to the application layer are collectively called the digital controller. Each end of any given PCIe link must have both a PHY and digital controller. Figure 3 illustrates the PCIe PHY and controller inserted into the Root Complex and the Endpoint from the sub-system defined in Figure 1. The Endpoint uses an Endpoint port, and the Root Complex device uses a Root Port. The figure shows each of the port types expanded into their separate PHY and controller functions.


(Diagram: within the chip set, the root complex contains a PCIe RC controller and a PHY; the link connects this PHY to the PHY and controller of the PCIe endpoint device.)

Figure 3. PHY and Controller Usage in an SoC

As before, the dotted line between the two ports represents the link. Each PCIe link is dual-simplex: every lane carries one transmit and one receive low-voltage differential signal pair. The PCIe specification defines that links can contain up to 32 parallel lanes, increasing the throughput to 80 Gbps for PCIe 1.x (2.5 Gbps) links or 160 Gbps for PCIe 2.0 (5.0 Gbps) links. Each of the lanes in a link provides its own embedded clocking, which eliminates the line-length matching on the PC board that was required by the old PCI interface in order to maintain timing. The next two sections consider the design of the PHY and digital controller functions in greater depth.

PCIe SerDes Design Challenges

The design of a PCIe PHY is particularly challenging for designers because of:

- The serial-to-parallel data conversion, which requires advanced analog design. Analog design is not portable between process technologies, so the PHY must be redesigned for each new process technology used to manufacture the chips.
- The high speed, which, although a design challenge in and of itself, is exacerbated by the analog link with its additional design complications, such as degradation due to signal integrity and noise, which must be addressed.
- The rigorous electrical and compliance tests the PHY must pass to ensure interoperability with other devices.

As line speeds increase, the PHY is not only more difficult to design, but it must also be carefully integrated to address the signal integrity issues that arise at throughputs of over 1 Gbps. Packaging and board design are much more difficult and time-consuming at high speeds, often leading to project delays. In addition, the design of a high-performance PHY requires advanced expertise in high-speed analog communications. Such communications are critically dependent upon the device's manufacturing process, so the designer must possess an understanding of the fundamental device physics. Such expertise is acquired only through extensive design experience. Not only is PHY development difficult; the PHY must also interoperate with PCIe interfaces designed by other companies. Consequently, the PCI-SIG provides compliance workshops, commonly known as plug-fests, to test a design for compliance to the specification and interoperability with other devices.


Why are PCIe's engineering challenges so much greater? An example of the issues in high-speed design, and of the effects that standard FR4 board material has on the signal, is shown in figure 4. The left side of the figure shows the binary eye diagrams for a 1.25 Gbps stream and a 5 Gbps stream, respectively, as transmitted over 26 inches of standard FR4 board material. The corresponding binary eye diagrams on the right show the degraded signal at the destination. The 1.25 Gbps stream has survived the journey quite well, but the size and clarity of the 5 Gbps eye have significantly degraded because of the dielectric loss incurred in low-cost FR4 substrate and interconnect materials at frequencies greater than 1 GHz.

(Eye diagrams: transmit eyes and receive eyes after 26 inches of FR4, at data rates of 1.25 Gbps and 5.0 Gbps.)

Figure 4. Binary Eye Degradation with Increasing Frequency

This loss increases with increasing frequency, resulting in unacceptable distortion in 1-0-1-0 bit-streams (essentially, AC signals), although a series of all-ones or all-zeros (essentially DC signals) would successfully transmit. The solution is to improve the overall signal-to-noise ratio by increasing the amplitude of higher frequency (AC) signals with respect to that of lower frequency (DC) signals, a process known as pre-emphasis. Alternatively, the lower frequency signals may be de-emphasized. Using pre-emphasis at the transmitter provides a clean eye at the destination, enabling the specification to be met with a comfortable margin (shown in Figure 5).
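A minimal sketch of what a transmitter's pre-emphasis stage does, under simplifying assumptions of my own (a single-post-tap FIR filter with an arbitrary tap weight of 0.25; real PHYs set the tap per the channel):

```python
def pre_emphasize(symbols, tap=0.25):
    """2-tap FIR pre-emphasis: y[n] = x[n] - tap * x[n-1].
    Long runs of the same symbol settle to (1 - tap) of full swing,
    while transition edges are boosted beyond full swing, lifting the
    high-frequency content relative to the DC content."""
    out = []
    prev = 0.0
    for x in symbols:
        out.append(x - tap * prev)
        prev = x
    return out

# A long run of ones is de-emphasized after the first symbol...
print(pre_emphasize([1, 1, 1, 1]))     # [1.0, 0.75, 0.75, 0.75]
# ...while an alternating 1-0-1-0 pattern has its edges boosted.
print(pre_emphasize([1, -1, 1, -1]))   # [1.0, -1.25, 1.25, -1.25]
```

The extra edge amplitude is deliberately spent up front so that, after the channel's high-frequency loss, the received eye opens back up.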

(Eye diagrams at 5.0 Gbps: without pre-emphasis, the transmit eye is clean but the receive eye after 26 inches of FR4 is closed; with pre-emphasis at the transmitter, the receive eye is open.)

Figure 5. Using Pre-Emphasis to Limit Binary Eye Degradation

Using pre-emphasis and other analog design techniques can provide a clean signal. However, even a signal with an apparently clean binary eye must still meet the voltage margin requirements of the PCIe specification. In Figure 6, the left diagram represents the PCIe specification with the diamond in the middle delineating the minimum requirements for the eye opening. The diagram in the middle transfers the PCIe specification requirements (as shown by the diamond) and shows an acceptable eye opening that exceeds these requirements. In the diagram on the right, the waveform fails to meet the requirements represented by the diamond.


Figure 6. Meeting the Voltage Margin Requirements

Why does this matter? It is common for high-speed SerDes testing to use a loopback mode to ensure that the PHY can produce a clean eye. However, a device can produce a clean eye and pass loopback testing, yet still be unable to communicate reliably with other PCIe devices in the system. Clearly, a loopback test is not sufficient to ensure that the PHY passes the electrical requirements of the PCIe specification. To overcome the limitations of loopback tests, Synopsys has implemented on-board diagnostics in its high-speed PHY designs, providing real-time visibility into link behavior and performance. Such diagnostics identify and quantify signal integrity problems, such as excessive jitter and inadequate voltage margins, right on the die: something that simple go/no-go loopback diagnostics cannot do. Meeting the foregoing PHY development challenges is, however, subject to further constraints: these challenges must be met in an economically viable die area and within the power consumption budget. Small die and low power are imperatives.

PCI Express Digital Controller Design Challenges

The complexity of PCIe is much greater than that of PCI, with the interface complexity roughly 10x greater and the gate count (excluding the PHY) about 7.5x greater. PCIe also defines a number of different port types: Root Complex, Switch, Bridge and Endpoint. To complicate matters further, for each of the PCIe port types, one size does not fit all. For instance, the requirements of a PCIe add-in card for a 1G Ethernet controller can be fulfilled with a single-lane (x1) Endpoint with a 32-bit internal datapath, while a set-top box may require a 64-bit internal datapath with dual Root Complex and Endpoint functionality. There are many factors that can increase the complexity of the PCIe interface, including optional features that may be required for a specific PCIe application.
While implementing the PCIe interface, care must be taken to ensure that only necessary features are included in the design, to avoid unnecessary gate count, area and power consumption penalties. For each of the PCIe port types there are a number of required features, but there are also a number of system-level issues that must be optimized and resolved. Such system-level decisions greatly influence the performance and gate count of the interface; they are outside the scope and purpose of this white paper.

PCIe Packets

Before examining the details of the different features of each of the protocol layers, it is important to understand how data is transferred within the PCIe network. PCI Express uses packets to move data around the system, between the layers of the digital interface and between PCIe devices. The application layer initiates transactions, while the transaction layer converts the application's request into a PCIe transaction packet. The data link layer adds a sequence number and link CRC (LCRC) to the packet. The data link layer also ensures that the two-way transactions are received correctly (see figure 7). Finally, the physical layer transmits the transactions across the PCIe link.


A Transaction Layer Packet (TLP) is framed as follows, with the layer that contributes each field shown in parentheses: STP, 1 byte (physical layer); sequence number, 2 bytes (data link layer); TLP header, 12/16 bytes (transaction layer); data payload, 0-4K bytes (transaction layer); optional ECRC, 4 bytes (transaction layer); LCRC, 4 bytes (data link layer); END, 1 byte (physical layer).

A Data Link Layer Packet (DLLP) is framed as: SDP, 1 byte (physical layer); type, 1 byte, data, 3 bytes, and 16-bit CRC, 2 bytes (data link layer); END, 1 byte (physical layer).

Figure 7. TLP and DLLP Packet Formats
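The framing byte counts above determine how much of the link's bandwidth actually carries payload. The following sketch (my own illustration; the function name is invented) computes that efficiency for the non-ECRC overhead of 1 + 2 + 12 + 4 + 1 = 20 bytes per TLP:

```python
def tlp_efficiency(payload_bytes, header_bytes=12, ecrc=False):
    """Fraction of transmitted bytes that are payload, per the framing:
    STP(1) + SEQ(2) + header(12/16) + payload + [ECRC(4)] + LCRC(4) + END(1)."""
    overhead = 1 + 2 + header_bytes + (4 if ecrc else 0) + 4 + 1
    return payload_bytes / (payload_bytes + overhead)

# Small payloads pay a heavy framing tax; large payloads approach the line rate.
for size in (4, 64, 1024):
    print(f"{size:4d}-byte payload -> {tlp_efficiency(size):.1%} efficient")
```

This is one reason buffer sizing and maximum payload size are system-level tuning knobs: a 4-byte write is barely 17% efficient, while a 1 KB transfer is over 98%.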

Physical Layer (Logical)

The Physical Layer of the controller interfaces to the PHY and manages many of the functions that initialize the link and format the packets. Special sequences are used to establish the physical link, to enter and exit low-power link states, and so on. The receive portion of the Physical Layer is responsible for:

- Lane mapping and lane-to-lane de-skew, for links composed of multiple lanes
- Data de-scrambling
- Packet discovery and de-framing
- Detecting special packet sequences such as TS1, TS2, Skip, and Electrical Idle

The transmit portion of the Physical Layer is responsible for:

- Framing the packets using special symbol insertion, such as STP or SDP symbols to mark the beginning of the packet and END symbols to mark the end
- Data scrambling
- Link control initialization, width and lane-reversal negotiation
- Multi-lane transmit control
- Generation of skip sequences to compensate for clock PPM differences between the two ends of the link

The following few sections provide a more detailed description of some of these concepts:


Lane mapping enables sequential packets to be transmitted in parallel over a multi-lane link, thus considerably increasing throughput. The receiver's physical layer re-assembles the packets in the correct sequence (see figure 8).

(Diagram: the symbols of successive packets are striped byte-by-byte across the lanes of a multi-lane link, one symbol per lane per cycle, and re-assembled in order at the receiver.)

Figure 8. Using Lane Mapping to Increase Throughput
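Lane mapping's byte-striping can be sketched in a few lines. This is my own illustration of the round-robin distribution described above, not controller code; the function names are invented.

```python
def stripe(data: bytes, lanes: int):
    """Transmit side: distribute consecutive bytes round-robin across lanes.
    Byte 0 goes to lane 0, byte 1 to lane 1, and so on, wrapping around."""
    return [data[i::lanes] for i in range(lanes)]

def unstripe(per_lane, length):
    """Receive side: re-interleave the per-lane streams into original order."""
    out = bytearray(length)
    for lane, chunk in enumerate(per_lane):
        out[lane::len(per_lane)] = chunk
    return bytes(out)

packet = bytes(range(16))
per_lane = stripe(packet, 4)            # 4 lanes, 4 bytes each
assert unstripe(per_lane, len(packet)) == packet
```

Because every lane carries every fourth byte, an x4 link moves four bytes per symbol time, which is where the near-linear throughput scaling with lane count comes from.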

Lane-to-lane de-skew is performed to correct skew between the lanes in a multi-lane link. The transmitting component sends pre-defined markers (COMs) simultaneously on all lanes, enabling the receiver to detect the skew and insert compensation to re-align the packets, so that the received data is perceived by the other layers as arriving simultaneously. In figure 9 below, the left graphic shows a four-lane (x4) device transmitting the packets, and how they could be received. The right graphic shows how the received data is de-skewed to remove the skew introduced by different per-lane delays.
(Diagram: an x4 transmitter sends COM markers simultaneously on all four lanes; the lanes arrive at the receiver with differing delays, and the receiver inserts compensation so that the de-skewed symbols line up again across lanes 0 through 3.)

Figure 9. Lane-to-Lane De-skew

Lane reversal (not to be confused with direction reversal) is then used to eliminate the need for bowtie routing on the printed circuit board (PCB), simplifying PCB design and reducing manufacturing costs. Using the same x4 transmission in figure 9, we can internally remap the data to compensate for PCB routing issues as shown in figure 10.


(Diagram: the same x4 transmission with the lane order reversed; the receiver internally remaps lanes 0 through 3 back into the correct order, so the PCB traces need not cross.)

Figure 10. Using Lane Reversal to Compensate for PC Board Routing

The Link Training and Status State Machine (LTSSM) controls the physical layer, and thereby the link. The LTSSM initiates link negotiation with a Detect state, followed by a Polling state if/when a link partner is detected. Once the link is established, the two communicating components enter a Configuration state, in which they negotiate the link configuration: how many lanes are physically attached; how many are active; whether any differential pairs are reversed; and whether any lanes are reversed. The L0 state is normal link operation. The LTSSM also re-establishes a dropped link, using the Recovery state, and manages link power state transitions using the L0s, L1 and Recovery states.
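The state progression just described can be sketched as a transition table. This is a deliberately simplified model covering only the states and events named in the text (the full LTSSM has many more substates); the event names are mine.

```python
# Simplified LTSSM sketch: only the states and transitions named above.
TRANSITIONS = {
    "Detect":        {"receiver_detected": "Polling"},
    "Polling":       {"training_sets_exchanged": "Configuration"},
    "Configuration": {"link_configured": "L0"},
    "L0":            {"idle": "L0s", "pm_request": "L1", "error": "Recovery"},
    "L0s":           {"exit_idle": "L0"},        # fast exit straight to L0
    "L1":            {"exit_idle": "Recovery"},  # deeper sleep: retrain first
    "Recovery":      {"retrained": "L0"},
}

def step(state, event):
    """Advance the machine; unrecognized events leave the state unchanged."""
    return TRANSITIONS[state].get(event, state)

state = "Detect"
for event in ("receiver_detected", "training_sets_exchanged", "link_configured"):
    state = step(state, event)
print(state)  # "L0": the link is up and operational
```

The asymmetry between L0s (direct return to L0) and L1 (return via Recovery) reflects the trade-off between exit latency and power savings mentioned later in the power management discussion.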

Data Link Layer

The Data Link Layer provides reliable data exchange, error detection and retry, flow control credit (FCC) initialization and update, and power management services. To accomplish these functions, the Data Link Layer generates and processes Data Link Layer Packets (DLLPs). The Data Link Layer is enabled once a physical link has been negotiated by the LTSSM. At this point, the Data Link Layers on each end of the link initialize the link using a flow control (FC) initialization protocol, which communicates each link partner's available queuing resources. Once FC initialization is complete, the link is ready to provide reliable data delivery services to the Transaction Layer. During TLP transmission, periodic flow control updates continue to track the amount of buffer space available, to prevent overflows. The Data Link Layer provides reliable data delivery services over an unreliable (lossy) physical link. It does so by verifying received TLPs and using positive acknowledgement of received data, with retransmission on failure. When TLPs are transmitted, they are assigned sequence numbers, a CRC code is applied, and the result is delivered to the Physical Layer for transmission over the serial link. Upon reception, the CRC and sequence numbers are checked. An error in the CRC, or an out-of-order sequence number, indicates a transmission error, and the receiver responds with a negative acknowledgement (NAK). Upon receipt of the NAK, the transmitter re-transmits the packet, which it has stored in a replay buffer for this very purpose. If the CRC and sequence number checks are successful, the receiver sends a positive acknowledgement (ACK). Only when an ACK has been received for a given TLP is the data flushed from the replay buffer. Using this protocol, the Data Link Layer can guarantee delivery of TLPs.
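The ACK/NAK and replay-buffer mechanism described above can be sketched as follows. This is an illustrative model of the retry discipline, not controller RTL; the class and method names are my own, and it simplifies details such as sequence-number wraparound.

```python
from collections import OrderedDict

class ReplayBuffer:
    """Data Link Layer retry sketch: transmitted TLPs stay buffered until
    acknowledged; a NAK replays everything from the failed sequence onward."""
    def __init__(self):
        self.pending = OrderedDict()   # sequence number -> TLP
        self.next_seq = 0

    def transmit(self, tlp):
        seq = self.next_seq
        self.pending[seq] = tlp        # keep a copy until it is ACKed
        self.next_seq += 1
        return seq

    def ack(self, seq):
        # ACK is cumulative: everything up to and including seq is delivered,
        # so those TLPs can be flushed from the buffer.
        for s in [s for s in self.pending if s <= seq]:
            del self.pending[s]

    def nak(self, seq):
        # Replay every unacknowledged TLP starting at the failed sequence.
        return [tlp for s, tlp in self.pending.items() if s >= seq]

buf = ReplayBuffer()
for tlp in ("A", "B", "C"):
    buf.transmit(tlp)
buf.ack(0)                         # "A" confirmed delivered and flushed
print(buf.nak(1))                  # receiver NAKs seq 1: replay B and C
```

The key invariant is that a TLP leaves the buffer only on ACK, so a NAK can always be serviced without involving the Transaction Layer.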



Transaction Layer

The Transaction Layer creates outbound, and receives inbound, Transaction Layer Packets (TLPs). A TLP includes a header, an optional data payload, and an optional end-to-end CRC (ECRC). A TLP is either a request or a response to a request (a completion), and is always a multiple of 4 bytes (1 DWORD). The header specifies the transaction type, priority, address, routing rules and other packet characteristics. The transmit transaction layer builds packet headers, optionally adds the ECRC, and gates packet transmission until sufficient remote flow control credits are available. The receive transaction layer checks the TLP format and headers, and optionally checks the ECRC.

Essential Functions and Attributes of PCIe

Throughput: Flow Control Credits


As previously noted, the two ends of the PCIe connection use flow control credits (FCC) to ensure that data is not lost due to buffer overflow. Flow control credits thus play a critical role in total effective throughput. Flow control credits are simply information about available receiver buffer capacity, and are issued by the receiving component. The transmitting end of the link transmits only the volume of packets for which there are sufficient credits at the receiving end, and consumes those credits during packet transmission. The receiving end of the link issues further credits as its buffer space becomes available. There are separate flow control queues for Posted, Non-Posted and Completion traffic, and therefore three different types of flow control DLLPs. In addition:

- Init_FC DLLPs define the initial buffer space for each FC class (P, NP, CPL)
- Update_FC DLLPs advertise that new credits have been made available
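The transmit-side credit discipline above can be sketched for a single flow control class. This is my own illustrative model (class and method names invented); it tracks credits as simple counts rather than the header/data credit units the protocol actually uses.

```python
class CreditGate:
    """Transmit-side flow control sketch for one FC class (P, NP or CPL):
    send only while advertised credits cover the packet, consume credits on
    send, and raise the limit when an Update_FC arrives from the receiver."""
    def __init__(self, initial_credits):
        self.limit = initial_credits   # total credits advertised (Init_FC)
        self.consumed = 0              # credits used by transmitted packets

    def can_send(self, credits_needed):
        return self.consumed + credits_needed <= self.limit

    def send(self, credits_needed):
        if not self.can_send(credits_needed):
            raise RuntimeError("insufficient flow control credits")
        self.consumed += credits_needed

    def update_fc(self, new_limit):
        # Update_FC only ever advances the advertised limit.
        self.limit = max(self.limit, new_limit)

posted = CreditGate(initial_credits=8)   # from the Init_FC exchange
posted.send(6)
print(posted.can_send(4))    # False: would overflow the receiver's buffer
posted.update_fc(12)         # receiver freed buffer space, advertises more
print(posted.can_send(4))    # True: transmission may resume
```

Because the transmitter never sends beyond the advertised limit, the receiver's buffers can never overflow, and throughput is throttled only when Update_FC DLLPs lag behind consumption.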

QoS: Traffic Classes and Virtual Channels

Traffic classification and channel virtualization enable a system to deliver differing quality of service (QoS) levels to different applications. For example, in a PC, a video stream may be given top priority to ensure that there is sufficient bandwidth to deliver high-quality video, unimpeded by other applications. In a network server application, such prioritization is essential to the economics of the network service provider, who must fulfill multiple, different service level agreements with different service pricing. Channel virtualization enables multiple, independent data streams to be multiplexed on the same wire; each virtual channel possesses its own buffering resources. Traffic classification, using traffic class labels, defines the end-to-end priority that any given packet has relative to other traffic. Each traffic class is allocated to a virtual channel by the root complex, although there may be different numbers of virtual channels at different points in the packet's path. Flexible arbitration schemes enable virtual channels to maintain the requisite priorities and service levels. Example arbitration schemes include: arbitrary (custom), round robin, and weighted round robin.
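Of the arbitration schemes listed, weighted round robin is the easiest to illustrate. The sketch below is my own (function name and weights invented): each pass over the virtual channels lets channel i dequeue up to weights[i] packets, so bandwidth is shared in proportion to the weights.

```python
def weighted_round_robin(queues, weights, budget):
    """Serve virtual channels in proportion to their weights: on each pass,
    channel i may dequeue up to weights[i] packets, until `budget` packet
    slots have been used or all queues are empty."""
    served = []
    while budget > 0 and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(min(w, len(q), budget)):
                served.append(q.pop(0))
                budget -= 1
    return served

vc0 = ["v1", "v2", "v3", "v4"]   # e.g. a video stream, weight 3
vc1 = ["b1", "b2", "b3", "b4"]   # bulk traffic, weight 1
order = weighted_round_robin([vc0, vc1], weights=[3, 1], budget=6)
print(order)  # ['v1', 'v2', 'v3', 'b1', 'v4', 'b2']
```

The video channel receives roughly three slots for every bulk slot, yet the bulk channel is never starved, which is the property that distinguishes weighted round robin from strict priority.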



RAS: Data Integrity

Data integrity is assured by the use of a number of required and optional protocol features. The required features are:

- Physical layer checks: 8b/10b encoding/decoding to eliminate invalid characters
- Link layer checks: link CRC (LCRC) checks; packet sequence number checks; verification of acknowledge/negative acknowledge (ACK/NAK)
- Transaction layer checks: header and packet validity; completion timeouts

The optional features, which really ought to be supported in any PCIe IP, optional or not, are receiver overflow checks, flow control error checks, end-to-end CRC (ECRC), handling of corrupted TLPs, memory parity and datapath parity.

RAS: Ordering/PCI Rules

Ordering is derived from the PCI model, and has two objectives:

- To avoid system deadlock. PCIe achieves this by ensuring that some packet types must be allowed to pass other types that are blocked, and that some packet types may never pass others. For example, completions and posted writes must be allowed to pass read requests.
- To maintain a consistent view of data and flags, ensuring the validity of the data. For example, completions and read requests may not pass posted writes, and posted writes may not pass other posted writes.
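The passing rules quoted above can be arranged as a small decision table. This is a much-simplified sketch of my own, encoding only the cases the text states (the full specification's ordering table has more rows, columns and conditional entries); unstated diagonal cases are conservatively set to "may not pass".

```python
# May a later packet pass an earlier, blocked one? Outer key = later
# packet's type, inner key = earlier packet's type.
MAY_PASS = {
    "posted":     {"posted": False, "read": True,  "completion": True},
    "read":       {"posted": False, "read": False, "completion": True},
    "completion": {"posted": False, "read": True,  "completion": False},
}

def may_pass(later, earlier):
    """Ordering decision for a receive buffer: True means the later packet
    is allowed to bypass the earlier one."""
    return MAY_PASS[later][earlier]

print(may_pass("completion", "read"))  # True: completions must pass reads
print(may_pass("read", "posted"))      # False: reads never pass posted writes
```

The "posted" column being all-False is what gives producers a consistent view: once a posted write is in flight, nothing later can overtake it.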

Ordering rules may be implemented in hardware or software. For example, hardware receive buffers automatically pass or block packets based on arrival order and packet type. A software implementation would limit the issuing of read requests to prevent deadlock; control the location of flags so that bypassing does not matter; and, if only the order of writes is maintained, put the flag in the receiver block and set it with a write from the producer.

Active Power Management

Active power management is simply the automatic entry into a low-power state when no system or device activity is detected (that is, there are no packets on the link and a timeout occurs), and the automatic exit from the low-power state when required. Active power management is executed entirely in hardware, using the L0s and L1 low-power states. The PCIe device should be compatible with the PCI software power-saving mechanisms via the D0, D1, D2, D3hot and D3cold device power states, and should support PCIe hot-plug logic, interrupt and wakeup. An example of active power management, the required sequence when two PCIe components successfully enter the L1 state, is shown in figure 11:




(Diagram: with the layers of both components active, the downstream component decides to enter the L1 state. It accumulates the minimum required credits and blocks scheduling of new TLPs, waits for acknowledgement of its last TLP, then repeatedly sends PM_Active_State_Request_L1 DLLPs while waiting for a response. The upstream component blocks scheduling of new TLPs, waits for acknowledgement of its last TLP, and repeatedly sends PM_Request_Ack DLLPs. The downstream component then transitions the upstream direction to electrical idle, and the upstream component does the same in the downstream direction.)

Figure 11. L1 Power Management
The upstream component could be a root complex, while the downstream component could be an endpoint. The downstream component detects that the link has been idle for a long time, perhaps a few microseconds. Consequently, the downstream component ceases packet transmission and waits for the appropriate link credit and acknowledgement conditions. It then transmits a packet requesting permission to enter the L1 active power management state, and the upstream component must then decide whether to grant permission. In this example, it grants permission by ceasing packet transmission, waiting for acknowledgement of earlier packets, and transmitting a special packet accepting the L1 entry. The downstream component then places its transmit wires into a special electrical idle state that is detected by the upstream component, which does the same. The link is now in L1 mode. Ideally, the application layer and logic have no role in this process: it should be entirely automatic.

Advanced Error Management

Advanced error reporting is an optional feature in PCIe, but an important one that is essential in many applications. Advanced error reporting provides detailed information about individual errors that can be used to diagnose system problems. It also allows each error to be classified as fatal or non-fatal, and provides detailed error types and masking capability.

Interrupts

A PCIe endpoint may signal an interrupt via messages. Legacy interrupt support uses messages to emulate the line-based, wired-OR PCI interrupt. These messages are used to create a virtual wire that replaces the physical interrupt signal used on a PCI or PCI-X bus. Each PCIe function is limited to one such legacy interrupt.



An alternative to traditional line-based interrupts is message-signaled interrupts (MSI) or MSI extended interrupts (MSI-X), which are equivalent to those in PCI and PCI-X. MSI supports up to 32 message interrupts from an endpoint, each with the same address but with different data values specified by the root complex. MSI-X expands this to 2,048 interrupts, each with an address and a data value specified by the root complex. Both types are delivered to the root complex via memory writes. MSI data is always one DWORD in size, and the value is platform-specific. System software programs the MSI address and data value for each MSI message.

Using IP to Reduce Risk and Speed Time to Market for PCIe-based Designs
PCI Express has multiple component types, each with complex system-level trade-offs to meet demanding performance; reliability, availability, and serviceability (RAS); and quality of service (QoS) objectives. Achieving these objectives requires the multi-dimensional optimization of parameters such as throughput, buffer sizing, flow control credit management and ordering rules, so a one-size-fits-all design is not viable. The combination of these design challenges with stringent compliance and interoperability testing poses a challenge for even the most experienced design teams. To help speed time to market and reduce risk, designers are turning to third-party IP to help them successfully integrate the PCIe interface into their designs. To ensure that the right choices are made, designers should consider the essential elements of IP vendor selection in terms of functional correctness, integration, usability and support. The Synopsys DesignWare IP for PCI Express solution provides the port logic necessary to implement and verify high-performance designs using the PCIe interconnect standard. The complete, integrated solution is silicon-proven and includes a comprehensive suite of configurable digital controllers, high-speed mixed-signal PHY, and verification IP, all of which are compliant with the PCIe 1.1 and 2.0 specifications. By providing a complete solution from a single IP vendor, Synopsys reduces integration risk by helping to ensure that all the IP functions seamlessly together. Synopsys DesignWare IP for PCI Express provides designers with a high-performance IP solution that is extremely low in power consumption, area and latency.

(Diagram: the DesignWare IP for PCI Express solution within an SoC verification environment — configurable digital cores for the Endpoint, Dual Mode, Root Port and Switch/Bridge port types, spanning the Application, Transaction, Link and Physical layers, together with the PHY and verification IP.)



Digital Controller
- Available in all port types: Endpoint, Root Complex, Switch, Bridge and Dual Mode (Endpoint/Root Complex)
- Implements the Physical, Data Link and Transaction layers of the PCIe protocol
- Provides maximum throughput while reducing gate count, latency and memory requirements

PHY IP
- Low power and area consumption: up to 50% less than competitive solutions
- Excellent performance margin and receive sensitivity
- Advanced built-in diagnostics and ATE test vectors for complete, at-speed production test

Verification IP
- Verifies all configurations of the digital core
- Provides directed and constrained-random traffic generation
- Verifies compliance with the PCIe specification

As the market leader for PCI Express IP (Dataquest 2007), Synopsys continually delivers next-generation and innovative PCIe IP solutions to the market. With a strong focus on delivering high quality, the DesignWare IP for PCI Express has undergone extensive third-party interoperability testing, with products shipping in volume production. Using strict quality measures and backed by an expert technical support team, Synopsys enables designers to accelerate time-to-market and reduce integration risk for next-generation, PCIe-enabled desktop, mobile, consumer and communication system-on-chips. For more information on DesignWare IP for PCI Express, please visit: www.synopsys.com/designware

700 East Middlefield Road, Mountain View, CA 94043 T 650 584 5000 www.synopsys.com Synopsys and DesignWare are registered trademarks of Synopsys, Inc. All other trademarks or registered trademarks mentioned in this release are the intellectual property of their respective owners and should be treated as such. All rights reserved. Printed in the U.S.A. 2007 Synopsys, Inc. 8/07.VR.WO.07-15829

