Educational Series 1 2 3 4 5 6
Carrier Ethernet Basics
AUTHORS
SYLVAIN CORNAY, Marketing Manager, EXFO
HAMMADOUN DICKO, Product Specialist, EXFO
THIERNO DIALLO, Product Specialist, EXFO
SOPHIE LEGAULT, Product Line Manager, EXFO
SUE JUDGE, Consultant
EXFO Inc.
March 2011
Also coming soon to the Carrier Ethernet Basics Educational Series: modules focusing on other aspects of Carrier Ethernet, including service turn-up, service monitoring and troubleshooting.
1 CARRIER ETHERNET BASICS
ETHERNET ACCESS
1.2.1 Copper
To this day, copper cabling (i.e., insulated twisted copper wires) is still one of the
most widely used media in Carrier Ethernet due to its existing vast deployment
and its relatively low cost. It is almost everywhere, as it was the medium of choice to
deliver plain old telephone service (POTS) to homes and businesses. Leveraging
this infrastructure, service providers can avoid building out new and costly
networks, as they address markets with lower-rate traffic of up to 1 Gigabit per
second (Gbit/s) and begin to carry higher-speed traffic (in some cases up to 10
Gbit/s). Ethernet’s inherent scalability gives carriers a highly flexible platform for
delivering incremental services to smaller enterprises, branch offices, cellular
towers and other sites. However, copper is subject to both electromagnetic
interference and cross-talk, which can negatively affect the reliable transfer of
digital data—and at high speeds, the problem is even worse.
1.2.2 Microwave
Ethernet is also used for mobile backhaul, the distance from a cell tower to
a switching office or between switching offices. The medium used is actually
microwave-over-the-air. Microwave radio is a popular infrastructure choice for
wireless operators. Ethernet-enabled microwave is becoming an increasingly
important component of a wireless infrastructure. The increasing interest in
microwave is driven by the higher bandwidth demands at the base station sites
and the requirement to provide a substantial reduction in operational costs of
backhauling the data traffic. The growth of the wireless industry combined with
the proliferation of mobile backhaul will only increase the use of
microwave radio as a transport medium.
1.2.3 Fiber
Since fiber can carry much more information than copper, carrier Ethernet
service providers typically use fiber to transport high-speed traffic (usually 1
Gbit/s or more) over long distances or within the network core. Fiber is used
with SONET/SDH, dense wavelength-division multiplexing (DWDM) or optical
transport networks (OTNs). Fiber cabling may have a higher initial cost, but even
at the fastest speeds, it is entirely resistant to both cross-talk and electromagnetic
interference, and can therefore provide much more reliable data transmission. As
the demand for bandwidth and speed increases, the need to implement fiber on
networks, even at the business site, is growing. However, the main issue with fiber
is the high cost of deployment and maintenance.
E-LINE VARIANTS
E-LAN VARIANTS
One of the major benefits of Ethernet for business services is cost reduction.
Global availability of standardized services reduces the cost of implementation.
The familiarity of IT departments with Ethernet makes the implementation of Carrier
Ethernet services easier and cheaper. In essence, Carrier Ethernet brings the benefits
of the Ethernet cost model to metro and wide-area networks. New applications
requiring high bandwidth and low latency—previously not possible or prohibitively
expensive—can now be implemented.
Another major benefit of Carrier Ethernet is performance. That is, in part, because
Ethernet networks inherently require less processing to operate and manage. They
also operate at higher bandwidths than other technologies. Ethernet is also the best-suited
solution for voice, video and data because of its low latency and delay variation.
Carrier Ethernet services also provide a high level of flexibility, which is ideal for
applications such as site-to-site access that by their nature can have unpredictable
and varying bandwidth requirements.
Today, and in the years to come, backhaul networks will be made up of a mixture
of both E1/T1 (for voice) and Ethernet/IP (for data services) technologies. This
hybrid-network approach offers an economical solution for potential traffic
bottlenecks with the increased traffic of non-real-time data.
• Frame delay, or latency, is the time elapsed from the moment a frame or
packet leaves the origination port to the moment it arrives at the destination
port. It has a direct impact on the quality of real-time data, such as voice or video.
Management services such as synchronization protocols, which communicate
between the BSC and mobile devices, must have a very fast response time. This
helps to ensure quality voice transmission, cell handoffs, signaling and reliable
connectivity.
• Frame loss is a serious problem for all real-time services such as voice or live
video, as well as for synchronization and management of traffic control. Lost
packets cause poor perception quality, and lost control packets increase latency
and may cause connectivity failures—and even dropped calls.
• Frame delay variation, or packet jitter, refers to the variability in arrival time
between packet deliveries. As packets travel through a network, they are often
queued and sent in bursts to the next hop. Random prioritization may occur,
resulting in packet transmission at random rates. Packets are therefore received
at irregular intervals. This jitter translates into stress on the receiving buffers of
the end nodes, where buffers can be overused or underused when there are
large swings of jitter. Real-time applications are especially sensitive to packet
jitter. Buffers are designed to store a certain quantity of video or voice packets,
which are then processed at regular intervals to provide a smooth and error-free
transmission to the end user. Too much jitter will affect the quality of experience
(QoE)—where packets arriving at a fast rate will cause buffers to overfill, leading
to packet loss; while packets arriving at a slow rate will cause buffers to empty,
leading to still images or sound.
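As an illustration of how this jitter can be quantified, the sketch below applies the smoothed interarrival-jitter formula from RFC 3550 to a series of per-packet transit times. The function name and sample values are illustrative, not taken from the text.

```python
# Minimal sketch: estimating packet jitter (frame delay variation)
# from per-packet one-way transit times, using the RFC 3550
# smoothing formula J += (|D| - J) / 16.

def interarrival_jitter(transit_times_ms):
    """Return the smoothed jitter estimate after processing all packets."""
    jitter = 0.0
    prev = None
    for transit in transit_times_ms:
        if prev is not None:
            d = abs(transit - prev)        # delay variation vs. previous packet
            jitter += (d - jitter) / 16.0  # exponential smoothing (RFC 3550)
        prev = transit
    return jitter

# A perfectly steady 20 ms transit time yields zero jitter:
print(interarrival_jitter([20, 20, 20, 20]))  # 0.0
```

A real probe would derive the transit times from timestamps carried in the test stream; the smoothing constant 1/16 keeps the estimate stable against single outliers.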
1.5.2 MPLS-TP
With the movement toward packet-based services, transport networks have to
encompass the provision of packet-aware capabilities while enabling carriers to
leverage their installed transport infrastructure investments. MPLS transport profile
(MPLS-TP) is a derivative of MPLS designed for transport networks. It supports
the capabilities and functionalities needed for packet-transport network services
and operations through combining the packet experience of MPLS with the
operational experience and practices of existing transport networks. MPLS-TP
enables the deployment of packet-based transport networks that efficiently scale
to support packet services in a simple and cost-effective way.
1.5.3 PBB-TE
Provider backbone bridge traffic engineering or PBB-TE (also referred to as
PBT) is an alternative Ethernet-based implementation that enables carrier-grade
provisioning and management of connection-oriented transport services across an
all-IP MAN and core network by disabling the flooding/broadcasting and spanning
tree protocol features. It is an evolution of MAC-in-MAC that makes it connection-oriented. PBB-TE separates the Ethernet service layer from the network layer; its
flexibility also allows service providers to deliver native Ethernet initially and MPLS-
based services—i.e., virtual private wire service (VPWS) or virtual private LAN
service (VPLS)—if and when they are required.
1.5.4 PTN
The packet transport network (PTN) is the next generation of networks designed
around the best elements of traditional TDM technologies and the emergent
packet technologies. It is typically deployed at two layers. At the access layer, PTN
provides convergence of multiple services by bringing both TDM and packet traffic
into the PTN cloud: TDM traffic is encapsulated and forwarded as packets, while
native Ethernet/IP packets are forwarded in the same PTN cloud.
1.5.5 PWE3
Pseudo wire emulation edge-to-edge (PWE3) is a mechanism that emulates the
essential attributes of a service such as ATM, frame relay or Ethernet over a packet
switched network (PSN). PWE3 only provides the minimum required functionality
to emulate the wire. From the customer perspective, it is perceived as an unshared
link or circuit of the chosen service. PWE3 specifies the encapsulation, transport,
control, management, interworking and security of services emulated over PSNs.
To maximize the return on their assets and minimize their operational costs,
many service providers are looking to consolidate the delivery of multiple service
offerings and traffic types onto a single IP-optimized network. PWE3 is a possible
solution since it emulates Ethernet frame formats over IP networks.
At the device level, OAM protocols generate messages that are used by
operations staff to help identify problems in the network. In the event of a fault,
the information generated by OAM helps the operator troubleshoot the network
to locate the fault, identify which services have been impacted and take the
appropriate action. Also, just as it is important to keep customers’ services
running, operators must be able to prove that this is the case; this is usually measured
against an SLA, and the operator must have the performance measurements to
manage customer SLAs. Finally, administration features include collecting the
accounting data for the purpose of billing and network usage data for capacity-
planning exercises.
Effective end-to-end service control also enables carriers to avoid expensive truck
rolls to locate and contain faults, thereby facilitating reduction of maintenance
costs. Intrinsic OAM functionality is therefore essential in any carrier-class
technology and is a ‘must have’ capability in intelligent Ethernet network
termination units.
1.5.8 Synchronization
As the network moves toward Ethernet as the transport technology of choice,
synchronization remains a major issue. As Ethernet and TDM technologies
continue to coexist, technologies like circuit-emulation services (CES) provide
capabilities to map TDM traffic on Ethernet infrastructure and vice versa, enabling
a smooth changeover for network operators transitioning to an all-packet network.
Many services need synchronization, but wireless base stations today have the
largest stake in frequency and time distribution. The frequency stability of the
air interface between the cell tower and the handset supports handing off a call
between adjacent base stations without interruption. Synchronization for base
stations is therefore central to the QoS that an operator provides.
The next packet synchronization technology, the Precision Time Protocol (PTP),
also referred to as IEEE 1588v2, is specifically designed to provide high clock
accuracy through a packet network via a continuous exchange of packets with
appropriate timestamps. In this protocol, a highly precise clock source, referred to
as the “grand-master clock” generates timestamp announcements and responds
to timestamp requests from boundary clocks, thus ensuring that the boundary
clocks and the slave clocks are precisely aligned to the grand-master clock. By
relying on the handover capability and the precision of the integrated clocks in
combination with the continuous exchange of timestamps between PTP-enabled
devices, frequency and phase accuracy can be maintained at a sub-microsecond
range, thus ensuring synchronization within the network. In addition to frequency
and phase synchronization, ToD synchronization can also ensure that all PTP-
enabled devices are synchronized with the proper time, based on coordinated
universal time (UTC).
This diagram shows an application of IEEE 1588v2 PTP in a mobile backhaul to establish
synchronization.
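The timestamp exchange described above can be sketched numerically. A Sync/Delay_Req exchange yields four timestamps: t1 (master sends Sync), t2 (slave receives it), t3 (slave sends Delay_Req) and t4 (master receives it). Assuming a symmetric path, the standard IEEE 1588 arithmetic recovers the slave's clock offset and the mean path delay; the timestamp values below are illustrative only.

```python
# Classic IEEE 1588v2 offset/delay computation (symmetric-path assumption).

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0           # slave clock error vs. master
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0  # one-way delay estimate
    return offset, mean_path_delay

# Example: slave clock runs 5 units fast, one-way delay is 3 units each way:
# t1=100 -> t2=100+3+5=108; t3=200 -> t4=200+3-5=198
print(ptp_offset_and_delay(100, 108, 200, 198))  # (5.0, 3.0)
```

Any path asymmetry translates directly into an offset error, which is why delay variation on the backhaul network matters so much for PTP accuracy.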
Carrier Ethernet Testing Technologies and Methodologies
2 OPTIMIZING QUALITY OF SERVICE
AND QUALITY OF EXPERIENCE
In fact, according to the latest report from Cisco’s Visual Networking Index Global Mobile
Data Traffic Forecast, global mobile data traffic is predicted to grow 26-fold
between 2010 and 2015, to 6.3 exabytes (an exabyte is one billion gigabytes) per month. Additionally,
the report foresees that by 2015, two-thirds of all mobile data traffic will be video,
underscoring the challenges operators face as they try to manage the tidal wave of mobile
data set to flood their networks.
While there is little doubt about the cost efficiencies and scalability of Ethernet, the
time- and delay-sensitive nature of established voice services, in addition to the growing
popularity of new mobile video services, cannot be ignored and necessitate an advanced
approach to Ethernet networking and testing to maintain the customer expectations for
quality of service (QoS) and quality of experience (QoE).
As Carrier Ethernet technology matures, networks will eventually become entirely packet-
based; this will greatly simplify the network architecture, reduce costs and provide the
necessary scalability for expected growth with data-centric applications. But as the
network infrastructure evolves to support packet-based transmission, operators must also
evolve from only managing network performance to also managing service performance.
This means that testing the network with a simple ping is no longer an option as operators
must now constantly validate and measure the key performance indicators (KPIs) on a
per-service basis.
2.1 RFC 2544
The Internet Engineering Task Force’s (IETF’s) RFC 2544 is a benchmarking
methodology for network interconnect devices. This request for comments (RFC)
was created in 1999 as a methodology to benchmark network devices, such as
hubs, switches and routers, and to provide accurate, comparable values for
benchmarking.
RFC 2544 provides engineers and network technicians with a common language
and results format. RFC 2544 describes the following six subtests:
• Throughput: This test measures the maximum rate at which none of the
offered frames is dropped by the device/system under test (DUT/SUT). This
measurement translates into the available bandwidth of the Ethernet virtual
connection.
• Frame loss: This test defines the percentage of frames that should have been
forwarded by a network device under steady state (constant) loads that were not
forwarded due to lack of resources. This measurement can be used for reporting
the performance of a network device in an overloaded state, as it can be a useful
indication of how a device would perform under pathological network conditions,
such as broadcast storms.
• Latency: This test measures the round-trip time of a test frame to travel through
a network device or across the network and back to the test port. Latency is the
time interval that begins when the last bit of the input frame reaches the input port
and ends when the first bit of the output frame is seen on the output port. It is the
time taken by a bit to go through the network and back. Latency variability can be
a problem. With protocols like voice over Internet protocol (VoIP), a variable or
long latency can cause degradation in voice quality.
• Back-to-back frames: This test measures the longest burst of frames sent at the
maximum rate that the DUT can handle without losing any frame, which characterizes
the device’s buffering capacity.
• System reset: This test measures the speed at which a DUT recovers from a
hardware or software reset. This subtest is performed by measuring the interruption
of a continuous stream of frames during the reset process.
• System recovery: This test measures the speed at which a DUT recovers from an
overload or oversubscription condition. This subtest is performed by temporarily
oversubscribing the device under test and then reducing the throughput to a normal
or low load while measuring frame delay in these two conditions. The difference
between the delay under overloaded conditions and the delay under low-load conditions
represents the recovery time.
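As a rough illustration of how the throughput subtest converges on the zero-loss rate, the sketch below runs a binary search over the offered rate. `send_trial` is a hypothetical stand-in for a real traffic generator and frame counter, not part of any standard API.

```python
# Sketch of the RFC 2544 throughput search: find the highest offered rate
# (as a percentage of line rate) at which the DUT drops no frames.

def throughput_search(send_trial, resolution=0.1):
    """Binary-search the zero-loss rate; send_trial(rate) -> frames lost."""
    low, high = 0.0, 100.0
    best = 0.0
    while high - low > resolution:
        rate = (low + high) / 2.0
        if send_trial(rate) == 0:   # no loss: try a higher rate
            best, low = rate, rate
        else:                       # loss observed: back off
            high = rate
    return best

# Toy DUT that starts dropping frames above 60% of line rate:
print(throughput_search(lambda r: 0 if r <= 60.0 else 1))
```

The real methodology repeats this search once per frame size, which is one reason a full RFC 2544 run is so time-consuming.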
• Service providers are shifting from only providing Ethernet pipes to enabling
services. Networks must support multiple services from multiple customers,
while each service has its own performance requirements that must be met even
under full load conditions and with all services being processed simultaneously.
RFC 2544 was designed as a performance tool with a focus on a single stream
to measure maximum performance of a DUT or network under test and was never
intended for multiservice testing.
• Packet delay variation is a KPI for real-time services such as VoIP and Internet
protocol television (IPTV) and is not measured by the RFC 2544 methodology.
Network operators that perform service testing with RFC 2544 must therefore
typically execute a separate packet jitter test outside of the methodology.
To resolve issues with RFC 2544, ITU-T has introduced a new test standard: the
ITU-T Y.1564 methodology, which is aligned with the requirements of today’s
Ethernet services. EXFO is the first to implement EtherSAM—the Ethernet service
testing methodology based on this new standard—into its Ethernet-testing
products.
2. To ensure that all services carried by the network meet their SLA objectives
at their maximum committed rate, proving that under maximum load, network
devices and paths can support all the traffic as designed.
3. To perform medium- and long-term service testing, to validate that network elements
can properly carry all services while under stress during a soaking period.
Services are traffic streams with specific attributes identified by different classifiers,
such as 802.1q VLAN, 802.1ad and class of service (CoS) profiles. These services
are defined at the user-to-network interface (UNI) level with different frame and
bandwidth profiles, such as the service’s maximum transmission unit (MTU) or frame
size, committed information rate (CIR) and excess information rate (EIR).
• CIR defines the maximum transmission rate for a service where it is guaranteed
certain performance objectives; these objectives are typically defined and
enforced via SLAs.
• EIR defines the maximum transmission rate above the committed information
rate considered as excess traffic. This excess traffic is forwarded as the capacity
allows and is not subject to meeting any guaranteed performance objectives
(best effort forwarding).
• Overshoot rate defines a testing transmission rate above CIR or EIR and is used
to ensure that the DUT or network under test does not forward more traffic than
specified by the CIR or EIR of the service.
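The relationship between CIR, EIR and the overshoot rate can be illustrated with a simple classifier that maps an offered rate to how the network is expected to treat the traffic. The function name and thresholds are examples for illustration, not a MEF-defined API.

```python
# Illustrative classification of an offered service rate against a
# CIR/EIR bandwidth profile.

def classify_rate(rate_mbps, cir_mbps, eir_mbps):
    """Map an offered rate to its expected forwarding treatment."""
    if rate_mbps <= cir_mbps:
        return "committed"        # guaranteed; SLA objectives apply
    if rate_mbps <= cir_mbps + eir_mbps:
        return "excess"           # forwarded best-effort, no guarantees
    return "discard-eligible"     # above CIR+EIR: policed/dropped

print(classify_rate(80, 100, 50))    # committed
print(classify_rate(120, 100, 50))   # excess
print(classify_rate(200, 100, 50))   # discard-eligible
```

A Y.1564 overshoot test essentially verifies the third case: traffic generated above CIR + EIR must not be forwarded beyond the configured profile.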
2.2.3 Methodology
The ITU-T Y.1564 is built around two key subtests, the service-configuration test
and the service-performance test, which are performed in order:
The service configuration test is designed to measure the ability of the DUT or the
network under test to properly forward traffic in three different states:
• In the CIR phase, where performance metrics for the service are measured and
compared to the SLA performance objectives
• In the EIR phase, where performance is not guaranteed and the service’s transfer
rate is measured to ensure that the CIR is the minimum bandwidth
• In the discard phase, where the service is generated at the overshoot rate and
the expected forwarded rate is not greater than the committed information rate
or excess rate (when configured)
The service performance test measures the ability of the DUT or network under test
to forward multiple services, while maintaining SLA conformance for each service.
Services are generated at the CIR, where performance is guaranteed, and pass/fail
assessment is performed on the KPI values for each service according to its SLA.
Bidirectional Test
EtherSAM can perform round-trip measurements with a loopback device. In this
case, the results reflect the average of both test directions, from the test set to the
loopback point and back to the test set. In this scenario, the loopback functionality
can be performed by another test instrument in Loopback mode or by a network
interface device in Loopback mode.
The same test can also be run in simultaneous bidirectional mode (dual test set).
In this case, two test sets, one designated as local and the other as remote,
communicate with each other and run tests independently and simultaneously in
each direction. This provides much more precise test results,
such as independent assessment per direction and the ability to quickly determine
which direction of the link is experiencing failure. This allows service providers
to test asymmetrical links. This test uncovers more configuration errors than the
EtherSAM test with one loopback device on the other end, especially when testing
multiple services with different EIRs and CIRs.
Burst test methodologies assume a token bucket algorithm to police and shape
the traffic. The token bucket is a control mechanism that dictates when traffic
can be transmitted, based on the presence of tokens in the bucket—an abstract
container holding tokens, each of which represents permission to transmit a unit
(typically a byte) of traffic.
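A minimal single-rate token bucket can be sketched as follows. This is an illustration of the general mechanism only; actual MEF bandwidth profiles use a two-rate, two-bucket (committed and excess) algorithm, and the rates and sizes below are arbitrary examples.

```python
# Minimal single-rate token bucket sketch: tokens accrue at the committed
# rate up to the bucket size; a frame conforms only if enough tokens are
# available to cover its length.

class TokenBucket:
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0     # refill rate in bytes/second
        self.capacity = bucket_bytes
        self.tokens = bucket_bytes     # bucket starts full
        self.last = 0.0

    def conforms(self, now, frame_bytes):
        # Refill tokens for elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes  # frame is within profile
            return True
        return False                    # frame exceeds the profile

tb = TokenBucket(rate_bps=8000, bucket_bytes=1500)  # refills 1000 bytes/s
print(tb.conforms(0.0, 1000))   # True: bucket starts full
print(tb.conforms(0.0, 1000))   # False: only 500 tokens left
print(tb.conforms(1.0, 1000))   # True: one second refills 1000 bytes
```

A burst test probes exactly this behavior: a burst sized to the bucket should pass cleanly, while anything larger should be policed.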
The burst test is provided for both non-color-aware and color-aware applications. In
advanced business Ethernet services, it is possible to have traffic tagged
with different colors (green and yellow) within the same service. The colors are
a method allowing the end customer to tell the network that specific traffic has
higher priority in case of congestion. Color Aware mode is offered only in
more complex/advanced Ethernet services; color mode testing consists of
verifying that the traffic policers and shapers properly respect the color markings.
It is a complex test and is not performed frequently in the field. Non-Color Aware
mode, on the other hand, uses only one color per service and is the most common mode.
2.2.5 Metrics
Y.1564 focuses on KPIs for service quality such as frame delay, frame delay variation, frame loss and availability.
2.2.6 Benefits
ITU-T Y.1564 provides numerous benefits to service providers offering mobile
backhaul, commercial and wholesale Ethernet services.
• Significantly faster:
The RFC 2544 methodology uses a sequential approach where each subtest is
executed one after the other until all have been completed, making it a time-consuming
procedure. Additionally, the completion of a subtest heavily relies on
the quality of the link.
In contrast, ITU-T Y.1564 uses a defined ramp-up approach where each step
takes an exact amount of time. Link quality issues are quickly identified without
necessarily increasing test time because a pass/fail condition is based on the
KPI assessment during the step.
The ITU-T Y.1564 service subtest can generate all configured services at the
same time, providing the ability to stress network elements and data paths in
worst-case conditions. The service test provides powerful test results since all
KPIs are measured simultaneously for all services with clear pass/fail indication, as
well as identification of failed KPIs. This ensures that any failure or inconsistency
is quickly pinpointed and reported, again contributing to an efficient and more
meaningful test cycle.
2.3 Bit-Error-Rate Test (BERT)
BERT is a concept taken from the SONET/SDH world. In a BERT, a data stream is
sent through the communications medium and the resulting data stream is compared
with the original. Any changes are noted as data errors. BERT uses a pseudo-random
binary sequence (PRBS) encapsulated into an Ethernet frame, making it possible to
go from a frame-based error measurement to a BER (bit error rate) measurement.
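The PRBS mechanism can be sketched with a short generator and comparison. PRBS-7 (polynomial x^7 + x^6 + 1, a maximal-length LFSR with a period of 127 bits) is used here for brevity; field BERTs commonly use much longer patterns such as PRBS-31. The helper names are illustrative.

```python
# PRBS-7 generator and a naive bit-error-rate comparison.

def prbs7(seed=0x7F):
    """Yield the PRBS-7 bit stream from a 7-bit LFSR (x^7 + x^6 + 1)."""
    state = seed
    while True:
        bit = ((state >> 6) ^ (state >> 5)) & 1  # feedback taps
        state = ((state << 1) | bit) & 0x7F
        yield bit

def bit_error_rate(sent, received):
    """Fraction of bit positions that differ between the two streams."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

gen = prbs7()
pattern = [next(gen) for _ in range(127)]   # one full PRBS-7 period
corrupted = pattern.copy()
corrupted[10] ^= 1                          # inject a single bit error
print(bit_error_rate(pattern, corrupted))   # 1/127 ≈ 0.0079
```

Because the receiver regenerates the same deterministic sequence, any mismatch is unambiguously an error; this is what makes PRBS patterns so convenient for transparent-pipe testing.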
BERT still remains a very popular testing methodology because it is suited for
applications that are transparent to the transport medium and also because it has
been used for a long time and most telecom engineers and technicians are very
comfortable with it. However, if the network to be tested is switch-based and
includes overhead processing and error verification, the BER approach is not the
best one. This is because a network processing element will discard frames or
packets if an error is found, which means that most errors will never reach the test
equipment. These lost frames are more difficult to translate into a BER value.
2.4 Synchronization
Synchronization can be defined as the coordination of timekeeping among multiple
devices. For people outside of the telecom world, synchronization typically refers
to time synchronization, where one or more devices share the same time as a
reference clock, typically coordinated universal time (UTC); when synchronized,
two devices will have the proper time of day (ToD) in reference to the universal
time reference, regardless of their geographical location.
However for network engineers, synchronization has a very precise and critical use.
Telecom networks, such as SONET and SDH networks, are based on a synchronous
architecture, meaning that all data signals are synchronized and clocked using
virtually the same clock throughout. This ensures that all of the ports that carry
data do so at the same frequency or with very little offset, and therefore, network
throughput is deterministic and fixed for a specific transport rate.
However, the major weakness of PTP is due to its packet nature. As the
synchronization packets used by PTP are forwarded in the network between
grand master and hosts, they are subject to all the network events, such as
frame delay (latency), frame-delay variation (packet jitter) and frame loss. Even
with the best practice of applying high priority to synchronization flows, these
synchronization packets still experience congestion and possible routing and
forwarding issues, such as out-of-sequence delivery and route flaps. This means that
the host clock’s holdover circuit must be stable enough to maintain synchronization
when the synchronization packets experience network events.
• MTIE: A measurement based on the TIE data, designed to provide the maximum
peak-to-peak deviation of the TIE within an observation window as that window
is widened. Typically produced after processing the TIE data, the MTIE provides the
worst-possible TIE change within different observation windows and can be used
to predict the stability of the clock frequency over time.
• TDEV: Another measurement derived from the TIE data; it provides the average
phase variation of the clock by expressing the root mean square (RMS) of the
TIE variations over specific measurement windows. Because MTIE focuses on
the worst case, any peak variation limits the visibility of small variations. TDEV, on
the other hand, averages out the worst peak variations and provides a good indication
of the periodicities or TIE offsets. TDEV provides information about the short-term
stability of the clock and the random noise in the clock accuracy.
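A naive MTIE computation over a series of TIE samples can be sketched as follows: for a given observation window, MTIE is the largest peak-to-peak TIE swing found in any window of that size. The sample values are illustrative, and production analyzers use far more efficient algorithms than this O(n·w) sliding window.

```python
# Naive MTIE from TIE samples: worst peak-to-peak swing per window size.

def mtie(tie_samples, window):
    """Worst peak-to-peak TIE over all windows of `window` samples."""
    worst = 0.0
    for i in range(len(tie_samples) - window + 1):
        chunk = tie_samples[i:i + window]
        worst = max(worst, max(chunk) - min(chunk))
    return worst

tie = [0.0, 0.5, 0.2, -0.3, 0.1, 0.4]  # TIE in microseconds (example)
print(mtie(tie, 3))   # ≈ 0.8, from the 0.5 -> -0.3 swing
```

Evaluating `mtie` over a range of window sizes produces the familiar MTIE-versus-tau curve that is compared against synchronization masks.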
For such reasons, PTP testing involves not only the synchronization KPIs listed earlier
but also new ones such as:
• Frame delay (latency)
• Frame delay variation
• Frame loss
Measurements such as availability, frame delay, frame delay variation (jitter) and
frame loss enable identification of problems before they escalate so that users
are not impacted by network defects. Furthermore, these capabilities allow
the operators to offer binding SLAs and generate new revenues from rate- and
performance-guaranteed service packages that are tailored to the specific needs
of their customers.
Effective end-to-end service control also enables carriers to avoid expensive truck
rolls to locate and contain faults, thereby reducing maintenance costs.
Intrinsic OAM functionality is therefore essential in any carrier-class technology and
is a must-have capability in intelligent Ethernet network termination units.
• Fault detection: IEEE 802.1ag and ITU-T Y.1731 support fault detection
through continuity check messages (CCM). These allow endpoints to detect an
interruption in service. CCMs are sent from the source to the destination node
at periodic intervals; if either end does not receive a CCM within a specified
duration, then a fault is detected against the service.
• Fault verification: IEEE 802.1ag and ITU-T Y.1731 support fault verification
through loopback messages (LBM) and loopback reply (LBR).
• Fault isolation: IEEE 802.1ag and ITU-T Y.1731 support fault isolation through
linktrace messages (LTM) and linktrace reply (LTR). Under normal conditions,
it allows the operator to determine the path used by the service through the
network, while under fault conditions, it allows the operator to isolate the fault
location without making a site visit.
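The CCM-based fault detection above can be sketched with a simple timeout check: a maintenance endpoint declares a loss of continuity when no CCM has arrived from its peer within 3.5 times the CCM transmission interval, the threshold commonly used in 802.1ag implementations. The function name is illustrative.

```python
# CCM loss-of-continuity check (IEEE 802.1ag / ITU-T Y.1731 style).

LOSS_MULTIPLIER = 3.5  # typical 802.1ag loss threshold in CCM intervals

def continuity_ok(last_ccm_time, now, ccm_interval):
    """True while the peer endpoint is still considered alive."""
    return (now - last_ccm_time) <= LOSS_MULTIPLIER * ccm_interval

# CCMs every 1 s; last one seen at t = 10 s:
print(continuity_ok(10.0, 12.0, 1.0))   # True: within 3.5 s
print(continuity_ok(10.0, 14.0, 1.0))   # False: fault declared
```

Once the fault is declared, the loopback (LBM/LBR) and linktrace (LTM/LTR) mechanisms described above are used to verify and isolate it.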
• Frame loss ratio: This represents the percentage of the traffic that has been
lost; it is the percentage ratio of the traffic not received versus the traffic that
was sent.
• Frame delay (latency): Two types of delay are measured: one-way delay
represents how long it takes traffic to go from one end of the network to the other,
whereas two-way delay represents the round-trip duration from one end to the
other and back.
• Frame delay variation: This is also referred to as jitter; it represents the variation
between different delay measurements.
Another standard used for OAM is 802.3ah, which is a complete standard for
Ethernet in the first mile but also contains a link-level (as opposed to service-level)
OAM mechanism. 802.3ah detects link failures on both bidirectional and
unidirectional links (link monitoring). Once a failure is detected, it can set a
device in Loopback mode to verify the link when it recovers.
The emergence of carrier-grade Ethernet has driven the need for improved
Ethernet OAM functionality. Ethernet OAM allows the exchange of management
information from the network elements to the management layer. Without this
capability, it is impossible to provide the comprehensive network management
tools that operators have today in their TDM networks. The combination of IEEE
802.1ag and ITU-T Y.1731 provides powerful fault management and performance
monitoring capabilities to Ethernet.