FibeAir® IP-10G
Product Description for i7.1.2
June 2014
Hardware Release: R2 and R3
Software Release: i7.1.2
Document Revision B.03
Notice
This document contains information that is proprietary to Ceragon Networks Ltd. No part of this
publication may be reproduced, modified, or distributed without prior written authorization of
Ceragon Networks Ltd. This document is provided as is, without warranty of any kind.
Trademarks
Ceragon Networks®, FibeAir® and CeraView® are trademarks of Ceragon Networks Ltd.,
registered in the United States and other countries.
Ceragon® is a trademark of Ceragon Networks Ltd., registered in various countries.
CeraMap™, PolyView™, EncryptAir™, ConfigAir™, CeraMon™, EtherAir™, CeraBuild™, CeraWeb™,
and QuickAir™, are trademarks of Ceragon Networks Ltd.
Other names mentioned in this publication are owned by their respective holders.
Statement of Conditions
The information contained in this document is subject to change without notice. Ceragon
Networks Ltd. shall not be liable for errors contained herein or for incidental or consequential
damage in connection with the furnishing, performance, or use of this document or equipment
supplied with it.
Information to User
Any changes or modifications of equipment not expressly approved by the manufacturer could
void the user’s authority to operate the equipment and the warranty for such equipment.
Table of Contents
1. Synonyms and Acronyms
2. Introduction
2.1 Product Overview
2.2 IP-10G Advantages
2.2.1 Efficient Utilization of Spectrum Assets
2.2.2 Spectral Efficiency
2.2.3 Radio Link
2.2.4 Wireless Network
2.2.5 Scalability
2.2.6 Availability
2.2.7 Network Level Optimization
2.2.8 Network Management
2.2.9 Power Saving Mode with High Power Radio
2.3 Functional Block Diagrams
2.4 Nodal Configuration Option
2.4.1 Nodal Configuration Benefits
2.4.2 Nodal Design
2.4.3 Nodal Enclosure Design
2.4.4 Nodal Management
2.4.5 Centralized System Features in a Nodal Configuration
2.4.6 Ethernet Connectivity in a Nodal Configuration
2.5 Solution Overview
2.6 System Overview
4. Hardware Description
4.1 Hardware Architecture
4.2 Front Panel Description
4.3 Ethernet Interfaces
4.3.1 GbE Interfaces
4.3.2 100Base-FX Support
4.4 Management Interfaces
4.5 Link Aggregation (LAG)
4.5.1 Creating a LAG Group
4.5.2 Adding Ports to a LAG Group
4.5.3 Removing Ports from a LAG Group
4.6 TDM Interface Options
4.7 Radio Interface
5. Licensing
5.1 License Overview
5.2 Working with License Keys
5.3 Licensed Features
List of Figures
Functional Block Diagram
Native2 2+2/XPIC/Multi-Radio MW Link, with 2xSTM-1 Mux (up to 150 E1s over the radio)
Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM-1 Mux
Node with 2 x 1+0 Downlinks and 1 x 1+1 HSB Uplink
Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM-1 Mux
Native2 Ring with 3 x 1+0 Links + STM-1 Mux Interface at Main Site
Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at Main Site
Node with 1 x 1+1 HSB Downlink and 1 x 1+1 HSB Uplink with STM-1 Mux
Native2 Ring with 4 x 1+0 Links, with STM-1 Mux
Native2 Ring with 3 x 1+0 Links + Spur Link 1+0
Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link (5 hops total), with STM-1 Mux
Native2 Ring with 2 x 2+0/XPIC MW Links and 1 x Fiber Link (3 hops total), with 2 x STM-1 Mux
Integrated IP-10G Management Tools
List of Tables
FibeAir IP-10 Series Overview
1+1 HSB with 84 E1 Components (Each Side of the Link)
1+1 HSB Link with 16 E1s + STM-1 Components (Each Side of the Link)
Native2 2+2/XPIC/Multi-Radio MW Link, with 2xSTM-1 Components (Each Side of the Link)
Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM-1 Mux Components (Entire Chain)
Node with 2 x 1+0 Downlinks and 1 x 1+1 HSB Uplink Components (Entire Node)
Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM-1 Mux Components (Entire Chain)
Native2 Ring with 3 x 1+0 Links + STM-1 Mux Interface at Main Site Components (Entire Ring)
Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at Main Site Components (Entire Ring)
Node with 1 x 1+1 HSB Downlink and 1 x 1+1 HSB Uplink with STM-1 Mux Components (Entire Node)
Native2 Ring with 4 x 1+0 Links, with STM-1 Components (Entire Ring)
Native2 Ring with 3 x 1+0 Links + Spur Link 1+0 Components (Entire Ring)
Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link with STM-1 Mux Components (Entire Ring)
Native2 Ring with 2 x 2+0/XPIC MW Links and 1 x Fiber Link with 2 x STM-1 Components (Entire Ring)
Dedicated Management Ports
PolyView Server Receiving Data Ports
Target Audience
This manual is intended for use by Ceragon customers, potential customers,
and business partners. Its purpose is to provide basic information about the
FibeAir IP-10G for use in system planning and in determining which FibeAir
IP-10G configuration is best suited for a specific network.
Related Documents
FibeAir IP-10G Installation Guide, DOC-00023199
FibeAir IP-10G and IP-10E User Guide, DOC-00034612
FibeAir IP-10 MIB Reference, DOC-00015446
Ceragon License Management System, DOC-00019183
FibeAir CeraBuild Commission Reports Guide, DOC-00028133
WG Waveguide
WFQ Weighted Fair Queue
WRED Weighted Random Early Detection
WRR Weighted Round Robin
XC Cross-Connect
XPIC Cross Polarization Interference Cancellation
2. Introduction
This chapter includes:
Product Overview
IP-10G Advantages
Functional Block Diagrams
Nodal Configuration Option
Solution Overview
System Overview
2.2.5 Scalability
FibeAir IP-10G is a scalable solution based on common hardware that supports
any channel size, modulation scheme, capacity, network topology, and
configuration. Scalability and hardware efficiency simplify logistics and
allow for commonality of spare parts. A common hardware platform enables
customers to upgrade the feature set as the need arises (Pay As You Grow)
without requiring hardware replacement.
2.2.6 Availability
MTBF – FibeAir IP-10G provides an unrivaled reliability benchmark, with
radio MTBF exceeding 112 years and an average annual return rate of around
1%. Ceragon radios are designed in-house and employ cutting-edge
technology with unmatched production yield and a mature installed base
exceeding 100,000 radios. In addition, advanced radio features such as
Multi-Radio and cross polarization (XPIC) enable the system to achieve
100% utilization of radio resources by load balancing based on
instantaneous capacity per carrier. Important resulting advantages are a
reduction in capital expenditures, since fewer spare parts are required for
roll-out, and a reduction in operating expenditures, since maintenance and
troubleshooting are infrequently required.
The CPU acts as the IDU’s central controller, and all management frames
received from or sent to external management applications must pass through
the CPU. In a nodal configuration, the main unit’s CPU serves as the central
controller for the entire node.
The Mux assembles the radio frames, and holds the logic for protection, as
well as Frequency and Space Diversity.
The modem represents the physical layer, modulating, transmitting, and
receiving the data stream.
Note: CPU and memory utilization can be monitored by users via
the CLI or SNMP. This can be useful for troubleshooting.
Each nodal enclosure includes a backplane. The rear panel of an IP-10G IDU
includes an extra connector for connection to the backplane. The following
interfaces are implemented through the backplane:
TDM Cross-Connect
Multi-Radio
Protection
XPIC
You can add additional extension nodal enclosures and IDUs in the field as
required, without affecting traffic. Replacing an IDU or an extension unit does
affect traffic.
Using the stacking method, units in the bottom nodal enclosure act as main
units, whereby a mandatory active main unit can be located in either of the
two slots, and an optional standby main unit can be installed in the other slot.
The switchover time is <50 ms for all traffic-affecting functions. Units located
in nodal enclosures other than the one on the bottom act as expansion units.
Radios in each pair of units can be configured as either dual independent 1+0
links, or single fully redundant 1+1 HSB links.
The Web-Based EMS enables access to all IDUs in the node from its main
window.
In addition, the management system provides access to other network
equipment through In-Band or Out-of-Band network management.
To ease reading and analysis of alarms and logs from several IDUs, the
system time should be synchronized to the main unit's time.
IP-10G is fully MEF-9 and MEF-14 certified for all Carrier Ethernet services (E-
Line and E-LAN). IP-10G also supports TDM trails, and provides end-to-end
service management, with OAM that includes 802.1ag CFM and automatic
"link trace" processing for storing the last known working path.
IP-10G End-to-End Service Management
Together with the other FibeAir IP-10 products, IP-10G provides an optimal
solution for all split-mount tail and node sites, with IP-10G’s smart
pseudowire T-Card used selectively to provide an all-packet solution for
legacy TDM islands in the network. IP-10E provides a solution for all-packet
networks, while IP-10C provides the ideal option for all-outdoor all-Ethernet
sites.
Integrated Hybrid/All-Packet Solution Using FibeAir IP-10 Products
The following table includes enhancements that have been added since
version i6.9.0.

Feature | R2 | R3
SyncE Support | SyncE output only | SyncE input and output; SyncE regenerator support for Smart Pipe mode
Ethernet Header Compression | Layer 1 Header Suppression; Legacy MAC Header Compression | Same as R2, with a license-enabled option for Multi-Layer (Enhanced) Header Compression
Enhanced QoS | Standard and Enhanced QoS | Additional Enhanced QoS features: MEF 10.2-compliant traffic policers for SLA enforcement, dual-rate (CIR + EIR) per VLAN/CoS; enhanced monitoring and SLA assurance, with per VLAN/CoS statistics and improved traffic queue statistics
Utilization Statistics | - | Improved accuracy for radio throughput and link utilization statistics
4. Hardware Description
This chapter includes:
Hardware Architecture
Front Panel Description
Ethernet Interfaces
Management Interfaces
Link Aggregation (LAG)
TDM Interface Options
Radio Interface
Power Interfaces
Additional Interfaces
Front Panel LEDs
External Alarms
Front Panel Additional Interfaces
IP-10G Interfaces
Part Number | Item Description | Manufacturer Name | Manufacturer PN
AO-0049-0 | XCVR,SFP,850nm,1.25Gb,MM,500M,W.DDM | PHOTON | PST120-51TP+
AO-0049-0 | XCVR,SFP,850nm,1.25Gb,MM,500M,W.DDM | Wuhan Telecom. Devices (WTD) | RTXM191-551
AO-0049-0 | XCVR,SFP,850nm,1.25Gb,MM,500M,W.DDM | CORETEK (*) | CT-1250NSP-SB1L
AO-0049-0 | XCVR,SFP,850nm,1.25Gb,MM,500M,W.DDM | Fiberxon | FTM-8012C-SLG
AO-0037-0 | XCVR,SFP,1310nm,1.25Gb,SM,10km | Wuhan Telecom. Devices (WTD) | RTXM191-401
AO-0037-0 | XCVR,SFP,1310nm,1.25Gb,SM,10km | CORETEK (*) | CT-1250TSP-MB4L-A
AO-0037-0 | XCVR,SFP,1310nm,1.25Gb,SM,10km | Fiberxon | FTM-3012C-SLG
AO-0037-0 | XCVR,SFP,1310nm,1.25Gb,SM,10km | AGILENT | AFCT-5710PZ

(*) Electrically, these SFP modules work properly, but they tend to get
mechanically stuck in the IP-10 cage.

Part Number | Item Description | Manufacturer Name | Manufacturer PN
AO-0072-0 | XCVR,SFP S1.1 | Wuhan Telecom. Devices (WTD) | WTD-RTXM139-400
16 X E1 T-Card
5. Licensing
This chapter includes:
License Overview
Working with License Keys
Licensed Features
License Types
6. Feature Description
This chapter includes:
Equipment Protection
Ethernet Line Protection
Capacity and Latency
Radio Features
Ethernet Features
Quality of Service (Traffic Manager)
TDM Solution
Synchronization
Related topics:
Ethernet Line Protection
Smart TDM Pseudowire Path Protection
Floating IP Address
STM-1 T-Card Protection
2+2 HSB with Multi-Radio 4 4 2 2 Full protection for TDM trails. Optional Optional No

Table footnotes:
1. ACM is not supported when BBS (SD/FD) is used.
2. With graceful degradation.
3. With graceful degradation.
4. Protection can optionally be provided using the SNCP/ABR mechanism. This is done by
defining a primary TDM trail over one radio carrier and a secondary trail over the other radio
carrier. The secondary trail backs up the primary trail in the event of any failure (assuming
the main IDU performing the node TDM XC is functional).
5. With graceful degradation.
6. With graceful degradation.
7. ACM support is only provided for Ethernet traffic, not for TDM trails.
Related topics:
Adaptive Coding Modulation (ACM)
A 1+1 configuration scheme can be used to provide full protection in the event
of IDU or RFU failure. The two IDUs operate in active and standby mode. If
there is a failure in the active IDU or RFU, the standby IDU and RFU pair
switches to active mode. TDM trails are duplicated in the active and standby
IDUs, so that both Ethernet and TDM traffic is protected.
In a 1+1 configuration, the protection options are as follows:
Standalone – The IDUs must be connected by a dedicated Ethernet
protection cable. Each IDU has a unique IP address.
Nodal – The IDUs are connected by the backplane of the nodal enclosure.
There is one IP address for each of the main units.
1+1 HSB can be used with BBS Space or Frequency Diversity.
The following figure illustrates a 1+1 HSB configuration in a standalone setup,
with an Ethernet protection cable connecting the two IDUs via their Protection
ports.
1+1 HSB Protection – Connecting the IDUs
The following figure shows an example of a 1+1 HSB nodal configuration used
in an IP-10G 3 x 1+1 aggregation site. In this example, the node includes the
following components:
One main nodal enclosure with two IDUs
One configured as Main
The other configured as Protected
One extension nodal enclosure with two IDUs configured as Extension
One extension nodal enclosure with one IDU configured as Extension
3 x 1+1 Aggregation Site
(Diagram: each side of the link uses a coupler, with the primary RFU on the
main path and the secondary RFU on the -6dB coupling path.)
The non-revertive HSB protection mechanism does not provide any means to
prioritize the primary path over the secondary path. When installing the
system, it is the technician’s responsibility to manually ensure that the
primary path (with less path loss) is active. However, protection switches may
occur during maintenance periods or as a result of link loss caused by bad
weather or other factors. The objective of the revertive HSB mechanism is to
ensure that the primary path is active whenever link and equipment
conditions permit.
Revertive mode is only relevant for 1+1 HSB protection.
The advantage of using revertive HSB mode is that the radio link budget will
benefit from additional gain whenever it is possible to activate the primary
path.
The one drawback of revertive HSB mode is that each protection switch
causes a 50 ms traffic disruption. However, the IP-10G revertive protection
mechanism enables users to minimize traffic disruption by limiting the
number and frequency of revertive protection switchovers.
In revertive HSB protection mode, the user defines the "primary" and
"secondary" IDUs on each side of the link. The primary IDU should be the IDU
connected to the RFU on the coupler's main path, and the secondary IDU
should be the IDU connected to the RFU on the coupling path.
The system monitors the availability of the primary path at all times.
Whenever the primary path is operational and available, without any alarms,
but the secondary path is active, the system initiates a revertive protection
switch. Every revertive protection switch is recorded as an event in the event
log.
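The revertive decision described above can be sketched as a small check that runs whenever the primary path's status changes. This is a minimal illustration under assumed names, not Ceragon's implementation:

```python
def revertive_check(active_path, primary_alarms, event_log):
    """Return the path that should be active under revertive HSB rules.

    Whenever the primary path is operational and available (no alarms)
    but the secondary path is active, a revertive protection switch is
    initiated and recorded in the event log.
    """
    if active_path == "secondary" and not primary_alarms:
        event_log.append("revertive protection switch: secondary -> primary")
        return "primary"
    return active_path


log = []
assert revertive_check("secondary", [], log) == "primary"        # primary healthy: revert
assert revertive_check("secondary", ["LOF"], log) == "secondary" # primary alarmed: stay
assert len(log) == 1  # only the revertive switch was logged
```

The switch is one-directional: failures on the primary path are handled by the normal (non-revertive) protection mechanism, while this check only moves traffic back when the primary recovers.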
6.1.3 2+0 Multi-Radio and 2+0 Multi-Radio with IDU and Line
Protection
Related topics:
Multi-Radio
Nodal Configuration Option
Wireless SNCP
2+0 Multi-Radio provides a significant degree of protection, in addition to
doubling capacity by enabling two separate radio carriers to be shared by a
single Ethernet port. In the event of an RFU failure, or a failure of the slave
IDU, one RFU and IDU remain in operation with graceful degradation of
service: rather than all data being lost, bandwidth is reduced. However, if
the master IDU fails, traffic and management access are lost.
The IDU and line protection option increases protection to the master IDU. If
there is a failure in the master IDU, the slave IDU becomes the master, and
continues to provide service. Thus, a 2+0 Multi-Radio configuration with IDU
and line protection provides protection for the failure of any IDU or RFU in the
node.
The IDU and line protection feature protects Ethernet traffic. It also protects
management of the node, since node management is handled by the master
IDU. Graceful degradation is provided with the help of IP-10G’s integrated QoS
mechanism, which ensures that high-priority traffic is maintained in the event
of reduced bandwidth.
Notes: TDM traffic is not protected in Multi-Radio, either with or
without line protection. However, TDM protection can be
provided by duplicating each TDM trail in both radio
channels using SNCP. The primary trail is defined in the
master IDU, and the secondary trail is defined in the slave
IDU. TDM trails are not supported when Multi-Radio with
line protection is active in ACM adaptive mode.
When using Multi-Radio with IDU and line protection, ACM
is supported for Ethernet traffic, but not for TDM trails.
(Diagram: 2+0 Multi-Radio with IDU and line protection. Ethernet and TDM
traffic flows enter through the line interfaces of the active IDU and pass
through the Cross-Connect (XC) module.)
Orange lines represent the Ethernet traffic flow, while blue lines represent
TDM traffic flow. The active IDU holds the line interfaces for Ethernet traffic,
the line interfaces for TDM traffic, and the interface with the Cross-Connect
module. The active IDU acts as a Multi-Radio master unit by distributing the
Ethernet traffic between its own radio channel and the radio channel of its
mate. At the receive side of the link, the active IDU combines the data from
both radio channels to create a single Ethernet stream. When a protection
switch occurs, the new active IDU also becomes the Multi-Radio master unit.
The following events will cause a protection switchover:
GbE line Loss of Carrier (LOC)
TDM interface Loss of Signal (LOS)
STM-1 LOS
User manual switch
Note: Radio failure or BER in the radio channel will not cause a
protection switchover. Multi-Radio protects against radio
channel failure by blocking the defective radio.
Related topics:
Nodal Configuration Option
2+2 HSB protection provides full redundancy between two pairs of IDUs. Each
pair is a 2+0 link, which can be configured for XPIC or in different frequencies.
If there is a failure in one of these pairs, the other pair takes over.
A 2+2 protection scheme must be implemented by means of a nodal
configuration. Each pair is inserted into its own main nodal enclosure, with a
protection cable to connect the main IDUs (in slot 1) in each pair. Protection is
performed between the pairs. At any given time, one pair is active and the
other is standby.
A 2+2 configuration scheme is only possible between units in a main nodal
enclosure (slots 1 and 2). Extension nodal enclosures (slots 3 – 6) cannot be
used in a 2+2 configuration.
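The slot constraint above can be captured in a one-line check. The slot numbering is taken from the text; the function name is invented for the sketch:

```python
MAIN_SLOTS = {1, 2}              # main nodal enclosure
EXTENSION_SLOTS = {3, 4, 5, 6}   # extension enclosures: not usable for 2+2


def valid_2plus2_pair(slot_a, slot_b):
    """A 2+2 scheme pairs the two units of a main nodal enclosure
    (slots 1 and 2); extension slots cannot participate."""
    return {slot_a, slot_b} == MAIN_SLOTS


assert valid_2plus2_pair(1, 2)
assert not valid_2plus2_pair(1, 3)  # extension slot cannot participate
```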
2+2 protection can be used together with XPIC and/or Multi-Radio. The
following figure illustrates a 2+2 configuration with both XPIC and Multi-
Radio. The RFUs marked V are set to vertical polarization, while the RFUs
marked H are set to horizontal polarization.
In a 2+2 configuration, the lower IDU in each pair is a master unit, and does
the following:
Sends and receives traffic to and from the user through line interfaces.
Receives protection information from the slave unit in the pair.
The Ethernet line protection schemes are: Hardware Protection with Single
Interface; Full Protection with Dual Interface Using Optical Splitter; Full
Protection Using Optical Splitters and LAG; and Multi-Unit LAG.
All of these line protection methods are available for any of the following
configurations:
1+1 HSB
2+0 Multi Radio with IDU and Line Protection
2+2 Multi-Radio
All BBS diversity configurations
The following table compares the advantages and limitations of the Ethernet
line protection schemes described in this section.
Ethernet Line Protection Comparison
Related topics:
Link Aggregation (LAG)
Ethernet Switching
Diversity
With Multi-Unit LAG, the switch or router relates to two IDUs as a single
device. There is no need for splitters, and Multi-Unit LAG can be used to
protect either the electrical GbE ports or the optical GbE ports. In contrast,
splitters can only be used to protect optical GbE ports or Fast Ethernet ports.
Multi-Unit LAG can only be used in Smart Pipe mode. The service disruption
time in case of failure in one of the LAG physical ports is less than 50ms in
most cases.
An IP-10G system using Multi-Unit LAG has dual (redundant) GbE interfaces.
Each of these interfaces is connected to a separate interface on an external
switch or router. The IP-10G interfaces are active and enabled on both the
active (master) unit and the standby (slave) unit. On the external unit, a
static LAG must be configured on the interfaces that are connected to the IDUs.
If the IP-10G IDUs are in Multi-Radio mode with IDU and line protection, any
link failure triggers graceful degradation and is transparent to the external
unit. If an IDU itself experiences unit failure, the interface to which it is
connected on the external unit is disabled. If the disabled IDU is the standby
unit, or if it is the active unit and Multi-Radio with IDU and line protection is
enabled, the functioning IDU maintains connectivity with the external unit via
the interface to which the functioning IDU is connected.
Multi-Unit LAG is supported with any of the following protection features:
1+1 HSB
1+1 Space or Frequency Diversity
2+2 HSB
2+0 Multi Radio with line protection
Multi-Unit LAG is supported in both standalone and nodal configurations.
Multi-Unit LAG supports both electrical and optical interfaces.
The following table describes the behavior of Multi-Unit LAG Ethernet line
protection in various failure scenarios.
Multi-Unit LAG Failure Scenarios
Scenario | Reaction
Failure in port1 in the active unit | Initiate protection switchover.
Failure in port1 in the standby unit | The LAG protocol on the external switch recognizes the port failure and uses the second LAG port (the one connected to the active IDU). No protection switchover is initiated.
Failure in the mirroring port | The standby unit shuts down Eth1 to indicate failure to the external switch. After resolving the failure, the standby unit reopens port1 automatically. No protection switchover is initiated.
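The failure handling above can be summarized as a lookup table. This is an illustrative encoding; the scenario names are made up for the sketch:

```python
# Multi-Unit LAG failure scenarios -> system reaction, per the table above.
LAG_REACTIONS = {
    "port1_failure_on_active":
        "initiate protection switchover",
    "port1_failure_on_standby":
        "external switch uses the LAG port connected to the active IDU; no switchover",
    "mirroring_port_failure":
        "standby unit shuts down Eth1, reopens it after repair; no switchover",
}


def lag_reaction(scenario):
    """Look up the documented reaction for a failure scenario."""
    return LAG_REACTIONS.get(scenario, "no action")


assert lag_reaction("port1_failure_on_active") == "initiate protection switchover"
assert lag_reaction("port1_failure_on_standby").endswith("no switchover")
```

Note that only a port failure on the active unit triggers a switchover; the other two scenarios are absorbed by the external switch's LAG behavior.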
(Diagram: Layer 1 header suppression and Legacy MAC header compression.
An uncompressed frame carries a 20-byte L1 header, 12B Inter-Frame Gap (IFG)
plus 8B Preamble, followed by the L2 MAC header: 6B MAC DA, 6B MAC SA, an
optional 2B 0x88A8 EtherType with 2B S-VLAN tag, an optional 2B 0x8100
EtherType with 2B C-VLAN tag, and a 2B 0x0800/0x86DD EtherType, then the
optional L3/L4 headers, the payload, and a 4B CRC. Over the radio, the 20B
L1 header is replaced by a 4B GFP header; with Legacy MAC header
compression, the L2 MAC header is additionally replaced by a 1B Flow ID.)
Related topics:
Licensing
Multi-Layer (Enhanced) header compression identifies traffic flows and
replaces the header fields with a "flow ID". This is done using a sophisticated
algorithm that learns unique flows by looking for repeating frame headers in
the traffic stream over the radio link and compressing them. The principle
underlying this feature is that packet headers in today’s networks use a long
protocol stack that contains a significant amount of redundant information.
In Enhanced Compression mode, the user can determine the depth to which
the compression mechanism operates, from Layer 2 to Layer 4. Operators
must balance the depth of compression against the number of flows in order
to ensure maximum efficiency. Up to 256 concurrent flows are supported.
Up to 68 bytes of the L2-L4 header can be compressed. In addition, Layer 1
header suppression is also performed, replacing the IFG and Preamble fields
(20 bytes) with a GFP header.
Multi-layer header compression can be used to compress the following types
of header stacks:
Ethernet MAC untagged, with IPv4 (TCP or UDP), IPv6 (TCP or UDP), or MPLS
Ethernet MAC + VLAN, with IPv4 (TCP or UDP), IPv6 (TCP or UDP), or MPLS
Ethernet MAC with QinQ, with IPv4 (TCP or UDP), IPv6 (TCP or UDP), or MPLS
PBB-TE
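As a rough arithmetic illustration of the savings described above: the 4-byte GFP header and 1-byte flow ID sizes are taken from the compression diagrams, while the frame and header sizes in the example are assumptions for the sketch.

```python
def bytes_on_air(frame_len, l234_header_len, compress=True):
    """Approximate per-frame bytes over the radio.

    Uncompressed: 20B of L1 overhead (12B IFG + 8B Preamble) + the frame.
    Compressed:   the L1 overhead is replaced by a 4B GFP header, and up
                  to 68B of the L2-L4 header stack by a 1B flow ID.
    """
    L1_OVERHEAD = 12 + 8          # IFG + Preamble
    GFP_HEADER, FLOW_ID = 4, 1
    if not compress:
        return L1_OVERHEAD + frame_len
    compressible = min(l234_header_len, 68)
    return GFP_HEADER + (frame_len - compressible) + FLOW_ID


# A 128-byte frame with an Ethernet + IPv4 + TCP stack (14 + 20 + 20 = 54B):
full = bytes_on_air(128, 54, compress=False)   # 148 bytes on air
short = bytes_on_air(128, 54)                  # 79 bytes on air
assert (full, short) == (148, 79)              # ~47% fewer bytes for this frame
```

For short frames the header stack dominates the frame, which is why this feature matters most for small-packet traffic such as VoIP.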
The following figure provides a detailed diagram of how the frame structure is
affected by Multi-Layer (Enhanced) header compression.
Multi-Layer (Enhanced) Header Compression
(Diagram: before compression, the frame carries the 20B L1 header (12B IFG
plus 8B Preamble), the L2 MAC header with optional VLAN tags and the
0x0800/0x86DD EtherType, a 24/40B IPv4/IPv6 L3 header, an 8/28B UDP/TCP
L4 header, the payload, and a 4B CRC. After Multi-Layer (Enhanced) header
compression, the L1 header is replaced by a 4B GFP header and the L2-L4
headers by a compressed header carrying the Flow ID, followed by the
payload and CRC.)
6.3.3 Latency
IP-10G provides best-in-class latency (RFC-2544) for all channels, making it
LTE (Long-Term Evolution) ready:
<0.21 ms for 28/56 MHz channels (1518-byte frames)
<0.4 ms for 14 MHz channels (1518-byte frames)
<0.9 ms for 7 MHz channels (1518-byte frames)
Related topics:
Scheduling and Shaping
Frame Cut-Through is a unique and innovative feature that ensures low
latency for delay-sensitive services, such as CES, VoIP, and control protocols.
With Frame Cut-Through, high-priority frames are pushed ahead of lower
priority frames, even if transmission of the lower priority frames has already
begun. Once the high priority frame has been transmitted, transmission of the
lower priority frame is resumed with no capacity loss and no re-transmission
required. This provides operators with:
Immunity to head-of-line blocking effects – key for transporting high-
priority, delay-sensitive traffic.
Reduced delay-variation and maximum-delay over the link:
Reduced end-to-end delay for TDM pseudowire services.
Improved QoE for VoIP and other streaming applications.
Expedited delivery of critical control frames.
Propagation Delay with and without Frame Cut-Through
When enabled, Frame Cut-Through applies to all the high priority frames, i.e.,
all frames that are classified to a CoS queue with 4th (highest) priority.
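The latency benefit of pushing a high-priority frame ahead of an in-flight lower-priority frame can be illustrated with a toy head-of-line model. The line rate and frame size in the example are assumptions, not IP-10G's measured figures:

```python
def hp_wait_seconds(inflight_bytes_remaining, line_rate_bps, cut_through):
    """Extra wait seen by a highest-priority frame that arrives while a
    lower-priority frame is being transmitted.

    With Frame Cut-Through the in-flight frame is preempted (and later
    resumed without retransmission), so the wait is zero; without it, the
    high-priority frame waits out the residual serialization time.
    """
    if cut_through:
        return 0.0
    return inflight_bytes_remaining * 8 / line_rate_bps


# A 1518-byte frame just started serializing on a 100 Mbps channel:
wait = hp_wait_seconds(1518, 100e6, cut_through=False)
assert abs(wait - 121.44e-6) < 1e-9          # ~121 us of head-of-line blocking
assert hp_wait_seconds(1518, 100e6, cut_through=True) == 0.0
```

The worst-case blocking scales with the largest low-priority frame on the link, which is what makes cut-through valuable for delay-variation-sensitive services.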
Frame Cut-Through Operation
To activate an asymmetrical script, the user must upgrade the uplink script
(narrow TX, wide RX) at one end of the link, and upgrade the downlink script
(wide TX, narrow RX) at the other end of the link. This operation requires a
reset. To avoid loss of management to the remote site, it is recommended to
upgrade the remote site first.
Related topics:
ACM with Adaptive Transmit Power
ACM for TDM Services
Quality of Service (Traffic Manager)
Cross Polarization Interference Canceller (XPIC)
1+1 HSB Protection
Radio Traffic Priority
FibeAir IP-10G employs full-range dynamic ACM. IP-10G’s ACM mechanism
copes with 90 dB per second fading in order to ensure high transmission
quality. IP-10G’s ACM mechanism is designed to work with IP-10G’s QoS
mechanism to ensure that high priority voice and data packets are never
dropped, thus maintaining even the most stringent service level agreements
(SLAs).
The hitless and errorless functionality of IP-10G’s ACM has another major
advantage in that it ensures that TCP/IP sessions do not time-out. Without
ACM, even interruptions as short as 50 milliseconds can lead to timeout of
TCP/IP sessions, which are followed by a drastic throughput decrease while
these sessions recover.
When activating an ACM script together with 1+1 HSB protection, if an LOF
alarm is raised, both the active and the standby receivers degrade to the
lowest available profile (highest RX sensitivity). Because RX sensitivity is very
high, the receivers may have false lock, which will result in a switchover. If the
LOF alarm remains, protection switchovers may appear alternately every one
second. This may cause management instability and may even prevent
management access to the units completely.
In order to avoid this scenario, it is important to carefully follow the
instructions for setting up 1+1 HSB protection. In particular, make sure that
the link is established with lockout configuration in order to avoid alternate
switchovers. Once the link is up and running, lockout can be disabled.
The following ACM behavior should be expected in a 1+1 configuration:
In the TX direction, the Active TX will follow the remote Active RX ACM
requests (according to the remote Active Rx MSE performance).
The Standby TX might have the same profile as the Active TX, or might stay
at the lowest profile (profile-0). That depends on whether the Standby TX
was able to follow the remote RX Active unit’s ACM requests (only the
active remote RX sends ACM request messages).
In the RX direction, both the active and the standby units follow the
remote Active TX profile (which is the only active transmitter).
For this feature to be used effectively, it is essential for the operator not to
breach any regulator-imposed EIRP limitations. In particular, when using this
feature, the operator must license the system for the maximum possible EIRP.
The Adaptive Transmit Power feature, together with ACM, can work in one out
of two scenarios:
Increase capacity (increase throughput of existing link) – With the option
to use Adaptive TX Power.
Increase availability (new link) – Adaptive TX Power is not applicable.
The first scenario is for operators that have existing PDH links with several
links in a low class (modulation order), and want to use ACM to carry the same
PDH circuits with additional Ethernet traffic without occupying more
spectrum bandwidth.
The second scenario is for operators who plan a new link for a specific
availability and capacity, but want to use the ACM capability to keep the link
operating, at reduced capacity, even during deep fades.
In the first scenario the operator must plan the link according to a “low class”
channel mask. When radio path conditions allow, the link will increase the
modulation. This modulation increase may require lowering the output power
(see figure below), in order to decrease the non-linearity of the transmitter for
the higher constellations and in order for the transmitted spectrum to stay
within the licensed “low class” channel mask. The following figure
demonstrates the differences between a “low class” mask (e.g., class 2) and a
“high class” mask (e.g., class 5).
Channel Mask Comparison
Related topics:
Adaptive Coding Modulation (ACM)
Quality of Service (Traffic Manager)
Since radio bandwidth may vary in ACM, situations may arise in which it is
necessary to drop some of the outgoing traffic. The system dynamically
allocates bandwidth to traffic according to user-defined priorities.
At the radio level, the system can discern between the following types of
traffic:
High-priority Ethernet traffic
Low-priority Ethernet traffic
High-priority TDM trails
Low-priority TDM trails
Users can configure the following parameters:
The amount (in Mbps) of high priority Ethernet Bandwidth
For each TDM trail, whether it is high or low priority
The priority order between the different types of traffic. The following
schemes are available (from high to low priority):
High-TDM-over-high-Ethernet, meaning:
1. TDM high priority
2. Ethernet high priority
3. TDM low priority
4. Ethernet low priority
High-Ethernet-over-TDM, meaning:
1. Ethernet high priority
2. TDM high priority
3. TDM low priority
4. Ethernet low priority
TDM-over-Ethernet (default), meaning:
1. TDM high priority
2. TDM low priority
3. Ethernet
For this mechanism to work properly, both sides of the link should be
identically configured:
Each TDM trail on both sides of a link should be assigned the same
priority.
Both sides of the link should have the same amount of high priority
Ethernet bandwidth.
Both sides of the link should use the same priority scheme.
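The priority schemes above can be modeled as an ordered allocation of the momentary radio bandwidth. The sketch below is a simplified model of ours, not the device's actual implementation; the class names and scheme keys are assumptions.

```python
# Hypothetical model of radio traffic priority: allocate the available radio
# bandwidth to traffic classes in the configured order; traffic that does not
# fit is dropped. Class and scheme names are ours, not Ceragon's.

SCHEMES = {
    "high-tdm-over-high-eth": ["tdm_high", "eth_high", "tdm_low", "eth_low"],
    "high-eth-over-tdm":      ["eth_high", "tdm_high", "tdm_low", "eth_low"],
    "tdm-over-eth":           ["tdm_high", "tdm_low", "eth"],  # default
}

def allocate(available_mbps, demand_mbps, scheme="tdm-over-eth"):
    """Return per-class allocation (Mbps) under the given priority scheme."""
    alloc, left = {}, available_mbps
    for cls in SCHEMES[scheme]:
        alloc[cls] = min(demand_mbps.get(cls, 0), left)
        left -= alloc[cls]
    return alloc

demand = {"tdm_high": 30, "tdm_low": 40, "eth": 80}
print(allocate(100, demand))  # {'tdm_high': 30, 'tdm_low': 40, 'eth': 30}
```

When ACM reduces the radio bandwidth, the lowest-priority classes lose capacity first, which is why both link ends must use the same scheme.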
Note that on the right side of the figure you can see that CarrierR receives the
H+v signal, which is the combination of the desired signal H (horizontal) and
the interfering signal v (in lower case, to denote that it is the interfering
signal). The same applies to CarrierL, which receives V+h. The XPIC mechanism
takes the data from CarrierR and CarrierL and, using a cost function, produces
the desired data.
6.4.6 Multi-Radio
Related topics:
2+0 Multi-Radio and 2+0 Multi-Radio with IDU and Line Protection
2+2 HSB Protection
Automatic State Propagation
Multi-Radio enables two separate radio carriers to be shared by a single
Ethernet port. This provides an Ethernet link over the radio with double
capacity, while still behaving as a single Ethernet interface. The IDUs in a
Multi-Radio setup operate in master and slave mode.
In Multi-Radio mode, traffic is divided among the two carriers optimally at the
radio frame level without requiring Ethernet Link Aggregation, and is not
dependent on the number of MAC addresses, the number of traffic flows, or
momentary traffic capacity. During fading events which cause ACM
modulation changes, each carrier fluctuates independently with hitless
switchovers between modulations, increasing capacity over a given
bandwidth and maximizing spectrum utilization.
The result is 100% utilization of radio resources in which traffic load is
balanced based on instantaneous radio capacity per carrier and is
independent of data/application characteristics, such as the number of flows
or capacity per flow.
Typical 2+0 Multi-Radio Link Configuration

[Figure: the master IDU's traffic splitter divides Eth8 traffic between its
own modem and the slave IDU's modem over an LVDS connection; at the receiving
side, a traffic combiner in the master merges the two streams back onto a
single Ethernet interface.]
At the transmitting side, outgoing traffic at Eth8 in the master IDU is split
between its own radio and that of the slave. Each radio transmits its share of
the data.
At the receiving side, the slave sends the data it receives to the master, which
combines it with the data received from its own radio link, recovering all the
data.
Data is distributed between the two links at the Layer 1 level in an optimal
way. Therefore, the distribution is not dependent on the contents of the
Ethernet frames.
In addition, the distribution is proportional to the available bandwidth in
every link:
If both links have the same capacity, half the data will be sent through each
link.
In ACM conditions, the links could be in different modulations; in this case,
data will be distributed proportionally in order to maximize the available
bandwidth.
Links can also have different capacities because of different numbers of TDM
trails configured through the link; as before, Multi-Radio makes maximum use
of available capacity by distributing proportionally to the available
bandwidth.
Note: The Multi-Radio feature is applicable for Ethernet data only.
For TDM, each link remains separate, and users can configure
trails to either radio (or both, by using SNCP or ABR).
In order for Multi-Radio to work properly, the two radio links should use the
same radio script. Note that in the case of ACM, the links may use different
modulations, but the same base script must still be configured in both links.
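The proportional distribution described above can be sketched as follows. This is an assumed model of ours for illustration; the actual split is performed at the radio frame level inside the modems.

```python
# Illustrative sketch (assumed behavior, not the actual modem logic): data is
# distributed between the two radio links in proportion to each link's
# momentary capacity, independent of Ethernet frame contents.

def split_proportional(total_bytes, cap_a_mbps, cap_b_mbps):
    share_a = total_bytes * cap_a_mbps // (cap_a_mbps + cap_b_mbps)
    return share_a, total_bytes - share_a

# Equal capacities -> half each; under ACM, a degraded link carries less.
print(split_proportional(1000, 200, 200))  # (500, 500)
print(split_proportional(1000, 300, 100))  # (750, 250)
```

Because the split tracks instantaneous capacity rather than MAC addresses or flows, a single large flow still uses both carriers fully, unlike Ethernet Link Aggregation.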
6.4.8 Diversity
Space Diversity and Frequency Diversity are common ways to negate the
effects of fading caused by multipath phenomena.
Space Diversity is implemented by placing two separate antennas at a distance
from one another that makes it statistically likely that if one antenna suffers
from fading caused by signal reflection, the other antenna will continue to
receive a viable signal.
Frequency Diversity is implemented by configuring two RFUs to separate
frequencies. The IDU selects and transmits the better signal.
Related topics:
Multi-Unit LAG
IP-10G offers Frequency Diversity and two methods of Space Diversity:
Baseband Switching (BBS) Frequency and Space Diversity – Each IDU
receives a separate signal from a separate antenna. Each IDU compares
each of the received signals, and enables the bitstream coming from the
receiver with the best signal. Switchover is errorless (“hitless switching”).
IF Combining (IFC) Space Diversity – Signals from two separate
antennas are combined in phase with each other to maximize the signal to
noise ratio. IF Combining is performed in the RFU.
Diversity Signal Flow
Note: 1500 HP (11 GHz) with 40 MHz bandwidth does not support IF Combining.
For this frequency, Space Diversity is available only via BBS.
[Figure: at each end of the East and West links, a 16xE1 splitter feeds both
radios; the active radio is enabled (Active) and the standby radio is disabled
(Stby), so only the active path carries the 16xE1 traffic. The same splitter
arrangement applies in the 1+1 protection configuration.]
Related topics:
Quality of Service (Traffic Manager)
Licensing
IP-10G supports three modes for Ethernet switching:
Smart Pipe – Ethernet switching functionality is disabled and only a single
Ethernet interface is enabled for user traffic. The unit effectively operates
as a point-to-point Ethernet microwave radio.
Managed Switch – Ethernet switching functionality is enabled based on
VLANs.
Metro Switch – Ethernet switching functionality is enabled based on an
S-VLAN-aware bridge.
Ethernet Switching
Each switching mode supports QoS. Smart Pipe is the default mode. Managed
Switch and Metro Switch require a license.
All Ethernet ports are enabled for traffic in Managed Switch mode. The aging
time used by the MAC learning table can be configured in Managed Switch
mode.
The following table lists VLANs that are reserved for internal use in Managed
Switch mode.
VLANs Reserved for Internal Use in Managed Switch Mode
QoS can be used in Metro Switch mode. All Ethernet ports can be used for
traffic.
Users can choose the Ethertype used to recognize the S-VLAN tag. Options are:
88A8
8100
9100
9200
The aging time used by the MAC learning table can be configured in Metro
Switch mode.
Related topics:
Quality of Service (Traffic Manager)
Standards and Certifications
FibeAir IP-10G is fully MEF-9 and MEF-14 certified for all Carrier Ethernet
services (E-Line and E-LAN).
Carrier Grade Ethernet Feature Summary
Standardized Services: MEF-9 and MEF-14 certified for all service types (EPL,
EVPL, and E-LAN)
Scalability: Up to 500 Mbps per radio carrier; up to 1 Gbps per channel (with
XPIC); Multi-Radio
Quality of Service: Advanced CoS classification; advanced traffic
policing/rate-limiting
Reliability: Highly reliable and integrated design; fully redundant 1+1/2+2
HSB and nodal configurations
Service Management: Extensive multi-layer management capabilities; Ethernet
service OA&M (802.1ag)
Related topics:
Automatic State Propagation
Automatic State Propagation (ASP) with HSB Protection
Licensing
IP-10G supports the following spanning tree Ethernet resiliency protocols:
Rapid Spanning Tree Protocol (RSTP) (802.1w)
Carrier Ethernet Wireless Ring-optimized RSTP (proprietary)
Standard RSTP configurations are identical to those for Ring-Optimized RSTP.
The two protocols differ in the following respects:
Topologies supported
Standard RSTP is meant to work with any mesh topology
Ring-Optimized RSTP is meant for ring topologies only
Interoperability
Standard RSTP is fully interoperable
Ring-Optimized RSTP is proprietary
Performance
Standard RSTP converges in up to a few seconds
Ring-Optimized RSTP converges in under 200ms in most cases
Node Type A
The node is connected to the ring with one radio interface (e.g., East) and one
line interface (e.g., West). The node contains only one IP-10 IDU.
The Radio interface is directed towards one direction (e.g., East), and one of
the Gigabit interfaces (electrical or optical) is directed towards the second
direction (e.g., West).
The other line interfaces are in Edge mode, which means that they are user
interfaces and do not belong to the ring itself.
Node Type B
Using two IP-10G IDUs, this node is connected to radios in both directions of
the ring (East and West). Each IDU supports the radio in one direction.
In this topology, Ring-Optimized RSTP is enabled in one IDU. The other IDU
operates in Smart Pipe mode.
The IDUs are connected to each other using one of their Gigabit interfaces
(either optical or electrical). Other line interfaces are in Edge mode.
Exceptions:
10% of convergence scenarios might take up to 600 ms.
Convergence following excessive BER might take up to 600 ms.
Convergence following a HW (cold) reset might take 400-600 ms.
Convergence following a radio TX mute/unmute takes 500-1000 ms in 5-10% of
cases.
In-Band Management
In-band management is part of the data traffic. RSTP therefore protects
management traffic along with the other network traffic when the ring is re-
converged as a result of a ring failure.
When In-Band management is used, IDUs set to Managed Switch are
configured to In-Band, while IDUs set to Smart Pipe mode are configured to
Out-of-Band. IDUs using Smart Pipe mode are connected to their mates, which
are using Managed Switch mode, via an external Ethernet cable for
management. This is because an IDU in Smart Pipe mode shuts down its
Gigabit traffic port in the event of failure, which would prevent management
traffic from reaching the IDU.
Note: If the IDU in Managed Switch mode loses power, its mate in
Smart Pipe mode will lose management access. As a result,
the entire node will lose management access. However, if
the IDU in Smart Pipe mode loses power, its mate in
Managed Switch mode will retain management access.
The following figure illustrates a ring with four nodes using In-Band
management.
Resilient In-Band Ring Management
Out-of-Band Management
Out-of-band management uses the Wayside Channel (WSC) for management
access to the IDUs in the network. An external switch using some form of STP
should be used in order to obtain resilient management access and resolve
management loops.
When Out-of-Band management is used, all IDUs must be configured to:
Out-of-Band
WSC Enabled
The following figure illustrates a ring with four nodes using Out-of-Band
management.
Resilient Out-of-Band Ring Management
Related topics:
Multi-Radio
Ethernet Switching
Network Resiliency
Automatic State Propagation (ASP) with HSB Protection
Automatic State Propagation ("GigE Tx mute override") enables propagation
of radio failures back to the line, to improve the recovery performance of
resiliency protocols (such as xSTP). The feature enables the user to configure
which criteria will force the GbE port (or ports in case of a remote fault) to be
muted or shutdown, in order to allow the network to find alternative paths.
In Single Pipe mode, upon radio failure Eth1 is muted when configured as
optical or shut down when configured as electrical. In Managed Switch or
Metro Switch mode, the radio interface (Eth8) is forced to be disabled (Eth8
cannot be muted, but only disabled in both directions).
In 2+0 Multi-Radio mode, Automatic State Propagation can be triggered upon
a failure in a single IDU or upon a failure in both IDUs. This behavior is
determined by user configuration.
Automatic State Propagation Behavior per User Configuration

Automatic State Propagation disabled:
  Optical (SFP) GbE port (Single Pipe mode): No mute is issued.
  Electrical GbE port, 10/100/1000 (Single Pipe mode): No shutdown.

Local LOF, Link-ID mismatch (always enabled):
  Optical port: Mute the LOCAL port when Radio-LOF or a Link ID mismatch
  occurs on the LOCAL unit.
  Electrical port and radio port (Managed/Metro Switch mode): Shut down the
  LOCAL port under the same conditions.

Ethernet shutdown threshold profile:
  Optical port: Mute the LOCAL port when the ACM RX profile degrades below a
  pre-configured profile on the LOCAL unit.
  Electrical port and radio port: Shut down the LOCAL port under the same
  condition. This capability is applicable only when ACM is enabled.

Local Excessive BER:
  Optical port: Mute the LOCAL port when an Excessive BER alarm is raised on
  the LOCAL unit.
  Electrical port and radio port: Shut down the LOCAL port under the same
  condition.

Local LOC:
  Optical port: Mute the LOCAL port when a GbE-LOC alarm is raised on the
  LOCAL unit.
  Electrical port: No shutdown. Electrical GbE cannot be muted, and an
  electrical GbE LOC will not trigger shutdown, because it would then be
  impossible to re-enable the port when the LOC alarm clears.
  Radio port: N/A.

Remote Fault:
  Optical port: Mute the LOCAL port when one or more of the following is
  raised on the REMOTE unit: Radio-LOF; Link-ID mismatch; GbE-LOC alarm;
  ACM RX profile crossing threshold (only if enabled on the LOCAL unit);
  Excessive BER (only if enabled on the LOCAL unit).
  Electrical port and radio port: Shut down the LOCAL port when one or more
  of the following is raised on the REMOTE unit: Radio-LOF; Link-ID mismatch;
  ACM RX profile crossing threshold (only if enabled on the LOCAL unit);
  Excessive BER (only if enabled on the LOCAL unit). Electrical GbE cannot be
  muted, and an electrical GbE LOC will not trigger shutdown, because it
  would then be impossible to re-enable the port when the LOC alarm clears.
Related topics:
Radio Traffic Priority
Standard and Enhanced QoS Comparison
IP-10G offers integrated QoS functionality in all switching modes. In addition
to its standard QoS functionality, IP-10G offers an enhanced QoS feature.
Enhanced QoS is license-activated.
IP-10G’s standard QoS provides for four queues and six classification criteria.
Ingress traffic is limited per port, Class of Service (CoS), and traffic type.
Scheduling is performed according to Strict Priority (SP), Weighted Round
Robin (WRR), or Hybrid WRR/SP scheduling.
IP-10G’s enhanced QoS provides eight classification criteria instead of six,
color-awareness, increased frame buffer memory, eight priority queues with
configurable buffer length, improved congestion management using WRED
protocols, enhanced counters, and other enhanced functionality.
The figure below shows the QoS flow of traffic with IP-10G operating in Smart
Pipe mode.
Smart Pipe Mode QoS Traffic Flow
The figure below shows the QoS flow of traffic with IP-10G operating in
Managed Switch or Metro Switch mode.
Managed Switch and Metro Switch QoS Traffic Flow
[Figure: traffic flows through the Classifier, Policers (ingress rate
limiting), Marker, Queues (4 queues), Scheduler, and Shaper (egress rate
limiting).]
Related topics:
Licensing
Enhanced QoS provides an enhanced and expanded feature set. The tools
provided by enhanced QoS apply to egress traffic on the radio port, which is
where bottlenecks generally occur. Enhanced QoS can be enabled and disabled
by the user.
Enhanced QoS capabilities include:
Enhanced classification criteria
CIR/CBS and EIR/EBS support
Policers per service (VLAN+CoS)
255 MEF 10.2-compliant policers with trTCM support (requires hardware version R3)
Eight priority queues with configurable buffer length
An enhanced scheduler based on Strict Priority, Weighted Fair Queue
(WFQ), or a hybrid approach that combines Strict Priority and WFQ
Shaper per priority queue
WRED support, along with Tail-Drop, for congestion management
Configurable P-bit and CFI/DEI re-marker
Enhanced PM and statistics
These and other IP-10G enhanced QoS features enable operators to provide
differentiated services with strict SLA while maximizing network resource
utilization. Enhanced QoS requires a license, and can be enabled and disabled
by the user.
The main benefits of enhanced QoS are:
Improved available link capacity utilization:
Enhanced and configurable queue buffer size (4 Mb total)
WRED for best utilization of the link when TCP/IP sessions are
transported, providing up to 25% more capacity.
Advanced SLA support:
Granular SLA enforcement and traffic policing with TrTCM (CIR + EIR)
– dual-rate limit per service (VLAN / VLAN + CoS)
Enhanced service differentiation:
8 CoS queues (as opposed to 4 queues in standard QoS)
Additional classification criteria – MPLS EXP bits and UDP ports
Shaping per CoS queue
The initial step in the enhanced QoS traffic flow is the classifier, which
provides granular service classification based on a number of user-defined
criteria.
The classifier marks the Service ID, CoS, and color of the frames. If a frame’s
VLAN ID matches a Service ID that is mapped to a policer, the frame is sent to
the policer. Untagged frames or frames whose VLAN ID does not match a
defined Service ID are sent directly to a queue, based on the frame’s CoS and
color.
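The classification decision above can be sketched as a small lookup. This is an illustrative model of ours; the frame fields and service-map names are assumptions, not the device's data model.

```python
# Hypothetical sketch of the classification step: frames whose VLAN ID maps to
# a policer-backed service go to that policer; untagged frames or unmatched
# VLAN IDs go straight to a queue based on CoS and color.

def classify(frame, service_to_policer):
    """frame: dict with optional 'vlan', plus 'cos' and 'color' markings."""
    vlan = frame.get("vlan")
    if vlan is not None and vlan in service_to_policer:
        return ("policer", service_to_policer[vlan])
    return ("queue", (frame["cos"], frame["color"]))

services = {100: "policer-8"}  # Service ID 100 mapped to a TrTCM policer
print(classify({"vlan": 100, "cos": 5, "color": "green"}, services))
print(classify({"cos": 2, "color": "green"}, services))
```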
Enhanced QoS provides up to 255 user-defined TrTCM policers. The policers
implement a bandwidth profile, based on CIR/EIR, CBS/EBS, and several other
criteria.
The next step after the TrTCM policers is queue management. Queue
management determines which packets enter which of the eight available
queues. Queue management also includes congestion management, which can
be implemented by Tail-Drop or WRED.
Frames are sent out of the queues according to scheduling and shaping. IP-10G's
enhanced QoS module provides a unique hierarchical scheduling model
that includes four priorities, with WFQ within each priority and shaping per
queue. This model enables operators to define flexible and highly granular
QoS schemes for any mix of services.
Finally, the enhanced QoS module re-marks the P-bits and CFI/DEI bits of the
outermost VLAN according to the CoS and color decision made in the classifier.
This step is also known as the modifier.
For even more granularity, policers can be assigned according to VLAN P-Bit.
This Policer per VLAN P-bit option enables the customization of a set of eight
policers for a variety of traffic flows within a single service (e.g., GPRS or
management).
Note: The Policer per VLAN P-Bit option can be enabled only for a
Policer with a Policer ID of 8 or a multiple of 8, e.g., Policer8,
Policer16, Policer24, ..., Policer248. When using the Policer
per VLAN P-Bit option, none of the 8 policers that are
allocated to the service can be used by other services.
As illustrated in the figure below, TrTCM policers use a leaky bucket
mechanism to determine whether packets are marked Green, Yellow, or Red.
Packets within the Committed Information Rate (CIR) or Committed Burst
Size (CBS) are marked Green and sent on to a queue. Packets within the Excess
Information Rate (EIR) or Excess Burst Size (EBS) are marked Yellow. These
packets are also sent on to a queue, and processed according to the settings of
the scheduling and shaping mechanisms. Packets that do not fall within the
CIR/CBS+EIR/EBS are marked Red and dropped, without being sent any
further.
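The marking logic can be sketched with two token buckets. The following is a simplified, color-blind model of ours (refill rates, class name, and parameters are assumptions for illustration, not the device's implementation):

```python
# Simplified TrTCM sketch: a CIR/CBS bucket and an EIR/EBS bucket mark each
# packet Green, Yellow, or Red, as described above.

class TrTCM:
    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps / 8, cbs_bytes   # refill in bytes/second
        self.eir, self.ebs = eir_bps / 8, ebs_bytes
        self.c_tokens, self.e_tokens = cbs_bytes, ebs_bytes
        self.last = 0.0

    def mark(self, size_bytes, now_s):
        dt = now_s - self.last
        self.last = now_s
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * dt)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * dt)
        if self.c_tokens >= size_bytes:
            self.c_tokens -= size_bytes
            return "green"      # within CIR/CBS: sent on to a queue
        if self.e_tokens >= size_bytes:
            self.e_tokens -= size_bytes
            return "yellow"     # within EIR/EBS: queued, droppable under load
        return "red"            # beyond both: dropped by the policer

p = TrTCM(cir_bps=8000, cbs_bytes=1500, eir_bps=8000, ebs_bytes=1500)
print([p.mark(1000, t) for t in (0.0, 0.1, 0.2)])  # ['green', 'yellow', 'red']
```

A color-aware policer would additionally start packets arriving with CFI/DEI set to 1 directly in the EIR bucket, as described below.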
TrTCM Policers – Leaky Bucket Mechanism
Excess Burst Size (EBS) – Packets within the EBS defined for the service
are marked Yellow and processed according to network availability.
Packets beyond the combined CBS and EBS are marked Red and dropped
by the policer.
Color Mode – Color mode can be enabled (color aware) or disabled (color
blind). In color aware mode, all packets that ingress with a CFI/DEI field
set to 1 (Yellow) are treated as EIR packets, even if credits remain in the
CIR bucket. In color blind mode, all ingress packets are treated as Green
packets regardless of CFI/DEI value. A color-blind policer discards any
previous color decisions.
Coupling Flag – If the coupling flag is enabled, frames marked Yellow may
be placed in the Green buffer when there are no available Yellow credits in
the EIR bucket.
Note: Coupling Flag is only relevant in color aware mode.
Line Compensation – A policer can measure CIR and EIR as Layer1 or
Layer2 rates. Layer1 capacity is equal to Layer2 capacity plus 20
additional bytes for each frame (preamble, SFD, and IFG). Line
compensation defines the number of bytes to be added to each frame for
CIR and EIR calculation. When Line Compensation is 20, the policer
operates as Layer1. When Line Compensation is 0, the policer operates as
Layer 2.
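The Layer 1 versus Layer 2 accounting difference reduces to a per-frame constant. This sketch is ours for illustration; the function name is an assumption.

```python
# Sketch of the line-compensation rule above: a compensation of 20 bytes per
# frame counts the preamble, SFD, and IFG, so the policer measures Layer 1
# rates; a compensation of 0 measures Layer 2 rates.

def policed_bytes(frame_sizes, line_compensation=20):
    return sum(size + line_compensation for size in frame_sizes)

frames = [64, 64, 1518]
print(policed_bytes(frames, 20))  # 1706 (Layer 1 accounting)
print(policed_bytes(frames, 0))   # 1646 (Layer 2 accounting)
```

Note that the overhead matters most for small frames, where 20 extra bytes per frame is a large fraction of the policed rate.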
CIR and EIR granularity is:
64 Kbps in range of 64 Kbps to 100 Mbps
1 Mbps in range of 100 Mbps to 1 Gbps
CBS and EBS granularity is 1 byte.
The TrTCM policer mechanism includes counters for packets dropped and
packets transmitted, both per queue and per service. These counters can be
viewed via the CLI.
Note: Per-service counters require hardware version R3 and
software version i6.9 or higher.
Per queue counters are available in hardware versions R2 and R3, as well as
software versions i6.7 and up. However, hardware version R3 and software
version i6.9 and higher provide additional counters, as shown in the following
table:
Per-Queue Counters Availability

i6.7:
Green bytes passed
Green frames dropped
Yellow bytes passed
Yellow frames dropped

i6.9 and higher: Same as i6.7, with the addition of:
L1 support for Green and Yellow bytes passed (i6.7 supports L2 only)
Green frames passed
Yellow frames passed
Each one of the eight priority queues can be given a different weight. For each
queue, the user defines the WRED profile curve. This curve describes the
probability of randomly dropping frames as a function of queue occupancy.
Basically, as the queue occupancy grows, the probability of dropping each
incoming frame increases as well. As a consequence, statistically more TCP
flows will be restrained before traffic congestion occurs.
The WRED profile curve can be adjusted for each one of the priority queues.
Yellow and Green frames can also be assigned different weights. Usually,
Green frames (committed rate) are preferred over Yellow frames (excessive
rate), as shown in the curve below.
WRED Profile Curve
Note: WRED can also be set to a tail drop curve. A tail drop curve
is useful for reducing the effective queue size, such as when
low latency must be guaranteed. In order to set the tail drop
curve to its maximum level, the drop percentage must be set
to zero.
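A WRED profile curve of the kind described above can be sketched as a linear ramp. The thresholds and maximum drop probability below are illustrative assumptions of ours, not the device's defaults.

```python
# Illustrative WRED profile: drop probability rises linearly with queue
# occupancy between a low and a high threshold, with Green frames given more
# headroom than Yellow frames before drops begin.

def wred_drop_prob(occupancy, low, high, max_p):
    """Probability of dropping an arriving frame at a given occupancy (0-1)."""
    if occupancy <= low:
        return 0.0
    if occupancy >= high:
        return 1.0
    return max_p * (occupancy - low) / (high - low)

# Green (committed) tolerates more occupancy than Yellow (excess) before drops.
print(round(wred_drop_prob(0.5, low=0.4, high=0.9, max_p=0.5), 2))  # 0.1
print(round(wred_drop_prob(0.5, low=0.2, high=0.7, max_p=0.8), 2))  # 0.48
```

Dropping a few frames early statistically throttles TCP flows before the queue overflows, which is the capacity gain cited above.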
Scheduling
IP-10G’s enhanced QoS mechanism provides Strict Priority and Weighted Fair
Queue (WFQ) for scheduling. Users can configure a combination of both
methods to achieve the optimal results for their unique network
requirements.
Each priority queue has a configurable strict priority from 1 to 4
(4=High; 1=Low). WFQ weights are used to partition bandwidth between
queues of the same priority.
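The hybrid Strict Priority/WFQ selection can be sketched as follows. This is a simplified model of ours (the queue fields and the crude weight-based pick are assumptions; a real WFQ tracks per-queue virtual finish times):

```python
# Illustrative hybrid scheduler: serve the highest non-empty strict priority
# first; within a priority level, prefer the higher-weight queue as a crude
# stand-in for WFQ bandwidth sharing.

def next_queue(queues):
    """queues: list of dicts with 'priority' (1-4, 4=high), 'weight', 'backlog'."""
    backlogged = [q for q in queues if q["backlog"] > 0]
    if not backlogged:
        return None
    top = max(q["priority"] for q in backlogged)
    candidates = [q for q in backlogged if q["priority"] == top]
    return max(candidates, key=lambda q: q["weight"])

queues = [
    {"name": "voice", "priority": 4, "weight": 1, "backlog": 3},
    {"name": "video", "priority": 2, "weight": 3, "backlog": 5},
    {"name": "data",  "priority": 2, "weight": 1, "backlog": 9},
]
print(next_queue(queues)["name"])  # voice (priority 4 is always served first)
```

Only once all priority-4 queues are empty does the scheduler share bandwidth among lower-priority queues according to their weights, which matches the hybrid example described below.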
Note: When Frame Cut-Through is enabled, frames in queues with
4th priority can pre-empt frames already in transmission
over the radio from other queues. For details, refer to Frame
Cut-Through on page 98.
IP-10G Configuration Example
Scheduling Examples
This section provides several use cases in which Strict Priority and WFQ are
combined to produce a desired scheduling configuration. These are simply
two examples of the many ways in which IP-10G’s flexible scheduling
mechanism can be configured to achieve a combination of Strict Priority
scheduling for the highest priority traffic flows and weighted scheduling for
other traffic flows that may be less delay sensitive.
Example 1 shows a hybrid setup in which the three highest-priority queues
are served according to Strict Priority, and the remaining queues are served
according to WFQ. In this example, higher-priority queues are served first.
Only after the three highest-priority queues are empty is traffic from the
remaining five queues served, according to WFQ and their respective weight.
Example 1 – Hybrid Scheduling
As shown above, trails are defined from one end of a line to the other. The
Cross-Connect Unit forwards signals generated by the radios to and from the
IDUs based on their designated VCs. For instance, in the example above, the
Cross-Connect Unit can forward signals on Trail C from Radio 1, VC 3 to Radio
4, VC 1.
For each trail, the following end-to-end OAM functions are supported:
Alarms and maintenance signals, including AIS and RDI
Performance monitoring counters, including ES, SES, and UAS.
Trace ID for provisioning mismatch detection.
A VC overhead is added to each VC trail to support the end-to-end OAM
functionality and synchronization justification requirements.
In a 1+1 HSB configuration, the single port on the third party equipment is
connected to two STM-1 interfaces on the IP-10G through an optical splitter
cable. This ensures that an identical signal is received by each STM-1 interface
on the IP-10G. The IP-10G determines which interface is active, based on
traffic loss indications such as LOS, LOF, or other errors.
While both interfaces on the IP-10G receive traffic, only the active interface
transmits. The standby interface is automatically muted.
In Uni-directional MSP, the elements at each end of the STM-1 link transmit
traffic through both connections. On the receiving side, each IP-10G element
unilaterally decides, based on traffic loss indications such as LOS, LOF, or
other errors, from which interface to receive the traffic, and declares that
interface the active interface.
Each STM-1 T-Card is connected directly to separate ports in the third party
network element. There is no need for a splitter or Y-cable. This extends
protection to the optical ports in the third party equipment and to the cable, as
well as to the STM-1 T-Card in the IP-10G.
In 1+1 HSB configurations, Uni-directional MSP is subject to the following
limitations:
The IP-10G units in the 1+1 HSB protection pair must be placed in a main
nodal enclosure (slots 1 and 2). Only traffic originating in these slots can
be transported via the STM-1 interface. This means that TDM trails
originating from radios in units that are not in slots 1 or 2 cannot be sent
through the protected STM-1 interface.
Note: The system does not block sending TDM trails that do not
originate from slots 1 and 2 via the protected STM-1
interface. It is the responsibility of the user to ensure that
only trails that do originate from slots 1 and 2 are
transmitted via the protected STM-1 interface.
Both IP-10G units must be configured for BBS Space Diversity, even if only one
antenna is used. This ensures that the signals sent through both STM-1
interfaces are identical regardless of which radio is actually receiving.
Related topics:
TDM Interface Options
Smart TDM Pseudowire Interface Specifications
Licensing
Ethernet Switching
Pseudowire provides a smart solution for migration to all-packet networks.
Often, TDM islands exist within a network that has largely converted to
all-packet. All-packet segments may be joined with hybrid or TDM segments.
Base stations in particular often continue to use TDM equipment after the
remaining network segments have migrated to all-packet. Pseudowire bridges
the gap between legacy TDM equipment and the all-packet present and future.
As part of IP-10G’s Native2 model, Smart TDM Pseudowire and IP-10G’s built-
in native TDM provide an ideal solution for TDM to packet migration.
IP-10G’s Smart TDM Pseudowire provides TDM over packet capabilities by
means of an optional 16 E1 Pseudowire (PW) processing T-Card that
processes TDM data, sends the data through the system in packet format that
can be processed by the IDU’s Ethernet ports, and converts the data back to
TDM format. Up to six PW T-Cards can be used in a single node.
Smart TDM Pseudowire features an advanced network processor design, with
state of the art Carrier Ethernet and advanced QoS. Smart TDM Pseudowire
also offers the option of 1:1 path protection, which provides path redundancy
for TDM services carried over Pseudowire.
The TDM PW processing T-Card includes an Ethernet interface that must be
connected to one of the Ethernet ports in the same IDU as the PW T-Card. Any
electrical Ethernet port can be used, including either GbE or Fast Ethernet
ports. The optical GbE ports cannot be used.
PW T-Card Connected to Ethernet Port (Eth3)
In the following example, native E1 trails are used up to the aggregation site
and PW T-Cards are installed in the intermediate aggregation sites,
minimizing the cost and effort of migration to an all-packet network by
optimizing deployment of the PW T-Cards.
Migration from Hybrid to All-Packet Network – PW processing T-Card in Intermediate
Aggregation Sites
In the following example, native E1 trails are used in the access network and
PW T-Cards are installed in the fiber PoP sites, providing for seamless
integration with any packet aggregation network.
Migration from Hybrid to All-Packet Network – PW processing T-Card in Fiber PoP
Sites
IP-10G with Smart TDM Pseudowire supports several aggregation options and
scenarios.
One option is native service stitching at a fiber site. In this scenario, Smart
TDM Pseudowire converts TDM data to packet format at the tail/hub site. The
pseudowire connection is terminated at the fiber site and N x E1 or STM-1
lines are used to connect either to the fiber node via a router/MSPP or directly
to the BSC/RNC.
Smart TDM Pseudowire with Native Service Stitching at Fiber Site
Auto-negotiation
Flow control
T-Card’s IP address and subnet mask
Clock distribution and use of front panel clock interface
1:1 Path Protection – Active and Standby Paths Monitored by CCM (figure)
Pseudowire PMs
Standard pseudowire PM measurements are provided for each configured
service:
missing-packets counter
packets-reorder counter
misorder-dropped counter
malformed-packets counter
ES
SES
UAS
FC
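The ES/SES counts above follow the usual errored-second conventions: an errored second has at least one loss event, a severely errored second exceeds a loss threshold. A minimal sketch of this classification is shown below; the threshold value is an illustrative assumption, not the product's configured value.

```python
# Illustrative derivation of ES/SES from per-second missing-packet
# counts. The SES threshold is an assumption for illustration only.

SES_THRESHOLD = 30  # lost packets in one second -> severely errored

def classify_seconds(loss_per_second):
    """Count errored seconds (ES) and severely errored seconds (SES)."""
    es = sum(1 for n in loss_per_second if n > 0)
    ses = sum(1 for n in loss_per_second if n >= SES_THRESHOLD)
    return {"ES": es, "SES": ses}

# One minute of per-second missing-packet counts:
samples = [0] * 55 + [2, 0, 45, 1, 60]
print(classify_seconds(samples))  # {'ES': 4, 'SES': 2}
```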
RMON
The Ethernet port provides a number of RMON counters, which are not
identical to the IP-10G main bridge counters. For a list and description of
these counters, refer to the FibeAir IP-10G and IP-10E User Guide, DOC-
00034612.
Related topics:
Adaptive Bandwidth Recovery (ABR)
AIS Signaling and Detection
IP-10G supports an integrated VC trail protection mechanism called Wireless
Sub-Network Connection Protection (SNCP).
Path-protected trails are a special case of TDM trails in which three
interfaces, rather than two, are configured. This mechanism protects TDM
traffic from any failure along its end-to-end path.
With Wireless SNCP, a backup VC trail can optionally be defined for each
individual VC trail.
For each backup VC, the following must be defined:
Two “branching points” from the main VC that it is protecting.
A path for the backup VC (typically separate from the path of the main VC
that it is protecting).
For each direction of the backup VC, the following is performed
independently:
At the first branching point, duplication of the traffic from the main VC to
the backup VC.
At the second branching point, selection of traffic from either the main VC
or the backup VC.
Traffic from the backup VC is used if a failure is detected in the main VC.
Switchover is performed within <50 ms.
For each main VC trail, the branching points can be any Cross-Connect node
along the path of the trail.
Wireless SNCP - Branching Points
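The duplicate-and-select behavior at the two branching points can be modeled very simply for one direction of a trail. This is a simplified sketch; the function names and data model are assumptions for illustration.

```python
# Simplified model of Wireless SNCP for one direction:
# the first branching point duplicates traffic from the main VC onto
# the backup VC; the second branching point selects the main VC
# unless a failure is detected on it.

def branch_duplicate(frame):
    """First branching point: emit the same frame on both VCs."""
    return {"main": frame, "backup": frame}

def branch_select(received, main_failed):
    """Second branching point: prefer main, fall back on failure."""
    path = "backup" if main_failed else "main"
    return path, received[path]

both = branch_duplicate("E1-payload")
print(branch_select(both, main_failed=False))  # ('main', 'E1-payload')
print(branch_select(both, main_failed=True))   # ('backup', 'E1-payload')
```

Because the payload is duplicated, the selector can switch paths without coordinating with the far end, which is what allows switchover within 50 ms.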
Performance
Non traffic-affecting switching to protection (<50 ms)
Switching to protection is done at the E1 VC trail level and works
seamlessly with ACM (no need to switch the entire traffic on a link)
Optimal latency under protection
Interoperability
Protection is done at the end points, independent of
equipment/vendor networks
Interoperable with networks that use other types of protection (such
as BLSR)
Related topics:
Wireless SNCP
As an alternative to Wireless SNCP, Adaptive Bandwidth Recovery (ABR)
enables full utilization of the bidirectional capabilities inherent in ring
technologies to provide TDM path protection while utilizing the protection
paths whenever possible for both TDM and Ethernet traffic.
With ABR, TDM-based information is transmitted in one direction only, while
the unused protection capacity is allocated for Ethernet traffic. In the event of
a failure, the unused capacity is re-allocated for TDM transmission.
Using ABR, each E1 flow consists of a primary and a protection path. Capacity
on the protection path is reserved, but not allocated. Actual capacity allocation
only occurs on demand in the event of a failure. In an ordinary non-failure
state, only the primary path consumes capacity, freeing capacity on the
protection path to other applications, such as mobile broadband.
This technique extends the Native2 approach to dynamic allocation of link
capacity between TDM and Ethernet flows to the network level.
SNCP and ABR Comparison
For this reason, any failure in the primary path will cause both sides to
revert to the normal mode of operation (sending traffic through both
paths). Traffic will return to the primary path after the failure
condition has been cleared (the mechanism is revertive).
To prevent path flapping and unnecessary traffic switches in case of
intermittent primary path failures, a revertive timer is used. This
timer determines how long no failure must be detected in the primary
path before traffic transmission through the secondary path ceases.
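The revertive behavior with its hold-off timer can be sketched as a small per-second state machine. The 10-second wait-to-restore value below is an illustrative assumption, not the product's configured value.

```python
# Sketch of a revertive wait-to-restore timer: after a primary-path
# failure, the secondary path is used; after the primary recovers,
# secondary transmission continues until the primary has been
# failure-free for WTR_SECONDS. The value is illustrative only.

WTR_SECONDS = 10

class RevertiveTimer:
    def __init__(self):
        self.clean_seconds = 0
        self.on_secondary = False

    def tick(self, primary_failed):
        """Call once per second with the primary path's status."""
        if primary_failed:
            self.on_secondary = True
            self.clean_seconds = 0
        else:
            self.clean_seconds += 1
            if self.on_secondary and self.clean_seconds >= WTR_SECONDS:
                self.on_secondary = False  # revert to primary only
        return "secondary" if self.on_secondary else "primary"

t = RevertiveTimer()
t.tick(True)              # failure: switch to secondary
for _ in range(9):
    t.tick(False)         # primary clean, but not long enough yet
print(t.tick(False))      # prints "primary" on the 10th clean second
```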
Automatically freeing bandwidth whenever TDM traffic is not being sent:
Whenever valid TDM traffic is not available at the radio interface for
transmission, its bandwidth is automatically re-allocated for Ethernet
traffic.
This is relevant not only for ABR trails, but for all TDM traffic. In other
words, bandwidth is freed up whenever there is no information to
transmit. This may occur in the following circumstances:
A failure has occurred which interrupts TDM traffic in a certain
trail. This may take place in a radio link or an internal connection.
No valid TDM input (E1 signal) is received at the end-point.
AIS signal is detected at the input (if AIS detection feature is
enabled).
Selecting the incoming traffic normally, as explained for SNCP trails.
The ABR mechanism is relevant only for transmission. Reception is handled
in the same manner as for normal SNCP trails.
For ABR trails, status is reported for the paths that are currently
transmitting with no failure conditions; this means the primary path only.
PMs are collected as follows:
Primary is active – No PM is counted on secondary.
Secondary is active (due to primary failure or force to standby) – PM is
counted on primary and on secondary.
In the standby direction, the transmitting node – along with all the nodes in
the standby path to the receiver – removes the E1 bandwidth allocation, and
sends periodic signals to the receiver to help it monitor the transmissions
from east and west. The de-allocated (recovered) E1 bandwidth can now be
utilized by Ethernet traffic.
The receiving node continues to accept information flows from either the east
or west direction, and detects the path in which the E1 payload is actually
transmitted.
When a failure occurs in the working direction, the receiving node sends a
Reverse Defect Indication (RDI) signal to the transmitter, which automatically
switches to the standby path.
ABR can be selected for any number of E1 channels, and the resulting path co-
exists with all other paths in the network – be they unidirectional,
bidirectional, protected, or unprotected. The case study below describes a
real-life example of how ABR delivers normal-state Ethernet capacity that may
triple the Ethernet capacity delivered when using SNCP 1+1. While
malfunctions under SNCP 1+1 automatically result in network degradation to
a worst-case scenario (known as “failure state”), a network fault under ABR
results in a level of degradation that depends on the exact location of the
failure, and worst-case degradation is usually avoided.
In this scenario, the main question is how to migrate the network to support
3G-based data services, given the severe spectrum limitations. This common
legacy configuration leaves almost no capacity for Ethernet traffic – in this
case, approximately 2.3 Mbps per site of guaranteed Ethernet traffic
(assuming a 64-byte frame size).
In the simple, TDM-only, SNCP 1+1 case presented in the figure above, all E1s
flow in both directions, meaning that 50% of the total capacity is reserved for
failure states. In case of such a failure, E1 traffic is forwarded in the opposite
direction. From a capacity point of view, there is no difference between
normal state and failure state.
TDM Aggregation Ring - SNCP 1:1 Protection Bandwidth is Used for Ethernet
In the SNCP 1:1 scenario depicted in the above figure, TDM-only E1s flow only
in one direction. An alternate path is reserved, but no capacity is allocated. In
case of a failure, E1s are re-routed in the opposite direction over the reserved
path, receiving the non-allocated capacity.
When planning a data network for broadband services, one should compute
the guaranteed traffic (Committed Information Rate – CIR), as well as the
possible upside (Excess Information Rate – EIR). Given the availability of
bandwidth for both classes, you can determine the subscriber’s overall Quality
of Experience.
In the scenario that appears in the figure above, when applying 100%
protection – or in case of a worst case failure, up to 14.5 Mbps of Ethernet
capacity are available per site. The whole ring can support 262 Mbps of
traffic. If the 262 Mbps of protected-path bandwidth is reserved but not
allocated, Ethernet capacity increases to 29 Mbps per cell site, aggregated
into 116 Mbps at aggregation site S2, and so on. In Ethernet, the various
failure state scenarios
each have a different effect on capacity, as described in the next section.
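The capacity figures in this case study follow from simple arithmetic, reproduced below. The assumption that four cell sites aggregate into S2 is inferred from 116 = 4 x 29; it is not stated explicitly in the text.

```python
# Reproducing the case-study arithmetic from the text.

ring_capacity_mbps = 262     # total ring traffic (protected-path bandwidth)
per_site_protected = 14.5    # Ethernet Mbps per site with 100% protection

# Reserving (not allocating) the protection path doubles per-site capacity:
per_site_abr = per_site_protected * 2
print(per_site_abr)          # 29.0 Mbps per cell site

# Four cell sites aggregating into site S2 (assumed from 116 = 4 x 29):
print(per_site_abr * 4)      # 116.0 Mbps at aggregation site S2
```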
Link Failure
STP Block
Traffic from S2 to S1
Traffic from S3 to S1
There is no need for an STP block in any of the failure scenarios (1-3), since at
least one link in the ring is in any case out of service.
While 72 E1 lines are provisioned at all times, only the relevant 36 E1s are
actually carried on each path. On the Ethernet side, up to 262 Mbps of data
are available in the normal state, while 41 Mbps are guaranteed at failure
(in the worst-case scenario).
ABR provides much more capacity, even in failure states:
17 Mbps of data per cell site vs. 2.3 Mbps in SNCP 1+1
17 Mbps per cell site for A3 failure
6.4 Mbps per cell site for A2/A4 failure
In summary, ABR can provide much higher capacities in all scenarios, with the
exception of worst case failures. The increased capacity allows operators to
improve customer stratification, and enhance subscribers’ overall Quality-of-
Experience (QoE) with better performance in mail delivery, content sharing,
backup services, Facebook access, and video streaming.
Doubles ring capacity by using the TDM protection path to provide extra
capacity for Ethernet services.
Leaves revenue-generating 2G voice traffic unaffected in the migration
process, with no need for protocol conversion.
Related topics:
Adaptive Coding Modulation (ACM)
A unique advantage of IP-10G’s ACM implementation is its ability to use
sophisticated adaptive techniques in a hybrid, TDM/packet model. Using
Ceragon’s innovative Native2 migration solution, in which TDM and Ethernet
traffic is natively and simultaneously carried over a single microwave link,
both TDM and Ethernet services can have configurable priority. When more
than one E1 channel is connected to a cell site, one of the channels can be
given a higher priority in order to maintain network synchronization as well
as a minimum level of service. The rest of the E1 channels may be forwarded
at a lower priority.
The figure below illustrates the benefits of Ceragon’s unique ACM adaptation
for TDM based on the number of E1 channels, with the following assumptions:
Frequency Band – 15 GHz
Rain Zone – N (120 mm/year)
Antennas – 1.2 m
Distance – 18 km
Polarization – Horizontal
Ceragon’s Unique ACM Adaptation for TDM
6.8 Synchronization
This section includes:
Synchronization Overview
IP-10G Synchronization Solution
Available Synchronization Interfaces
Synchronization Configuration
Synchronization Using TDM Trails
SyncE from Co-Located TDM Trails
Native Sync Distribution Mode
SyncE PRC Pipe Regenerator Mode
SSM Support and Loop Prevention
ESMC PDU Support for Loop Prevention in Ethernet Interfaces
Sync is the traditional technique used, with traceability to a PRS master clock
carried over PDH/SDH networks, or using GPS.
Phase Lock with Latency Correction: Applicable to CDMA, CDMA-2000,
UMTS-TDD, and WiMAX networks.
Limits coding time division overlap.
Typical performance target: frequency accuracy of < 20-50 ppb, phase
difference of < 1-3 µs.
GPS is the traditional technique used.
Synchronization Configuration
Related topics:
Licensing
In this mode, targeting nodal configurations, synchronization is distributed
natively end-to-end over the radio links in the network.
No TDM trails or E1 interfaces at the tail sites are required.
Synchronization is typically provided by one or more clock sources (SSU/GPS)
at fiber hub sites.
Native Sync Distribution Mode
In native Sync Distribution mode, the following interfaces can be used as the
sync references:
E1
STM-1
GE (SyncE)
Additionally, the following interfaces can be used for sync output:
E1
GE/FE (SyncE)
Native Sync Distribution mode can be used in any link configuration and any
network topology.
Ring topologies present special challenges for network synchronization. Any
system that contains more than one clock source for synchronization, or in
which topology loops may exist, requires an active mechanism to ensure that:
A single source is used as the clock source throughout the network,
preferably the source with the highest accuracy.
There are no reference loops. In other words, no element in the network
will use an input frequency from an interface that ultimately derived that
frequency from one of the outputs of that network element.
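The two requirements above amount to a selection rule: choose the highest-quality reference, never one whose frequency ultimately derives from this element's own output. A minimal sketch follows; the quality levels and the loop flag are illustrative assumptions in the spirit of SSM, not the product's actual algorithm.

```python
# Illustrative sync-reference selection: pick the highest-quality
# candidate, skipping any interface flagged as deriving its frequency
# from this element's own output (which would create a reference loop).
# Quality levels are SSM-style illustrations (lower = better).

def pick_reference(candidates):
    """candidates: list of (name, quality_level, looped_back) tuples.

    Returns the name of the best loop-free candidate, or None."""
    usable = [c for c in candidates if not c[2]]
    if not usable:
        return None
    return min(usable, key=lambda c: c[1])[0]

refs = [
    ("radio-1", 4, False),   # lower-quality source
    ("eth-1",   2, False),   # high-quality source, loop-free
    ("radio-2", 2, True),    # equally good, but derived from our output
]
print(pick_reference(refs))  # eth-1
```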
Note: SyncE input is only supported in the R3 hardware release.
Related topics:
Licensing
In SyncE PRC pipe regenerator mode, frequency is transported between the
GbE interfaces through the radio link.
PRC pipe regenerator mode makes use of the fact that the system is acting as a
simple link (so no distribution mechanism is necessary) in order to achieve
the following:
Improved frequency distribution performance:
PRC quality
No use of bandwidth for frequency distribution
Simplified configuration
In PRC pipe regenerator mode, frequency is taken from the incoming GbE
Ethernet signal, and used as a reference for the radio frame. On the receiver
side, the radio frame frequency is used as the reference signal for the outgoing
Ethernet PHY.
Frequency distribution behaves in a different way for optical and electrical
GbE interfaces, because of the way these interfaces are implemented:
For optical interfaces, separate and independent frequencies are
transported in each direction.
For electrical interfaces, each PHY must act either as clock master or as
clock slave in its own link. For this reason, frequency can only be
distributed in one direction, determined by the user.
PRC regenerator mode does not completely override the regular
synchronization distribution, but since it makes use of the Ethernet interfaces,
the following limitations apply:
In PRC regenerator mode, Ethernet interfaces cannot be configured as a
synchronization source for distribution.
In PRC regenerator mode, Ethernet interfaces cannot be configured to take
the system reference clock for their outgoing signal.
Frequency distribution through the radio is independent for each
mechanism and is carried out at a different layer.
For PRC pipe regenerator mode to work, the following is necessary:
The system must be configured to Smart Pipe mode.
Interface Eth1 (GbE) must be enabled.
Ethernet interfaces must not be configured as the system synchronization
source.
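The three prerequisites can be expressed as a simple configuration check. This is a hypothetical sketch; the field names and values are assumptions for illustration, not the product's actual configuration model.

```python
# Hypothetical validation of the PRC pipe regenerator prerequisites
# listed above. Field names and values are illustrative assumptions.

def prc_pipe_ready(cfg):
    """Return a list of unmet prerequisites (empty if ready)."""
    problems = []
    if cfg.get("mode") != "smart-pipe":
        problems.append("system must be configured to Smart Pipe mode")
    if not cfg.get("eth1_enabled", False):
        problems.append("interface Eth1 (GbE) must be enabled")
    if cfg.get("sync_source") == "ethernet":
        problems.append("Ethernet interfaces must not be the sync source")
    return problems

cfg = {"mode": "smart-pipe", "eth1_enabled": True, "sync_source": "radio"}
print(prc_pipe_ready(cfg))  # [] -> all prerequisites met
```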
Note: Refer to the RFU-C roll-out plan for availability of each frequency.
Note: Remote mount configuration is not supported for 42 GHz.
Installation Type – Split Mount: √ √ √ √ √ √
Installation Type – All-Indoor: -- √ √ -- -- √
Space Diversity Method – SD (BBS/IFC): BBS | BBS + IFC | BBS | BBS | BBS | BBS
Frequency Diversity – FD (BBS): √ √ √ √ √ √
Configuration – 1+0/2+0/1+1/2+2: √ √ √ √ √ √
Configuration – N+1: -- √ √ -- -- --
Configuration – N+0 (N>2): -- √ √ -- -- --
Tx Power (dBm) – High Power (up to 29 dBm): -- √ √ √ -- --
Tx Power (dBm) – Ultra High Power (up to 32 dBm): -- √ √ -- -- --
RFU Mounting – Direct Mount on Antenna: √ √ -- -- √ √
Bandwidth (BW) – 3.5 MHz – 56 MHz: √ -- √ -- -- --
Bandwidth (BW) – 10 MHz – 30 MHz: √ √ √ √ √ √
Bandwidth (BW) – 56 MHz: √ -- √ √ √ √
Note: 42GHz RFU-C is a roadmap item; parameters and availability are subject
to change.
Note: 1500 HP (11 GHz) 40 MHz bandwidth does not support IF Combining. For
this frequency, Space Diversity is only available via BBS.
7.3 RFU-C
FibeAir RFU-C is a fully software configurable, state-of-the-art RFU that
supports a broad range of interfaces and capacities from 10 Mbps up to 500
Mbps. RFU-C operates in the frequency range of 6-42 GHz.
RFU-C supports low to high capacities for traditional voice and Ethernet
services, as well as PDH/SDH/SONET or hybrid Ethernet and TDM interfaces.
With RFU-C, traffic capacity throughput and spectral efficiency are optimized
with the desired channel bandwidth. For maximum user choice flexibility,
channel bandwidths can be selected together with a range of modulations
from QPSK to 256 QAM over 3.5-56 MHz channel bandwidth.
When RFU-C operates in co-channel dual polarization (CCDP) mode using
XPIC, two carrier signals can be transmitted over a single channel, using
vertical and horizontal polarization. This enables double capacity in the same
spectrum bandwidth.
Note: Remote mount configuration is not supported for 42 GHz.
10501-10563 10333-10395
10333-10395 10501-10563
10529-10591 10361-10423
168A
10361-10423 10529-10591
10585-10647 10417-10479
11425-11725 10915-11207
13002-13141 12747-12866
12747-12866 13002-13141
266
13127-13246 12858-12990
12858-12990 13127-13246
12807-12919 13073-13185
266A
13073-13185 12807-12919
15110-15348 14620-14858
14620-14858 15110-15348
15 GHz 490
14887-15117 14397-14627
14397-14627 14887-15117
15144-15341 14500-14697 644
19160-19700 18126-18690
18126-18690 19160-19700
1010
18 GHz 18710-19220 17700-18200
17700-18200 18710-19220
19260-19700 17700-18140
1560
17700-18140 19260-19700
23000-23600 22000-22600
1008
22000-22600 23000-23600
17
24UL GHz
24000 - 24250 24000 - 24250 All
25530-26030 24520-25030
24520-25030 25530-26030
1008
25980-26480 24970-25480
28150-28350 27700-27900
27700-27900 28150-28350
450
27950-28150 27500-27700
27500-27700 27950-28150
28050-28200 27700-27850 350
27700-27850 28050-28200
27960-28110 27610-27760
Note: Customers in countries following EC Directive 2006/771/EC (incl.
amendments) must observe the 100 mW EIRP obligation by adjusting transmit
power according to antenna gain and RF line losses.
40550-41278 42050-42778
42 GHz 42050-42778 40550-41278 1500
41222-41950.5 42722-43450
42722-43450 41222-41950.5
Note: 42GHz support is a roadmap item; parameters and availability are
subject to change.
RFU-C Dimensions:
Height: 200 mm
Width: 200 mm
Depth: 85 mm
Weight: 4 kg/9 lbs
RFU-Antenna Connection: Direct mount, or remote mount using the same antenna
type; remote mount uses a standard flexible waveguide (frequency dependent)
IDU-RFU Connection: Coaxial cable RG-223 (100 m/300 ft), Belden 9914/RG-8
(300 m/1000 ft) or equivalent, N-type connectors (male)
Polarization: Vertical or Horizontal
Standard Mounting: OD Pole 50 mm-120 mm/2”-4.5” (subject to vendor and
antenna size)
Operating Range: -40.5 to -72 VDC
Storage: ETS 300 019-2-1 class T1.2, with a temperature range of -25°C to
+85°C.
Transportation: ETS 300 019-2-2 class 2.3, with a temperature range of -40°C
to +85°C.
Power Consumption, RFU-C 6-26 GHz: 1+0: 22W; 1+1: 39W
Power Consumption, RFU-C 28-42 GHz: 1+0: 26W; 1+1: 43W
Operating Temperature:
Continuous operation with high reliability: -33°C to +55°C (-27°F to 131°F)
Exceptional temperatures, tested successfully with limited margins: -45°C to
+60°C (-49°F to 140°F)
Note: 42GHz RFU-C is a roadmap item; parameters and availability are subject
to change.
7.4 1500HP/RFU-HP
FibeAir 1500HP and RFU-HP are high transmit power RFUs designed for long
haul applications with multiple carrier traffic. Together with their unique
branching design, 1500HP/RFU-HP can chain up to five carriers per single
antenna port and 10 carriers for dual port, making them ideal for Trunk or
Multi Carrier applications. The 1500HP/RFU-HP can be installed in either
indoor or outdoor configurations.
The field-proven 1500HP/RFU-HP was designed to enable high quality
wireless communication in the most cost-effective manner. With tens of
thousands of units deployed worldwide, the 1500HP/RFU-HP serves mobile
operators enabling them to reach over longer distances while enabling the use
of smaller antennas. The RFU-HP also includes a power-saving feature (“green
mode”) that enables the microwave system to automatically detect when link
conditions allow it to use less power.
1500HP and RFU-HP 1RX support Space Diversity via Baseband Switching in
the IDU (BBS). 1500HP 2RX supports Space Diversity through IF Combining
(IFC). Both types of Space Diversity are valid solutions for dealing with
the presence of multipath.
Notes: 1500 HP (11 GHz) 40 MHz bandwidth does not support IF
Combining. For this frequency, Space Diversity is only
available via BBS.
1500HP/RFU-HP is compatible with IP-10G hardware
releases R2 and R3. It cannot be used with R1.
Note: For guidance on the differences between 1500HP and RFU-HP, refer to
the RFU Selection Guide on page 221.
Frequency Band: U6 GHz
Frequency Range (GHz): 6.425 to 7.100
Channel Bandwidth: 20 MHz to 40/56/60 MHz
1500HP 2RX Block Diagram (figure)
1500HP 1RX Block Diagram (figure)
RFU-HP 1RX Block Diagram (figure)
Note that the main differences between the 1500HP 1RX and RFU-HP 1RX are:
RFU-HP offers higher TX power for split mount.
The RFU-HP 1RX offers full support for 3.5-56 MHz channels.
The RFU-HP 1RX supports the green-mode feature.
Both systems are fully compatible with all OCB and ICB devices.
Space Diversity with Multiple RFUs (figure)
Space Diversity with Single RFU (figure)
The following block diagrams show the difference between the two OCBs and
the additional Diversity Circ block which is added in some diversity
configurations.
Old OCB – Type 1
Notes:
(c) – Radio Carrier
CCDP – Co-channel dual polarization
SP – Single pole antenna
DP – Dual pole antenna
In addition, the following losses will be added when using these items:
Subrack
The subrack hosts all the RFU components and connections, as shown in the
previous figure.
The subrack includes up to five RFUs per subrack (each RFU connects to an
ICB).
RFU with Branching
RF Filters
The RF Filters are used for specific frequency channels and Tx/Rx separation.
The filters are attached to the ICB, and each RFU contains one Rx and one Tx
filter.
In an IFC Space Diversity configuration, each RFU contains two Rx filters to
combine the IF signals, along with one Tx filter.
The ICC sums the Rx and Tx signals and combines the N channels to the output
ports (one or two, in accordance with the configuration).
Patch Panel
The ICB’s IF and XPIC cables are connected to the patch panel. The IDU’s IF
cables are connected to the specific RFU location. An XPIC cable is used
between two RFUs which are using the same Tx and Rx filters with different
polarizations (V and H).
Fan Tray
The fan tray contains eight controlled and monitored fans, which dissipate
the RFUs’ heat. In an ETSI rack, the fan tray is an integral part of the
rack (as shown above); when a 19” frame rack is used, the fan tray is a
separate unit that must be assembled separately (shown below).
Fan Tray in 19” Frame Rack
When a configuration includes more than ten carriers, two racks are
assembled and connected.
Configuration with More than Ten Carriers – Two Connected Racks
Configuration | Interfaces | 1+0 | 1+1 / FD / 2+0 | 2+1 / 3+0 | 3+1 / 4+0 | 4+1 / 5+0
WG losses per 100m: 6L: 4; 6H: 4.5; 7/8GHz: 6; 11GHz: 10
Symmetrical Coupler: 3 (added to adjacent channel configuration)
All-Indoor Tx and Rx 0.3 (1c) 0.3 (1c) 0.7 (2c) 0.7 (2c) 1.1 (3c)
CCDP with DP antenna
Diversity RX 0.2 (1c) 0.2 (1c) 0.6 (2c) 0.6 (2c) 1.0 (3c)
Tx and Rx 0.3 (1c) 0.7 (2c) 1.1 (3c) 1.5 (4c) 1.9 (5c)
SP Non adjacent channels
Diversity RX 0.2 (1c) 0.6 (2c) 1.0 (3c) 1.4 (4c) 1.8 (5c)
CCDP with DP antenna Tx and Rx 0.3 (1c) 0.7 (1c) 1.1 (2c) 1.1 (2c) 1.5 (3c)
Upgrade Ready Diversity RX 0.2 (1c) 0.6 (1c) 1.0 (2c) 1.0 (2c) 1.4 (3c)
CCDP with DP antenna Tx and Rx 1.5 (3c) 1.9 (4c) 1.9 (4c) 2.3 (5c) 2.3 (6c)
Upgrade Ready Diversity RX 1.4 (3c) 1.8 (4c) 1.8 (4c) 2.2 (5c) 2.2 (6c)
Main Configurations
1+0
1+0 East West
1+1
1+1 East West
1+1 HSB Compact Front View
1A. The default PDU assembled with the ETSI rack includes a special plastic
cover addition.
For special cases where PDU protection is required, a PDU with a plastic
protection cover can be provided.
The PN for this PDU with protection cover is 32T-PDU_CVR.
RFU Models
Diversity/Non-Diversity Split-Mount
Space Diversity IFC (2Rx) (6, 7, 8, 11GHz): 15OCBf-SD-xxxy-ZZZ-H/L
Non Space Diversity (1Rx) (6, 7, 8GHz): 15OCBf-xxxy-ZZ-H/L
11GHz Non Space Diversity (1Rx): 15OCB11w-xxxy-ZZ-H/L
Note: The 11GHz OCB is a wide-BW OCB which supports up to 40MHz, while the
other OCBs (6L, 6H, 7, 8GHz) support up to 30MHz.
7.5 RFU-HS
FibeAir RFU-HS is a high transmit power RFU for long-haul applications.
Based on Ceragon’s field-proven 1500HP technology, RFU-HS supports
capacities of up to 500 Mbps for TDM and IP interfaces.
With its high transmit power, FibeAir RFU-HS is designed to enable high
quality wireless communication in the most cost-effective manner, reaching
over longer distances while enabling the use of smaller antennas.
RFU Dimensions:
Height: 409 mm
Width: 286 mm
Depth: 86 mm
Weight: 8 kg
RFU-Antenna Connection: Standard flexible waveguide (frequency dependent)
IDU-RFU Connection: Coaxial cable RG-223 (100 m/300 ft), Belden 9914/RG-8
(300 m/1000 ft) or equivalent, N-type connectors (male)
Maximum System Power Consumption (IDU and RFU): 1+0: 88W; 1+1: 134W
Storage: ETS 300 019-2-1 class T1.2, with a temperature range of -25°C to
+85°C.
Transportation: ETS 300 019-2-2 class 2.3, with a temperature range of -40°C
to +85°C.
7.6 RFU-SP
FibeAir RFU-SP supports multiple capacities, frequencies, modulation
schemes, and configurations for various network requirements. RFU-SP
operates in the frequency range of 6-8 GHz, and supports capacities of 40
Mbps to 400 Mbps for TDM and IP interfaces. The capacity can easily be
doubled using XPIC.
RFU Dimensions:
Height: 409 mm
Width: 286 mm
Depth: 86 mm
Weight: 8 kg
RFU-Antenna Connection: Standard flexible waveguide (frequency dependent)
IDU-RFU Connection: Coaxial cable RG-223 (100 m/300 ft), Belden 9914/RG-8
(300 m/1000 ft) or equivalent, N-type connectors (male)
Maximum System Power Consumption (IDU and RFU): 1+0: 88W; 1+1: 130W
Storage: ETS 300 019-2-1 class T1.2, with a temperature range of -25°C to
+85°C.
Transportation: ETS 300 019-2-2 class 2.3, with a temperature range of -40°C
to +85°C.
Vendor | Frequency Band | Diameter | Manufacturer PN | Marketing Model
Andrew 7/8 GHz 4ft VHLP4-7W-CR3 A-4-7_8-A
Andrew 7/8 GHz 6ft VHLP6-7W-CR3 A-6-7_8-A
RFS 6L 4ft SU4-59CVA A-4-6L-R
RFS 6L 6ft SU6-59CVA A-6-6L-R
RFS 6U 4ft SU4-65CVA A-4-6H-R
RFS 6U 6ft SU6-65CVA A-6-6H-R
RFS 7/8 GHz 4ft SB4-W71CVA A-4-7_8-R
RFS 7/8 GHz 6ft SU6B-W71CVA A-6-7_8-R
Xian Putian 6L 4ft WTG12-58DAR A-4-6L-X
Xian Putian 6L 6ft WTG18-58DAR A-6-6L-X
Xian Putian 6U 4ft WTG12-64DAR A-4-6H-X
Xian Putian 6U 6ft WTG18-64DAR A-6-6H-X
Xian Putian 7/8 GHz 4ft WTG12-W71DAR A-4-7_8-X
Xian Putian 7/8 GHz 6ft WTG18-W71DAR A-6-7_8-X
Configuration | Interfaces | 6-8 GHz
Remote Mount antenna | Flex WG (added on remote mount configurations) | 0.5
1+0 | Integrated antenna | 0
1+1 HSB, integrated antenna with asymmetrical coupler | Main TR: 1.6;
Secondary TR: 6.5
1+1/2+2 HSB, remote antenna with asymmetrical coupler | Main TR: 1.6;
Secondary TR: 6.5
2+0 SP (with CPLR) | Integrated antenna | 4
4+0 DP | Remote mount antenna | 4
7.7 1500P
8. Typical Configurations
This chapter includes:
IP-10G Configuration Options
Point-to-Point Configurations
Nodal Configurations
Note: The component tables in this section show the number of
components and accessories required for each
configuration, but do not include regular traffic cables, and
optional cables such as alarm and user channel cables. They
do include splitters and Y cables required for protected
configurations.
For optical (SFP) interfaces, two cables are required for
each interface, one for TX and one for RX.
1+1 Components
2+0/XPIC Link with 64 E1s (no Multi-Radio) Components (Each Side of Link)
1+1 HSB Link with 16 E1s+ STM-1 Components (Each Side of the Link)
8.3.1 Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM-1 Mux
Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM-1 Mux
Chain with 1+0 Downlink and 1+1 HSB Uplink, with STM-1 Mux Components
(Entire Chain)
Node with 2 x 1+0 Downlinks and 1 x 1+1 HSB Uplink Components (Entire
Node)
8.3.3 Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM-1 Mux
Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM-1 Mux
Chain with 1+1 Downlink and 1+1 HSB Uplink, with STM-1 Mux Components
(Entire Chain)
8.3.4 Native2 Ring with 3 x 1+0 Links + STM-1 Mux Interface at Main
Site
Native2 Ring with 3 x 1+0 Links + STM-1 Mux Interface at Main Site
Native2 Ring with 3 x 1+0 Links + STM-1 Mux Interface at Main Site
Components (Entire Ring)
8.3.5 Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at
Main Site
Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at Main Site
Native2 Ring with 3 x 1+1 HSB Links + STM-1 Mux Interface at Main Site
Components (Entire Ring)
8.3.6 Node with 1 x 1+1 HSB Downlink and 1 x 1+1 HSB Uplink with
STM-1 Mux
Node with 1 x 1+1 HSB Downlink and 1 x 1+1 HSB Uplink with STM-1 Mux
Node with 1 x 1+1 HSB Downlink and 1 x 1+1 HSB Uplink with STM-1 Mux
Components (Entire Node)
Native2 Ring with 4 x 1+0 Links, with STM-1 Components (Entire Ring)
Native2 Ring with 3 x 1+0 Links + Spur Link 1+0 Components (Entire Ring)
8.3.9 Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link (5 hops
total), with STM-1 Mux
Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link (5 hops total), with STM-1
Mux
Native2 Ring with 4 x 1+0 MW Links and 1 x Fiber Link with STM-1 Mux
Components (Entire Ring)
Native2 Ring with 2 x 2+0/XPIC MW Links and 1 x Fiber Link with 2 x STM-1
Components (Entire Ring)
The management VLAN blocking capability is a special feature for Single Pipe
applications that blocks management frames from egressing the line interface.
This feature is also relevant only to standalone units or the main unit in a
nodal configuration. There is no purpose in blocking the In-Band management
VLAN in extension units, since the management VLAN can be blocked in the
Ethernet switch port.
9.8.3.3 Authorization
Users are assigned to user groups. Each group has separate and well-defined
authorization to access resources. Security configuration can only be
performed by the group with the highest permission level.
In the NMS, it is possible to customize groups and group permissions.
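The group-based authorization model described above can be sketched as follows. This is an illustrative sketch only, not Ceragon's implementation; the group names and operation labels are assumptions for the example.

```python
# Illustrative sketch (assumed group and operation names): users belong
# to groups, each group has well-defined rights, and only the group with
# the highest permission level may change security configuration.
GROUP_RIGHTS = {
    "viewer":   {"read"},
    "operator": {"read", "configure"},
    "admin":    {"read", "configure", "security"},  # highest level
}

def is_authorized(group: str, operation: str) -> bool:
    """Return True if the user's group grants the requested operation."""
    return operation in GROUP_RIGHTS.get(group, set())

# Security configuration is restricted to the highest-permission group:
assert is_authorized("admin", "security")
assert not is_authorized("operator", "security")
```

In an NMS that supports customized groups, the `GROUP_RIGHTS` table would simply be edited per deployment rather than hard-coded.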
9.8.4.5 SNMP
IP-10G supports SNMPv1, SNMPv2c, and SNMPv3. The default community string in
the NMS and the SNMP agent in the embedded software are disabled. Users can
set community strings for access to IDUs.
SNMPv3 connections are authenticated with a single user ID and password.
Admin users can configure this user ID and password.
IP-10G supports the following MIBs:
RFC-1213 (MIB II)
RMON MIB
Ceragon (proprietary) MIB.
Access to all IDUs in a node is provided by means of the community field in
SNMPv1/SNMPv2c and the context field in SNMPv3.
SNMP IP Forwarding
Nodal configurations are usually managed by a single IP address for the main
slot and by using the different community strings for each individual
extension slot. The SNMP IP forwarding feature is intended for users who are
managing IP-10G shelves with SNMP using a third party NMS. This feature
adds an option to define separate IP addresses for each slot and to access each
slot via SNMP using this standalone IP address.
Whenever IP addresses are configured per slot, an SNMP master agent in the
main slot will forward to its sub-agents in extension slots all corresponding
SNMP messages. Each extension slot will reply and send SNMP traps with its
own IP address.
Note: SNMP management of each shelf can be accessed either by
community strings or by user-defined standalone IP
addresses. However, you cannot use both methods in a
single shelf.
The default behavior of the shelf is to use community strings to manage
extension slots. To enable SNMP IP forwarding, the user must set the shelf IP
address parameters to non-zero values.
Note: This feature is only intended for SNMP management. Web
and CLI management will always be accessed through a
single IP address of the main slot in the shelf.
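The two addressing modes described above can be sketched as a dispatch decision in the main slot. This is a minimal model with assumed names, not the embedded agent's code; it only shows how a request could be routed by per-slot IP address when SNMP IP forwarding is enabled, and by community string otherwise.

```python
# Illustrative sketch: a main-slot SNMP master agent dispatching a
# request either by per-slot IP address (forwarding enabled when slot
# IPs are set to non-zero values) or by community string (default).
def target_slot(dest_ip, community, slot_ips, slot_communities):
    """Return the slot that should handle an incoming SNMP request."""
    forwarding_enabled = any(ip != "0.0.0.0" for ip in slot_ips.values())
    if forwarding_enabled:
        # Per-slot IPs configured: route by destination address.
        for slot, ip in slot_ips.items():
            if ip == dest_ip:
                return slot
        return None
    # Default behavior: route by community string.
    for slot, comm in slot_communities.items():
        if comm == community:
            return slot
    return None

slot_ips = {"main": "10.0.0.1", "ext2": "10.0.0.2"}
slot_communities = {"main": "public", "ext2": "public_ext2"}
print(target_slot("10.0.0.2", "public", slot_ips, slot_communities))  # ext2
```

Note that, as stated above, the two methods are mutually exclusive within a single shelf, which is why the sketch selects exactly one routing rule.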
9.8.4.7 Encryption
Encryption algorithms for secure management protocols include:
Symmetric key algorithms: 128-bit AES
Asymmetric key algorithms: 1024-bit RSA
9.8.4.8 SSH
The CLI interface supports SSH-2.
Users of type "administrator" or above can enable or disable SSH.
9.11 CeraBuild
CeraBuild is an application that enables installation and maintenance
personnel to initiate and produce commissioning reports to ensure that an IP-
10G system was set up properly and that all components are in order for
operation.
CeraBuild includes the following tools:
Site Commission Tool
Link Commission Tool
PM Commission Tool
Diagnostics Tool
10.1 OAM
FibeAir IP-10G provides complete Operations Administration and
Maintenance (OAM) functionality at multiple layers, including:
Alarms and events
Maintenance signals, such as LOS, AIS, and RDI.
Performance monitoring
Maintenance commands, such as loopbacks and APS commands.
[Figure: OAM maintenance domains and levels — Customer Level and Provider Level MEPs, with maintenance levels ranging from 7 down to 0]
MEP IDs and Remote MEP IDs must be unique. A MEP ID should NOT be
reused for Remote MEP IDs on the same (specific) MAID.
CFM works according to the outer VLAN. In Managed Switch mode, the service
is identified by the 802.1Q VLAN, while in Metro Switch (Provider Bridge)
mode, the service is recognized only by the outer “S-tag”, which might
encapsulate an inner C-tag (CQ19849). This is illustrated in the following
example.
The example above assumes that a Switch (802.1Q bridge) trunk port is
connected to a Metro Switch CN port. A MEP is defined on the leftmost access
port, and a MIP with the same level is defined on the leftmost CN port. When an
LTM (Link-trace message) egresses the leftmost trunk port, it is tagged (step
1). This LTM ingresses the leftmost CN port, and reaches the CPU. The CPU
strips its VLAN (step 2), and generates an LTR (Link-trace Response) message
back to the CN port.
This LTR message does not carry any VLAN (step 3). When it ingresses the
leftmost trunk port, it is discarded (step 4). This example demonstrates
that a MIP defined on the CN port does not reply to an LTM. In such scenarios,
a MIP should be avoided on a CN port. CN ports are part of a provider domain;
thus, a MIP or MEP on these ports is part of the provider OAM domain and
should be defined as such.
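The four steps above can be traced with a toy model. This is an illustrative walk-through with assumed helper names, not switch firmware: a tagged LTM leaves the 802.1Q trunk port, the CPU behind the CN port strips the VLAN and answers with an untagged LTR, and the trunk port then discards the untagged reply.

```python
# Toy walk-through of the LTM/LTR example: why an untagged LTR from a
# MIP on a CN port never reaches the MEP on the 802.1Q trunk port.
def trunk_egress(frame):
    frame["vlan"] = 100             # step 1: the LTM leaves the trunk tagged
    return frame

def cn_mip_reply(frame):
    frame = dict(frame, vlan=None)  # step 2: the CPU strips the VLAN
    return {"type": "LTR", "vlan": frame["vlan"]}  # step 3: untagged LTR

def trunk_ingress(frame):
    # step 4: the trunk (802.1Q) port drops frames without a VLAN tag
    return frame if frame["vlan"] is not None else None

ltm = trunk_egress({"type": "LTM", "vlan": None})
ltr = cn_mip_reply(ltm)
assert trunk_ingress(ltr) is None   # the LTR is discarded at the trunk port
```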
The automatic link-trace timer triggers an automatic link-trace process. The
process might take longer than the value to which the timer is configured,
due to the number of remote MEPs (each link-trace process takes around
12 seconds). When the automatic link-trace timer is set to a new value, the
new cycle period takes effect only after the current cycle period has
terminated.
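The timing relationship above can be expressed as a simple rough model (an illustration only; the function name and the example MEP counts are assumptions, and the 12-second figure is the approximate value quoted above):

```python
# Rough timing model: each link-trace to a remote MEP takes about
# 12 seconds, so a cycle cannot finish faster than 12 s times the
# number of remote MEPs, regardless of the configured timer value.
LINK_TRACE_SECONDS_PER_MEP = 12

def effective_cycle_seconds(timer_seconds: int, remote_meps: int) -> int:
    """The actual cycle period: the configured timer, or longer when
    the link-trace process itself takes more time."""
    return max(timer_seconds, remote_meps * LINK_TRACE_SECONDS_PER_MEP)

print(effective_cycle_seconds(60, 10))  # 120: 10 remote MEPs need ~120 s
```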
The maximum number of MEPs guaranteed to provide reliable indications is
50 per IDU.
Standard Description
802.3 10base-T
802.3u 100base-T
802.3ab 1000base-T
802.3z 1000base-X
802.3ac Ethernet VLANs
802.1Q Virtual LAN (VLAN)
802.1p Class of service
802.1ad Provider bridges (QinQ)
802.3x Flow control
802.3ad Link aggregation
802.1ag Ethernet service OA&M (CFM)
802.1w RSTP
802.1AB Link Layer Discovery Protocol (LLDP)
Auto MDI/MDIX for 1000baseT
RFC 1349 IPv4 TOS
RFC 2474 IPv4 DSCP
RFC 2460 IPv6 Traffic Classes
27. Note that the voltage at the BNC port on the RFUs is not accurate and should be used only as an aid.
12. Specifications
This chapter includes:
General Specifications
Transmit Power Specifications
Receiver Threshold Specifications
Radio Capacity Specifications
Ethernet Latency Specifications
E1 Latency Specifications
Interface Specifications
Mechanical Specifications
Power Input Specifications
Power Consumption Specifications
Environmental Specifications
Related Topics:
Standards and Certifications
Note: All specifications are subject to change without prior
notification.
28. 42 GHz RFU-C is a roadmap item; parameters and availability are subject to change.
Modulation | 6-8 GHz | 10-15 GHz | 18-23 GHz | 24 GHz UL* | 26 GHz | 28 GHz | 32, 38 GHz | 42 GHz
QPSK 26 24 22 0 21 14 18 15
8 PSK 26 24 22 0 21 14 18 15
16 QAM 25 23 21 0 20 14 17 14
32 QAM 24 22 20 0 19 14 16 13
64 QAM 24 22 20 0 19 14 16 13
128 QAM 24 22 20 0 19 14 16 13
256 QAM 22 20 18 0 17 12 14 11
* For 1 ft antenna or lower.
QPSK 30 27 33 30 33
8 PSK 30 27 33 30 33
16 QAM 30 27 33 30 33
32 QAM 30 26 33 29 33
64 QAM 29 26 32 29 32
128 QAM 29 26 32 29 31
256 QAM 27 24 30 27 30
29. Refer to the RFU-C roll-out plan for availability of each frequency.
30. Customers in countries following EC Directive 2006/771/EC (including amendments) must observe the 100 mW EIRP obligation by adjusting transmit power according to antenna gain and RF line losses.
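The adjustment described above is a simple link-budget subtraction: EIRP (dBm) = TX power (dBm) + antenna gain (dBi) - RF line losses (dB), and 100 mW corresponds to 20 dBm. The sketch below works that out; the antenna gain and line-loss figures in the example are assumptions for illustration, not product values.

```python
import math

# Worked example of the EIRP obligation: the highest permitted TX power
# is the EIRP limit minus antenna gain plus RF line losses (all in dB).
EIRP_LIMIT_DBM = 10 * math.log10(100)  # 100 mW -> 20 dBm

def max_tx_power_dbm(antenna_gain_dbi: float, line_loss_db: float) -> float:
    """Maximum transmit power setting that keeps EIRP at or below 100 mW."""
    return EIRP_LIMIT_DBM - antenna_gain_dbi + line_loss_db

# Assumed example values: a 38 dBi antenna with 1.5 dB of line loss.
print(max_tx_power_dbm(38.0, 1.5))  # -16.5 dBm
```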
31. 42 GHz RFU-C is a roadmap item; parameters and availability are subject to change.
QPSK 30
8 PSK 30
16 QAM 30
32 QAM 30
64 QAM 29
128 QAM 29
256 QAM 27
QPSK 24
8 PSK 24
16 QAM 24
32 QAM 24
64 QAM 24
128 QAM 24
256 QAM 22
QPSK 23 23 22 21 20
8 PSK 23 23 22 21 20
16 QAM 23 21 20 20 19
32 QAM 23 21 20 20 19
64 QAM 22 20 20 19 18
128 QAM 22 20 20 19 18
256 QAM 21 19 19 18 17
32. 1 dBm higher for 6L GHz.
33. 20 dBm for 11 GHz.
34. Refer to the RFU-C roll-out plan for availability of each frequency.
35. 42 GHz RFU-C is a roadmap item; parameters and availability are subject to change.
Profile | Modulation | Channel Spacing | Occupied Bandwidth (99%) | Frequency (GHz): 6-15 | 18 | 23 | 24 | 26 | 28 | 31 | 32, 38 | 42
0 QPSK -89.1 -88.6 -87.1 -84.1 -86.6 -86.6 -85.6 -87.1 -87.5
1 8 PSK -85.0 -84.5 -83.0 -80.0 -82.5 -82.5 -81.5 -83.0 -83.5
2 16 QAM -82.7 -82.2 -80.7 -77.7 -80.2 -80.2 -79.2 -80.7 -81.0
3 32 QAM -78.0 -77.5 -76.0 -73.0 -75.5 -75.5 -74.5 -76.0 -76.5
28 MHz 26 MHz
4 64 QAM -76.0 -75.5 -74.0 -71.0 -73.5 -73.5 -72.5 -74.0 -74.5
5 128 QAM -71.6 -71.1 -69.6 -66.6 -69.1 -69.1 -68.1 -69.6 -70.0
6 256 QAM (Strong FEC) -71.0 -70.5 -69.0 -66.0 -68.5 -68.5 -67.5 -69.0 -69.5
7 256 QAM (Light FEC) -68.0 -67.5 -66.0 -63.0 -65.5 -65.5 -64.5 -66.0 -66.5
0 QPSK -93.3 -92.8 -91.3 -88.3 -90.8 -90.8 -89.8 -91.3 -85.0
1 8 PSK -89.6 -89.1 -87.6 -84.6 -87.1 -87.1 -86.1 -87.6 -79.5
2 16 QAM -78.9 -78.4 -76.9 -73.9 -76.4 -76.4 -75.4 -76.9 -77.0
3 32 QAM -75.1 -74.6 -73.1 -70.1 -72.6 -72.6 -71.6 -73.1 -73.5
40 MHz 36.5 MHz
4 64 QAM -71.9 -71.4 -69.9 -66.9 -69.4 -69.4 -68.4 -69.9 -70.0
5 128 QAM -70.7 -70.2 -68.7 -65.7 -68.2 -68.2 -67.2 -68.7 -69.0
6 256 QAM (Strong FEC) -68.4 -67.9 -66.4 -63.4 -65.9 -65.9 -64.9 -66.4 -66.5
7 256 QAM (Light FEC) -65.8 -65.3 -63.8 -60.8 -63.3 -63.3 -62.3 -63.8 -64.0
0 QPSK -86.4 -85.9 -84.4 -81.4 -83.9 -83.9 -82.9 -84.4 -84.5
1 8 PSK -81.1 -80.6 -79.1 -76.1 -78.6 -78.6 -77.6 -79.1 -79.5
2 16 QAM -80.0 -79.5 -78.0 -75.0 -77.5 -77.5 -76.5 -78.0 -78.5
3 32 QAM -75.8 -75.3 -73.8 -70.8 -73.3 -73.3 -72.3 -73.8 -74.0
56 MHz 52 MHz
4 64 QAM -73.5 -73.0 -71.5 -68.5 -71.0 -71.0 -70.0 -71.5 -72.0
5 128 QAM -70.5 -70.0 -68.5 -65.5 -68.0 -68.0 -67.0 -68.5 -69.0
6 256 QAM (Strong FEC) -68.1 -67.6 -66.1 -63.1 -65.6 -65.6 -64.6 -66.1 -66.5
7 256 QAM (Light FEC) -65.1 -64.6 -63.1 -60.1 -62.6 -62.6 -61.6 -63.1 -63.5
36. Refer to the RFU-C roll-out plan for availability of each frequency.
37. 42 GHz RFU-C is a roadmap item; parameters and availability are subject to change.
1500HP/RFU-HP
Profile | Modulation | Channel Spacing | Occupied Bandwidth (99%) | 6 GHz | 7-11 GHz
0 QPSK -91.5 -91.0
1 8 PSK -88.4 -87.9
2 16 QAM -86.4 -85.9
3 32 QAM -83.8 -83.3
7 MHz 6.5 MHz
4 64 QAM -82.3 -81.8
5 128 QAM -80.0 -79.5
6 256 QAM (Strong FEC) -76.8 -76.3
7 256 QAM (Light FEC) -73.3 -72.8
0 QPSK -90.3 -89.8
1 8 PSK -86.5 -86.0
2 16 QAM -83.1 -82.6
3 32 QAM -81.5 -81.0
14 MHz 12.5 MHz
4 64 QAM -80.1 -79.6
5 128 QAM -77.1 -76.6
6 256 QAM (Strong FEC) -74.1 -73.6
7 256 QAM (Light FEC) -71.8 -71.3
38. 1500HP supports channels with up to 30 MHz occupied bandwidth.
39. Threshold figures for 11 GHz are for 1500HP only.
1500HP/RFU-HP
Profile | Modulation | Channel Spacing | Occupied Bandwidth (99%) | 6 GHz | 7-11 GHz
0 QPSK -89.1 -88.6
1 8 PSK -85.0 -84.5
2 16 QAM -82.7 -82.2
3 32 QAM -78.0 -77.5
28 MHz 26 MHz
4 64 QAM -76.0 -75.5
5 128 QAM -71.6 -71.1
6 256 QAM (Strong FEC) -71.0 -70.5
7 256 QAM (Light FEC) -68.0 -67.5
0 QPSK -86.9 -86.4
1 8 PSK -81.4 -80.9
2 16 QAM -78.9 -78.4
3 32 QAM -75.1 -74.6
40 MHz 36 MHz
4 64 QAM -71.9 -71.4
5 128 QAM -70.7 -70.2
6 256 QAM (Strong FEC) -68.4 -67.9
7 256 QAM (Light FEC) -65.8 -65.3
0 QPSK -86.4 -85.9
1 8 PSK -81.1 -80.6
2 16 QAM -80.0 -79.5
3 32 QAM -75.8 -75.3
56 MHz 52 MHz
4 64 QAM -73.5 -73.0
5 128 QAM -70.5 -70.0
6 256 QAM (Strong FEC) -68.1 -67.6
7 256 QAM (Light FEC) -65.1 -64.6
40. Threshold figures for 11 GHz are for 1500HP only.
41. 1500HP supports channels with up to 30 MHz occupied bandwidth.
Profile | Modulation | Channel Spacing | Occupied Bandwidth (99%) | 6-8 GHz
0 QPSK -89.5
1 8 PSK -85.5
2 16 QAM -83.0
3 32 QAM -78.5
28 MHz 26 MHz
4 64 QAM -76.5
5 128 QAM -72.0
6 256 QAM (Strong FEC) -71.5
7 256 QAM (Light FEC) -68.5
0 QPSK -87.0
1 8 PSK -81.5
2 16 QAM -79.0
3 32 QAM -75.5
40 MHz 36.5 MHz
4 64 QAM -72.0
5 128 QAM -71.0
6 256 QAM (Strong FEC) -68.5
7 256 QAM (Light FEC) -66.0
0 QPSK -86.5
1 8 PSK -81.5
2 16 QAM -80.5
3 32 QAM -76.0
56 MHz 52 MHz
4 64 QAM -74.0
5 128 QAM -71.0
6 256 QAM (Strong FEC) -68.5
7 256 QAM (Light FEC) -67.0
42. 1500HP supports channels with up to 30 MHz occupied bandwidth.
Profile | Modulation | Channel Spacing | Occupied Bandwidth (99%) | 6-8 GHz
0 QPSK -89.5
1 8 PSK -85.5
2 16 QAM -83.0
3 32 QAM -78.5
28 MHz 26 MHz
4 64 QAM -76.5
5 128 QAM -72.0
6 256 QAM (Strong FEC) -71.5
7 256 QAM (Light FEC) -68.5
0 QPSK -87.0
1 8 PSK -81.5
2 16 QAM -79.0
3 32 QAM -75.5
40 MHz 36.5 MHz
4 64 QAM -72.0
5 128 QAM -71.0
6 256 QAM (Strong FEC) -68.5
7 256 QAM (Light FEC) -66.0
0 QPSK -86.5
1 8 PSK -81.5
2 16 QAM -80.5
3 32 QAM -76.0
56 MHz 52 MHz
4 64 QAM -74.0
5 128 QAM -71.0
6 256 QAM (Strong FEC) -68.5
7 256 QAM (Light FEC) -67.0
12.4.1.5 28 MHz Channel Bandwidth, Ultra High Capacity (Class 6A, ACAP only)
Profile | Modulation | Minimum required capacity license | Radio Throughput (Mbps) | Max # of supported E1s | Ethernet capacity (Mbps) per average Ethernet frame size: 64 | 128 | 256 | 512 | 1024 | 1518 bytes
0 QPSK 50 43 17 53 47 44 43 42 42
1 8 PSK 50 57 24 70 63 59 57 56 56
2 16 QAM 100 82 34 102 91 86 83 82 81
3 32 QAM 100 109 46 137 123 115 112 110 109
4 64 QAM 150 135 57 170 152 143 138 136 135
5 128 QAM 150 165 70 208 186 175 169 166 165
6 256 QAM (Strong FEC) 200 182 78 230 206 193 187 184 183
7 256 QAM (Light FEC) 200 195 83 246 220 207 200 197 196
Profile | Modulation | Minimum required capacity license | Radio Throughput (Mbps) | Max # of supported E1s | Ethernet capacity (Mbps) per average Ethernet frame size: 64 | 128 | 256 | 512 | 1024 | 1518 bytes
0 QPSK 100 76 32 95 85 80 77 76 76
1 8 PSK 100 113 48 143 128 120 116 114 114
2 16 QAM 150 150 64 190 170 159 154 152 151
3 32 QAM 200 199 84 252 226 212 205 202 201
4 64 QAM 300 248 84 314 281 264 255 251 249
5 128 QAM 300 297 84 377 337 317 306 301 299
6 256 QAM (Strong FEC) 400 338 84 429 383 360 349 343 341
7 256 QAM (Light FEC) 400 367 84 465 416 391 378 372 370
12.4.2.5 28 MHz Channel Bandwidth, Ultra High Capacity (Class 6A, ACAP only)
Profile | Modulation | Minimum required capacity license | Radio Throughput (Mbps) | Max # of supported E1s | Ethernet capacity (Mbps) with MAC header compression, per average Ethernet frame size: 64 | 128 | 256 | 512 | 1024 | 1518 bytes
0 QPSK 50 43 17 60 50 46 43 42 42
1 8 PSK 50 57 24 81 67 61 58 57 56
2 16 QAM 100 82 34 117 98 89 85 82 82
3 32 QAM 100 109 46 157 131 119 113 111 110
4 64 QAM 150 135 57 194 162 147 140 137 136
5 128 QAM 150 165 70 238 199 181 172 168 166
6 256 QAM (Strong FEC) 200 182 78 263 220 200 190 186 184
7 256 QAM (Light FEC) 200 195 83 281 235 214 203 198 197
12.4.3.5 28 MHz Channel Bandwidth, Ultra High Capacity (Class 6A, ACAP only)
Profile | Modulation | Minimum required capacity license | Radio Throughput (Mbps) | Max # of supported E1s | Ethernet capacity (Mbps) with Multi-Layer header compression, per average Ethernet frame size: 64 | 128 | 256 | 512 | 1024 | 1518 bytes
0 QPSK 50 43 17 153 71 53 47 44 43
1 8 PSK 50 57 24 204 94 71 63 59 58
2 16 QAM 100 82 34 296 137 103 91 85 84
3 32 QAM 100 109 46 398 184 139 122 115 112
4 64 QAM 150 135 57 492 227 171 151 142 139
5 128 QAM 150 165 70 603 279 210 185 174 170
6 256 QAM (Strong FEC) 200 182 78 667 308 232 204 192 188
7 256 QAM (Light FEC) 200 195 83 713 330 248 218 205 201
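The pattern in the compression tables above — small frames gaining far more than large ones — follows from a simple model: if compression removes a roughly fixed number of header bytes from each frame before transmission, the relative saving shrinks as frames grow. The sketch below is an illustration only; the 46-byte overhead is an assumed value, and the table figures above remain the authoritative numbers.

```python
# Simplified capacity model: each frame sheds a fixed header overhead
# over the air, so Ethernet-side capacity exceeds the radio throughput
# by the ratio of the full frame to the bytes actually transmitted.
def ethernet_capacity_mbps(radio_mbps: float, frame_bytes: int,
                           compressed_overhead_bytes: int) -> float:
    """Capacity seen at the Ethernet port for a given frame size."""
    air_bytes = frame_bytes - compressed_overhead_bytes
    return radio_mbps * frame_bytes / air_bytes

# Assumed 46 bytes removed per frame at 195 Mbps radio throughput:
small = ethernet_capacity_mbps(195, 64, 46)    # 64-byte frames gain ~3.5x
large = ethernet_capacity_mbps(195, 1518, 46)  # 1518-byte frames gain ~3%
```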
43. When installed with 19-inch brackets, the unit width is 486 mm.
High 33-26 77 77
Medium 25-20 48 53
Low 19-11 34 34
Mute NA 20 20
Relative Humidity | 0 to 95%, non-condensing | 5% to 100%
Altitude 3,000m (10,000ft)
The following figure and the table underneath illustrate the cables and
accessories, both mandatory and optional, in a 1+0 system.
Ethernet + 32 E1s, 1+0
The following figure and the table underneath illustrate the cables and
accessories, both mandatory and optional, in a 1+1 system.
Note: This table only includes components that are unique for
protected configurations.
Ethernet + 32 E1s, 1+1 HSB
GBE-SPL-SM SM/LC Optical splitter conn. 1300nm 50/5 Optical Splitter (Single Mode)
GBE-SPL-MM-0.6M MM/LC Optical splitter 62.5/125 0.6M Optical Splitter (Multi-Mode – 0.6 meters)
OP-SM-CBL-LC-LC-DPLX 3M Duplex Optical Cable LC-LC SM 3M Duplex Optical Cable, LC-LC, 3 meters
OP-SM-CBL-LC-LC-DPLX 10M Duplex Optical Cable LC-LC SM 10M Duplex Optical Cable, LC-LC, 10 meters
OP-MM-CBL-LC-LC-DPLX 3M Duplex Optical Cable LC-LC MM 3M Duplex Optical Cable, LC-LC, 3 meters (Multi-Mode)
OP-MM-CBL-LC-LC-DPLX 6M Duplex Optical Cable LC-LC MM 6M Duplex Optical Cable, LC-LC, 6 meters (Multi-Mode)
OP-SM-HSPL-LC-LC 0.5M/0.5M Opt. H-splt SM 1310nm, LC/LC, 0.5M/0.5M Optical H Cable (Single Mode, 0.5/0.5 meters)
OP-SM-HSPL-LC-LC 1M/1M Opt. H-splt SM 1310nm, LC/LC, 1M/1M Optical H Cable (Single Mode, 1/1 meters)
OP-MM-HSPL-LC-LC 0.5M/0.5M Opt. H-splt MM 850nm, LC/LC, 0.5M/0.5M Optical H Cable (Multi-Mode, 0.5/0.5 meters)
OP-MM-HSPL-LC-LC 1M/1M Opt. H-splt MM 850nm, LC/LC, 1M/1M Optical H Cable (Multi-Mode, 1/1 meters)
13.9 E1 Cables
IP10-CBL-4E1-MDR-RJ45-XED-0.3m | IP-10 4E1 cable MDR68-RJ45 0.3M, cross | CABLE, SCSI68 Male TO 4xRJ45 Male, CROSS, 0.3M, 120 OHM
IP10-CBL-8E1-MDR-RJ45-XED-0.3m | IP-10 8E1 cable MDR68-RJ45 0.3M, cross | CABLE, SCSI68 Male TO 8xRJ45 Male, CROSS, 0.3M, 120 OHM
IP10-CBL-8E1-MDR-RJ45-XED-1.5m | IP-10 8E1 cable MDR68-RJ45 1.5M, cross | CABLE, SCSI68 Male TO 8xRJ45 Male, CROSS, 1.5M, 120 OHM
IP10-CBL-8E1-MDR-RJ45-XED-3m | IP-10 8E1 cable MDR68-RJ45 3M, cross | CABLE, SCSI68 Male TO 8xRJ45 Male, CROSS, 3M, 120 OHM
IP10-CBL-16E1-MDR-LA-RJ45-XD3m | IP-10 16E1 cable MDR68-RJ45 3M, LA, cross
IP10-CBL-16E1MDRLA-RJ45-XD1.5m | IP-10 16E1 cable MDR68-RJ45 1.5M, LA, crs | CABLE, SCSI 68PIN TO 16*RJ-45, 1.5M, 120 Ohm, LEFT ANGLE, CROSS
IP10-CBL-16E1MDRLA-RJ45XD-1.25m | IP-10 16E1 cable MDR68-RJ45, Cross, 1.25M
IP10-CBL-4E1-MDR-RJ45-0.3m | IP-10 4E1 cable MDR68-RJ45 0.3M | CABLE, SCSI 68PIN TO 4*RJ-45, 0.3M, 120 Ohm
IP10-CBL-8E1-MDR-RJ45-0.3m | IP-10 8E1 cable MDR68-RJ45 0.3M | CABLE, SCSI 68PIN TO 8*RJ-45, 0.3M, 120 Ohm
IP10-CBL-16E1-MDR-MDR-EXT-0.6m | IP-10 16E1 Extension cable 0.6m, MDR68 | CABLE, SCSI68 LEFT ANGLE TO SCSI68 FEMALE, 0.6M, 120 OHM, WITH ADAPTATION
IP10-CBL-16E1-OE-PROT-5M | IP-10 16 E1s cable open-end, 5M w/ prot. | CABLE, 2xSCSI68 LEFT ANGLE TO OE, 0.6M+5M, 120 OHM, WITH ADAPTATION
13.9.8 E1 Y Cable
A 0.6 meter SCSI168 left angle 120 ohm Y splitter cable (2 x male, 1 x female)
is used to provide a single input/output to and from the two 16 E1 ports of the
IDUs in the protected pair to a single external source in protected (HSB)
configurations.
E1 Y Cable
IP10-CBL-16E1-MDR-MDR-0.6m IP-10 16 E1 ports cable straight 0.6m E1 Straight Cable (0.6 meters)
IP10-CBL-16E1-MDR-MDR-1.5m IP-10 16 E1 ports cable straight 1.5m E1 Straight Cable (1.5 meters)
IP10-CBL-16E1-MDR-MDR-10m IP-10 16 E1 ports cable straight 10m E1 Straight Cable (10 meters)
IP10-CBL-16E1-MDR-MDR-25m IP-10 16 E1 ports cable straight 25m E1 Straight Cable (25 meters)
Alarms Y Cable
13.13 IF Cable
Each IDU-RFU connection requires an RG8 IF cable and two N-Type BNC
connectors.
IF Cable Marketing Models