
What is Duobinary Modulation for Optical Systems?

August 28, 2012

>> Why NRZ Isn't Always the Best Modulation Format

Most optical systems use the NRZ modulation format. While NRZ modulation is suitable for long haul systems in which the dispersion of the single mode fiber is compensated for by another fiber with negative dispersion, it is not the best choice for uncompensated single mode fibers. Duobinary modulation turns out to be a much better choice in this case, since it is more resilient to dispersion and is also reasonably simple to implement. We will explain the basics of duobinary modulation and present lab test results which show that duobinary modulation is much better than NRZ modulation for the uncompensated optical fiber channel.

>> The Basics of Duobinary Modulation

Simply put, duobinary modulation is a method for transmitting R bits/second using less than R/2 Hz of bandwidth. However, one of the most famous results of communication theory, Nyquist's sampling theorem, shows that in order to transmit R bits/second with no intersymbol interference, the minimum bandwidth required of the transmitted pulse is R/2 Hz. This means that duobinary pulses will have ISI (Intersymbol Interference). However, this ISI is introduced in a controlled manner so that it can be subtracted out to recover the original values. Let the transmitted signal be

x(t) = Σk dk q(t − kT)    (1.1)

Here {dk} are the data bits, q(t) is the transmitted pulse, and T = 1/R is the bit period. The pulse q(t) is usually chosen such that there is no ISI at the sampling instances (t = kT, k = 0, ±1, ±2, … are the sampling instances):

q(kT) = 1 for k = 0, and q(kT) = 0 otherwise    (1.2)

NRZ is one such scheme and requires a bandwidth of R Hz to transmit R bits/second (twice the Nyquist bandwidth of R/2 Hz). The simplest duobinary scheme transmits pulses with ISI as follows:

q(kT) = 1 for k = 0 and k = 1, and q(kT) = 0 otherwise    (1.3)

We then see from (1.1) and (1.3) that at the sampling instance kT, the receiver does not recover the data bit dk, but rather (dk-1 + dk). However, this scheme allows for pulses with a smaller bandwidth. By allowing some ISI, the transmitted pulse q(t) can be made longer in the time domain, and hence its spectrum becomes narrower in the frequency domain. With a narrower spectrum, the distortion introduced by the channel is also reduced. This is one of the reasons why duobinary modulation is resilient to dispersion. One way of generating duobinary signals is to digitally filter the data bits with a two-tap finite impulse response (FIR) filter with equal weights and then low-pass filter the resulting signal to obtain the analog waveform with the property in (1.3), as shown in figure 1a below.
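To see the bandwidth advantage numerically, the short sketch below (an illustration added here, not part of the original article) evaluates the magnitude response of the two-tap, equal-weight FIR filter, |H(f)| = |cos(pi·f·T)|, for a 10 Gbps stream; its first null falls at R/2 = 5 GHz, half the R = 10 GHz null of a plain NRZ pulse.

```python
# Illustrative sketch: response of the two-tap FIR filter with equal 0.5 weights,
# H(f) = 0.5 * (1 + e^(-j*2*pi*f*T)), so |H(f)| = |cos(pi*f*T)| with T = 1/R.
import math

R = 10e9                                   # bit rate: 10 Gbps
for f in (0, 1.25e9, 2.5e9, 3.75e9, 5e9):  # sweep up to R/2
    mag = abs(math.cos(math.pi * f / R))
    print(f"{f / 1e9:5.2f} GHz   |H(f)| = {mag:.2f}")   # null reached at R/2 = 5 GHz
```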

When the input to the FIR filter is binary (let these binary values be −1 and 1), the output can take on one of three values: 0.5*(−1+(−1)) = −1; 0.5*(−1+1) = 0; or 0.5*(1+1) = 1. Hence the duobinary signal is a three-level signal. An important property of the three-value sequence at the output of the FIR filter is that it is a correlated signal, and hence all possible sequences of the three values cannot occur. For example, the output sequence of the FIR filter cannot contain a 1 followed by a −1, or a −1 followed by a 1; a 1 and a −1 will always have a 0 between them. Similarly, the combinations {1 0 1} and {−1 0 −1} can never occur at the output of the FIR filter; only a {−1 0 1} or a {1 0 −1} can occur. As will be explained later, this correlation is another reason why duobinary modulation is resilient to dispersion. On a final note, the FIR filter and the low-pass filter can be combined into a single analog filter for ease of implementation (as shown in figure 1b). The two-tap FIR filter is after all a simple low-pass digital filter. For a 10Gbps data stream, a good filter that combines the functions of the FIR filter and the analog filter is a 2.8GHz Bessel filter.

>> Differential Encoding

The ISI introduced at the transmitter can be unraveled at the receiver by decision feedback. At each sampling instance kT, the receiver receives the value xk = (dk + dk-1), where xk = x(kT) and q(t) satisfies (1.3). If the previous decision at the output of the receiver is the estimate of dk-1, this decision can be subtracted from the sampled value xk to obtain an estimate of dk.

However, a single error at the receiver will propagate forever, causing a catastrophic decoding error. To avoid this catastrophic error propagation, it is better to move this differential decoding to the transmitter and differentially precode the data. The data bits dk are differentially encoded as follows: pk = dk ⊕ pk-1 (where ⊕ denotes modulo-2 addition). The transmitted signal is now

x(t) = Σk pk q(t − kT)

with q(t) satisfying (1.3). Hence at the sampling instance kT, the receiver samples the value xk = pk + pk-1. Since dk = pk ⊕ pk-1, the middle level of xk corresponds to dk = 1 and the two outer levels correspond to dk = 0, so each bit can be decided without reference to previous decisions.

>> Implementing a High-Speed Differential Encoder

One circuit that can be used to implement a differential encoder is an exclusive-OR (XOR) gate as shown in figure 2 below. However, it can be difficult to implement the 1-bit delay in the feedback path at high data rates such as 10 Gbps.

Another circuit that does not involve delay in the feedback path is shown in figure 3 below. Here, a divide-by-2 counter has a clock gated with the data. When the data is high, the counter changes state, which is equivalent to adding a 1 modulo 2. When the data is low, the counter state remains the same, which is equivalent to adding a 0 modulo 2.
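As a software illustration of the two equivalent encoder circuits just described (the function names are hypothetical, not from the article), the sketch below shows that the gated divide-by-2 counter of figure 3 produces the same output as the XOR gate with a 1-bit feedback delay of figure 2.

```python
# Illustrative sketch: both differential encoder circuits toggle on a 1 and hold on a 0.
def xor_feedback_encoder(data):
    out, prev = [], 0
    for d in data:
        prev = d ^ prev          # XOR with the 1-bit-delayed output (Fig. 2)
        out.append(prev)
    return out

def gated_counter_encoder(data):
    out, state = [], 0
    for d in data:
        if d:                    # clock reaches the counter only when data is high
            state ^= 1           # divide-by-2 counter toggles (add 1 modulo 2)
        out.append(state)        # data low: state held (add 0 modulo 2)
    return out

data = [1, 0, 1, 1, 0, 1, 0, 0, 1]
assert xor_feedback_encoder(data) == gated_counter_encoder(data)
print(gated_counter_encoder(data))
```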

>> The Optical System

The final step is to modulate the light with the three-level duobinary signal, which implies a three-level optical signal. This result is achieved with a Mach-Zehnder (MZ) modulator biased at its null point. With a zero input, no light is transmitted, but the +1 and −1 inputs are transmitted as +E and −E electric fields. While this is a three-level signal in terms of the electric field, it is a two-level signal in terms of optical power. This choice significantly reduces the complexity of the receiver (the first optical duobinary systems used a mapping that requires three levels of optical power). One of the key components is a driver that can produce a voltage swing of 2·Vπ volts at high data rates such as 10 Gbps in order to drive the MZ modulator.

The combination of the duobinary encoder and the above mapping to electric fields helps further reduce the effects of dispersion in the fiber. As the pulses travel down the fiber, they spread out in time owing to dispersion. In an NRZ scheme, a data sequence of {1 0 1} is mapped onto the optical domain as {+E 0 +E}. In the encoded duobinary sequence, a {1 0 1} sequence cannot occur, but a {1 0 −1} does occur, which is mapped to {+E 0 −E} in the optical domain. The effect of dispersion in the two cases is shown in figure 5 below, which shows why the resulting dispersion penalty is smaller in the case of duobinary modulation.

The same receiver that is used for a NRZ modulation scheme can be used for duobinary modulation. The power detector squares the electric field to detect power, and hence the +E and −E outputs of the fiber get mapped to the same power level and are detected as logical 1s.

>> The Complete Duobinary Transmitter

The complete duobinary transmitter is shown in Figure 6 below. An inverter is added at the input to the differential encoder; without it, the data at the receiver is inverted. This inverter can be placed either at the receiver or the transmitter. Since signal paths are usually differential, the inverter is not an additional piece of hardware that is required but instead can be implemented by reversing the differential lines from the data source to the AND gate. The exact sequence of transformations that occur in the data path at each stage is given in an example in Figure 7.
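Figure 7's step-by-step example is not reproduced here, but the following Python sketch (an illustration under the assumptions stated in its comments, not the article's own example) walks the same chain end to end: differential precoding, the two-tap 0.5-weight filter, the MZ-at-null mapping to ±E or 0, square-law detection, and the inversion that recovers the original data.

```python
# Illustrative end-to-end duobinary sketch (assumes the precoder starts from 0).
import random

def duobinary_demo(n_bits=20, seed=1):
    random.seed(seed)
    d = [random.randint(0, 1) for _ in range(n_bits)]        # data bits

    # Differential precoding: p[k] = d[k] XOR p[k-1]
    p, prev = [], 0
    for bit in d:
        prev ^= bit
        p.append(prev)

    # Map precoded bits to -1/+1 and apply the two-tap FIR with 0.5 weights
    lv = [2 * b - 1 for b in p]
    y = [0.5 * (lv[k] + (lv[k - 1] if k > 0 else -1)) for k in range(n_bits)]

    # MZ modulator biased at null + square-law photodetector:
    # the 0 level carries no power, the +/-1 levels carry full power.
    detected = [1 if abs(v) > 0.5 else 0 for v in y]

    # Without the inverter of Figure 6 the detected bits are the logical
    # inverse of the data; inverting recovers d exactly.
    recovered = [1 - b for b in detected]
    assert recovered == d
    return d, y, detected

d, y, detected = duobinary_demo()
print("data    :", d)
print("3-level :", y)
print("detected:", detected)
```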

>> Lab Test Result

Measurements were carried out to determine the performance of duobinary modulation and compare it with that of NRZ modulation at 10 Gbps. The measurements were made at 1550nm over standard single mode fiber (SMF). This fiber has a dispersion of 17 ps/nm*km at this wavelength and an attenuation of 0.2 dB/km. The experimental setup is depicted in Figure 8. An optical amplifier was added after the MZ modulator to compensate for the loss in the MZ modulator and to launch sufficient power to transmit through 120km of fiber. The power launched into the fiber was kept at a level low enough to avoid any self-phase modulation effects. An interesting property of the 2^7−1 PRBS sequence used (generator polynomial 1 + x^6 + x^7) is that following differential encoding, it is the same PRBS sequence shifted by 6 bits. (The property that a differentially encoded PRBS sequence is a delayed version of the original sequence holds true whenever the generator polynomial is of the form 1 + x^(n−1) + x^n, and the resulting delay between the PRBS sequence and the differentially encoded sequence is n − 1 bits.) Since the BERT is capable of aligning the transmitted and received data streams, the delay is of no consequence, and the step of differentially encoding the data can be eliminated.
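The quoted PRBS property is easy to check in software. The sketch below (illustrative; the LFSR seed and helper names are arbitrary) generates the 2^7−1 pattern with generator polynomial 1 + x^6 + x^7, differentially encodes it, and confirms the 6-bit shift once the encoder's initial state has washed out.

```python
# Illustrative check: differential encoding of a PRBS7 stream reproduces the
# same stream delayed by 6 bits.
def prbs7(n):
    state = [1] * 7                      # any nonzero seed
    out = []
    for _ in range(n):
        new = state[5] ^ state[6]        # taps for 1 + x^6 + x^7
        out.append(state[6])
        state = [new] + state[:-1]
    return out

def diff_encode(bits):
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

seq = prbs7(300)
enc = diff_encode(seq)
# After the first 7 bits the encoded stream equals the PRBS shifted by 6 bits.
assert enc[7:] == seq[1:len(seq) - 6]
print("differentially encoded PRBS7 = PRBS7 delayed by 6 bits")
```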

>> What This Tells Us

Duobinary modulation is a much more resilient modulation scheme than NRZ modulation in the presence of chromatic dispersion. It is possible to extend the reach of a 10 Gbps system from 80km to 120km without the use of dispersion compensating fibers. While a NRZ system is dispersion limited at 120km, a duobinary system is power limited at this same 120km length of SMF fiber. The components are available today to implement such systems.

What is Optical Amplifier Noise Figure?
June 25, 2012

All amplifiers degrade the signal-to-noise ratio (SNR) of the amplified signal because of spontaneous emission that adds to the signal during its amplification.

The SNR degradation is quantified through a parameter Fn, called the amplifier noise figure in analogy with electronic amplifiers, and is defined as Fn = (SNR)in / (SNR)out (Equation 1.12)

where SNR refers to the electrical power generated when the signal is converted to electrical current by using a photodetector. In general, Fn would depend on several detector parameters which govern the shot noise and thermal noise associated with the detector. One can obtain a simple expression for Fn by considering an ideal detector whose performance is limited by shot noise only. Consider an amplifier with the amplification factor G so that the output power is related to the input power by Pout = G·Pin. The SNR of the input signal is given by (SNR)in = <I>²/σs² = Pin/(2hνΔf) (Equation 1.13)

where <I> = RPin is the average photocurrent, R = q/hν is the responsivity of an ideal photodetector with unit quantum efficiency, and σs² = 2q(RPin)Δf (Equation 1.14) is obtained for the shot noise by setting the dark current Id = 0. Here Δf is the detector bandwidth. To evaluate the SNR of the amplified signal, one should add the contribution of spontaneous emission to the receiver noise. The spectral density of spontaneous-emission-induced noise is nearly constant (white noise) and can be written as Ssp(ν) = (G − 1)·nsp·hν (Equation 1.15), where ν is the optical frequency. The parameter nsp is called the spontaneous-emission factor or population-inversion factor. Its value is 1 for amplifiers with complete population inversion (all atoms in the excited state), but becomes >1 when the population inversion is incomplete. For a two-level system, nsp = N2/(N2 − N1) (Equation 1.16), where N1 and N2 are the atomic populations for the ground and excited states, respectively. The effect of spontaneous emission is to add fluctuations to the amplified power, which are converted to current fluctuations during the photodetection process. It turns out that the dominant contribution to the receiver noise comes from the beating of spontaneous emission with the signal itself. This beating phenomenon is similar to heterodyne detection: spontaneously emitted radiation mixes coherently with the amplified signal at the photodetector and produces a heterodyne component of the photocurrent. The variance of the photocurrent can then be written as σ² = 2q(RGPin)Δf + 4(RGPin)(RSsp)Δf (Equation 1.17). The SNR of the amplified signal is then given by (SNR)out = (RGPin)²/σ² ≈ GPin/(4SspΔf) (Equation 1.18)

where the last relation was obtained by neglecting the term due to shot noise and is valid for G >> 1.

The amplifier noise figure can now be obtained by substituting Eqs. 1.13 and 1.18 into 1.12. If we also use Eq. 1.15 for Ssp, we obtain Fn = 2nsp(G − 1)/G ≈ 2nsp (Equation 1.19). This equation shows that the SNR of the amplified signal is degraded by a factor of 2 (or 3dB) even for an ideal amplifier for which nsp = 1. For most practical amplifiers, Fn exceeds 3 dB and can be as large as 6-8 dB. For its application in optical communication systems, an optical amplifier should have as low an Fn as possible. The effective noise figure of a chain of cascaded optical amplifiers can be calculated as

Fno,eff = Fno1 + Fno2/G1 + Fno3/(G1G2) + … + Fnok/(G1G2…Gk-1)

where Fno,eff is the effective noise figure of the amplifier chain, which contains a total of k optical amplifiers with gains G1 through Gk. The first amplifier in the chain is the most important one in terms of noise impact. That is the reason why multistage optical amplifiers should be designed so that the first stage has a lower noise figure. Accordingly, any decrease in the effective value of the amplifier's noise figure will bring a significant benefit in the overall system performance.
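As a worked illustration of the cascade formula above (with made-up gains and noise figures, not values from the text), the sketch below converts the per-stage dB values to linear units, accumulates the stage contributions, and shows that the first amplifier dominates the result.

```python
# Illustrative sketch: effective noise figure of a chain of k optical amplifiers,
# each later stage's contribution divided by the gain of the stages before it.
import math

def db(x):      return 10 * math.log10(x)
def lin(x_db):  return 10 ** (x_db / 10)

def effective_nf_db(nf_db_list, gain_db_list):
    f_eff, g_acc = 0.0, 1.0
    for nf_db, g_db in zip(nf_db_list, gain_db_list):
        f_eff += lin(nf_db) / g_acc      # later stages divided by preceding gain
        g_acc *= lin(g_db)
    return db(f_eff)

# Three in-line amplifiers, 20 dB gain each (assumed values); the first stage dominates:
print(effective_nf_db([4.5, 6.0, 6.0], [20, 20, 20]))   # ~4.56 dB
```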

The total power of the spontaneous emission noise is Psp = <|Esp|²> = 2·Ssp·Bop = 2·nsp·hν·(G − 1)·Bop, where Esp is the electric field of the spontaneous emission and Bop is the effective bandwidth of spontaneous emission, determined either by the optical amplifier bandwidth or by an optical filter. Please note that the factor of 2 in the equation accounts for the contributions of the two fundamental polarization modes that are present at the output of the optical amplifier.

What is Coherent Lightwave Communication System?
May 18, 2012

>> What is Intensity Modulation with Direct Detection (IM/DD)?

Current fiber optic lightwave communication systems are based on a simple digital transmission scheme in which an electrical bit stream is used to modulate the intensity of the optical carrier, and the optical signal is detected directly at a photodiode to convert it to the original digital signal in the electrical domain. Such a scheme is referred to as intensity modulation with direct detection (IM/DD).

>> What is Coherent Optical Communication?

In contrast to intensity modulation with direct detection (IM/DD), many alternative schemes, well known in the context of radio and microwave communication systems, transmit information by modulating the frequency or the phase of the optical carrier and detect the transmitted signal by using homodyne or heterodyne detection techniques. Since phase coherence of the optical carrier plays an important role in the implementation of such schemes, they are referred to as coherent communication techniques, and the fiber optic communication systems based on them are called coherent lightwave systems. Coherent communication techniques were explored during the 1980s, and many field trials established their feasibility by 1990.

>> Why Do We Need Coherent Lightwave Systems?

The motivation behind using coherent communication techniques is twofold. 1) Receiver sensitivity can be improved by up to 20dB compared with IM/DD systems. Such an improvement permits a much longer transmission distance (up to an additional 100km near 1.55um) for the same amount of transmitter power. 2) The use of coherent detection allows an efficient use of fiber bandwidth. Many channels can be transmitted simultaneously over the same fiber by using frequency-division multiplexing (FDM) with a channel spacing as small as 1-10 GHz.

>> Basic Concepts Behind Coherent Lightwave Systems

1. Local Oscillator

The basic idea behind coherent lightwave systems is to mix the received signal coherently with a continuous-wave (CW) optical field before it is incident on the photodetector (as shown in figure 1 below).

The continuous wave field is generated locally at the receiver using a narrow-linewidth laser, called the local oscillator (LO), a term borrowed from the radio and microwave literature. To see how mixing of the received signal with a local oscillator can improve the receiver performance, let us write the optical signal using complex notation as

Es(t) = As cos(ω0t + φs)    (Equation 1.1)

where ω0 is the carrier frequency, As is the amplitude, and φs is the phase. The optical field associated with the local oscillator is given by a similar expression,

ELO(t) = ALO cos(ωLOt + φLO)    (Equation 1.2)

where ALO, ωLO, and φLO represent the amplitude, frequency, and phase of the local oscillator, respectively. The scalar notation is used for both Es and ELO after assuming that the two fields are identically polarized (polarization-mismatch issues can be discussed later). Since a photodetector responds to the optical intensity, the optical power incident at the photodetector is given by P = K|Es + ELO|², where K is a constant of proportionality. Using Eqs. (1.1) and (1.2), we get

P = Ps + PLO + 2√(PsPLO) cos(ωIFt + φs − φLO)    (Equation 1.3)

where

Ps = KAs²/2,   PLO = KALO²/2,   ωIF = ω0 − ωLO    (Equation 1.4)

The frequency νIF = ωIF/2π is known as the intermediate frequency (IF). When ω0 ≠ ωLO, the optical signal is demodulated in two stages: its carrier frequency is first converted to an intermediate frequency νIF (typically 0.1-5GHz) before the signal is demodulated to the baseband. It is not always necessary to use an intermediate frequency. In fact, there are two different coherent detection techniques to choose from, depending on whether or not νIF equals zero. They are known as homodyne and heterodyne detection techniques.

2. Homodyne Detection

In this coherent-detection technique, the local-oscillator frequency ωLO is selected to coincide with the signal-carrier frequency ω0 so that ωIF = 0. From Equation 1.3, the photocurrent (I = RP, where R is the detector responsivity) is given by

I(t) = R(Ps + PLO) + 2R√(PsPLO) cos(φs − φLO)    (Equation 1.5)

Typically, PLO >> Ps, so that Ps + PLO ≈ PLO. The last term in Equation 1.5 contains the transmitted information and is used by the decision circuit. Consider the case in which the local-oscillator phase is locked to the signal phase so that φs = φLO. The homodyne signal is then given by

Ip(t) = 2R√(PsPLO)    (Equation 1.6)

Advantages of Homodyne Detection

The main advantage of homodyne detection is evident from Equation 1.6 if we note that the signal current in the direct-detection case is given by Idd(t) = RPs(t). Denoting the average optical power by <Ps>, the average electrical power is increased by a factor of 4PLO/<Ps>

with the use of homodyne detection.

Since PLO can be made much larger than <Ps>, the power enhancement can exceed 20dB. Although shot noise is also enhanced, it can be shown that homodyne detection improves the signal-to-noise ratio (SNR) by a large factor. Another advantage of coherent detection is evident from Equation 1.5. Because the last term in this equation contains the signal phase explicitly, it is possible to transmit information by modulating the phase or frequency of the optical carrier. Direct detection does not allow phase or frequency modulation, as all information about the signal phase is lost.

Disadvantage of Homodyne Detection

A disadvantage of homodyne detection also results from its phase sensitivity. Since the last term in Equation 1.5 contains the local-oscillator phase φLO explicitly, clearly φLO should be controlled. Ideally, φs and φLO should stay constant except for the intentional modulation of φs. In practice, both φs and φLO fluctuate with time in a random manner. However, their difference φs − φLO can be forced to remain nearly constant through an optical phase-locked loop. The implementation of such a loop is not simple and makes the design of optical homodyne receivers quite complicated. In addition, matching of the transmitter and local-oscillator frequencies puts stringent

requirements on the two optical sources. These problems can be overcome by the use of heterodyne detection, as discussed next.

3. Heterodyne Detection

In the case of heterodyne detection, the local-oscillator frequency ωLO is chosen to differ from the signal-carrier frequency ω0 such that the intermediate frequency νIF is in the microwave region (νIF ~ 1 GHz). Using Equation 1.3 together with I = RP, the photocurrent is now given by

I(t) = R(Ps + PLO) + 2R√(PsPLO) cos(ωIFt + φs − φLO)    (Equation 1.7)

Since PLO >> Ps in practice, the direct-current (dc) term is nearly constant and can be removed easily using bandpass filters. The heterodyne signal is then given by the alternating-current (ac) term in equation 1.7 or by

Iac(t) = 2R√(PsPLO) cos(ωIFt + φs − φLO)    (Equation 1.8)

Similar to the case of homodyne detection, information can be transmitted through amplitude, phase, or frequency modulation of the optical carrier. More importantly, the local oscillator still amplifies the received signal by a large factor, thereby improving the SNR. However, the SNR improvement is lower by a factor of 2 (or by 3dB) compared with the homodyne case. This reduction is referred to as the heterodyne-detection penalty. The origin of the 3dB penalty can be seen by considering the signal power (proportional to the square of the current). Because of the ac nature of Iac, the average signal power is reduced by a factor of 2 when Iac² is averaged over a full cycle at the intermediate frequency (recall that the average of cos² over a full cycle is 1/2).
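The factor-of-4 homodyne enhancement and the 3dB heterodyne penalty can be checked numerically. The short sketch below (illustrative only; R, Ps, and PLO are arbitrary example values, not from the text) averages the squared signal currents from Equations 1.5-1.8 over many IF cycles.

```python
# Illustrative numeric check of the homodyne gain and the 3 dB heterodyne penalty.
import numpy as np

R, Ps, PLO = 1.0, 1e-6, 1e-3          # responsivity and optical powers (arbitrary units)
f_if = 1e9                            # heterodyne intermediate frequency
t = np.arange(0, 1e-6, 1e-12)         # 1000 IF cycles

i_dd = R * Ps                                                       # direct detection
i_hom = 2 * R * np.sqrt(Ps * PLO)                                   # homodyne (phase locked)
i_het = 2 * R * np.sqrt(Ps * PLO) * np.cos(2 * np.pi * f_if * t)    # heterodyne ac term

p_dd, p_hom, p_het = i_dd ** 2, i_hom ** 2, np.mean(i_het ** 2)

print(p_hom / p_dd, 4 * PLO / Ps)     # homodyne electrical-power gain = 4*PLO/Ps
print(p_hom / p_het)                  # heterodyne penalty: factor of 2 (3 dB)
```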

Advantages of Heterodyne Detection

The advantage gained at the expense of the 3dB penalty is that the receiver design is considerably simplified because an optical phase-locked loop is no longer needed. Fluctuations in both φs and φLO still need to be controlled using narrow-linewidth semiconductor lasers for both optical sources. However, the linewidth requirements are quite moderate when an asynchronous demodulation scheme is used. This feature makes the heterodyne-detection scheme quite suitable for practical implementation in coherent lightwave systems.

4. Signal-to-Noise Ratio

The advantage of coherent detection for lightwave systems can be made more quantitative by considering the SNR of the receiver current. The receiver current fluctuates because of shot noise and thermal noise. The variance σ² of the current fluctuations is obtained by adding the two contributions so that

σ² = σs² + σT²    (Equation 1.9)

where

σs² = 2q(I + Id)Δf    (Equation 1.10)

The current I in equation 1.10 is the total photocurrent generated at the detector and is given by equation 1.5 or 1.7, depending on whether homodyne or heterodyne detection is employed. In practice, PLO >> Ps, and I in equation 1.10 can be replaced by the dominant term RPLO for both cases.

The SNR is obtained by dividing the average signal power by the average noise power. In the heterodyne case, it is given by

SNR = <Iac²>/σ² = 2R²<Ps>PLO / [2q(RPLO + Id)Δf + σT²]    (Equation 1.11)

In the homodyne case, the SNR is larger by a factor of 2 if we assume that φs = φLO in Equation 1.5. The main advantage of coherent detection can be seen from Equation 1.11. Since the local-oscillator power PLO can be controlled at the receiver, it can be made large enough that the receiver noise is dominated by shot noise. More specifically, this occurs when

PLO >> σT²/(2qRΔf)    (Equation 1.12)

Under the same conditions, the dark-current contribution to the shot noise is negligible (Id << RPLO). The SNR is then given by

SNR ≈ R<Ps>/(qΔf) = η<Ps>/(hνΔf)    (Equation 1.13)

where R = ηq/hν. The use of coherent detection allows one to achieve the shot-noise limit even for p-i-n receivers whose performance is generally limited by thermal noise. Moreover, in contrast with the case of avalanche photodiode (APD) receivers, this limit is realized without adding any excess shot noise. It is useful to express the SNR in terms of the number of photons, Np, received within a single bit. At the bit rate B, the signal power is related to Np as <Ps> = Np·hν·B. Typically, Δf ≈ B/2. By using these values of <Ps> and Δf in equation 1.13, the SNR is given by the simple expression

SNR = 2ηNp    (Equation 1.14)

In the case of homodyne detection, the SNR is larger by a factor of 2 and is given by SNR = 4ηNp. There are further discussions of the dependence of the BER on SNR, showing how receiver sensitivity is improved by the use of coherent detection.

>> Modulation Formats

As we said earlier, an important advantage of using coherent detection techniques is that both the amplitude and the phase of the received optical signal can be detected and measured. This feature opens up the possibility of sending information by modulating either the amplitude, the phase, or the frequency of an optical carrier. In the case of digital communication systems, the three possibilities give rise to three modulation formats known as amplitude-shift keying (ASK), phase-shift keying (PSK), and frequency-shift keying (FSK). Figure 2 below shows schematically the three modulation formats for a specific bit pattern.

1. ASK Format

The electric field associated with an optical signal can be written as

E(t) = As cos(ω0t + φs)    (Equation 2.1)
In the case of the ASK format, the amplitude As is modulated while keeping ω0 and φs constant. For binary digital modulation, As takes one of two fixed values during each bit period, depending on whether a 1 or 0 bit is being transmitted. In most practical situations, As is set to zero during transmission of 0 bits. The ASK format is then called on-off keying (OOK) and is identical with the modulation scheme commonly used for noncoherent (IM/DD) digital lightwave systems.

The implementation of ASK for coherent systems differs from the case of direct-detection systems in one important aspect. Whereas the optical bit stream for direct-detection systems can be generated by modulating a light-emitting diode (LED) or a semiconductor laser directly, external modulation is necessary for coherent communication systems. The reason behind this necessity is related to phase changes that invariably occur when the amplitude As (or the power) is changed by modulating the current applied to a semiconductor laser. For IM/DD systems, such unintentional phase changes are not seen by the detector (as the detector responds only to the optical power) and are not of major concern except for the chirp-induced power penalty. The situation is entirely different in the case of coherent systems, where the detector response depends on the phase of the received signal. The implementation of the ASK format for coherent systems requires the phase φs to remain nearly constant. This is achieved by operating the semiconductor laser continuously at a constant current and modulating its output by using an external modulator. Since all external modulators have some insertion loss, a power penalty is incurred whenever an external modulator is used; it can be reduced to below 1dB for monolithically integrated modulators.

A commonly used external modulator makes use of LiNbO3 waveguides in a Mach-Zehnder (MZ) configuration. The performance of external modulators is quantified through the on-off ratio (also called the extinction ratio) and the modulation bandwidth. LiNbO3 modulators provide an on-off ratio in excess of 20 and can be modulated at speeds up to 75GHz. The driving voltage is typically 5V but can be reduced to 3V with a suitable design. Other materials can also be used to make external modulators. For example, a polymeric electro-optic MZ modulator required only 1.8V for shifting the phase of a 1.55um signal by π in one of the arms of the MZ interferometer. Electroabsorption modulators, made using semiconductors, are often preferred because they do not require the use of an interferometer and can be integrated monolithically with the laser. Optical transmitters with an integrated electroabsorption modulator capable of modulating at 10 Gb/s have been available since 1999 and are used routinely for IM/DD lightwave systems. Such integrated modulators exhibited a bandwidth of more than 50GHz and had the potential of operating at bit rates of up to 100 Gb/s. They are likely to be employed for coherent systems as well.

2. PSK Format

In the case of the PSK format, the optical bit stream is generated by modulating the phase φs in Equation 2.1 while the amplitude As and the frequency ω0 of the optical carrier are kept constant. For binary PSK, the phase φs takes two values, commonly chosen to be 0 and π. Figure 2 above shows the binary PSK format schematically for a specific bit pattern. An interesting aspect of the PSK format is that the optical intensity remains constant during all bits and the signal appears to have a CW form. Coherent detection is a necessity for PSK, as all information would be lost if the optical signal were detected directly without mixing it with the output of a local oscillator. The implementation of PSK requires an external modulator capable of changing the optical phase in response to an applied voltage. The physical mechanism used by such modulators is called electrorefraction. Any electro-optic crystal with proper orientation can be used for phase modulation.
A LiNbO3 crystal is commonly used in practice. The design of LiNbO3-based phase modulators is much simpler than that of an amplitude modulator, as a Mach-Zehnder interferometer is no longer needed and a single waveguide can be used. The phase shift δφ occurring while the CW signal passes through the waveguide is related to the index change Δn by the simple relation

δφ = (2π/λ)(Δn)lm    (Equation 2.2)
where lm is the length over which the index change is induced by the applied voltage. The index change Δn is proportional to the applied voltage, which is chosen such that δφ = π. Thus, a phase shift of π can be imposed on the optical carrier by applying the required voltage for the duration of each 1 bit. Semiconductors can also be used to make phase modulators, especially if a multi-quantum-well (MQW) structure is used. The electrorefraction effect originating from the quantum-confinement Stark effect is

enhanced for a quantum-well design. Such MQW phase modulators have been developed and are able to operate at bit rates of up to 40 Gb/s in the wavelength range 1.3-1.6um. Already in 1992, MQW devices had a modulation bandwidth of 20 GHz and required only 3.85V for introducing a π phase shift when operated near 1.55um. The operating voltage was reduced to 2.8V in a phase modulator based on the electroabsorption effect in an MQW waveguide. A spot-size converter is sometimes integrated with the phase modulator to reduce coupling losses. The best performance is achieved when a semiconductor phase modulator is monolithically integrated within the transmitter. Such transmitters are quite useful for coherent lightwave systems.

The use of the PSK format requires that the phase of the optical carrier remain stable so that phase information can be extracted at the receiver without ambiguity. This requirement puts a stringent condition on the tolerable linewidths of the transmitter laser and the local oscillator. The linewidth requirement can be somewhat relaxed by using a variant of the PSK format, known as differential phase-shift keying (DPSK). In the case of DPSK, information is coded by using the phase difference between two neighboring bits. For instance, if φk represents the phase of the kth bit, the phase difference Δφ = φk − φk-1 is changed by π or 0, depending on whether the kth bit is a 1 or 0 bit. The advantage of DPSK is that the transmitted signal can be demodulated successfully as long as the carrier phase remains relatively stable over a duration of two bits.

3. FSK Format

In the case of FSK modulation, information is coded on the optical carrier by shifting the carrier frequency ω0 itself. For a binary digital signal, ω0 takes two values, ω0 + Δω and ω0 − Δω, depending on whether a 1 or 0 bit is being transmitted. The shift Δf = Δω/2π is called the frequency deviation. The quantity 2Δf is sometimes called the tone spacing, as it represents the frequency spacing between 1 and 0 bits. The optical field for the FSK format can be written as

E(t) = As cos[(ω0 ± Δω)t + φs]    (Equation 2.3)
where the + and − signs correspond to 1 and 0 bits. By noting that the argument of the cosine can be written as ω0t + (φs ± Δωt), the FSK format can also be viewed as a kind of PSK modulation such that the carrier phase increases or decreases linearly over the bit duration. The choice of the frequency deviation Δf depends on the available bandwidth. The total bandwidth of an FSK signal is given approximately by 2Δf + 2B, where B is the bit rate. When Δf >> B, the bandwidth approaches 2Δf and is nearly independent of the bit rate. This case is often referred to as wide-deviation or wideband FSK. In the opposite case of Δf << B, called narrow-deviation or narrowband FSK, the bandwidth approaches 2B. The ratio βFM = Δf/B, called the frequency modulation (FM) index, serves to distinguish the two cases, depending on whether βFM >> 1 or βFM << 1.

The implementation of FSK requires modulators capable of shifting the frequency of the incident optical signal. Electro-optic materials such as LiNbO3 normally produce a phase shift proportional to the applied voltage. They can be used for FSK by applying a triangular voltage pulse (sawtooth-like), since a linear phase change corresponds to a frequency shift. An alternative technique makes use of Bragg scattering from acoustic waves. Such modulators are called acousto-optic modulators. Their use is somewhat cumbersome in the bulk form. However, they can be fabricated in compact form using surface acoustic waves on a slab waveguide. The device structure is similar to that of an acousto-optic filter used for wavelength-division multiplexing (WDM) applications. The maximum frequency shift is typically limited to below 1 GHz for such modulators.

The simplest method of producing an FSK signal makes use of the direct-modulation capability of semiconductor lasers. As discussed before, a change in the operating current of a semiconductor laser leads to changes in both the amplitude and frequency of the emitted light. In the case of ASK, the frequency shift or chirp of the emitted optical pulse is undesirable. But the same frequency shift can be used to advantage for

the purpose of FSK. Typical values of the frequency shift are ~1GHz/mA. Therefore, only a small change in the operating current (~1mA) is required for producing the FSK signal. Such current changes are small enough that the amplitude does not change much from bit to bit. For the purpose of FSK, the FM response of a distributed feedback (DFB) laser should be flat over a bandwidth equal to the bit rate. As seen in figure 3 below, most DFB lasers exhibit a dip in their FM response at a frequency near 1 MHz. The reason is that two different physical phenomena contribute to the frequency shift when the device current is changed. Changes in the refractive index, responsible for the frequency shift, can occur either because of a temperature shift or because of a change in the carrier density. The thermal effects contribute only up to modulation frequencies of about 1MHz because of their slow response. The FM response decreases in the frequency range 0.1-10MHz because the thermal contribution and the carrier-density contribution occur with opposite phases.

Several techniques can be used to make the FM response more uniform. An equalization circuit improves uniformity but also reduces the modulation efficiency. Another technique makes use of transmission codes which reduce the low-frequency components of the data where distortion is highest. Multi-section DFB lasers have been developed to realize a uniform FM response. Figure 3 shows the FM response of a two-section DFB laser. It is not only uniform up to about 1 GHz, but its modulation efficiency is also high. Even better performance is realized by using three-section DBR lasers. Flat FM response from 100 kHz to 15 GHz was demonstrated in 1990 in such lasers. By 1995, the use of gain-coupled, phase-shifted DFB lasers extended the range of uniform FM response from 10 kHz to 20 GHz. When FSK is performed through direct modulation, the carrier phase varies continuously from bit to bit. This case is often referred to as continuous-phase FSK (CPFSK). When the tone spacing 2Δf is chosen to be B/2 (βFM = 1/2), CPFSK is also called minimum-shift keying (MSK).

>> Demodulation Schemes

As discussed above, either homodyne or heterodyne detection can be used to convert the received optical signal into an electrical form. In the case of homodyne detection, the optical signal is demodulated directly to the baseband. Although simple in concept, homodyne detection is difficult to implement in practice, as it requires a local oscillator whose frequency matches the carrier frequency exactly and whose phase is locked to the incoming signal. Such a demodulation scheme is called synchronous and is essential for homodyne detection. Although optical phase-locked loops have been developed for this purpose, their use is complicated in practice.

Heterodyne detection simplifies the receiver design, as neither optical phase locking nor frequency matching of the local oscillator is required. However, the electrical signal oscillates rapidly at microwave frequencies and must be demodulated from the IF band to the baseband using techniques similar to those developed for microwave communication systems. Demodulation can be carried out either synchronously or asynchronously. Asynchronous demodulation is also called incoherent in the radio communication literature. In the optical communication literature, the term coherent detection is used in a wider sense. A lightwave system is called coherent as long as it uses a local oscillator, irrespective of the demodulation technique used to convert the IF signal to baseband frequencies. We will focus on the synchronous and asynchronous demodulation schemes for heterodyne systems.

1. Heterodyne Synchronous Demodulation

Figure 4 shows a synchronous heterodyne receiver schematically. The current generated at the photodiode is passed through a bandpass filter (BPF) centered at the intermediate frequency ωIF. The filtered current in the absence of noise can be written as

If(t) = Ip cos(ωIFt − φ)    (Equation 3.1)

where Ip = 2R√(PsPLO) and φ = φLO − φs is the phase difference between the local oscillator and the signal. The noise is also filtered by the BPF. Using the in-phase and out-of-phase quadrature components of the filtered Gaussian noise, the receiver noise is included through

If(t) = (Ip cos φ + ic) cos(ωIFt) + (Ip sin φ + is) sin(ωIFt)    (Equation 3.2)
where ic and is are Gaussian random variables of zero mean with variance σ² given by Equation 1.9. For synchronous demodulation, If(t) is multiplied by cos(ωIFt) and filtered by a low-pass filter. The resulting baseband signal is

Id = <If cos(ωIFt)> = (1/2)(Ip cos φ + ic)    (Equation 3.3)
where angle brackets denote low-pass filtering used for rejecting the ac components oscillating at 2ωIF. Equation (3.3) shows that only the in-phase noise component affects the performance of synchronous heterodyne receivers. Synchronous demodulation requires recovery of the microwave carrier at the intermediate frequency ωIF. Several electronic schemes can be used for this purpose, all requiring a kind of electrical phase-locked loop. Two commonly used loops are the squaring loop and the Costas loop. A squaring loop uses a square-law device to obtain a signal of the form cos²(ωIFt) that has a frequency component at 2ωIF. This component can be used to generate a microwave signal at ωIF.
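A minimal numeric sketch of the synchronous demodulation step is shown below (the IF, amplitude, and phase values are assumed for illustration, not taken from the text): multiplying the filtered IF current by cos(ωIFt) and low-pass filtering (here, simply averaging over many cycles) recovers (1/2)Ip cos φ, as in Equation 3.3.

```python
# Illustrative sketch of synchronous heterodyne demodulation (noise-free case).
import numpy as np

f_if = 1e9                       # intermediate frequency, 1 GHz (assumed)
fs = 50e9                        # sampling rate
t = np.arange(0, 2e-6, 1 / fs)   # 2 us of signal = 2000 IF cycles
Ip, phi = 1.0, 0.3               # beat amplitude and phase offset (arbitrary)

i_f = Ip * np.cos(2 * np.pi * f_if * t - phi)            # filtered current, Eq. 3.1
baseband = np.mean(i_f * np.cos(2 * np.pi * f_if * t))   # mixer + low-pass filter

print(baseband, 0.5 * Ip * np.cos(phi))                  # the two agree closely
```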

2. Heterodyne Asynchronous Demodulation

Figure 5 below shows an asynchronous heterodyne receiver schematically. It does not require recovery of the microwave carrier at the intermediate frequency, resulting in a much simpler receiver design. The filtered signal If(t) is converted to the baseband by using an envelope detector, followed by a low-pass filter.

The signal received by the decision circuit is just Id = |If |, where If is given by Eq. (3.2). It can be written as

Id = |If| = [(Ip cos φ + ic)² + (Ip sin φ + is)²]^(1/2)    (Equation 3.4)
The main difference is that both the in-phase and out-of-phase quadrature components of the receiver noise affect the signal. The SNR is thus degraded compared with the case of synchronous demodulation. As discussed, sensitivity degradation resulting from the reduced SNR is quite small (about 0.5 dB). As the phase-stability requirements are quite modest in the case of asynchronous demodulation, this scheme is commonly used for coherent lightwave systems. The asynchronous heterodyne receiver shown in Fig. 5 requires modifications when the FSK and PSK modulation formats are used.

Figure 6 shows two demodulation schemes. The FSK dual-filter receiver uses two separate branches to process the 1 and 0 bits, whose carrier frequencies, and hence the intermediate frequencies, are different. The scheme can be used whenever the tone spacing is much larger than the bit rate, so that the spectra of 1 and 0 bits have negligible overlap (wide-deviation FSK). The two BPFs have their center frequencies separated exactly by the tone spacing so that each BPF passes either 1 or 0 bits only. The FSK dual-filter receiver can be thought of as two ASK single-filter receivers in parallel whose outputs are combined before reaching the decision circuit. A single-filter receiver can be used for FSK demodulation if its bandwidth is chosen to be wide enough to pass the entire bit stream. The signal is then processed by a frequency discriminator to identify 1 and 0 bits. This scheme works well only for narrow-deviation FSK, for which the tone spacing is less than or comparable to the bit rate (βFM ~ 1).

Asynchronous demodulation cannot be used in the case of the PSK format because the phases of the transmitter laser and the local oscillator are not locked and can drift with time. However, the use of the DPSK format permits asynchronous demodulation by using the delay scheme shown in Fig. 6(b). The idea is to multiply the received bit stream by a replica of it that has been delayed by one bit period. The resulting signal has a component of the form cos(φk − φk-1), where φk is the phase of the kth bit, which can be used to recover the bit pattern since information is encoded in the phase difference φk − φk-1. Such a scheme requires phase stability only over a few bits and can be implemented by using DFB semiconductor lasers. The delay-demodulation scheme can also be used for CPFSK. The amount of delay in that case depends on the tone spacing and is chosen such that the phase is shifted by π for the delayed signal.

Next Generation 40Gig and 100Gig Ethernet Transceiver Technology Review
March 1, 2012

>> Introduction to the First Generation 40/100 Gigabit Ethernet CFP Modules

With the completion of 40/100 Gigabit Ethernet (GbE) optical interface standards (IEEE 802.3ba-2010) and pluggable optical transceiver module specifications (CFP-MSA Rev 1.4), and with the production shipment of first-generation 40GbE/100GbE CFP products underway, optical module vendors are focusing on developing technologies and proving design-ins for their next-generation 40/100GbE pluggable optical transceivers. The key objectives include significant reductions in module power dissipation and size, which are critical to increasing system port density and reducing overall optical port cost for system vendors and their customers. The 100GbE CFP module provides the highest faceplate density (in terms of Gbps per faceplate aperture/module pitch area) for MSA-specified pluggable optical modules to date. However, from a systems point of view, the CFP port density most likely will be limited by thermal constraints on power dissipation, which may typically be greater than 25 W for first-generation 100GbE CFP modules. In this tutorial, we will discuss some of the key challenges facing optical module vendors considering these design objectives and outline some of the more promising technical approaches to tackle and overcome these challenges.

>> First Generation 40/100GbE Pluggable Optical Transceivers

Let's first review the technologies and designs of choice in the first generation of 40GbE/100GbE pluggable optical transceivers.
The 40GbE and 100GbE optical interfaces specified in IEEE 802.3ba-2010 are summarized in table 1 below.

From a market application perspective, the 40GbE and 100GbE LR4 singlemode fiber optical interfaces are high priority and present the greatest technological challenges when it comes to significant reductions in transceiver power dissipation and form factor size.

1. CFP 40GBase-LR4 Design

The first-generation 40GBase-LR4 optical transceiver is based on a 4x10G architecture that comprises the following discrete components:
- 10G CWDM 1310nm uncooled directly modulated distributed feedback (DM-DFB) transmit optical subassemblies (TOSAs)
- 10G PIN photodiode (PD) with integrated transimpedance amplifier (TIA) receive optical subassemblies (ROSAs)
- Four-channel optical multiplexing/de-multiplexing filters
- Quad dual-channel clock/data recovery (CDR) IC

The quad CDR IC provides the XLAUI 4x10G electrical interface defined in the IEEE Std 802.3ba-2010 specification. These components are packaged into the pluggable CFP module; the module's mechanical, electrical, and management interface specifications are given in the recently completed CFP MSA rev. 1.4, as shown in figure 1 below.

The CFP module-host system management interface is based on the IEEE Std MDIO/MDC interface and includes several new features, such as programmable controls and alarms, module state transitions, and error rate calculations. The first-generation design leverages existing 10G optoelectronic device technology and uses innovations in packaging to realize high-performance, low-cost modules for high-volume production. The CFP 40GBase-LR4 module power dissipation is typically in the range of 6 W, which fits well within the CFP module's 32-W power maximum. Thus, there is considerable interest in reducing the 40G-LR4 module form factor in next-generation designs for increased 40GbE port density. This will be addressed later in this tutorial.

2. CFP 100GBase-LR4/ER4 Design

The first-generation 100GBase-LR4/ER4 optical transceiver architecture is similar to that of the 40GBase-LR4, but with the speed of the active optoelectronic components increased to 28Gbps for realizing a 4x28G optical interface. Additionally, the CAUI electrical interface defined in IEEE 802.3ba-2010 is widened from 4x10G lanes to 10x10G lanes. A 10:4/4:10 gearbox serializer/deserializer IC is used to implement the electrical interface between the 10-lane host data path and the four-lane optical data path. The optical interface defined in IEEE 802.3ba-2010 uses a four-wavelength LAN-WDM 800-GHz wavelength grid in the 1310-nm band and optical multiplexing/de-multiplexing on single mode fiber. The transmitter optical specifications for LR4 and ER4 are based on cooled electro-absorption modulation with integrated DFB (EA-DFB) laser technology, but were written to allow eventual implementation with directly modulated DFB lasers for smaller size, lower power consumption, and lower cost TOSAs. The receiver optical specifications for LR4 and ER4 are based upon PIN-PD detector technology with integrated TIA. The receiver specification also includes optical amplification, such as from a semiconductor optical amplifier, to compensate for optical fiber attenuation loss in the ER4 40-km application. These components are packaged into the CFP pluggable module (previously shown in figure 1) with noncoaxial, 28-Gbps electrical connections between the discrete component TOSAs, ROSAs, and gearbox IC, as shown in figure 2 below.

The first-generation 100GBase-LR4 module power dissipation is typically in the range of 24 W, which poses significant thermal management challenges for system designers, particularly as they seek to increase 100GbE optical port density. Thus, there is strong motivation to significantly reduce the 100GBase-LR4 optical transceiver module power dissipation in the next-generation design.

>> Next-Generation 40GbE/100GbE Optical Module Design Targets

For the next-generation 40GbE/100GbE optical transceiver modules, system designers want significant reductions in power dissipation and form factor size. These objectives are particularly critical as system houses work to scale their core switching and routing input/output capacities and reduce constraints on port densities due to thermal management limits. For 40GBase-LR4, the priority target is module form factor reduction. In terms of faceplate density, the current CFP form factor for 40GbE ports is 2.5X less efficient compared to the 100GbE CFP.

One approach the CFP design enables is to double or possibly triple the number of 40GbE ports within a single CFP module. While this approach increases port density, it suffers from reduced port provisioning modularity. A more feasible approach is to make use of the existing QSFP+ form factor (SFF-8436) and the non-retimed XLPPI electrical interface specified in IEEE 802.3ba-2010. This approach increases faceplate density by more than 60 percent over the CFP while retaining port modularity. To make this switch, however, optical module vendors need to not only reduce the physical size of their optical components, but they must also reduce component power dissipation by over 50 percent so as to fit into the 3.5-W maximum power envelope of the QSFP+. For 100GBase-LR4/ER4, the priority target is power dissipation reduction. System makers are looking for power consumption reduction on the order of 50 percent or more. This will ease system thermal management and enable 100GbE port count scaling in the short term. For the longer term, however, system designers seek 100GbE transceiver roadmaps with significant reductions in both power dissipation and form factor size.

>> Next-Generation 40GbE/100GbE Technologies

Several promising technological advances in progress could be used by optical module designers to achieve their next-generation 40GbE/100GbE module design targets. These include:
- Laser array/planar lightwave circuitry (PLC) hybrid integrated TOSA
- PD/TIA array/PLC hybrid integrated ROSA
- Low-power BiCMOS ICs and CMOS gearbox IC
- Narrow 4x28G-VSR electrical interface, electrical connector, and 28G CDR IC
- CFP2 electro-mechanical module development

Hybrid integration of DFB discrete or array lasers with optical multiplexing PLCs has been investigated intensively across the industry. Some of the key challenges to using this technology in TOSA development include laser/PLC device alignment and optical coupling loss minimization. Use of a laser array is desired, as it minimizes the number of process steps in active alignment with the PLC device. However, a DFB laser array is particularly challenging to realize with sufficient gain across all channels for a wide temperature range. Nevertheless, four-channel devices appear to be feasible for realizing an optical hybrid integrated TOSA. Similarly, hybrid integration of PIN-PD and TIA arrays with an optical de-multiplexing PLC has also been investigated. Early progress with this type of ROSA was made in 10GBase-LX4 module designs, so it appears feasible to scale ROSA rates up to 4x10G and 4x28G. Challenges still remain in PD/PLC device passive alignment, control of attenuation and polarization mode dispersion losses, and temperature stability for realizing an optical hybrid integrated ROSA.

Significant reduction of 40GbE/100GbE optical transceiver power dissipation will come from improvements in the component ICs. For next-generation 40GBase-LR4, use of the non-retimed XLPPI electrical interface enables elimination of the quad CDR device, which results in more than 30 percent power consumption savings. Further process and design improvements in laser drivers and TIAs will assist with an overall module power consumption reduction of over 50 percent. For next-generation 100GBase-LR4, EA-DFB driver IC process improvement and CMOS gearbox ICs will be a major factor in module power consumption reduction. With these factors, plus improvements in TIA and module DC-DC power conversion, a 50 percent overall reduction in CFP module power dissipation looks feasible in the near term.
For the longer term, it is desirable to narrow the module electrical interface to four parallel lanes, each operating at 28 Gbps. This would enable replacement of the gearbox IC with a quad 28G CDR IC, thus reducing electrical interface IC complexity and power consumption. Work is underway in the Optical Internetworking Forum (OIF) to specify a host chip to optical module electrical interface, called CEI-28G-VSR. Electrical connector suppliers, physical layer IC suppliers, host system vendors, and module vendors are working together in the OIF to confirm application requirements and specify the 28G channel module and electrical interface characteristics.

Even with all of these developments, it still appears that 100GBase-LR4 power dissipation will be on the order of 10 W, which is still too high to fit into the existing QSFP+ form factor power envelope. To reduce module form factor size, consideration of a next-generation CFP module, CFP2, is underway that would be compactly sized for sub-10-W power dissipation and support a narrow 4x28G electrical interface. With the above-noted technology advances, the next-generation 40GBase-LR4 QSFP+ conceptually could be realized as shown in figure 3 below.

A future 100GBase-LR4 CFP2 would look similar architecturally, with operation at 4x28G and inclusion of a quad 28G CDR electrical interface. The CFP2 module dimension specifications are an open point of study at this time, but past design experience suggests the CFP2 may look mechanically similar to the existing X2 form factor.

>> Some More Thoughts

First-generation 40GbE/100GbE CFP optical transceivers are now completing customer qualification and shipping in production. Key design targets for next-generation optical transceivers are significant reductions in module power dissipation and form factor size. Critical technologies for tackling these design targets include 4x10G and 4x28G hybrid integrated TOSAs/ROSAs and process improvements in 28G gearbox and CDR ICs. There also may be consideration of uncooled CWDM 28G laser technology for realizing 100GbE optical transceivers in a QSFP+-like form factor for short singlemode fiber (<2km) applications.

What is Chromatic Dispersion? (material dispersion and waveguide dispersion)
December 12, 2011

Chromatic dispersion is the term given to the phenomenon by which different spectral components of a pulse travel at different velocities. To understand the effect of chromatic dispersion, we must understand the significance of the propagation constant β. We will restrict our discussion to single mode fiber since in the case of multimode fiber, the effects of intermodal dispersion usually overshadow those of chromatic

dispersion. So the propagation constant β in our discussions will be that associated with the fundamental mode of the fiber. Chromatic dispersion arises for two reasons. The first reason is that the refractive index of silica, the material used to make optical fiber, is frequency dependent. Thus different frequency components travel at different speeds in silica. This component of chromatic dispersion is called material dispersion. Although material dispersion is the principal component of chromatic dispersion for most fibers, there is a second component, called waveguide dispersion. To understand the physical origin of waveguide dispersion, we need to know that the light energy of a mode propagates partly in the core and partly in the cladding, and that the effective index of a mode lies between the refractive indices of the cladding and the core. The actual value of the effective index between these two limits depends on the proportion of power that is contained in the cladding and the core. If most of the power is contained in the core, the effective index is closer to the core refractive index; if most of it propagates in the cladding, the effective index is closer to the cladding refractive index. The power distribution of a mode between the core and cladding of the fiber is itself a function of the wavelength. More accurately, the longer the wavelength, the more power in the cladding. Thus, even in the absence of material dispersion (so that the refractive indices of the core and cladding are independent of wavelength), if the wavelength changes, this power distribution changes, causing the effective index or propagation constant of the mode to change. This is the physical explanation for waveguide dispersion.

What is Effective Length and Effective Area? (concepts for understanding nonlinear effects in optical fibers)
December 12, 2011

>> Effective Length Le

The nonlinear interactions in optical fibers depend on the transmission length and the cross-sectional area of the fiber. The longer the link length, the more the interaction and the worse the effect of the nonlinearity. However, as the signal propagates along the link, its power decreases because of fiber attenuation. Thus, most of the nonlinear effects occur early in the fiber span and diminish as the signal propagates. Modeling this effect can be quite complicated, but in practice, a simple model that assumes that the power is constant over a certain effective length Le has proved to be quite sufficient in understanding the effect of nonlinearities. Suppose Po denotes the power transmitted into the fiber and P(z) = Po·e^(−αz) denotes the power at distance z along the link, with α being the fiber attenuation. Let L denote the actual link length. Then the effective length is defined as the length Le such that

Po·Le = ∫[0 to L] P(z) dz

This yields Le = (1 − e^(−αL))/α.

Typically, α = 0.22 dB/km at the 1.55 μm wavelength, and for long links where L >> 1/α, we have Le ≈ 20 km. Let's look at the figure below for the effective transmission length calculation. In the figure, (a) is a typical distribution of the power along the length L of a link. The peak power is Po. (b) is a hypothetical uniform distribution of the power along a link up to the effective length Le. This length Le is chosen such that the area under the curve in (a) is equal to the area of the rectangle in (b).
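A quick way to evaluate the effective length for a given span is shown below (an illustrative helper, not from the article); note the conversion of α from dB/km to km⁻¹ before applying Le = (1 − e^(−αL))/α.

```python
# Illustrative sketch: effective length of a fiber span.
import math

def effective_length_km(alpha_db_per_km, length_km):
    alpha = alpha_db_per_km * math.log(10) / 10      # dB/km -> 1/km
    return (1 - math.exp(-alpha * length_km)) / alpha

print(effective_length_km(0.22, 80))     # ~19.4 km for an 80 km span
print(effective_length_km(0.22, 1000))   # ~19.7 km -> approaches 1/alpha (about 20 km)
```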

>> Effective Area Ae In addition to the link length, the effect of a nonlinearity also grows with the optical power intensity in the fiber. For a given power, the intensity is inversely proportional to the area of the core. Since the power is not uniformly distributed within the cross section of the fiber, it is convenient to use an effective cross-sectional area Ae, related to the actual area A and the cross-sectional distribution of the fundamental mode F(r,θ), as

where r and θ denote the polar coordinates. The effective area, as defined above, has the significance that the dependence of most nonlinear effects can be expressed in terms of the effective area for the fundamental mode propagating in the given type of fiber. For example, the effective intensity of the pulse can be taken to be Ie = P/Ae, where P is the pulse power, in order to calculate the impact of certain nonlinear effects such as Self-Phase Modulation (SPM). The effective area of standard single mode fiber (SMF) is around 85 μm² and that of Dispersion-Shifted Fiber (DSF) around 50 μm². The dispersion compensating fibers have even smaller effective areas and hence exhibit higher nonlinearities. Let's look at the following figure; it shows the effective cross-sectional area. (a) shows a typical distribution of the signal intensity along the radius of the optical fiber. (b) shows a hypothetical intensity distribution, equivalent to that in (a) for many purposes, showing an intensity distribution that is nonzero only for an area Ae around the center of the fiber.
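As a small illustration (the 1 mW launch power is assumed; the effective areas are the ones quoted above), the sketch below compares the effective intensity Ie = P/Ae for SMF and DSF.

P_watts = 1e-3                                    # assumed launched power per channel, 1 mW
effective_areas_um2 = {"SMF": 85.0, "DSF": 50.0}  # effective areas from the text

for fiber, ae_um2 in effective_areas_um2.items():
    intensity = P_watts / (ae_um2 * 1e-12)        # W/m^2 (1 um^2 = 1e-12 m^2)
    print(f"{fiber}: Ae = {ae_um2:.0f} um^2 -> Ie = {intensity/1e6:.0f} MW/m^2")

The smaller effective area of DSF yields a noticeably higher intensity for the same launched power, which is why DSF exhibits stronger nonlinear effects.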

What are Fiber Optic Transponders? April 26, 2012 9 1. What is a fiber optic transponder?

In optical fiber communications, a transponder is the element that sends and receives the optical signal from a fiber. A transponder is typically characterized by its data rate and the maximum distance the signal can travel. >> The difference between a fiber optic transponder and transceiver A transponder and a transceiver are both functionally similar devices that convert a full-duplex electrical signal into a full-duplex optical signal. The difference between the two is that transceivers interface electrically with the host system using a serial interface, whereas transponders use a parallel interface to do so. Transponders thus provide easier-to-handle, lower-rate parallel signals, but they are bulkier and consume more power than transceivers. >> Major functions of a fiber optic transponder include: Electrical and optical signal conversion Serialization and deserialization Control and monitoring 2. Applications of fiber optic transponder Multi-rate, bidirectional fiber transponders convert short-reach 10 Gb/s and 40 Gb/s optical signals to long-reach, single-mode dense wavelength division multiplexing (DWDM) optical interfaces. The modules can be used to enable DWDM applications such as fiber relief, wavelength services, and Metro optical DWDM access overlay on existing optical infrastructure. Supporting dense wavelength multiplexing schemes, fiber optic transponders can expand the usable bandwidth of a single optical fiber to over 300 Gb/s. Transponders also provide a standard line interface for multiple protocols through replaceable 10G small form-factor pluggable (XFP) client-side optics. The data rates and typical protocols transported include synchronous optical network/synchronous digital hierarchy (SONET/SDH) (OC-192 SR1), 10 Gigabit Ethernet (10GBase-S and 10GBase-L), 10G Fibre Channel (10GFC), and SONET with G.709 forward error correction (FEC) (10.709 Gb/s). Fiber optic transponder modules can also support 3R operation (reshape, retime, regenerate) at supported rates. Often, fiber optic transponders are used for testing interoperability and compatibility. Typical tests and measurements include jitter performance, receiver sensitivity as a function of bit error rate (BER), and transmission performance based on path penalty. Some fiber optic transponders are also used to perform transmitter eye measurements. >> Major Applications of fiber optic transponder 300-pin MSA fiber optic transponders can transparently carry a native 10G LAN PHY, SONET/SDH and Fibre Channel payload with a carrier-grade DWDM Optical Transport Network (OTN) interface without the need for bandwidth limitation.

Transponders offer G.709-compliant Digital Wrapper, Enhanced Forward Error Correction (FEC) and Electrical Dispersion Compensation (EDC) for advanced optical performance and management functions superior to those found in DWDM transponder systems. They support full C- or L-band tunability and are designed to interoperate with any open DWDM line system that supports 50 GHz-spaced wavelengths per the ITU-T grid. Typical uses include reach extension on SONET, Storage Area Network (SAN), Gigabit Ethernet, and dispersion-limited links; wavelength services and Metro optical access overlay; and agile optical networks. >> Other Applications 1) Multimode to Single Mode Conversion Some transponders can convert from multimode to single mode fiber, short reach to long reach lasers, and/or 850/1310nm to 1550nm wavelengths. Each transponder module is protocol transparent and operates fully independently of the adjacent channels.

2) Redundant Fiber Path Each transponder module can also include a redundant fiber path option for extra protection. The redundant fiber option transmits the source signal over two different optical paths to two redundant receivers at the other end. If the primary path is lost, the backup receiver is switched on. Because this is done electronically rather than mechanically, it is much faster and more reliable.

3) Repeater As an optical repeater, some fiber optic transponders effectively extend an optical signal to cover the desired distance. With the Clock Recovery option, a degraded signal can be de-jittered and retransmitted to optimize signal quality.

4) Mode Conversion Mode conversion is one of the quickest and simplest ways of extending multimode optical signals over greater distances on single mode fiber optics. Note: Most receivers are capable of receiving both multimode and single mode optical signals.

3. 10Gb/s transponder block diagram

Fiber optic transponders do the simple conversion from low-speed electrical signals to high-speed optical signals. These optical transceivers with built-in MUX/DEMUX come in a compact package, with a multiplexing function that converts 622 Mbps low-speed electrical signals into a 10 Gbps ultra-high-speed optical signal.

They can contribute to significantly smaller and cheaper optical interfaces in communications equipment and switches/routers.

4. How to select a transponder? Selecting fiber optic transponders requires an understanding of jitter measurements and BER measurements. >> Jitter measurement There are three types of jitter measurement: jitter generation, jitter tolerance, and jitter transfer. Jitter analyzers are used with fiber optic transponders and test boards. Jitter generation data includes current and maximum values for jitter peak-to-peak, jitter +peak, jitter −peak, and jitter RMS (root mean square). Jitter tolerance and jitter transfer are scaled values. The following figure shows an example setup for jitter measurements.

>> BER measurement For BER measurements, test boards with fiber optic transponders are used with pulse pattern generators, error detectors, reference lasers, and reference receivers. Case temperature is an important variable. The following figure shows an example setup for Bit Error Rate Measurement.

>> Transmitter eye measurement For transmitter eye measurements, fiber optic transponders are used with pulse pattern generators, reference lasers, and high-speed oscilloscopes. There are significant differences between the filtered eye and the unfiltered eye. Eye measurements vary by distance and, because of the extinction ratio (ER), may require optical modulation amplitude (OMA) measurements instead. The following figure shows an example setup for transmitter eye measurement.

>> Path Penalty Measurement For path-penalty testing, test boards with fiber optic transponders are used with pulse pattern generators, reference lasers, reference receivers, and fiber spools. Original chirp and optimal chirp are important parameters to consider. The following figure shows an example setup for path penalty measurement.

What are Phase Velocity, Group Velocity, and Signal Velocity? December 9, 2011 [Figure: Frequency dispersion in groups of gravity waves on the surface of deep water. The red dot moves with the phase velocity, and the green dots propagate with the group velocity. In this deep-water case, the phase velocity is twice the group velocity; the red dot overtakes two green dots when moving from the left to the right of the figure.] The velocity of a wave can be defined in many different ways, partly because there are many different kinds of waves, and partly because we can focus on different aspects or components of any given wave. The ambiguity in the definition of "wave velocity" often leads to confusion, and we frequently read stories about experiments purporting to demonstrate "superluminal" propagation of electromagnetic waves (for example). Invariably, after looking into the details of these experiments, we find the claims of "superluminal communication" are simply due to a failure to recognize the differences between phase, group, and signal velocities. In the simple case of a pure traveling sinusoidal wave we can imagine a "rigid" profile being physically moved in the positive x direction with speed v as illustrated below.

Clearly the wave function depends on both time and position. At any fixed instant of time, the function varies sinusoidally along the x axis, whereas at any fixed location on the x axis the function varies sinusoidally with time. One complete cycle of the wave can be associated with an "angular" displacement of 2π radians. The angular frequency ω of a wave is the number of radians per unit time at a fixed position, whereas the wave number k is the number of radians per unit distance at a fixed time. (If we prefer to speak in terms of cycles rather than radians, we can use the wavelength λ = 2π/k and the frequency ν = ω/2π.) In terms of these parameters we can express a pure traveling wave as the function

A(t,x) = A0 cos(kx − ωt) where the "amplitude" A0 is the maximum of the function. (We use the cosine function rather than the sine merely for convenience, the difference being only a matter of phase.) The minus sign denotes the fact that if we hold t constant and increase x we are moving "to the right" along the function, whereas if we focus on a fixed spatial location and allow time to increase, we are effectively moving "to the left" along the function (or rather, it is moving to the right and we are stationary). Reversing the sign gives A0 cos(kx + ωt), which is the equation of a wave propagating in the negative x direction. Note that the function A(t,x) is the fundamental solution of the (one-dimensional) "wave equation"

Since ω is the number of radians of the wave that pass a given location per unit time, and 1/k is the spatial length of the wave per radian, it follows that ω/k = v is the speed at which the shape of the wave is moving, i.e., the speed at which any fixed phase of the cycle is displaced. Consequently this is called the phase velocity of the wave, denoted by vp. In terms of the cyclical frequency and wavelength we have vp = λν. If we imagine the wave profile as a solid rigid entity sliding to the right, then obviously the phase velocity is the ordinary speed with which the actual physical parts are moving. However, we could also imagine the quantity "A" as the position along a transverse space axis, and a sequence of tiny massive particles along the x axis, each oscillating vertically in accord with A0 cos(kx − ωt). In this case the wave pattern propagates to the right with phase velocity vp, just as before, and yet no material particle has any lateral motion at all. This illustrates that the phase of a traveling wave form may or may not correspond to a particular physical entity. It's entirely possible for a wave to "precess" through a sequence of material entities, none of which is moving in the direction of the wave. In a sense this is similar to the phenomenon of aliasing in signal processing. What we perceive as a coherent wave may in fact be simply a sequence of causally disjoint processes (like the individual spring-mass systems) that happen to be aligned spatially and temporally, either by chance or design, so that their combined behavior exhibits a wavelike pattern, even though there is no actual propagation of energy or information along the sequence. Since a general wave (or wavelike phenomenon) need not embody the causal flow of any physical effects, there is obviously no upper limit on the possible phase velocity of a wave. However, even for a "genuine" physical wave, i.e., a chain of sequentially dependent events, the phase velocity does not necessarily correspond to the speed at which energy or information is propagating. This is partly a semantic issue, because in order to actually convey information, a signal cannot be a simple periodic wave, so we must consider non-periodic signals, making the notion of "phase" somewhat ambiguous. If the wave profile never exactly repeats itself, then arguably the "period" of the signal must be the entire signal. On this basis we might say that the velocity of the signal is unambiguously equal to the "phase velocity", but in this context the phase velocity could only be defined as the speed of the leading (or trailing) edge of the overall signal. In practice and common usage, though, we tend to define the "phase" of a signal with respect to the intervals between consecutive local maxima (or minima, or zero crossings). To illustrate, consider a signal consisting of two superimposed sine waves with slightly different frequencies and wavelengths, i.e., a signal with the amplitude function A(x,t) = cos[(k − Δk)x − (ω − Δω)t] + cos[(k + Δk)x − (ω + Δω)t]. As most people know from experience, the combination of two slightly unequal tones produces a "beat", resulting from the tones cycling in and out of phase with each other. Using a well-known trigonometric identity we can express the two components of this signal in the form

Therefore, adding the two terms of A(x,t) together, the products of sines cancel out, and we can express the overall signal as

This can be somewhat loosely interpreted as a simple sinusoidal wave with the angular frequency ω, the wave number k, and the modulated amplitude 2cos(Δkx − Δωt). In other words, the amplitude of the wave is itself a wave, and the phase velocity of this modulation wave is v = Δω/Δk. A typical plot of such a signal is shown below for the case ω = 6 rad/sec, k = 6 rad/meter, Δω = 0.1 rad/sec, Δk = 0.3 rad/meter.

The "phase velocity" of the internal oscillations is w/k = 1 meter/sec, whereas the amplitude envelope wave (indicated by the dotted lines) has a phase velocity of Dw/Dk = 0.33 meter/sec. As a result, if we were riding along with the envelope, we would observe the internal oscillations moving forward from one group to the next. The propagation of information or energy in a wave always occurs as a change in the wave. The most obvious example is changing the wave from being absent to being present, which propagates at the speed of the leading edge of a wave train. More generally, some modulation of the frequency and/or amplitude of a wave is required in order to convey information, and it is this modulation that represents the signal content. Hence the actual speed of content in the situation described above is Dw/Dk. This is the phase velocity of the amplitude wave, but since each amplitude wave contains a group of internal waves, this speed is usually called the group velocity. Physical waves of a given type in a given medium generally exhibit a characteristic group velocity as well as a characteristic phase velocity. This is because within a given medium there is a fixed relationship between the wave number and the frequency of waves. For example, in a transparent optical medium the refractive index n is defined as the ratio c/vp where c is the speed of light in vacuum and vp is the phase velocity of light in that medium. Now, since vp = w/k, we have w = kc/n. Bearing in mind that the refractive index is typically a function of the frequency (resulting in the "dispersion" of colors seen in prisms, rainbows, etc.), we can take the derivative of w as follows

Hence any modulation of an electromagnetic wave in this medium will propagate at the group velocity vg = dω/dk = (c/n)[1 − (k/n)(dn/dk)].

In a medium whose refractive index is constant, independent of frequency (such as the vacuum), we have dn/dk = 0 and therefore the group velocity equals the phase velocity. On the other hand, most commonly observable transparent media (such as air, water, and glass) at optical frequencies have refractive indices that increase slightly as a function of wave number and (therefore) frequency. This is why the high frequency (blue) components of a beam of white light are deflected more than the low frequency (red) components as they pass through a glass prism. It follows that the group velocity of light in such media (called dispersive) is less than the phase velocity. It is quite possible for the phase velocity of a perfectly monochromatic wave of light, assuming such a thing exists, to exceed the value of c, because it conveys no information. In fact, the concept of a perfectly monochromatic beam of light is similar to the idea of a "free photon", in the sense that neither of them has any physical significance or content, because a photon must be emitted and absorbed, just as a beam of light cannot be infinite in extent and duration, but must always have a beginning and an end, which introduces a range of spectral components to the signal. Any modulation will propagate at the group velocity, which, in dispersive media, is always less than c. An example of an actual physical application in which we must be careful to distinguish between the phase and the group velocity is the case of electromagnetic waves propagating through a hollow metallic

conductor, often called a waveguide. A waveguide imposes a "cutoff frequency" ω0 on any propagating electromagnetic waves based on the geometry of the tube, and will not sustain waves of any lower frequency. This is roughly analogous to how the pipes in a church organ will sustain only certain resonant patterns. As a result, the dominant wave pattern of a propagating wave with a frequency of ω will have a wave number k given by k = √(ω² − ω0²)/c.

Since (as we've seen) the phase velocity is ω/k, this implies that the (dominant) phase velocity in a waveguide with cutoff frequency ω0 is vp = c/√(1 − (ω0/ω)²).

Hence, not only is the phase velocity generally greater than c, it approaches infinity as ω approaches the cutoff frequency ω0. However, the speed at which information and energy actually propagates down a waveguide is the group velocity, which (as we've seen) is given by dω/dk. Taking the derivative of the preceding expression for k with respect to ω gives

so the group velocity in a waveguide with cutoff frequency ω0 is vg = c·√(1 − (ω0/ω)²),

which of course is always less than or equal to c. Unfortunately we frequently read in the newspapers about how someone has succeeded in transmitting a wave with a group velocity exceeding c, and we are asked to regard this as an astounding discovery, overturning the principles of relativity, etc. The problem with these stories is that the group velocity corresponds to the actual signal velocity only under conditions of normal dispersion, or, more generally, under conditions when the group velocity is less than the phase velocity. In other circumstances, the group velocity does not necessarily represent the actual propagation speed of any information or energy. For example, in a regime of anomalous dispersion, which means the refractive index decreases with increasing wave number, the preceding formula shows that what we called the group velocity exceeds what we called the phase velocity. In such circumstances the group velocity no longer represents the speed at which information or energy propagates. To see why the group velocity need not correspond to the speed of information in a wave, notice that in general, by superimposing simple waves with different frequencies and wavelengths, we can easily produce a waveform with a group velocity that is arbitrarily great, even though the propagation speeds of the constituent waves are all low. A snapshot of such a case is shown below. In this figure the sinusoidal wave denoted as "A" has a wave number of kA = 2 rad/meter and an angular frequency of ωA = 2 rad/sec, so its individual phase velocity is vA = 1 meter/sec. The sinusoidal wave denoted as "B" has a wave number of kB = 2.2 rad/meter and an angular frequency of ωB = 8 rad/sec, so its individual phase velocity is vB = 3.63 meters/sec.
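The small sketch below (an illustration added here, not part of the original text) simply recomputes the numbers used in this example and in the next paragraph: the phase velocities of A and B and the velocity of their beat envelope.

kA, wA = 2.0, 2.0          # wave A: rad/m, rad/s (values from the text)
kB, wB = 2.2, 8.0          # wave B: rad/m, rad/s

print(f"vA = {wA/kA:.2f} m/s, vB = {wB/kB:.2f} m/s")   # component phase velocities

k_avg, w_avg = (kA + kB) / 2, (wA + wB) / 2            # carrier: k = 2.1 rad/m, w = 5 rad/s
dk, dw = (kB - kA) / 2, (wB - wA) / 2                  # envelope: dk = 0.1 rad/m, dw = 3 rad/s
print(f"envelope velocity = {dw/dk:.0f} m/s")          # -> 30 m/s

The envelope moves an order of magnitude faster than either component wave, yet, as explained below, it carries no information faster than the components themselves.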

The sum of these two signals is denoted as "A+B" and, according to the formulas given above, it follows that this sum can be expressed in the form 2cos(kx − ωt)cos(Δkx − Δωt) where k = 2.1, ω = 5, Δk = 0.1, and Δω = 3. Consequently, the "envelope wave" represented by the second factor has a phase velocity of 30 meters/sec. Nevertheless, it's clear that no information can be propagating faster than the phase speeds of the constituent waves A and B. Indeed if we follow the midpoint of a "group" of A+B as it proceeds from left to right, we find that when it reaches the right hand side it consists of the sum of peaks of A and B that entered at the left long before the current "group" had even "appeared". This is just one illustration of how simple interfering phase effects can be misconstrued as ultra-high-speed signals. In fact, by simply setting kA to 2.2 and kB to 2.0, we can cause the "groups" of A+B to precess from right to left, which might mistakenly be construed as a signal propagating backwards in time! Needless to say, we have not succeeded in arranging for a message to be received before it was sent, nor even in transmitting a message superluminally. Examples of this kind merely illustrate that the "group velocity" does not always represent the speed at which real information (or energy) is moving. This stands to reason, because the two cosine factors of the carrier/modulation waveform are formally identical, so we can't arbitrarily declare one of them to represent the carrier and the other to represent the modulation. Both are required, so we shouldn't expect the information speed to be any greater than the lesser of the two phase speeds, nor should it exceed the lesser of the phase speeds of the two components A and B. Furthermore, we already know that the transmission of information via an individual wave such as A will propagate at the speed of an incremental disturbance of A, which depends on how dω and dk are related to each other. In the example above we arbitrarily selected increments of ω and k, but our ability to do this in a physical context would depend on a great deal of flexibility in the wave propagation properties of the medium. This is where the ingenuity of the experimenter can be deployed to arrange various exotic substances and fields in such a way as to permit the propagation of waveforms with the desired properties. The important point to keep in mind is that none of these experiments actually achieves meaningful superluminal transfer of information. Incidentally, since we can contrive to make the "groups" propagate in either direction, it's not surprising that we can also make them stationary. Two identical waves propagating in opposite directions at the same speed are given by A0 cos(kx − ωt) and A0 cos(kx + ωt). Superimposing these two waves (with synchronized nodes) yields the pure standing wave 2A0 cos(kx)cos(ωt). What is Four-Wave Mixing (FWM) in Fiber Optic Communication Systems? December 1, 2011 6 >> Nonlinear Effects in High Power, High Bit Rate Fiber Optic Communication Systems When optical communication systems are operated at moderate power (a few milliwatts) and at bit rates up to about 2.5 Gb/s, they can be treated as linear systems. However, at higher bit rates such as 10 Gb/s and above and/or at higher transmitted powers, it is important to consider the effect of nonlinearities. In the case of WDM systems, nonlinear effects can become important even at moderate powers and bit rates.

There are two categories of nonlinear effects. The first category arises from the interaction of light waves with phonons (molecular vibrations) in the silica medium of the optical fiber. The two main effects in this category are stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS). The second category of nonlinear effects is caused by the dependence of the refractive index on the intensity of the optical power (the applied electric field). The most important nonlinear effects in this category are self-phase modulation (SPM) and four-wave mixing (FWM).

>> Basic Principles of Four-Wave Mixing 1. How the Fourth Wave is Generated In a WDM system with multiple channels, one important nonlinear effect is four-wave mixing. Four-wave mixing is an intermodulation phenomenon, whereby interactions between three wavelengths produce a fourth wavelength. In a WDM system using the angular frequencies ω1, ..., ωn, the intensity dependence of the refractive index not only induces phase shifts within a channel but also gives rise to signals at new frequencies such as 2ωi − ωj and ωi + ωj − ωk. This phenomenon is called four-wave mixing. In contrast to Self-Phase Modulation (SPM) and Cross-Phase Modulation (CPM), which are significant mainly for high-bit-rate systems, the four-wave mixing effect is independent of the bit rate but is critically dependent on the channel spacing and fiber chromatic dispersion. Decreasing the channel spacing increases the four-wave mixing effect, and so does decreasing the chromatic dispersion. Thus the effects of Four-Wave Mixing must be considered even for moderate-bit-rate systems when the channels are closely spaced and/or dispersion-shifted fibers are used. To understand the effects of four-wave mixing, consider a WDM signal that is the sum of n monochromatic plane waves. Thus the electric field of this signal can be written as

The nonlinear dielectric polarization PNL(r,t) is given by

where χ(3) is called the third-order nonlinear susceptibility and is assumed to be a constant (independent of t). Using the above two equations, the nonlinear dielectric polarization is given by

Thus the nonlinear susceptibility of the fiber generates new fields (waves) at the frequencies ωi ± ωj ± ωk (i, j, k not necessarily distinct). This phenomenon is termed four-wave mixing. The reason for this term is that three waves with the frequencies ωi, ωj, and ωk combine to generate a fourth wave at a frequency ωi ± ωj ± ωk. For equal frequency spacing, and certain choices of i, j, and k, the fourth wave contaminates ωi. For example, for a frequency spacing Δω, taking ω1, ω2, and ω3 to be successive frequencies, that is, ω2 = ω1 + Δω and ω3 = ω1 + 2Δω, we have ω1 − ω2 + ω3 = ω2, and 2ω2 − ω1 = ω3. In the above equation, the term (28) represents the effect of SPM and CPM. The terms (29), (31), and (32) can be neglected because of lack of phase matching. Under suitable circumstances, it is possible to approximately satisfy the phase-matching condition for the remaining terms, which are all of the form ωi + ωj − ωk, i, j ≠ k (i, j not necessarily distinct). For example, if the wavelengths in the WDM system are closely spaced, or are spaced near the dispersion zero of the fiber, then β is nearly constant over these frequencies and the phase-matching condition is nearly satisfied. When this is so, the power generated at these frequencies can be quite significant. 2. Power Penalty Due to Four-Wave Mixing From the above discussion, we can see that the nonlinear polarization causes three signals at frequencies ωi, ωj, and ωk to interact to produce signals at frequencies ωi ± ωj ± ωk. Among these signals, the most troublesome one is the signal corresponding to ωijk = ωi + ωj − ωk, i ≠ k, j ≠ k. Depending on the individual frequencies, this beat signal may lie on or very close to one of the individual channels in frequency, resulting in significant crosstalk to that channel. In a multichannel system with W channels, this effect results in a large number (W(W−1)²) of interfering signals corresponding to i, j, k varying from 1 to W. In a system with three channels, for example, 12 interfering terms are produced, as shown in the following figure.
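As a cross-check of the counting above, the short sketch below (added for illustration; the channel frequencies are assumed values on a 100 GHz grid) enumerates the mixing products fi + fj − fk for a three-channel system, counting ordered (i, j) pairs as in the W(W−1)² expression, and checks how many land exactly on an existing channel.

channels = {1: 193.1, 2: 193.2, 3: 193.3}        # assumed channel frequencies in THz

products = []
for i in channels:
    for j in channels:
        for k in channels:
            if k != i and k != j:                # exclude pure SPM/CPM terms
                f = round(channels[i] + channels[j] - channels[k], 3)
                products.append(((i, j, k), f))

print("number of mixing terms:", len(products))                        # 12 for W = 3
print("terms landing on a channel:",
      sum(1 for _, f in products if f in channels.values()))           # in-band crosstalk terms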

Interestingly, the effect of four-wave mixing depends on the phase relationship between the interacting signals. If all the interfering signals travel with the same group velocity, as would be the case if there were no chromatic dispersion, the effect is reinforced. On the other hand, with chromatic dispersion present, the different signals travel with different group velocities. Thus the different waves alternately overlap in and out of phase, and the net effect is to reduce the mixing efficiency. The velocity difference is greater when the channels are spaced farther apart (in systems with chromatic dispersion). To quantify the power penalty due to four-wave mixing, we can start from the following equation

This equation assumes a link of length L without any loss or chromatic dispersion. Here Pi, Pj, and Pk denote the powers of the mixing waves, Pijk is the power of the resulting new wave, n̄ is the nonlinear refractive index (3.0 × 10^-8 μm²/W), and dijk is the so-called degeneracy factor. In a real system, both loss and chromatic dispersion are present. To take the loss into account, we replace L with the effective length Le, which is given by the following equation for a system of length L with amplifiers spaced l km apart.

The presence of chromatic dispersion reduces the efficiency of the mixing. We can model this by assuming a parameter ηijk, which represents the efficiency of mixing of the three waves at frequencies ωi, ωj, and ωk. Taking these two effects into account, we can modify the preceding equation to

For on-off keying (OOK) signals, this represents the worst-case power at frequency ωijk, assuming a 1 bit has been transmitted simultaneously on frequencies ωi, ωj, and ωk. The efficiency ηijk goes down as the phase mismatch between the interfering signals increases. We can obtain the efficiency as

Here, Δβ is the difference in propagation constants between the different waves, and D is the chromatic dispersion. Note that the efficiency has a component that varies periodically with the length as the

interfering waves go in and out of phase. In this example, we will assume the maximum value for this component. The phase mismatch can be calculated as Δβ = βi + βj − βk − βijk, where βr represents the propagation constant at wavelength λr. Four-wave mixing manifests itself as intrachannel crosstalk. The total crosstalk power for a given channel c is given as

Assume the amplifier gains are chosen to match the link loss so that the output power per channel is the same as the input power. The crosstalk penalty can therefore be calculated from the following equation.

Assume that the channels are equally spaced and transmitted with equal power, and the maximum allowable penalty due to Four-Wave Mixing (FWM) is 1 dB. Then if the transmitted power in each channel is P, the maximum FWM power in any channel must be < εP, where ε can be calculated to be 0.034 for a 1 dB penalty using the above equation. Since the generated FWM power increases with link length, this sets a limit on the transmit power per channel as a function of the link length. This limit is plotted in the following figure for both standard single mode fiber (SMF) and dispersion-shifted fiber (DSF) for three cases: (1) 8 channels spaced 100 GHz apart, (2) 32 channels spaced 100 GHz apart, and (3) 32 channels spaced 50 GHz apart. For standard single mode fiber (SMF) the chromatic dispersion parameter is taken to be D = 17 ps/nm-km, and for DSF the chromatic dispersion zero is assumed to lie in the middle of the transmitted band of channels. The slope of the chromatic dispersion curve, dD/dλ, is taken to be 0.055 ps/nm²-km.

We can get several conclusions from the above power penalty figure. 1). The limit is significantly worse in the case of dispersion-shifted fiber than it is for standard single mode fiber. This is because the four-wave mixing efficiencies are much higher in dispersion-shifted fiber due to the low value of the chromatic dispersion.

2). The power limit gets worse with an increasing number of channels, as can be seen by comparing the limits for 8-channel and 32-channel systems for the same 100 GHz spacing. This effect is due to the much larger number of four-wave mixing terms that are generated when the number of channels is increased. In the case of dispersion-shifted fiber, this difference due to the number of four-wave mixing terms is imperceptible since, even though there are many more terms for the 32-channel case, the same 8 channels around the dispersion zero as in the 8-channel case contribute almost all the four-wave mixing power. The four-wave mixing power contribution from the other channels is small because there is much more chromatic dispersion at these wavelengths. 3). The power limit decreases significantly if the channel spacing is reduced, as can be seen by comparing the curves for the two 32-channel systems with channel spacing of 100 GHz and 50 GHz. This decrease in the allowable transmit power arises because the four-wave mixing efficiency increases with a decrease in the channel spacing since the phase mismatch is reduced. (For SMF, though the efficiencies at both 50 GHz and 100 GHz are small, the efficiency is much higher at 50 GHz than at 100 GHz.) 3. Solutions for Four-Wave Mixing Four-wave mixing is a severe problem in WDM systems using dispersion-shifted fiber but does not usually pose a major problem in systems using standard fiber. In fact, it motivated the development of Non-Zero Dispersion-Shifted Fiber (NZ-DSF). In general, the following actions alleviate the penalty due to four-wave mixing. 1) Unequal channel spacing. The positions of the channels can be chosen carefully so that the beat terms do not overlap with the data channels inside the receiver bandwidth. This may be possible for a small number of channels in some cases but needs careful computation of the exact channel positions.

2) Increased channel spacing. This increases the group velocity mismatch between channels. It has the drawbacks of increasing the overall system bandwidth, requiring the optical amplifiers to be flat over a wider bandwidth, and increasing the penalty due to Stimulated Raman Scattering (SRS).

3) Using higher wavelengths beyond 1560 nm with DSF. Even with DSF, a significant amount of chromatic dispersion is present in this range, which reduces the effect of four-wave mixing. The newly developed L-band amplifiers can be used for long-distance transmission over DSF. 4) As with other nonlinearities, reducing transmitter power and the amplifier spacing will decrease the penalty. 5) If the wavelengths can be demultiplexed and multiplexed in the middle of the transmission path, we can introduce different delays for each wavelength. This randomizes the phase relationship between the different wavelengths. Effectively, the FWM powers introduced before and after this point are summed instead of the electric fields being added in phase, resulting in a smaller FWM penalty. >> Fiber Optic Communication System Design Considerations When designing a fiber optic communication system some of the following factors must be taken into consideration: Which modulation and multiplexing technique is best suited for the particular application? Is enough power available at the receiver (power budget)? Rise-time and bandwidth characteristics Noise effects on system bandwidth, data rate, and bit error rate Are erbium-doped fiber amplifiers required? What type of fiber is best suited for the application? Cost 1. Power Budget The power arriving at the detector must be sufficient to allow clean detection with few errors. Clearly, the signal at the receiver must be larger than the noise. The power at the detector, Pr, must be above the threshold level or receiver sensitivity Ps. Pr >= Ps The receiver sensitivity Ps is the signal power, in dBm, at the receiver that results in a particular bit error rate (BER). Typically the BER is chosen to be one error in 10^9 bits, or 10^-9. The received power at the detector is a function of: Power emanating from the light source (laser diode or LED) (PL) Source to fiber loss (Lsf) Fiber loss per km (FL) for a length of fiber (L) Connector or splice losses (Lconn) Fiber to detector loss (Lfd) The allocation of power loss among system components is the power budget. The power margin Lm is the amount by which the received power Pr exceeds the receiver sensitivity Ps: Lm = Pr − Ps, where Lm is the loss margin in dB, Pr is the received power, and Ps is the receiver sensitivity in dBm. If all of the loss mechanisms in the system are taken into consideration, the loss margin can be expressed as the following equation. All units are dB and dBm. Lm = PL − Lsf − (FL × L) − Lconn − Lfd − Ps
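A minimal worked example of the loss-margin equation above is sketched below; all of the component values are assumed for illustration only.

PL    = 0.0     # power coupled from the source, dBm (assumed)
Lsf   = 1.0     # source-to-fiber loss, dB (assumed)
FL    = 0.35    # fiber loss per km, dB/km (assumed)
L     = 20.0    # fiber length, km (assumed)
Lconn = 2.0     # total connector/splice losses, dB (assumed)
Lfd   = 1.0     # fiber-to-detector loss, dB (assumed)
Ps    = -28.0   # receiver sensitivity, dBm (assumed)

Lm = PL - Lsf - (FL * L) - Lconn - Lfd - Ps
print(f"loss margin Lm = {Lm:.1f} dB")   # a positive margin means the power budget closes

With these assumed numbers the margin works out to 17 dB, comfortably positive.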

2. Bandwidth and Rise Time Budgets The transmission data rate of a digital fiber optic communication system is limited by the rise time of the various components, such as amplifiers and LEDs, and the dispersion of the fiber. The cumulative effect of all the components should not limit the bandwidth of the system. The rise time tr and bandwidth BW are related by BW = 0.35/tr. This equation is used to determine the required system rise time. The appropriate components are then selected to meet the system rise time requirements. The relationship between the total system rise time and the component rise times is the root-sum-of-squares combination given by the following equation, where ts is the total system rise time and tr1, tr2, ... are the rise times associated with the various components. To simplify matters, divide the system into five groups: Transmitting circuits (ttc) LED or laser (tL) Fiber dispersion (tf) Photodiode (tph) Receiver circuits (trc) The system rise time can then be expressed as the root-sum-of-squares combination of ttc, tL, tf, tph, and trc. The system bandwidth can then be calculated from the total rise time ts using BW = 0.35/ts. Electrical and Optical Bandwidth Electrical bandwidth (BWel) is defined as the frequency at which the ratio current out/current in (Iout/Iin) drops to 0.707. (Analog systems are usually specified in terms of electrical bandwidth.) Optical bandwidth (BWopt) is the frequency at which the ratio power out/power in (Pout/Pin) drops to 0.5. Because Pin and Pout are directly proportional to Iin and Iout (not Iin² and Iout²), the half-power point is equivalent to the half-current point. This results in a BWopt that is larger than the BWel, as given in the following equation: BWel = 0.707 × BWopt
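The rise-time budget described above can be sketched in a few lines; the individual component rise times below are assumed values, and the plain root-sum-of-squares form is assumed (some references include an additional safety factor).

import math

rise_times_ns = {               # assumed component rise times, ns
    "transmitter circuits": 1.0,
    "LED/laser":            1.0,
    "fiber dispersion":     3.0,
    "photodiode":           1.5,
    "receiver circuits":    1.0,
}

ts = math.sqrt(sum(t**2 for t in rise_times_ns.values()))   # total system rise time, ns
bw_hz = 0.35 / (ts * 1e-9)                                   # system bandwidth from BW = 0.35/ts

print(f"total rise time ts = {ts:.2f} ns")
print(f"system bandwidth  ~ {bw_hz/1e6:.0f} MHz")

Note how the slowest component (here the assumed fiber-dispersion term) dominates the total, which is exactly why the budget is worked component by component.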

3. Fiber Connectors Many types of connectors are available for fiber optics, depending on the application. The most popular are: SC (snap-in single-fiber connector), ST and FC (twist-on single-fiber connectors), and FDDI (fiber distributed data interface connector). In the 1980s, there were many different types and manufacturers of connectors. Today, the industry has shifted to standardized connector types, with details specified by organizations such as the Telecommunications Industry Association (TIA), the International Electrotechnical Commission, and the Electronic Industry Association (EIA). Snap-in connector (SC): developed by Nippon Telegraph and Telephone of Japan. Like most fiber connectors, it is built around a cylindrical ferrule that holds the fiber, and it mates with an interconnection adapter or coupling receptacle. A push on the connector latches it into place, with no need to turn it in a tight space, so a simple tug will not unplug it. It has a square cross section that allows high packing density on patch panels and makes it easy to package in a polarized duplex form that ensures the fibers are matched to the proper fibers in the mated connector.

Twist-on single-fiber connectors (ST and FC): long used in data communication; one of several fiber connectors that evolved from designs originally used for copper coaxial cables.

Duplex connectors: A duplex connector includes a pair of fibers and generally has an internal key so it can be mated in only one orientation. Polarizing the connector in this way is important because most systems use separate fibers to carry signals in each direction, so it matters which fibers are connected. One simple

type of duplex connector is a pair of SC connectors, mounted side by side in a single case. This takes advantage of their plug-in-lock design. Other duplex connectors have been developed for specific types of networks, as part of comprehensive standards. One example is the fixed-shroud duplex (FSD) connector specified by the fiber distributed data interface (FDDI) standard.

4. Fiber Optic Couplers A fiber optic coupler is a device used to connect a single (or multiple) fiber to many other separate fibers. There are two general categories of couplers: star couplers and T-couplers.

A. Star Couplers Transmissive type: Optical signals sent into a mixing block are available at all output fibers. Power is distributed evenly. For an n × n star coupler (n inputs and n outputs), the power available at each output fiber is 1/n the power of any input fiber.

The output power from a star coupler is simply Po = Pin/n, where n is the number of output fibers. An important characteristic of transmissive star couplers is cross talk, or the amount of input information coupled into another input. Cross coupling is given in decibels and is typically greater than 40 dB. The reflective star coupler has the same power division as the transmissive type, but cross talk is not an issue because power from any fiber is distributed to all others. B. T-Couplers In the following figure, power is launched into port 1 and is split between ports 2 and 3. The power split does not have to be equal. The power division is given in decibels or in percent. For example, an 80/20 split means 80% to port 2 and 20% to port 3. In decibels, this corresponds to 0.97 dB for port 2 and approximately 7 dB for port 3.
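As a quick check of the split figures just quoted, the sketch below converts a power fraction into the corresponding port loss in dB.

import math

def split_loss_db(fraction):
    """Loss in dB for the fraction of input power delivered to a port."""
    return -10 * math.log10(fraction)

for port, frac in (("port 2", 0.80), ("port 3", 0.20)):
    print(f"{port}: {frac:.0%} of input -> {split_loss_db(frac):.2f} dB")

This reproduces the 0.97 dB figure for the 80% port and just under 7 dB for the 20% port.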

Directivity describes the transmission between the ports. For example, if P3/P1 = 0.5, P3/P2 does not necessarily equal 0.5. For a highly directive T-coupler, P3/P2 is very small. Typically, no power is expected to be transferred between any two ports on the same side of the coupler. Another type of T-coupler uses a graded-index (GRIN) lens and a partially reflective surface to accomplish the coupling. The power division is a function of the reflecting mirror. This coupler is often used to monitor optical power in a fiber optic line. 5. Wavelength-Division Multiplexers (WDM) The couplers used for wavelength-division multiplexing (WDM) are designed specifically to make the coupling between ports a function of wavelength. The purpose of these couplers is to separate (or combine) signals transmitted at different wavelengths. Essentially, the transmitting coupler is a mixer and the receiving coupler is a wavelength filter. Wavelength-division multiplexers use several methods to separate different wavelengths depending on the spacing between the wavelengths. Separation of 1310 nm and 1550 nm is a simple operation and can be achieved with WDMs using bulk optical diffraction gratings. Wavelengths in the 1550-nm range that are spaced at greater than 1 to 2 nm can be resolved using WDMs

that incorporate interference filters. An example of an 8-channel WDM using interference filters is given in the following figure. Fiber Bragg gratings are typically used to separate very closely spaced wavelengths in a DWDM system (< 0.8 nm).

6. Erbium-Doped Fiber Amplifiers (EDFA) Erbium-doped fiber amplifiers (EDFA): The EDFA is an optical amplifier used to boost the signal level in the 1530-nm to 1570-nm region of the spectrum. When it is pumped by an external laser source at either 980 nm or 1480 nm, signal gain can be as high as 30 dB (1000 times). Because EDFAs allow signals to be regenerated without having to be converted back to electrical signals, systems are faster and more reliable. When used in conjunction with wavelength-division multiplexing, fiber optic systems can transmit enormous amounts of information over long distances with very high reliability.

7. Fiber Bragg Gratings (FBG) Fiber Bragg gratings: Fiber Bragg gratings are devices that are used for separating wavelengths through diffraction, similar to a diffraction grating (see the following figure). They are of critical importance in DWDM systems in which multiple closely spaced wavelengths require separation. Light entering the fiber Bragg grating is diffracted by the induced periodic variations in the index of refraction. By spacing the periodic variations at multiples of the half-wavelength of the desired signal, each variation reflects light with a 360° phase shift, causing constructive interference of a very specific wavelength while allowing others to pass. Fiber Bragg gratings are available with bandwidths ranging from 0.05 nm to >20 nm.

Fiber Bragg gratings are typically used in conjunction with circulators, which are used to drop single or multiple narrowband WDM channels and to pass other express channels. Fiber Bragg gratings have emerged as a major factor, along with EDFAs, in increasing the capacity of next-generation high-bandwidth fiber optic systems.

The following figure depicts a typical scenario in which DWDM and EDFA technology is used to transmit a number of different channels of high-bandwidth information over a single fiber. As shown, n individual wavelengths of light operating in accordance with the ITU grid are multiplexed together using a multichannel coupler/splitter or wavelength-division multiplexer. An optical isolator is used with each optical source to minimize troublesome back reflections. A tap coupler then removes 3% of the transmitted signal for wavelength and power monitoring. Upon traveling through a substantial length of fiber (50-100 km), an EDFA is used to boost the signal strength. After a couple of stages of amplification, an add/drop channel consisting of a fiber Bragg grating and circulator is introduced to extract and then reinject the signal operating at the λ3 wavelength. After another stage of amplification via EDFA, a broadband WDM is used to combine a 1310-nm signal with the 1550-nm window signals. At the receiver end, another broadband WDM extracts the 1310-nm signal, leaving the 1550-nm window signals. The 1550-nm window signals are finally separated using a DWDM that employs an array of fiber Bragg gratings, each tuned to the specific transmission wavelength. This system represents the current state of the art in high-bandwidth fiber optic data transmission.

>> Fiber Optic Detectors The purpose of a fiber optic detector is to convert light emanating from the optical fiber back into an electrical signal. The choice of a fiber optic detector depends on several factors including wavelength, responsivity, and speed or rise time. The following figure shows the various types of detectors and their spectral responses.

The process by which light is converted into an electrical signal is the opposite of the process that produces the light. Light striking the detector generates a small electrical current that is amplified by an external circuit. Absorbed photons excite electrons from the valence band to the conduction band, resulting in the

creation of an electron-hole pair. Under the influence of a bias voltage these carriers move through the material and induce a current in the external circuit. For each electron-hole pair created, the result is an electron flowing in the circuit. Typical current levels are small and require some amplification as shown in the following figure.

The most commonly used photodetectors are the PIN and avalanche photodiodes (APD). The material composition of the device determines the wavelength sensitivity. In general, silicon devices are used for detection in the visible portion of the spectrum; InGaAs crystals are used in the near-infrared portion of the spectrum between 1000 nm and 1700 nm, and germanium PIN and APDs are used between 800 nm and 1500 nm. The following table gives some typical photodetector characteristics:

Some of the more important detector parameters are listed below. Responsivity: the ratio of the detector's electrical output (current) to its input optical power, expressed in A/W. Quantum efficiency: the ratio of the number of electrons generated by the detector to the number of photons incident on the detector. Quantum efficiency = (Number of electrons)/(Number of photons). Dark current: the amount of current generated by the detector with no light applied. Dark current increases about 10% for each temperature increase of 1°C and is much more prominent in Ge and InGaAs at longer wavelengths than in silicon at shorter wavelengths. Noise floor: the minimum detectable power that a detector can handle. The noise floor is related to the dark current since the dark current will set the lower limit. Noise floor = Noise (A)/Responsivity (A/W). Response time: the time required for the detector to respond to an optical input. The response time is related to the bandwidth of the detector by BW = 0.35/tr, where tr is the rise time of the device. The rise time is the time required for the detector to rise to a value equal to 63.2% of its final steady-state reading. Noise equivalent power (NEP): at a given modulation frequency, wavelength, and noise bandwidth, the incident radiant power that produces a signal-to-noise ratio of one at the output of the detector. >> Direct Modulation versus External Modulation Lasers and LEDs used in telecommunication applications are modulated using one of two methods: direct modulation or external modulation. 1. Direct Modulation

In direct modulation (see the following figure), the output power of the device varies directly with the input drive current. Both LEDs and lasers can be directly modulated using analog and digital signals. The benefit of direct modulation is that it is simple and cheap. The disadvantage is that it is slower than external modulation, with bandwidth limits of less than approximately 3 GHz.

2. External Modulation In external modulation (see the following figure), an external device is used to modulate the intensity or phase of the light source. The light source remains on while the external modulator acts like a shutter controlled by the information being transmitted. External modulation is typically used in high-speed applications such as long-haul telecommunication or cable TV head ends. The benefits of external modulation are that it is much faster and can be used with higher-power laser sources. The disadvantage is that it is more expensive and requires complex circuitry to handle the high frequency RF modulation signal.

External modulation is typically accomplished using an integrated optical modulator that incorporates a waveguide Mach-Zehnder interferometer fabricated on a slab of lithium niobate (LiNbO3). The waveguide is created using a lithographic process similar to that used in the manufacturing of semiconductors. The waveguide region is slightly doped with impurities to increase the index of refraction so that the light is guided through the device (see the following figure).

Light entering the modulator (via fiber pigtail) is split into two paths. One path is unchanged or unmodulated. The other path has electrodes placed across it. Because LiNbO3 is an electro-optic material, when a voltage is placed across the waveguide its index of refraction is changed, causing a phase delay

proportional to the amplitude of the applied voltage. When the light is then recombined, the two waves interfere with one another. If the two waves are in phase, the interference is constructive and the output is on. If the two waves are out of phase, the interference is destructive and the waves cancel each other. The input voltage associated with a 180° phase shift is known as Vπ. The induced phase shift can be calculated as Δφ = 180° × (Vin/Vπ), where Vin is the voltage applied to the modulator. Lithium niobate modulators are well developed and used extensively in both CATV and telecommunication applications. Devices are available at both the 1310-nm and 1550-nm wavelengths. >> Analog and Digital Signals Information in a fiber optic system can be transmitted in one of two ways: analog or digital. An analog signal is one that varies continuously with time. For example, when you speak into the telephone, your voice is converted to an analog voltage that varies continuously. The signal from your cable TV company is also analog.

A digital signal is one that exists only at discrete levels. For example, in a computer, information is represented as zeros and ones (0 and 5 volts). In the case of the telephone, the analog voice signal emanating from your handset is sent through a pair of wires to a device called a concentrator, which is located either on a utility pole, in a small service box, or in a manhole. The concentrator converts the analog signal to a digital signal that is combined with many other telephone signals through a process called multiplexing.

In telecommunication, most signals are digitized. An exception is cable TV, which still transmits video information in analog form. With the advent of digital and high-definition television (HDTV), cable TV signals are also being transmitted digitally. Digital transmission has several advantages over analog transmission. First, it is easier to process electronically; no conversion is necessary. It is also less susceptible to noise because it operates with discrete signal levels. The signal is either on or off, which makes it harder to corrupt. Digital signals may also be encoded to detect and correct transmission errors. >> What Is PCM (Pulse Code Modulation)? Pulse code modulation (PCM) is the process of converting an analog signal into an n-bit binary code (one of 2^n discrete levels per sample).

Consider the block diagram shown in the above figure. An analog signal is placed on the input of a sample and hold. The sample and hold circuit is used to capture the analog voltage long enough for the conversion to take place. The output of the sample and hold circuit is fed into the analog-to-digital converter (A/D). An A/D converter operates by taking periodic discrete samples of the analog signal at specific points in time and converting them to n-bit binary numbers. For example, an 8-bit A/D converts an analog voltage into a binary number with 2^8 = 256 discrete levels (between 0 and 255). For an analog voltage to be successfully converted, it must be sampled at a rate at least twice its maximum frequency. This is known as the Nyquist sampling rate. An example of this is the process that takes place in the telephone system. A standard telephone has a bandwidth of 4 kHz. When you speak into the telephone, your 4-kHz bandwidth voice signal is sampled at twice the 4-kHz frequency, or 8 kHz. Each sample is then converted to an 8-bit binary number. This occurs 8000 times per second. Thus, if we multiply 8000 samples per second by 8 bits per sample, we get the standard bit rate for a single voice channel in the North American DS1 system, which is 64 kbits/s.
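The voice-channel arithmetic above, plus a toy 8-bit quantizer of the kind an A/D converter implements, can be sketched as follows (illustration only):

sample_rate_hz  = 8000        # 2 x 4 kHz voice bandwidth (Nyquist rate)
bits_per_sample = 8
print("bit rate =", sample_rate_hz * bits_per_sample, "bit/s")   # -> 64000

def quantize_8bit(v, v_min=-1.0, v_max=1.0):
    """Map an analog value in [v_min, v_max] onto one of 2**8 levels (0..255)."""
    level = round((v - v_min) / (v_max - v_min) * 255)
    return max(0, min(255, level))

print(quantize_8bit(0.0))     # a mid-scale input lands near level 128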

The output of the A/D converter is then fed into a driver circuit that contains the appropriate circuitry to turn the light source on and off. The process of turning the light source on and off is known as modulation and will be discussed later in this tutorial series. The light then travels through the fiber and is received by a photodetector that converts the optical signal into an electrical current. A typical photodetector generates a current that is in the micro- or nanoamp range, so amplification and/or signal reshaping is often required. Once the digital signal has been reconstructed, it is converted back into an analog signal using a device called a digital-to-analog converter or DAC. A digital storage device or buffer may be used to temporarily store the digital codes during the conversion process. The DAC accepts an n-bit digital number and outputs a continuous series of discrete voltage steps. All that is needed to smooth the stair-step voltage out is a simple low-pass filter with its cutoff frequency set at the maximum signal frequency as shown in the figure below.

>> Line Coding for Digital Communication Signal format is an important consideration in evaluating the performance of a fiber optic system. The signal format directly affects the detection of the transmitted signals. The accuracy of the reproduced signal depends on the intensity of the received signal, the speed and linearity of the receiver, and the noise levels of the transmitted and received signal. Many coding schemes are used in digital communication systems, each with its own benefits and drawbacks. The most common encoding schemes are the return-to-zero (RZ) and non-return-to-zero (NRZ). The NRZ encoding scheme, for example, requires only one transition per symbol, whereas RZ format requires two transitions for each data bit. This implies that the required bandwidth for RZ must be twice that of NRZ. This is not to say that one is better than the other. Depending on the application, any of the code formats may be more appropriate than the others. For example, in synchronous transmission systems in which large amounts of data are to be sent, clock synchronization between the transmitter and receiver must be ensured. In this case Manchester encoding is used. The transmitter clock is embedded in the data. The receiver clock is derived from the guaranteed transition in the middle of each bit. These are all illustrated in the figure below.
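A compact sketch (illustration only, with an arbitrary Manchester polarity convention) that expands a bit pattern into NRZ, RZ, and Manchester chip sequences, two chips per bit, makes the transition counts described above easy to see:

def encode(bits, scheme):
    chips = []
    for b in bits:
        if scheme == "NRZ":
            chips += [b, b]                        # level held for the whole bit
        elif scheme == "RZ":
            chips += [b, 0]                        # pulse returns to zero mid-bit
        elif scheme == "Manchester":
            chips += ([1, 0] if b else [0, 1])     # guaranteed mid-bit transition
    return chips

data = [1, 0, 1, 1, 0]
for scheme in ("NRZ", "RZ", "Manchester"):
    print(f"{scheme:10s}", encode(data, scheme))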

Digital systems are analyzed on the basis of rise time rather than bandwidth. The rise time of a signal is defined as the time required for the signal to change from 10% to 90% of its maximum value. The system rise time is determined by the data rate and the code format. Depending on which code format is used, the number of transitions required to represent the transmitted data may limit the overall data rate of the system. The system rise time depends on the combined rise-time characteristics of the individual system components. The signal shown in part (a) of the following figure has adequate rise time: even though the pulses are somewhat rounded on the edges, the signal is still detectable. In part (b), however, the transmitted signal takes too long to respond to the input signal.

The effect is exaggerated in the following figure, where, at high data rates, the rise time limitations cause the data to be distorted and thus lost.

To avoid this distortion, an acceptable criterion is to require that the system rise time t_s be no more than 70% of the pulse width T_p. For the RZ format, T_p is half the bit time T, so that

t_s ≤ 0.7 × T_p = 0.7 × (T/2) = 0.35/B_r

where B_r = 1/T is the system bit rate. For the NRZ format, T_p = T and thus

t_s ≤ 0.7 × T_p = 0.7/B_r

The following figure shows transmitted (a) RZ and (c) NRZ pulse trains and the effects of system rise time on (b) the RZ format and (d) the NRZ format.
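A small sketch of the resulting rise-time budget, using the 0.35/B_r and 0.7/B_r limits derived above; the 2.5 Gbit/s example rate is an arbitrary choice for illustration.

```python
# Illustrative sketch: maximum allowable system rise time for RZ and NRZ formats
# at a given bit rate, using the 70%-of-pulse-width criterion from the text.
def max_rise_time_ns(bit_rate_gbps: float, fmt: str) -> float:
    """t_s <= 0.35/Br for RZ (Tp = T/2); t_s <= 0.7/Br for NRZ (Tp = T)."""
    factor = 0.35 if fmt == "RZ" else 0.7
    return factor / bit_rate_gbps          # Br in Gbit/s -> result in ns

for fmt in ("RZ", "NRZ"):
    print(fmt, f"{max_rise_time_ns(2.5, fmt):.3f} ns at 2.5 Gbit/s")
# RZ -> 0.140 ns, NRZ -> 0.280 ns: NRZ tolerates twice the rise time of RZ.
```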

>> Basic Fiber Optic Communication System

Fiber optics is a medium for carrying information from one point to another in the form of light. Unlike copper-based transmission, fiber optics is not electrical in nature. A basic fiber optic system consists of a transmitting device that converts an electrical signal into a light signal, an optical fiber cable that carries the light, and a receiver that accepts the light signal and converts it back into an electrical signal. The following figure shows the schematic of a basic fiber optic system.

>> Fiber Optic Communication System Wavelength Windows

Optical fiber transmission uses wavelengths in the near-infrared portion of the spectrum, just above the visible range, and therefore invisible to the naked eye. Typical optical transmission wavelengths are 850 nm, 1310 nm, and 1550 nm. Lasers are usually used in 1310-nm or 1550-nm single-mode systems, while multimode LEDs are used in 850-nm or 1300-nm multimode systems. These wavelengths were chosen because optical fibers have low attenuation at these wavelengths and because LEDs and lasers are readily available at them.

>> How to Calculate Fiber Optic Signal Loss

Fiber optic loss is typically expressed in decibels (dB) and can be calculated using the following equation:

Loss (dB) = −10 × log10(P_out / P_in)

Loss in optical fiber is also often expressed in decibels per kilometer (dB/km). Take a look at the following sample calculation: a fiber of 100 m length has P_in = 10 µW and P_out = 9 µW. Find the loss in dB/km.
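The sample calculation works out as follows (a quick sketch in Python; the microwatt units are as read above, and in any case only the power ratio matters for the dB value):

```python
# Worked version of the sample calculation above:
# loss(dB) = -10 * log10(Pout / Pin), then normalize by the fiber length in km.
import math

p_in, p_out = 10.0, 9.0        # input/output power, same units (microwatts here)
length_km = 0.1                # 100 m

loss_db = -10 * math.log10(p_out / p_in)
print(f"{loss_db:.3f} dB total")            # ~0.458 dB over 100 m
print(f"{loss_db / length_km:.2f} dB/km")   # ~4.58 dB/km
```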

>> Optical Fiber Types

1. Step Index Multimode Fibers

Step-index multimode fiber has an index-of-refraction profile that steps from high to low as measured from core to cladding. Step-index multimode fibers have a relatively large core diameter and numerical aperture (NA). The core/cladding diameter of a typical multimode fiber used for telecommunication is 62.5/125 µm. The term multimode refers to the fact that multiple modes, or paths, through the fiber are possible. Step-index multimode fiber is used in applications that require high bandwidth (< 1 GHz) over relatively short distances (< 3 km), such as a LAN or a campus network backbone.

2. Graded-Index Multimode Fibers

Graded-index fiber is a compromise between the large core diameter and NA of multimode fiber and the higher bandwidth of single-mode fiber. By creating a core whose index of refraction decreases parabolically from the core center toward the cladding, light traveling through the center of the fiber experiences a higher index than light traveling in the higher-order modes. This means that the higher-order modes travel faster than the lower-order modes, allowing them to catch up; this decreases the amount of modal dispersion and so increases the bandwidth of the fiber.

3. Step-Index Single Mode Fibers

Single-mode step-index fiber allows only one path, or mode, for light to travel within the fiber. In a multimode step-index fiber, the number of propagating modes M_n can be approximated by

M_n ≈ V² / 2

Here V is known as the normalized frequency, or the V-number, which relates the fiber size, the refractive index, and the wavelength. The V-number is given by

V = (2πa / λ) × n1 × √(2Δ)

In this equation, a is the fiber core radius, λ is the operating wavelength, n1 is the core index, and Δ is the relative refractive-index difference between core and cladding. It can be shown that by reducing the diameter of the fiber to the point at which the V-number is less than 2.405, higher-order modes are effectively extinguished and single-mode operation is possible. The core diameter of a typical single-mode fiber is between 5 µm and 10 µm, with a 125-µm cladding. Single-mode fibers are used in applications in which low signal loss and high data rates are required, such as long spans where repeater/amplifier spacing must be maximized.
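As a quick sketch, the V-number formula can be evaluated for representative fibers; the core radii, indices, and index differences below are assumed typical values, not data from the article.

```python
# Illustrative sketch (assumed parameter values): compute the V-number, check for
# single-mode operation (V < 2.405), and for a multimode fiber estimate the
# number of propagating modes with Mn ~ V**2 / 2.
import math

def v_number(core_radius_um, wavelength_um, n1, delta):
    return (2 * math.pi * core_radius_um / wavelength_um) * n1 * math.sqrt(2 * delta)

# Representative single-mode fiber at 1310 nm (assumed values)
v_sm = v_number(core_radius_um=4.1, wavelength_um=1.31, n1=1.468, delta=0.0033)
print(f"V = {v_sm:.2f}  single-mode: {v_sm < 2.405}")

# Representative 62.5/125-um multimode fiber at 850 nm (assumed values)
v_mm = v_number(core_radius_um=31.25, wavelength_um=0.85, n1=1.48, delta=0.02)
print(f"V = {v_mm:.1f}  approx. modes: {v_mm**2 / 2:.0f}")
```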

Because single-mode fiber allows only one mode or ray to propagate (the lowest-order mode), it does not suffer from modal dispersion like multimode fiber and can therefore be used for higher-bandwidth applications. However, even though single-mode fiber is not affected by modal dispersion, at higher data rates chromatic dispersion can limit performance. This problem can be overcome by several methods: transmit at a wavelength at which glass has a fairly constant index of refraction (~1300 nm), use an optical source such as a distributed-feedback (DFB) laser that has a very narrow output spectrum, use special dispersion-compensating fiber, or use a combination of these methods. Single-mode fiber is used in high-bandwidth, long-distance applications such as long-distance telephone trunk lines, cable TV head-ends, and high-speed local and wide area network (LAN and WAN) backbones.

>> Optical Fiber Dispersion

1. What is Fiber Dispersion?

Dispersion, expressed in terms of the symbol Δt, is defined as pulse spreading in an optical fiber. As a pulse of light propagates through a fiber, elements such as numerical aperture, core diameter, refractive-index profile, wavelength, and laser linewidth cause the pulse to broaden. This places a limit on the overall bandwidth of the fiber, as demonstrated in the following figure.

Dispersion Δt can be determined from the following equation and is measured in units of time, typically nanoseconds or picoseconds. Total dispersion is a function of fiber length: the longer the fiber, the greater the dispersion, so the total dispersion is obtained by multiplying the dispersion per unit length (e.g., ns/km) by the fiber length. The overall effect of dispersion on the performance of a fiber optic system is known as intersymbol interference (ISI), as shown in the following figure.

Inter-symbol interference occurs when the pulse spreading caused by dispersion causes the output pulses of a system to overlap, rendering them undetectable. If an input pulse is caused to spread such that the rate of change of the input exceeds the dispersion limit of the fiber, the output data will become indiscernible.

2. Fiber Optic Dispersion Types

a) Modal Dispersion

Dispersion is generally divided into two categories: modal dispersion and chromatic dispersion. Modal dispersion is defined as pulse spreading caused by the time delay between lower-order modes (modes or rays propagating straight through the fiber, close to the optical axis) and higher-order modes (modes propagating at steeper angles). This is shown in the following figure. Modal dispersion is problematic in multimode fiber, where it limits bandwidth, but it is not a problem in single-mode fiber, where only one mode is allowed to propagate.

b) Chromatic Dispersion

Chromatic dispersion is pulse spreading due to the fact that different wavelengths of light propagate at slightly different velocities through the fiber. All light sources, whether laser or LED, have finite linewidths, which means they emit more than one wavelength. Because the index of refraction of glass fiber is a wavelength-dependent quantity, different wavelengths propagate at different velocities. Chromatic dispersion is typically expressed in units of nanoseconds or picoseconds per (km·nm).

Chromatic dispersion consists of two parts: material dispersion and waveguide dispersion. Material dispersion is due to the wavelength dependence of the index of refraction of glass. Waveguide dispersion is due to the physical structure of the waveguide. In a simple step-index-profile fiber, waveguide dispersion is not a major factor, but in fibers with more complex index profiles it can be more significant. Material dispersion and waveguide dispersion can have opposite signs, depending on the transmission wavelength. In a step-index single-mode fiber, the two effectively cancel each other at 1310 nm, yielding zero dispersion. This makes very high-bandwidth communication possible at this wavelength. The drawback is that, even though dispersion is minimized at 1310 nm, attenuation is not: glass fiber exhibits minimum attenuation at 1550 nm. Coupling that with the fact that erbium-doped fiber amplifiers (EDFAs) operate in the 1550-nm range makes it obvious that, if the zero-dispersion property of 1310 nm could be shifted to coincide with the 1550-nm transmission window, high-bandwidth long-distance communication would be possible. With this in mind, zero-dispersion-shifted fiber was developed.

When considering the dispersion from different causes, we can approximate the total dispersion by

Δt_tot = [ Σ (Δt_n)² ]^(1/2)

where Δt_n represents the dispersion due to the various components that make up the system. The transmission capacity of fiber is typically expressed as a bandwidth–distance product. For example, the bandwidth–distance product for a typical 62.5/125-µm (core/cladding diameter) multimode fiber operating at 1310 nm might be expressed as 600 MHz·km. The approximate bandwidth of a fiber can be related to the total dispersion by

BW ≈ 0.35 / Δt_tot
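A short sketch of how the pieces combine; the link length, modal dispersion, chromatic dispersion coefficient, and LED linewidth below are assumed example values, not figures from the article.

```python
# Illustrative sketch (assumed values): combine individual dispersion contributions
# in quadrature to get the total dispersion, then estimate the dispersion-limited
# bandwidth with BW ~ 0.35 / dt_total.
import math

link_length_km = 2.0
modal_ns_per_km = 1.0           # assumed modal dispersion of a multimode fiber
chromatic_ps_per_nm_km = 100.0  # assumed chromatic dispersion near 850 nm
source_linewidth_nm = 40.0      # assumed LED spectral width

dt_modal_ns = modal_ns_per_km * link_length_km
dt_chrom_ns = chromatic_ps_per_nm_km * source_linewidth_nm * link_length_km / 1000

dt_total_ns = math.sqrt(dt_modal_ns**2 + dt_chrom_ns**2)
bw_mhz = 0.35 / (dt_total_ns * 1e-9) / 1e6

print(f"total dispersion ~ {dt_total_ns:.2f} ns")   # ~8.2 ns for these values
print(f"approx. bandwidth ~ {bw_mhz:.0f} MHz")      # ~42 MHz over the 2 km link
```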

c) Dispersion Shifted Fiber

By altering the design of the waveguide, we can increase the magnitude of the waveguide dispersion so as to shift the zero-dispersion wavelength to 1550 nm. This type of fiber has an index profile that resembles a W and hence is sometimes referred to as W-profile fiber, as shown below.

Although this type of fiber works well at the zero-dispersion wavelength, in systems in which multiple wavelengths are transmitted, such as wavelength-division multiplexing systems, signals transmitted at different wavelengths around 1550 nm can interfere with one another, resulting in a phenomenon called four-wave mixing, which degrades system performance. However, if the waveguide structure of the fiber is modified so that the waveguide dispersion is further increased, the zero-dispersion point can be pushed past 1600 nm (outside the EDFA operating window). This means that the total chromatic dispersion can still be substantially lowered in the 1550-nm range without these performance problems. This type of fiber is known as nonzero-dispersion-shifted fiber. The following figure compares the chromatic (material plus waveguide) dispersion of standard single-mode fiber and dispersion-shifted fiber.

What Is a Wavelength Selective Switch (WSS)? March 14, 2011

1. What Is a Wavelength Selective Switch (WSS)?

WSS stands for Wavelength Selective Switch. The WSS has become the heart of the modern reconfigurable DWDM Agile Optical Network (AON). A WSS can dynamically route, block, and attenuate all DWDM wavelengths within a network node. The following figure shows the WSS's functionality.

The above figure shows that a WSS consists of a single common optical port and N opposing multi-wavelength ports, where each DWDM wavelength input from the common port can be switched (routed) to any one of the N multi-wavelength ports, independent of how all other wavelength channels are routed. This wavelength switching (routing) process can be changed dynamically through an electronic communication control interface on the WSS. So, in essence, a WSS switches DWDM channels, or wavelengths. There is also a variable attenuation mechanism in the WSS for each wavelength, so each wavelength can be independently attenuated for channel power control and equalization.

2. How Does a WSS Work?

There are several WSS switching-engine technologies on the market today; here we will demonstrate a MEMS-based design. The different switching technologies are discussed in the next section.

A) 1×2 Configuration

The following figure shows a diffraction-grating and MEMS based 1×2 wavelength selective switch.

The light from a fiber is collimated by a lens with focal length f and demultiplexed by diffraction off the grating. The direction of the beam after the grating depends on the wavelength λ0 of the beam. The diffracted beams then pass through the lens a second time, and the spectrally resolved light is focused onto the reflective linear MEMS device, also referred to as a 1D (one-dimensional) MEMS device. The MEMS device then either changes the amplitude (attenuates) or the direction of the beam. The reflected light passes through the lens, is wavelength-multiplexed by diffraction off the grating, and finally the lens couples the light back into the fiber. The output light is separated from the input light by a circulator.

B) 1×N Wavelength Selective Switch

The 1×N switch can be considered a generalization of the 1×2 switch. Because every wavelength in the 1×N switch can be switched to any one of the N output ports, this switch can be used in a fully flexible OADM (Optical Add-Drop Multiplexer) with multiple add/drop fiber ports, each of which carries single or multiple wavelengths. 1×N switches can be cascaded to form larger architectures, and an N×N wavelength selective matrix can be built by interconnecting back-to-back 1×N switches. Let's look at the optical design of the 1×N wavelength selective switch (WSS).

The 1×N switch design uses an additional lens in a Fourier-transform configuration to perform a space-to-angle conversion in the first stage of the switch. The 1×N switch also requires tilt mirrors with N different tilt angles; these are usually implemented as analog mirrors. Here is how the design works. The common input fiber enters the switch at point A, where the light is collimated by a microlens.

The following lens images the collimated beam onto the diffraction grating at point C. The wavelength-dispersed beams then fall onto the MEMS device plane D. On plane D, the beams are reflected with a certain tilt angle depending on the micromirror settings. All reflected beams are focused on point B again, where the angle-to-space conversion section images the beam onto the output fiber. Each output corresponds to a specific tilt angle of the micromirrors. This MEMS-based switch can switch as many as 128 wavelengths with 50 GHz spacing. The total insertion loss is less than 6 dB. It uses a 100-mm focal-length mirror and an 1100 lines/mm grating. The micromirrors can be actuated by ±8° using a voltage of less than 115 V, and the switch can be used as a variable attenuator by detuning the tilt angle of the micromirrors.

3. WSS Switching Engine Technologies

The optical design we discussed in the previous section is based on MEMS micromirrors. Here we will discuss several more switching-engine technologies.

A. MEMS Switching Engine

The micromirror array is fabricated in silicon, using wafer-scale lithographic processes leveraged from the semiconductor industry.
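A useful back-of-the-envelope check of how the optical design sets the micromirror pitch: with the 100-mm focal length, 1100 lines/mm grating, and 50-GHz channel spacing quoted above, the spacing of adjacent channels on the MEMS plane follows from the grating's angular dispersion. The sketch below assumes a first-order Littrow geometry and is only an illustrative estimate, not the actual design values of any particular device.

```python
# Illustrative sketch (assumed Littrow geometry): estimate how far apart adjacent
# 50-GHz channels land on the MEMS mirror plane for an 1100 lines/mm grating and
# a 100 mm focal length.  Angular dispersion: dtheta/dlambda = 1/(d * cos(theta)).
import math

c = 3e8                               # speed of light, m/s
wavelength = 1550e-9                  # m
groove_spacing = 1e-3 / 1100          # m per groove (~0.91 um)
focal_length = 0.100                  # m
channel_spacing_hz = 50e9

# First-order Littrow condition: 2 * sin(theta) = lambda / d
theta = math.asin(wavelength / (2 * groove_spacing))
angular_dispersion = 1 / (groove_spacing * math.cos(theta))   # rad per metre of wavelength

dlambda = wavelength**2 * channel_spacing_hz / c              # 50 GHz -> ~0.4 nm
spot_pitch = focal_length * angular_dispersion * dlambda

print(f"Littrow angle ~ {math.degrees(theta):.1f} deg")
print(f"channel pitch on MEMS plane ~ {spot_pitch * 1e6:.0f} um per mirror")
```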

When a voltage is applied to an electrode, electrostatic attraction causes the mirror to tilt. Attenuation is provided by tilting the mirror so as to offset the beam slightly at the output fiber. An angle-to-offset lens converts the beam tilt into a beam displacement at an input/output fiber array.

The advantage is that an offset perpendicular to the wavelength-dispersion direction provides attenuation with no change in channel shape, as shown below.

The tradeoff among mechanical resonance frequency (hinge stiffness), driving voltage, and tilt angle often results in high driving voltages.

B. Liquid Crystal (LC) Switching Engine Principle

A liquid crystal cell selectively controls the polarization state of transmitted light by application of a control voltage, as shown below.

For the switching process to work, the liquid crystal (LC) cell must be followed by a polarization-dependent optical element, such as a polarization beam splitter (PBS), to change the path of the transmitted light based on its polarization.

Randomly polarized input light must first be separated into two orthogonal polarizations.

In a binary switching configuration, N liquid crystal (LC) cells can select among 2^N output ports, and an extra LC cell and polarizer can be used to provide attenuation.
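A trivial sketch of the binary addressing this implies: each LC cell flips the beam between two paths, so the N on/off drive states form the bits of the output-port number.

```python
# Illustrative sketch: in a binary LC switching tree, N cells address 2**N ports.
def output_port(cell_states):
    """cell_states: sequence of 0/1 drive states, one per LC cell (MSB first)."""
    port = 0
    for state in cell_states:
        port = (port << 1) | state     # each cell halves the remaining choice
    return port

print(output_port([1, 0, 1]))   # 3 cells -> 8 possible ports; this drive selects port 5
```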

C. LCoS (Liquid Crystal on Silicon)

The following two graphics show Liquid Crystal on Silicon (LCoS) technology and the optical design of a wavelength selective switch based on LCoS switching technology. An LCoS-based switch engine uses an array of phase-controlled pixels to implement beam steering by creating a linear optical phase retardation in the direction of the intended deflection.

LCoS is a display technology that combines liquid crystal and semiconductor technologies to create a high-resolution, solid-state display engine. In the WSS design, the LCoS is used to control the phase of light at each pixel to produce an electrically programmable grating. This grating controls the beam deflection in the vertical direction by varying either the pitch or the blaze of the grating, whilst the width of the channel is determined by the number of pixel columns selected in the horizontal direction. The WSS design incorporates polarization diversity, control of the mode size, and 4-f wavelength optical imaging in the dispersive axis of the LCoS, providing integrated switching and optical power control. In operation, the light passes from a fiber array through the polarization-diversity optics, which both separates and aligns the orthogonal polarization states to the high-efficiency s-polarization state of the diffraction grating. The light from the input fiber is reflected from the imaging mirror and then angularly dispersed by the grating, which reflects the light back to the cylindrical mirror that directs each optical frequency (wavelength) to a different portion of the LCoS. The path for each wavelength is then retraced upon reflection from the LCoS, with the beam-steering image applied on the LCoS directing the light to a particular port of the fiber array.
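A rough sketch of the beam-steering relationship described above: writing a phase ramp with a longer period (more pixels per 2π of phase) gives a smaller first-order deflection angle, following the ordinary grating relation sin(θ) = λ/Λ. The 8-µm pixel pitch is an assumed value for illustration only.

```python
# Illustrative sketch (assumed pixel pitch): a blazed phase ramp written on the LCoS
# acts as a programmable grating; the period, an integer number of pixels, sets the
# first-order deflection angle via sin(theta) = lambda / period.
import math

wavelength_um = 1.55
pixel_pitch_um = 8.0            # assumed LCoS pixel pitch

for pixels_per_period in (4, 8, 16, 32):
    period_um = pixels_per_period * pixel_pitch_um
    theta_deg = math.degrees(math.asin(wavelength_um / period_um))
    print(f"{pixels_per_period:3d}-pixel period -> {theta_deg:.2f} deg deflection")
```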

4. Switch Engine Technologies and Minimum Achievable Spot Sizes

Some switching-engine technologies require a minimum beam size to function properly and thus place a limit on the minimum optical system length for a given channel passband width, channel spacing, and dispersive element. The advantages of a small optical system include a smaller overall module footprint, greater functional density, lower packaging cost, and greater tolerance to mechanical shock and environmental conditions. The following figure shows the minimum spot size for various switch-engine technologies.
