# Hilbert Transforms, Analytic Functions, and Analytic Signals

Clay S. Turner 3/2/05 V1.0

Introduction:
Hilbert transforms are essential to understanding many modern modulation methods. These transforms effectively phase shift a function by 90 degrees, independent of frequency. Of course, practical implementations have limitations. For example, phase shifting a low frequency implies a long delay, which in turn implies a computational process that maintains a long history of the signal. Hilbert transforms are useful for creating signals with one-sided Fourier transforms. The concepts of analytic functions and analytic signals will also be shown to be related through Hilbert transforms.

90 Degree Phase Shifters:
We will take a spectral approach and start with an ideal 90-degree phase shifter. To do this, we would like a transform that maps a sinusoid to another sinusoid with the same amplitude and frequency but phase shifted by 90 degrees. So recalling the trigonometric identities:

$$\cos(t - 90^\circ) = \sin(t), \qquad \sin(t - 90^\circ) = -\cos(t) \tag{1}$$

we desire our transform to do the following:

$$\cos(t) \Rightarrow \sin(t), \qquad \sin(t) \Rightarrow -\cos(t) \tag{2}$$

These two conditions will also cause a sinusoid of arbitrary phase to be shifted by 90 degrees. This follows since it may be decomposed into a sine and a cosine, and each of these components will be shifted by 90 degrees. To show this, start with a sinusoid of arbitrary phase $\theta$ and decompose it into sine and cosine components:

$$\cos(t + \theta) = \cos(t)\cos(\theta) - \sin(t)\sin(\theta) \tag{3}$$

Next transform (phase shift) the two components and reduce:

$$\cos(t - 90^\circ)\cos(\theta) - \sin(t - 90^\circ)\sin(\theta) = \sin(t)\cos(\theta) + \cos(t)\sin(\theta) = \sin(t + \theta) = \cos(t + \theta - 90^\circ) \tag{4}$$

So we can see the effect of the transformation is a 90-degree phase shift regardless of the original phase. The linearity property was used above in the arbitrary phase case; the transform will also obey superposition.

So let's look at the transform in the frequency domain. We also desire the transform to be expressible as a linear convolution. By modeling our transform this way, we can then use a powerful theorem from Fourier analysis that equates convolution in one domain to multiplication in the other. Recalling the Fourier pairs for sinusoids:

$$\cos(\omega_0 t) \Leftrightarrow \frac{\delta(\omega - \omega_0) + \delta(\omega + \omega_0)}{2}, \qquad \sin(\omega_0 t) \Leftrightarrow \frac{\delta(\omega - \omega_0) - \delta(\omega + \omega_0)}{2j} \tag{5}$$

our frequency domain phase shift requirements become (after canceling out the twos and moving $j$ over):

$$[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)] \times H(\omega) = -j\,[\delta(\omega - \omega_0) - \delta(\omega + \omega_0)]$$
$$[\delta(\omega - \omega_0) - \delta(\omega + \omega_0)] \times H(\omega) = -j\,[\delta(\omega - \omega_0) + \delta(\omega + \omega_0)] \tag{6}$$

Recalling that the Dirac delta function is a distribution whose non-zero support is over an infinitely narrow region, we then only have to compare 4 points to solve this set of equations. It is helpful to look at positive frequencies separately from the negative ones; doing this, we only compare two points at a time. So for $\omega = \omega_0 > 0$, we find $\delta(0) \times H(\omega) = -j\delta(0)$, or simply $H(\omega) = -j$. Likewise for $\omega = -\omega_0 < 0$, we find $H(\omega) = j$. Finally for $\omega = \omega_0 = 0$, we find $\delta(0) \times H(0) = 0$, hence $H(0) = 0$. After putting these cases together, we find the frequency domain solution for the 90-degree phase shifter is:

$$H(\omega) = -j \times \mathrm{Sgn}(\omega) \tag{7}$$

The Signum function is defined as:

$$\mathrm{Sgn}(\omega) = \begin{cases} 1 & \omega > 0 \\ 0 & \omega = 0 \\ -1 & \omega < 0 \end{cases} \tag{8}$$
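The piecewise solution above is easy to sanity-check numerically. Here is a minimal sketch (the grid size and test frequency are illustrative choices, not from the paper) that applies $H(\omega) = -j\,\mathrm{Sgn}(\omega)$ in the discrete Fourier domain and confirms that a cosine comes out as a sine:

```python
import numpy as np

# Apply H(w) = -j*sgn(w) (eq. 7) bin-by-bin in the discrete Fourier
# domain.  N and the 8-cycle test frequency are illustrative choices.
N = 1024
t = np.arange(N)
x = np.cos(2 * np.pi * 8 * t / N)      # integer cycles avoid leakage

X = np.fft.fft(x)
H = -1j * np.sign(np.fft.fftfreq(N))   # signed frequency per bin
y = np.fft.ifft(H * X).real

expected = np.sin(2 * np.pi * 8 * t / N)
print(np.max(np.abs(y - expected)))    # ~0: the shifter works
```

Note that the zero-frequency bin gets $H = 0$, matching the $H(0) = 0$ case above.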

Now we need the time domain version of the phase shifter so we can express the phase shifter as a convolution. This is achieved just by finding the inverse Fourier transform of $H(\omega)$. The result is:

$$h(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} -j \times \mathrm{sgn}(\omega)\, e^{j\omega t}\, d\omega = \frac{j}{2\pi}\int_{-\infty}^{0} e^{j\omega t}\, d\omega - \frac{j}{2\pi}\int_{0}^{\infty} e^{j\omega t}\, d\omega = \frac{1}{\pi t} \tag{9}$$

So when we wish to phase shift a function, for example $v(t)$, by 90 degrees, we just convolve it with $1/(\pi t)$.

The inverse Hilbert transform may be thought of as Hilbert transforming a function three times. Two shifts result in a 180-degree phase shift, which is simple negation; i.e., shifting 270 degrees is the same as shifting negative 90 degrees. So a negative 90-degree phase shift is simply the negation of a 90-degree phase shift! So for the inverse Hilbert transform, we just convolve with $-1/(\pi t)$.

Hilbert Transform Definitions:
Some papers start by defining the Hilbert transform as an integral transform. Since we started from a phase shifter point of view and then modeled it as a convolution, we immediately have two representations for the direct and inverse transforms. Of course, an elementary change of variable converts one representation into the other. Recalling the standard integral form for the convolution of two functions:

$$u(t) = \int_{-\infty}^{\infty} v(\xi)\, h(t - \xi)\, d\xi \tag{10}$$

Since convolution is commutative, we also have:

$$u(t) = \int_{-\infty}^{\infty} h(\xi)\, v(t - \xi)\, d\xi \tag{11}$$

So in terms of convolution, $v(t) * h(t)$, our phase shifter has the following integral forms:

$$v(t) = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{u(\xi)}{t - \xi}\, d\xi = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{u(t - \xi)}{\xi}\, d\xi \tag{12}$$
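The three-shifts argument can be checked with the same FFT-based shifter; a minimal sketch (the signal and sizes are illustrative choices):

```python
import numpy as np

def shift90(x):
    # FFT-based 90-degree shifter: multiply each bin by -j*sgn(w)
    H = -1j * np.sign(np.fft.fftfreq(len(x)))
    return np.fft.ifft(H * np.fft.fft(x)).real

N = 1024
t = np.arange(N)
x = np.cos(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 17 * t / N)

once = shift90(x)        # 90 degrees
twice = shift90(once)    # 180 degrees: simple negation of x
thrice = shift90(twice)  # 270 degrees: negation of the 90-degree shift

print(np.max(np.abs(twice + x)))      # ~0
print(np.max(np.abs(thrice + once)))  # ~0
```

So negating a single shift inverts the transform, mirroring convolution with $-1/(\pi t)$.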

Given a function $u(t)$, its Hilbert transform $v(t)$ is:

$$v(t) = \frac{1}{\pi} P\!\int_{-\infty}^{\infty} \frac{u(\xi)}{t - \xi}\, d\xi = \frac{1}{\pi} P\!\int_{-\infty}^{\infty} \frac{u(t - \xi)}{\xi}\, d\xi \tag{13}$$

The inverse Hilbert transform is likewise similarly defined:

$$u(t) = \frac{-1}{\pi} P\!\int_{-\infty}^{\infty} \frac{v(\xi)}{t - \xi}\, d\xi = \frac{-1}{\pi} P\!\int_{-\infty}^{\infty} \frac{v(t - \xi)}{\xi}\, d\xi \tag{14}$$

Since the basic integrals are improper, they are to be evaluated as Cauchy Principal Value (CPV) integrals. This implies a careful limiting process. For the 1st variant of the transform, this basically means:

$$P\!\int_{-\infty}^{\infty} \frac{u(\xi)}{t - \xi}\, d\xi = \lim_{\substack{R \to \infty \\ \varepsilon \to 0}} \left[ \int_{-R}^{t - \varepsilon} \frac{u(\xi)}{t - \xi}\, d\xi + \int_{t + \varepsilon}^{R} \frac{u(\xi)}{t - \xi}\, d\xi \right] \tag{15}$$

Here epsilon is an increasingly small distance from the singularity, taken symmetrically about the singularity. A similar limiting process is also used with the 2nd form.

Since integration is a linear operation, we see that Hilbert transformation is also a linear operation. This means any function expanded into a sum of sinusoids can be easily Hilbert transformed by doing the appropriate 90-degree phase shifts on each of the components. Unlike other types of transforms, Hilbert transforms leave the function in the same domain as the original – the Hilbert transform of a temporal function is itself temporal.

An example of evaluating a CPV integral:
Let's evaluate:

$$P\!\int_{-\infty}^{\infty} \frac{dx}{x} \tag{16}$$

Using the limit approach, we find:

$$P\!\int_{-\infty}^{\infty} \frac{dx}{x} = \lim_{\varepsilon \to 0} \left[ \int_{-\infty}^{-\varepsilon} \frac{dx}{x} + \int_{\varepsilon}^{\infty} \frac{dx}{x} \right] = \lim_{\varepsilon \to 0} \left[ -\int_{\varepsilon}^{\infty} \frac{dy}{y} + \int_{\varepsilon}^{\infty} \frac{dx}{x} \right] = \lim_{\varepsilon \to 0} 0 = 0 \tag{17}$$

A change of variable, $y = -x$, was made in the middle step, which results in exact cancellation.
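The symmetric limiting process of (17) can be imitated numerically with finite stand-ins for $R$ and $\varepsilon$ (the values below are illustrative):

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoid rule (avoids version-specific numpy helpers)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

eps, R, n = 1e-6, 1e6, 1_000_001
x = np.geomspace(eps, R, n)                 # grid on (eps, R)
right = trapezoid(1.0 / x, x)               # ~ +ln(R/eps), large
left = trapezoid(1.0 / -x[::-1], -x[::-1])  # mirror grid on (-R, -eps)
print(right, right + left)                  # halves cancel: sum ~0
```

Each half alone diverges as $\varepsilon \to 0$, but the mirrored grids cancel term by term, which is exactly the $y = -x$ cancellation above.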

A simple example of finding a Hilbert transform via convolution:
Let's find the Hilbert transform of $u(t) = \cos(t)$. Inserting $\cos(t)$ into the 2nd form of the Hilbert transform integral, we obtain:

$$v(t) = \frac{1}{\pi} P\!\int_{-\infty}^{\infty} \frac{\cos(t - \xi)}{\xi}\, d\xi \tag{18}$$

Using the trigonometric identity for the cosine of a sum of angles, we now find:

$$v(t) = \frac{1}{\pi} P\!\int_{-\infty}^{\infty} \left[ \cos(t)\frac{\cos(\xi)}{\xi} + \sin(t)\frac{\sin(\xi)}{\xi} \right] d\xi \tag{19}$$

Now, using your favorite integral table or an adroit integration technique [see Appendix A], obtain the following identities:

$$P\!\int_{-\infty}^{\infty} \frac{\cos(\xi)}{\xi}\, d\xi = 0, \qquad \int_{-\infty}^{\infty} \frac{\sin(\xi)}{\xi}\, d\xi = \pi \tag{20}$$

So now the solution is easy:

$$v(t) = \cos(t)\,\frac{1}{\pi} P\!\int_{-\infty}^{\infty} \frac{\cos(\xi)}{\xi}\, d\xi + \sin(t)\,\frac{1}{\pi}\int_{-\infty}^{\infty} \frac{\sin(\xi)}{\xi}\, d\xi = \sin(t) \tag{21}$$

Thus we find $v(t) = \sin(t)$; hence the Hilbert transform of $\cos(t)$ is $\sin(t)$. Likewise, a similar process may be used to find that the Hilbert transform of $\sin(t)$ is $-\cos(t)$. So this verifies that convolution with the kernel $1/(\pi t)$ does indeed perform a 90-degree phase shift operation.
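The same result drops out of `scipy.signal.hilbert`, which returns the analytic signal $u + jH[u]$, so its imaginary part is the Hilbert transform. A minimal sketch (grid choices are illustrative; an integer number of cycles avoids edge effects):

```python
import numpy as np
from scipy.signal import hilbert

N = 4096
t = np.arange(N)
u = np.cos(2 * np.pi * 32 * t / N)

v = np.imag(hilbert(u))     # imaginary part = Hilbert transform of u
err = np.max(np.abs(v - np.sin(2 * np.pi * 32 * t / N)))
print(err)                  # ~0: H[cos] = sin, as derived above
```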

Analytic Functions
The theory of Hilbert transformation is intimately connected with complex analysis. We will use the standard paradigm of a complex variable composed of two real components combined in the following way:

$$z = x + jy \tag{22}$$

Here $x$ and $y$ are real values and $j$ represents $\sqrt{-1}$. Similarly, we can compose a complex valued function by combining two real valued functions (also standard):

$$f(z) = u(x, y) + j \times v(x, y) \tag{23}$$

Next we will utilize the standard definition of a derivative from real analysis and extend it to the complex case. This derivative limit is:

$$f'(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z} \tag{24}$$

A difficulty with this definition is that the result may depend on the direction taken by $\Delta z$; hence, we would like to know under what conditions the derivative is path independent. So we will now look at a requirement for isotropic differentiation of complex functions: we will take two orthogonal paths and require the resulting limits to be the same. This will yield path independence and a derivative definition that gives consistent results. Recall $\Delta z = \Delta x + j\Delta y$. So we will first let $\Delta y = 0$ and move along the x-axis:

$$f'(z) = \lim_{\Delta x \to 0} \frac{u(x + \Delta x, y) - u(x, y) + j \times [v(x + \Delta x, y) - v(x, y)]}{\Delta x} = \frac{\partial u}{\partial x} + j\frac{\partial v}{\partial x} \tag{25}$$

Next we will move along the y-axis and let $\Delta x = 0$:

$$f'(z) = \lim_{j\Delta y \to 0} \frac{u(x, y + \Delta y) - u(x, y) + j \times [v(x, y + \Delta y) - v(x, y)]}{j\Delta y} = \frac{\partial v}{\partial y} - j\frac{\partial u}{\partial y} \tag{26}$$
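Before imposing any conditions, it is worth watching path independence fail for a non-analytic function. A minimal sketch using $f(z) = z^*$, the complex conjugate (the test point and step are illustrative):

```python
# For the non-analytic f(z) = z*, the difference quotient of eq. (24)
# depends on the direction of dz.  Test point and step are illustrative.
f = lambda z: z.conjugate()
z0, h = 1 + 1j, 1e-8

along_x = (f(z0 + h) - f(z0)) / h              # dz along the x-axis
along_y = (f(z0 + 1j * h) - f(z0)) / (1j * h)  # dz along the y-axis
print(along_x, along_y)                        # 1 vs -1: path dependent
```

An analytic function would give the same limit along both paths; this one does not, so $z^*$ has no complex derivative.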

The requirement that the two limits be the same results in the Cauchy-Riemann relations. These are found by setting the two limits equal to each other and equating the real and imaginary parts. The C-R relations are:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} \tag{27}$$

If a complex function obeys the C-R relations and has differentiable components, it is said to be analytic. A function may be analytic over just part of its domain. However, a function can't be analytic at just a single point – if it is analytic at a point, then it must also be analytic in an open neighborhood around that point.

Since we composed our complex function by summing together two functions, one purely real and the other purely imaginary, it would seem that our complex function has two degrees of freedom. However, the function's being analytic creates an interesting restriction. If we recall that $x = (z + z^*)/2$ and $y = (z - z^*)/2j$, and $f(x, y) = u(x, y) + j \times v(x, y)$, then let's look at the derivative with respect to $z^*$. Using the chain rule, we find the following expansion:

$$\frac{\partial f}{\partial z^*} = \frac{\partial f}{\partial u}\left( \frac{\partial u}{\partial x}\frac{\partial x}{\partial z^*} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial z^*} \right) + \frac{\partial f}{\partial v}\left( \frac{\partial v}{\partial x}\frac{\partial x}{\partial z^*} + \frac{\partial v}{\partial y}\frac{\partial y}{\partial z^*} \right) \tag{28}$$

Next use:

$$\frac{\partial f}{\partial u} = 1, \quad \frac{\partial f}{\partial v} = j, \quad \frac{\partial x}{\partial z^*} = \frac{1}{2}, \quad \frac{\partial y}{\partial z^*} = \frac{j}{2} \tag{29}$$

So our derivative now has the form:

$$\frac{\partial f}{\partial z^*} = \frac{1}{2}\left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) + \frac{j}{2}\left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right) \tag{30}$$

If the function obeys the C-R relations, then this partial derivative is simply zero! So in a sense, analytic functions are independent of $z^*$.

Another neat quality of analytic functions is that the components $u(x, y)$ and $v(x, y)$ each solve Laplace's equation. Specifically, $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$ and $\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} = 0$. These follow directly from differentiating the C-R relations. Starting with the C-R relations:

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \quad \text{and} \quad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} \tag{31}$$

After differentiating the first relation with respect to $x$ and the second relation with respect to $y$, we find:

$$\frac{\partial^2 u}{\partial x^2} = \frac{\partial^2 v}{\partial y\,\partial x} \quad \text{and} \quad \frac{\partial^2 v}{\partial x\,\partial y} = -\frac{\partial^2 u}{\partial y^2} \tag{32}$$

Now equating the mixed partials, we find $u_{xx} = -u_{yy}$, so we see that Laplace's equation is satisfied. A similar process can be applied to make the case for $v(x, y)$. Since both components solve Laplace's equation, their sum will also, so we have the following neat theorem for analytic functions:

$$\nabla^2 f(x, y) = 0 \tag{33}$$

In our proof of this, we assumed the equivalence of mixed partials and the existence of higher order derivatives. In advanced texts on complex analysis, these are shown to be always true for analytic functions.

An Analytic Function Example
Example: Show $e^{jz}$ is analytic. Using the exponent addition rule and Euler's identity, we find:

$$e^{jz} = e^{jx - y} = e^{-y}e^{jx} = e^{-y}(\cos(x) + j\sin(x)) = e^{-y}\cos(x) + je^{-y}\sin(x) \tag{34}$$

So we have:

$$u(x, y) = e^{-y}\cos(x), \qquad v(x, y) = e^{-y}\sin(x) \tag{35}$$

Now find the four partial derivatives:

$$\frac{\partial u}{\partial x} = -e^{-y}\sin(x), \quad \frac{\partial v}{\partial y} = -e^{-y}\sin(x), \quad \frac{\partial v}{\partial x} = e^{-y}\cos(x), \quad \frac{\partial u}{\partial y} = -e^{-y}\cos(x) \tag{36}$$

Plugging these into the C-R relations, one finds they are obeyed for all x and y. Such a function is analytic everywhere and is sometimes called an entire function.
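A quick numerical cross-check of the C-R relations for $e^{jz}$, using central finite differences at an arbitrary (illustrative) point:

```python
import numpy as np

def u(x, y): return np.exp(-y) * np.cos(x)   # real part of exp(jz)
def v(x, y): return np.exp(-y) * np.sin(x)   # imaginary part

x0, y0, h = 0.7, -0.3, 1e-6                  # illustrative point & step
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

print(abs(ux - vy), abs(vx + uy))   # both ~0: C-R relations hold
```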

Analytic Signals
Related to the concept of analytic functions is the idea of an analytic signal. One way to make one is by evaluating an analytic function along one of the axes. The choice of axis depends on the analytic function; how to choose which axis will be made clear shortly. If the wrong choice is made, the result will not be an analytic signal. For our example function, I will use the real axis. Our above analytic function has the corresponding analytic signal:

$$f(x) = u(x, 0) + j \times v(x, 0) = \cos(x) + j\sin(x) \tag{37}$$

Analytic signals have several properties that prove important in signal processing. The first property is that analytic signals have one-sided Fourier transforms. A second property is that analytic signals obey a generalization of Euler's identity. A third property is that the analytic signal's imaginary portion is the Hilbert transform of its real portion.

Assuming our third property is true, then given an analytic signal $\psi(t) = u(t) + j \times v(t)$, its Fourier transform is $\Psi(\omega) = U(\omega) + j \times V(\omega)$. Next use the spectral form of the Hilbert transformation to find:

$$\Psi(\omega) = U(\omega) + j \times (-j)\,\mathrm{Sgn}(\omega) \times U(\omega) = (1 + \mathrm{Sgn}(\omega)) \times U(\omega) \tag{38}$$

And we can easily see that $\Psi(\omega) = 0$ when $\omega < 0$, so this signal has no negative frequency components. Likewise, starting with the conjugate, $\psi^*(t)$, we find its spectral representation to be $\Psi(\omega) = (1 - \mathrm{Sgn}(\omega)) \times U(\omega)$, which has no positive frequency components.

The components of analytic signals also obey a generalization of Euler's identity:

$$u(t) = \frac{\psi(t) + \psi^*(t)}{2}, \qquad v(t) = \frac{\psi(t) - \psi^*(t)}{2j} \tag{39}$$

And of course:

$$\psi(t) = u(t) + j \times v(t), \qquad \psi^*(t) = u(t) - j \times v(t) \tag{40}$$

For Euler's identity itself, just let $\psi(t) = e^{j\omega t} = \cos(\omega t) + j \times \sin(\omega t)$.
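The one-sided spectrum property is easy to see discretely: the FFT of $\cos + j\sin = e^{j\omega t}$ has no energy in the negative-frequency bins. A minimal sketch (sizes are illustrative):

```python
import numpy as np

N = 256
t = np.arange(N)
psi = np.exp(1j * 2 * np.pi * 10 * t / N)   # u + j*v with v = H[u]

Psi = np.fft.fft(psi)
neg = Psi[np.fft.fftfreq(N) < 0]            # negative-frequency bins
print(np.max(np.abs(neg)))                  # ~0: one-sided spectrum
```

By contrast, the FFT of the real part alone would split its energy evenly between the bins at plus and minus the test frequency.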

For some analytic functions. 0=∫ ∞ ψ ( z) ψ ( x) ψ ( z) dz + P ∫ dx + ∫ dz − τ − τ − τ z x z A B −∞ (43) Earlier I mentioned how the choice of axis matters. The last integral (sometimes called 3/2/2005 Page 10 of 13 . the Cauchy-Goursat theorem says this equals zero. Since we are assuming ψ ( z ) is analytic on and inside of c and the path takes a detour around the singularity.Hilbert Transforms. the right horizontal line segment. we will have to evaluate a not so obvious integral. and the inner semicircle with radius epsilon. Hence. This is where we choose the axis so that Jordan’s Lemma may be applied to the 1st integral to make it zero. we are able to combine the middle two integrals into a CPV integral. So next we expand this integral into 4 path integrals whose sum is zero. the left horizontal line segment. we will use the y axis instead of the x axis. Analytic Functions and Analytic Signals To show the Hilbert transform property of analytic signals. Thus we have: ε R ψ ( z) ψ ( x) ψ ( x) ψ ( z) dz + ∫ dx + ∫ dx + ∫ dz 0=∫ z −τ x −τ x −τ z −τ A −R B ε (42) Since we will look at the limiting case where the limits are such that the outer semicircle becomes infinitely large and the inner semicircle likewise becomes infinitely small. We want to evaluate the following integral over the 4-piece contour shown in figure 1: C ∫ z − τ dz ψ ( z) (41) Figure 1 The four pieces are: the outer semicircle with radius R.

a detour integral, since it is used to hop around the singularity) evaluates to $-j\pi\psi(\tau)$. This integral is found by using a variation of Cauchy's integral formula: in the limit of the path's radius going to zero, the integral's value is $j\theta\psi(\tau)$, where $\theta$ is the angle, measured ccw, subtended by the path relative to the singularity. So plugging in these two results, we find:

$$P\!\int_{-\infty}^{\infty} \frac{\psi(x)}{x - \tau}\, dx = j\pi\psi(\tau) \tag{44}$$

Now recalling $\psi(x) = u(x) + j \times v(x)$, we find:

$$P\!\int_{-\infty}^{\infty} \frac{u(x)}{x - \tau}\, dx + j \times P\!\int_{-\infty}^{\infty} \frac{v(x)}{x - \tau}\, dx = \pi(j \times u(\tau) - v(\tau)) \tag{45}$$

So by splitting this out into two real valued equations, we find:

$$v(\tau) = \frac{1}{\pi} P\!\int_{-\infty}^{\infty} \frac{u(x)}{\tau - x}\, dx, \qquad u(\tau) = \frac{-1}{\pi} P\!\int_{-\infty}^{\infty} \frac{v(x)}{\tau - x}\, dx \tag{46}$$

These are simply the Hilbert transform relations between $u(\tau)$ and $v(\tau)$.

Appendix A: Integration tricks

Iterated Integration by Parts
Let's say we wish to integrate $\int x^2 e^{2x}\, dx$. Since this is an integral of a product of functions, we know that "integration by parts" would be the standard approach. However, with this example we would have to apply it multiple times. There is an approach called "iterated integration by parts" that is easily applied, and the method is actually easier to remember. Basically, we separate the integrand into two parts that become the headings of two columns. Each entry in the left column is formed by taking the derivative of the entry right above it. Similarly, each entry in the right column is made by taking the integral of the entry right above it. Usually the columns are filled until the bottom left-hand entry is zero.

An example will make this iterated process clear. For $\int x^2 e^{2x}\, dx$, we build the following table:

$$\begin{array}{cc} x^2 & e^{2x} \\ 2x & e^{2x}/2 \\ 2 & e^{2x}/4 \\ 0 & e^{2x}/8 \end{array} \tag{47}$$

The integral is made up of the sum of diagonal (upper left to lower right) products with alternating sign, plus the integral of the product of the entries on the bottom row taken with a continued alternating sign. Since this example is one where the left-hand term is a polynomial, the rows are filled in until the bottom row's product is zero, and in that case this last integral is not needed. So now, combining terms with alternating sign, we find:

$$\int x^2 e^{2x}\, dx = +x^2\left(\frac{e^{2x}}{2}\right) - 2x\left(\frac{e^{2x}}{4}\right) + 2\left(\frac{e^{2x}}{8}\right) - \int 0 \times \frac{e^{2x}}{8}\, dx = e^{2x}\left(\frac{x^2}{2} - \frac{x}{2} + \frac{1}{4}\right) + C \tag{48}$$

So one can certainly see this method's efficacy. However, some integrals have terms that will not go away regardless of the number of times a function is differentiated. For example, let's find $\int \sin(x) \times e^{-sx}\, dx$. We build the table like this:

$$\begin{array}{cc} e^{-sx} & \sin(x) \\ -s e^{-sx} & -\cos(x) \\ s^2 e^{-sx} & -\sin(x) \end{array} \tag{49}$$

So now we find:

$$\int \sin(x) e^{-sx}\, dx = +\left(-e^{-sx}\cos(x)\right) - \left(s e^{-sx}\sin(x)\right) + \int \left(-s^2 e^{-sx}\sin(x)\right) dx + C \tag{50}$$

This is one of the cases where the integration process takes you around a loop and ends up almost where you began – except the bottom integral works out to be an independent factor times the original integral. So simplifying the algebra, we find:

$$(1 + s^2)\int \sin(x) e^{-sx}\, dx = -e^{-sx}(\cos(x) + s\sin(x)) + C \tag{51}$$

Which, when the integral is fully resolved, yields:

$$\int \sin(x) e^{-sx}\, dx = \frac{-e^{-sx}(\cos(x) + s\sin(x))}{1 + s^2} + C \tag{52}$$

We will soon have need for the related semi-infinite definite integral, which we can easily evaluate using the above result:

$$\int_0^{\infty} \sin(x) e^{-sx}\, dx = \frac{1}{1 + s^2} \tag{53}$$

1/x Substitution
Now let's look at a method for evaluating the integral of a sinc function, specifically $\int_{-\infty}^{\infty} \frac{\sin(x)}{x}\, dx$. First, realizing the integrand is even, we can reduce it to a semi-infinite interval, and then perform the following tricky substitution: $\frac{1}{x} = \int_0^{\infty} e^{-sx}\, ds$. So we can now see how to do the integral:

$$\int_{-\infty}^{\infty} \frac{\sin(x)}{x}\, dx = 2\int_0^{\infty} \frac{\sin(x)}{x}\, dx = 2\int_0^{\infty}\!\!\int_0^{\infty} e^{-sx}\sin(x)\, dx\, ds = 2\int_0^{\infty} \frac{ds}{1 + s^2} = 2\left.\tan^{-1}(s)\right]_0^{\infty} = \pi \tag{54}$$
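Both appendix results can be cross-checked numerically; a minimal sketch with illustrative cutoffs (the sinc tail is truncated at $R$, so the second check is only good to about $1/R$):

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoid rule (avoids version-specific numpy helpers)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

R = 2000.0
x = np.linspace(1e-9, R, 2_000_001)

# eq. (53) with the illustrative choice s = 2: expect 1/(1+s^2) = 0.2
s = 2.0
damped = trapezoid(np.sin(x) * np.exp(-s * x), x)
print(damped)                       # ~0.2

# eq. (54): the integrand is even, so double the half-line integral
sinc = 2.0 * trapezoid(np.sin(x) / x, x)
print(sinc, np.pi)                  # close to pi
```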