Edinson Fuentes
Magı́ster en Ciencias – Matemáticas
Code: 2949624539
Advisor
Luis Enrique Garza Gaona, Ph.D.
Coadvisor
Herbert Alonso Dueñas Ruiz, Ph.D.
Research line
Orthogonal Polynomials
Abstract: The main purpose of this work is the study of algebraic and analytic
properties associated with spectral transformations of sequences of matrix orthogonal
polynomials with respect to a matrix measure supported either on the unit circle or on
the real line. In both frameworks, we study some properties related with a perturbation
of a sequence of matrix moments. We extend to the matrix case some algebraic and
analytic properties of matrix orthogonal polynomials on the unit circle, that are known
in the scalar case, associated with the Uvarov and Christoffel matrix transformations of a
Hermitian matrix measure supported on the unit circle. We also study properties of block
Hessenberg and block CMV matrices associated with sequences of matrix orthonormal
polynomials, under certain transformations of the corresponding matrix measure. In
addition, we extend to the matrix case some properties of the Szegő transformation
and then use these properties to analyze its effect on some spectral transformations, fo-
cusing on the relationship between the matrix-valued Stieltjes and Carathéodory functions.
Jury
Mirta Marı́a Castro Smirnova, Ph.D.
Jury
Manuel Domı́nguez de la Iglesia, Ph.D.
Jury
Pablo Manuel Román, Ph.D.
This small paragraph is not enough to express the immense gratitude that I have for my
advisors, Professor Luis Enrique Garza Gaona, researcher at the Universidad de Colima,
México, and Professor Herbert Alonso Dueñas Ruiz, researcher at the Universidad Nacional
de Colombia. I want to thank them for their encouragement, patience, advice, constant
support and kindness, and for guiding me towards the completion of this doctoral thesis.
A special thank you to Professor Luis Enrique Garza Gaona for teaching me his particular
way of doing mathematical research and for being an excellent host who welcomed me with
affection during my internship at the Universidad de Colima, México; to him I owe, to a
great extent, the fact that this four-year journey has been fruitful and full of joys. I would
also like to thank Professor Herbert Alonso Dueñas Ruiz for introducing me to the marvelous
world of orthogonal polynomials.
On the other hand, I want to thank all the professors, friends and colleagues from
the Universidad Nacional de Colombia and the Universidad Pedagógica y Tecnológica de
Colombia who helped me in one way or another in my academic and personal formation.
Finally, I would like to thank my wife, Martha Leonor Saiz Saenz, my son, Edison Samuel
Fuentes Saiz, and all of my family for their constant encouragement and love. Even though
they did not fully understand what I was doing, their support was always there. Without
them, none of this would have been possible.
Edinson Fuentes
Bogotá, September 2019
This work has been possible, in part, thanks to Colciencias from Colombia (convocatoria
727 de 2015, becas para estudiantes de doctorado) and to Colfuturo.
Contents
Contents I
List of Tables IV
List of Figures V
Nomenclature VI
Introduction VIII
4.1 Block Hessenberg matrices and orthonormal matrix polynomials on the unit
circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.1.2 Block Hessenberg matrices for MOPUC . . . . . . . . . . . . . . . . . . . . 98
4.1.3 Block Hessenberg matrices for perturbations of MOPUC . . . . . . . . 102
4.1.3.1 Uvarov matrix transformation with m mass points . . . . . . 102
4.1.3.2 Christoffel matrix transformation . . . . . . . . . . . . . . . . . . . 104
4.1.4 A block factorization for block Hessenberg matrices . . . . . . . . . . . . 108
4.2 CMV block matrices for symmetric matrix measures on the unit circle . . . . 110
4.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.2.2 CMV block matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.2.3 CMV Matrices for symmetric matrix measures . . . . . . . . . . . . . . . . 111
4.2.4 Verblunsky matrix coefficients associated with transformations of
the matrix measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.2.5 A CMV block matrix associated with two sequences of Verblunsky
coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Bibliography 135
List of Tables
List of Figures
Nomenclature
The following nomenclature will be used throughout the document. Some symbols take
different meanings depending on the chapter, and therefore we do not include them in the
following list.
Roman Symbols
AT — Transpose of A.
AH — Conjugate transpose of A.
cn — Moment of order n associated with the matrix measure σ.
C — The set of complex numbers.
C — Block CMV matrix.
det — Determinant.
deg — Degree of a polynomial.
D — The unit disc {z ∈ C : |z| < 1}.
F — Matrix-valued Carathéodory function.
Hn — Block (finite) Hankel matrix.
HϕL — Block (left) Hessenberg matrix.
HϕR — Block (right) Hessenberg matrix.
I — l × l identity matrix.
Im(·) — (1/(2i))(· − ·H) for · ∈ Ml.
J — Block Jacobi matrix.
KnR — Right kernel matrix.
KnL — Left kernel matrix.
Mk×l — Set of k × l matrices with complex entries.
Ml — Ring of l × l matrices with complex entries.
MOPUC— Matrix orthogonal polynomials on the unit circle.
MOPRL — Matrix orthogonal polynomials on the real line.
n!! — Double factorial of number n.
N — Set of natural numbers.
N — Nevai class of matrix measures.
pRn — Right matrix orthonormal polynomial on the real line.
pLn — Left matrix orthonormal polynomial on the real line.
PnR — Right monic matrix orthogonal polynomial on the real line.
PnL — Left monic matrix orthogonal polynomial on the real line.
Pl [z] — Set of matrix polynomials in z ∈ C.
Pln [z] — Set of matrix polynomials of degree less than or equal to n in z ∈ C.
R — Real line.
Re(·) — (1/2)(· + ·H) for · ∈ Ml.
Greek Symbols
Other Symbols
Introduction
Throughout history, the theory of orthogonal polynomials has been developed in two
closely related directions:
The first direction is closely related with the theory of special functions, combinatorics
and linear algebra, and is mainly dedicated to the study of concrete orthogonal systems
or hierarchies of such systems, such as the Jacobi, Hahn and Askey-Wilson polynomials.
The theory of discrete orthogonal polynomials and the q-polynomials are also found in this
direction, as well as many of the current advances in the study of orthogonal polynomials
of several variables.
The second direction is characterized by the use of methods of mathematical analysis
or methods related to it. The general properties of systems of orthogonal polynomials
comprise a small part of these analytic aspects, while two extremely rich branches cover
the larger part:
The theory of orthogonal polynomials on the real line originated in the work of A. M.
Legendre, who was interested in solving the equations of motion of celestial mechanics.
Thus, in 1785, A. M. Legendre constructed the first family of orthogonal polynomials on
the real line. These polynomials were also considered by P. S. Laplace, who related them
to spherical functions in the context of planetary motion. Subsequently, in 1821, Rodrigues
managed to express these polynomials in terms of what we know today as the Rodrigues
formula. Later on, in 1826, K. G. Jacobi introduced the first family of polynomials which
is nowadays known as classical, the Jacobi polynomials, which generalize those initially introduced by
Legendre. Then, the Hermite polynomials appeared in 1864, although their weight function
had already been studied by P. S. Laplace in 1810 in an application to probability theory.
The last family (of the so-called classical ones) to appear were the Laguerre polynomials,
introduced by E. N. Laguerre in 1879 while he was looking for a solution of the integral
$\int_x^{\infty} e^{-t} t^{-1}\, dt$ by developing it in continued fractions, although they
were already partially known to N. H. Abel and J. L. Lagrange. Later on, T. S. Stieltjes
introduced the so-called moment problem [164, 165]
and P. L. Chebyshev introduced the polynomials that bear his name while considering the
problem of finding the best uniform polynomial approximation of a continuous function
[37, 38]. These and other researchers helped the theory of orthogonal polynomials on the
real line to become fully established. In 1939, the monograph Orthogonal Polynomials
by G. Szegő [166] appeared, unifying all the results known up to that moment.
Similarly, the study of the theory of orthogonal polynomials with respect to a nontrivial
measure supported on the unit circle was initiated by G. Szegő in a series of articles
published between 1915 and 1925, and later extended to the more general context of
orthogonality with respect to linear functionals by Ya. L. Geronimus. The two volumes of
B. Simon's monograph [156, 157] appeared at the beginning of the 21st century, and they
constitute the most exhaustive description of the state of the art in the theory of orthogonal
polynomials on the unit circle to this date.
Owing to their origins in problems of modern science, orthogonal polynomials have been
studied from many different points of view. As a consequence, orthogonal polynomials
both on the real line and on the unit circle have very useful properties in the solution
of mathematical and physical problems. Their relations with moment problems [113,
155], rational approximation [21, 147], operator theory [114, 119], interpolation, Gaussian
quadrature formulas [39, 89, 99, 166], electrostatics [111], statistical quantum mechanics
[159], special functions [9], number theory [11] (irrationality [13] and transcendence [46]),
graph theory [23], combinatorics, random matrices [47], stochastic processes [153], data
sorting and compression [72], and their role in the spectral theory of linear differential
operators and Sturm-Liouville problems [145], as well as their applications in the theory
of integrable systems [73, 98, 143, 144] constitute some illustrative samples of their impact.
In the last decades some generalizations of the theory of orthogonal polynomials, on
both the real line and the unit circle, have been developed in the literature. One of these
generalizations appears when a matrix of orthogonality measures (or simply, a matrix
measure) is considered. In this case, due to the non-commutativity of matrices, two
sequences of orthogonal matrix polynomials (polynomials with matrix coefficients) are
obtained; they were first considered by M. G. Krein [121, 122] in 1949. Since then, several
researchers have made contributions to this theory. In the last 30 years,
several known properties of orthogonal polynomials in the scalar case have been extended
to the matrix case, such as algebraic aspects related to their zeros, recurrence relations,
Favard type theorems and Christoffel-Darboux formulas, Jacobi, Hessenberg and CMV
block matrices, matrix-valued Stieltjes and Carathéodory functions, among many others.
A nice summary, as well as many references, can be found in [42]. The extensions to the
matrix case are usually developed by using procedures that are analogous to the ones used
in the scalar case, with the added complications that result from the non commutativity of
the matrices. Due to this non commutativity, two different (although related) orthogonal
sequences, left and right, are usually considered.
As in the scalar case, matrix orthogonal polynomials on the real line (MOPRL)
have proved to be a useful tool in the analysis of many problems of mathematics such as
rational approximation theory [83], spectral theory of Jacobi matrices [10, 146], analysis
of polynomial sequences satisfying higher order recurrence relations [54, 71], quantum
random walks [24] and Gaussian quadrature formulas [59, 69, 160], among many others.
The problem of characterizing MOPRL whose derivatives are also orthogonal was considered
in [28]. Other differential properties, such as the role of MOPRL as solutions of first and
second order differential equations, have been analyzed in [27, 35, 36, 55, 71, 60, 61, 62,
63, 64, 102, 142], among many others.
The theory of matrix orthogonal polynomials with respect to a matrix measure supported
on the unit circle (MOPUC) is relatively recent, and was indirectly developed as a
consequence of the study of problems related to prediction theory [106, 107, 118, 120, 171].
More recently, a connection between MOPUC and quantum random walks has been ex-
plored in [24]. Some of the applications of MOPUC are found in the analysis of sequences
of polynomials orthogonal with respect to a scalar measure supported on the complex
plane [138], the analysis of time series in relation with the estimation of the frequencies
of a stationary harmonic process [163], circuits and systems theory [48], linear algebra
(in particular, the factorization of positive definite block-Toeplitz matrices [177]), spectral
operator theory [95] and, more recently, Toda type integrable systems [6]. Moreover, they
constitute a basic tool in Gaussian matrix quadrature formulas [21, 161, 163].
Most of the results contained in this document are related with spectral transformations
of matrix measures supported on the unit circle. Let us clarify what we mean by spectral
transformation. Let Pl be the set of normalized positive semi-definite l × l matrix measures
supported on the unit circle, and let D∞l be the set of matrix sequences (αn)n≥0 with
‖αn‖2 < 1 (where ‖·‖2 is the Euclidean matrix norm [109]). Moreover, let Fl be the
set of matrix-valued Carathéodory functions (see Subsection 1.4.4) associated with positive
semi-definite matrix measures supported on the unit circle. Let us consider an injective
mapping ∆ : D∞l → D∞l. By Verblunsky's theorem for MOPUC (see Theorem 1.4.2),
there exists a bijective mapping S : Pl → D∞l, and ∆ induces a mapping Γ : Pl → Pl
such that the diagram

          Γ
    Pl ─────────> Pl
    │             │
  S │             │ S
    ↓             ↓
   D∞l ────────> D∞l
          ∆

is commutative, i.e., ∆ ∘ S = S ∘ Γ. We will call the mapping Γ a spectral transformation
of a matrix measure supported on the unit circle. Similarly, let us consider an injective
mapping F : Fl → Fl. By the Riesz-Herglotz theorem for MOPUC (see Theorem 1.4.8),
there exists a bijective mapping S′ : Pl → Fl, and F induces a mapping Γ : Pl → Pl
such that the diagram

          Γ
    Pl ─────────> Pl
    │             │
 S′ │             │ S′
    ↓             ↓
    Fl ─────────> Fl
          F

is commutative, i.e., F ∘ S′ = S′ ∘ Γ. The mapping Γ will also be called a spectral
transformation of a matrix measure supported on the unit circle.
The theory of orthogonal polynomials associated with spectral transformations of measures
supported on the unit circle was developed in the 1990s (see [1, 17, 93, 94, 112, 127,
136, 137, 148], among many others). It is also worth mentioning an interpretation based
on the matrix representation of the multiplication operator with respect to an orthonor-
mal basis, when the corresponding measures are supported on an arc of the unit circle
([96, 97]). Spectral transformations find applications in integrable systems theory
[169], fourth order linear differential equations with polynomial coefficients [103] and the
bispectral problem [105], among others.
The theory of spectral transformations of matrix measures has not received a lot of
attention in the literature, although there are several contributions in this direction. For
instance, for MOPRL, connection formulas and asymptotic properties associated with
the Uvarov matrix transformations are deduced in [174, 175, 176]. In [2, 3, 84, 85], some
generalizations of the Christoffel and Geronimus matrix transformations are studied, ob-
taining connection formulas for the associated MOPRL that are expressed in terms of
quasi-determinants. Spectral transformations of matrix measures supported on the unit
circle are much less studied, although the Uvarov matrix transformation, consisting of the
addition of a Dirac matrix measure, has been studied in [173], where the authors analyzed
relative asymptotic properties of the corresponding orthogonal sequences. As in the
classical theory of orthogonal polynomials on the unit circle, the asymptotic behavior of
sequences of Sobolev type (in particular, of Uvarov type) in the matrix case plays a central
role in questions related to their application in approximation processes, Gaussian matrix
quadrature formulas and Fourier expansions (see [21, 161, 163]). Recently, spectral trans-
formations associated with perturbations of matrix moments have been studied in [40, 41],
in connection with the so-called matrix moment problem.
As stated above, matrix orthogonal polynomials both on the real line and on the unit
circle have been substantially studied in recent years, not only because of their intrinsic
mathematical interest, but also because of their many applications. This constitutes a
motivation to develop this work. Furthermore, in [48], the authors present a general and
self-contained theory of MOPUC, connecting topics as diverse as factorization of positive
definite block-Toeplitz matrices, Fourier expansion of positive definite periodic function
matrices and spectral factorization of such matrices, least-squares approximation on the
unit circle, and characterization of positive function matrices in the unit disk. Thus,
properties of MOPUC, both algebraic and analytic, can be used as a tool to obtain
results for the scalar case. In a similar way, it is possible that by analyzing properties
associated with spectral transformations of MOPUC, one can deduce properties associated
with spectral transformations of orthogonal polynomials in the scalar case, which is also
a motivation of this work.
Most of the results of this work are oriented towards the study of some algebraic and
analytic properties associated with spectral transformations of a Hermitian matrix measure
supported on the unit circle. Those results consist in extending to the matrix case several
properties known in the scalar case, related to recurrence formulas, connection formulas,
relative asymptotics, block Hessenberg and CMV matrix factorizations, among others.
Finally, there exists an interesting relation between (scalar) measures supported on
[−1, 1] and measures supported on the unit circle, known in the literature as the Szegő
transformation. In [156, 157], the author shows not only how these measures are related,
but also the relation between the corresponding families of orthogonal polynomials, as
well as the relation between the sequences of parameters of the recurrence relation on
the real line and the sequence of Verblunsky coefficients on the unit circle, known as
Geronimus relation. The interest of the Szegő transformation is not only theoretical but
also computational, since it has been used in a systematic way in the study of Szegő
quadrature formulas on the unit circle and their application to quadrature formulas on
intervals of the real line [12, 43, 44].
The matrix version of the Szegő transformation has been studied in [42, 51, 172]. In
[172], the authors show that when two positive definite matrix measures supported on
the real line and on the unit circle, respectively, are related through the Szegő matrix
transformation, there exists a relation between the corresponding sequences of orthogonal
matrix polynomials. The relation between the Verblunsky matrix coefficients and the
coefficients of the three-term recurrence relation, known as the Geronimus matrix relations,
is shown in [51, 116]. In this work, we extend to the matrix case some properties of the direct
and inverse Szegő transformation and then we use these properties to analyze its effect
on some spectral transformations, focusing on the relationship between the matrix-valued
Stieltjes and Carathéodory functions.
Contributions and structure of the thesis
The contributions of this thesis are included in seven articles. Among these, we have:
1. On a finite moment perturbation of linear functionals and the inverse Szegő trans-
formation [78]. (Published)
2. CMV block matrices for symmetric measures on the unit circle [76]. (In press)
3. Matrix moment perturbations and the inverse Szegő matrix transformation [79]. (In
press)
4. Block Hessenberg matrices and orthonormal matrix polynomials on the unit circle
[75].
On the other hand, this work is divided into chapters whose content is distributed as
follows:
Chapter 1 is meant for non-experts and therefore it contains some introductory and
background material. In the first section some matrix analysis results are recalled. Then,
we state some properties of matrix orthogonal polynomials both on the real line and on the
unit circle that will be used in the sequel. There are many contributions in the literature
where algebraic and analytic properties of scalar orthogonal polynomials are generalized
to the matrix case. An outstanding summary and references on the subject can be found
in [42, 48] among others. In the last section, we define the Szegő matrix transformation
and state some of its properties, including a relation between the matrix-valued Stieltjes
function associated with a matrix measure supported on [−1, 1] and the matrix-valued
Carathéodory function associated with a matrix measure supported on the unit circle.
In Chapter 2, we study a problem associated with a perturbation of the sequence of
moments of a scalar measure. Given a perturbation of a measure supported on the unit
circle, we analyze the perturbation obtained on the corresponding measure on the real line,
when both measures are related through the inverse Szegő transformation. Moreover,
using the connection formulas for the corresponding sequences of orthogonal polynomials,
we deduce several properties such as relations between the corresponding norms. This
analysis is presented in Section 1 and is later generalized to the matrix case in Section 2.
The results of this chapter are contained in [78, 79].
In Chapter 3, we extend to the matrix case some algebraic and analytic properties of
MOPUC, that are known in the scalar case, associated with a generalized Uvarov ma-
trix transformation of a Hermitian matrix measure supported on the unit circle. We study
properties such as connection formulas between the corresponding sequences of MOPUC,
relative asymptotics for the matrix polynomials, as well as for their leading principal (ma-
trix) coefficients, and properties of the zeros for the Uvarov matrix polynomials by using
the associated (truncated) block Hessenberg matrices. Similar properties are deduced
for the case of the Christoffel matrix transformation. The results for the Uvarov matrix
transformation are contained in [82] and the Christoffel case appears in [80].
In the first section of Chapter 4, we study properties of block Hessenberg matrices
associated with MOPUC. We also consider the Uvarov and Christoffel spectral matrix
transformations of the orthogonality measure, and obtain some relations between the
associated Hessenberg matrices. The results are contained in [75]. In the second section, we
present some properties of symmetric positive definite matrix measures supported on the
unit circle, and relate the CMV block matrices associated with two symmetric matrix measures.
Moreover, we present some sequences of Verblunsky matrix coefficients associated with
certain transformations of matrix measures. The results can be found in [76].
In Chapter 5, given a normalized positive definite matrix measure µ supported on
[−1, 1], we obtain necessary conditions such that some spectral perturbations applied to
µ are preserved under the Szegő matrix transformation. The results are included in [81].
Finally, in Chapter 6 we first present our conclusions, and then formulate some open
problems which have arisen during our research. They constitute research problems that
will be addressed in the near future.
CHAPTER 1
In the remainder of the document, we will use the following notation. Let C be the set of
complex numbers and denote by Mk×l = Mk×l(C) the set of k × l matrices with complex
entries. In particular, Ml = Ml×l(C) is the ring of l × l matrices, and I, 0 will denote the
identity and the zero matrix, respectively, in Ml. Let ϱ(A) be the spectrum of A ∈ Ml,
i.e., the set of its eigenvalues (an eigenvalue of A is a number λ ∈ C such that Av = λv
for some nonzero vector v ∈ Ml×1, called an eigenvector).
Hermitian matrices have real eigenvalues and unitary matrices have eigenvalues on the
unit circle. The set of Hermitian matrices is a real vector space.
A matrix A ∈ Ml is called positive definite (resp. semi-definite) if it is Hermitian and
$u^H A u > 0$ (resp. $u^H A u \ge 0$) for every nonzero vector u ∈ Ml×1. The notation A > B
(resp. A ≥ B) for Hermitian matrices A, B ∈ Ml means that A − B is positive definite
(resp. semi-definite). The next theorem establishes some results on positive definite
matrices, whose proof can be found in [53, 109, 124].
Chapter 1. Matrix orthogonal polynomials and the Szegő matrix transformation
1. The matrix A is positive definite (resp. semi-definite) if and only if every λ ∈ ϱ(A)
satisfies λ > 0 (resp. λ ≥ 0).
2. The matrix A is positive definite (resp. semi-definite) if and only if all principal
minors of A are positive (resp. non-negative).
4. (Cholesky factorization). The matrix A is positive definite if and only if there exists
a unique lower triangular matrix L ∈ Ml, with positive entries on the main diagonal,
such that $A = LL^H$.
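The Cholesky factorization of item 4 can be illustrated with a short pure-Python sketch. It is our own illustration, restricted to the real symmetric case (where $L^H = L^T$); the function name and the test matrix are ours, not from the text.

```python
import math

def cholesky(A):
    """Return the lower triangular L with A = L L^T, for a real
    symmetric positive definite matrix A given as a list of rows."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Positive definiteness guarantees a positive argument here.
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)  # L[0][0] = 2, L[1][0] = 1, L[1][1] = sqrt(2)
```

The uniqueness claimed in item 4 corresponds to choosing the positive square root on the diagonal; the complex Hermitian case replaces the transpose by the conjugate transpose.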
Theorem 1.1.3. (Spectral Theorem [109]). A matrix is normal if and only if it is unitarily
diagonalizable.
Let $\|\cdot\|_2$ and $\|\cdot\|_F$ be, respectively, the Euclidean (spectral) and Frobenius matrix norms
(see [109]), defined by

$$\|A\|_2^2 = \Pi(A^H A) \qquad\text{and}\qquad \|A\|_F^2 = \sum_{i,j=1}^{l} |a_{i,j}|^2,$$

where $\Pi(A)$ is the spectral radius of $A = [a_{i,j}]_{i,j=1}^{l} \in M_l$, i.e., $\Pi(A) = \max\{|\lambda| : \lambda \in \varrho(A)\}$.
These norms satisfy
2. $\|A\|_2 \le \|A\|_F$,
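The inequality $\|A\|_2 \le \|A\|_F$ is easy to check numerically. In the sketch below (our own illustration, for real matrices), the Frobenius norm is computed directly and $\|A\|_2$ is estimated by power iteration on $A^T A$, whose largest eigenvalue is $\Pi(A^T A)$:

```python
import math

def frobenius_norm(A):
    return math.sqrt(sum(abs(a) ** 2 for row in A for a in row))

def spectral_norm(A, iters=200):
    """Estimate ||A||_2 = sqrt(Pi(A^T A)) for a real matrix A,
    via power iteration on A^T A."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(AtA[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient gives the dominant eigenvalue of A^T A.
    lam = sum(v[i] * sum(AtA[i][j] * v[j] for j in range(n)) for i in range(n))
    return math.sqrt(lam)

A = [[1.0, 2.0], [3.0, 4.0]]
assert spectral_norm(A) <= frobenius_norm(A)  # the inequality above
```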
Given a block matrix $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$, with $A, B, C, D \in M_l$, its inverse can be computed
by using the Schur complement (see [178]):

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} (A - BD^{-1}C)^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\ -D^{-1}C(A - BD^{-1}C)^{-1} & (D - CA^{-1}B)^{-1} \end{pmatrix}. \qquad (1.1)$$
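Formula (1.1) can be verified numerically. The sketch below (ours) does so for 1 × 1 blocks, i.e. scalars A, B, C, D, which already exercises every term of the formula:

```python
def block_inverse(A, B, C, D):
    """Inverse of [[A, B], [C, D]] via the Schur complement
    formula (1.1), written out for scalar (1x1) blocks."""
    SA = A - B * (1.0 / D) * C  # Schur complement of D
    SD = D - C * (1.0 / A) * B  # Schur complement of A
    return [[1.0 / SA, -(1.0 / A) * B * (1.0 / SD)],
            [-(1.0 / D) * C * (1.0 / SA), 1.0 / SD]]

M = [[2.0, 1.0], [1.0, 3.0]]
Minv = block_inverse(2.0, 1.0, 1.0, 3.0)
# The product M * Minv recovers the identity matrix.
P = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
```

For genuine block matrices the scalar reciprocals become matrix inverses, and the formula requires A, D and both Schur complements to be invertible.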
Let R be a ring. A right module over R is a set M together with two operations

$$+ : M \times M \to M \qquad\text{and}\qquad \cdot : M \times R \to M$$

such that, for every $v, w \in M$ and $\alpha, \beta \in R$:
2. v · (α + β) = v · α + v · β.
3. (v + w) · α = v · α + w · α.
4. v · (α · β) = (v · α) · β.
In a similar way, one defines a left module over R. If M is both a right and a left module
over R, then M is said to be a bi-module [125, 152]. M is said to be a free right (or left)
module over R if M admits a basis, that is, there exists a subset S of M such that S is
nonempty, S generates M ($M = \langle S\rangle = \mathrm{span}(S)$) and S is linearly independent.
A matrix measure µ induces right and left sesquilinear forms, defined respectively as
$$\langle f, g\rangle_{R,\mu} = \int_E f(x)^H\, d\mu(x)\, g(x) \in M_l, \quad f, g \in \mathbb{P}_l[x], \qquad (1.3)$$

$$\langle f, g\rangle_{L,\mu} = \int_E f(x)\, d\mu(x)\, g(x)^H \in M_l, \quad f, g \in \mathbb{P}_l[x]. \qquad (1.4)$$
The sesquilinearity of these forms means that, for every $f, g, h \in \mathbb{P}_l[x]$ and $A, B \in M_l$,
they satisfy

1. $\langle f, gA + hB\rangle_{R,\mu} = \langle f, g\rangle_{R,\mu}A + \langle f, h\rangle_{R,\mu}B$ and $\langle gA + hB, f\rangle_{R,\mu} = A^H\langle g, f\rangle_{R,\mu} + B^H\langle h, f\rangle_{R,\mu}$.

2. $\langle f, Ag + Bh\rangle_{L,\mu} = \langle f, g\rangle_{L,\mu}A^H + \langle f, h\rangle_{L,\mu}B^H$ and $\langle Ag + Bh, f\rangle_{L,\mu} = A\langle g, f\rangle_{L,\mu} + B\langle h, f\rangle_{L,\mu}$.
As a consequence of the above proposition, for any monic matrix polynomial $f \in \mathbb{P}_l[x]$,
$\langle f, f\rangle_{R,\mu}$ and $\langle f, f\rangle_{L,\mu}$ are non-singular matrices. Therefore, it is possible to construct a
sequence of monic matrix polynomials $(P_n^R(x))_{n\ge 0}$ (resp. $(P_n^L(x))_{n\ge 0}$) orthogonal with
respect to (1.3) (resp. (1.4)) by applying the Gram-Schmidt orthogonalization process
to the basis $(x^n I)_{n\ge 0}$. That is, $P_n^R$ and $P_n^L$ are the unique monic matrix polynomials
satisfying

$$\langle x^k I, P_n^R\rangle_{R,\mu} = \langle P_n^L, x^k I\rangle_{L,\mu} = 0, \quad k = 0, 1, \ldots, n-1. \qquad (1.5)$$

The polynomials $(P_n^R(x))_{n\ge 0}$ and $(P_n^L(x))_{n\ge 0}$ are called, respectively, the right and left
orthogonal matrix polynomial sequences with respect to the matrix measure µ.
From Lemma 1.3 and (1.5) it follows that $P_n^R(x) = P_n^L(x)^H$ for all $x \in \mathbb{R}$, and from
the properties of the sesquilinear forms we have, for every $n \in \mathbb{N}$,

$$\langle P_n^R, P_n^R\rangle_{R,\mu} = \langle P_n^L, P_n^L\rangle_{L,\mu} = s_n, \qquad (1.6)$$

for some invertible matrix $s_n \in M_l$. Furthermore, $(P_k^R(x))_{k=0}^{n}$ is a right module basis for
$\mathbb{P}_l^n[x]$; indeed, any $f \in \mathbb{P}_l^n[x]$ has a unique expansion

$$f(x) = \sum_{k=0}^{n} P_k^R(x) f_k^R, \qquad f_k^R = s_k^{-1}\langle P_k^R, f\rangle_{R,\mu}. \qquad (1.7)$$
The corresponding orthonormal matrix polynomials are of the form

$$p_n^R(x) = \zeta_n^R x^n + \text{lower order terms}, \qquad p_n^L(x) = \zeta_n^L x^n + \text{lower order terms},$$

for the right and left cases, respectively, with non-singular matrices $\zeta_n^R$ and $\zeta_n^L$. They are
orthonormal with respect to (1.3) and (1.4), respectively. That is,

$$\langle p_k^R, p_n^R\rangle_{R,\mu} = \langle p_n^L, p_k^L\rangle_{L,\mu} = I\,\delta_{n,k}, \quad k = 0, 1, \ldots, n.$$

Moreover,

$$p_n^R(x) = P_n^R(x)\,\zeta_n^R \nu_n \quad\text{and}\quad p_n^L(x) = \tau_n \zeta_n^L\, P_n^L(x),$$

where $\nu_n, \tau_n \in M_l$ are unitary matrices. That is, matrix orthonormal polynomials on the
real line are unique up to multiplication by unitary matrices, since

$$\langle p_n^R\nu_n, p_n^R\nu_n\rangle_{R,\mu} = \nu_n^H\langle p_n^R, p_n^R\rangle_{R,\mu}\nu_n = \nu_n^H\nu_n = I,$$

$$\langle \tau_n^H p_n^L, \tau_n^H p_n^L\rangle_{L,\mu} = \tau_n^H\langle p_n^L, p_n^L\rangle_{L,\mu}\tau_n = \tau_n^H\tau_n = I.$$
By (1.7), the sequence of matrix polynomials $(p_n^R(x))_{n\ge 0}$ also forms a right orthonormal
module basis of $\mathbb{P}_l[x]$. So for any $f \in \mathbb{P}_l[x]$, the expansion

$$f(x) = \sum_{k=0}^{\infty} p_k^R(x) f_k^R, \qquad f_k^R = \langle p_k^R, f\rangle_{R,\mu} \qquad (1.8)$$
and the corresponding Hilbert spaces $\mathcal{H}_R$, $\mathcal{H}_L$, with norms of Frobenius type given by

$$\|f\|_R = \sqrt{\langle f, f\rangle_{\mu_{esc},R}} \quad\text{and}\quad \|f\|_L = \sqrt{\langle f, f\rangle_{\mu_{esc},L}},$$

holds true. Obviously, since f is a polynomial, there are only finitely many non-zero terms
in (1.8), (1.9) and (1.10).
Let J be the block Jacobi matrix representing the multiplication by x operator with respect
to the basis $(p_n^R)_{n\ge 0}$. As in the scalar case, using the orthogonality properties of $p_n^R$, we
get that $J_{nm} = 0$ if $|n - m| > 1$. Denote

$$B_n = J_{nn} = \langle p_{n-1}^R, x\,p_{n-1}^R\rangle_{R,\mu} \quad\text{and}\quad A_n = J_{n,n+1} = J_{n+1,n}^H = \langle p_{n-1}^R, x\,p_n^R\rangle_{R,\mu}.$$

Then we have

$$J = \begin{pmatrix} B_1 & A_1 & 0 & \cdots \\ A_1^H & B_2 & A_2 & \cdots \\ 0 & A_2^H & B_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad (1.11)$$

and the three-term recurrence relation

$$x\,p_n^R(x) = p_{n+1}^R(x)A_{n+1}^H + p_n^R(x)B_{n+1} + p_{n-1}^R(x)A_n, \quad n \ge 1. \qquad (1.12)$$

If we set $p_{-1}^R(x) = 0$ and $A_0 = I$, the relation (1.12) also holds for n = 0. Since for real x
we have $p_n^L(x)^H = p_n^R(x)$, by conjugating (1.12) we get

$$x\,p_n^L(x) = A_{n+1}\,p_{n+1}^L(x) + B_{n+1}\,p_n^L(x) + A_n^H\,p_{n-1}^L(x), \quad n \ge 1. \qquad (1.13)$$
The matrices $B_n \in M_l$ are Hermitian and the matrices $A_n \in M_l$ are non-singular for
$n \ge 0$. Any block matrix of the form (1.11) with $B_n = B_n^H$ and $\det A_n \ne 0$ for all n will
be called a block Jacobi matrix corresponding to the Jacobi parameters $A_n$ and $B_n$. It is
well known (see Favard's Theorem [42], Section 2.2.4) that there exists a bijection between
Hermitian matrix measures with compact support and block Jacobi matrices.
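In the scalar case l = 1, relation (1.12) can be solved for $p_{n+1}$ and used to evaluate the orthonormal polynomials directly from the Jacobi parameters. The sketch below is our own illustration; the constant parameters $a_n = 1/2$, $b_n = 0$ generate the Chebyshev polynomials of the second kind, for which $U_n(1) = n + 1$.

```python
def orthonormal_eval(x, N, a, b):
    """Evaluate p_0(x), ..., p_N(x) from the scalar version of (1.12),
    x p_n = a(n+1) p_{n+1} + b(n+1) p_n + a(n) p_{n-1},
    with p_{-1} = 0 and p_0 = 1; a and b are callables."""
    p_prev, p_cur = 0.0, 1.0
    values = [p_cur]
    for n in range(N):
        p_next = ((x - b(n + 1)) * p_cur - a(n) * p_prev) / a(n + 1)
        p_prev, p_cur = p_cur, p_next
        values.append(p_cur)
    return values

# Constant parameters a_n = 1/2, b_n = 0: Chebyshev polynomials of
# the second kind, so at x = 1 we obtain 1, 2, 3, ...
vals = orthonormal_eval(1.0, 5, lambda n: 0.5, lambda n: 0.0)
```

In the matrix case the same recursion works blockwise, with the scalar division replaced by multiplication by $A_{n+1}^{-H}$ on the appropriate side.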
1.3.2 Zeros
1. All the zeros of PnR (x) are real. Moreover, det(PnR (x)) = det(PnL (x)).
then
xn+1,j ≤ xn,j ≤ xn+1,j+l .
where (Un (x))n≥0 is the sequence of Chebyshev polynomials of the second kind on R (see
[39]). We have that
$$\langle P_k^R, P_n^R\rangle_{R,\mu} = \frac{1}{2^n}\int_{-1}^{1} x^n T_n(x)\,\frac{dx}{\sqrt{1-x^2}}\; I_2\,\delta_{nk} = \frac{\pi}{2^{2n}}\, I_2\,\delta_{nk}, \quad k = 0, \ldots, n,$$

where $(T_n(x))_{n\ge 0}$ is the sequence of Chebyshev polynomials of the first kind on $\mathbb{R}$ (see [39])
and $I_2 \in M_2$ is the $2 \times 2$ identity matrix. The coefficients of the recurrence relation (1.12)
are

$$B_1 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad B_n = \frac{1}{2}\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \; n \ge 2, \qquad\text{and}\qquad A_n = \frac{1}{4}\, I_2, \; n \ge 1.$$

Furthermore, the zeros of $P_n^R(x)$ are given by

$$x_{n,k} = \pm\cos\left(\frac{2k\pi}{2n+1}\right), \quad k = 1, \ldots, n,$$
$$K_n^R(x, y) = \frac{p_{n+1}^R(x)\,A_{n+1}^H\,p_n^R(y)^H - p_n^R(x)\,A_{n+1}\,p_{n+1}^R(y)^H}{x - y}, \quad n \ge 1, \qquad (1.16)$$

and

$$K_n^L(x, y) = \frac{p_{n+1}^L(x)^H\,A_{n+1}^H\,p_n^L(y) - p_n^L(x)^H\,A_{n+1}\,p_{n+1}^L(y)}{x - y}, \quad n \ge 1, \qquad (1.17)$$

for any $x, y \in \mathbb{R}$ with $x \ne y$.
Furthermore, denoting $K_n(x,y) = K^R_n(x,y) = K^L_n(x,y)$, for any matrix polynomial
$\pi \in \mathbb{P}^l_n[x]$, the kernel matrix polynomial has the reproducing properties ([42]), i.e., for
any $x, y, z \in \mathbb{R}$,
1. $\int_E K_n(y,x)\,d\mu(x)\,\pi(x) = \pi(y)$.
2. $\int_E \pi(x)\,d\mu(x)\,K_n(x,y) = \pi(y)$.
3. $\int_E K_n(y,x)\,d\mu(x)\,K_n(x,z) = K_n(y,z)$.
The matrix-valued Stieltjes function (or Cauchy transformation) associated with the nor-
malized matrix measure µ is defined by
\[ S(x) = \int_{-1}^{1} \frac{1}{x-t}\,d\mu(t), \qquad x \in \mathbb{C}\setminus[-1,1]. \tag{1.18} \]
It is of great interest in the theory of MOPRL (see [91, 92]), and admits the following
series expansion
\[ S(x) = \sum_{n=0}^{\infty} \frac{\mu_n}{x^{n+1}}. \tag{1.19} \]
Example 1.3.3. The double factorial of a number n (denoted by n!!) is the product of all
the integers from 1 up to n that have the same parity (odd or even) as n. That is,
\[ n!! = \begin{cases} \prod_{k=1}^{n/2}(2k) = n(n-2)(n-4)\cdots(4)(2), & \text{if } n \text{ is even},\\[2pt] \prod_{k=1}^{(n+1)/2}(2k-1) = n(n-2)(n-4)\cdots(3)(1), & \text{if } n \text{ is odd}. \end{cases} \tag{1.21} \]
Thus, the n-th moment matrix associated with the normalized matrix measure (1.14) is
given by
\[ \mu_n = \begin{pmatrix} \frac{(n-1)!!}{n!!}\,\frac{1+(-1)^n}{2} & \frac{n!!}{(n+1)!!}\,\frac{1-(-1)^n}{2} \\[4pt] \frac{n!!}{(n+1)!!}\,\frac{1-(-1)^n}{2} & \frac{(n-1)!!}{n!!}\,\frac{1+(-1)^n}{2} \end{pmatrix}, \]
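The closed form for the moment matrices can be turned into a small sketch (function names are illustrative only), using the convention $(-1)!! = 0!! = 1$:

```python
import numpy as np

def double_factorial(n):
    """n!! as in (1.21), with the convention (-1)!! = 0!! = 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def moment_matrix(n):
    """n-th moment matrix of the measure (1.14), per the closed form above."""
    even = (1 + (-1)**n) // 2          # 1 if n is even, 0 if n is odd
    odd = 1 - even
    diag = double_factorial(n - 1) / double_factorial(n) * even
    off = double_factorial(n) / double_factorial(n + 1) * odd
    return np.array([[diag, off], [off, diag]])
```

For instance, $\mu_0 = I_2$, $\mu_1$ has off-diagonal entries $1!!/2!! = 1/2$, and $\mu_2 = \tfrac{1}{2}I_2$.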
Let T := {z ∈ C : |z| = 1} and D := {z ∈ C : |z| < 1} be the unit circle and unit
disc, respectively, where z ∈ T is parametrized as z = eiθ with θ ∈ (−π, π]. In what
follows, σ = (σi,j )li,j=1 will be an l × l Hermitian matrix measure supported on T (it can
be supported on a finite subset of T). Notice that such a measure can be decomposed as
\[ d\sigma(\theta) = \mathcal{W}(\theta)\,\frac{d\theta}{2\pi} + d\sigma_s(\theta), \tag{1.22} \]
where $\mathcal{W} \in \mathbb{M}_l$ denotes the Radon-Nikodym derivative of the absolutely continuous part
of σ with respect to the scalar Lebesgue measure, and $\sigma_s$ represents the singular part.
If, in addition, the matrix measure σ is Hermitian and positive definite, then the matrix
W(θ) is a Hermitian and positive definite matrix.
As in the real line case, a matrix measure σ induces right and left sesquilinear forms,
defined respectively as
\[ \langle p,q\rangle_{R,\sigma} = \int_{\mathbb{T}} p(z)^H\,\frac{d\sigma(z)}{iz}\,q(z) = \int_{-\pi}^{\pi} p(e^{i\theta})^H\,d\sigma(\theta)\,q(e^{i\theta}) \in \mathbb{M}_l, \qquad p,q \in \mathbb{P}^l[z], \tag{1.23} \]
\[ \langle p,q\rangle_{L,\sigma} = \int_{\mathbb{T}} p(z)\,\frac{d\sigma(z)}{iz}\,q(z)^H = \int_{-\pi}^{\pi} p(e^{i\theta})\,d\sigma(\theta)\,q(e^{i\theta})^H \in \mathbb{M}_l, \qquad p,q \in \mathbb{P}^l[z]. \tag{1.24} \]
These sesquilinear forms satisfy the same properties as their real line counterparts. By
applying the Gram-Schmidt orthogonalization process to the basis $(z^n I)_{n\geq 0}$, it is possible
to construct a sequence of monic matrix polynomials $(\Phi^R_n(z))_{n\geq 0}$ (resp. $(\Phi^L_n(z))_{n\geq 0}$) or-
thogonal with respect to (1.23) (resp. (1.24)). That is, $\Phi^R_n(z)$ (resp. $\Phi^L_n(z)$) is the unique
monic matrix polynomial satisfying
\[ \langle z^k I, \Phi^R_n\rangle_{R,\sigma} = \langle \Phi^L_n, z^k I\rangle_{L,\sigma} = 0, \qquad k = 0, 1, \ldots, n-1. \tag{1.25} \]
Define
\[ S^R_n = \langle \Phi^R_n, \Phi^R_n\rangle_{R,\sigma} \quad \text{and} \quad S^L_n = \langle \Phi^L_n, \Phi^L_n\rangle_{L,\sigma}, \]
with $S^R_n \in \mathbb{M}_l$ and $S^L_n \in \mathbb{M}_l$ non-singular for $n \geq 0$. As in the real line case, we have that
$(\Phi^R_k(z))^n_{k=0}$ is a right module basis for $\mathbb{P}^l_n[z]$; indeed, any $\pi \in \mathbb{P}^l_n[z]$ has a unique expansion,
\[ \pi(z) = \sum_{k=0}^{n} \Phi^R_k(z)\,\pi^R_k, \qquad \pi^R_k = S^{-R}_k\langle \Phi^R_k, \pi\rangle_{R,\sigma}. \tag{1.26} \]
Similarly, $(\Phi^L_k(z))^n_{k=0}$ is a left module basis for $\mathbb{P}^l_n[z]$, and
\[ \pi(z) = \sum_{k=0}^{n} \pi^L_k\,\Phi^L_k(z), \qquad \pi^L_k = \langle \pi, \Phi^L_k\rangle_{L,\sigma}\,S^{-L}_k. \]
We also consider orthonormal matrix polynomials
\[ \varphi^R_n(z) = \kappa^R_n z^n + \text{lower order terms}, \qquad \varphi^L_n(z) = \kappa^L_n z^n + \text{lower order terms}, \]
for the right and left cases, respectively, with non-singular matrices $\kappa^R_n$ and $\kappa^L_n$. They are
orthonormal with respect to the sesquilinear forms (1.23) and (1.24), respectively. That is,
\[ \langle \varphi^R_k, \varphi^R_n\rangle_{R,\sigma} = \langle \varphi^L_n, \varphi^L_k\rangle_{L,\sigma} = \delta_{n,k} I, \qquad k = 0, 1, \ldots, n. \]
They satisfy
\[ \varphi^R_n(z) = \Phi^R_n(z)\,\kappa^R_n\,\nu_n, \qquad \varphi^L_n(z) = \tau_n\,\kappa^L_n\,\Phi^L_n(z), \tag{1.27} \]
where $\nu_n, \tau_n \in \mathbb{M}_l$ are unitary matrices. That is, as in the real case, matrix orthonormal
polynomials are unique up to multiplication by unitary matrices. If we restrict $\kappa^R_n$ (resp.
$\kappa^L_n$) to be positive definite, then $\nu_n = I$ (resp. $\tau_n = I$) and the orthonormal sequence is
uniquely determined (see [5]) with
\[ \kappa^R_n = S^{-R/2}_n \quad \text{and} \quad \kappa^L_n = S^{-L/2}_n. \tag{1.28} \]
Furthermore, $\kappa^R_n$ (resp. $\kappa^L_n$) can always be uniquely chosen to satisfy
\[ (\kappa^R_n)^{-1}\kappa^R_{n+1} > 0 \quad \text{and} \quad \kappa^L_{n+1}(\kappa^L_n)^{-1} > 0. \tag{1.29} \]
If the Hermitian matrix measure is positive definite, we can derive the corresponding
scalar products
and their corresponding Hilbert spaces $H_R$, $H_L$, with norms of Frobenius type given by
\[ \|\pi\|_R = \sqrt{\langle \pi,\pi\rangle_{\sigma_{esc},R}} \quad \text{and} \quad \|\pi\|_L = \sqrt{\langle \pi,\pi\rangle_{\sigma_{esc},L}}. \]
Obviously, since π is a polynomial, there are only finitely many non-zero terms
in (1.30), (1.31) and (1.32).
Let $(\varphi^R_n(z))_{n\geq 0}$ and $(\varphi^L_n(z))_{n\geq 0}$ be the right and left orthonormal matrix polynomial se-
quences, respectively, with respect to the Hermitian matrix measure σ along with the
condition (1.29).
For a matrix polynomial $\pi \in \mathbb{P}^l_n[z]$ of degree n, we define the reversed polynomial $\pi^*$
by
\[ \pi^*(z) = z^n\,\pi\!\left(\frac{1}{\bar z}\right)^{\!H}. \]
Moreover, we clearly have, for $\pi, q \in \mathbb{P}^l_n[z]$ of degree n and for any $A \in \mathbb{M}_l$,
1. $(\pi^*(z))^* = \pi(z)$.
2. $(A\pi(z))^* = \pi^*(z)A^H$, $(\pi(z)A)^* = A^H\pi^*(z)$.
3. $\langle \pi^*, q^*\rangle_{R,\sigma} = \langle \pi, q\rangle_{L,\sigma}$, $\langle \pi^*, q^*\rangle_{L,\sigma} = \langle \pi, q\rangle_{R,\sigma}$.
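In coefficient form, if $\pi(z) = \sum_{k=0}^{n} z^k C_k$ with matrix coefficients $C_k$, then $\pi^*(z) = \sum_{j=0}^{n} z^j C^H_{n-j}$: the coefficient list is reversed and each block conjugate-transposed. A minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def reversed_poly(coeffs):
    """Reversed polynomial of pi(z) = sum_k z^k C_k, where coeffs[k] = C_k
    and deg(pi) = len(coeffs) - 1.

    Since pi*(z) = z^n pi(1/conj(z))^H = sum_j z^j C_{n-j}^H, the coefficient
    list is reversed and each block conjugate-transposed.
    """
    return [C.conj().T for C in reversed(coeffs)]
```

Property 1 above is immediate in this representation: reversing twice and taking the conjugate transpose twice returns the original coefficients (provided the leading coefficient is non-zero, so the degree is exactly n).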
Now define
\[ \rho^R_n := (\kappa^R_{n+1})^{-1}\kappa^R_n \quad \text{and} \quad \rho^L_n := \kappa^L_n(\kappa^L_{n+1})^{-1}. \tag{1.33} \]
Notice that we have $\rho^R_n > 0$ and $\rho^L_n > 0$. The orthonormal polynomials satisfy
\[ z\varphi^R_n(z) = \varphi^R_{n+1}(z)\,\rho^R_n + \varphi^{L,*}_n(z)\,\alpha^H_n, \tag{1.34} \]
\[ z\varphi^L_n(z) = \rho^L_n\,\varphi^L_{n+1}(z) + \alpha^H_n\,\varphi^{R,*}_n(z). \tag{1.35} \]
These recurrence relations are known in the literature as forward Szegő recurrence
relations. Moreover,
\[ \rho^R_n = \sqrt{I - \alpha_n\alpha^H_n} \quad \text{and} \quad \rho^L_n = \sqrt{I - \alpha^H_n\alpha_n}. \tag{1.36} \]
The matrices $\alpha_n$ will henceforth be called the Verblunsky matrix coefficients associated
with the matrix measure σ. Since $\rho^R_n$ is positive definite and therefore non-singular, we
have
\[ \|\alpha_n\|_2 < 1, \qquad \text{for all } n \geq 0. \tag{1.37} \]
In fact, the following converse result holds.
Theorem 1.4.2. (Verblunsky’s Theorem [42]). Any sequence (αn )n≥0 satisfying (1.37)
is the sequence of Verblunsky matrix coefficients of a unique positive semi-definite matrix
measure supported on T.
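Given Verblunsky matrix coefficients with $\|\alpha_n\|_2 < 1$, the matrices $\rho^R_n$ and $\rho^L_n$ of (1.36) can be computed through the principal square root of a Hermitian positive definite matrix. The sketch below is purely illustrative (the helper names are not from the thesis) and uses an eigendecomposition for the square root:

```python
import numpy as np

def rho_pair(alpha):
    """Return (rho_R, rho_L) = (sqrt(I - alpha alpha^H), sqrt(I - alpha^H alpha)),
    cf. (1.36), for a Verblunsky matrix coefficient with ||alpha||_2 < 1."""
    def sqrtm_pd(M):
        # principal square root of a Hermitian positive definite matrix
        w, V = np.linalg.eigh(M)
        return (V * np.sqrt(w)) @ V.conj().T
    I = np.eye(alpha.shape[0])
    return (sqrtm_pd(I - alpha @ alpha.conj().T),
            sqrtm_pd(I - alpha.conj().T @ alpha))
```

The condition $\|\alpha_n\|_2 < 1$ guarantees that both arguments of the square root are positive definite, so the principal square roots exist and are themselves positive definite.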
Notice that, if the support of σ contains only finitely many points, say n points, then
we can define a truncated sequence $(\Phi_k(z))^n_{k=0}$, and the recurrence relations terminate
after that. This situation is closely related to the development of Gaussian quadrature
formulas for matrix-valued functions (see [21, 161, 162]).
From (1.34) and (1.35), it can be deduced that
\[ \alpha_n = -(\kappa^R_n)^H\,\Phi^R_{n+1}(0)^H\,(\kappa^L_n)^{-1}, \tag{1.38} \]
\[ \alpha_n = -(\kappa^R_n)^{-1}\,\Phi^L_{n+1}(0)^H\,(\kappa^L_n)^H. \tag{1.39} \]
Notice that the Szegő recursion for the monic MOPUC reads as
\[ \Phi^R_{n+1}(z) = z\Phi^R_n(z) + \Phi^{L,*}_n(z)\,\Phi^R_{n+1}(0), \tag{1.41} \]
\[ \Phi^L_{n+1}(z) = z\Phi^L_n(z) + \Phi^L_{n+1}(0)\,\Phi^{R,*}_n(z), \tag{1.42} \]
and, as a consequence,
\[ \left[\Phi^R_{n+1}(z),\ \Phi^{L,*}_{n+1}(z)\right] = \left[\Phi^R_n(z),\ \Phi^{L,*}_n(z)\right] \begin{pmatrix} zI & z\Phi^{L,H}_{n+1}(0) \\ \Phi^R_{n+1}(0) & I \end{pmatrix}. \tag{1.47} \]
and, by using (1.1) to determine the inverse of the block matrix, we get
\[ \begin{pmatrix} zI & z\Phi^{L,H}_{n+1}(0) \\ \Phi^R_{n+1}(0) & I \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{z}\left(I - \Phi^{L,H}_{n+1}(0)\Phi^R_{n+1}(0)\right)^{-1} & -\Phi^{L,H}_{n+1}(0)\left(I - \Phi^R_{n+1}(0)\Phi^{L,H}_{n+1}(0)\right)^{-1} \\[4pt] -\frac{1}{z}\,\Phi^R_{n+1}(0)\left(I - \Phi^{L,H}_{n+1}(0)\Phi^R_{n+1}(0)\right)^{-1} & \left(I - \Phi^R_{n+1}(0)\Phi^{L,H}_{n+1}(0)\right)^{-1} \end{pmatrix}. \]
These recurrence relations are known in the literature as backward Szegő recurrence re-
lations. Since $I - \Phi^{L,H}_{n+1}(0)\Phi^R_{n+1}(0) = S^{-R}_n S^R_{n+1}$ and $I - \Phi^R_{n+1}(0)\Phi^{L,H}_{n+1}(0) = S^{-L}_n S^L_{n+1}$ (see
[141], Proposition 4.1), we have
\[ z\Phi^R_n(z)\,S^{-R}_n S^R_{n+1} = \Phi^R_{n+1}(z) - \Phi^{L,*}_{n+1}(z)\,\Phi^R_{n+1}(0), \tag{1.48} \]
\[ \Phi^{L,*}_n(z)\,S^{-L}_n S^L_{n+1} = \Phi^{L,*}_{n+1}(z) - \Phi^R_{n+1}(z)\,\Phi^{L,H}_{n+1}(0). \tag{1.49} \]
On the other hand, from (1.42) and (1.43), and taking into account
\[ \Phi^R_{n+1}(0)\left(I - \Phi^{L,H}_{n+1}(0)\Phi^R_{n+1}(0)\right)^{-1} = \left(I - \Phi^R_{n+1}(0)\Phi^{L,H}_{n+1}(0)\right)^{-1}\Phi^R_{n+1}(0),
\]
we obtain the following.
If $\Phi^R_n(z)A = \Phi^{L,*}_n(z)B$ for some $n \geq 1$, then $A = B = 0$.
If $A\Phi^L_n(z) = B\Phi^{R,*}_n(z)$ for some $n \geq 1$, then $A = B = 0$.
Proof. Assume $\Phi^R_n(z)A = \Phi^{L,*}_n(z)B$. Then, $A^H\Phi^{R,*}_n(z) = B^H\Phi^L_n(z)$. Setting $z = 0$, we
have $\Phi^R_n(0)A = B$ and $A^H = B^H\Phi^L_n(0)$, since $\Phi^{R,*}_n(0) = \Phi^{L,*}_n(0) = I$. As a consequence, we get
\[ 0 = \left(I - \Phi^R_n(0)\,\Phi^{L,H}_n(0)\right)B, \tag{1.52} \]
and since $\kappa^L_{n-1}$ and $I - \alpha^H_{n-1}\alpha_{n-1}$ are non-singular for $n \geq 1$, we get $B = 0$ and $A = 0$.
The other statement can be proved in a similar way.
1.4.2 Zeros
3. For each $n \geq 0$, $\det\left(\varphi^R_n(z)\right) = \det\left(\varphi^L_n(z)\right)$.
4. For each $n \geq 0$, $\det \kappa^R_n = \det \kappa^L_n$.
As a consequence, it follows that for $n \geq 0$, both $(\varphi^R_n(z))^{-1}$ and $(\varphi^L_n(z))^{-1}$ exist for $|z| \geq 1$,
and both $(\varphi^{R,*}_n(z))^{-1}$ and $(\varphi^{L,*}_n(z))^{-1}$ exist for $|z| < 1$. Moreover, since $\det \kappa^R_n = \det \kappa^L_n$
for $n \geq 1$, then
\[ \det\left(\Phi^R_n(z)\right) = \det\left(\Phi^L_n(z)\right), \qquad n \geq 0. \tag{1.53} \]
As a consequence, the n-th degree monic MOPUC, using the matrix Heine formula
[141], is
\[ \Phi^R_n(z) = \begin{pmatrix} \phi_n(z) & -\frac{n}{n+1}\,i\,\phi_{n-1}(z) \\[4pt] -\frac{n}{n+1}\,i\,\phi_{n-1}(z) & \phi_n(z) \end{pmatrix}, \qquad n \geq 0, \]
where $\phi_{-1}(z) = 0$, $\phi_0(z) = 1$ and, for $n \geq 1$,
\[ \phi_n(z) = \frac{1}{n+1}\sum_{k=0}^{n} \frac{1+(-1)^{n-k}}{2}\,(-1)^{\frac{n-k}{2}}\,(k+1)\,z^k. \]
Moreover,
\[ S^R_n = \frac{n+2}{2(n+1)}\, I_2, \qquad n \geq 0. \]
k   det(Φ^R_k(z))                                                          Zeros
1   z^2 + 1/4                                                              ±0.5i
2   z^4 − (2/9)z^2 + 1/9                                                   ±0.4714 ± 0.33333i
3   z^6 − (7/16)z^4 − (1/8)z^2 + 1/16                                      ±0.63833 ± 0.072085i, ±0.60583i
4   z^8 − (14/25)z^6 + (3/25)z^4 − (2/25)z^2 + 1/25                        ±0.67815 ± 0.13783i, ±0.35828 ± 0.53783i
5   z^10 − (23/36)z^8 + (5/18)z^6 + (1/12)z^4 − (1/18)z^2 + 1/36           ±0.66837 ± 0.29419i, ±0.57018 ± 0.37570i, ±0.67033i
6   z^12 − (34/49)z^10 + (19/49)z^8 − (4/49)z^6 + (3/49)z^4 − (2/49)z^2 + 1/49   ±0.28765 ± 0.63411i, ±0.63989 ± 0.41068i, ±0.68380 ± 0.20514i
Figure 1.1. Zeros of the first six MOPUC.
Notice that all the zeros of $\Phi^R_k(z)$ with $k = 1, \ldots, 6$ lie in $\mathbb{D}$. In the same way, the
reversed matrix polynomial of $\Phi^R_n(z)$ is
\[ \Phi^{R,*}_n(z) = \begin{pmatrix} \phi^*_n(z) & \frac{n}{n+1}\,iz\,\phi^*_{n-1}(z) \\[4pt] \frac{n}{n+1}\,iz\,\phi^*_{n-1}(z) & \phi^*_n(z) \end{pmatrix}, \qquad n \geq 0, \]
where $\phi^*_n(z) = \frac{1}{n+1}\sum_{k=0}^{n} \frac{1+(-1)^k}{2}\,(-1)^{\frac{k}{2}}\,(n+1-k)\,z^k$, $\phi^*_0(z) = 1$ and $\phi^*_{-1}(z) = 0$.
k   det(Φ^{R,∗}_k(z))                                                      Zeros
1   1 + (1/4)z^2                                                           ±2.0i
3   1 − (7/16)z^2 − (1/8)z^4 + (1/16)z^6                                   ±1.5469 ± 0.17469i, ±1.6506i
4   1 − (14/25)z^2 + (3/25)z^4 − (2/25)z^6 + (1/25)z^8                     ±1.4161 ± 0.28782i, ±0.8579 ± 1.2878i
5   1 − (23/36)z^2 + (5/18)z^4 + (1/12)z^6 − (1/18)z^8 + (1/36)z^10        ±1.2533 ± 0.55169i, ±1.2229 ± 0.80579i, ±1.4918i
6   1 − (34/49)z^2 + (19/49)z^4 − (4/49)z^6 + (3/49)z^8 − (2/49)z^10 + (1/49)z^12   ±0.59329 ± 1.3079i, ±1.1068 ± 0.71038i, ±1.3417 ± 0.40251i
Figure 1.2. Zeros of the first six reversed MOPUC.
for the right and left cases, respectively. It is shown in [42] that for $n \geq 1$ they satisfy the
Christoffel-Darboux formulas
\[ K^R_n(\zeta,z) = \frac{\varphi^{L,*}_{n+1}(z)\,\varphi^{L,*}_{n+1}(\zeta)^H - \varphi^R_{n+1}(z)\,\varphi^R_{n+1}(\zeta)^H}{1 - z\bar\zeta}, \tag{1.56} \]
\[ K^L_n(z,\zeta) = \frac{\varphi^{R,*}_{n+1}(\zeta)^H\,\varphi^{R,*}_{n+1}(z) - \varphi^L_{n+1}(\zeta)^H\,\varphi^L_{n+1}(z)}{1 - z\bar\zeta}, \tag{1.57} \]
\[ K^R_n(\zeta,z) = \frac{\varphi^{L,*}_n(z)\,\varphi^{L,*}_n(\zeta)^H - z\bar\zeta\,\varphi^R_n(z)\,\varphi^R_n(\zeta)^H}{1 - z\bar\zeta}, \tag{1.58} \]
\[ K^L_n(z,\zeta) = \frac{\varphi^{R,*}_n(\zeta)^H\,\varphi^{R,*}_n(z) - z\bar\zeta\,\varphi^L_n(\zeta)^H\,\varphi^L_n(z)}{1 - z\bar\zeta}, \tag{1.59} \]
for $z\bar\zeta \neq 1$.
1.
\[ \varphi^{R,*}_n(z) = (\kappa^R_n)^{-1}K^L_n(z,0) \quad \text{and} \quad \varphi^{L,*}_n(z) = K^R_n(0,z)\,(\kappa^L_n)^{-1}. \tag{1.60} \]
2.
\[ K^L_n(0,0) = \kappa^R_n(\kappa^R_n)^H = S^{-R}_n \quad \text{and} \quad K^R_n(0,0) = (\kappa^L_n)^H\kappa^L_n = S^{-L}_n. \tag{1.61} \]
3.
\[ (\kappa^{-L}_n)^H\varphi^R_n(0) = \varphi^L_n(0)\,(\kappa^{-R}_n)^H \quad \text{and} \quad \varphi^{R,H}_n(0)\,\kappa^{-L}_n = \kappa^{-R}_n\,\varphi^{L,H}_n(0). \tag{1.62} \]
4.
\[ \left(\kappa^L_{n-1}\kappa^{-L}_n\right)^H\kappa^L_{n-1}\kappa^{-L}_n = I - \varphi^L_n(0)\,(\kappa^{-R}_n)^H\kappa^{-R}_n\,\varphi^{L,H}_n(0) = I - \varphi^L_n(0)\,K^L_n(0,0)^{-1}\varphi^{L,H}_n(0), \tag{1.63} \]
and
\[ \kappa^{-R}_n\kappa^R_{n-1}\left(\kappa^{-R}_n\kappa^R_{n-1}\right)^H = I - \varphi^{R,H}_n(0)\,\kappa^{-L}_n(\kappa^{-L}_n)^H\varphi^R_n(0) = I - \varphi^{R,H}_n(0)\,K^R_n(0,0)^{-1}\varphi^R_n(0). \tag{1.64} \]
5. For any matrix polynomial $\pi \in \mathbb{P}^l_n[z]$ and $\zeta \in \mathbb{C}$, the kernel polynomials have the
reproducing properties ([42])
\[ \langle \pi(z), K^R_n(\zeta,z)\rangle_{R,\sigma} = \pi(\zeta)^H \quad \text{and} \quad \langle K^L_n(z,\zeta), \pi(z)\rangle_{L,\sigma} = \pi(\zeta)^H. \tag{1.65} \]
Proof.
\[ S^R_n = \langle \Phi^R_n, \Phi^R_n\rangle_{R,\sigma} = \langle \varphi^R_n\kappa^{-R}_n, \varphi^R_n\kappa^{-R}_n\rangle_{R,\sigma} = (\kappa^{-R}_n)^H\kappa^{-R}_n. \]
and taking $z = \zeta$,
\[ \varphi^{L,*}_n(z)\varphi^L_n(z) - \varphi^R_n(z)\varphi^{R,*}_n(z) = z\left(\varphi^{L,*}_{n-1}(z)\varphi^L_{n-1}(z) - \varphi^R_{n-1}(z)\varphi^{R,*}_{n-1}(z)\right) = \cdots = z^n\left(\varphi^{L,*}_0(z)\varphi^L_0(z) - \varphi^R_0(z)\varphi^{R,*}_0(z)\right). \]
Then
\[ (\kappa^L_n)^H\kappa^L_n - (\kappa^L_{n-1})^H\kappa^L_{n-1} = K^R_n(0,0) - K^R_{n-1}(0,0) = \varphi^R_n(0)\,\varphi^{R,H}_n(0), \]
and therefore
\[ \left(\kappa^L_{n-1}\kappa^{-L}_n\right)^H\kappa^L_{n-1}\kappa^{-L}_n = I - (\kappa^{-L}_n)^H\varphi^R_n(0)\,\varphi^{R,H}_n(0)\,\kappa^{-L}_n. \]
Then, (1.61) and (1.62) imply (1.63); (1.64) follows in a similar way, since $K^L_n(0,0)^{-1}$ and
$K^R_n(0,0)^{-1}$ are positive semi-definite matrices.
Moreover,
\[ S^R_n = S^L_n = \langle \Phi_n, \Phi_n\rangle_{R,\sigma} = \int_{-\pi}^{\pi} e^{-in\theta}\,\Omega\,e^{in\theta}\,\frac{d\theta}{2\pi} = \Omega > 0. \]
Thus, $\varphi^{L,*}_n(z) = \Phi^{L,*}_n(z)S^{-L/2}_n = \Omega^{-1/2}$, hence the kernel matrix polynomial is
\[ K^R_n(\zeta,z) = \frac{\Omega^{-1} - z^{n+1}\Omega^{-1/2}\left(\zeta^{n+1}\Omega^{-1/2}\right)^H}{1 - z\bar\zeta} = \Omega^{-1}\,\frac{1-(z\bar\zeta)^{n+1}}{1-z\bar\zeta} = \Omega^{-1}\sum_{k=0}^{n}(z\bar\zeta)^k. \]
In particular, if $z = \zeta \in \mathbb{T}$,
\[ K^R_n(\zeta,\zeta) = (n+1)\,\Omega^{-1}. \]
when we use right matrix polynomials or the right sesquilinear form, and
\[ \pi(A) = B_kA^k + \cdots + B_1A + B_0 \quad \left(\text{in particular } \varphi^L_k(A) = \kappa^L_kA^k + \cdots\right) \]
for left matrix polynomials or the left sesquilinear form. With this notation, the repro-
ducing property of the kernel matrix polynomial (1.65) can be extended to
\[ \langle \pi(z), K^R_n(A,z)\rangle_{R,\sigma} = \pi(A)^H \quad \text{and} \quad \langle K^L_n(z,A), \pi(z)\rangle_{L,\sigma} = \pi(A)^H. \tag{1.68} \]
Indeed, let $\pi(z) = \sum_{i=0}^{k}\varphi^R_i(z)\lambda_i$, with $0 \leq k \leq n$. Then, we have
\[ \langle \pi(z), K^R_n(A,z)\rangle_{R,\sigma} = \left\langle \sum_{i=0}^{k}\varphi^R_i(z)\lambda_i,\ \sum_{j=0}^{n}\varphi^R_j(z)\,\varphi^R_j(A)^H \right\rangle_{R,\sigma} = \sum_{i=0}^{k}\lambda^H_i\,\varphi^R_i(A)^H = \pi(A)^H. \]
that
\[ F(z) = \int_{-\pi}^{\pi} \frac{e^{i\theta}+z}{e^{i\theta}-z}\,d\sigma(\theta). \tag{1.69} \]
The matrix measure σ is given by the unique weak limit of the measures $d\sigma_r(\theta) =
\operatorname{Re}F(re^{i\theta})\,\frac{d\theta}{2\pi}$ as $r \uparrow 1$. Moreover, we have
\[ F(z) = c_0 + 2\sum_{n=1}^{\infty} c_{-n}z^n = I + 2\sum_{n=1}^{\infty} c_{-n}z^n, \tag{1.70} \]
where $c_n$ is the n-th matrix moment associated with the measure σ, i.e.,
\[ c_n = \langle I, z^nI\rangle_{R,\sigma} = \langle z^nI, I\rangle_{L,\sigma} = \int_{-\pi}^{\pi} e^{in\theta}\,d\sigma(\theta), \qquad n \in \mathbb{Z}. \tag{1.71} \]
Example 1.4.9. Consider the positive semi-definite matrix measure (1.54) supported on
$\mathbb{T}$. The sequence of matrix moments is
\[ c_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad c_1 = \frac{i}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad c_{-1} = -\frac{i}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \]
and
\[ c_n = c_{-n} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{for } n \geq 2. \]
Therefore, the respective matrix-valued Carathéodory function is
\[ F(z) = \begin{pmatrix} 1 & -iz \\ -iz & 1 \end{pmatrix}. \]
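The expansion (1.70) can be evaluated directly from the moments. For Example 1.4.9 only $c_{-1}$ is non-zero, so the series is finite; the sketch below (the helper `caratheodory` is illustrative, not from the thesis) reproduces $F(z)$:

```python
import numpy as np

def caratheodory(z, c_neg):
    """Evaluate F(z) = I + 2 * sum_{n>=1} c_{-n} z^n from a finite list of
    moments c_neg = [c_{-1}, c_{-2}, ...], cf. (1.70)."""
    F = np.eye(c_neg[0].shape[0], dtype=complex)
    for n, c in enumerate(c_neg, start=1):
        F = F + 2 * c * z**n
    return F

# Moments of Example 1.4.9: c_{-1} = -(i/2) [[0,1],[1,0]], c_{-n} = 0 for n >= 2.
c_m1 = -0.5j * np.array([[0, 1], [1, 0]])
```

With these data, `caratheodory(z, [c_m1])` indeed returns the matrix $\begin{pmatrix}1 & -iz\\ -iz & 1\end{pmatrix}$.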
The following result is a direct consequence of the matrix version of the Riesz-Herglotz
Theorem.
Proof. Let
\[ F(z) = I + 2\sum_{n=1}^{\infty} c_{-n}z^n \quad \text{and} \quad \tilde F(z) = I + 2\sum_{n=1}^{\infty} \tilde c_{-n}z^n. \]
Then, comparing coefficients,
\[ c_n = \tilde c_n. \]
Let $\mathcal{X}(z) = [I, Iz^{-1}, Iz, Iz^{-2}, Iz^2, \ldots]^T$ be a block vector, and denote the CMV left and
right moment matrices associated with the matrix measure σ by
\[ F_R := \langle \mathcal{X}(z)^T, \mathcal{X}(z)^T\rangle_{R,\sigma} = \begin{pmatrix} c_0 & c_{-1} & c_1 & c_{-2} & \cdots \\ c_1 & c_0 & c_2 & c_{-1} & \cdots \\ c_{-1} & c_{-2} & c_0 & c_{-3} & \cdots \\ c_2 & c_1 & c_3 & c_0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \tag{1.72} \]
\[ F_L := \langle \mathcal{X}(z), \mathcal{X}(z)\rangle_{L,\sigma} = \begin{pmatrix} c_0 & c_1 & c_{-1} & c_2 & \cdots \\ c_{-1} & c_0 & c_{-2} & c_1 & \cdots \\ c_1 & c_2 & c_0 & c_3 & \cdots \\ c_{-2} & c_{-1} & c_{-3} & c_0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \tag{1.73} \]
respectively. Notice that if the matrix measure σ is Hermitian, then so are $F_R$ and $F_L$.
Let $(F_K)_n \in \mathbb{M}_{nl}$ be the truncated moment matrices, with $K = R, L$. The matrix measure
σ is said to be quasi-definite if $(F_K)_n$ is non-singular for every $n \geq 1$.
Definition 1.4.11. Given a quasi-definite matrix measure σ, the sets of monic matrix
polynomials $(\Phi^R_{i,n}(z))_{n\geq 0}$, $(\Phi^L_{i,n}(z))_{n\geq 0}$, $i = 1, 2$, are said to be biorthogonal with respect
to σ if
2. $\langle \Phi^R_{2,k}, \Phi^R_{1,n}\rangle_{R,\sigma} = \delta_{k,n}S^R_{2,1,n}$ and $\langle \Phi^L_{1,k}, \Phi^L_{2,n}\rangle_{L,\sigma} = \delta_{k,n}S^L_{1,2,n}$, with $S^R_{2,1,n} \neq 0$ and
$S^L_{1,2,n} \neq 0$.
Proposition 1.4.12. ([6] Proposition 2) Given a quasi-definite matrix measure σ,
there exist sequences of monic biorthogonal matrix polynomials $(\Phi^R_{i,n}(z))_{n\geq 0}$,
$(\Phi^L_{i,n}(z))_{n\geq 0}$, $i = 1, 2$, and they are unique.
Here, it is important to notice the order of the matrix polynomials in the sesquilinear
form; if $k \neq n$, then $\langle \Phi^R_{1,k}, \Phi^R_{2,n}\rangle_{R,\sigma}$ and $\langle \Phi^L_{1,k}, \Phi^L_{2,n}\rangle_{L,\sigma}$ could be different from 0. Notice
that this constitutes a more general framework to study matrix orthogonal polynomials.
However, if σ is Hermitian,
\[ \langle \Phi^R_{i,k}, \Phi^R_{i,n}\rangle_{R,\sigma} = \delta_{k,n}S^R_{i,n} \quad \text{and} \quad \langle \Phi^L_{i,n}, \Phi^L_{i,k}\rangle_{L,\sigma} = \delta_{k,n}S^L_{i,n}. \]
Necessary conditions for σ to belong to the Nevai or Szegő class can be given in terms
of the Verblunsky matrix coefficients, as follows.
Theorem 1.4.15. (Rakhmanov’s Theorem [168]). If the Hermitian matrix measure σ ∈
N , then
\[ \lim_{n\to\infty} \alpha_n = 0. \]
Notice that S ⊂ N . For the scalar case, the Nevai and Szegő conditions were introduced
in [166]. In the matrix case, the Szegő matrix condition was introduced in [48] and some
well known results in the scalar case, such as the Bernstein-Szegő Theorem, were extended
to the matrix case in [49].
When $z \in \mathbb{T}$ and $z = \zeta$, the Christoffel-Darboux formula (1.57) becomes
\[ \varphi^L_n(z)^H\varphi^L_n(z) = \varphi^{R,*}_n(z)^H\varphi^{R,*}_n(z), \]
and rearranging the terms, $\left(\varphi^L_n(z)\varphi^{R,*}_n(z)^{-1}\right)^H\left(\varphi^L_n(z)\varphi^{R,*}_n(z)^{-1}\right) = I$, so that
$\varphi^L_n(z)\varphi^{R,*}_n(z)^{-1}$ is a unitary matrix. For $z \in \bar{\mathbb{D}}$, let $\Delta(z) = \varphi^L_n(z)\varphi^{R,*}_n(z)^{-1}$. Then $\Delta$
is analytic in $\mathbb{D}$, continuous in $\bar{\mathbb{D}}$ and $\|\Delta(z)\|_2 = 1$ on $\mathbb{T}$, so by the maximum principle
\[ \left\|\varphi^L_n(z)\varphi^{R,*}_n(z)^{-1}\right\|_2 \leq 1 \quad \text{for } |z| \leq 1, \]
or, equivalently,
\[ \left\|\varphi^R_n(z)^{-1}\varphi^{L,*}_n(z)\right\|_2 \leq 1 \quad \text{for } |z| \geq 1. \tag{1.74} \]
In the scalar case, it is very well known (see [150]) that if αn → 0, then
\[ \lim_{n\to\infty} \frac{\varphi_{n+1}(z)}{\varphi_n(z)} = z, \qquad |z| > 1, \quad \text{with } \varphi_n = \varphi^R_n = \varphi^L_n. \]
In [173], the authors extended the result for MOPUC associated with a positive defi-
nite matrix measure in the Szegő class. The following proposition generalizes the ratio
asymptotics for Hermitian matrix measures in the Nevai class.
and since $\alpha_n \to 0$ implies $\rho^R_n \to I$, (1.78) implies that (1.75) holds if $\sigma \in \mathcal{N}$. On the other
hand, (1.76) follows from the definition of the reversed polynomials.
\[ \lim_{n\to\infty} \varphi^{L,*}_{n+1}(z)^{-1}\varphi^R_{n+1}(z) = \lim_{n\to\infty} z\,\varphi^{L,*}_n(z)^{-1}\varphi^R_n(z) = \cdots = \lim_{n\to\infty} z^n I = 0. \]
Proof. For $z \in \mathbb{C}\setminus\bar{\mathbb{D}}$, the result follows from the previous propositions and (1.56), (1.57).
Let $z \in \mathbb{T}$. From (1.56), we have
\[ (1-z\bar\zeta)\,\varphi^R_n(z)^{-1}K^R_{n-1}(\zeta,z)\left(\varphi^{R,H}_n(\zeta)\right)^{-1} = \varphi^R_n(z)^{-1}\varphi^{L,*}_n(z)\,\varphi^{L,*}_n(\zeta)^H\left(\varphi^{R,H}_n(\zeta)\right)^{-1} - I, \]
As a consequence, from (1.80) we get (1.82). The result for the left kernel is obtained in
a similar way.
\[ \lim_{n\to\infty} \varphi^L_n(\zeta)\,K^L_n(\zeta,\zeta)^{-1}\varphi^{L,H}_n(\zeta) = 0. \]
The following result establishes bounds for the leading principal coefficients of the
orthonormal matrix polynomials.
Proposition 1.4.22.
Similarly,
\[ \sigma \in \mathcal{S} \quad \text{if and only if} \quad \lim_{n\to\infty}\|\kappa^R_n\|_2 < \infty. \tag{1.85} \]
Proof. Let $A_n(0) = (\kappa^L_n)^H\kappa^L_n$, and notice that
\[ \|\kappa^L_n\|^2_2 = \Pi\!\left((\kappa^L_n)^H\kappa^L_n\right) = \Pi(A_n(0)). \tag{1.86} \]
If $\sigma \in \mathcal{S}$, from [48] (Theorem 17), $\lim_{n\to\infty}\Pi(A_n(0)) < \infty$, and then from (1.86),
$\lim_{n\to\infty}\|\kappa^L_n\|_2 < \infty$.
Conversely, if $\lim_{n\to\infty}\|\kappa^L_n\|_2 < \infty$, then from (1.86), $\Pi(A_n(0)) < \infty$ for every $n \in \mathbb{N}$,
and thus the numbers $\det(A_n(0))$ are bounded from above (since the product of the eigenvalues
of a matrix equals its determinant). Therefore, from [48] (Theorem 19) and Theorem
1.4.16, it follows that $\sigma \in \mathcal{S}$. In a similar way, (1.85) can be proved.
Proof. If $\sigma \in \mathcal{S}$, from [48] the sequence $(A_n(z))_{n\geq 0}$ converges uniformly on every compact
subset of $\mathbb{D}$, where $A_n(z) = U_n(z)\,\mathbf{T}^{-1}_{n+1}U_n(0)^T$ with $U_n(z) = [I, \cdots, z^n I]$ and $\mathbf{T}_{n+1} \in
\mathbb{M}_{l(n+1)}$ a block Toeplitz matrix. Moreover,
\[ \varphi^L_n(z) = z^n(\kappa^{-L}_n)^H A_n(1/\bar z) \quad \text{and} \quad A_n(0) = (\kappa^L_n)^H\kappa^L_n. \tag{1.87} \]
On the other hand, using the Christoffel-Darboux formula (1.56) with $z = \zeta \in \mathbb{D}$ and
(1.87), we have
\[ \lim_{n\to\infty} \varphi^R_n(z)\,\varphi^R_n(z)^H = \lim_{n\to\infty} \varphi^L_n(z)^H\varphi^L_n(z) = 0, \]
thus
\[ \lim_{n\to\infty} \|\varphi^R_n(z)\|_2 = \lim_{n\to\infty} \|\varphi^L_n(z)\|_2 = 0. \]
In this section, we are interested in extending to the matrix case some properties from the
Szegő transformation. The matrix version of the Szegő transformation has been studied
in [42, 51, 172]. Properties such as the relation between the associated Stieltjes and
Carathéodory functions in the scalar case are given in Theorem 13.1.2 of [157]. It will
be the key tool to analyze the effect on some spectral transformations caused by the
application of the matrix Szegő transformation in Chapters 2 and 5.
A matrix measure σ(θ) supported on $\mathbb{T}$ that is invariant under $\theta \mapsto -\theta$ (i.e., $z \mapsto
\bar z = z^{-1}$) is called symmetric (in Section 4.2, we will study some properties of symmetric
matrix measures, as well as the relation with block CMV matrices). If µ(x) is a normalized
positive semi-definite matrix measure supported on [−1, 1], then it is possible to define a
normalized symmetric positive semi-definite matrix measure σ(θ) supported on (−π, π] by
\[ d\sigma(\theta) = \begin{cases} \ \ \tfrac{1}{2}\,d\mu(\cos\theta), & -\pi \leq \theta \leq 0,\\[2pt] -\tfrac{1}{2}\,d\mu(\cos\theta), & \ \ 0 \leq \theta \leq \pi. \end{cases} \tag{1.88} \]
This is the so-called Szegő matrix transformation (see [42, 51]) that relates normalized
matrix measures supported on [−1, 1] with normalized (symmetric) matrix measures sup-
ported on $\mathbb{T}$. We will denote this transformation by $d\sigma = Sz(d\mu)$. These measures satisfy
\[ \int_{-1}^{1} f(x)\,d\mu(x) = \int_{-\pi}^{\pi} f(\cos\theta)\,d\sigma(\theta) \tag{1.89} \]
for every integrable function f on [−1, 1]. Similarly, the inverse Szegő matrix transforma-
tion satisfies
\[ \int_{-\pi}^{\pi} g(\theta)\,d\sigma(\theta) = \int_{-1}^{1} g(\arccos x)\,d\mu(x) \]
for every integrable function g on $\mathbb{T}$ with $g(\theta) = g(-\theta)$. We will denote this relation by
$d\mu = Sz^{-1}(d\sigma)$.
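Since σ is symmetric, $\int \sin(n\theta)\,d\sigma(\theta) = 0$, so (1.89) applied to $f = T_n$ gives $c_n = \int_{-1}^{1} T_n(x)\,d\mu(x)$ for the moments of $\sigma = Sz(d\mu)$. A scalar numerical sketch (the helper and the choice of a semicircle density are illustrative, not from the thesis):

```python
import numpy as np

def circle_moment(n, mu_density, m=200001):
    """c_n = ∫ T_n(x) dµ(x) for dσ = Sz(dµ), computed with the substitution
    x = cos(theta), so that T_n(x) = cos(n*theta).
    mu_density(x): scalar density of µ on (-1, 1)."""
    theta = np.linspace(0.0, np.pi, m)
    f = np.cos(n * theta) * mu_density(np.cos(theta)) * np.sin(theta)
    h = theta[1] - theta[0]
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * h   # trapezoid rule

semicircle = lambda x: (2 / np.pi) * np.sqrt(1 - x**2)
```

For the semicircle density one gets $c_0 = 1$, $c_1 = 0$ (odd moment) and $c_2 = -\tfrac12$, since $\int T_2(x)\,\tfrac{2}{\pi}\sqrt{1-x^2}\,dx = \tfrac{2}{\pi}\left(\tfrac{\pi}{4} - \tfrac{\pi}{2}\right)$.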
In [172], the authors show that when the measures µ and dσ = Sz(dµ) are positive
semi-definite, there exists a relation between the corresponding sequences of orthogonal
matrix polynomials, as follows.
As a consequence, the n-th degree monic MOPUC, using the matrix Heine formula
[141], is
\[ \Phi^R_n(z) = \begin{pmatrix} \phi_n(z) & -\frac{n}{n+1}\,\phi_{n-1}(z) \\[4pt] -\frac{n}{n+1}\,\phi_{n-1}(z) & \phi_n(z) \end{pmatrix}, \qquad n \geq 1, \]
where $\phi_n(z) = \frac{1}{n+1}\sum_{k=0}^{n} \frac{1+(-1)^{n-k}}{2}\,(k+1)z^k$ and $\phi_0(z) = 1$. Taking into account that
\[ \langle \Phi^R_n, \Phi^R_n\rangle_{R,\sigma} = \frac{n+2}{2(n+1)}\, I_2, \]
i.e.,
\[ Sz\!\left(\frac{1}{\pi}\frac{1}{\sqrt{1-x^2}}\,\mathcal{W}(\arccos x)\,dx\right) = \pi|\sin\theta|\,\omega(\cos\theta)\,\frac{d\theta}{2\pi}. \tag{1.94} \]
The proof is analogous to the one given in [157], Proposition 13.1.1. In particular, for
$\mathcal{W}(\theta) = \cos^k\theta$, with $k \geq 0$, we have
\[ Sz^{-1}\!\left(\cos^k\theta\,\frac{d\theta}{2\pi}\right) = \frac{x^k}{\pi\sqrt{1-x^2}}\,dx. \tag{1.95} \]
Notice that, if σ is a symmetric matrix measure, its associated sequence of moments consists
of Hermitian matrices, since
\[ c_n = \int_{-\pi}^{\pi} e^{in\theta}\,d\sigma(\theta) = \int_{-\pi}^{\pi} e^{-in\theta}\,d\sigma(\theta) = c_{-n} = c^H_n. \tag{1.96} \]
The following result establishes a relation between the Verblunsky matrix coefficients
and the coefficients of the three-term recurrence relation.
The above equations are known as Geronimus relations, and have been proved in
[51, 172].
There is also a relation between the matrix-valued Stieltjes function and the matrix-
valued Carathéodory function associated with µ and σ, respectively, as follows.
Note that
\[ \frac{1}{2}\left(\frac{e^{i\theta}+z}{e^{i\theta}-z} + \frac{e^{-i\theta}+z}{e^{-i\theta}-z}\right) = \frac{1-z^2}{z\left(z + z^{-1} - 2\cos\theta\right)}. \]
Thus,
\[ F(z) = \frac{1}{2}\int_{-\pi}^{\pi} \frac{e^{i\theta}+z}{e^{i\theta}-z}\,d\sigma(\theta) + \frac{1}{2}\int_{-\pi}^{\pi} \frac{e^{-i\theta}+z}{e^{-i\theta}-z}\,d\sigma(\theta) \]
\[ = \frac{1}{4}\int_{-\pi}^{0}\left(\frac{e^{i\theta}+z}{e^{i\theta}-z} + \frac{e^{-i\theta}+z}{e^{-i\theta}-z}\right)d\mu(\cos\theta) - \frac{1}{4}\int_{0}^{\pi}\left(\frac{e^{i\theta}+z}{e^{i\theta}-z} + \frac{e^{-i\theta}+z}{e^{-i\theta}-z}\right)d\mu(\cos\theta) \]
\[ = \frac{1}{2z}\left(\int_{-\pi}^{0} \frac{1-z^2}{z + z^{-1} - 2\cos\theta}\,d\mu(\cos\theta) - \int_{0}^{\pi} \frac{1-z^2}{z + z^{-1} - 2\cos\theta}\,d\mu(\cos\theta)\right), \]
and, setting $t = \cos\theta$ and $x = \tfrac{1}{2}(z + z^{-1})$,
\[ F(z) = \frac{1-z^2}{2z}\left(\frac{1}{2}\int_{-1}^{1}\frac{1}{x-t}\,d\mu(t) - \frac{1}{2}\int_{1}^{-1}\frac{1}{x-t}\,d\mu(t)\right) = \frac{1-z^2}{2z}\int_{-1}^{1}\frac{1}{x-t}\,d\mu(t). \]
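The resulting relation between the Carathéodory and Stieltjes functions, $F(z) = \frac{1-z^2}{2z}S(x)$ with $x = \frac{1}{2}(z+z^{-1})$, can be checked numerically in the scalar case: for the Chebyshev measure $d\mu = dx/(\pi\sqrt{1-x^2})$, the Szegő transformation gives the normalized Lebesgue measure, whose Carathéodory function is $F \equiv 1$. A purely illustrative sketch:

```python
import numpy as np

def stieltjes(x, m=200001):
    """S(x) = ∫ dµ(t)/(x - t) for the scalar Chebyshev measure
    dµ = dt/(π sqrt(1-t²)); the substitution t = cos(u) removes the
    endpoint singularity of the weight."""
    u = np.linspace(0.0, np.pi, m)
    f = (1 / np.pi) / (x - np.cos(u))
    h = u[1] - u[0]
    return (np.sum(f) - 0.5 * (f[0] + f[-1])) * h   # trapezoid rule

z = 0.5
x = 0.5 * (z + 1 / z)                 # x = (z + z^{-1})/2 = 1.25
F = (1 - z**2) / (2 * z) * stieltjes(x)
```

Here $S(1.25) = 1/\sqrt{1.25^2 - 1} = 4/3$, and the product evaluates to $F = 1$, as expected.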
CHAPTER 2
Summary
The structure of the chapter is as follows. In the first section, given a sequence of
moments (complex numbers) associated with a Hermitian linear functional L defined in
the space of Laurent polynomials, we study a new functional LΩ which is a perturbation of
L in such a way that a finite number of moments are perturbed. Necessary and sufficient
conditions are given for the regularity of LΩ , and a connection formula between the corre-
sponding families of orthogonal polynomials is obtained. On the other hand, assuming LΩ
is positive definite, the perturbation is analyzed through the inverse Szegő transformation.
We point out that this is the only part of the document where we are restricted to the case
l = 1 (scalar case). Of course, these results can be obtained from Section 2.2 by setting
l = 1, but we decided to include them in a separate section since the formulas are slightly
different, because the use of scalar objects allows several simplifications. Moreover, in this
case we deal with the more general case of a linear functional. Later on, some of the results
are extended to the matrix case. First, we introduce a perturbation on a matrix measure
supported on the unit circle that results in a perturbation of the corresponding sequence of
matrix moments, and analyze the resulting perturbation on the associated matrix measure
on the real line, when both measures are related through the Szegő matrix transformation.
This is the main contribution of this chapter. Then, we deal with connection formulas and
some other properties associated with the corresponding matrix orthogonal polynomial
sequences on the real line and on the unit circle, respectively. Connection formulas for
perturbations of matrix moments have been studied previously in [40, 41]. Finally, we
illustrate the obtained results with an example.
Chapter 2. Analysis of perturbation of moments
2.1.1 Introduction
Consider a linear functional L (for more information about linear functionals, see [39]) defined
in the linear space of Laurent polynomials $\Lambda = \operatorname{span}\{z^n\}_{n\in\mathbb{Z}}$ such that L is Hermitian,
i.e.,
\[ c_n = \langle L, z^n\rangle = \overline{\langle L, z^{-n}\rangle} = \bar c_{-n}, \qquad n \in \mathbb{Z}. \]
Then, a bilinear functional can be defined in the linear space $\mathbb{P}_1[z] = \operatorname{span}\{z^n\}_{n\geq 0}$ of
polynomials with complex coefficients by $\langle p, q\rangle_L = \langle L, p(z)\,\bar q(z^{-1})\rangle$.
The sequence of complex numbers $(c_n)_{n\in\mathbb{Z}}$ is called the sequence of moments associated
with L. On the other hand, a sequence of monic polynomials $(\Phi_n(z))_{n\geq 0}$, with $\deg(\Phi_n) =
n$, is said to be orthogonal with respect to L if the condition
\[ \langle \Phi_n, \Phi_m\rangle_L = \delta_{n,m}S_n, \]
where $S_n \neq 0$, holds for every $n, m \geq 0$. The necessary and sufficient conditions for
the existence of such a sequence can be expressed in terms of the Toeplitz matrix $\mathbf{T} =
\lim_{n\to\infty}\mathbf{T}_n$, where $\mathbf{T}_n$ is defined by
\[ \mathbf{T}_n = \begin{pmatrix} c_0 & c_1 & \cdots & c_{n-1} \\ c_{-1} & c_0 & \cdots & c_{n-2} \\ \vdots & \vdots & \ddots & \vdots \\ c_{-(n-1)} & c_{-(n-2)} & \cdots & c_0 \end{pmatrix}. \]
The sequence $(\Phi_n(z))_{n\geq 0}$ satisfies the orthogonality condition if and only if $\mathbf{T}_n$ is non-
singular for every $n \geq 0$. In such a case, L is said to be quasi-definite (or regular). On the
other hand, if $\det \mathbf{T}_n > 0$ for every $n \geq 1$, then L is said to be positive definite and it has
the integral representation
\[ \langle p, q\rangle_\sigma = \langle p, q\rangle_L = \int_{\mathbb{T}} p(z)\,\overline{q(z)}\,\frac{d\sigma(z)}{iz} = \int_{-\pi}^{\pi} p(e^{i\theta})\,\overline{q(e^{i\theta})}\,d\sigma(\theta), \qquad p, q \in \mathbb{P}_1[z], \tag{2.1} \]
where σ is a nontrivial positive Borel measure supported on the unit circle $\mathbb{T}$. In such
a case, there exists a (unique) family of polynomials $(\varphi_n(z))_{n\geq 0}$, with $\deg \varphi_n = n$ and
positive leading coefficient, such that
\[ \int_{\mathbb{T}} \varphi_n(z)\,\overline{\varphi_m(z)}\,d\sigma(z) = \delta_{m,n}. \]
Notice that (2.1) is a particular case of the sesquilinear forms (1.23) and (1.24).
In [30], the authors studied the perturbation associated with the linear functional
\[ \langle p, q\rangle_{\tilde L} := \langle p, q\rangle_L + m\int_{\mathbb{T}} p(z)\,\overline{q(z)}\,\frac{dz}{iz}, \qquad p, q \in \mathbb{P}_1[z], \]
where $m \in \mathbb{R}$ and L is (at least) a quasi-definite Hermitian linear functional defined in the
linear space of Laurent polynomials. Notice that all moments associated with $\tilde L$ are equal
to the moments associated with L, except for the first moment, which is $\tilde c_0 = c_0 + m$.
The corresponding Toeplitz matrix $\tilde{\mathbf{T}}$ is the result of adding m to the main diagonal of $\mathbf{T}$.
where $j \in \mathbb{N}$ is fixed and $\langle\cdot,\cdot\rangle_{L_\theta}$ is the bilinear functional associated with the normalized
Lebesgue measure on the unit circle, was studied in [31]. It is easily seen that the moments
associated with $L_j$ are equal to those of L, except for the moments of order j and −j, which
are perturbed by adding m and $\bar m$, respectively. In other words, the corresponding Toeplitz
matrix is perturbed on the j-th and (−j)-th subdiagonals. In both cases, the authors
obtained the regularity conditions for such a linear functional and deduced connection
formulas between the corresponding orthogonal sequences.
Assuming that both L and $L_j$ are positive definite, the perturbation (2.2) can be
expressed in terms of the corresponding measures as
\[ d\tilde\sigma(\theta) = d\sigma(\theta) + mz^j\,\frac{d\theta}{2\pi} + \bar m z^{-j}\,\frac{d\theta}{2\pi}. \tag{2.3} \]
On the other hand, the connection between the measure (2.3) and its corresponding
measure supported in [−1, 1] via the inverse Szegő transformation was analyzed in [77],
and it is deduced that the perturbed moments on the real line depend on the Chebyshev
polynomials of the first kind.
In this section, we will extend those results to the case where a perturbation of a finite
number of moments is introduced in (2.2). In the next subsection, for the positive definite
case, the perturbation is studied through the inverse Szegő transformation.
In Subsection 2.1.3 necessary and sufficient conditions for the regularity of the perturbed
functional are obtained, as well as a connection formula that relates the corresponding
families of monic orthogonal polynomials. An illustrative example will be presented in the
last subsection.
Let L be a quasi-definite linear functional on the linear space of Laurent polynomials, and
let (cn )n∈Z be its associated sequence of moments.
Definition 2.1.1. Let Ω be a finite set of non-negative integers. The linear functional
$L_\Omega$ is defined such that the associated bilinear functional satisfies
\[ \langle p, q\rangle_{L_\Omega} = \langle p, q\rangle_L + \sum_{r\in\Omega}\left(m_r\langle z^r p, q\rangle_{L_\theta} + \bar m_r\langle p, z^r q\rangle_{L_\theta}\right), \qquad p, q \in \mathbb{P}_1[z], \tag{2.4} \]
where $m_r \in \mathbb{C}$ and
\[ \langle p, q\rangle_{L_\theta} = \int_{-\pi}^{\pi} p(e^{i\theta})\,\overline{q(e^{i\theta})}\,\frac{d\theta}{2\pi}, \qquad p, q \in \mathbb{P}_1[z], \]
is the bilinear functional associated with the normalized Lebesgue measure on the unit
circle.
On the other hand, if $F_\Omega(z)$ is the Carathéodory function associated with $L_\Omega$, then from
(1.70),
\[ F_\Omega(z) = F(z) + 2\sum_{r\in\Omega} m_r z^r, \]
Proposition 2.1.2. Let σ be a positive nontrivial Borel measure with real moments sup-
ported on the unit circle, let µ be its corresponding measure supported in [−1, 1], such that
$d\mu = Sz^{-1}(d\sigma)$, and define $\tilde\sigma_\Omega$ as in (2.5). Then, the measure $\tilde\mu_\Omega = Sz^{-1}(\tilde\sigma_\Omega)$ is given by
\[ d\tilde\mu_\Omega(x) = d\mu(x) + 2\sum_{r\in\Omega} m_r T_r(x)\,\frac{dx}{\pi\sqrt{1-x^2}}, \tag{2.6} \]
where $T_r(x) := \cos(r\theta)$ is the r-th degree Chebyshev polynomial of the first kind on $\mathbb{R}$ (see
[39]) and $m_r$ is real.
Proof. Notice that, setting $z = e^{i\theta} = \cos\theta + i\sin\theta$ and $x = \cos\theta$, we have $T_r(x) =
\sum_{k=0}^{r} a_kx^k$ with $m_r$ real. Then, from (2.5),
\[ d\tilde\sigma_\Omega(\theta) = d\sigma(\theta) + \sum_{r\in\Omega}\left(m_r(\cos\theta + i\sin\theta)^r\,\frac{d\theta}{2\pi} + m_r(\cos\theta + i\sin\theta)^{-r}\,\frac{d\theta}{2\pi}\right) \]
\[ = d\sigma(\theta) + 2\sum_{r\in\Omega} m_r T_r(x)\,\frac{d\theta}{2\pi} = d\sigma(\theta) + 2\sum_{r\in\Omega} m_r\sum_{k=0}^{r} a_k\cos^k\theta\,\frac{d\theta}{2\pi}. \]
Since $\tilde\sigma_\Omega$ is a symmetric positive measure, by applying the inverse Szegő transformation
and taking into account (1.95), we have
\[ d\tilde\mu_\Omega(x) = d\mu(x) + 2\sum_{r\in\Omega} m_r\sum_{k=0}^{r} a_k x^k\,\frac{dx}{\pi\sqrt{1-x^2}} = d\mu(x) + 2\sum_{r\in\Omega} m_r T_r(x)\,\frac{dx}{\pi\sqrt{1-x^2}}, \]
which is (2.6).
The next proposition expresses the considered perturbation in terms of the moments
associated with the matrix measure (2.6). Recall that n!! is the double factorial or semi-
factorial of a number $n \in \mathbb{N}$, defined as in (1.21).
Proposition 2.1.3. Let µ be a positive measure supported in [−1, 1], let $\tilde\mu_\Omega$ be defined as
in (2.6), and let $(\mu_n)_{n\geq 0}$, $(\tilde\mu_n)_{n\geq 0}$ be their corresponding sequences of moments. Then
\[ \tilde\mu_n = \begin{cases} \mu_n, & \text{if } 0 \leq n < r_m,\\[2pt] \mu_n + \frac{2}{\pi}\sum_{r\in\Omega} m_r B(n,r), & \text{if } r_m \leq n, \end{cases} \tag{2.7} \]
where
• $B(n,r) = 0$, if $r + n$ is odd,
• $B(n,r) = \frac{\pi r}{2}\sum_{k=0}^{[r/2]} \frac{(-1)^k(r-k-1)!\,2^{r-2k}}{k!\,(r-2k)!}\,\frac{(r+n-2k-1)!!}{(r+n-2k)!!}$, if $r + n$ is even.
As a consequence, by the orthogonality of $T_r(x)$ with respect to $\frac{1}{\sqrt{1-x^2}}$, we obtain for the
n-th moments with $n \notin \Omega$
\[ \tilde\mu_n = \begin{cases} \mu_n, & \text{if } 0 \leq n < r_m,\\[2pt] \mu_n + \frac{2}{\pi}\sum_{r\in\Omega} m_r\int_{-1}^{1} x^n T_r(x)\,\frac{dx}{\sqrt{1-x^2}}, & \text{if } r_m \leq n. \end{cases} \tag{2.8} \]
Using the explicit expression of $T_r(x)$, we have
\[ \int_{-1}^{1} x^n T_r(x)\,\frac{dx}{\sqrt{1-x^2}} = \frac{r}{2}\int_{-1}^{1} x^n \sum_{k=0}^{[r/2]} \frac{(-1)^k(r-k-1)!\,(2x)^{r-2k}}{k!\,(r-2k)!}\,\frac{dx}{\sqrt{1-x^2}} = \frac{r}{2}\sum_{k=0}^{[r/2]} \frac{(-1)^k(r-k-1)!\,2^{r-2k}}{k!\,(r-2k)!}\int_{-1}^{1} x^{r+n-2k}\,\frac{dx}{\sqrt{1-x^2}}, \]
and, since
\[ \int_{-1}^{1} x^n\,\frac{dx}{\sqrt{1-x^2}} = \pi\,\frac{(n-1)!!}{n!!}\,\frac{1+(-1)^n}{2}, \qquad \text{with } (-1)!! = 1, \]
(2.8) becomes (2.7).
From the previous proposition, we conclude that a perturbation of the moments
$c_r$ and $c_{-r}$ with $r \in \Omega$, associated with a measure σ supported on the unit circle, results
in a perturbation, given by (2.7), of the moments $\mu_n$, $n \geq r_m$, associated with a mea-
sure µ supported in [−1, 1], when both measures are related through the inverse Szegő
transformation.
1. A_{(s₁,s₂;l₁,l₂;r)} will denote the (s₂−s₁+1)×(l₂−l₁+1) matrix whose entries are a_{(s,l)r}, where s₁ ≤ s ≤ s₂ and l₁ ≤ l ≤ l₂. For instance,

       A_{(2,3;4,5;6)} = [ a_{(2,4)6}  a_{(2,5)6} ]
                         [ a_{(3,4)6}  a_{(3,5)6} ].

2. Υ_n^{n−1}(0) = [Φ_n^{(0)}(0), ⋯, Φ_n^{(n−1)}(0)]^T ∈ M_{n×1}.

3. Υ̃_n^{n−1}(0) = [Φ̃_n^{(0)}(0), ⋯, Φ̃_n^{(n−1)}(0)]^T ∈ M_{n×1}.

4. Derivatives of negative order are defined as zero. For instance, K_n^{(0,−2)}(z, y) ≡ 0.
Necessary and sufficient conditions for the regularity of L_Ω, as well as the relation between the corresponding sequences of orthogonal polynomials, are given in the next result.

Proposition 2.1.4. Let L be a quasi-definite linear functional and let (Φ_n(z))_{n≥0} be its associated monic orthogonal polynomial sequence. The following statements are equivalent:
    S̃_n = S_n + (Q_n W_n)^T Σ_{r∈Ω} Y_n^r + Σ_{r∈Ω} m̄_r Φ_n^{(n−r)}(0)/(n−r)! ≠ 0,  n ≥ 0,

with

    Y_n^r = [ m_r Φ_n^{(r)}(0)/(0! r!)
              ⋮
              m_r Φ_n^{(2r−1)}(0)/((r−1)!(2r−1)!)
              m_r Φ_n^{(2r)}(0)/(r!(2r)!) + m̄_r Φ_n^{(0)}(0)/(r! 0!)
              ⋮
              m_r Φ_n^{(n)}(0)/((n−r)! n!) + m̄_r Φ_n^{(n−2r)}(0)/((n−r)!(n−2r)!)
              m̄_r Φ_n^{(n−2r+1)}(0)/((n−r+1)!(n−2r+1)!)
              ⋮
              m̄_r Φ_n^{(n−r−1)}(0)/((n−1)!(n−r−1)!) ],                      (2.9)

    Q_n = ( I_n + Σ_{r∈Ω} L_n^r )^{−1},

    W_n = Υ_n^{n−1}(0) − Σ_{r∈Ω} m̄_r n! C_{(0,n−1;n,n;r)},

    K_{n−1}^r(z, 0) = [ m_r K_{n−1}^{(0,r)}(z,0)/(0! r!)
                        ⋮
                        m_r K_{n−1}^{(0,2r−1)}(z,0)/((r−1)!(2r−1)!)
                        m_r K_{n−1}^{(0,2r)}(z,0)/(r!(2r)!) + m̄_r K_{n−1}^{(0,0)}(z,0)/(r! 0!)
                        ⋮
                        m_r K_{n−1}^{(0,n−1)}(z,0)/((n−r−1)!(n−1)!) + m̄_r K_{n−1}^{(0,n−2r−1)}(z,0)/((n−r−1)!(n−2r−1)!)
                        m̄_r K_{n−1}^{(0,n−2r)}(z,0)/((n−r)!(n−2r)!)
                        ⋮
                        m̄_r K_{n−1}^{(0,n−r−1)}(z,0)/((n−1)!(n−r−1)!) ],

and

    L_n^r = [ m_r A_{(0,r−1;0,r−1;r)}    B_{(0,r−1;r,n−r−1;r)}    m̄_r C_{(0,r−1;n−r,n−1;r)}   ]
            [ m_r A_{(r,n−r−1;0,r−1;r)}  B_{(r,n−r−1;r,n−r−1;r)}  m̄_r C_{(r,n−r−1;n−r,n−1;r)} ]
            [ m_r A_{(n−r,n−1;0,r−1;r)}  B_{(n−r,n−1;r,n−r−1;r)}  m̄_r C_{(n−r,n−1;n−r,n−1;r)} ].
Furthermore, if (Φ̃_n(z))_{n≥0} denotes the monic orthogonal polynomial sequence associated with L_Ω, then

    Φ̃_n(z) = Φ_n(z) − (Q_n W_n)^T Σ_{r∈Ω} K_{n−1}^r(z, 0) − Σ_{r∈Ω} m̄_r K_{n−1}^{(0,n−r)}(z, 0)/(n−r)!,      (2.13)

for every n ≥ 1.

Remark 2.1.5. Notice that Q_n and L_n^r are n×n matrices, whereas Y_n^r, W_n and K_{n−1}^r(z, 0) are n-dimensional column vectors.
Proof. Assume L_Ω is a quasi-definite linear functional, and denote by (Φ̃_n(z))_{n≥0} its associated monic orthogonal polynomial sequence. Let us write

    Φ̃_n(z) = Φ_n(z) + Σ_{k=0}^{n−1} λ_{n,k} Φ_k(z),                        (2.14)

where, for 0 ≤ k ≤ n−1,

    λ_{n,k} = ⟨Φ̃_n, Φ_k⟩_L / S_k
            = ( ⟨Φ̃_n, Φ_k⟩_{L_Ω} − Σ_{r∈Ω} m_r ∫_T y^r Φ̃_n(y) \overline{Φ_k(y)} dy/(2πiy) − Σ_{r∈Ω} m̄_r ∫_T y^{−r} Φ̃_n(y) \overline{Φ_k(y)} dy/(2πiy) ) / S_k
            = − Σ_{r∈Ω} (m_r/S_k) ∫_T y^r Φ̃_n(y) \overline{Φ_k(y)} dy/(2πiy) − Σ_{r∈Ω} (m̄_r/S_k) ∫_T y^{−r} Φ̃_n(y) \overline{Φ_k(y)} dy/(2πiy),

and notice that ⟨Φ̃_n, Φ_k⟩_L ≠ 0 (in general), and ⟨Φ̃_n, Φ_k⟩_{L_Ω} = 0 for n > k.
From the power series expansions of Φ̃_n(y) and K_{n−1}(z, y), we have

    y^r Φ̃_n(y) = y^r Σ_{l=0}^{n} Φ̃_n^{(l)}(0) y^l / l! = Σ_{l=r}^{n+r} Φ̃_n^{(l−r)}(0) y^l / (l−r)!,

    ∫_T y^r Φ̃_n(y) \overline{K_{n−1}(z, y)} dy/(2πiy) = ∫_T Σ_{l=r}^{n+r} (Φ̃_n^{(l−r)}(0)/(l−r)!) y^l · Σ_{l=0}^{n−1} (K_{n−1}^{(0,l)}(z, 0)/l!) (1/y^l) dy/(2πiy)
                                                       = Σ_{l=0}^{n−r−1} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l+r)}(z, 0)/(l+r)!).

Similarly,

    ∫_T y^{−r} Φ̃_n(y) \overline{K_{n−1}(z, y)} dy/(2πiy) = Σ_{l=0}^{n−r} (Φ̃_n^{(l+r)}(0)/(l+r)!) (K_{n−1}^{(0,l)}(z, 0)/l!) = Σ_{l=r}^{n} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l−r)}(z, 0)/(l−r)!).

As a consequence, we get

    Φ̃_n(z) = Φ_n(z) − Σ_{r∈Ω} ( m_r Σ_{l=0}^{n−r−1} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l+r)}(z, 0)/(l+r)!) + m̄_r Σ_{l=r}^{n} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l−r)}(z, 0)/(l−r)!) ),    (2.15)

which after a reorganization of the terms becomes

    Φ̃_n(z) = Φ_n(z) − Σ_{r∈Ω} ( m_r Σ_{l=0}^{r−1} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l+r)}(z, 0)/(l+r)!) + m̄_r Σ_{l=n−r}^{n} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l−r)}(z, 0)/(l−r)!) )
                     − Σ_{r∈Ω} Σ_{l=r}^{n−r−1} (Φ̃_n^{(l)}(0)/l!) ( m_r K_{n−1}^{(0,l+r)}(z, 0)/(l+r)! + m̄_r K_{n−1}^{(0,l−r)}(z, 0)/(l−r)! ).
In order to find the constant values Φ̃_n^{(l)}(0), we take s derivatives, 0 ≤ s ≤ n, with respect to the variable z and evaluate at z = 0 to obtain the (n+1)×(n+1) linear system

    Φ̃_n^{(s)}(0) = Φ_n^{(s)}(0) − Σ_{r∈Ω} ( m_r Σ_{l=0}^{r−1} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(s,l+r)}(0, 0)/(l+r)!) + m̄_r Σ_{l=n−r}^{n} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(s,l−r)}(0, 0)/(l−r)!) )
                  − Σ_{r∈Ω} Σ_{l=r}^{n−r−1} (Φ̃_n^{(l)}(0)/l!) ( m_r K_{n−1}^{(s,l+r)}(0, 0)/(l+r)! + m̄_r K_{n−1}^{(s,l−r)}(0, 0)/(l−r)! ).

If we use (2.10), (2.11) and (2.12), then the linear system becomes

    Φ̃_n^{(s)}(0) = Φ_n^{(s)}(0) − Σ_{r∈Ω} ( m_r Σ_{l=0}^{r−1} a_{(s,l)r} Φ̃_n^{(l)}(0) + Σ_{l=r}^{n−r−1} b_{(s,l)r} Φ̃_n^{(l)}(0) + m̄_r Σ_{l=n−r}^{n} c_{(s,l)r} Φ̃_n^{(l)}(0) ).

Notice that the last equation (i.e., when s = n) gives no information, since a_{(n,l)r} = b_{(n,l)r} = c_{(n,l)r} = 0 and Φ̃_n^{(n)}(0) = Φ_n^{(n)}(0) = n!. As a consequence, the (n+1)×(n+1) linear system can be reduced to an n×n linear system that can be expressed in matrix form as

    ( I_n + Σ_{r∈Ω} L_n^r ) Υ̃_n^{n−1}(0) = Υ_n^{n−1}(0) − Σ_{r∈Ω} m̄_r n! C_{(0,n−1;n,n;r)} = W_n.

Solving this system gives Υ̃_n^{n−1}(0) = Q_n W_n, and substituting into (2.15) yields (2.13).
Conversely, assume I_n + Σ_{r∈Ω} L_n^r is nonsingular for every n ≥ 1 and define (Φ̃_n(z))_{n≥0} as in (2.13). For 0 ≤ k ≤ n−1, we have

    ⟨ Σ_{l=0}^{n−r−1} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l+r)}(z, 0)/(l+r)!), Φ_k(z) ⟩_L
        = Σ_{l=0}^{n−r−1} (Φ̃_n^{(l)}(0)/(l!(l+r)!)) ⟨ Σ_{t=0}^{n−1} Φ_t(z) \overline{Φ_t^{(l+r)}(0)}/S_t, Φ_k(z) ⟩_L
        = Σ_{l=0}^{n−r−1} (Φ̃_n^{(l)}(0)/l!) (\overline{Φ_k^{(l+r)}(0)}/(l+r)!),

and, similarly, ⟨ Σ_{l=r}^{n} (Φ̃_n^{(l)}(0)/l!) (K_{n−1}^{(0,l−r)}(z, 0)/(l−r)!), Φ_k(z) ⟩_L = Σ_{l=r}^{n} (Φ̃_n^{(l)}(0)/l!) (\overline{Φ_k^{(l−r)}(0)}/(l−r)!). In addition,

    ⟨z^r Φ̃_n, Φ_k⟩_{L_θ} = ∫_T z^r Φ̃_n(z) \overline{Φ_k(z)} dz/(2πiz)
                          = ∫_T z^r ( Σ_{l=0}^{n} Φ̃_n^{(l)}(0) z^l / l! ) ( Σ_{l=0}^{k} \overline{Φ_k^{(l)}(0)} z^{−l} / l! ) dz/(2πiz)
                          = Σ_{l=0}^{k−r} (Φ̃_n^{(l)}(0)/l!) (\overline{Φ_k^{(l+r)}(0)}/(l+r)!),

and also ⟨Φ̃_n, z^r Φ_k⟩_{L_θ} = Σ_{l=r}^{n} (Φ̃_n^{(l)}(0)/l!) (\overline{Φ_k^{(l−r)}(0)}/(l−r)!). Thus, for 0 ≤ k ≤ n−1, and taking into account
Notice that if r_m = min{r : r ∈ Ω}, then from Proposition 2.1.4 we conclude that if n < r_m, then K_{n−1}^{r_m}(z, 0) is the zero vector, K_{n−1}^{(0,n−r_m)}(z, 0) = 0, and according to (2.13) we have Φ̃_n(z) = Φ_n(z). This means that the only affected polynomials are those with degree n ≥ r_m.

The above proposition generalizes the case when Ω has a single element k ≠ 0 (see [31]) and the case Ω = {0} (see [30]).
2.1.4 Example

Let L be the Christoffel transformation of the normalized Lebesgue measure with parameter ξ = 1, i.e.,

    ⟨p, q⟩_L = ⟨(z − 1)p, (z − 1)q⟩_{L_θ},

and let Ω = {1, 2}. Then,

    ⟨p, q⟩_{L_{1,2}} = ⟨p, q⟩_L + m_1 ⟨zp, q⟩_{L_θ} + m̄_1 ⟨p, zq⟩_{L_θ} + m_2 ⟨z²p, q⟩_{L_θ} + m̄_2 ⟨p, z²q⟩_{L_θ},      (2.17)
i.e., the moments of order 1 and 2 are perturbed. Since the sequence (z^n)_{n≥0} is orthogonal with respect to L_θ, the monic orthogonal polynomial sequence associated with ⟨(z − 1)p, (z − 1)q⟩_{L_θ} is given by (see [20])

    Φ_n(z) = (1/(z − 1)) ( z^{n+1} − (1/(n+1)) Σ_{j=0}^{n} z^j ),  n ≥ 1,

or, equivalently,

    Φ_n(z) = z^n + (n/(n+1)) Φ_{n−1}(z),  n ≥ 1,

and its corresponding reversed polynomial is

    Φ_n^*(z) = (1/(1 − z)) ( 1 − (1/(n+1)) Σ_{j=0}^{n} z^{j+1} ),  n ≥ 1.
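The closed form, the recurrence, and the orthogonality with respect to |z − 1|² dθ/2π can all be verified numerically. A Python sketch (scalar computation, NumPy assumed; an illustration, not part of the thesis):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def Phi(n):
    # Φ_n built from the recurrence Φ_n(z) = z^n + n/(n+1) Φ_{n-1}(z);
    # coefficients stored low -> high, Φ_0 = 1
    c = np.array([1.0])
    for k in range(1, n + 1):
        new = np.zeros(k + 1); new[k] = 1.0
        new[:k] += k/(k + 1) * c
        c = new
    return c

n = 5
# closed form: Φ_n(z) = (z^{n+1} - (1/(n+1)) Σ_{j≤n} z^j)/(z-1)
num = np.full(n + 2, -1.0/(n + 1)); num[n + 1] = 1.0
quot, rem = P.polydiv(num, np.array([-1.0, 1.0]))
assert np.allclose(quot, Phi(n)) and np.allclose(rem, 0.0)

# orthogonality against |z-1|^2 dθ/2π on an equispaced grid (exact for
# trigonometric polynomials of low degree)
th = np.linspace(0, 2*np.pi, 4001)[:-1]
z = np.exp(1j*th); w = np.abs(z - 1)**2 / len(th)
phi = P.polyval(z, Phi(n))
for k in range(n):
    assert abs(np.sum(phi * np.conj(z**k) * w)) < 1e-10   # <Φ_n, z^k> = 0
print("closed form, recurrence and orthogonality agree for n =", n)
```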
Furthermore, if 0 ≤ s ≤ n, we have

and if 0 ≤ t, s ≤ n−1,

    K_{n−1}^{(0,t)}(z, 0) = Σ_{p=t}^{n−1} ((t+1)!/(p+2)) Φ_p(z),

    K_{n−1}^{(s,t)}(0, 0) = Σ_{p=max{s,t}}^{n−1} (t+1)!(s+1)!/((p+1)(p+2)).

As a consequence, we have

    a_{(s,l)r} = ((s+1)!(l+r+1)/l!) Σ_{p=max{s,l+r}}^{n−1} 1/((p+1)(p+2)),

    c_{(s,l)r} = ((s+1)!(l−r+1)/l!) Σ_{p=max{s,l−r}}^{n−1} 1/((p+1)(p+2)),

    b_{(s,l)r} = ((s+1)!/l!) ( m_r (l+r+1) Σ_{p=max{s,l+r}}^{n−1} 1/((p+1)(p+2)) + m̄_r (l−r+1) Σ_{p=max{s,l−r}}^{n−1} 1/((p+1)(p+2)) ).
We now proceed to obtain the monic orthogonal polynomial sequence associated with L_{1,2}, denoted by (Φ̃_n(z))_{n≥0}. Notice that we have

and

    c_{(n−1,n)2} = (n−1)/(n(n+1)),
and thus

    W_n = [ Φ_n^{(0)}(0) ; Φ_n^{(1)}(0) ; ⋮ ; Φ_n^{(n−2)}(0) ; Φ_n^{(n−1)}(0) ]
          − m̄_1 n! [ c_{(0,n)1} ; c_{(1,n)1} ; ⋮ ; c_{(n−2,n)1} ; c_{(n−1,n)1} ]
          − m̄_2 n! [ c_{(0,n)2} ; c_{(1,n)2} ; ⋮ ; c_{(n−2,n)2} ; c_{(n−1,n)2} ]

        = ((1 − m̄_1)/(n+1)) [ 1! ; 2! ; ⋮ ; (n−1)! ; n! ] − (2m̄_2/(n+1)) [ 1! ; 2! ; ⋮ ; (n−1)! ; n!(n−1)/(2n) ],

    L_n^1 = [ m_1 a_{(0,0)1}    b_{(0,1)1}    ⋯  b_{(0,n−2)1}    m̄_1 c_{(0,n−1)1}   ]
            [ m_1 a_{(1,0)1}    b_{(1,1)1}    ⋯  b_{(1,n−2)1}    m̄_1 c_{(1,n−1)1}   ]
            [      ⋮                ⋮         ⋱       ⋮                 ⋮           ]
            [ m_1 a_{(n−2,0)1}  b_{(n−2,1)1}  ⋯  b_{(n−2,n−2)1}  m̄_1 c_{(n−2,n−1)1} ]
            [ m_1 a_{(n−1,0)1}  b_{(n−1,1)1}  ⋯  b_{(n−1,n−2)1}  m̄_1 c_{(n−1,n−1)1} ],
and for n ≥ 3,

    K_{n−1}^2(z, 0) = [ 3m_2 Σ_{p=2}^{n−1} Φ_p(z)/(p+2)
                        4m_2 Σ_{p=3}^{n−1} Φ_p(z)/(p+2)
                        (5m_2/2) Σ_{p=4}^{n−1} Φ_p(z)/(p+2) + (m̄_2/2) Σ_{p=0}^{n−1} Φ_p(z)/(p+2)
                        ⋮
                        (n m_2/(n−3)!) Σ_{p=n−1}^{n−1} Φ_p(z)/(p+2) + (m̄_2/((n−3)(n−5)!)) Σ_{p=n−5}^{n−1} Φ_p(z)/(p+2)
                        (m̄_2/((n−2)(n−4)!)) Σ_{p=n−4}^{n−1} Φ_p(z)/(p+2)
                        (m̄_2/((n−1)(n−3)!)) Σ_{p=n−3}^{n−1} Φ_p(z)/(p+2) ],
    L_n^2 = [ m_2 a_{(0,0)2}    m_2 a_{(0,1)2}    b_{(0,2)2}    ⋯  b_{(0,n−3)2}    m̄_2 c_{(0,n−2)2}    m̄_2 c_{(0,n−1)2}   ]
            [ m_2 a_{(1,0)2}    m_2 a_{(1,1)2}    b_{(1,2)2}    ⋯  b_{(1,n−3)2}    m̄_2 c_{(1,n−2)2}    m̄_2 c_{(1,n−1)2}   ]
            [ m_2 a_{(2,0)2}    m_2 a_{(2,1)2}    b_{(2,2)2}    ⋯  b_{(2,n−3)2}    m̄_2 c_{(2,n−2)2}    m̄_2 c_{(2,n−1)2}   ]
            [      ⋮                 ⋮                ⋮         ⋱       ⋮                 ⋮                  ⋮            ]
            [ m_2 a_{(n−3,0)2}  m_2 a_{(n−3,1)2}  b_{(n−3,2)2}  ⋯  b_{(n−3,n−3)2}  m̄_2 c_{(n−3,n−2)2}  m̄_2 c_{(n−3,n−1)2} ]
            [ m_2 a_{(n−2,0)2}  m_2 a_{(n−2,1)2}  b_{(n−2,2)2}  ⋯  b_{(n−2,n−3)2}  m̄_2 c_{(n−2,n−2)2}  m̄_2 c_{(n−2,n−1)2} ]
            [ m_2 a_{(n−1,0)2}  m_2 a_{(n−1,1)2}  b_{(n−1,2)2}  ⋯  b_{(n−1,n−3)2}  m̄_2 c_{(n−1,n−2)2}  m̄_2 c_{(n−1,n−1)2} ].
1. Degree one:

       Φ̃_1(z) = Φ_1(z) − m̄_1 K_0^{(0,0)}(z, 0)/0! = z + ((1 − m̄_1)/2) Φ_0(z),

   since

       K_0^1(z, 0) = 0,  K_0^2(z, 0) = 0

   and

       Φ_1(z) = z + (1/2) Φ_0(z).

2. Degree two: since

       K_1^2(z, 0) = 0,  Φ_2(z) = z² + (2/3) Φ_1(z)

   and

       K := 1/det(I_2 + L_2^1 + L_2^2) = 9/(|m_1 + 3|² − 4|m_1|²),

   we have, with

       W_2 = ((1 − m̄_1)/3) [ 1! ; 2! ] − (2m̄_2/3) [ 1! ; 1/2 ],
       Q_2 = (I_2 + L_2^1 + L_2^2)^{−1} = K [ 1 + m̄_1/3   −2m̄_1/3 ]
                                            [ −2m_1/3     1 + m_1/3 ],

   the formula

       Φ̃_2(z) = Φ_2(z) − (Q_2 W_2)^T [ (2m_1/3) Φ_1(z) ; m̄_1 (Φ_0(z)/2 + Φ_1(z)/3) ] − (2m̄_1/3) Φ_1(z) − m̄_2 ( Φ_0(z)/2 + Φ_1(z)/3 ),
since Φ_n(z) = z^n + (n/(n+1)) Φ_{n−1}(z).

On the other hand, assuming that the linear functional (2.17) is positive definite, the associated measure is

    dσ(θ) = |z − 1|² dθ/2π + 2Re(m_1 z) dθ/2π + 2Re(m_2 z²) dθ/2π,  z = e^{iθ},

and the corresponding moments are given by

    c̃_n = { 2,          if n = 0,
           { −1 + m̄_1,  if n = 1,
           { −1 + m_1,   if n = −1,
           { m̄_2,        if n = 2,
           { m_2,         if n = −2,
           { 0,           otherwise.
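The unperturbed values c_0 = 2 and c_{±1} = −1 (all other moments zero) can be confirmed numerically; the sketch below (Python/NumPy, not part of the thesis) uses the convention c_n = ∫ z^{−n} dσ, which only affects labeling here since the weight is symmetric:

```python
import numpy as np

# moments of the Christoffel-transformed Lebesgue measure |z-1|^2 dθ/2π,
# using |e^{iθ} - 1|^2 = 2 - 2 cos θ; an equispaced grid is exact for
# trigonometric polynomials of low degree
th = np.linspace(0, 2*np.pi, 2001)[:-1]
w = (2 - 2*np.cos(th)) / len(th)
c = {n: np.sum(np.exp(-1j*n*th) * w) for n in range(-3, 4)}
assert abs(c[0] - 2) < 1e-12
assert abs(c[1] + 1) < 1e-12 and abs(c[-1] + 1) < 1e-12
assert all(abs(c[n]) < 1e-12 for n in (2, -2, 3, -3))
print("c_0 = 2, c_{±1} = -1, all other moments vanish")
```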
i.e., the first and second subdiagonals are perturbed. Furthermore, from (1.94),

    Sz( (2/π) √((1−x)/(1+x)) dx ) = |z − 1|² dθ/2π,

where (µ_n)_{n≥0} are the moments associated with the measure dµ(x) = (2/π) √((1−x)/(1+x)) dx, and

    µ_n = (2/π) ∫_{−1}^{1} x^n √((1−x)/(1+x)) dx = (2/π) ∫_{−1}^{1} (x^n − x^{n+1})/√(1−x²) dx
        = { 2,                  if n = 0,
          { −2 n!!/(n+1)!!,     if n is odd,
          { 2 (n−1)!!/n!!,      if n is even.
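These moments can be checked with Gauss–Chebyshev quadrature, which handles the factor 1/√(1−x²) exactly for polynomial integrands. A Python sketch (NumPy assumed; an illustration only):

```python
import numpy as np

def dfact(n):                       # double factorial, with (-1)!! = 0!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

# μ_n = (2/π) ∫ (x^n - x^{n+1})/sqrt(1-x²) dx, exact on Chebyshev nodes
N = 32
x = np.cos((2*np.arange(1, N + 1) - 1)*np.pi/(2*N))
for n in range(0, 9):
    mu = (2/np.pi) * (np.pi/N) * np.sum(x**n - x**(n + 1))
    if n == 0:
        closed = 2.0
    elif n % 2 == 1:
        closed = -2 * dfact(n)/dfact(n + 1)
    else:
        closed = 2 * dfact(n - 1)/dfact(n)
    assert abs(mu - closed) < 1e-12, (n, mu, closed)
print("moments of (2/π)·sqrt((1-x)/(1+x)) match the double-factorial formula")
```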
2.2.1 Introduction

The term moment problem was used for the first time in T. J. Stieltjes' classic memoir [165] (published posthumously between 1894 and 1895), dedicated to the study of continued fractions. The moment problem is a question in classical analysis that has produced a rich theory in applied and pure mathematics. It is beautifully connected to the theory of orthogonal polynomials, the spectral representation of operators, matrix factorization problems, probability, statistics, prediction of stochastic processes, polynomial optimization, inverse problems in financial mathematics, and function theory, among many other areas. In the matrix case, M. Krein was the first to consider this problem in [121], and later on density questions related to the matrix moment problem were studied in [66, 67, 130, 131]. Recently, the theory of the matrix moment problem was used in [50] for the analysis of random matrix-valued measures. In this section, we are interested in the study of some properties related to a perturbation of a sequence of matrix moments, within the framework of the theory of matrix orthogonal polynomials both on the real line and on the unit circle.
2.2.2 Matrix moments on the real line and the unit circle related through the inverse Szegő transformation

    dσ_j(θ) = dσ(θ) + 2 cos(jθ) m_j dθ/2π,                               (2.21)

and thus σ_j is a symmetric, positive semi-definite matrix measure supported on the unit circle. Notice that the right sesquilinear form associated with (2.20) is

Notice that (2.20) and (2.22) are extensions to the matrix case of (2.3) and (2.2), respectively, with Ω = {j}. From (2.22), one easily sees that the corresponding matrix moment c̃_k satisfies

    c̃_k = { c_k,           if k ∉ {j, −j},
           { c_j + m_j^H,   if k = j,
           { c_{−j} + m_j,  if k = −j.

In other words, σ_j represents an additive perturbation of the matrix moments c_j and c_{−j} of σ.
The following result gives the matrix measure supported on [−1, 1] that is related to (2.20) through the inverse Szegő matrix transformation. It constitutes a generalization of the corresponding result in the scalar case studied in [77, 78]. The proof is analogous to the one given in Proposition 2.1.2 for matrix measures, with Ω = {j} and m_j ∈ M_l a Hermitian matrix.

    dµ_j(x) = dµ(x) + 2 T_j(x) m_j dx/(π√(1−x²)),  j ≥ 0,               (2.23)

where T_j(x) := cos(jθ), with x = cos θ, is the j-th degree Chebyshev polynomial of the first kind on R (see [39]).
The next proposition is the matrix version of Proposition 2.1.3 with Ω = {j}. We also
omit the proof, since it is similar to the one of the scalar case. Recall that n!! is the double
factorial or semi-factorial of a number n ∈ N defined as in (1.21).
Proposition 2.2.2. Let µ be a positive semi-definite matrix measure supported on [−1, 1], let µ_j be defined as in (2.23), and let (µ_n)_{n≥0}, (µ̃_n)_{n≥0} be their corresponding sequences of matrix moments. Then,

    µ̃_n = { µ_n + m_j B(n, j),  if j ≤ n and n + j is even,
           { µ_n,                otherwise,                              (2.24)

where

    B(n, j) = j Σ_{k=0}^{[j/2]} (−1)^k ((j−k−1)! 2^{j−2k}/(k!(j−2k)!)) ((j+n−2k−1)!!/(j+n−2k)!!).
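The closed form for B(n, j) can be tested against the integral (2/π)∫_{−1}^{1} x^n T_j(x)(1−x²)^{−1/2} dx that it represents, again via exact Gauss–Chebyshev quadrature. A Python sketch (NumPy assumed; an illustration only):

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

def dfact(n):                       # double factorial, with (-1)!! = 0!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def B(n, j):                        # closed form from Proposition 2.2.2
    if (n + j) % 2 == 1:
        return 0.0
    return j * sum((-1)**k * math.factorial(j - k - 1) * 2**(j - 2*k)
                   / (math.factorial(k) * math.factorial(j - 2*k))
                   * dfact(j + n - 2*k - 1) / dfact(j + n - 2*k)
                   for k in range(j//2 + 1))

# compare with (2/π) ∫ x^n T_j(x)/sqrt(1-x²) dx (exact on Chebyshev nodes)
N = 64
x = np.cos((2*np.arange(1, N + 1) - 1) * np.pi / (2*N))
for j in range(1, 6):
    Tj = C.chebval(x, [0]*j + [1])
    for n in range(j, 10):
        quad = (2/np.pi) * (np.pi/N) * np.sum(x**n * Tj)
        assert abs(B(n, j) - quad) < 1e-10, (n, j)
print("B(n, j) closed form matches the Chebyshev integral")
```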
This means that the perturbation of the matrix moments c_j and c_{−j} of a matrix measure σ supported on the unit circle results in a perturbation, defined by (2.24), of the moments µ_n associated with a measure µ supported on [−1, 1], when both measures are related through the inverse matrix Szegő transformation.

In the remainder of the section, we will use the following notation. For 0 ≤ k ≤ j−1, let

    χ_k^{(j)} = [0 ⋯ 0 I 0 ⋯ 0]^H ∈ M_{jl×l},

where I is in the k-th position, and define χ_k^{(j)} = [0, ⋯, 0]^H for k ≥ j or k < 0. We also define

    Λ_{n,m}^{(j+1)} := [χ_n^{(j+1)} χ_{n+1}^{(j+1)} ⋯ χ_m^{(j+1)}] ∈ M_{(j+1)l×(m−n+1)l},

with n ≤ m, for n, m ∈ Z.
Notice that H_j is an invertible Hermitian Hankel matrix for j ≥ 1. With these matrices, it is shown in [140] that the sequence of (right) matrix polynomials (P_j^R(x))_{j≥0}, where each P_j^R is defined as the Schur complement of x^j I in the matrix F_j, is orthogonal with respect to the sesquilinear form (1.3), that is,

    ⟨P_m^R, P_j^R⟩_{R,µ} = δ_{m,j} s_j,  m = 0, …, j.

Conversely, any sequence of matrix moments (µ_j)_{j≥0} such that the corresponding Hankel matrices are invertible and Hermitian defines a Hermitian matrix measure. In this situation, we will say that (µ_j)_{j≥0} is a Hermitian sequence.

In a similar way, a sequence of left matrix polynomials (P_n^L(x))_{n≥0} can be defined that is orthogonal with respect to the sesquilinear form (1.4). Since P_j^L(x) = P_j^R(x)^H for all x ∈ R, it suffices to consider P_j(x) = P_j^R(x).
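A scalar (l = 1) illustration of this Schur-complement construction, not taken from the thesis: build H_j from the moments of the Chebyshev weight and check that the resulting monic polynomial is orthogonal. The weight and degree are arbitrary choices for the example (Python/NumPy):

```python
import numpy as np

# Scalar sketch: P_j(x) = x^j - [1, x, ..., x^{j-1}] H_j^{-1} v_{j,2j-1},
# with H_j = (μ_{s+t}) and v_{j,2j-1} = (μ_j, ..., μ_{2j-1})^T.
# Moments of the weight 1/sqrt(1-x^2) via exact Gauss-Chebyshev quadrature.
N = 64
nodes = np.cos((2*np.arange(1, N + 1) - 1)*np.pi/(2*N))
mu = np.array([(np.pi/N)*np.sum(nodes**k) for k in range(0, 12)])

j = 3
H = np.array([[mu[s + t] for t in range(j)] for s in range(j)])
v = mu[j:2*j]
coeffs = -np.linalg.solve(H, v)          # lower coefficients of P_j
Pj = np.append(coeffs, 1.0)              # monic, stored low -> high

# orthogonality: <P_j, x^k> = 0 for k < j in L^2(1/sqrt(1-x^2))
vals = np.polyval(Pj[::-1], nodes)
for k in range(j):
    assert abs((np.pi/N)*np.sum(vals * nodes**k)) < 1e-10
# P_3 should be the monic Chebyshev polynomial T_3/4 = x^3 - (3/4) x
assert np.allclose(Pj, [0, -0.75, 0, 1.0])
print("P_3(x) coefficients (low -> high):", Pj)
```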
Let P_j^{(k)}(x) be the k-th derivative of P_j(x). From (2.25) we can deduce

    P_j^{(k)}(0)/k! = { −(χ_k^{(j)})^H H_j^{−1} v_{j,2j−1},  if 0 ≤ k ≤ j−1,
                      { I,                                    if k = j.       (2.27)

The kernel matrix defined in (1.15) can also be expressed in terms of the moments by (see [140])

    K_j(y, x) = [I xI ⋯ x^j I] H_{j+1}^{−1} [I ; yI ; ⋮ ; y^j I].            (2.28)

Denoting by K_j^{(i,r)}(y, x) the i-th (resp. r-th) derivative of K_j(y, x) with respect to the variable y (resp. x), we have from (2.28), for 0 ≤ i ≤ j and 0 ≤ r ≤ j,

    K_j^{(i,0)}(0, x)/i! = [I xI ⋯ x^j I] H_{j+1}^{−1} χ_i^{(j+1)},

    K_j^{(0,r)}(y, 0)/r! = (χ_r^{(j+1)})^H H_{j+1}^{−1} [I ; yI ; ⋮ ; y^j I],

    K_j^{(i,r)}(0, 0)/(i! r!) = (χ_r^{(j+1)})^H H_{j+1}^{−1} χ_i^{(j+1)}.
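A scalar sketch of the reproducing property behind this kernel formula: with K_j built as in (2.28), ∫ K_j(y, x) q(x) dµ(x) = q(y) for every polynomial q of degree ≤ j. The weight, test polynomial and evaluation point below are arbitrary choices (Python/NumPy; not part of the thesis):

```python
import numpy as np

# K_j(y, x) = [1, x, ..., x^j] H_{j+1}^{-1} [1, y, ..., y^j]^T for the
# Chebyshev weight; Gauss-Chebyshev quadrature is exact here.
N = 64
nodes = np.cos((2*np.arange(1, N + 1) - 1)*np.pi/(2*N))
mu = np.array([(np.pi/N)*np.sum(nodes**k) for k in range(0, 14)])

j = 4
H = np.array([[mu[s + t] for t in range(j + 1)] for s in range(j + 1)])
Hinv = np.linalg.inv(H)

q = np.array([0.3, -1.0, 0.0, 2.0, 0.5])        # deg q = j, low -> high
y = 0.37
ypow = y**np.arange(j + 1)
xpow = nodes[:, None]**np.arange(j + 1)[None, :]
Kyx = xpow @ Hinv @ ypow                         # K_j(y, x_i) at the nodes
integral = (np.pi/N)*np.sum(Kyx * (xpow @ q))    # ∫ K_j(y,x) q(x) w(x) dx
assert abs(integral - ypow @ q) < 1e-10
print("kernel reproduces q(y):", integral)
```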
Now, we will show the relation between the matrix polynomials P̃_j and P_j, as well as the relation between s̃_j and s_j, in terms of the sequence of matrix moments (µ_n)_{n≥0}. Notice that the perturbation of a matrix measure µ supported on E = [−1, 1] defined by (2.23) constitutes a particular case with

    M_n = { m_j B(n, j),  if j ≤ n and n + j is even,
          { 0,            otherwise.

    P̃_j(x) = P_j(x) − Σ_{p=0}^{k} (K_{j−1}^{(k−p,0)}(0, x)/(k−p)!) M_k (P̃_j^{(p)}(0)/p!) = P_j(x) − K_{j−1}^k(0, x) D_{k,0}^{−1} D_{M_k} D_{0,k}^{−1} P̃_{j,k}(0),

with

    P̃_{j,k}(0) = [ I_{(k+1)l} + D_{0,k} (Λ_{0,k}^{(j)})^H H_j^{−1} Λ_{0,k}^{(j)} D_{k,0}^{−1} D_{M_k} Λ_{k,0}^{(k+1)} ]^{−1} P_{j,k}(0).      (2.30)
Theorem 2.2.3 ([40]). Let µ and µ̃ be Hermitian matrix measures associated with the Hermitian sequences (µ_k)_{k=0}^{2n−1} and (µ̃_k)_{k=0}^{2n−1}, where µ̃_k = µ_k + M_k, with 0 ≤ k ≤ 2n−1. For 1 ≤ j ≤ n, (P̃_j(x))_{j=0}^{n} is orthogonal with respect to µ̃ if

    P̃_j(x) = P_j(x) − Σ_{k=0}^{2n−1} K_{j−1}^k(0, x) M(k, j) P_{j,k}(0),  with P̃_0(x) = P_0(x) = I,

where

    M(k, j) = D_{k,0}^{−1} D_{M_k} D_{0,k}^{−1} [ I_{(k+1)l} + D_{0,k} (Λ_{0,k}^{(j)})^H H_j^{−1} Λ_{0,k}^{(j)} D_{k,0}^{−1} D_{M_k} Λ_{k,0}^{(k+1)} ]^{−1}.      (2.31)
Now, we will deduce a relation for the corresponding norms. We will need the following auxiliary results. For 0 ≤ j ≤ n, let M̃_j ∈ M_{jl} be the block Hankel matrix

    M̃_j := [ M_0      ⋯   M_k        ⋯   M_{j−1}   ]
            [  ⋮           ⋰              ⋮        ]
            [ M_k          ⋰              M_{2j−2−k}]   =  Σ_{k=0}^{j−1} M_{j,k} + Σ_{k=j}^{2j−2} M_{j,k},
            [  ⋮           ⋰              ⋮        ]
            [ M_{j−1}  ⋯   M_{2j−2−k} ⋯   M_{2j−2}  ]

where M_{j,k} denotes the block matrix with M_k on its k-th antidiagonal and zero blocks elsewhere.

1. If k = 0, …, j−1, then

       H̃_{j,k}^{−1} − H_j^{−1} = − Σ_{p=0}^{k} H_j^{−1} χ_{k−p}^{(j)} M_k (χ_p^{(j)})^H H̃_{j,k}^{−1}.      (2.33)

2. If k = j, …, 2j−2, then

       H̃_{j,k}^{−1} − H_j^{−1} = − Σ_{p=k−(j−1)}^{j−1} H_j^{−1} χ_{k−p}^{(j)} M_k (χ_p^{(j)})^H H̃_{j,k}^{−1}.      (2.34)

and thus

    M_{j,k} = χ_k^{(j)} M_k (χ_0^{(j)})^H + ⋯ + χ_0^{(j)} M_k (χ_k^{(j)})^H = Σ_{p=0}^{k} χ_{k−p}^{(j)} M_k (χ_p^{(j)})^H.

Substituting M_{j,k} in (2.35), (2.33) follows. A similar procedure can be used to prove (2.34).
Let H̃_j ∈ M_{lj} be the block Hankel matrix associated with the perturbed sequence of matrix moments, i.e.

    H̃_j = [ µ̃_0      µ̃_1  ⋯  µ̃_{j−1} ]
           [ µ̃_1      µ̃_2  ⋯  µ̃_j     ]   =  H_j + M̃_j.
           [  ⋮        ⋮    ⋱   ⋮      ]
           [ µ̃_{j−1}  µ̃_j  ⋯  µ̃_{2j−2}]

Theorem 2.2.5. Let µ be a Hermitian matrix measure with matrix moments (µ_k)_{k=0}^{2n−1}. Define a new Hermitian sequence of matrix moments (µ̃_k)_{k=0}^{2n−1} by µ̃_k = µ_k + M_k, with 0 ≤ k ≤ 2n−1, and denote by µ̃ its corresponding Hermitian matrix measure. Then, for 1 ≤ j ≤ n, we have

    s̃_j = s_j + Σ_{k=0}^{2n−1} P_{j,k}(0)^H Λ_{k,0}^{(k+1)} M(k, j) P_{j,k}(0).      (2.37)
Proof. Let us assume first that only the k-th moment is perturbed. From (2.26) we have that if 2j < k, then s̃_j = s_j. We will show that if k ≤ 2j, we have

    s̃_j = s_j + Σ_{p=0}^{k} ((P_j^{(k−p)}(0))^H/(k−p)!) M_k (P̃_j^{(p)}(0)/p!).      (2.38)

For 0 ≤ k ≤ j−1, notice that µ̃_{2j} = µ_{2j} and ṽ_{j,2j−1} = v_{j,2j−1}, and thus from (2.26) we get

    s̃_j = s_j − µ_{2j} + v_{j,2j−1}^H H_j^{−1} v_{j,2j−1} + µ̃_{2j} − ṽ_{j,2j−1}^H H̃_{j,k}^{−1} ṽ_{j,2j−1} = s_j − v_{j,2j−1}^H (H̃_{j,k}^{−1} − H_j^{−1}) ṽ_{j,2j−1}.

Since

    P̃_j^{(p)}(0) = 0, if j < p,   and   P_j^{(k−p)}(0) = 0, if p < k − j,

we get (2.38).

Finally, for k = 2j (notice that if j = n the result is also valid with M_{2j} = 0), µ̃_{2j} = µ_{2j} + M_{2j} and ṽ_{j,2j−1}^H H̃_{j,k}^{−1} ṽ_{j,2j−1} = v_{j,2j−1}^H H_j^{−1} v_{j,2j−1}, so

    s̃_j = s_j − µ_{2j} + v_{j,2j−1}^H H_j^{−1} v_{j,2j−1} + µ̃_{2j} − ṽ_{j,2j−1}^H H̃_{j,k}^{−1} ṽ_{j,2j−1}
         = s_j − µ_{2j} + µ̃_{2j} = s_j + M_{2j} = s_j + Σ_{p=0}^{k} ((P_j^{(k−p)}(0))^H/(k−p)!) M_k (P̃_j^{(p)}(0)/p!).

On the other hand, using the notation given in (2.29), (2.38) can be written as

    s̃_j = s_j + [P_j^{(k)}(0)^H  P_j^{(k−1)}(0)^H ⋯ P_j^{(0)}(0)^H] D_{k,0}^{−1} D_{M_k} D_{0,k}^{−1} P̃_{j,k}(0).

The general case, i.e. when all moments are perturbed (k = 0, …, 2n−1), is easy to deduce, since we have (2.36) and

    ṽ_{j,2j−1} = v_{j,2j−1} + Σ_{k=0}^{2j−1} χ_{k−j}^{(j)} M_k,

so (2.37) follows.
2.2.3.2 Connection formulas for the unit circle and further results
as

    T_j = [ c_0^H + c_0   c_1          ⋯   c_{j−1}     ]
          [ c_1^H         c_0^H + c_0  ⋯   c_{j−2}     ]        G_j = [ T_j                ν_j    ]
          [  ⋮             ⋮           ⋱    ⋮          ],             [ I  zI  ⋯  z^{j−1}I  z^j I ],
          [ c_{j−1}^H     c_{j−2}^H    ⋯   c_0^H + c_0 ]

and the block vector

    ν_j = [ c_j^H  c_{j−1}^H ⋯ c_1^H ]^H ∈ M_{lj×l}.

Notice that T_j is an invertible Hermitian Toeplitz matrix for j ≥ 1. As in the real line case, any sequence (c_j)_{j∈Z} such that T_j is invertible and Hermitian for j ≥ 1 will be called a Hermitian sequence, and it has an associated Hermitian matrix measure σ supported on T. The sequence of monic matrix polynomials (Φ_j^R(z))_{j≥0} defined by (see [141])

    Φ_j^R(z) = z^j I − [I zI ⋯ z^{j−1} I] T_j^{−1} ν_j,  with Φ_0^R(z) = I,      (2.39)

i.e. each Φ_j^R is defined as the Schur complement of z^j I in the matrix G_j, is orthogonal with respect to the sesquilinear form (1.23) and satisfies

    ⟨Φ_m^R, Φ_j^R⟩_{R,σ} = δ_{m,j} S_j^R,  m = 0, …, j,

where the Hermitian matrix S_j^R is the Schur complement of T_{j+1} with respect to c_0 + c_0^H, i.e.,

    S_j^R = c_0^H + c_0 − ν_j^H T_j^{−1} ν_j,  with S_0^R = c_0 + c_0^H.      (2.40)
In a similar way, a sequence (Φ_n^L(z))_{n≥0} can be constructed that is orthogonal with respect to the sesquilinear form (1.24). Henceforth we will consider only the right matrix polynomials, denoted by Φ_j = Φ_j^R, and the corresponding norms S_j = S_j^R for j ≥ 0, since the results for the left polynomials are analogous.

Let Φ_j^{(k)}(z) be the k-th derivative of Φ_j(z). From (2.39) we can deduce

    Φ_j^{(k)}(0)/k! = { −(χ_k^{(j)})^H T_j^{−1} ν_j,  if 0 ≤ k ≤ j−1,
                      { I,                              if k = j.            (2.41)

The kernel matrix polynomial defined in (1.55) can also be expressed in terms of the moments by (see [141])

    K_j(w, z) = [I zI ⋯ z^j I] T_{j+1}^{−1} [I ; w̄I ; ⋮ ; w̄^j I].          (2.42)
As in the real case, K_j^{(i,r)}(w, z) denotes the i-th (resp. r-th) derivative of K_j(w, z) with respect to the variable w (resp. z). Thus, from (2.42), for 0 ≤ i ≤ j and 0 ≤ r ≤ j, we have

    K_j^{(i,0)}(0, z)/i! = [I zI ⋯ z^j I] T_{j+1}^{−1} χ_i^{(j+1)},

    K_j^{(0,r)}(w, 0)/r! = (χ_r^{(j+1)})^H T_{j+1}^{−1} [I ; w̄I ; ⋮ ; w̄^j I],

    K_j^{(i,r)}(0, 0)/(i! r!) = (χ_r^{(j+1)})^H T_{j+1}^{−1} χ_i^{(j+1)}.

In this subsection, in a similar way as in the real case, we will show the relation between the matrix polynomials Φ̃_j and Φ_j, as well as the relation between S̃_j and S_j, in terms of the matrix moments (c_n)_{n≥0}. Notice that (2.20) constitutes a particular case, when only one of the moments is perturbed. Let us define the following notation:

    K_{j−1}^k(0, z) := [K_{j−1}^{(0,0)}(0, z)  K_{j−1}^{(1,0)}(0, z) ⋯ K_{j−1}^{(k,0)}(0, z)] ∈ M_{l×(k+1)l},

    Υ_j^k(0) := [Φ_j^{(0)}(0)^T  Φ_j^{(1)}(0)^T ⋯ Φ_j^{(k)}(0)^T]^T ∈ M_{(k+1)l×l},

    Υ̃_j^k(0) := [Φ̃_j^{(0)}(0)^T  Φ̃_j^{(1)}(0)^T ⋯ Φ̃_j^{(k)}(0)^T]^T ∈ M_{(k+1)l×l},      (2.43)

    D_{m_k}^{j+1} := diag{m_k, ⋯, m_k} ∈ M_{(j+1)l},

    D_{k,j+k} := diag{k!I, ⋯, (j+k)!I} ∈ M_{(j+1)l}.
As in the real line case, for a fixed k, with 0 ≤ k ≤ j−1, in [41] the authors proved that

    Φ̃_j(z) = Φ_j(z) − Σ_{p=0}^{j−k} (K_{j−1}^{(p,0)}(0, z)/p!) m_k (Φ̃_j^{(p+k)}(0)/(p+k)!) − Σ_{p=0}^{j−k−1} (K_{j−1}^{(p+k,0)}(0, z)/(p+k)!) m_k^H (Φ̃_j^{(p)}(0)/p!)
            = Φ_j(z) − K_{j−1}^j(0, z) [ D_{0,j}^{−1} D_{k,j+k}^{−1} Λ_{−k,j−k}^{(j+1)} D_{m_k}^{j+1} + Λ_{k,j+k}^{(j+1)} D_{0,j}^{−1} D_{k,j+k}^{−1} D_{m_k^H}^{j+1} ] Υ̃_j^j(0),

with

    Υ̃_j^j(0) = [ I_{(j+1)l} + D_{0,j} (Λ_{0,j}^{(j)})^H T_j^{−1} Λ_{0,j}^{(j)} D_{k,j+k}^{−1} ( Λ_{−k,j−k}^{(j+1)} D_{m_k}^{j+1} + Λ_{k,j+k}^{(j+1)} D_{0,j}^{−1} D_{k,j+k} D_{m_k^H}^{j+1} ) ]^{−1} Υ_j^j(0).      (2.44)
As a consequence, we have the following result.

Theorem 2.2.6 ([41]). Let σ be a Hermitian matrix measure supported on T with matrix moments (c_k)_{k=0}^{n}. Define a new Hermitian sequence of matrix moments (c̃_k)_{k=0}^{n} by c̃_k = c_k + m_k, for 0 ≤ k ≤ n. Then, if (Φ_j(z))_{j=0}^{n} and (Φ̃_j(z))_{j=0}^{n} are the corresponding monic MOPUC, we have

    Φ̃_j(z) = Φ_j(z) − Σ_{k=0}^{j} K_{j−1}^j(0, z) N(k, j) Υ_j^j(0),  with Φ̃_0(z) = Φ_0(z) = I,

where

    N(k, j) = [ D_{0,j}^{−1} D_{k,j+k}^{−1} Λ_{−k,j−k}^{(j+1)} D_{m_k}^{j+1} + Λ_{k,j+k}^{(j+1)} D_{0,j}^{−1} D_{k,j+k}^{−1} D_{m_k^H}^{j+1} ]
              × [ I_{(j+1)l} + D_{0,j} (Λ_{0,j}^{(j)})^H T_j^{−1} Λ_{0,j}^{(j)} D_{k,j+k}^{−1} ( Λ_{−k,j−k}^{(j+1)} D_{m_k}^{j+1} + Λ_{k,j+k}^{(j+1)} D_{0,j}^{−1} D_{k,j+k} D_{m_k^H}^{j+1} ) ]^{−1}.      (2.45)
The above theorem is a generalization to the matrix case of what was studied in [31, 78]. The particular case of a perturbation of a single matrix moment given by (2.20) is shown in the following corollary.

Corollary 2.2.7. Let k ∈ N (fixed) with 0 ≤ k ≤ n, and let σ be a Hermitian matrix measure supported on T. Define σ_k as the perturbation of σ given by (2.20). Then,

    T_j^{−1} − T̃_{j,k}^{−1} = Σ_{p=0}^{j−k−1} T_j^{−1} χ_p^{(j)} m_k (χ_{p+k}^{(j)})^H T̃_{j,k}^{−1} + Σ_{p=0}^{j−k−1} T_j^{−1} χ_{p+k}^{(j)} m_k^H (χ_p^{(j)})^H T̃_{j,k}^{−1}.      (2.47)
Proof. We have

    T̃_{j,k}^{−1} − T_j^{−1} = T_j^{−1} (T_j − T̃_{j,k}) T̃_{j,k}^{−1} = −T_j^{−1} N_{j,k} T̃_{j,k}^{−1}.      (2.48)

Notice that

    N_{j,k} = Σ_{p=0}^{j−k−1} χ_p^{(j)} m_k (χ_{p+k}^{(j)})^H + Σ_{p=0}^{j−k−1} χ_{p+k}^{(j)} m_k^H (χ_p^{(j)})^H,

so (2.47) follows.

Let T̃_j ∈ M_{lj} be the block Toeplitz matrix with every moment perturbed, i.e.

    T̃_j = [ c̃_0^H + c̃_0   c̃_1          ⋯   c̃_{j−1}     ]
           [ c̃_1^H          c̃_0^H + c̃_0  ⋯   c̃_{j−2}     ]   =  T_j + Ñ_j,
           [  ⋮              ⋮            ⋱    ⋮           ]
           [ c̃_{j−1}^H      c̃_{j−2}^H    ⋯   c̃_0^H + c̃_0 ]
Theorem 2.2.9. Let σ be a Hermitian matrix measure supported on T with matrix moments (c_k)_{k=0}^{n}. Define a Hermitian sequence (c̃_k)_{k=0}^{n} with c̃_k = c_k + m_k, for 0 ≤ k ≤ n, and denote by σ̃ the associated Hermitian matrix measure. Then, for 0 ≤ j ≤ n, we have

    S̃_j = S_j + Σ_{k=0}^{j} Υ_j^j(0)^H N(k, j) Υ_j^j(0).      (2.50)

Proof. Let us first assume that only a single k-th matrix moment is perturbed. From (2.40), we have that if j < k, then S̃_j = S_j. Let us show that for 0 ≤ k ≤ j,

    S̃_j = S_j + Σ_{p=0}^{j−k} ((Φ_j^{(p)}(0))^H/p!) m_k (Φ̃_j^{(p+k)}(0)/(p+k)!) + Σ_{p=0}^{j−k} ((Φ_j^{(p+k)}(0))^H/(p+k)!) m_k^H (Φ̃_j^{(p)}(0)/p!).      (2.51)
we deduce (2.51).

For 1 ≤ k ≤ j, ν̃_j = ν_j + χ_{j−k}^{(j)} m_k, and proceeding as above we get

    S̃_j = S_j − ν_j^H (T̃_{j,k}^{−1} − T_j^{−1}) ν̃_j − ν_j^H T_j^{−1} χ_{j−k}^{(j)} m_k − m_k^H (χ_{j−k}^{(j)})^H T̃_{j,k}^{−1} ν̃_j,

so (2.51) follows.

On the other hand, using (2.43), the first sum of (2.51) can be written as

    Σ_{p=0}^{j−k} ((Φ_j^{(p)}(0))^H/p!) m_k (Φ̃_j^{(p+k)}(0)/(p+k)!) = Σ_{p=k}^{j} ((Φ_j^{(p−k)}(0))^H/(p−k)!) m_k (Φ̃_j^{(p)}(0)/p!) = Σ_{p=0}^{j} ((Φ_j^{(p−k)}(0))^H/(p−k)!) m_k (Φ̃_j^{(p)}(0)/p!)
        = Υ_j^j(0)^H D_{0,j}^{−1} D_{k,j+k}^{−1} Λ_{−k,j−k}^{(j+1)} D_{m_k}^{j+1} Υ̃_j^j(0).

The general case, when all moments are perturbed, follows immediately taking into account (2.49) and

    ν̃_j = ν_j + Σ_{k=1}^{j} χ_{j−k}^{(j)} m_k,

so (2.50) follows.
    S̃_j = S_j,  0 ≤ j < k,
    S̃_j = S_j + Υ_j^j(0)^H N(k, j) Υ_j^j(0),  k ≤ j ≤ n,
2.2.4 Example

As an illustrative example of the results of the above subsections, consider the matrix measure that appears in [24], Section 8,

    dσ(θ) = (1/√(cos 2θ)) [ √(1 + cos 2θ)     ±1            ] dθ/2π,  if θ ∈ (−π/4, π/4) or θ ∈ (3π/4, 5π/4).      (2.52)
                          [ ±1                √(1 + cos 2θ) ]

Now, we consider the problem of determining the matrix measure associated if some of the moments given by (2.53) change. For instance, let us introduce a perturbation of size ε on the matrix moment c_{4m+1}, for some m ∈ Z, given by

    c̃_{4m+1} = (α̃_m/√2) [ 0  1 ]  =  ((α_m + ε)/√2) [ 0  1 ]  =  c_{4m+1} + (ε/√2) [ 0  1 ],
                          [ 1  0 ]                     [ 1  0 ]                       [ 1  0 ]

i.e. m_{4m+1} = (ε/√2) [ 0  1 ; 1  0 ]. The perturbed matrix measure, obtained by using (2.21) (j = 4m+1), is

    dσ_{4m+1}(θ) = dσ(θ) + √2 ε [ 0                cos(4m+1)θ ] dθ/2π,      (2.54)
                                 [ cos(4m+1)θ      0           ]

where σ is as in (2.52). Notice that, with ε < 0, the perturbed measure σ_{4m+1} remains positive semi-definite. Furthermore, the matrix measure (2.52) and its perturbation (2.54) are Hermitian and thus there exist associated sequences of orthogonal matrix polynomials, as well as their norms, given by Theorems 2.2.6 and 2.2.9.
On the other hand, to illustrate the results in Section 2.2.2, by using (1.93), (1.94) and taking into account that cos 2θ = 2cos²θ − 1, we get

    dµ(x) = Sz^{−1}(dσ(θ)) = (1/(π√(2x²−1))) [ √2|x|   ±1     ] dx/√(1−x²),      (2.55)
                                              [ ±1      √2|x| ]

with x ∈ [−1, −1/√2] ∪ [1/√2, 1]. (2.55) is the measure associated with the measure (2.52), connected through the inverse Szegő matrix transformation. Furthermore, from Proposition 2.2.1 with ε < 0 we get

    dµ_{4m+1}(x) = dµ(x) + (√2 ε/π) [ 0             T_{4m+1}(x) ] dx/√(1−x²),      (2.56)
                                     [ T_{4m+1}(x)  0            ]

where µ is as in (2.55). Notice that the matrix measure (2.55) and its perturbation (2.56) are positive semi-definite, so there exist corresponding sequences of matrix orthogonal polynomials on the real line and their norms, which can be obtained from Theorems 2.2.3 and 2.2.5.
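Two numerical sanity checks on (2.55), assuming the normalization written above (Python/NumPy, not part of the thesis): the density matrix is positive semi-definite on the support, since √2|x| ≥ 1 there, and each diagonal entry integrates to 1, computed here via the substitution x = cos t:

```python
import numpy as np

# PSD check: eigenvalues of [[√2|x|, 1], [1, √2|x|]] are √2|x| ± 1 ≥ 0
# whenever |x| ≥ 1/√2, i.e. on the whole support of (2.55)
for x in (1/np.sqrt(2) + 1e-6, 0.8, 1.0 - 1e-6):
    A = np.array([[np.sqrt(2)*abs(x), 1.0], [1.0, np.sqrt(2)*abs(x)]])
    assert np.min(np.linalg.eigvalsh(A)) >= 0

# diagonal mass: 2·(1/π)∫_{1/√2}^{1} √2 x /(√(2x²-1)√(1-x²)) dx
# = (2/π)∫_0^{π/4} √2 cos t/√(cos 2t) dt  (midpoint rule; integrable singularity)
M = 2*10**6
t = (np.arange(M) + 0.5) * (np.pi/4) / M
f = np.sqrt(2)*np.cos(t)/np.sqrt(np.cos(2*t))
mass = (2/np.pi) * np.sum(f) * (np.pi/4) / M
assert abs(mass - 1.0) < 1e-2
print("density PSD on support, diagonal mass ≈", round(mass, 4))
```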
CHAPTER 3
Summary
The aim of this chapter is to study algebraic and analytic properties of the MOPUC associated with the Uvarov and Christoffel spectral transformations of a Hermitian matrix measure σ supported on the unit circle, defined respectively by

    dσ_u^m(z) = dσ(z) + Σ_{j=1}^{m} M_j δ(z − ζ_j),
3.1.1 Introduction
In this section, we consider the Uvarov matrix transformation with m > 0 masses of a Hermitian matrix measure σ supported on T, defined by

    dσ_u^m(z) = dσ(z) + Σ_{j=1}^{m} M_j δ(z − ζ_j),      (3.1)
Chapter 3. On the Uvarov and Christoffel spectral matrix transformation
relative asymptotics when σ ∈ N, and properties of their zeros, by using truncated block Hessenberg matrices. Notice that σ_u^k is the Uvarov matrix transformation of the measure σ_u^{k−1} with a single mass point, i.e.,

with σ_u^0 := σ. The right and left sesquilinear forms associated with the matrix measure (3.1) are

    ⟨p, q⟩_{R,σ_u^m} = ⟨p, q⟩_{R,σ} + Σ_{j=1}^{m} p(ζ_j)^H M_j q(ζ_j),  p, q ∈ P_l[z],      (3.3)

    ⟨p, q⟩_{L,σ_u^m} = ⟨p, q⟩_{L,σ} + Σ_{j=1}^{m} p(ζ_j) M_j q(ζ_j)^H,  p, q ∈ P_l[z].      (3.4)

If we define p(ζ) := [p(ζ_1)^T, …, p(ζ_m)^T]^T ∈ M_{ml×l}, q(ζ) := [q(ζ_1)^T, …, q(ζ_m)^T]^T ∈ M_{ml×l}, p̂(ζ) := [p(ζ_1), …, p(ζ_m)] ∈ M_{l×ml}, q̂(ζ) := [q(ζ_1), …, q(ζ_m)] ∈ M_{l×ml} and the block diagonal matrix

    M = [ M_1  ⋯  0   ]
        [  ⋮   ⋱  ⋮   ]  ∈ M_{lm},
        [ 0    ⋯  M_m ]

then (3.3) and (3.4) can be written as

    F̃_m^R = F^R + Σ_{j=1}^{m} [ M_j            ζ_j M_j        ζ_j² M_j  ⋯ ]
                               [ ζ_j^{−1} M_j   M_j            ζ_j M_j   ⋯ ]
                               [ ζ_j^{−2} M_j   ζ_j^{−1} M_j   M_j       ⋯ ]
                               [  ⋮              ⋮              ⋮        ⋱ ].

Since σ_u^m is an l×l Hermitian matrix measure on T, there exist sequences (U_{m,n}^R(z))_{n≥0} (right) and (U_{m,n}^L(z))_{n≥0} (left) which are orthonormal with respect to (3.5) and (3.6), respectively. In the remainder of this section, we will consider only the right sequences (φ_n^R(z))_{n≥0} and (U_{m,n}^R(z))_{n≥0}, since all the results can be obtained for the left sequences in a similar way.
In this subsection, we find the connection formula between the corresponding sequences of MOPUC and further properties.

Lemma 3.1.1. For n ≥ 0 and ζ_i ≠ ζ_j, i, j = 1, …, m, the block matrix K_n^R(ζ) defined by

    K_n^R(ζ) = [ K_n^R(ζ_1, ζ_1)  ⋯  K_n^R(ζ_m, ζ_1) ]
               [       ⋮          ⋱        ⋮         ]  ∈ M_{lm},
               [ K_n^R(ζ_1, ζ_m)  ⋯  K_n^R(ζ_m, ζ_m) ]

is positive definite.

Proof. First notice that K_n^R(ζ) is Hermitian, because K_n^R(ζ_i, ζ_j) = (K_n^R(ζ_j, ζ_i))^H for i, j = 1, …, m, and nonsingular because ζ_i ≠ ζ_j, i, j = 1, …, m. Now, let us consider the block matrix

    G_n^R(ζ) = [ φ_0^R(ζ_1)  ⋯  φ_n^R(ζ_1) ]
               [     ⋮       ⋱      ⋮      ]  ∈ M_{lm×l(n+1)}.
               [ φ_0^R(ζ_m)  ⋯  φ_n^R(ζ_m) ]

Since G_n^R(ζ) is nonzero for every n ≥ 0, the inequality is strict. As a consequence, K_n^R(ζ) is a positive definite matrix for n ≥ 0.
where

    K_{n−1}^R(ζ, z) := [K_{n−1}^R(ζ_1, z), …, K_{n−1}^R(ζ_m, z)] ∈ M_{l×ml},

    F_n^R(ζ) := [Φ_n^R(ζ_1)^T, …, Φ_n^R(ζ_m)^T]^T ∈ M_{ml×l}.

Moreover,

    (κ̃_{m,n}^{−R})^H κ̃_{m,n}^{−R} = S̃_{m,n}^R = S_n^R + F_n^R(ζ)^H M ( I_{lm} + K_{n−1}^R(ζ) M )^{−1} F_n^R(ζ),      (3.8)
Proof. Assume that (U_{m,n}^R(z))_{n≥0} is the sequence of monic MOPUC with respect to (3.5). We can write

    U_{m,n}^R(z) = Φ_n^R(z) + Σ_{k=0}^{n−1} Φ_k^R(z) λ_{n,k}^R,      (3.9)
    λ_{n,k}^R = −S_k^{−R} Σ_{j=1}^{m} Φ_k^{R,H}(ζ_j) M_j U_{m,n}^R(ζ_j),  0 ≤ k ≤ n−1,

and writing U_{m,n}^R(ζ) := [U_{m,n}^R(ζ_1)^T, …, U_{m,n}^R(ζ_m)^T]^T ∈ M_{ml×l}, (3.10) becomes

    U_{m,n}^R(z) = Φ_n^R(z) − K_{n−1}^R(ζ, z) M U_{m,n}^R(ζ).      (3.11)
    ⟨z^k I, U_{m,n}^R⟩_{R,σ} = ⟨z^k I, Φ_n^R⟩_{R,σ} − Σ_{j=1}^{m} ⟨z^k I, K_{n−1}^R(ζ_j, z)⟩_{R,σ} M_j U_{m,n}^R(ζ_j)
        = δ_{n,k} S_n^R + Σ_{j=1}^{m} ( −⟨z^k I, K_n^R(ζ_j, z)⟩_{R,σ} + ⟨z^k I, φ_n^R⟩_{R,σ} φ_n^{R,H}(ζ_j) ) M_j U_{m,n}^R(ζ_j)
        = δ_{n,k} S_n^R − Σ_{j=1}^{m} (ζ_j^k I)^H M_j U_{m,n}^R(ζ_j) + Σ_{j=1}^{m} ⟨z^k I, Φ_n^R⟩_{R,σ} S_n^{−R} Φ_n^{R,H}(ζ_j) M_j U_{m,n}^R(ζ_j)
        = δ_{n,k} S_n^R − Σ_{j=1}^{m} (ζ_j^k I)^H M_j U_{m,n}^R(ζ_j) + δ_{n,k} Σ_{j=1}^{m} Φ_n^{R,H}(ζ_j) M_j U_{m,n}^R(ζ_j).

Substituting in (3.13),

    ⟨z^k I, U_{m,n}^R⟩_{R,σ_u^m} = δ_{n,k} ( S_n^R + Σ_{j=1}^{m} Φ_n^{R,H}(ζ_j) M_j U_{m,n}^R(ζ_j) ) = δ_{n,k} ( S_n^R + F_n^R(ζ)^H M U_{m,n}^R(ζ) ),

so that (U_{m,n}^R(z))_{n≥0} is orthogonal with respect to (3.5). From (3.12) we obtain (3.8) for k = n. On the other hand, since M is positive definite,

    M ( I_{lm} + K_{n−1}^R(ζ) M )^{−1} = ( M^{−1} + K_{n−1}^R(ζ) )^{−1}

is positive definite, and therefore S̃_{m,n}^R is positive definite for n ≥ 1.
Notice that the $n$-th degree matrix orthonormal polynomial has the form
\[
\mathcal{U}_{m,n}^R(z) = U_{m,n}^R(z)\, \tilde{\kappa}_{m,n}^R = \tilde{\kappa}_{m,n}^R z^n + \cdots,
\]
so that, from (3.11),
\[
\mathcal{U}_{m,n}^R(z) = \left[ \varphi_n^R(z) - K_{n-1}^R(\zeta, z) M \left( I_{lm} + K_{n-1}^R(\zeta) M \right)^{-1} L_n^R(\zeta) \right] \kappa_n^{-R} \tilde{\kappa}_{m,n}^R, \tag{3.14}
\]
where $L_n^R(\zeta) := \left( \varphi_n^R(\zeta_1)^T, \dots, \varphi_n^R(\zeta_m)^T \right)^T \in \mathbb{M}_{ml \times l}$.
\[
d\sigma(\theta) = \frac{1}{2} \begin{pmatrix} 2 & e^{i\theta} \\ e^{-i\theta} & 2 \end{pmatrix} \frac{d\theta}{2\pi}. \tag{3.15}
\]
Now, let us consider the Uvarov transformation of (3.15) with one mass point at ζ1 = 2
and M1 = I2 , i.e.
dσu1 (θ) = dσ(θ) + I2 δ(z − 2).
Notice that for $n \geq 2$ we have
\[
I_2 + K_{n-1}^R(2, 2) = 2 I_2 + \frac{4^n - 4}{9} \begin{pmatrix} 4 & -1 \\ -1 & \frac{13}{4} \end{pmatrix} = \frac{1}{9} \begin{pmatrix} 4^{n+1} + 2 & 4 - 4^n \\ 4 - 4^n & \frac{13}{4}\, 4^n + 5 \end{pmatrix},
\]
and, denoting $\Xi_{n-1} = \det \left( I_2 + K_{n-1}^R(2, 2) \right) = \frac{4}{27} (16)^n + \frac{23}{54} (4^n) - \frac{2}{27}$, we get
\[
\left( I_2 + K_{n-1}^R(2, 2) \right)^{-1} = \frac{1}{9 \Xi_{n-1}} \begin{pmatrix} \frac{13}{4}\, 4^n + 5 & 4^n - 4 \\ 4^n - 4 & 4^{n+1} + 2 \end{pmatrix},
\]
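The closed form of the determinant $\Xi_{n-1}$ can be verified with exact rational arithmetic. The sketch below assumes the entries of $I_2 + K_{n-1}^R(2,2)$ as transcribed above, so those entries are an assumption of this check rather than an independent derivation.

```python
from fractions import Fraction as F

# Exact check of the determinant formula for Xi_{n-1}; the matrix entries
# of I_2 + K_{n-1}^R(2, 2) used below are transcribed from the display
# above, so they are an assumption of this verification.
def xi_closed(n):
    return F(4, 27) * 16**n + F(23, 54) * 4**n - F(2, 27)

def xi_det(n):
    a = F(4**(n + 1) + 2, 9)            # (1,1) entry
    b = F(4 - 4**n, 9)                  # off-diagonal entry
    d = F(13 * 4**n, 36) + F(5, 9)      # (2,2) entry
    return a * d - b * b

print(all(xi_det(n) == xi_closed(n) for n in range(2, 10)))  # True
```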
\[
\left( I_2 + K_{n-1}^R(2, 2) \right)^{-1} \Phi_n^R(2) = \frac{2^n}{9 \Xi_{n-1}} \begin{pmatrix} 3(4^n) + 6 & 4^n - 4 \\ -\frac{9}{2} & 4^{n+1} + 2 \end{pmatrix},
\]
and
\begin{align*}
K_{n-1}^R&(2, z) \left( I_2 + K_{n-1}^R(2, 2) \right)^{-1} \Phi_n^R(2) \\
&= \frac{2^n}{9 \Xi_{n-1}} \left[ \frac{2z - (2z)^n}{1 - 2z} \begin{pmatrix} 4^{n+1} + \frac{19}{2} & -6 \\ -\frac{1}{4z} \left( 18 z + 2^{2n+3} + 19 \right) & \frac{1}{z} \left( 2z + 4^{n+1} z + 3 \right) \end{pmatrix} + \begin{pmatrix} 3 \times 4^n + 16 & 4^n - 4 \\ -\frac{9}{2} & 4^{n+1} + 2 \end{pmatrix} \right].
\end{align*}
Therefore, $U_{1,n}^R(z) = \begin{pmatrix} \phi_{1,1}(z, n) & \phi_{1,2}(z, n) \\ \phi_{2,1}(z, n) & \phi_{2,2}(z, n) \end{pmatrix}$, where
\begin{align*}
\phi_{1,1}(z, n) &= z^n - \frac{2^n}{9 \Xi_{n-1}} \left( \left( 4^{n+1} + \frac{19}{2} \right) \sum_{k=1}^{n-1} (2z)^k + 3(4^n) + 16 \right), \\
\phi_{1,2}(z, n) &= \frac{2^n}{9 \Xi_{n-1}} \left( 6 \sum_{k=1}^{n-1} (2z)^k - 4^n + 4 \right), \\
\phi_{2,1}(z, n) &= -\frac{1}{2} z^{n-1} + \frac{2^n}{9 \Xi_{n-1}} \left( \frac{18 z + 2^{2n+3} + 19}{4} \sum_{k=1}^{n-1} 2^k z^{k-1} + \frac{9}{2} \right), \\
\phi_{2,2}(z, n) &= z^n - \frac{2^n}{9 \Xi_{n-1}} \left( \left( 2z + 4^{n+1} z + 3 \right) \sum_{k=1}^{n-1} 2^k z^{k-1} + 4^{n+1} + 2 \right).
\end{align*}
Now, let us consider the Uvarov transformation with two masses of (3.15), with mass
points ζ1 = 2 and ζ2 = 3 and M1 = M2 = I2 , i.e.
Thus, $U_{2,1}^R(z) = \begin{pmatrix} z - \frac{2}{5} & 0 \\ 0 & z - \frac{2}{5} \end{pmatrix}$, and from (3.7) we can compute the sequence of monic matrix polynomials orthogonal with respect to (3.16) for $n \geq 2$.
\[
d\sigma(\theta) = \frac{1}{2} \begin{pmatrix} 2 & e^{i\theta} & 0 \\ e^{-i\theta} & 2 & e^{i\theta} \\ 0 & e^{-i\theta} & 2 \end{pmatrix} \frac{d\theta}{2\pi}. \tag{3.17}
\]
We have that $\log(\det W(\theta)) = \log(1/2)$ is integrable over the unit circle, so $\sigma \in \mathcal{S} \subset \mathcal{N}$. Using the matrix Heine formula [141], we obtain
\[
\Phi_0^R(z) = I_3, \quad S_0^R = I_3, \quad
\Phi_1^R(z) = \begin{pmatrix} z & 0 & 0 \\ -\frac{1}{2} & z & 0 \\ 0 & -\frac{1}{2} & z \end{pmatrix}, \quad
S_1^R = \begin{pmatrix} \frac{3}{4} & 0 & 0 \\ 0 & \frac{3}{4} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
\[
\Phi_0^L(z) = I_3, \quad S_0^L = I_3, \quad
\Phi_1^L(z) = \begin{pmatrix} z & 0 & 0 \\ -\frac{1}{2} & z & 0 \\ 0 & -\frac{1}{2} & z \end{pmatrix}, \quad
S_1^L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{3}{4} & 0 \\ 0 & 0 & \frac{3}{4} \end{pmatrix},
\]
and for $n \geq 2$,
\[
\Phi_n^R(z) = \begin{pmatrix} z^n & 0 & 0 \\ -\frac{2}{3} z^{n-1} & z^n & 0 \\ \frac{1}{3} z^{n-2} & -\frac{1}{2} z^{n-1} & z^n \end{pmatrix}, \quad
S_n^R = \begin{pmatrix} \frac{2}{3} & 0 & 0 \\ 0 & \frac{3}{4} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
\[
\Phi_n^L(z) = \begin{pmatrix} z^n & 0 & 0 \\ -\frac{1}{2} z^{n-1} & z^n & 0 \\ \frac{1}{3} z^{n-2} & -\frac{2}{3} z^{n-1} & z^n \end{pmatrix}, \quad
S_n^L = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{3}{4} & 0 \\ 0 & 0 & \frac{2}{3} \end{pmatrix}.
\]
Now, let us consider the Uvarov transformation of (3.17) with one mass point at ζ = ζ1 ∈ T
and M = M1 ∈ M3 , i.e.
dσu1 (θ) = dσ(θ) + Mδ(z − ζ).
\begin{align}
U_{1,n}^R(z) &= \Phi_n^R(z) - K_{n-1}^R(\zeta, z) M \left( I_3 + K_{n-1}^R(\zeta, \zeta) M \right)^{-1} \Phi_n^R(\zeta), \tag{3.18} \\
\tilde{S}_{1,n}^R &= S_n^R + \Phi_n^{R,H}(\zeta) M \left( I_3 + K_{n-1}^R(\zeta, \zeta) M \right)^{-1} \Phi_n^R(\zeta). \nonumber
\end{align}
\[
\Phi_n^{L,*}(z) = \begin{pmatrix} 1 & -\frac{1}{2} z & \frac{1}{3} z^2 \\ 0 & 1 & -\frac{2}{3} z \\ 0 & 0 & 1 \end{pmatrix}.
\]
Since the Nevai condition only depends on the absolutely continuous component of the
matrix measure, and the Uvarov matrix transformation only modifies the singular part of
the measure, the following result is straightforward.
2. If $|\zeta| = 1$, we have
\[
\lim_{n \to \infty} \varphi_n^{R,H}(\zeta) M \left( I + K_n^R(\zeta, \zeta) M \right)^{-1} \varphi_n^R(\zeta) = 0. \tag{3.20}
\]
from (1.81) and (1.82) with $z = \zeta$ we obtain (3.19). On the other hand, let $|\zeta| = 1$. Notice that we have
\[
K_n^R(\zeta, \zeta) \leq \left( I + K_n^R(\zeta, \zeta) M \right) M^{-1} = M^{-1} + K_n^R(\zeta, \zeta),
\]
and thus $M \left( I + K_n^R(\zeta, \zeta) M \right)^{-1} \leq K_n^R(\zeta, \zeta)^{-1}$. Therefore
\[
\left\| \varphi_n^{R,H}(\zeta) M \left( I + K_n^R(\zeta, \zeta) M \right)^{-1} \varphi_n^R(\zeta) \right\|_2 \leq \left\| \varphi_n^{R,H}(\zeta) K_n^R(\zeta, \zeta)^{-1} \varphi_n^R(\zeta) \right\|_2.
\]
\[
0 < \left\| \left( \kappa_n^R \right)^{-1} \tilde{\kappa}_{m,n}^R \right\|_2^2 \leq 1. \tag{3.22}
\]
Considering $\langle U_{m,n}^R, \varphi_n^R \rangle_{R,\sigma_{u_m}} = \tilde{\kappa}_{m,n}^{-R} \kappa_n^R$ and $\langle U_{m,n}^R, \varphi_n^R \rangle_{R,\sigma} = \left( \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \right)^H$, and taking into account (3.21), (3.23) becomes
\[
\tilde{\kappa}_{m,n}^{-R} \kappa_n^R = \left( \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \right)^H + \left( \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \right)^H L_n^R(\zeta)^H \left( I_{lm} + M K_{n-1}^R(\zeta) \right)^{-1} M L_n^R(\zeta),
\]
so that
\[
I - \left( \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \right)^H \kappa_n^{-R} \tilde{\kappa}_{m,n}^R = \left( L_n^R(\zeta) \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \right)^H \left( I_{lm} + M K_{n-1}^R(\zeta) \right)^{-1} M L_n^R(\zeta) \kappa_n^{-R} \tilde{\kappa}_{m,n}^R.
\]
Since the right-hand side is positive semi-definite, because $M$ and $K_{n-1}^R(\zeta)$ are both positive definite, we have $\left( \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \right)^H \kappa_n^{-R} \tilde{\kappa}_{m,n}^R \leq I$, and therefore (3.22) follows.
The following result was proved in [173] for the particular case m = 1. Since the proof
for an arbitrary value of m is analogous, we do not include it here.
Lemma 3.1.8. For $n \geq 0$, the matrices $\left( \kappa_n^R \right)^{-1} \tilde{\kappa}_{m,n}^R$ can be assumed to be lower triangular with positive entries on the main diagonal.
circle have been widely studied in the literature (see [29, 33, 74, 126, 135] among many
others). Some applications in this direction are asymptotics of orthogonal polynomials
on unbounded intervals [129, 149, 167], and (one-point) Padé [128] and Hermite-Padé
approximations [22]. Moreover, in [101] the author studies Padé approximation to Markov
functions to which a rational function is added, which is equivalent to adding mass points
to a given measure, i.e. an Uvarov transformation with several masses.
As expected, the asymptotic behavior of the polynomials will depend on the location
of the mass points.
3.1.3.1 Case ζj ∈ C \ D̄
Proof. We use induction on $m$. For $m = 1$ the result was proved in [173, Theorem 3.4], i.e.,
\[
\lim_{n \to \infty} \left( \kappa_n^R \right)^{-1} \tilde{\kappa}_{1,n}^R = \frac{1}{|\zeta_1|} I. \tag{3.25}
\]
For $m > 1$, from Lemma 3.1.5 we have that $\sigma_{u_k}$ belongs to $\mathcal{N}$ for $k = 2, \dots, m$, and $\sigma_{u_k}$ satisfies (3.2). Then, we can apply the previous argument to $U_{k,n}^R(z) = z^n \tilde{\kappa}_{k,n}^R + \cdots$ in terms of $U_{k-1,n}^R(z) = z^n \tilde{\kappa}_{k-1,n}^R + \cdots$ to obtain
\[
\lim_{n \to \infty} \left( \tilde{\kappa}_{k-1,n}^R \right)^{-1} \tilde{\kappa}_{k,n}^R = \frac{1}{|\zeta_k|} I, \quad k = 2, \dots, m. \tag{3.26}
\]
Since $\left( \kappa_n^R \right)^{-1} \tilde{\kappa}_{m,n}^R = \left( \kappa_n^R \right)^{-1} \tilde{\kappa}_{1,n}^R \left( \tilde{\kappa}_{1,n}^R \right)^{-1} \tilde{\kappa}_{2,n}^R \cdots \left( \tilde{\kappa}_{m-1,n}^R \right)^{-1} \tilde{\kappa}_{m,n}^R$, from (3.25) and (3.26) we easily get (3.24).
Proof. We use induction on $m$. For $m = 1$, the result was proved in [173, Theorem 4.1], i.e.,
\[
\lim_{n \to \infty} \varphi_n^R(z)^{-1} U_{1,n}^R(z) = \frac{\bar{\zeta}_1}{|\zeta_1|} \frac{z - \zeta_1}{z \bar{\zeta}_1 - 1} I, \tag{3.28}
\]
uniformly on every compact subset of $\mathbb{C} \setminus \mathbb{D}$. For $m > 1$, from Lemma 3.1.5 we know that $\sigma_{u_k}$, $k = 2, \dots, m$, belongs to $\mathcal{N}$, and taking into account (3.2), we have
\[
\lim_{n \to \infty} U_{k-1,n}^R(z)^{-1} U_{k,n}^R(z) = \frac{\bar{\zeta}_k}{|\zeta_k|} \frac{z - \zeta_k}{z \bar{\zeta}_k - 1} I, \quad k = 2, \dots, m. \tag{3.29}
\]
Since
\[
\varphi_n^R(z)^{-1} U_{m,n}^R(z) = \varphi_n^R(z)^{-1} U_{1,n}^R(z) \, U_{1,n}^R(z)^{-1} U_{2,n}^R(z) \cdots U_{m-1,n}^R(z)^{-1} U_{m,n}^R(z),
\]
the result follows from (3.28) and (3.29).
Moreover, since
\[
U_{m,n}^R(z)^{-1} U_{m,n+1}^R(z) = U_{m,n}^R(z)^{-1} \varphi_n^R(z) \, \varphi_n^R(z)^{-1} \varphi_{n+1}^R(z) \, \varphi_{n+1}^R(z)^{-1} U_{m,n+1}^R(z),
\]
another immediate consequence of the above proposition and (1.75) is the following.
\[
\lim_{n \to \infty} U_{m,n}^R(z)^{-1} U_{m,n+1}^R(z) = z I,
\]
Proof. For $m = 1$, from (3.14) with $m = 1$ and multiplying on the left by $\varphi_n^{L,*}(z)^{-1}$, we get
\begin{align*}
\varphi_n^{L,*}(z)^{-1} U_{1,n}^R(z) = \Big[ & \varphi_n^{L,*}(z)^{-1} \varphi_n^R(z) \\
& - \varphi_n^{L,*}(z)^{-1} K_{n-1}^R(\zeta_1, z) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \Big] \kappa_n^{-R} \tilde{\kappa}_{1,n}^R.
\end{align*}
−1 R
with B = ϕR,H R
n (ζ1 )M1 I + Kn−1 (ζ1 , ζ1 )M1 ϕn (ζ1 ), so from Proposition 1.4.18 and
Lemma 3.1.6, we obtain
−1 R
−1 R
lim (ϕL,∗ R
n→∞ n (z)) Kn−1 (ζ1 , z)M1 I + Kn−1 (ζ1 , ζ1 )M1 ϕn (ζ1 ) = 0.
\[
\lim_{n \to \infty} U_{m,n}^{R,*}(z)^{-1} \varphi_n^L(z) = 0,
\]
3.1.3.2 Case ζj ∈ T
\[
\left\langle U_{1,n}^R, U_{1,n}^R \right\rangle_{R,\sigma_{u_1}} = \left\langle \Phi_n^R, \Phi_n^R \right\rangle_{R,\sigma} + \Phi_n^R(\zeta_1)^H M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \Phi_n^R(\zeta_1),
\]
and since $\left\langle U_{1,n}^R, U_{1,n}^R \right\rangle_{R,\sigma_{u_1}} = \left( \tilde{\kappa}_{1,n}^{-R} \right)^H \tilde{\kappa}_{1,n}^{-R}$ and $\left\langle \Phi_n^R, \Phi_n^R \right\rangle_{R,\sigma} = \left( \kappa_n^{-R} \right)^H \kappa_n^{-R}$, we get
\begin{align*}
\left( \tilde{\kappa}_{1,n}^{-R} \right)^H \tilde{\kappa}_{1,n}^{-R} &= \left( \kappa_n^{-R} \right)^H \kappa_n^{-R} + \left( \kappa_n^{-R} \right)^H \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \kappa_n^{-R} \\
&= \left( \kappa_n^{-R} \right)^H \left[ I + \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \right] \kappa_n^{-R}.
\end{align*}
As a consequence,
\[
\left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \left[ I + \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \right]^{-1}.
\]
Notice that
\[
I + \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1)
= \varphi_n^R(\zeta_1)^{-1} \left( I + K_n^R(\zeta_1, \zeta_1) M_1 \right) \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1).
\]
Thus,
\[
\left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \varphi_n^R(\zeta_1)^{-1} \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right) \left( I + K_n^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1). \tag{3.33}
\]
Furthermore,
\[
I - \varphi_n^R(\zeta_1)^{-1} \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right) \left( I + K_n^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) = \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_n^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1),
\]
so we obtain
\[
\left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = I - \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_n^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1), \tag{3.34}
\]
Proof. Let $m = 1$. From (3.14) with $m = 1$, multiplying on the left by $\varphi_n^R(z)^{-1}$ and on the right by $\left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H$, we get
\begin{align*}
\varphi_n^R(z)^{-1} U_{1,n}^R(z) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \Big[ I - {} & \varphi_n^R(z)^{-1} K_{n-1}^R(\zeta_1, z) \varphi_n^{R,H}(\zeta_1)^{-1} \\
& \times \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \Big] \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H.
\end{align*}
so from (3.20),
\[
\lim_{n \to \infty} \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = 0. \tag{3.38}
\]
Therefore, using the previous lemma and (1.82) with $\zeta_1 \in \mathbb{T}$ and $|z| > 1$, we get
\begin{align*}
\lim_{n \to \infty} \varphi_n^R(z)^{-1} U_{1,n}^R(z) &= \lim_{n \to \infty} \varphi_n^R(z)^{-1} U_{1,n}^R(z) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H \\
&= \lim_{n \to \infty} \left[ \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H - \frac{1}{z \bar{\zeta}_1 - 1} \times 0 \right] = I.
\end{align*}
\[
\lim_{n \to \infty} U_{m,n}^{R,*}(z)^{-1} \varphi_n^{R,*}(z) = I,
\]
Proof. For $m = 1$, from (3.14) with $m = 1$ and multiplying on the left by $\varphi_n^{L,*}(z)^{-1}$ and on the right by $\left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H$, we get
\begin{align*}
\varphi_n^{L,*}(z)^{-1} U_{1,n}^R(z) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \Big[ & \varphi_n^{L,*}(z)^{-1} \varphi_n^R(z) - \varphi_n^{L,*}(z)^{-1} K_{n-1}^R(\zeta_1, z) \varphi_n^{R,H}(\zeta_1)^{-1} \\
& \times \varphi_n^{R,H}(\zeta_1) M_1 \left( I + K_{n-1}^R(\zeta_1, \zeta_1) M_1 \right)^{-1} \varphi_n^R(\zeta_1) \Big] \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H.
\end{align*}
Since $\varphi_n^R(\zeta_1)^{-1} \varphi_n^{L,*}(\zeta_1)$ is a unitary matrix when $\zeta_1 \in \mathbb{T}$, from (1.74) and (1.79) it follows that $\varphi_n^{L,*}(z)^{-1} K_{n-1}^R(\zeta_1, z) \varphi_n^{R,H}(\zeta_1)^{-1}$ is bounded when $n$ tends to infinity. Therefore, using (1.79), (3.32) and (3.38), we obtain
\begin{align*}
\lim_{n \to \infty} \varphi_n^{L,*}(z)^{-1} U_{1,n}^R(z) &= \lim_{n \to \infty} \varphi_n^{L,*}(z)^{-1} U_{1,n}^R(z) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H \\
&= \lim_{n \to \infty} \varphi_n^{L,*}(z)^{-1} \varphi_n^R(z) \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H = 0.
\end{align*}
For $m > 1$, we proceed as in the proof of Proposition 3.1.13 using Corollary 3.1.17.
\[
\lim_{n \to \infty} U_{m,n}^{R,*}(z)^{-1} \varphi_n^L(z) = 0,
\]
3.1.3.3 Case ζj ∈ D
Since $K_n^R(z, z)$ converges uniformly on every compact subset of $\mathbb{D}$ (see Proposition 1.4.23), we have $\lim_{n \to \infty} \|\varphi_n^R(\zeta_1)\|_2 = 0$, and therefore
\[
\lim_{n \to \infty} \left\| \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_n^{-R} \tilde{\kappa}_{1,n}^R \right)^H - I \right\|_2 = 0,
\]
3.1.4 Zeros
Here, we study the zeros of the MOPUC associated with an Uvarov perturbation. It is
worth noticing that zeros of orthogonal polynomials on the unit circle play an important
role in topics ranging from mathematics to physics (see [86]).
Let us consider the left monic sequence $(\Phi_n^L(z))_{n \geq 0}$. Expand $z \Phi_j^L(z)$ into a Fourier series using the left MOPUC:
\[
z \Phi_j^L(z) = \sum_{k=0}^{j+1} H_{j,k}^L \Phi_k^L(z), \quad j = 0, \dots, n-1. \tag{3.41}
\]
\[
z \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} = (H_{\Phi^L})_{n-1} \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} + \begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} \Phi_n^L(z), \tag{3.44}
\]
where
\[
(H_{\Phi^L})_{n-1} = \begin{pmatrix}
H_{0,0}^L & I & & & 0 \\
H_{1,0}^L & H_{1,1}^L & I & & \\
\vdots & \vdots & \ddots & \ddots & \\
H_{n-2,0}^L & H_{n-2,1}^L & \cdots & H_{n-2,n-2}^L & I \\
H_{n-1,0}^L & H_{n-1,1}^L & \cdots & H_{n-1,n-2}^L & H_{n-1,n-1}^L
\end{pmatrix} \in \mathbb{M}_{nl}
\]
is the truncated block Hessenberg matrix. Block Hessenberg matrices are very useful in multivariate time series analysis, multichannel signal processing and Gaussian quadrature of matrix-valued functions (see [161, 163]).
is the truncated block Hessenberg matrix. Block Hessenberg matrices are very useful in
multivariate time series analysis, multichannel signal processing and Gaussian quadrature
of matrix-valued functions (see [161, 163]).
Notice that if $\xi \in \mathbb{D}$ is a zero of $\Phi_n^L(z)$, then one can find a nonzero $\alpha(\xi) \in \mathbb{M}_{l \times 1}$ such that $\Phi_n^L(\xi) \alpha(\xi) = 0_{l \times 1}$. Thus, from (3.44), we conclude that the zeros of $\Phi_n^L(z)$ are eigenvalues of $(H_{\Phi^L})_{n-1}$, with eigenvector
\[
\left[ (\Phi_0^L(\xi) \alpha(\xi))^T, \dots, (\Phi_{n-1}^L(\xi) \alpha(\xi))^T \right]^T \in \mathbb{M}_{nl \times 1}.
\]
Example 3.1.21. Consider the Hermitian matrix measure $\sigma$ defined as in (3.17). Notice that $\Phi_n^L$ and $\Phi_n^R$ have a zero of multiplicity $3n$ at $z = 0$. On the other hand, from (3.43) for this particular case, we get
\[
(H_{\Phi^L})_1 = \begin{pmatrix} H_{0,0}^L & I_3 \\ H_{1,0}^L & H_{1,1}^L \end{pmatrix} = \begin{pmatrix}
0 & 0 & 0 & 1 & 0 & 0 \\
\frac{1}{2} & 0 & 0 & 0 & 1 & 0 \\
0 & \frac{1}{2} & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
-\frac{1}{4} & 0 & 0 & 0 & \frac{1}{6} & 0
\end{pmatrix}.
\]
It is easy to see that the only eigenvalue of the block Hessenberg matrix $(H_{\Phi^L})_1$ is $z = 0$.
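The claim can be confirmed numerically: assembling the $6 \times 6$ matrix above shows it is in fact nilpotent, so its whole spectrum collapses to the zero of multiplicity $6$ of $\Phi_2^L$ at the origin. A minimal sketch:

```python
import numpy as np

# The truncated block Hessenberg matrix (H_{Phi^L})_1 of Example 3.1.21,
# assembled from the blocks H_{0,0}^L, H_{1,0}^L, H_{1,1}^L above.
H = np.array([
    [0,    0,   0, 1, 0,   0],
    [1/2,  0,   0, 0, 1,   0],
    [0,    1/2, 0, 0, 0,   1],
    [0,    0,   0, 0, 0,   0],
    [0,    0,   0, 0, 0,   0],
    [-1/4, 0,   0, 0, 1/6, 0],
])

# z = 0 is the only eigenvalue: H^3 = 0, so every eigenvalue vanishes,
# matching the zero of multiplicity 6 of Phi_2^L at the origin.
assert np.allclose(np.linalg.matrix_power(H, 3), np.zeros((6, 6)))
print("nilpotent")
```

Checking nilpotency via `matrix_power` is preferable here to calling `eigvals` directly, since computed eigenvalues of a nilpotent matrix can carry perturbations far larger than machine precision.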
In general, the block Hessenberg matrix is
\[
(H_{\Phi^L})_{n-1} = \begin{pmatrix}
H_{0,0}^L & I_3 & 0_3 & \cdots & 0_3 \\
H_{1,0}^L & H_{1,1}^L & I_3 & & \vdots \\
0_3 & 0_3 & 0_3 & \ddots & \\
\vdots & \vdots & \vdots & \ddots & I_3 \\
0_3 & 0_3 & 0_3 & \cdots & 0_3
\end{pmatrix}, \tag{3.45}
\]
where $K_{n-1}^L(z, \zeta) := \left( K_{n-1}^L(z, \zeta_1)^T, \dots, K_{n-1}^L(z, \zeta_m)^T \right)^T \in \mathbb{M}_{ml \times l}$ and $F_n^L(\zeta) := \left( \Phi_n^L(\zeta_1), \dots, \Phi_n^L(\zeta_m) \right) \in \mathbb{M}_{l \times ml}$.
Let $(L_{U\Phi}^L)_{n-1}$ be the change of basis matrix such that
\[
\begin{pmatrix} U_{m,0}^L(z) \\ \vdots \\ U_{m,n-1}^L(z) \end{pmatrix} = (L_{U\Phi}^L)_{n-1} \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix}. \tag{3.48}
\]
From (3.47), $(L_{U\Phi}^L)_{n-1}$ is a lower triangular block matrix, $(L_{U\Phi}^L)_{n-1} = (l_{j,k})_{j,k=0}^{n-1}$, with
\[
l_{j,k} = \begin{cases}
0, & \text{if } j < k, \\
I, & \text{if } j = k, \\
-F_j^L(\zeta) \left( I_{lm} + M K_{j-1}^L(\zeta) \right)^{-1} M F_k^{L,H}(\zeta) S_k^{-L}, & \text{if } k < j.
\end{cases} \tag{3.49}
\]
\[
z (L_{U\Phi}^L)_{n-1} \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} = (H_{U^L})_{n-1} (L_{U\Phi}^L)_{n-1} \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} + \begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} \Phi_n^L(z) + \begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} \sum_{k=0}^{n-1} l_{n,k} \Phi_k^L(z). \tag{3.50}
\]
Notice that
\[
\begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} \sum_{k=0}^{n-1} l_{n,k} \Phi_k^L(z) = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ l_{n,0} & \cdots & l_{n,n-1} \end{pmatrix} \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} = A_n \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix}.
\]
Moreover, since $(L_{U\Phi}^L)_{n-1}$ is a lower triangular block matrix with $I$ on the main diagonal, we have
\[
(L_{U\Phi}^L)_{n-1}^{-1} \begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} \quad \text{and} \quad (L_{U\Phi}^L)_{n-1}^{-1} A_n = A_n.
\]
\[
z \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} = \left[ (L_{U\Phi}^L)_{n-1}^{-1} (H_{U^L})_{n-1} (L_{U\Phi}^L)_{n-1} + A_n \right] \begin{pmatrix} \Phi_0^L(z) \\ \vdots \\ \Phi_{n-1}^L(z) \end{pmatrix} + \begin{pmatrix} 0 \\ \vdots \\ I \end{pmatrix} \Phi_n^L(z).
\]
Therefore, we have proved the following proposition. Notice that we have followed the
ideas on [32], where the scalar case is considered.
Proposition 3.1.22. The zeros of the matrix polynomial $U_{m,n}^L$ are the eigenvalues of the block matrix $(H_{\Phi^L})_{n-1} - A_n$.
Example 3.1.23. We consider the Uvarov transformation with one mass point of the Hermitian matrix measure $\sigma$ defined as in (3.17), with $M \in \mathbb{M}_3$ and $\zeta \in \mathbb{T}$. As a consequence of the above proposition, the zeros of the matrix polynomial
\[
U_{1,n}^L(z) = \Phi_n^L(z) - \Phi_n^L(\zeta) \left( I_3 + M K_{n-1}^L(\zeta, \zeta) \right)^{-1} M K_{n-1}^L(z, \zeta) \tag{3.51}
\]
are the eigenvalues of $(H_{\Phi^L})_{n-1} - A_n$. For instance, with $\zeta = 1$ and $M = I_3$,
\[
U_{1,1}^L(z) = \begin{pmatrix} z - \frac{1}{2} & 0 & 0 \\ -\frac{1}{4} & z - \frac{1}{2} & 0 \\ 0 & -\frac{1}{4} & z - \frac{1}{2} \end{pmatrix}.
\]
with
\[
\det(U_{1,2}^L(z)) = z^6 - \frac{20}{17} z^5 - \frac{5}{17} z^4 + \frac{7}{85} z^3 + \frac{73}{340} z^2 - \frac{8}{85} z - \frac{4}{85}.
\]
On the other hand, $(H_{\Phi^L})_1 - A_2$ is the $6 \times 6$ matrix obtained from $(H_{\Phi^L})_1$ of Example 3.1.21 by subtracting $A_2$, whose only nonzero entries are the two blocks $l_{2,0}$, $l_{2,1}$ occupying its last three rows.
Figure 3.1. Zeros of $U_{1,2}^L(z)$ with $\zeta = 1$ and $M = I_3$.
and for $k = 2, \dots, n-1$,
\[
l_{n,k} = l_{n,0} \begin{pmatrix} 1 & -\frac{3}{2} & \frac{1}{2} \\ 0 & \frac{4}{3} & -1 \\ 0 & 0 & \frac{3}{2} \end{pmatrix}.
\]
For instance, if we consider $n = 5$,
\[
l_{5,0} = \begin{pmatrix} \frac{59}{380} & \frac{11}{190} & -\frac{1}{380} \\ -\frac{3}{152} & \frac{9}{76} & \frac{9}{152} \\ \frac{1}{95} & -\frac{2}{95} & \frac{11}{95} \end{pmatrix}, \quad
l_{5,1} = \begin{pmatrix} -\frac{59}{380} & \frac{1}{38} & \frac{4}{95} \\ \frac{3}{152} & -\frac{13}{76} & 0 \\ -\frac{1}{95} & \frac{2}{57} & -\frac{16}{25} \end{pmatrix},
\]
and for $k = 2, 3, 4$,
\[
l_{5,k} = \begin{pmatrix} -\frac{59}{380} & \frac{1}{38} & -\frac{3}{190} \\ \frac{3}{152} & -\frac{13}{76} & \frac{3}{76} \\ -\frac{1}{95} & \frac{2}{57} & -\frac{1}{5} \end{pmatrix}.
\]
Thus, the zeros of $U_{1,5}^L(z)$, or the eigenvalues of $(H_{\Phi^L})_4 - A_5$, are plotted in the following figure.
Figure 3.2. Zeros of $U_{1,5}^L(z)$ with $\zeta = 1$ and $M = I_3$.
and for $k = 2, \dots, 9$,
\[
l_{10,k} = \begin{pmatrix} -\frac{1019}{11730} & \frac{3}{391} & -\frac{29}{7820} \\ \frac{9}{1564} & -\frac{71}{782} & \frac{33}{3128} \\ -\frac{29}{11730} & \frac{11}{1173} & -\frac{387}{3910} \end{pmatrix}.
\]
The zeros are shown in the following figure.

Figure 3.3. Zeros of $U_{1,10}^L(z)$ with $\zeta = 1$ and $M = I_3$.
3.2.1 Introduction
The study of the Christoffel matrix transformation for matrix measures has not received
much attention in the literature. There are, however, some contributions in this direction.
For instance, for transformations of matrix measures supported on the real line, in [2, 3]
the authors consider the sesquilinear form defined by
\[
\langle f, g \rangle_\mu = \int_E f(x) w(x) \, d\mu(x) \, g(x)^T, \quad f, g \in \mathbb{P}_l[x],
\]
The results on this subsection are oriented in that direction. We will consider certain
Christoffel transformation of a matrix measure supported on the unit circle and our aim
is to obtain several algebraic and analytic properties associated with it.
In the scalar case, the Christoffel transformation of a nontrivial positive Borel measure
λ supported on T is defined by (see [88, 133])
Notice that λ̃ is also positive definite. In this section, we consider the Christoffel trans-
formation of a Hermitian matrix measure σ supported on T, defined by
where
\[
W_m(z) = \prod_{j=1}^{m} (z I - A_j), \quad \text{with } (A_j)_{j=1}^{m} \in \mathbb{M}_l, \; m > 0,
\]
with σc0 := σ. The corresponding right and left sesquilinear forms associated with the
measure (3.54) are
Notice that $\sigma_{c_m}$ is an $l \times l$ Hermitian matrix measure, and thus there exist sequences $(C_{m,n}^R(z))_{n \geq 0}$ (right) and $(C_{m,n}^L(z))_{n \geq 0}$ (left) which are orthonormal with respect to (3.56) and (3.57), respectively.
In this section, we state several relations between the sequences of MOPUC $(C_{m,n}^R(z))_{n \geq 0}$ and $(\varphi_{n+m}^R(z))_{n \geq 0}$. They will be used in the next subsections to derive relative asymptotic properties. In the sequel, we will only consider the right sequence $(C_{m,n}^R(z))_{n \geq 0}$, since all the results can be obtained for the left sequence in a similar way. We first need a lemma.
Lemma 3.2.1. For $n + m \geq 1$ and $A_i \neq A_j$, $i, j = 1, \dots, m$, the block matrix $K_{n+m-1}^R(A)$ defined by
\[
K_{n+m-1}^R(A) = \begin{pmatrix} K_{n+m-1}^R(A_1, A_1) & \cdots & K_{n+m-1}^R(A_m, A_1) \\ \vdots & \ddots & \vdots \\ K_{n+m-1}^R(A_1, A_m) & \cdots & K_{n+m-1}^R(A_m, A_m) \end{pmatrix} \in \mathbb{M}_{lm},
\]
is positive definite.

Proof. First notice that the block matrix $K_{n+m-1}^R(A)$ is Hermitian, because $K_{n+m-1}^R(A_i, A_j)^H = K_{n+m-1}^R(A_j, A_i)$ for $i, j = 1, \dots, m$. Now let us consider the block matrix
\[
G_{n+m-1}^R(A) = \begin{pmatrix} \varphi_0^R(A_1) & \cdots & \varphi_{n+m-1}^R(A_1) \\ \vdots & \ddots & \vdots \\ \varphi_0^R(A_m) & \cdots & \varphi_{n+m-1}^R(A_m) \end{pmatrix} \in \mathbb{M}_{lm \times l(n+m)}.
\]
Notice that
\[
K_{n+m-1}^R(A) = G_{n+m-1}^R(A) \, G_{n+m-1}^R(A)^H,
\]
where
\[
K_{n+m-1}^R(A, z) := \left( K_{n+m-1}^R(A_1, z), \dots, K_{n+m-1}^R(A_m, z) \right) \in \mathbb{M}_{l \times ml}, \qquad
F_{n+m}^R(A) := \left( \Phi_{n+m}^R(A_1)^T, \dots, \Phi_{n+m}^R(A_m)^T \right)^T \in \mathbb{M}_{ml \times l}, \tag{3.59}
\]
Since $K_{n+m-1}^R(A_j, z) = K_{n+m}^R(A_j, z) - \varphi_{n+m}^R(z) \varphi_{n+m}^{R,H}(A_j)$, we have from (1.68)
\[
\left\langle W_m(z) z^k, K_{n+m-1}^R(A_j, z) \right\rangle_{R,\sigma} = -\left\langle W_m(z) z^k, \varphi_{n+m}^R(z) \varphi_{n+m}^{R,H}(A_j) \right\rangle_{R,\sigma} = -\Phi_{n+m}^{R,H}(A_j) \delta_{n,k},
\]
so that $(C_{m,n}^R(z))_{n \geq 0}$ is orthogonal with respect to (3.56). For $k = n$, from (3.61) we obtain (3.60). On the other hand, since $S_{n+m}^R$ and $\left( K_{n+m-1}^R(A) \right)^{-1}$ are positive definite matrices, $\tilde{S}_{m,n}^R$ is a positive definite matrix for $n \geq 1$.
Example 3.2.3. Consider the matrix measure $d\sigma(\theta) = \Omega \frac{d\theta}{2\pi} \in \mathcal{S} \subset \mathcal{N}$, where $\Omega \in \mathbb{M}_l$ is a nonsingular Hermitian matrix. The corresponding $n$-th degree monic polynomial is $\Phi_n(z) = \Phi_n^R(z) = \Phi_n^L(z) = z^n I$. Then
\[
(z I - A_1) C_{1,n}^R(z) = z^{n+1} I - \Omega^{-1} \sum_{k=0}^{n} z^k \left( A_1^H \right)^k \left( \sum_{k=0}^{n} A_1^k \Omega^{-1} \left( A_1^k \right)^H \right)^{-1} A_1^{n+1},
\]
and
\[
\tilde{S}_{1,n}^R = \Omega + \left( A_1^{n+1} \right)^H \left( \sum_{k=0}^{n} A_1^k \Omega^{-1} \left( A_1^k \right)^H \right)^{-1} A_1^{n+1}.
\]
1. If $\zeta_1 \notin \mathbb{T}$: since $\varphi_n^R(z) = z^n \Omega^{-1/2}$, we have from (1.55) with $\zeta = \zeta_1$
\[
K_n^R(\zeta_1, z) = \Omega^{-1} \sum_{k=0}^{n} (\bar{\zeta}_1 z)^k = \Omega^{-1} \frac{1 - (\bar{\zeta}_1 z)^{n+1}}{1 - \bar{\zeta}_1 z}, \qquad
\left( K_n^R(\zeta_1, \zeta_1) \right)^{-1} = \Omega \frac{1 - |\zeta_1|^2}{1 - |\zeta_1|^{2(n+1)}},
\]
so that
\[
\tilde{S}_{1,n}^R = \Omega \left( 1 + |\zeta_1|^{2(n+1)} \frac{1 - |\zeta_1|^2}{1 - |\zeta_1|^{2(n+1)}} \right).
\]
2. If $\zeta_1 \in \mathbb{T}$, then
\[
\left( K_n^R(\zeta_1, \zeta_1) \right)^{-1} = \Omega \left( \sum_{k=0}^{n} 1 \right)^{-1} = \frac{1}{n+1} \Omega,
\]
or, equivalently,
\[
C_{1,n}^R(z) = \left( z^n + \frac{n}{n+1} \zeta_1 C_{1,n-1}^R(z) \right) I, \quad n \geq 1.
\]
Furthermore, from (3.60) with $m = 1$, $\tilde{S}_{1,n}^R = \left( 1 + \frac{1}{n+1} \right) \Omega$.
Notice that this example generalizes to the matrix case the Christoffel transformation of
the normalized Lebesgue measure studied in [20].
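For a mass point on $\mathbb{T}$, the recurrence of Example 3.2.3 can be tested numerically in the scalar case $l = 1$, $\Omega = 1$. The sketch below builds $C_{1,n}$ from the recurrence and checks its orthogonality to lower powers of $z$ with respect to the Christoffel transform $|z - \zeta_1|^2 \, d\theta/(2\pi)$ of the normalized Lebesgue measure; the specific $\zeta_1$, the degree and the grid size are illustrative choices.

```python
import numpy as np

# Scalar sketch (l = 1, Omega = 1) of the recurrence in Example 3.2.3 for
# a mass point zeta on the unit circle:
#     C_n(z) = z^n + (n / (n + 1)) * zeta * C_{n-1}(z),  C_0(z) = 1,
# which should give monic polynomials orthogonal with respect to the
# Christoffel transform |z - zeta|^2 dtheta/(2 pi) of the normalized
# Lebesgue measure.  zeta, the degree and the grid size are illustrative.
zeta = np.exp(1j * np.pi / 4)

def C(n):
    """Coefficients of C_{1,n}, lowest power first."""
    c = np.array([1.0 + 0j])
    for k in range(1, n + 1):
        nxt = np.zeros(k + 1, dtype=complex)
        nxt[:k] = (k / (k + 1)) * zeta * c
        nxt[k] += 1.0          # the new leading term z^k
        c = nxt
    return c

# Quadrature check: <C_n, z^j> = 0 for j < n, with the weight
# w(theta) = |e^{i theta} - zeta|^2.
theta = 2 * np.pi * np.arange(4096) / 4096
z = np.exp(1j * theta)
w = np.abs(z - zeta) ** 2
n = 5
Cn = np.polyval(C(n)[::-1], z)
ips = [np.mean(Cn * np.conj(z**j) * w) for j in range(n)]
print(max(abs(v) for v in ips) < 1e-10)  # True
```

The equispaced average is exact here because the integrands are trigonometric polynomials of degree far below the grid size.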
Now, we will show that σcm belongs to the Szegő class by using a different approach
to the one used in the scalar case.
Proof. From (3.60), since $F_{n+m}^{R,H}(A) \left( K_{n+m-1}^R(A) \right)^{-1} F_{n+m}^R(A)$ is a positive semi-definite matrix, for all $n \geq 1$ we have $S_{n+m}^R \leq \tilde{S}_{m,n}^R$. From [48], we get
\[
S_{n+m}^R = \left\langle \Phi_{n+m}^R, \Phi_{n+m}^R \right\rangle_{R,\sigma} = \left( \kappa_{n+m}^{-R} \right)^H \kappa_{n+m}^{-R} = \left( B_{n+m}(0) \right)^{-1},
\]
\[
\tilde{S}_{m,n}^R = \left\langle U_{m,n}^R, U_{m,n}^R \right\rangle_{R,\sigma_{c_m}} = \left( \tilde{\kappa}_{m,n}^{-R} \right)^H \tilde{\kappa}_{m,n}^{-R} = \left( \tilde{B}_{m,n}(0) \right)^{-1},
\]
so that $\tilde{B}_{m,n}(0) \leq B_{n+m}(0)$. Notice that $(\det B_{n+m}(0))_{n \geq 0}$ is bounded from above, since $\sigma$ belongs to $\mathcal{S}$, and thus $\left( \det \tilde{B}_{m,n}(0) \right)_{n \geq 0}$ is bounded from above. From [48, Theorem 19], it follows that $\sigma_{c_m}$ belongs to $\mathcal{S}$.
\[
0 < \left\| \left( \kappa_{n+m}^R \right)^{-1} \tilde{\kappa}_{m,n}^R \right\|_2^2 \leq 1 \quad \text{and} \quad 0 < \left\| \tilde{\kappa}_{m,n}^L \left( \kappa_{n+m}^L \right)^{-1} \right\|_2^2 \leq 1. \tag{3.65}
\]
and, as a consequence,
\[
\left\langle W_m(z) C_{m,n}^R, K_{n+m-1}^R(A, z) \right\rangle_{R,\sigma} = -\left( \kappa_{n+m}^{-R} \tilde{\kappa}_{m,n}^R \right)^H L_{n+m}^{R,H}(A) = -\left( L_{n+m}^R(A) \kappa_{n+m}^{-R} \tilde{\kappa}_{m,n}^R \right)^H. \tag{3.68}
\]
Since the right-hand member is positive semi-definite, because $K_{n+m-1}^R(A)$ is positive definite, we obtain
\[
\left( \left( \kappa_{n+m}^R \right)^{-1} \tilde{\kappa}_{m,n}^R \right)^H \left( \kappa_{n+m}^R \right)^{-1} \tilde{\kappa}_{m,n}^R \leq I,
\]
Proof. If $(C_{m,n}^R(z))_{n \geq 0}$ is such that the matrices $\left( \kappa_{n+m}^R \right)^{-1} \tilde{\kappa}_{m,n}^R$, $n \geq 0$, are not lower triangular matrices with positive entries on the main diagonal, then we can find a sequence of unitary matrices $(Q_n)_{n \geq 0}$ such that
\[
C_{m,n}^R(z) Q_n = \tilde{\lambda}_{m,n}^R z^n + \cdots
\]
is a sequence of orthonormal matrix polynomials such that $\left( \kappa_{n+m}^R \right)^{-1} \tilde{\lambda}_{m,n}^R$ are upper triangular matrices with positive entries on the main diagonal. To show this, notice that the sequences of orthonormal matrix polynomials associated with the measures $\sigma$ and $\sigma_{c_m}$ have the form
\[
(\varphi_n^R(z) \nu_n)_{n \geq 0} \quad \text{and} \quad (C_{m,n}^R(z) \tau_n)_{n \geq 0},
\]
where $\nu_n$ and $\tau_n$ are unitary matrices. As a consequence, their leading coefficients are related by $\lambda_n^R = \kappa_n^R \nu_n$ and $\tilde{\lambda}_{m,n}^R = \tilde{\kappa}_{m,n}^R \tau_n$, and therefore
where $\tilde{Q}_n$ is a unitary matrix and $\tilde{S}_n$ is a nonsingular upper triangular matrix. Therefore, taking
\[
S_n = \tilde{S}_n^H J_n \quad \text{and} \quad Q_n = J_n^H \tilde{Q}_n^H,
\]
where
\[
(J_n)_{i,j} = \begin{cases} \dfrac{(\tilde{S}_n)_{i,i}}{|(\tilde{S}_n)_{i,i}|}, & \text{if } i = j, \\ 0, & \text{otherwise}, \end{cases}
\]
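The normalization by $J_n$ is the usual trick for fixing the phases in a QR-type factorization. A minimal numerical sketch (with a randomly generated $A$, an illustrative choice) of how absorbing $J$ into both factors yields a triangular factor with positive diagonal:

```python
import numpy as np

# Sketch of the phase normalization used above: any nonsingular A has a
# factorization A = Q R with Q unitary and R upper triangular with
# positive diagonal, obtained by absorbing the phase matrix J into both
# factors of a plain QR factorization.  A is an illustrative random input.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)
J = np.diag(np.diag(R) / np.abs(np.diag(R)))  # phases of diag(R)
Q, R = Q @ J, J.conj().T @ R                  # A = (Q J)(J^H R)

assert np.allclose(Q @ R, A)
assert np.allclose(Q.conj().T @ Q, np.eye(4))
print(bool(np.all(np.diag(R).real > 0) and np.allclose(np.diag(R).imag, 0)))  # True
```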
In this subsection, we deal with relative asymptotics for the Christoffel matrix transformation in the diagonal case, i.e. $A_j = \zeta_j I$, where $\zeta_j \in \mathbb{C}$ for $j = 1, \dots, m$. Therefore, here we consider $W_m(z) = \prod_{j=1}^{m} (z - \zeta_j) I$. For the scalar case, relative asymptotics for sequences of orthogonal polynomials associated with Christoffel type perturbations of scalar measures in $\mathcal{N}$ supported on the unit circle have been widely studied in the literature (see, among others, [93, 133, 135]).
First, we will show that the matrix measure (3.54) belongs to $\mathcal{N}$. This will allow us to prove several asymptotic properties in the remainder of the chapter.
On the one hand, since $\sigma \in \mathcal{N}$, from (1.33) and (1.36) we get that $\kappa_{n+1}^L \kappa_n^{-L} \to I$, $\kappa_{n+1}^R \kappa_n^{-R} \to I$, and
\[
\varphi_{n+1}^{L,*}(\zeta_1)^H K_n^{-R}(\zeta_1, \zeta_1) \varphi_{n+1}^R(\zeta_1) = \left( \varphi_{n+1}^R(\zeta_1)^{-1} \varphi_{n+1}^{L,*}(\zeta_1) \right)^H \varphi_{n+1}^{R,H}(\zeta_1) K_n^{-R}(\zeta_1, \zeta_1) \varphi_{n+1}^R(\zeta_1).
\]
On the other hand, since $\alpha_n^H = -\left( \kappa_n^{-L} \right)^H \Phi_{n+1}^R(0) \kappa_n^R = -\left( \kappa_n^{-L} \right)^H \varphi_{n+1}^R(0) \kappa_{n+1}^{-R} \kappa_n^R$ and $\kappa_{n+1}^{-R} \kappa_n^R \to I$, we have
\[
\lim_{n \to \infty} \left( \kappa_n^{-L} \right)^H \varphi_{n+1}^R(0) = 0.
\]
In this way,
\[
\lim_{n \to \infty} \left( \kappa_n^{-L} \right)^H C_{1,n}^R(0) \kappa_n^R = 0. \tag{3.71}
\]
Let $\tilde{\alpha}_{1,n}^H = -\left( \tilde{\kappa}_{1,n-1}^{-L} \right)^H C_{1,n}^R(0) \tilde{\kappa}_{1,n-1}^R$, so that $C_{1,n}^R(0) = -\left( \tilde{\kappa}_{1,n-1}^L \right)^H \tilde{\alpha}_{1,n}^H \tilde{\kappa}_{1,n-1}^{-R}$. Substituting in (3.71),
\[
\lim_{n \to \infty} \left( \tilde{\kappa}_{1,n-1}^L \kappa_n^{-L} \right)^H \tilde{\alpha}_{1,n}^H \tilde{\kappa}_{1,n-1}^{-R} \kappa_n^R = 0,
\]
and from (3.65) it follows that $\lim_{n \to \infty} \tilde{\alpha}_{1,n}^H = 0$. Therefore, we conclude that $\sigma_{c_1} \in \mathcal{N}$.
For $m > 1$, taking into account (3.55) and applying the previous argument to $C_{k,n}^R(z)$ in terms of $C_{k-1,n}^R(z)$, the lemma is proved.
As expected, the asymptotic behavior of the polynomials will depend on the location
of the mass points.
3.2.3.1 Case ζj ∈ C \ D̄
\[
\left\| \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right\|_F^2 = \mathrm{Tr} \left( \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right) \leq l \left\| \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right\|_2^2 \leq l,
\]
Furthermore, from Lemma 3.2.7, $\sigma_{c_k}$ belongs to $\mathcal{N}$ for $k = 2, \dots, m$, and each $\sigma_{c_k}$ satisfies (3.55). Then we can apply the previous argument to $C_{k,n}^R(z) = z^n \tilde{\kappa}_{k,n}^R + \cdots$ in terms of $C_{k-1,n+1}^R(z) = z^n \tilde{\kappa}_{k-1,n+1}^R + \cdots$ to obtain
\[
\lim_{n \to \infty} \left( \tilde{\kappa}_{k-1,n+1}^R \right)^{-1} \tilde{\kappa}_{k,n}^R = \frac{1}{|\zeta_k|} I, \quad k = 2, \dots, m. \tag{3.77}
\]
Since
\[
\left( \kappa_{n+m}^R \right)^{-1} \tilde{\kappa}_{m,n}^R = \left( \kappa_{n+m}^R \right)^{-1} \tilde{\kappa}_{1,n+m-1}^R \left( \tilde{\kappa}_{1,n+m-1}^R \right)^{-1} \tilde{\kappa}_{2,n+m-2}^R \cdots \left( \tilde{\kappa}_{m-2,n+2}^R \right)^{-1} \tilde{\kappa}_{m-1,n+1}^R \left( \tilde{\kappa}_{m-1,n+1}^R \right)^{-1} \tilde{\kappa}_{m,n}^R, \tag{3.78}
\]
Proof. Let $|z| \geq 1$. From (3.63) with $m = 1$ and multiplying on the left by $\varphi_{n+1}^R(z)^{-1}$, we get
\[
\varphi_{n+1}^R(z)^{-1} (z - \zeta_1) C_{1,n}^R(z) = \left( I - B_n(z, \zeta_1) \right) \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R, \tag{3.80}
\]
where
\[
B_n(z, \zeta_1) = \varphi_{n+1}^R(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \varphi_{n+1}^{R,H}(\zeta_1) K_n^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1),
\]
and thus, using (1.82) (with $\zeta = \zeta_1$ for the first factor and $z = \zeta = \zeta_1$ for the second factor),
\[
\lim_{n \to \infty} B_n(z, \zeta_1) = \frac{1}{z \bar{\zeta}_1 - 1} \left( \frac{1}{|\zeta_1|^2 - 1} \right)^{-1}.
\]
Therefore, taking limit when $n$ tends to infinity in (3.80) and using (3.72) with $m = 1$, we obtain
\[
\lim_{n \to \infty} \varphi_{n+1}^R(z)^{-1} (z - \zeta_1) C_{1,n}^R(z) = \frac{\bar{\zeta}_1}{|\zeta_1|} \frac{z - \zeta_1}{z \bar{\zeta}_1 - 1} I,
\]
and thus
\[
\lim_{n \to \infty} \varphi_{n+1}^R(z)^{-1} C_{1,n}^R(z) = \frac{\bar{\zeta}_1}{|\zeta_1|} \frac{1}{z \bar{\zeta}_1 - 1} I. \tag{3.81}
\]
On the other hand, since all $\sigma_{c_k}$ with $k = 2, \dots, m$ belong to $\mathcal{N}$, and taking into account (3.55), we can apply the previous argument to $C_{k,n}^R(z)$ in terms of $C_{k-1,n+1}^R(z)$, to obtain
\[
\lim_{n \to \infty} C_{k-1,n+1}^R(z)^{-1} C_{k,n}^R(z) = \frac{\bar{\zeta}_k}{|\zeta_k|} \frac{1}{z \bar{\zeta}_k - 1} I, \quad k = 2, \dots, m. \tag{3.82}
\]
Since
\[
\varphi_{n+m}^R(z)^{-1} C_{m,n}^R(z) = \varphi_{n+m}^R(z)^{-1} C_{1,n+m-1}^R(z) \, C_{1,n+m-1}^R(z)^{-1} C_{2,n+m-2}^R(z) \cdots C_{m-2,n+2}^R(z)^{-1} C_{m-1,n+1}^R(z) \, C_{m-1,n+1}^R(z)^{-1} C_{m,n}^R(z), \tag{3.83}
\]
and
\begin{align*}
\lim_{n \to \infty} \varphi_{n+m}^R(z)^{-1} C_{1,n+m-1}^R(z) &= \lim_{n \to \infty} \varphi_{n+1}^R(z)^{-1} C_{1,n}^R(z) = \frac{\bar{\zeta}_1}{|\zeta_1|} \frac{1}{z \bar{\zeta}_1 - 1} I, \\
\lim_{n \to \infty} C_{1,n+m-1}^R(z)^{-1} C_{2,n+m-2}^R(z) &= \lim_{n \to \infty} C_{1,n+1}^R(z)^{-1} C_{2,n}^R(z) = \frac{\bar{\zeta}_2}{|\zeta_2|} \frac{1}{z \bar{\zeta}_2 - 1} I, \\
&\;\;\vdots \\
\lim_{n \to \infty} C_{m-2,n+2}^R(z)^{-1} C_{m-1,n+1}^R(z) &= \lim_{n \to \infty} C_{m-2,n+1}^R(z)^{-1} C_{m-1,n}^R(z) = \frac{\bar{\zeta}_{m-1}}{|\zeta_{m-1}|} \frac{1}{z \bar{\zeta}_{m-1} - 1} I, \\
\lim_{n \to \infty} C_{m-1,n+1}^R(z)^{-1} C_{m,n}^R(z) &= \frac{\bar{\zeta}_m}{|\zeta_m|} \frac{1}{z \bar{\zeta}_m - 1} I.
\end{align*}
Moreover, since
\[
C_{m,n}^R(z)^{-1} C_{m,n+1}^R(z) = \left( W_m(z) C_{m,n}^R(z) \right)^{-1} \varphi_{n+m}^R(z) \, \varphi_{n+m}^R(z)^{-1} \varphi_{n+m+1}^R(z) \, \varphi_{n+m+1}^R(z)^{-1} \left( W_m(z) C_{m,n+1}^R(z) \right),
\]
another immediate consequence of the above proposition and (1.75) is the following.
Corollary 3.2.12. If σ ∈ N and ζj ∈ C \ D̄ for j = 1, . . . , m, then
\[
\lim_{n \to \infty} C_{m,n}^R(z)^{-1} C_{m,n+1}^R(z) = z I,
\]
Proof. For $m = 1$, from (3.63) with $m = 1$ and multiplying on the left by $\varphi_{n+1}^{L,*}(z)^{-1}$, we get
\[
\varphi_{n+1}^{L,*}(z)^{-1} W_1(z) C_{1,n}^R(z) = \left[ \varphi_{n+1}^{L,*}(z)^{-1} \varphi_{n+1}^R(z) - B_n(z, \zeta_1) \right] \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R, \tag{3.86}
\]
where
\begin{align*}
B_n(z, \zeta_1) &= \varphi_{n+1}^{L,*}(z)^{-1} K_n^R(\zeta_1, z) \left( K_n^R(\zeta_1, \zeta_1) \right)^{-1} \varphi_{n+1}^R(\zeta_1) \\
&= \varphi_{n+1}^{L,*}(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \varphi_{n+1}^{R,H}(\zeta_1) \left( K_n^R(\zeta_1, \zeta_1) \right)^{-1} \varphi_{n+1}^R(\zeta_1) \\
&= \frac{\left( \varphi_{n+1}^R(\zeta_1)^{-1} \varphi_{n+1}^{L,*}(\zeta_1) \right)^H - \varphi_{n+1}^{L,*}(z)^{-1} \varphi_{n+1}^R(z)}{1 - z \bar{\zeta}_1} \left[ \varphi_{n+1}^R(\zeta_1)^{-1} K_n^R(\zeta_1, \zeta_1) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \right]^{-1},
\end{align*}
so from Proposition 1.4.17 and (1.82), we obtain that $\lim_{n \to \infty} B_n(z, \zeta_1) = 0$. Therefore, from (3.86),
\[
\lim_{n \to \infty} \varphi_{n+1}^{L,*}(z)^{-1} (z - \zeta_1) C_{1,n}^R(z) = 0 \implies \lim_{n \to \infty} \varphi_{n+1}^{L,*}(z)^{-1} C_{1,n}^R(z) = 0,
\]
Therefore, from (3.84) and (3.87) and the above equation we get (3.85).
\[
\lim_{n \to \infty} C_{m,n}^{R,*}(z)^{-1} \varphi_{n+m}^L(z) = 0,
\]
3.2.3.2 Case ζj ∈ T
From (3.73),
\[
\left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \varphi_{n+1}^R(\zeta_1)^{-1} K_n^R(\zeta_1, \zeta_1) K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1). \tag{3.89}
\]
Furthermore,
\[
I - \varphi_{n+1}^R(\zeta_1)^{-1} K_n^R(\zeta_1, \zeta_1) K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1) = \varphi_{n+1}^{R,H}(\zeta_1) K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1),
\]
so we obtain
\[
\left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H = I - \varphi_{n+1}^{R,H}(\zeta_1) K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1), \tag{3.90}
\]
\[
\varphi_{n+1}^R(z)^{-1} W_1(z) C_{1,n}^R(z) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H - B_n(z, \zeta_1), \tag{3.92}
\]
where
\[
B_n(z, \zeta_1) = \varphi_{n+1}^R(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \varphi_{n+1}^{R,H}(\zeta_1) K_n^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H.
\]
Using (3.89),
\[
B_n(z, \zeta_1) = \varphi_{n+1}^R(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \varphi_{n+1}^{R,H}(\zeta_1) K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1),
\]
therefore, from (1.82) and (1.83), $B_n(z, \zeta_1) \to 0$. Thus, taking limit when $n$ tends to infinity in (3.92) and using (3.88), we obtain
\[
\lim_{n \to \infty} \varphi_{n+1}^R(z)^{-1} (z - \zeta_1) C_{1,n}^R(z) = I,
\]
and as a consequence,
\[
\lim_{n \to \infty} \varphi_{n+1}^R(z)^{-1} C_{1,n}^R(z) = \frac{1}{z - \zeta_1} I.
\]
For $m > 1$, proceeding as in Proposition 3.2.9, we get (3.91).
\[
\varphi_{n+1}^{L,*}(z)^{-1} (z - \zeta_1) C_{1,n}^R(z) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H = \varphi_{n+1}^{L,*}(z)^{-1} \varphi_{n+1}^R(z) \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H - B_n(z, \zeta_1), \tag{3.93}
\]
where
\[
B_n(z, \zeta_1) = \varphi_{n+1}^{L,*}(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \times \varphi_{n+1}^{R,H}(\zeta_1) K_n^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1) \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H.
\]
Using (3.89),
\[
B_n(z, \zeta_1) = \varphi_{n+1}^{L,*}(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \varphi_{n+1}^{R,H}(\zeta_1) K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \varphi_{n+1}^R(\zeta_1).
\]
Since $\varphi_{n+1}^R(\zeta_1)^{-1} \varphi_{n+1}^{L,*}(\zeta_1)$ is a unitary matrix when $\zeta_1 \in \mathbb{T}$, from (1.74) we get
\[
\left\| \varphi_{n+1}^{L,*}(z)^{-1} K_n^R(\zeta_1, z) \varphi_{n+1}^{R,H}(\zeta_1)^{-1} \right\|_2 = \left\| \frac{\left( \varphi_{n+1}^R(\zeta_1)^{-1} \varphi_{n+1}^{L,*}(\zeta_1) \right)^H - \varphi_{n+1}^{L,*}(z)^{-1} \varphi_{n+1}^R(z)}{1 - z \bar{\zeta}_1} \right\|_2 \leq \frac{1 + \left\| \varphi_{n+1}^{L,*}(z)^{-1} \varphi_{n+1}^R(z) \right\|_2}{|1 - z \bar{\zeta}_1|}, \tag{3.94}
\]
and from (1.79) it follows that (3.94) is bounded when $n$ tends to infinity. Therefore, using (1.83), we have $\lim_{n \to \infty} B_n(z, \zeta_1) = 0$. Thus, taking limit in (3.93) and using (1.79) and (3.88) with $m = 1$, we obtain
\[
\lim_{n \to \infty} \varphi_{n+1}^{L,*}(z)^{-1} (z - \zeta_1) C_{1,n}^R(z) = \lim_{n \to \infty} \varphi_{n+1}^{L,*}(z)^{-1} \varphi_{n+1}^R(z) = 0.
\]
Thus,
\[
\lim_{n \to \infty} \varphi_{n+1}^{L,*}(z)^{-1} C_{1,n}^R(z) = 0.
\]
For $m > 1$, we proceed as in Proposition 3.2.13 using Corollary 3.2.19.
\[
\lim_{n \to \infty} C_{m,n}^{R,*}(z)^{-1} \varphi_{n+m}^L(z) = 0,
\]
3.2.3.3 Case ζj ∈ D
Proof. Let $m = 1$. If $\sigma \in \mathcal{S}$, then $K_n^R(z, z)$ converges uniformly on every compact subset of $\mathbb{D}$ (see Proposition 1.4.23), so we have $\lim_{n \to \infty} \|\varphi_n^R(z)\|_2 = 0$. Therefore, from (3.90),
\[
\left\| \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right) \left( \kappa_{n+1}^{-R} \tilde{\kappa}_{1,n}^R \right)^H - I \right\|_2 \leq \left\| \varphi_{n+1}^R(\zeta_1) \right\|_2^2 \left\| K_{n+1}^R(\zeta_1, \zeta_1)^{-1} \right\|_2,
\]
CHAPTER 4
Summary
The structure of the chapter is as follows. In the first section, some properties of the
block Hessenberg matrices associated with orthonormal matrix polynomials on the unit
circle are deduced. In particular, we show that the block Hessenberg matrix associated
with a matrix measure supported on the unit circle is unitary if and only if the measure
does not belong to the Szegő matrix class. Then, we deduce explicit relations between
the Uvarov matrix transformation and the corresponding block Hessenberg matrices, and
similar results for the Christoffel matrix transformation. Finally, we show a factorization of
the block Hessenberg matrix. As an application, we provide another proof of the fact that
all the zeros of an orthonormal matrix polynomial are located in the unit disk. In Section
2, we deal with properties of symmetric matrix measures supported on the unit circle.
In particular, given two symmetric matrix measures σ and σ̃, we establish a relationship
between the corresponding CMV matrices $\mathcal{C}$ and $\tilde{\mathcal{C}}$, based on a well-known factorization
in terms of certain 2l × 2l blocks, which are defined in terms of the Verblunsky matrix
coefficients. Then, we consider sequences of Verblunsky matrix coefficients associated with
certain transformations of the measure σ, which constitute a generalization to the matrix
case of the results obtained in [88]. Finally, in the last subsection we analyze properties
of a CMV matrix associated with a new sequence of Verblunsky matrix coefficients that
is constructed from two given sequences.
4.1.1 Introduction
As in the scalar case, there are many similarities between MOPRL and MOPUC. One
of those similarities is related to the matrix representation of the multiplication operator
with respect to the basis of orthogonal polynomials. In the MOPRL case, we have the so-
called block Jacobi matrix (defined as in (1.11)). Block Jacobi matrices give information
about properties of a sequence of MOPRL, such as the distribution of the zeros, among
Chapter 4. Block matrix representations and spectral transformations
others (see [65, 154, 160, 163]). For MOPUC, the representation of the multiplication
operator in terms of the orthogonal matrix polynomials yields a block Hessenberg matrix
(see [76, 163]) that is known in the literature as the GGT matrix (see [156, Chapter 4]) in the
scalar case. The block Hessenberg matrix also gives information about properties such
as the distribution of the zeros ([163]). Furthermore, block Hessenberg matrices are very
useful in multivariate time series analysis, multichannel signal processing and Gaussian
quadrature formulas for matrix-valued functions (see [21, 161, 163]).
In this section, we deal with properties of the block Hessenberg matrix that represents the
multiplication by z operator with respect to a sequence of orthonormal matrix polynomials
on the unit circle. Let us express $z\varphi^L_n(z)$ in terms of $(\varphi^L_k(z))_{k=0}^{n+1}$, i.e.,
$$z\varphi^L_n(z) = \sum_{k=0}^{n+1}h^L_{n,k}\,\varphi^L_k(z),\qquad h^L_{n,k} = \langle z\varphi^L_n,\varphi^L_k\rangle_{L,\sigma}\in\mathcal{M}_l. \tag{4.1}$$
$$h^L_{n,k} = \begin{cases}\rho^L_n = \kappa^L_n\left(\kappa^L_{n+1}\right)^{-1}, & \text{if } k = n+1,\\[2pt] -\alpha^H_n\kappa^{-R}_n\kappa^R_k\alpha_{k-1} = -\alpha^H_n\rho^R_{n-1}\cdots\rho^R_k\alpha_{k-1}, & \text{if } 0\leq k\leq n,\\[2pt] 0, & \text{if } k > n+1.\end{cases}$$
The previous expression appears in [158] for right orthonormal matrix polynomials.
$$d\sigma(\theta) = \frac12\begin{pmatrix}2 & e^{i\theta}\\ e^{-i\theta} & 2\end{pmatrix}\frac{d\theta}{2\pi}.$$
Then, it is not difficult to show that the corresponding monic MOPUC are
$$\Phi^R_0(z) = \Phi^L_0(z) = I_2,\qquad \Phi^R_n(z) = \Phi^L_n(z) = \begin{pmatrix}z^n & 0\\ -\tfrac12 z^{n-1} & z^n\end{pmatrix},\quad n\geq1,$$
and we have
$$S^R_0 = S^L_0 = I_2,\qquad S^R_n = \begin{pmatrix}\tfrac34 & 0\\ 0 & 1\end{pmatrix},\qquad S^L_n = \begin{pmatrix}1 & 0\\ 0 & \tfrac34\end{pmatrix},\quad n\geq1.$$
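The example above can be verified numerically. The sketch below assumes the left inner product has the form $\langle p,q\rangle_{L,\sigma} = \frac{1}{2\pi}\int p(e^{i\theta})W(\theta)q(e^{i\theta})^H\,d\theta$ (an assumption of ours about the normalization); with that convention, $\langle\Phi_1,\Phi_1\rangle$ reproduces $S^L_1 = \mathrm{diag}(1,\tfrac34)$ and $\langle\Phi_1,\Phi_0\rangle$ vanishes:

```python
import numpy as np

# Example measure: dσ(θ) = (1/2) [[2, e^{iθ}], [e^{-iθ}, 2]] dθ/(2π).
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)
W = np.empty((theta.size, 2, 2), dtype=complex)
W[:, 0, 0] = W[:, 1, 1] = 1.0            # (1/2)·2 = 1 on the diagonal
W[:, 0, 1] = 0.5 * np.exp(1j * theta)
W[:, 1, 0] = 0.5 * np.exp(-1j * theta)

def Phi(n):
    """Monic Φ_n of the example, evaluated on the grid (n >= 0)."""
    out = np.zeros((theta.size, 2, 2), dtype=complex)
    out[:, 0, 0] = out[:, 1, 1] = z**n
    if n >= 1:
        out[:, 1, 0] = -0.5 * z**(n - 1)
    return out

def inner(p, q):
    """Assumed left inner product <p,q>_{L,σ} = (1/2π) ∫ p W q^H dθ."""
    return np.mean(p @ W @ q.conj().transpose(0, 2, 1), axis=0)

S1 = inner(Phi(1), Phi(1))    # expected: diag(1, 3/4) = S_1^L
orth = inner(Phi(1), Phi(0))  # expected: the zero matrix
```

The uniform-grid average is exact (up to rounding) for trigonometric polynomials of degree below the grid size, so no sophisticated quadrature is needed here.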
$$\lambda^L_\infty(\zeta) = \lim_{n\to\infty}\lambda^L_n(\zeta)\qquad\text{and}\qquad\lambda^R_\infty(\zeta) = \lim_{n\to\infty}\lambda^R_n(\zeta).$$
$$\lambda^L_\infty(0) = \lim_{n\to\infty}K^{-L}_n(0,0) = \lim_{n\to\infty}\left(\kappa^{-R}_n\right)^H\kappa^{-R}_n, \tag{4.6}$$
and
$$\lambda^R_\infty(0) = \lim_{n\to\infty}K^{-R}_n(0,0) = \lim_{n\to\infty}\left(\kappa^{-L}_n\right)^H\kappa^{-L}_n.$$
$$\left(\prod_{k=0}^{n}\rho^R_k\right)^H\left(\prod_{k=0}^{n}\rho^R_k\right) = \left(\kappa^{-R}_{n+1}\right)^H\kappa^{-R}_{n+1},$$
and thus, taking limits when $n\to\infty$ and using (4.6), we obtain (4.7). In the scalar case,
$\lambda_n(\zeta) = \lambda^L_n(\zeta) = \lambda^R_n(\zeta)$ is the well-known Christoffel function (see [156]). The matrix
version of the Christoffel function for a matrix measure supported on the real line was
defined in [70]. The following result states that, as in the scalar case, $H_{\varphi^L}$ is almost
unitary.
$$H_{\varphi^L}H^H_{\varphi^L} = I_\infty, \tag{4.8}$$
$$H^H_{\varphi^L}H_{\varphi^L} = I_\infty - \varphi^L(0)\lambda^L_\infty(0)\varphi^{L,H}(0), \tag{4.9}$$
Proof. For the first statement, notice that $I_\infty = \langle\varphi^L,\varphi^L\rangle_{L,\sigma}$, and we have $\langle zp, zq\rangle_{L,\sigma} = \langle p,q\rangle_{L,\sigma}$ for every $p,q\in\mathbb{P}_l[z]$. Thus, from (4.4) we get
$$H_{\varphi^L}H^H_{\varphi^L} = H_{\varphi^L}\langle\varphi^L,\varphi^L\rangle_{L,\sigma}H^H_{\varphi^L} = \langle H_{\varphi^L}\varphi^L, H_{\varphi^L}\varphi^L\rangle_{L,\sigma} = \langle z\varphi^L, z\varphi^L\rangle_{L,\sigma} = \langle\varphi^L,\varphi^L\rangle_{L,\sigma} = I_\infty.$$
Notice that this is the proof given in [163] for the truncated case. On the other hand, let
$$A(k,n) = \left(\alpha^H_k\kappa^{-R}_k\varphi^{L,H}_n(0)\right)^H\alpha^H_k\kappa^{-R}_k\varphi^{L,H}_n(0).$$
For $n\geq0$, we have
$$\left(H^H_{\varphi^L}H_{\varphi^L}\right)^{(n)}_{(n)} = \left(\kappa^L_{n-1}\kappa^{-L}_n\right)^H\kappa^L_{n-1}\kappa^{-L}_n + \left(\alpha^H_n\kappa^{-R}_n\varphi^{L,H}_n(0)\right)^H\alpha^H_n\kappa^{-R}_n\varphi^{L,H}_n(0) + \sum_{k=n+1}^{\infty}A(k,n),$$
where $(H)^{(n)}$ and $(H)_{(n)}$ denote the $n$-th block row and the $n$-th block column, respectively, of the block matrix $H$. Using (1.63), we have
$$\left(H^H_{\varphi^L}H_{\varphi^L}\right)^{(n)}_{(n)} = I - \varphi^L_n(0)\left(\kappa^{-R}_n\right)^H\left(I-\alpha_n\alpha^H_n\right)\kappa^{-R}_n\varphi^{L,H}_n(0) + \sum_{k=n+1}^{\infty}A(k,n),$$
and since $I-\alpha_n\alpha^H_n = \left(\kappa^{-R}_{n+1}\kappa^R_n\right)^2 = \left(\kappa^{-R}_{n+1}\kappa^R_n\right)^H\kappa^{-R}_{n+1}\kappa^R_n = \left(\kappa^R_n\right)^H\left(\kappa^{-R}_{n+1}\right)^H\kappa^{-R}_{n+1}\kappa^R_n$, we get
$$\left(H^H_{\varphi^L}H_{\varphi^L}\right)^{(n)}_{(n)} = I - \varphi^L_n(0)\left(\kappa^{-R}_{n+1}\right)^H\kappa^{-R}_{n+1}\varphi^{L,H}_n(0) + \sum_{k=n+1}^{\infty}A(k,n) = I - \varphi^L_n(0)\lambda^L_\infty(0)\varphi^{L,H}_n(0),$$
where we have used (4.6). On the other hand, for $0\leq j<n$, using (1.63) and (4.6),
$$\left(H^H_{\varphi^L}H_{\varphi^L}\right)^{(n)}_{(j)} = \varphi^L_n(0)\left(-\left(\kappa^{-R}_n\right)^H\kappa^{-R}_n + \sum_{k=n}^{\infty}\left(\alpha^H_k\kappa^{-R}_k\right)^H\alpha^H_k\kappa^{-R}_k\right)\varphi^{L,H}_j(0)$$
$$= \varphi^L_n(0)\left(-\left(\kappa^{-R}_{n+1}\right)^H\kappa^{-R}_{n+1} + \sum_{k=n+1}^{\infty}\left(\alpha^H_k\kappa^{-R}_k\right)^H\alpha^H_k\kappa^{-R}_k\right)\varphi^{L,H}_j(0)$$
$$= \cdots = -\varphi^L_n(0)\lim_{k\to\infty}\left(\kappa^{-R}_k\right)^H\kappa^{-R}_k\,\varphi^{L,H}_j(0) = -\varphi^L_n(0)\lambda^L_\infty(0)\varphi^{L,H}_j(0).$$
Since $0\leq\|\kappa^R_n\|_2^{-1}\leq\|\kappa^{-R}_n\|_2$, we get
$$\lim_{n\to\infty}\|\kappa^R_n\|_2^{-1} = 0\ \Rightarrow\ \lim_{n\to\infty}\|\kappa^R_n\|_2 = \infty,$$
$$0 < \lim_{n\to\infty}\left\|\lambda^L_n(0)^{-1}\right\|_2 < \infty. \tag{4.10}$$
Since $B_n(0) = \kappa^R_n\left(\kappa^R_n\right)^H = \lambda^L_n(0)^{-1}$, from (4.10) we see that $\det B_n(0)$ is bounded
from above. As a consequence, from [48] (Theorem 19) and (1.84), it follows that
$\sigma\in\mathcal{S}$.
We point out that the block Hessenberg matrix associated with $(\varphi^R_n(z))_{n\geq0}$ has a
similar expression, and the same properties. In the scalar case, the previous theorem can
be found in [156].
In this subsection, we consider the Uvarov matrix transformation with $m>0$ mass points
of the matrix measure σ supported on $\mathbb{T}$, defined as in (3.1) by
$$d\sigma_{u_m}(z) = d\sigma(z) + \sum_{j=1}^{m}M_j\,\delta(z-\zeta_j), \tag{4.11}$$
$$H_{U^L} = L^L_{U\varphi}\,H_{\varphi^L}\,L^{-L}_{U\varphi}. \tag{4.13}$$
$$U^L_{m,n}(z) = \tilde\kappa^L_{m,n}\kappa^{-L}_n\left[\varphi^L_n(z) - L^L_n(\zeta)\left(I_{lm}+MK^L_{n-1}(\zeta)\right)^{-1}MK^L_{n-1}(z,\zeta)\right], \tag{4.14}$$
and
$$\tilde\kappa^{-L}_{m,n}\left(\tilde\kappa^{-L}_{m,n}\right)^H = \tilde S^L_{m,n} = S^L_n + F^L_n(\zeta)\left(I_{lm}+MK^L_{n-1}(\zeta)\right)^{-1}MF^{L,H}_n(\zeta), \tag{4.15}$$
where
$$L^L_n(\zeta) := \left(\varphi^L_n(\zeta_1),\ldots,\varphi^L_n(\zeta_m)\right)\in\mathcal{M}_{l\times ml},\qquad F^L_n(\zeta) := \left(\Phi^L_n(\zeta_1),\ldots,\Phi^L_n(\zeta_m)\right)\in\mathcal{M}_{l\times ml},$$
$$K^L_{n-1}(z,\zeta) := \left(K^L_{n-1}(z,\zeta_1)^T,\ldots,K^L_{n-1}(z,\zeta_m)^T\right)^T\in\mathcal{M}_{ml\times l},$$
$$K^L_{n-1}(\zeta) := \begin{pmatrix}K^L_{n-1}(\zeta_1,\zeta_1) & \cdots & K^L_{n-1}(\zeta_m,\zeta_1)\\ \vdots & \ddots & \vdots\\ K^L_{n-1}(\zeta_1,\zeta_m) & \cdots & K^L_{n-1}(\zeta_m,\zeta_m)\end{pmatrix}\in\mathcal{M}_{lm},\qquad M := \begin{pmatrix}M_1 & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & M_m\end{pmatrix}\in\mathcal{M}_{lm}.$$
$$\kappa^L_n\tilde\kappa^{-L}_{m,n}\left(\kappa^L_n\tilde\kappa^{-L}_{m,n}\right)^H = \kappa^L_n\left[S^L_n + F^L_n(\zeta)\left(I_{lm}+MK^L_{n-1}(\zeta)\right)^{-1}MF^{L,H}_n(\zeta)\right]\left(\kappa^L_n\right)^H = I + L^L_n(\zeta)\left(I_{lm}+MK^L_{n-1}(\zeta)\right)^{-1}ML^{L,H}_n(\zeta),$$
so we obtain (4.16).
To compute $L^{-L}_{U\varphi}$, let us write
$$\Phi^L_n(z) = U^L_{m,n}(z) + \sum_{k=0}^{n-1}\lambda^L_{nk}U^L_{m,k}(z),$$
where, for $0\leq k\leq n-1$,
$$\lambda^L_{nk} = \langle\Phi^L_n, U^L_{m,k}\rangle_{L,\sigma_{u_m}}\tilde S^{-L}_{m,k} = \left(\langle\Phi^L_n, U^L_{m,k}\rangle_{L,\sigma} + \sum_{j=1}^{m}\Phi^L_n(\zeta_j)M_jU^{L,H}_{m,k}(\zeta_j)\right)\tilde S^{-L}_{m,k} = F^L_n(\zeta)M\left[U^L_{m,k}(\zeta_1),\ldots,U^L_{m,k}(\zeta_m)\right]^H\tilde S^{-L}_{m,k}.$$
Thus,
$$\Phi^L_n(z) = U^L_{m,n}(z) + \sum_{k=0}^{n-1}F^L_n(\zeta)\,M\,T^{L,H}_{m,k}(\zeta)\,U^L_{m,k}(z), \tag{4.17}$$
where $T^L_{m,k}(\zeta) = \left[U^L_{m,k}(\zeta_1),\ldots,U^L_{m,k}(\zeta_m)\right]$. From (4.14) with $z=\zeta_j$, $j=1,\ldots,m$, it follows
that $T^{L,H}_{m,k}(\zeta) = \left(I_{lm}+K^L_{k-1}(\zeta)M\right)^{-1}L^{L,H}_k(\zeta)\left(\tilde\kappa^{-L}_{m,k}\kappa^L_k\right)^H$, and substituting in (4.17),
$$\varphi^L_n(z) = \kappa^L_n\tilde\kappa^{-L}_{m,n}U^L_{m,n}(z) + \sum_{k=0}^{n-1}L^L_n(\zeta)M\left(I_{lm}+K^L_{k-1}(\zeta)M\right)^{-1}L^{L,H}_k(\zeta)\left(\tilde\kappa^{-L}_{m,k}\kappa^L_k\right)^HU^L_{m,k}(z).$$
As a consequence,
$$\left(L^{-L}_{U\varphi}\right)_{n,k} = \begin{cases}\kappa^L_n\tilde\kappa^{-L}_{m,n}, & \text{if } k=n,\\[2pt] L^L_n(\zeta)M\left(I_{lm}+K^L_{k-1}(\zeta)M\right)^{-1}L^{L,H}_k(\zeta)\left(\tilde\kappa^{-L}_{m,k}\kappa^L_k\right)^H, & \text{if } 0\leq k<n,\\[2pt] 0, & \text{if } n<k.\end{cases}$$
On the other hand, since the matrix Szegő condition only depends on the absolutely
continuous component of the matrix measure, and the Uvarov matrix transformation only
affects the singular part, the following result is immediate.
Proceeding as in Proposition 3.2.2, we get the following proposition, which establishes a
connection formula for these matrix polynomials.
Proposition 4.1.7. For all $z$ such that $W(z) = Iz-A$ is non-singular, the sequence of
matrix polynomials $(C^L_n(z) = \tilde\kappa^L_nz^n+\cdots)_{n\geq0}$ defined by
$$C^L_n(z)W(z) = \tilde\kappa^L_n\kappa^{-L}_{n+1}\left[\varphi^L_{n+1}(z) - \varphi^L_{n+1}(A)K^{-L}_n(A,A)K^L_n(z,A)\right] \tag{4.19}$$
In particular, if $\varphi^L_{n+1}(A)$ is non-singular,
$$\left(\tilde\kappa^L_n\kappa^{-L}_{n+1}\right)^H\tilde\kappa^L_n\kappa^{-L}_{n+1} = \varphi^L_{n+1}(A)K^{-L}_{n+1}(A,A)K^L_n(A,A)\varphi^{-L}_{n+1}(A). \tag{4.23}$$
$$\kappa^L_{n+1}\tilde\kappa^{-L}_n\left(\kappa^L_{n+1}\tilde\kappa^{-L}_n\right)^H = \kappa^L_{n+1}\left(S^L_{n+1} + \Phi^L_{n+1}(A)K^{-L}_n(A,A)\Phi^{L,H}_{n+1}(A)\right)\left(\kappa^L_{n+1}\right)^H = I + \varphi^L_{n+1}(A)K^{-L}_n(A,A)\varphi^{L,H}_{n+1}(A),$$
which is (4.23).
where
$$\Lambda^L_n(A) = \sum_{k=n}^{\infty}K^{-L}_k(A,A)\varphi^{L,H}_{k+1}(A)\left(\tilde\kappa^L_k\kappa^{-L}_{k+1}\right)^H\tilde\kappa^L_k\kappa^{-L}_{k+1}\varphi^L_{k+1}(A)K^{-L}_k(A,A).$$
In particular, if $\varphi^L_n(A)$ is non-singular for $n\geq0$, then
$$\mathcal{C}^{L,H}\mathcal{C}^L = I_\infty - \varphi^L(A)\lambda^L_\infty(A)\varphi^{L,H}(A). \tag{4.25}$$
Proof. The first statement can be proved as in Proposition 4.1.2. On the other hand, for $n\geq0$,
$$\left(\mathcal{C}^{L,H}\right)^{(n)}\left(\mathcal{C}^L\right)_{(n)} = \left(\tilde\kappa^L_{n-1}\kappa^{-L}_n\right)^H\tilde\kappa^L_{n-1}\kappa^{-L}_n + \varphi^L_n(A)\Lambda^L_n(A)\varphi^{L,H}_n(A). \tag{4.26}$$
From (4.22), $0 = I - \left(\tilde\kappa^L_{n-1}\kappa^{-L}_n\right)^H\tilde\kappa^L_{n-1}\kappa^{-L}_n\left(I + \varphi^L_n(A)K^{-L}_{n-1}(A,A)\varphi^{L,H}_n(A)\right)$, and thus
$$\left(\tilde\kappa^L_{n-1}\kappa^{-L}_n\right)^H\tilde\kappa^L_{n-1}\kappa^{-L}_n = I - \left(\tilde\kappa^L_{n-1}\kappa^{-L}_n\right)^H\tilde\kappa^L_{n-1}\kappa^{-L}_n\,\varphi^L_n(A)K^{-L}_{n-1}(A,A)\varphi^{L,H}_n(A).$$
Since, as in (4.23), $\left(\tilde\kappa^L_{n-1}\kappa^{-L}_n\right)^H\tilde\kappa^L_{n-1}\kappa^{-L}_n = \varphi^L_n(A)K^{-L}_n(A,A)K^L_{n-1}(A,A)\varphi^{-L}_n(A)$, we obtain
$$\left(\mathcal{C}^{L,H}\right)^{(n)}\left(\mathcal{C}^L\right)_{(n)} = \varphi^L_n(A)K^{-L}_n(A,A)K^L_{n-1}(A,A)\varphi^{-L}_n(A) + \varphi^L_n(A)\left(\sum_{k=n}^{\infty}\left(K^{-L}_k(A,A)-K^{-L}_{k+1}(A,A)\right)\right)\varphi^{L,H}_n(A).$$
Notice that
$$I - \varphi^L_n(A)K^{-L}_{n+1}(A,A)\varphi^{L,H}_n(A) = \varphi^L_n(A)K^{-L}_n(A,A)K^L_{n-1}(A,A)\varphi^{-L}_n(A) + \varphi^L_n(A)\left(K^{-L}_n(A,A)-K^{-L}_{n+1}(A,A)\right)\varphi^{L,H}_n(A),$$
and thus
$$\left(\mathcal{C}^{L,H}\right)^{(n)}\left(\mathcal{C}^L\right)_{(n)} = I - \varphi^L_n(A)K^{-L}_{n+1}(A,A)\varphi^{L,H}_n(A) + \varphi^L_n(A)\left(\sum_{k=n+1}^{\infty}\left(K^{-L}_k(A,A)-K^{-L}_{k+1}(A,A)\right)\right)\varphi^{L,H}_n(A) = \cdots = I - \varphi^L_n(A)\lambda^L_\infty(A)\varphi^{L,H}_n(A),$$
so (4.25) follows.
The proof of the next result is analogous to the one given in [156] (Theorem 2.2.1),
taking $l(z,\zeta) = \frac12\left(1+z\zeta^{-1}\right)I$.
Proposition 4.1.11. Let σ be an l × l Hermitian matrix measure supported on the unit
circle.
2. If $|\zeta| = 1$, then $\lambda^L_\infty(\zeta) = \sigma(\{\zeta\})$.
$$L^L_{\varphi C} = \left(H_{\varphi^L} - A\right)\mathcal{C}^{L,H}, \tag{4.30}$$
and
$$H_{C^L} - A = \mathcal{C}^LL^L_{\varphi C}. \tag{4.31}$$
$$\mathcal{C}^L\varphi^L(z) = L^{-L}_{\varphi C}\left(H_{\varphi^L} - A\right)\varphi^L(z).$$
$$L^L_{\varphi C} = \left(H_{\varphi^L} - \zeta I_\infty\right)\mathcal{C}^{L,H},\qquad\text{and}\qquad H_{C^L} - \zeta I_\infty = \mathcal{C}^LL^L_{\varphi C}.$$
Note that the previous proposition extends to the matrix case the result given in [45].
Finally, from Lemma 3.2.4 we deduce that if σ ∈ S, then σc ∈ S.
An interesting open problem is to consider the relation between the CMV block ma-
trices associated with these transformations. We will analyze this problem in a future
contribution. In the scalar case, this has been studied in [25].
In this section, we use the ideas in [4] (for the scalar case) to obtain a factorization of
(HϕL )n ∈ M(n+1)l , the truncated block Hessenberg matrix defined as in (4.5). Denote by
$\Theta\in\mathcal{M}_{2l}$ the unitary matrix (see [158], Section 4)
$$\Theta(\alpha) = \begin{pmatrix}\alpha^H & \rho^L\\ \rho^R & -\alpha\end{pmatrix},\qquad\text{with } \alpha\in\mathcal{M}_l,\quad \rho^L = \sqrt{I-\alpha^H\alpha}\ \text{ and }\ \rho^R = \sqrt{I-\alpha\alpha^H}. \tag{4.32}$$
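As a quick numerical illustration of (4.32) (a sketch with our own helper names, forming the Hermitian square roots via an eigendecomposition), Θ(α) is unitary for any strict contraction α:

```python
import numpy as np

def psd_sqrt(M):
    """Hermitian square root of a positive semi-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def theta_block(alpha):
    """Θ(α) as in (4.32): [[α^H, ρ^L], [ρ^R, -α]]."""
    I = np.eye(alpha.shape[0])
    rhoL = psd_sqrt(I - alpha.conj().T @ alpha)
    rhoR = psd_sqrt(I - alpha @ alpha.conj().T)
    return np.block([[alpha.conj().T, rhoL], [rhoR, -alpha]])

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
alpha = 0.9 * G / np.linalg.norm(G, 2)   # strict contraction: ||α||_2 = 0.9
T = theta_block(alpha)
err = np.linalg.norm(T.conj().T @ T - np.eye(6))
```

The check relies on the intertwining identity $\alpha\sqrt{I-\alpha^H\alpha} = \sqrt{I-\alpha\alpha^H}\,\alpha$, which makes the off-diagonal blocks of $\Theta^H\Theta$ cancel.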
Set $\hat\Theta_n(\alpha_k) = I_{kl}\oplus\Theta(\alpha_k)\oplus I_{(n-k-1)l}$, where ⊕ is the direct sum, and define $I_0\oplus B = B\oplus I_0 = B$ for any matrix $B\in\mathcal{M}_l$.
Furthermore, when $n\to\infty$,
$$H_{\varphi^L} = \lim_{n\to\infty}\prod_{k=0}^{n}\hat\Theta_{n+1}(\alpha_{n-k}). \tag{4.34}$$
In the scalar case, the above factorization is known as the AGR factorization of the
GGT matrices, due to Ammar, Gragg, and Reichel (see [4]). Two additional proofs appear
in [158] (Section 10). Moreover, as in the scalar case, the above proposition can be
used to show that every unitary block matrix has a factorization without recourse to
orthogonal polynomials (see [158] Theorem 10.3). Another immediate consequence of the
above proposition is the following well-known result of the theory of orthogonal matrix
polynomials.
Since $\alpha^H_{n-1}\alpha_{n-1} < I$, we conclude that all the eigenvalues of $(H_{\varphi^L})_{n-1}$ lie in $\mathbb{D}$.
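In the scalar case ($l = 1$) the zero location can be checked directly: generate the monic polynomials by the Szegő recursion $\Phi_{n+1}(z) = z\Phi_n(z) - \bar\alpha_n\Phi_n^*(z)$ (a standard scalar convention, used here as an assumption about normalization) and verify that all roots lie in the open unit disk:

```python
import numpy as np

rng = np.random.default_rng(1)
# Random Verblunsky coefficients with |α_n| < 1.
alphas = 0.95 * rng.uniform(0.1, 1.0, 8) * np.exp(2j * np.pi * rng.uniform(size=8))

# Coefficients of the monic Φ_n, lowest degree first; Φ_0 = 1.
phi = np.array([1.0 + 0j])
for a in alphas:
    phi_star = np.conj(phi[::-1])                  # reversed polynomial Φ_n^*
    z_phi = np.concatenate(([0.0], phi))           # coefficients of z·Φ_n(z)
    phi = z_phi - np.conj(a) * np.concatenate((phi_star, [0.0]))
roots = np.roots(phi[::-1])                        # np.roots wants highest degree first
max_mod = np.abs(roots).max()
```

Since every |α_n| is strictly less than one, classical OPUC theory guarantees `max_mod < 1`, matching the eigenvalue statement above.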
Notice that taking $n\to\infty$ in (4.35), we obtain another proof of (4.8). On the other
hand, from (4.13) and (4.34) we obtain the following factorization of the block Hessenberg
matrix associated with the Uvarov matrix transformation:
$$H_{U^L} = L^L_{U\varphi}\left[\lim_{n\to\infty}\prod_{k=0}^{n}\hat\Theta_{n+1}(\alpha_{n-k})\right]L^{-L}_{U\varphi}.$$
In a similar way, for the Christoffel matrix transformation, from (4.30), (4.31) and (4.34),
we get
$$H_{C^L} - A = \mathcal{C}^L\left[\lim_{n\to\infty}\prod_{k=0}^{n}\hat\Theta_{n+1}(\alpha_{n-k}) - A\right]\mathcal{C}^{L,H}.$$
Notice that if $A = \zeta I_\infty$ with $\zeta\in\mathbb{C}$, then, taking into account that $\mathcal{C}^L\mathcal{C}^{L,H} = I_\infty$, we obtain
$$H_{C^L} = \mathcal{C}^L\left[\lim_{n\to\infty}\prod_{k=0}^{n}\hat\Theta_{n+1}(\alpha_{n-k})\right]\mathcal{C}^{L,H}.$$
4.2.1 Introduction
The block Hessenberg matrix representation has two limitations. First, the matrix
is not always unitary; indeed, it is unitary if and only if the associated measure does
not belong to the Szegő matrix class (see [48, 76]). Second, it has rows with an infinite
number of non-zero block entries, which represents an important constraint in applications.
Moreover, its complicated structure yields some difficulties
in the study of spectral theory [100]. A more convenient representation is given by the
so-called block CMV matrix. This matrix is always unitary, and its entries are given
in terms of the Verblunsky matrix coefficients. The block CMV matrix comes from the
representation of the multiplication operator in the linear space of Laurent polynomials,
when a suitable orthonormal basis related to the matrix orthogonal polynomials is chosen.
These matrices play the same role among unitary matrices that block Jacobi matrices play
among Hermitian matrices (see [116]). Moreover, owing to their special structure, they
give a spectral interpretation for the zeros of orthogonal polynomials that is much simpler
than the one given by the block Hessenberg matrix (see [42], Theorem 3.10). The
name comes from the initials CMV (Cantero, Moral and Velázquez, [26]), although it had
appeared before in several places (see the discussion in [158, 170]). Recently, CMV block
matrices have been used in the study of Toda type integrable systems [6] and quantum
random walks [24]. In the latter, the authors show how the theory of CMV block matrices
gives a natural tool to study these processes and give results that are analogous to those
that Karlin and McGregor developed in the study of (classical) birth-and-death processes
using orthogonal polynomials on the real line [115]. CMV block matrices have also been
used to describe the solutions to the Schur interpolation problem of certain operator-valued
functions in the unit disc [8], in the study of conservative discrete time-invariant systems
[7], and in the development of algorithms for Hessenberg reduction of structured matrices
[90], by using a block Lanczos-type procedure. This indicates that, as in the scalar case,
CMV block matrices are strongly related to the study of block Krylov methods. For a
recent treatment of these methods for Hermitian positive-definite matrices, we refer the
reader to [14].
The sequences $(\chi_k(z))_{k\geq0}$ and $(x_k(z))_{k\geq0}$ are orthonormal (see [42], Proposition 3.20).
By using the above basis, the matrix representation of the multiplication operator f (z) 7→
zf (z) is given by a five-diagonal block matrix called the CMV matrix (see [42], Section
3.11). It is given in terms of the Verblunsky matrix coefficients, and has the following factorization:
$$\mathcal{C} = \mathcal{C}(\alpha_0,\alpha_1,\ldots) = \mathcal{LM},$$
where $\mathcal{L} = \Theta(\alpha_0)\oplus\Theta(\alpha_2)\oplus\Theta(\alpha_4)\oplus\cdots$ and $\mathcal{M} = I\oplus\Theta(\alpha_1)\oplus\Theta(\alpha_3)\oplus\cdots$ are block tridiagonal matrices. In other words, the CMV matrix is given by
$$\mathcal{C} = \mathcal{LM} = \begin{pmatrix}
\alpha_0^H & \rho_0^L\alpha_1^H & \rho_0^L\rho_1^L & 0 & 0 & \cdots\\
\rho_0^R & -\alpha_0\alpha_1^H & -\alpha_0\rho_1^L & 0 & 0 & \cdots\\
0 & \alpha_2^H\rho_1^R & -\alpha_2^H\alpha_1 & \rho_2^L\alpha_3^H & \rho_2^L\rho_3^L & \cdots\\
0 & \rho_2^R\rho_1^R & -\rho_2^R\alpha_1 & -\alpha_2\alpha_3^H & -\alpha_2\rho_3^L & \cdots\\
0 & 0 & 0 & \alpha_4^H\rho_3^R & -\alpha_4^H\alpha_3 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
Notice that $\mathcal{C}$ is unitary, since $\Theta(\alpha_n)$ is unitary for every $n\geq0$ (see [158], Section 4).
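The LM factorization can be imitated numerically. The sketch below builds a finite scalar ($l = 1$) CMV-like truncation in which both factors are exactly unitary (we simply close M with a trailing 1, a convention of ours rather than the thesis's), and checks unitarity and the five-diagonal structure:

```python
import numpy as np

def theta(a):
    """Scalar Θ(α) = [[conj(α), ρ], [ρ, -α]], ρ = sqrt(1 - |α|²)."""
    rho = np.sqrt(1 - abs(a)**2)
    return np.array([[np.conj(a), rho], [rho, -a]])

def cmv(alphas):
    """Finite CMV-like truncation C = L·M (len(alphas) even)."""
    n = len(alphas)
    L = np.zeros((n, n), dtype=complex)
    M = np.zeros((n, n), dtype=complex)
    for k in range(0, n, 2):            # L = Θ(α_0) ⊕ Θ(α_2) ⊕ ...
        L[k:k+2, k:k+2] = theta(alphas[k])
    M[0, 0] = M[n-1, n-1] = 1.0         # M = 1 ⊕ Θ(α_1) ⊕ ... ⊕ 1 (closing 1 is ours)
    for k in range(1, n-1, 2):
        M[k:k+2, k:k+2] = theta(alphas[k])
    return L @ M

rng = np.random.default_rng(2)
alphas = 0.7 * rng.uniform(size=8) * np.exp(2j * np.pi * rng.uniform(size=8))
C = cmv(alphas)
unitary_err = np.linalg.norm(C.conj().T @ C - np.eye(8))
i, j = np.indices(C.shape)
band_err = np.abs(C[np.abs(i - j) > 2]).max()
```

Since L has bandwidth 1 and M has bandwidth 1, their product can have bandwidth at most 2, which is exactly the five-diagonal shape displayed above.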
Let σ be a symmetric matrix measure on the unit circle, as in Section 1.5. Symmetric
matrix measures supported on the unit circle frequently appear in the study of quantum
random walks. For instance, the matrix measure associated with the well-known
Hadamard quantum random walk with constant coin $H = \frac{1}{\sqrt2}\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$ and local transition is
$$d\sigma(\theta) = \frac{1}{\sqrt{\cos 2\theta}}\begin{pmatrix}\sqrt{1+\cos 2\theta} & \pm1\\ \pm1 & \sqrt{1+\cos 2\theta}\end{pmatrix}\frac{d\theta}{2\pi},$$
supported on $\left[-\frac\pi4,\frac\pi4\right]\cup\left[\frac{3\pi}4,\frac{5\pi}4\right]$ (see [24], Section 8).
Symmetric matrix measures can be characterized in terms of the corresponding
Verblunsky matrix coefficients as follows.
Theorem 4.2.1. ([42], Lemma 4.1) The Hermitian matrix measure σ is symmetric if and
only if all Verblunsky matrix coefficients αn are Hermitian matrices.
When the Verblunsky coefficients are Hermitian matrices (real numbers in the scalar
case), as noted in Section 1.5, there is a connection due to Szegő, called the matrix
Szegő transformation. Moreover, Theorem 1.5.3 showed a relation between the CMV
matrix and the Jacobi matrix $\mathcal{J}$.
In what follows, we assume σ and σ̃ are positive definite symmetric matrix measures
supported on T. Therefore, the corresponding Verblunsky matrix coefficients (αn )n≥0 and
(α̃n )n≥0 are Hermitian matrices. When σ is a symmetric matrix measure, the associated
MOPUC satisfy some interesting properties. The following lemma states some of them.
Lemma 4.2.2. Assume that $\kappa^R_n$ and $\kappa^L_n$ are positive definite. Then, for all $n\geq0$,
1. $\kappa^R_n = \left(\kappa^L_n\right)^H$ and $S^R_n = S^L_n$.
2. $\left(\varphi^R_n(\bar z)\right)^H = \varphi^L_n(z)$.
3. $\left(\Phi^R_n(0)\right)^H = \Phi^L_n(0)$.
2. We have $\left(\varphi^R_n(\bar z)\right)^H = \psi^L_n(z)$ and, since we know $\psi^L_n(z) = \varphi^L_n(z)$ from the previous item,
the result follows.
3. Since σ is symmetric, we have $\alpha^H_n = \alpha_n$. The result follows from (1.38) and (1.39).
In the remainder of this section, we will show that the matrices $\Theta(\alpha_n)$ and $\Theta(\tilde\alpha_n)$ are
unitarily similar whenever the corresponding matrix measures are symmetric. Moreover,
we give an explicit formula for the unitary matrices realizing the similarity. Analogous
results for the scalar case have been studied in [139].
Proposition 4.2.3. For n ≥ 0, there exist unitary matrices Ξn ∈ M2l such that
Proof. Since σ is symmetric, we have $\alpha^H_n = \alpha_n$, and thus $\Theta(\alpha_n)$ is a Hermitian and
unitary matrix. As a consequence, its eigenvalues lie on $\mathbb{T}$ and are real; that is, they are
necessarily ±1. Moreover, since the trace of $\Theta(\alpha_n)$ is zero, we deduce that each eigenvalue
has algebraic multiplicity $l$, since the sum of the eigenvalues equals the trace of the matrix
(see [19], Section 4.3). On the other hand, since $\Theta(\alpha_n)$ is a normal matrix, by the
Spectral Theorem for normal matrices (see [109], Theorem 2.5.3) there exist a unitary
matrix $U_{\Theta_n}\in\mathcal{M}_{2l}$ and a diagonal matrix $D_{\Theta_n}\in\mathcal{M}_{2l}$, containing the eigenvalues of
$\Theta(\alpha_n)$, such that
$$\Theta(\alpha_n) = U_{\Theta_n}D_{\Theta_n}U^H_{\Theta_n}, \tag{4.37}$$
where we have arranged the eigenvalues of $\Theta(\alpha_n)$ in such a way that $D_{\Theta_n} = I\oplus(-I)$.
Similarly, for σ̃ we obtain the analogous factorization (4.38). Since $D_{\Theta_n} = D_{\tilde\Theta_n}$, from (4.37) and (4.38) we have $U^H_{\Theta_n}\Theta(\alpha_n)U_{\Theta_n} = U^H_{\tilde\Theta_n}\Theta(\tilde\alpha_n)U_{\tilde\Theta_n}$ and,
as a consequence, $\Theta(\alpha_n) = U_{\Theta_n}U^H_{\tilde\Theta_n}\Theta(\tilde\alpha_n)\left(U_{\Theta_n}U^H_{\tilde\Theta_n}\right)^H$. Therefore, taking
$\Xi_n = U_{\Theta_n}U^H_{\tilde\Theta_n}$ as in (4.39), the result follows.
Notice that (4.39) is a change of basis matrix between two orthonormal bases. On
the other hand, since the matrices $(\alpha_n)_{n\geq0}$ are Hermitian, for all $n\geq0$ we have
$0 < \alpha_n^2 = \alpha^H_n\alpha_n < I$, and therefore $0 < I+\alpha_n < 2I$. Also, we have $0 < I-\alpha_n$. Since
$I+\alpha_n$ and $I-\alpha_n$ are positive definite matrices, they have unique positive definite square roots (see [109],
Theorem 7.2.6), $\sqrt{I+\alpha_n}$ and $\sqrt{I-\alpha_n}$, respectively. Thus,
we can define the unitary block matrices
$$\frac{1}{\sqrt2}\begin{pmatrix}\sqrt{I+\alpha_n} & \sqrt{I-\alpha_n}\\ \sqrt{I-\alpha_n} & -\sqrt{I+\alpha_n}\end{pmatrix}\in\mathcal{M}_{2l},\qquad \frac{1}{\sqrt2}\begin{pmatrix}\sqrt{I+\tilde\alpha_n} & \sqrt{I-\tilde\alpha_n}\\ \sqrt{I-\tilde\alpha_n} & -\sqrt{I+\tilde\alpha_n}\end{pmatrix}\in\mathcal{M}_{2l}.$$
Proposition 4.2.4. The unitary matrix $\Xi_n$ defined in (4.39) has the form
$$\Xi_n(\alpha_n,\tilde\alpha_n) = \frac12\begin{pmatrix}\pi_{n,0,0}+\pi_{n,1,1} & \pi_{n,0,1}-\pi_{n,1,0}\\ -(\pi_{n,0,1}-\pi_{n,1,0}) & \pi_{n,0,0}+\pi_{n,1,1}\end{pmatrix}\in\mathcal{M}_{2l}, \tag{4.40}$$
with
$$\pi_{n,i,j} = \sqrt{I+(-1)^i\alpha_n}\,\sqrt{I+(-1)^j\tilde\alpha_n}\in\mathcal{M}_l,\qquad i,j = 0,1. \tag{4.41}$$
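A scalar ($l = 1$, real α) sanity check of (4.40)–(4.41), together with the similarity $\Xi_n^H\Theta(\alpha_n)\Xi_n = \Theta(\tilde\alpha_n)$ used below, can be run in a few lines (the helper names are ours):

```python
import numpy as np

def theta(a):
    rho = np.sqrt(1 - a**2)
    return np.array([[a, rho], [rho, -a]])

def xi(a, at):
    """Ξ(α, α̃) as in (4.40)-(4.41), scalar (l = 1) case."""
    p = {(i, j): np.sqrt(1 + (-1)**i * a) * np.sqrt(1 + (-1)**j * at)
         for i in (0, 1) for j in (0, 1)}
    d = 0.5 * (p[0, 0] + p[1, 1])
    o = 0.5 * (p[0, 1] - p[1, 0])
    return np.array([[d, o], [-o, d]])

a, at = 0.3, -0.6          # two Hermitian (here: real) Verblunsky coefficients
X = xi(a, at)
err_unitary = np.linalg.norm(X.T @ X - np.eye(2))
err_sim = np.linalg.norm(X.T @ theta(a) @ X - theta(at))
```

In the scalar case one can see directly that Ξ is the product of the eigenvector matrices of Θ(α) and Θ(α̃), which is exactly the construction of (4.39).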
Since we know the relation between $\Theta(\alpha_n)$ and $\Theta(\tilde\alpha_n)$, it is possible to deduce the
relation between the corresponding CMV block matrices $\mathcal{C}$ and $\tilde{\mathcal{C}}$.
$$\tilde{\mathcal{C}} = W\mathcal{C}V, \tag{4.42}$$
where
$$W = W_0\oplus W_1\oplus W_2\oplus W_3\oplus\cdots,\qquad V = I\oplus V_0\oplus V_1\oplus V_2\oplus\cdots, \tag{4.43}$$
with $W_n, V_n\in\mathcal{M}_{2l}$ given by
$$W_n = \Xi^H_{2n}\Theta(\alpha_{2n})\Xi_{2n}\Theta(\alpha_{2n})^H,\qquad V_n = \Theta(\alpha_{2n+1})^H\Xi^H_{2n+1}\Theta(\alpha_{2n+1})\Xi_{2n+1}. \tag{4.44}$$
Proof. Let $\tilde{\mathcal{C}} = \tilde{\mathcal{L}}\tilde{\mathcal{M}} = \tilde{\mathcal{L}}\mathcal{L}^{-1}\mathcal{C}\mathcal{M}^{-1}\tilde{\mathcal{M}}$, and define $W = \tilde{\mathcal{L}}\mathcal{L}^{-1}$ and $V = \mathcal{M}^{-1}\tilde{\mathcal{M}}$. Defining $W_n = \Theta(\tilde\alpha_{2n})\Theta(\alpha_{2n})^{-1} = \Theta(\tilde\alpha_{2n})\Theta(\alpha_{2n})^H$ for $n\geq0$, from (4.36) we get $\Xi^H_n\Theta(\alpha_n)\Xi_n = \Theta(\tilde\alpha_n)$, and substituting in $W_n$ we obtain the first equation of (4.44).
In a similar way, defining $V_n = \Theta(\alpha_{2n+1})^{-1}\Theta(\tilde\alpha_{2n+1}) = \Theta(\alpha_{2n+1})^H\Theta(\tilde\alpha_{2n+1})$ for $n\geq0$, again from
(4.36) we get the second equation of (4.44).
In this subsection, we will consider the effect that some transformations on the matrix
measure have on the corresponding Verblunsky matrix coefficients. For all p, q ∈ Pl [z], we
define the following (right) sesquilinear forms:
1. Christoffel transformation
Notice that, since σ is a Hermitian symmetric matrix measure, it follows that the sesquilinear
forms associated with both Uvarov transformations are also symmetric. The same
is true for the sesquilinear form associated with the Christoffel transformation if we take
$W(\theta) = \cos\theta\, I - A$.
Let $(C^R_n(z) = \kappa_{c,n}z^n+\cdots)_{n\geq0}$, $(U^R_n(z) = \kappa_{u,n}z^n+\cdots)_{n\geq0}$ and $(V^R_n(z) = \kappa_{v,n}z^n+\cdots)_{n\geq0}$ be, respectively, the sequences of monic MOPUC associated with the sesquilinear
forms (4.45), (4.46) and (4.47).
For the Christoffel matrix transformation, from Proposition 3.2.2 with $m=1$, we
have that
$$C^R_n(z) = W(z)^{-1}\left[\Phi^R_{n+1}(z) - K^R_n(A,z)\left[K^R_n(A,A)\right]^{-1}\Phi^R_{n+1}(A)\right], \tag{4.48}$$
and
$$S^R_{c,n} = \left(\kappa^{-R}_{c,n}\right)^H\kappa^{-R}_{c,n} = S^R_{n+1} + \Phi^{R,H}_{n+1}(A)\left[K^R_n(A,A)\right]^{-1}\Phi^R_{n+1}(A). \tag{4.49}$$
For the Uvarov transformation with one mass point, from Proposition 3.1.2 with
$m=1$, we have that
$$U^R_n(z) = \Phi^R_n(z) - K^R_{n-1}(\zeta,z)M\left(I+K^R_{n-1}(\zeta,\zeta)M\right)^{-1}\Phi^R_n(\zeta), \tag{4.50}$$
and
$$S^R_{u,n} = \left(\kappa^{-R}_{u,n}\right)^H\kappa^{-R}_{u,n} = S^R_n + \Phi^{R,H}_n(\zeta)M\left(I+K^R_{n-1}(\zeta,\zeta)M\right)^{-1}\Phi^R_n(\zeta)$$
is a positive definite matrix for $n\geq1$.
Finally, we show the connection formula for the Uvarov transformation with
two mass points. We define
$$K^R_{n-1}(\vec\zeta,z) := \left(K^R_{n-1}(\zeta,z),\,K^R_{n-1}(\bar\zeta^{-1},z)\right),\qquad F^R_n(\vec\zeta) := \left(\Phi^R_n(\zeta)^T,\,\Phi^R_n(\bar\zeta^{-1})^T\right)^T,\qquad \mathcal{M} := \begin{pmatrix}0 & M\\ M^H & 0\end{pmatrix}.$$
Proposition 4.2.6. The sequence of monic matrix polynomials $(V^R_n(z))_{n\geq0}$ is orthogonal
with respect to the sesquilinear form (4.47) if and only if the matrix
$$\Lambda^R_{n-1}(\vec\zeta) = \begin{pmatrix}I + K^R_{n-1}(\bar\zeta^{-1},\zeta)M & K^R_{n-1}(\zeta,\zeta)M^H\\ K^R_{n-1}(\bar\zeta^{-1},\bar\zeta^{-1})M & I + K^R_{n-1}(\zeta,\bar\zeta^{-1})M^H\end{pmatrix}$$
is non-singular for $n\geq1$. In this case,
$$S^R_{v,n} = \left(\kappa^{-R}_{v,n}\right)^H\kappa^{-R}_{v,n} = S^R_n + F^{R,H}_n(\vec\zeta)\,\mathcal{M}\left[\Lambda^R_{n-1}(\vec\zeta)\right]^{-1}F^R_n(\vec\zeta). \tag{4.52}$$
Proof. Assume that $(V^R_n(z))_{n\geq0}$ is the sequence of monic MOPUC with respect to (4.47).
We can write
$$V^R_n(z) = \Phi^R_n(z) + \sum_{k=0}^{n-1}\Phi^R_k(z)\lambda^R_{nk}, \tag{4.53}$$
$$V^R_n(z) = \Phi^R_n(z) - K^R_{n-1}(\vec\zeta,z)\,\mathcal{M}\,\mathbf{V}^R_n(\vec\zeta), \tag{4.54}$$
where $\mathbf{V}^R_n(\vec\zeta) := \left(V^R_n(\zeta)^T, V^R_n(\bar\zeta^{-1})^T\right)^T$. Setting $z=\zeta$ and $z=\bar\zeta^{-1}$ in (4.54) and rearranging,
we get $\Lambda^R_{n-1}(\vec\zeta)\mathbf{V}^R_n(\vec\zeta) = F^R_n(\vec\zeta)$. The uniqueness of $V^R_n(z)$ and $\Phi^R_n(z)$ implies
that $\Lambda^R_{n-1}(\vec\zeta)$ is a non-singular matrix for $n\geq1$. Thus $\mathbf{V}^R_n(\vec\zeta) = \left[\Lambda^R_{n-1}(\vec\zeta)\right]^{-1}F^R_n(\vec\zeta)$, and
therefore from (4.54) we get (4.51).
Conversely, define $(V^R_n(z))_{n\geq0}$ as in (4.51). Let $0\leq k\leq n$, and notice that from (4.47)
we have
$$\langle(z-\zeta)^kI,\,V^R_n\rangle_{R,\sigma_v} = \langle(z-\zeta)^kI,\,V^R_n\rangle_{R,\sigma} + \left(\bar\zeta^{-1}-\zeta\right)^k\left[M,\,0\right]\mathbf{V}^R_n(\vec\zeta), \tag{4.55}$$
and thus, substituting in (4.55), we get $\langle(z-\zeta)^kI, V^R_n(z)\rangle_{R,\sigma_v} = 0$, so that $(V^R_n(z))_{n\geq0}$ is
orthogonal with respect to (4.47). For $k=n$, using (4.54),
$$\langle(z-\zeta)^nI,\,V^R_n\rangle_{R,\sigma} = S^R_n - \left\langle(z-\zeta)^nI,\,K^R_{n-1}(\vec\zeta,z)\right\rangle_{R,\sigma}\mathcal{M}\,\mathbf{V}^R_n(\vec\zeta). \tag{4.56}$$
1. $C^R_n(0) = A^R_{c,n}(A)\,\Phi^R_{n+1}(0) + B^R_{c,n}(A)$, (4.57)
2. $U^R_n(0) = A^R_{u,n-1}(\zeta)\,\Phi^R_n(0) + B^R_{u,n-1}(\zeta)$, (4.58)
3. $V^R_n(0) = A^R_{v,n-1}(\vec\zeta)\,\Phi^R_n(0) + B^R_{v,n-1}(\vec\zeta)$, (4.59)
with
$$A^R_{c,n}(A) = A^{-1}\left[(S^L_n)^{-1}\Phi^{L,*}_n(A)^H\left[K^R_n(A,A)\right]^{-1}\Phi^{L,*}_n(A) - I\right],\qquad B^R_{c,n}(A) = A^{-1}(S^L_n)^{-1}\Phi^{L,*}_n(A)^H\left[K^R_n(A,A)\right]^{-1}A\,\Phi^R_n(A), \tag{4.60}$$
$$A^R_{u,n-1}(\zeta) = I - (S^L_{n-1})^{-1}\Phi^{L,*}_{n-1}(\zeta)^HM\left(I+K^R_{n-1}(\zeta,\zeta)M\right)^{-1}\Phi^{L,*}_{n-1}(\zeta),\qquad B^R_{u,n-1}(\zeta) = -\zeta\,(S^L_{n-1})^{-1}\Phi^{L,*}_{n-1}(\zeta)^HM\left(I+K^R_{n-1}(\zeta,\zeta)M\right)^{-1}\Phi^R_{n-1}(\zeta), \tag{4.61}$$
$$A^R_{v,n-1}(\vec\zeta) = I - (S^L_{n-1})^{-1}F^{L,*}_{n-1}(\vec\zeta)^H\mathcal{M}\left[\Lambda^R_{n-1}(\vec\zeta)\right]^{-1}F^{L,*}_{n-1}(\vec\zeta),\qquad B^R_{v,n-1}(\vec\zeta) = -(S^L_{n-1})^{-1}F^{L,*}_{n-1}(\vec\zeta)^H\mathcal{M}\left[\Lambda^R_{n-1}(\vec\zeta)\right]^{-1}\left[\zeta I\oplus\bar\zeta^{-1}I\right]F^R_{n-1}(\vec\zeta). \tag{4.62}$$
Proof.
$$U^R_n(0) = \Phi^R_n(0) - (S^L_{n-1})^{-1}\Phi^{L,*}_{n-1}(\zeta)^HM\left(I+K^R_{n-1}(\zeta,\zeta)M\right)^{-1}\Phi^R_n(\zeta). \tag{4.64}$$
From (1.41), $\Phi^R_n(\zeta) = \zeta\Phi^R_{n-1}(\zeta) + \Phi^{L,*}_{n-1}(\zeta)\Phi^R_n(0)$, and substituting in (4.64) and taking
into account (4.61), (4.58) follows.
In this section, from two known sequences of Verblunsky matrix coefficients, we construct
a new one, and we show the relation between the corresponding CMV block matrices.
Verblunsky’s Theorem 1.4.2 guarantees the existence of the corresponding matrix measure,
that can be approximated by using the Bernstein-Szegő approximation theorem (Theorem
3.11 of [42]).
Let (αn )n≥0 , (α̂n )n≥0 be two sequences of Verblunsky matrix coefficients associated
with the positive definite matrix measures σ, σ̂, respectively, supported on $\mathbb{T}$, with corresponding CMV block matrices $\mathcal{C} = \mathcal{LM}$, $\hat{\mathcal{C}} = \hat{\mathcal{L}}\hat{\mathcal{M}}$.
We define a new sequence of Verblunsky matrix coefficients (αk,n )n≥0 by combining
k ≥ 1 elements of (αn )n≥0 and k elements of (α̂n )n≥0 , in the following way
$$\alpha_{k,n} = \begin{cases}\alpha_n, & n\in\{0,\ldots,k-1\}\cup\{2k,\ldots,3k-1\}\cup\{4k,\ldots,5k-1\}\cup\cdots,\\ \hat\alpha_n, & n\in\{k,\ldots,2k-1\}\cup\{3k,\ldots,4k-1\}\cup\{5k,\ldots,6k-1\}\cup\cdots.\end{cases}$$
For instance, for k = 1, the Verblunsky matrix coefficients and the Θ matrices are
$$\alpha_{1,n} = \begin{cases}\alpha_n, & n\text{ even},\\ \hat\alpha_n, & n\text{ odd};\end{cases}\qquad \Theta(\alpha_{1,n}) = \begin{cases}\Theta(\alpha_n), & n\text{ even},\\ \Theta(\hat\alpha_n), & n\text{ odd}.\end{cases}$$
Let Ck = Ck (αk,0 , αk,1 , . . .) = Lk Mk be the associated CMV block matrix. The case k = 1
becomes
$$\mathcal{C}_1 = \mathcal{L}_1\mathcal{M}_1 = \left(\Theta(\alpha_{1,0})\oplus\Theta(\alpha_{1,2})\oplus\Theta(\alpha_{1,4})\oplus\cdots\right)\left(I\oplus\Theta(\alpha_{1,1})\oplus\Theta(\alpha_{1,3})\oplus\cdots\right) = \left(\Theta(\alpha_0)\oplus\Theta(\alpha_2)\oplus\Theta(\alpha_4)\oplus\cdots\right)\left(I\oplus\Theta(\hat\alpha_1)\oplus\Theta(\hat\alpha_3)\oplus\cdots\right) = \mathcal{L}\hat{\mathcal{M}} = \mathcal{C}\left(\hat{\mathcal{L}}\mathcal{M}\right)^{-1}\hat{\mathcal{C}}.$$
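The identity $\mathcal{C}_1 = \mathcal{L}\hat{\mathcal{M}} = \mathcal{C}(\hat{\mathcal{L}}\mathcal{M})^{-1}\hat{\mathcal{C}}$ is purely algebraic and can be checked on finite unitary truncations of the factors (scalar case, with a trailing-1 closing convention of our own):

```python
import numpy as np

def theta(a):
    rho = np.sqrt(1 - abs(a)**2)
    return np.array([[np.conj(a), rho], [rho, -a]])

def lm_factors(alphas):
    """Finite unitary truncations of the CMV factors L and M (scalar case)."""
    n = len(alphas)
    L = np.zeros((n, n), dtype=complex)
    M = np.zeros((n, n), dtype=complex)
    for k in range(0, n, 2):          # L = Θ(α_0) ⊕ Θ(α_2) ⊕ ...
        L[k:k+2, k:k+2] = theta(alphas[k])
    M[0, 0] = M[n-1, n-1] = 1.0       # M = 1 ⊕ Θ(α_1) ⊕ Θ(α_3) ⊕ ... ⊕ 1
    for k in range(1, n-1, 2):
        M[k:k+2, k:k+2] = theta(alphas[k])
    return L, M

rng = np.random.default_rng(3)
n = 8
al = 0.6 * rng.uniform(size=n) * np.exp(2j * np.pi * rng.uniform(size=n))
ah = 0.6 * rng.uniform(size=n) * np.exp(2j * np.pi * rng.uniform(size=n))

L, M = lm_factors(al)       # C = L M  for (α_n)
Lh, Mh = lm_factors(ah)     # Ĉ = L̂ M̂ for (α̂_n)
C, Ch = L @ M, Lh @ Mh
C1 = L @ Mh                 # k = 1 interleaving: α on even sites, α̂ on odd sites
err = np.linalg.norm(C1 - C @ np.linalg.inv(Lh @ M) @ Ch)
```

The cancellation $\mathcal{C}(\hat{\mathcal{L}}\mathcal{M})^{-1}\hat{\mathcal{C}} = \mathcal{LM}\,\mathcal{M}^{-1}\hat{\mathcal{L}}^{-1}\,\hat{\mathcal{L}}\hat{\mathcal{M}} = \mathcal{L}\hat{\mathcal{M}}$ holds for any invertible factors, so the test does not depend on the truncation size.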
In general, the expression for the CMV block matrix $\mathcal{C}_k$ is given in the following result.
$$\mathcal{C}_k = \mathcal{L}P_kQ_k\hat{\mathcal{M}}, \tag{4.65}$$
$$Q_n = \begin{cases}\Theta(\alpha_n)\Theta(\hat\alpha_n)^H, & n\in\{1,3,\ldots,k-2\}+2mk,\ m\geq0,\\ I_{2l}, & n\in\{k,k+2,\ldots,2k-1\}+2mk,\ m\geq0,\end{cases}$$
which is (4.65). The odd case can be proved in a similar way. $P_k$ and $Q_k$ are unitary
because $\Theta(\alpha_n)$ and $\Theta(\hat\alpha_n)$ are unitary matrices for $n\geq0$.
Notice that $\lim_{k\to\infty}\mathcal{C}_k = \mathcal{LM}$. Furthermore, if $(\alpha_n)_{n\geq0}$ and $(\hat\alpha_n)_{n\geq0}$ are Hermitian
Verblunsky matrix coefficients, then $(\alpha_{k,n})_{n\geq0}$ also has Hermitian elements for all $k\geq1$.
According to Theorem 4.2.5, there exist unitary matrices $W_k$, $\hat W_k$, $V_k$ and $\hat V_k$ such that
$\mathcal{C}_k = W_k\mathcal{C}V_k$ and $\mathcal{C}_k = \hat W_k\hat{\mathcal{C}}\hat V_k$. Therefore we have $\hat{\mathcal{C}} = W\mathcal{C}V$, where $W = \hat W^H_kW_k$ and
$V = V_k\hat V^H_k$.
CHAPTER 5
Summary
In this chapter, we deduce conditions under which the Uvarov, Christoffel and Geronimus matrix transformations, applied to a matrix measure supported on the real line, are preserved
under the matrix Szegő transformation. In the scalar case, these transformations have
been studied (in both the real line and unit circle settings) in [20, 45, 88, 132, 139, 134, 179],
among others, and the analysis under the Szegő transformation has been developed in
[87, 148].
5.1 Introduction
We aim to study some spectral transformations of matrix measures on the real line under
the effect of the Szegő transformation. Namely, consider a normalized positive semi-definite matrix measure μ supported on [−1, 1]. As noted in Section 1.5, μ has an
associated (symmetric, positive semi-definite) matrix measure σ, supported on the unit
circle, which is the result of applying the matrix Szegő transformation to µ. The ap-
plication of the Uvarov, Christoffel and Geronimus spectral transformations to µ yields
another matrix measure, say µ̃, supported on [−1, 1]. Of course, the application of the
Szegő transformation to µ̃ will result in a matrix measure σ̃ supported on the unit circle.
We will determine conditions on the parameters of the Uvarov, Christoffel, and Geron-
imus transformations in such a way that σ̃ is the corresponding spectral transformation of
σ. The main tool will be the relation between the corresponding matrix-valued Stieltjes
and Carathéodory functions given in Theorem 1.5.4. We will consider the aforemen-
tioned transformations because, in the scalar case (see [179]), a finite composition of the
Christoffel and Geronimus transformations can be used to construct any perturbation of
the Stieltjes function of the form
$$\widetilde S(x) = \frac{A(x)S(x)+B(x)}{C(x)}, \tag{5.1}$$
Chapter 5. On the Szegő transformation for some spectral perturbations of measures
$$d\mu_{uN}(x) = R_u^{-1}\,d\mu_u(x)\,R_u^{-1} = R_u^{-1}\left[d\mu(x) + M_r\delta(x-\beta)\right]R_u^{-1},$$
$$d\sigma_{uN}(z) = T_u^{-1}\,d\sigma_u(z)\,T_u^{-1} = T_u^{-1}\left[d\sigma(z) + M_c\delta(z-\alpha) + M_c^H\delta(z-\bar\alpha^{-1})\right]T_u^{-1}. \tag{5.4}$$
Let $(\mu^u_n)_{n\geq0}$ and $(c^u_n)_{n\in\mathbb{Z}}$ be the sequences of matrix moments associated with the normalized matrix measures (5.4), respectively. The matrix-valued Stieltjes and Carathéodory
functions associated with the Uvarov transformation are deduced in the following result,
which generalizes the scalar case studied in [132].
Proposition 5.2.1. Let Su and Fu be the matrix-valued Stieltjes and Carathéodory func-
tions associated with the normalized Hermitian matrix measures µuN and σuN , as defined
in (5.4). Then,
Thus,
$$S_u(x) = \sum_{n=0}^{\infty}\frac{\mu^u_n}{x^{n+1}} = R_u^{-1}\left[\sum_{n=0}^{\infty}\frac{\mu_n}{x^{n+1}} + M_r\sum_{n=0}^{\infty}\frac{\beta^n}{x^{n+1}}\right]R_u^{-1} = R_u^{-1}\left[S(x) + M_r\frac1x\sum_{n=0}^{\infty}\left(\frac{\beta}{x}\right)^n\right]R_u^{-1},$$
with $|\beta| < |x|$ in order to guarantee the convergence of the series. As a consequence,
$$R_uS_u(x)R_u = S(x) + M_r\frac1x\left(1-\frac{\beta}{x}\right)^{-1} = S(x) + M_r(x-\beta)^{-1}.$$
Thus,
$$F_u(z) = I + 2\sum_{n=1}^{\infty}c^u_{-n}z^n = T_u^{-1}\left[I + M_c + M_c^H\right]T_u^{-1} + 2T_u^{-H}\left[\sum_{n=1}^{\infty}\left(c_{-n} + \alpha^{-n}M_c + \bar\alpha^nM_c^H\right)z^n\right]T_u^{-1},$$
and therefore
$$T_uF_u(z)T_u = F(z) + \left[I + 2\sum_{n=1}^{\infty}\left(\alpha^{-1}z\right)^n\right]M_c + \left[I + 2\sum_{n=1}^{\infty}\left(\bar\alpha z\right)^n\right]M_c^H.$$
Notice that, for convergence purposes, we need $|\alpha^{-1}z| < 1$ and $|\bar\alpha z| < 1$, so that if $|\alpha| < 1$
the region of convergence is $|z| < |\alpha|$. In this disc we have
$$T_uF_u(z)T_u = F(z) + \frac{\alpha+z}{\alpha-z}M_c + \frac{1+\bar\alpha z}{1-\bar\alpha z}M_c^H,$$
which is (5.6).
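In the scalar case, the Stieltjes identity $R_uS_u(x)R_u = S(x) + M_r(x-\beta)^{-1}$ can be verified directly from the moments; a minimal sketch with a discrete toy measure (all numbers below are our own choices, not data from the thesis):

```python
import numpy as np

# Toy scalar measure: μ = ½δ_a + ½δ_b, normalized (μ_0 = 1).
a, b = 0.2, -0.5
beta, Mr = 0.9, 0.3       # Uvarov mass point β and mass M_r
x = 3.0                   # evaluation point, |β| < |x|
Ru2 = 1.0 + Mr            # scalar R_u² = μ_0 + M_r

def stieltjes(points, weights, x, N=200):
    """Truncated moment expansion S(x) ≈ Σ_{n<N} μ_n / x^{n+1}."""
    return sum(sum(w * p**n for p, w in zip(points, weights)) / x**(n + 1)
               for n in range(N))

S = stieltjes([a, b], [0.5, 0.5], x)
Su = stieltjes([a, b, beta], [0.5, 0.5, Mr], x) / Ru2  # normalized Uvarov moments
lhs = Ru2 * Su            # R_u S_u(x) R_u in the scalar case
rhs = S + Mr / (x - beta)
```

The truncation at N = 200 terms is far below machine precision here because all ratios |point/x| are at most 0.3.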
Thus, we get
$$F_1(z) = R_u^{-1}\left[F(z) + M_r\frac{1-z^2}{z^2-2z\beta+1}\right]R_u^{-1}. \tag{5.7}$$
If we assume Fu (z) = F1 (z), then from Proposition 1.4.10 σuN = σ1 and dσuN = Sz(dµuN ).
Moreover, comparing the coefficients of z in (5.6) and (5.7) we get
$$\left(1-|\alpha|^2\right)\left(M_c-M_c^H\right) = 0.$$
1. If $|\alpha|\neq1$, then $M_c$ is a Hermitian matrix, and $F_u(z)$ becomes
$$F_u(z) = T_u^{-1}\left[F(z) + 2M_c\frac{\alpha-\bar\alpha z^2}{(z-\alpha)(\bar\alpha z-1)}\right]T_u^{-1}.$$
$$\frac{\alpha-\bar\alpha z^2}{(z-\alpha)(\bar\alpha z-1)} = \frac{1-z^2}{z^2-2z\beta+1}, \tag{5.8}$$
so that $\frac{\alpha}{\bar\alpha} = 1$, and thus $\alpha\in\mathbb{R}$ and $\frac{1+\alpha^2}{\alpha} = 2\beta$. As a consequence,
$$\alpha = \beta\pm\sqrt{\beta^2-1},\qquad\text{for } |\beta| > 1.$$
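The rational-function identity behind (5.8) is easy to spot-check numerically: with α real and $(1+\alpha^2)/\alpha = 2\beta$, both sides agree at every point of the unit circle (β and the test points below are our own choices):

```python
import numpy as np

beta = 1.5
alpha = beta - np.sqrt(beta**2 - 1)     # real root of α² - 2βα + 1 = 0, |β| > 1

z = np.exp(1j * np.linspace(0.1, 6.0, 7))   # a few test points on the unit circle
lhs = (alpha - np.conj(alpha) * z**2) / ((z - alpha) * (np.conj(alpha) * z - 1))
rhs = (1 - z**2) / (z**2 - 2 * beta * z + 1)
err = np.abs(lhs - rhs).max()
```

With α real, the left-hand denominator expands to $\alpha(z^2+1) - (1+\alpha^2)z$, so dividing through by α produces exactly the right-hand denominator $z^2 - 2\beta z + 1$.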
2. If $|\alpha| = 1$, we get
$$F_u(z) = T_u^{-1}\left[F(z) + \left(M_c+M_c^H\right)\frac{\alpha-\bar\alpha z^2}{(z-\alpha)(\bar\alpha z-1)}\right]T_u^{-1}.$$
Thus, $M_r = M_c + M_c^H$ and from (5.8) we deduce $\alpha = \pm1$ and $\beta = \pm1$.
$\alpha = \pm1$ and $M_r = M_c + M_c^H$.
Notice that this means that, under the stated conditions, the (matrix) Szegő trans-
formation of the Uvarov transformation of µ coincides with the Uvarov transformation of
the Szegő transformation of µ. The previous theorem constitutes a generalization of the
results obtained in [87] for the scalar case.
Example 5.2.3. Consider the Uvarov transformation of the matrix measure (1.91) with
$\beta = 1$ and $M_r = I_2$. Thus, $R_u = T_u = \sqrt2\,I_2$ and $\alpha = 1$. Then from (1.94), we have
$$Sz(d\mu_{uN}(x)) = Sz\left(\frac12\left[\frac{1}{\pi\sqrt{1-x^2}}\begin{pmatrix}1 & x\\ x & 1\end{pmatrix}dx + I_2\,\delta(x-1)\right]\right) = \frac12\left[\begin{pmatrix}1 & \cos\theta\\ \cos\theta & 1\end{pmatrix}\frac{d\theta}{2\pi} + I_2\,\delta(z-1)\right].$$
where $B\in\mathcal{M}_l$ is a Hermitian matrix and $x\notin\varrho(B)$. Notice that, in order to apply the
matrix Szegő transformation, we need $\mu_c$ to be a Hermitian, normalized matrix measure.
As a consequence, we will require that the matrices $B$ and μ commute. Also, we require
$xI-B$ to be a positive semi-definite matrix, so that $\mu_c$ is also positive semi-definite.
Therefore, we will consider the normalized matrix measure
$$d\mu_{cN}(x) = R_c^{-1}\,d\mu_c(x)\,R_c^{-1} = R_c^{-1}\left[d\mu(x)\,(xI-B)\right]R_c^{-1}, \tag{5.9}$$
where
$$R_c^2 = \int_{-1}^{1}d\mu(x)\,(xI-B) = \mu_1 - B, \tag{5.10}$$
and $R_c$ is a Hermitian matrix.
Similarly, on the unit circle, the Christoffel matrix transformation of σ can be defined
as
dσc (z) = (zI − A)H dσ(z)(zI − A)
and the respective normalized matrix measure is
$$d\sigma_{cN}(z) = T_c^{-1}\,d\sigma_c(z)\,T_c^{-1} = T_c^{-1}\left[(zI-A)^H\,d\sigma(z)\,(zI-A)\right]T_c^{-1}. \tag{5.11}$$
Proposition 5.3.1. Let Sc and Fc be the matrix-valued Stieltjes and Carathéodory func-
tions associated with the normalized Hermitian matrix measures µcN and σcN , as defined
in (5.9) and (5.11) respectively. Then,
Therefore,
$$S_c(x) = \sum_{n=0}^{\infty}\frac{\mu^c_n}{x^{n+1}} = R_c^{-1}\left[\sum_{n=0}^{\infty}\frac{\mu_{n+1}}{x^{n+1}} - \sum_{n=0}^{\infty}\frac{\mu_n}{x^{n+1}}B\right]R_c^{-1} = R_c^{-1}\left[-\mu_0 + x\sum_{n=0}^{\infty}\frac{\mu_n}{x^{n+1}} - S(x)B\right]R_c^{-1} = R_c^{-1}\left[S(x)(Ix-B) - I\right]R_c^{-1}.$$
Since $F_c(z) = I + 2\sum_{n=1}^{\infty}c^c_{-n}z^n$, we get
$$T_cF_c(z)T_c = I - c_{-1}A - A^Hc_1 + A^HA + 2\sum_{n=1}^{\infty}\left(c_{-n} - A^Hc_{-(n-1)} - c_{-(n+1)}A + A^Hc_{-n}A\right)z^n$$
$$= F(z) + A^HF(z)A - \left[c_{-1} + 2\sum_{n=1}^{\infty}c_{-(n+1)}z^n\right]A - A^H\left[c_1 + 2\sum_{n=1}^{\infty}c_{-(n-1)}z^n\right]$$
$$= F(z) + A^HF(z)A - \left[c_{-1} + \frac1z\left(F(z) - I - 2c_{-1}z\right)\right]A - A^H\left[c_1 + z(I+F(z))\right]$$
$$= \frac1z\left[-(zA^H-I)F(z)(zI-A) + \left(-z^2A^H + zc_{-1}A - zA^Hc_1 + A\right)\right].$$
Notice that $c^H_{-1} = c_1$, and thus $zc_{-1}A - zA^Hc_1 = z\left(c_{-1}A - (c_{-1}A)^H\right) = 2iz\,\mathrm{Im}(c_{-1}A)$, and
(5.14) follows.
Now, let us denote by $F_2$ the matrix-valued Carathéodory function associated with the
matrix Szegő transformation $d\sigma_2 = Sz(\mu_{cN})$. From (1.97) and (5.13), we get
$$F_2(z) = \frac{1-z^2}{2z}S_c(x) = \frac{1-z^2}{2z}R_c^{-1}\left[S(x)(Ix-B)-I\right]R_c^{-1} = R_c^{-1}\left[F(z)\left(\frac{z+z^{-1}}{2}I - B\right) - \frac{1-z^2}{2z}I\right]R_c^{-1}.$$
Therefore,
$$F_2(z) = \frac{R_c^{-1}}{\sqrt2}\,\frac1z\left[F(z)\left((z^2+1)I - 2zB\right) - (1-z^2)I\right]\frac{R_c^{-1}}{\sqrt2}. \tag{5.16}$$
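The algebra leading from the previous display to (5.16) can be sanity-checked in the scalar case, where the normalizing factors $R_c/\sqrt2$ drop out; the stand-in function S, the scalar b and the test point z below are our own choices, and the Szegő relation is used in the form assumed above:

```python
import numpy as np

# Scalar stand-ins: S for the Stieltjes function, b for the Hermitian matrix B,
# with the normalizing factors R_c/√2 dropped (they cancel on both sides).
S = lambda x: 1.0 / (x - 2.0)
b = 0.4
z = 0.3 + 0.2j
x = (z + 1 / z) / 2

F = (1 - z**2) / (2 * z) * S(x)                         # assumed Szegő relation (1.97)
F2_first = (1 - z**2) / (2 * z) * (S(x) * (x - b) - 1)  # F_2 built from S_c
F2_final = (F * ((z**2 + 1) - 2 * z * b) - (1 - z**2)) / (2 * z)  # shape of (5.16)
err = abs(F2_first - F2_final)
```

The key substitution is $x - b = \frac{(z^2+1) - 2zb}{2z}$, which turns the first expression into the second exactly.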
Notice that, in order to equate Fc and F2 , we will need to impose some conditions on
the matrix A.
Lemma 5.3.2. Let A ∈ Ml be a non-singular and Hermitian matrix. If A1/2 commutes
with Tc , then A1/2 commutes with both Fc and F.
Proof. It is clear that $A^{1/2}T_c = T_cA^{1/2}$ implies $A^{1/2}T_c^2 = T_c^2A^{1/2}$, i.e.,
\[
A^{1/2}\int_{-\pi}^{\pi}d\sigma_c = \int_{-\pi}^{\pi}A^{1/2}\,d\sigma_c = \int_{-\pi}^{\pi}d\sigma_c\,A^{1/2},
\]
and, as a consequence, $A^{1/2}$ commutes with $F_c$. On the other hand, since $A$ is Hermitian, it follows from (5.14) that $A^{1/2}$ commutes with $F$.
Let us assume that $A^{1/2}$ satisfies the conditions of the previous lemma. If $F_c = F_2$, then from Proposition 1.4.10 we have $\sigma_{cN} = Sz(\mu_{cN})$. Thus, from Lemma 5.3.2, (5.14) becomes
\[
F_c(z) = \frac{1}{z}\,T_c^{-1}\left[-F(z)\left((z^2+1)A - (I + A^2)z\right) - (z^2-1)A\right]T_c^{-1}.
\]
Comparing the terms not involving $F$, we have
\[
\frac{R_c^{-1}}{\sqrt{2}}\,(z^2-1)\,\frac{R_c^{-1}}{\sqrt{2}} = -T_c^{-1}(z^2-1)AT_c^{-1} = iT_c^{-1}A^{1/2}(z^2-1)\,iA^{1/2}T_c^{-1},
\]
and thus
\[
\frac{R_c^{-1}}{\sqrt{2}} = iA^{1/2}T_c^{-1}. \tag{5.17}
\]
As a consequence, from (5.10) and (5.12),
\[
-T_c^{-1}F(z)AT_c^{-1} = \frac{R_c^{-1}}{\sqrt{2}}\,F(z)\,\frac{R_c^{-1}}{\sqrt{2}},
\]
Theorem 5.3.3. Let $\mu$ be a positive semi-definite and normalized matrix measure supported on $[-1, 1]$ and let $\sigma$ be its associated matrix Szegő transformation, i.e. $d\sigma = Sz(d\mu)$. Let $A \in M_l$ be a non-singular, Hermitian matrix such that $A^{1/2}$ commutes with $T_c$. If
\[
T_c = i\sqrt{2}\,R_cA^{1/2}, \qquad A = B \pm \sqrt{B^2 - I} \ \text{ with } B^2 \geq I,
\]
where $B$ commutes with $\mu$, then $Sz(d\mu_{cN}(x)) = d\sigma_{cN}(z)$.
The following examples show that we can find matrices satisfying the hypotheses of
the previous result.
For instance, let
\[
B = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad A = B + \sqrt{B^2 - I_2} = \begin{pmatrix} 2+\sqrt{2} & 1+\sqrt{2} \\ 1+\sqrt{2} & 2+\sqrt{2} \end{pmatrix},
\]
where we have chosen the square root of $B^2 - I_2$ with positive entries. Notice that $A$ is a Hermitian and non-singular matrix. Moreover,
\[
A^{1/2} = \frac{1}{2}\begin{pmatrix} 2+\sqrt{2} & \sqrt{2} \\ \sqrt{2} & 2+\sqrt{2} \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1+\sqrt{2} & 1 \\ 1 & 1+\sqrt{2} \end{pmatrix}.
\]
Consider the normalized positive semi-definite matrix measures (1.91) and (1.92) satisfying $d\sigma = Sz(d\mu)$, and notice that $B$ commutes with (1.91). On the other hand, since
\[
c_{-1} = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix},
\]
we have
\[
T_c^2 = I_2 - 2c_{-1}A + A^2 = \begin{pmatrix} 5\sqrt{2}+9 & 5\sqrt{2}+6 \\ 5\sqrt{2}+6 & 5\sqrt{2}+9 \end{pmatrix},
\]
thus $T_c = \begin{pmatrix} a & b \\ b & a \end{pmatrix}$, where $a = \frac{1}{2}\sqrt{2}\sqrt{5\sqrt{2}+\sqrt{15}+\sqrt{30}+9}$ and $b = \frac{5\sqrt{2}+6}{2a}$. It is easy to see that $A^{1/2}$ commutes with $T_c$. Furthermore, since $c_0 = I_2$ and $c_{-n} = 0_2$ for $|n| \geq 2$, the corresponding matrix-valued Carathéodory function is
\[
F(z) = \begin{pmatrix} 1 & z \\ z & 1 \end{pmatrix}.
\]
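The matrices in this example can be verified numerically. The sketch below assumes the choice $B = \left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ (which reproduces the displayed $T_c^2$) and computes matrix square roots via the spectral decomposition; it is an illustration, not part of the development.

```python
# Numerical check of the example: with B = [[2,1],[1,2]],
#   A = B + sqrt(B^2 - I),  T_c^2 = I - 2 c_{-1} A + A^2,
# and A^{1/2} commutes with T_c.
import numpy as np

def psd_sqrt(M):
    # principal square root of a symmetric positive semi-definite matrix
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

I2 = np.eye(2)
B = np.array([[2.0, 1.0], [1.0, 2.0]])
A = B + psd_sqrt(B @ B - I2)
s2 = np.sqrt(2.0)
assert np.allclose(A, [[2 + s2, 1 + s2], [1 + s2, 2 + s2]])

Ah = psd_sqrt(A)  # A^{1/2}
assert np.allclose(Ah @ Ah, A)
assert np.allclose(Ah, [[1 + s2 / 2, s2 / 2], [s2 / 2, 1 + s2 / 2]])

c_m1 = np.array([[0.0, 0.5], [0.5, 0.0]])  # c_{-1}
Tc2 = I2 - 2 * c_m1 @ A + A @ A
assert np.allclose(Tc2, [[5 * s2 + 9, 5 * s2 + 6], [5 * s2 + 6, 5 * s2 + 9]])

Tc = psd_sqrt(Tc2)
a = 0.5 * s2 * np.sqrt(5 * s2 + np.sqrt(15) + np.sqrt(30) + 9)
b = (5 * s2 + 6) / (2 * a)
assert np.allclose(Tc, [[a, b], [b, a]])
assert np.allclose(Ah @ Tc, Tc @ Ah)  # A^{1/2} commutes with T_c
```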
We say that the matrix measure µGN is the normalized Geronimus matrix transformation
of a matrix measure µ if it satisfies
and again proceeding as in the proof for the Christoffel transformation, we get
\[
T_G^{-1}F(z)T_G^{-1} = \frac{1}{z}\left[(I - zA^H)F_G(z)(zI - A) + \left(-z^2A^H + zc_{-1}^gA - zA^Hc_1^g + A\right)\right].
\]
Since $\sigma_G$ is Hermitian, we have $c_1^g = (c_{-1}^g)^H$, and thus $A^Hc_1^g - c_{-1}^gA = (c_{-1}^gA)^H - c_{-1}^gA = -2i\,\mathrm{Im}(c_{-1}^gA)$. As a consequence, we obtain (5.22).
Now, let us compute F3 , the matrix-valued Carathéodory function associated with the
matrix measure dσ3 = Sz(dµGN ). From (1.97) and (5.21),
\[
\begin{aligned}
F_3(z) &= \frac{1-z^2}{2z}\,S_G(x)\\
&= \left[R_G^{-1}F(z)R_G^{-1} + \frac{1-z^2}{2z}I\right]\left(\frac{z+z^{-1}}{2}I - B\right)^{-1}\\
&= \left[R_G^{-1}F(z)R_G^{-1} + \frac{1-z^2}{2z}I\right]\left(\frac{(z^2+1)I - 2zB}{2z}\right)^{-1}.
\end{aligned}
\]
Therefore,
\[
F_3(z) = \left[2z\,R_G^{-1}F(z)R_G^{-1} + (1-z^2)I\right]\left[(z^2+1)I - 2zB\right]^{-1}. \tag{5.24}
\]
As in the Christoffel case, from (5.23) we can deduce the following auxiliary result.
Let us assume that $A^{1/2}$ satisfies the conditions of the previous lemma. In this situation, $F_G$ becomes
and from (5.25) we deduce $\frac{1}{i}T_GA^{1/2} = \frac{1}{\sqrt{2}}R_G$. Thus, we have proved the following result,
which constitutes the Geronimus transformation’s version of Theorem 5.3.3.
Theorem 5.4.3. Let $\mu$ be a positive semi-definite and normalized matrix measure supported on $[-1, 1]$ and let $\sigma$ be its associated matrix Szegő transformation, i.e. $d\sigma = Sz(d\mu)$. Let $A \in M_l$ be a non-singular, Hermitian matrix such that $A^{1/2}$ commutes with $T_G$. If
\[
T_G = \frac{i}{\sqrt{2}}\,R_GA^{-1/2}, \qquad A = B \pm \sqrt{B^2 - I} \ \text{ with } B^2 \geq I,
\]
where $B$ commutes with $\mu$, then $Sz(d\mu_{GN}(x)) = d\sigma_{GN}(z)$.
Notice that Examples 5.3.4 and 5.3.5 also satisfy the hypotheses of Theorem 5.4.3. Furthermore, if we extend (1.18) to matrix values $X \in M_l$, we have
\[
S(X) = \int_{-1}^{1}d\mu(t)\,(X - tI)^{-1} = \sum_{n=0}^{\infty}\mu_nX^{-(n+1)}, \qquad t \notin \varrho(X),
\]
and, in this situation, $R_G^2 = -S(B)$.
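The matrix extension of the Stieltjes series can also be tested numerically. The sketch below compares the truncated series against the scalar closed form $1/\sqrt{x^2-1}$ (valid for $x > 1$) applied entrywise to a diagonal $X$, using the Chebyshev moments as an assumed example.

```python
# Check of S(X) = sum_{n>=0} mu_n X^{-(n+1)} for the Chebyshev moments
# mu_n = binom(n, n/2)/2^n (even n), against the scalar closed form
# S(x) = 1/sqrt(x^2 - 1) applied to the eigenvalues of a diagonal X.
import numpy as np
from math import comb, sqrt

def mu(n):
    return comb(n, n // 2) / 2**n if n % 2 == 0 else 0.0

X = np.diag([2.0, 3.0])
Xinv = np.linalg.inv(X)
SX = sum(mu(n) * np.linalg.matrix_power(Xinv, n + 1) for n in range(200))
expected = np.diag([1 / sqrt(2.0**2 - 1), 1 / sqrt(3.0**2 - 1)])
assert np.allclose(SX, expected)
```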
CHAPTER 6
6.1 Conclusions
The results presented in this work have been focused on the study of algebraic and analytic
properties associated with spectral transformations of matrix measures supported either
on the unit circle or on the real line. Next, we summarize the main contributions of
this document. We point out that many of the results given in this work for matrix
polynomials have their counterparts in the scalar case, and they were developed by using
procedures and tools that are analogous to the ones used in the scalar case, with the
added complications that result from the non-commutativity of the matrices. Although two orthogonal sequences can be considered in the matrix framework, in most of the document we show the results for only one of them, since analogous results for the other can be obtained in a similar way.
1. In Chapter 2, we show that the perturbation of the matrix moments cj and c−j
of a positive definite matrix measure σ supported on the unit circle results in a
perturbation, defined by (2.24), of the moments µn associated with a positive definite
matrix measure µ supported on the interval [−1, 1], when both measures are related
through the Szegő matrix transformation. Moreover,
\[
Sz^{-1}(d\sigma_j(\theta)) = d\mu(x) + 2T_j(x)m_j\,\frac{dx}{\pi\sqrt{1-x^2}} = d\mu_j(x),
\]
where $d\sigma_j$ and $d\mu_j$ are defined by (2.20) and (2.23), respectively, and $T_j(x) = \cos(j\theta)$, with $x = \cos\theta$, is the $j$-th degree Chebyshev polynomial of the first kind on $\mathbb{R}$. The results are contained in [79], and the particular case $l = 1$ (scalar case) appears in [78].
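The identification $T_j(x) = \cos(j\theta)$ with $x = \cos\theta$, used in the formula above, can be checked against the three-term recurrence for the Chebyshev polynomials of the first kind; a small sketch:

```python
# Check that the Chebyshev polynomials of the first kind, generated by the
# recurrence T_0 = 1, T_1 = x, T_{j+1} = 2x T_j - T_{j-1}, satisfy
# T_j(cos(theta)) = cos(j*theta), i.e. T_j(x) = cos(j*arccos(x)) on [-1, 1].
from math import cos, acos

def T(j, x):
    t0, t1 = 1.0, x
    for _ in range(j):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

for j in range(8):
    for x in (-0.9, -0.3, 0.0, 0.4, 0.8):
        assert abs(T(j, x) - cos(j * acos(x))) < 1e-12
```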
Conclusions, open problems and future works
of both transformations is the same. These results can be found in [82] (Uvarov
transformation) and [80] (Christoffel transformation).
4. Given two sequences of Verblunsky matrix coefficients (αn )n≥0 , (α̃n )n≥0 , where both
αn and α̃n are Hermitian matrices for n ∈ N, in Section 4.2 we show that the
associated block matrices Θ(αn) and Θ(α̃n) are unitarily similar, i.e., there exist
unitary matrices Ξn ∈ M2l such that
5. Under some conditions, in Chapter 5 we show that the Uvarov matrix transformation
of a matrix measure µ supported on the interval [−1, 1] coincides with the Uvarov
matrix transformation of a matrix measure supported on the unit circle, when both pairs
of measures (µ and σ, and their corresponding transformations) are related through
the Szegő matrix transformation. The same occurs for the Christoffel and Geronimus
matrix transformations. These results are contained in [81].
In this section we formulate some open problems that have arisen during our research. We
intend to address these questions in future contributions.
1. In the scalar case it is well known that the zeros of the orthogonal polynomials with
respect to a measure supported on the unit circle lie in D, and the Gauss-Lucas Theorem
(see [16]) guarantees that the zeros of the derivatives of the orthogonal polynomials
also lie in D. In the matrix case, the location of the zeros of the derivatives of matrix
orthogonal polynomials, to the best of our knowledge, constitutes an open problem.
This result would allow us to study relative asymptotic properties of the sequences of derivatives of MOPUC.
2. In this document, we consider the case when the orthogonality measure is Hermitian.
For more general measures, sequences of biorthogonal matrix polynomials appear.
It would be interesting to extend the results in this document to the more general
framework of biorthogonality.
where
4. In sharp contrast with the scalar case, there exist matrix polynomials which do not
have a unique factorization in terms of degree 1 factors, or it could happen that a
factorization does not even exist. For example, the matrix polynomial
\[
W(z) = I_2z^2 - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}z
\]
can be written as
\[
W(z) = \left(I_2z - \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\right)\left(I_2z - \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\right) \quad\text{or}\quad W(z) = \left(I_2z - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}\right)I_2z,
\]
so its factorization is not unique, while other matrix polynomials cannot be factorized with degree 1 matrix polynomials at all. In this work, we studied the Christoffel matrix transformation with $W(z) = \prod_{j=1}^{m}(zI - A_j)$, with $A_j \in M_l$.
It would be an interesting problem to analyze the case when W (z) is an arbitrary
matrix polynomial. For MOPRL, a Christoffel matrix transformation has been
studied in [2, 3] with W (z) an arbitrary matrix polynomial.
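Reading the two factors of the example above as $A_1 = \left(\begin{smallmatrix}1&-1\\-1&1\end{smallmatrix}\right)$ and $A_2 = \left(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right)$, both factorizations can be checked by direct multiplication; a sketch:

```python
# Verify that both factorizations reproduce W(z) = I2 z^2 - diag(2, 2) z
# at a few sample points.
import numpy as np

I2 = np.eye(2)
A1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
A2 = np.array([[1.0, 1.0], [1.0, 1.0]])
D = np.diag([2.0, 2.0])

for z in (-1.5, 0.3, 2.0):
    W = I2 * z**2 - D * z
    fact1 = (I2 * z - A1) @ (I2 * z - A2)
    fact2 = (I2 * z - D) @ (I2 * z)
    assert np.allclose(fact1, W) and np.allclose(fact2, W)
```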
5. In Chapter 4, the relation between the CMV matrices associated with two Ver-
blunsky matrix sequences (αn )n≥0 , (α̃n )n≥0 is studied, with αn and α̃n Hermitian
matrices for all n ≥ 0. The more general situation of non-Hermitian matrices is an
open problem. In the scalar case, this has been considered in [139].
8. In Section 5.3, we assume that the matrix measure µ and the matrix B commute.
In this case, it can be shown that µ is unitarily equivalent to a matrix measure
consisting of two blocks (see [117]). It is an interesting open problem to deduce
what kind of transformation is obtained in each block.
1. $d\tilde{\mu}_c = (x - \xi)\,d\mu$, $\xi \notin \mathrm{supp}(\mu)$,
2. $d\tilde{\mu}_u = d\mu + m_r\delta(x - \xi)$, $\xi \notin \mathrm{supp}(\mu)$, $m_r \in \mathbb{C}$,
3. $d\tilde{\mu}_g = \frac{d\mu}{x - \xi} + m_r\delta(x - \xi)$, $\xi \notin \mathrm{supp}(\mu)$, $m_r \in \mathbb{C}$,
\[
\tilde{S}(x) = \frac{A(x)S(x) + B(x)}{D(x)},
\]
where A, B and D are polynomials in x. The three transformations defined above are
important due to the fact that any linear spectral transformation of a given Stieltjes
function (i.e. for any polynomials A, B and D) can be obtained as a combination
of Christoffel and Geronimus transformations (see [179]). A similar result holds for
linear spectral transformations of Carathéodory functions, which are defined in a
similar way (see [34]).
Bibliography
[1] M. Alfaro and F. Marcellán, Recent trends in orthogonal polynomials on the unit
circle, Orthogonal polynomials and their applications (Erice, 1990), IMACS Ann.
Comput. Appl. Math., vol. 9, Baltzer, Basel, 1991, pp. 3–14.
[5] A.I. Aptekarev and E.M. Nikishin, The scattering problem for a discrete Sturm-Liouville operator, Mat. Sb. (Russian) 121 (1983), no. 3, 327–358.
[6] G. Ariznabarreta and M. Mañas, Matrix orthogonal Laurent polynomials on the unit
circle and Toda type integrable systems, Adv. Math. 264 (2014), 396–463.
[7] Y. Arlinskiı̆, Conservative discrete time-invariant systems and block operator CMV
matrices, Methods Funct. Anal. Topology 15 (2009), no. 3, 201–236.
[8] , The Schur problem and block operator CMV matrices, Complex Anal. Oper.
Theory 8 (2014), no. 4, 875–923.
[9] R. Askey, Orthogonal polynomials and special functions, Society for Industrial and
Applied Mathematics, Philadelphia, Pa., 1975.
[11] C. Berg, Fibonacci numbers and orthogonal polynomials, Arab J. Math. Sci. 17
(2011), no. 2, 75–88.
[13] F. Beukers, Legendre polynomials in irrationality proofs, Bull. Austral. Math. Soc.
22 (1980), no. 3, 431–438.
[14] S. Birk, Deflated Shifted Block Krylov Subspace Methods for Hermitian Positive
Definite Matrices, Ph.D. thesis, Bergische Universität Wuppertal, 2015.
[20] M.I. Bueno and F. Marcellán, Polynomial perturbations of bilinear functionals and
Hessenberg matrices, Linear Algebra Appl. 414 (2006), no. 1, 64–83.
[21] A. Bultheel, M.J. Cantero, and R. Cruz-Barroso, Matrix methods for quadrature
formulas on the unit circle. A survey, J. Comput. Appl. Math. 284 (2015), 78–100.
[22] Zh. Bustamante and G. Lopes Lagomasino, Hermite-Padé approximations for Nikishin systems of analytic functions, Mat. Sb. 183 (1992), no. 11, 117–138.
[23] M. Cámara, J. Fábrega, M.A. Fiol, and F. Garriga, Some families of orthogonal poly-
nomials of a discrete variable and their applications to graphs and codes, Electron.
J. Combin. 16 (2009), no. 1, Research Paper 83, 30.
[24] M.J. Cantero, F.A. Grünbaum, L. Moral, and L. Velázquez, Matrix-valued Szegő
polynomials and quantum random walks, Comm. Pure Appl. Math. 63 (2010), no. 4,
464–507.
[26] M.J. Cantero, L. Moral, and L. Velázquez, Five-diagonal matrices and zeros of or-
thogonal polynomials on the unit circle, Linear Algebra Appl. 362 (2003), 29–56.
[29] K. Castillo, A new approach to relative asymptotic behavior for discrete Sobolev-
type orthogonal polynomials on the unit circle, Appl. Math. Lett. 25 (2012), no. 6,
1000–1004.
[30] K. Castillo, L.E. Garza, and F. Marcellán, Linear spectral transformations, Hessen-
berg matrices, and orthogonal polynomials, Rend. Circ. Mat. Palermo (2) Suppl.
(2010), no. 82, 3–26.
[32] , Zeros of Sobolev orthogonal polynomials on the unit circle, Numer. Algo-
rithms 60 (2012), no. 4, 669–681.
[35] M. Castro and F.A. Grünbaum, Orthogonal matrix polynomials satisfying first order
differential equations: a collection of instructive examples, J. Nonlinear Math. Phys.
12 (2005), no. 2, 63–76.
[39] T.S. Chihara, An introduction to orthogonal polynomials, Gordon and Breach, Sci-
ence Publisher, New York - London - Paris, 1978.
[40] A.E. Choque-Rivero and L.E. Garza, Moment perturbation of matrix polynomials,
Integral Transforms and Spec. Func. 26 (2015), no. 3, 177–191.
[42] D. Damanik, A. Pushnitski, and B. Simon, The analytic theory of matrix orthogonal
polynomials, Surv. Approx. Theory 4 (2008), 1–85.
[46] D. Day and L. Romero, Roots of polynomials expressed in terms of orthogonal poly-
nomials, SIAM J. Numer. Anal. 43 (2005), no. 5, 1969–1987.
[47] P.A. Deift, Orthogonal polynomials and random matrices: a Riemann-Hilbert approach, vol. 3, New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 1999.
[48] P. Delsarte, Y.V. Genin, and Y.G. Kamp, Orthogonal polynomial matrices on the
unit circle, IEEE Trans. Circuits and Systems 25 (1978), no. 3, 149–160.
[49] M. Derevyagin, O. Holtz, S. Khrushchev, and M. Tyaglov, Szegő's theorem for matrix orthogonal polynomials, J. Approx. Theory 164 (2012), no. 9, 1238–1261.
[50] H. Dette and D. Tomecki, Determinants of block Hankel matrices for random matrix-
valued measures, Stochastic Process. Appl. In Press, Uncorrected Proof, Available
online 2 March 2019.
[51] H. Dette and J. Wagener, Matrix measures on the unit circle, moment spaces, or-
thogonal polynomials and the Geronimus relations, Linear Algebra Appl. 432 (2010),
no. 7, 1609–1626.
[52] V.K. Dubovoj, B. Fritzsche, and B. Kirstein, Matricial version of the classical Schur
problem, vol. 129, Teubner Verlagsgesellschaft mbH, Stuttgart, 1992.
[53] N. Dunford and J.T. Schwartz, Linear operators, vol. 1, Part I: General Theory, Interscience Publishers, John Wiley & Sons, 1957.
[54] A.J. Durán, On orthogonal polynomials with respect to a positive definite matrix of measures, Canad. J. Math. 47 (1995), no. 1, 88–112.
[55] , Matrix inner product having a matrix symmetric second order differential
operator, Rocky Mountain J. Math. 27 (1997), no. 2, 585–600.
[57] A.J. Durán and E. Daneri, Ratio asymptotics for orthogonal matrix polynomials with
unbounded recurrence coefficients, J. Approx. Theory 110 (2001), no. 1, 1–17.
[59] A.J. Durán and E. Defez, Orthogonal matrix polynomials and quadrature formulas,
Linear Algebra Appl. 345 (2002), 71–84.
[60] A.J. Durán and F.A. Grünbaum, Orthogonal matrix polynomials satisfying second-order differential equations, Int. Math. Res. Not. 10 (2004), 461–484.
[65] A.J. Durán and P. López-Rodrı́guez, Orthogonal matrix polynomials: zeros and Blu-
menthal’s theorem, J. Approx. Theory 84 (1996), no. 1, 96–118.
[66] , Density questions for the truncated matrix moment problem, Canad. J.
Math. 49 (1997), no. 4, 708–721.
[67] , The Lp space of a positive definite matrix of measures and density of matrix polynomials in L1, J. Approx. Theory 90 (1997), no. 2, 299–318.
[68] A.J. Durán, P. López-Rodrı́guez, and E.B. Saff, Zero asymptotic behaviour for or-
thogonal matrix polynomials, J. Anal. Math 78 (1999), 37–60.
[69] A.J. Durán and B. Polo, Gaussian quadrature formulae for matrix weights, Linear
Algebra Appl. 355 (2002), 119–146.
[71] A.J. Durán and W. Van Assche, Orthogonal matrix polynomials and higher-order
recurrence relations, Linear Algebra Appl. 219 (1995), 261–280.
[72] S. Elhay, G.H. Golub, and J. Kautsky, Updating and downdating of orthogonal poly-
nomials with data fitting applications, SIAM J. Matrix Anal. Appl. 12 (1991), no. 2,
327–353.
[73] H. Flaschka, Discrete and periodic illustrations of some aspects of the inverse method,
(1975), 441–466. Lecture Notes in Phys., Vol. 38.
[75] E. Fuentes and L.E. Garza, Block Hessenberg matrices and orthonormal matrix poly-
nomials on the unit circle, Submitted.
[76] , CMV block matrices for symmetric matrix measures on the unit circle,
Linear Algebra Appl. (In press).
[78] , On a finite moment perturbation of linear functionals and the inverse Szegő
transformation, Rev. Integr. Temas Mat. 34 (2016), no. 1, 39–58.
[79] , Matrix moment perturbations and the inverse Szegő matrix transformation,
Rev. Union. Mat. Argent. 60 (2019), no. 2, (In press).
[80] E. Fuentes, L.E. Garza, and H. Dueñas, On a Christoffel transformation for matrix
measure with support on the unit circle, Submitted.
[81] , On the Szegő transformation for some spectral perturbations of matrix mea-
sures, Submitted.
[82] , On the Uvarov transformation for matrix measure with support on the unit
circle, Submitted.
[83] P.A. Fuhrman, Orthogonal matrix polynomials and system theory, Rend. Mat. Univ.
Politec., Torino, Special Issue (1987), 68–124.
[84] J.C. Garcı́a-Ardila, L.E. Garza, and F. Marcellán, An extension of the Geronimus
transformation for orthogonal matrix polynomials on the real line, Mediterr. J. Math.
13 (2016), no. 6, 5009–5032.
[88] L. Garza and F. Marcellán, Verblunsky parameters and linear special transforma-
tions, Methods Appl. Anal. 16 (2009), no. 1, 69–86.
[90] L. Gemignani and L. Robol, Fast Hessenberg reduction of some rank structured
matrices, SIAM J. Matrix Anal. Appl. 38 (2017), no. 2, 574–598.
[93] E. Godoy and F. Marcellán, An analogue of the Christoffel formula for polynomial
modification of a measure on the unit circle, Boll. Un. Mat. Ital. A(7)5 (1991), no. 1,
1–12.
[98] L.B. Golinskiı̆, Schur flows and orthogonal polynomials on the unit circle, Mat. Sb.
197 (2006), no. 8, 41–62.
[99] G.H. Golub and G. Meurant, Matrices, moments and quadrature with applications,
Princeton University Press, Princeton, NJ, 2010.
[100] G.H. Golub and C.F. Van Loan, Matrix computations, third ed., Johns Hopkins
Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore,
MD, 1996.
[101] A.A. Gonchar, The convergence of Padé approximants for certain classes of mero-
morphic functions, Mat. Sb. (N.S.) 97(139) (1975), no. 4(8), 607–629, 634.
[102] F.A. Grünbaum and M.D. De la Iglesia, A family of matrix valued orthogonal poly-
nomials related to SU(N + 1), its algebra of matrix valued differential operators and
the corresponding curves, Experiment. Math. 16 (2007), no. 2, 189–207.
[103] F.A. Grünbaum, L. Haine, and E. Horozov, Some functions that generalize the Krall-
Laguerre polynomials, J. Comput. Appl. Math. 106 (1999), no. 2, 271–297.
[104] F.A. Grünbaum and J. Tirao, The algebra of differential operators associated to a
weight matrix, Integral Equations Operator Theory 58 (2007), no. 4, 449–475.
[105] F.A. Grünbaum and M. Yakimov, Discrete bispectral Darboux transformations from
Jacobi operators, Pacific J. Math. 204 (2002), no. 2, 395–431.
[106] H. Helson and D. Lowdenslager, Prediction theory and Fourier series in several
variables, Acta Math. 99 (1958), 165–202.
[107] , Prediction theory and Fourier series in several variables. II, Acta Math.
106 (1961), 175–213.
[108] R.A. Horn and C.R. Johnson, Topics in matrix analysis, Cambridge University Press,
Cambridge, 1991.
[110] R.A. Horn and V.V. Sergeichuk, Canonical forms for complex matrix congruence and *congruence, Linear Algebra Appl. 416 (2006), no. 2-3.
[111] M.E.H. Ismail, Classical and quantum orthogonal polynomials in one variable, with two chapters by Walter Van Assche and a foreword by Richard A. Askey, reprint of the 2005 original, vol. 98, Cambridge University Press, Cambridge, 2009.
[112] M.E.H. Ismail and R.W. Ruedemann, Relation between polynomials orthogonal on
the unit circle with respect to different weights, J. Approx. Theory 71 (1992), no. 1,
39–60.
[113] W.B. Jones, O. Njåstad, and W.J. Thron, Moment theory, orthogonal polynomials,
quadrature, and continued fractions associated with the unit circle, Bull. London
Math. Soc. 21 (1989), no. 2, 113–152.
[114] T. Kailath, A. Vieira, and M. Morf, Inverses of Toeplitz operators, innovations, and
orthogonal polynomials, SIAM Rev. 20 (1978), no. 1, 106–119.
[115] S. Karlin and J. McGregor, Random walks, Illinois J. Math. 3 (1959), 66–81.
[116] R. Killip and I. Nenciu, Matrix models for circular ensembles, Int. Math. Res. Not.
50 (2004), 2665–2701.
[119] A.M. Krall, Hilbert space, boundary value problems and orthogonal polynomials, vol.
133, Birkhäuser Verlag, Basel, 2002.
[122] M.G. Kreı̆n, The fundamental propositions of the theory of representations of Her-
mitian operators with deficiency index (m, m), Ukrain. Mat. Žurnal 1 (1949), no. 2,
3–66.
[123] E. Kreyszig, Introductory functional analysis with applications, John Wiley & Sons
Inc., New York, 1989.
[124] P. Lancaster and M. Tismenetsky, Matrix analysis, second ed., Academic Press, New
York, 1985.
[128] G.L. Lopes, Convergence of Padé approximants for meromorphic functions of Stielt-
jes type and comparative asymptotics for orthogonal polynomials, Mat. Sb. (N.S.)
136(178) (1988), no. 2, 206–226, 301.
[129] , Relative asymptotics for polynomials orthogonal on the real axis, Mat. Sb.
137 (1988), 505–529.
[133] F. Marcellán and J. Hernández, Christoffel transforms and Hermitian linear func-
tionals, Mediterr. J. Math. 2 (2005), no. 4, 451–458.
[135] F. Marcellán and L. Moral, Sobolev-type orthogonal polynomials on the unit circle,
Appl. Math. Comput. 128 (2002), no. 2-3, 329–363.
[138] F. Marcellán and I. Rodrı́guez, A class of matrix orthogonal polynomials on the unit
circle, Linear Algebra Appl. 121 (1989), 233–241, Linear algebra and applications
(Valencia, 1987).
[139] F. Marcellán and N. Shayanfar, OPUC, CMV matrices and perturbations of measures
supported on the unit circle, Linear Algebra Appl. 485 (2015), 305–344.
[140] L. Miranian, Matrix-valued orthogonal polynomials on the real line: some extensions
of the classical theory, J. Phys. A 38 (2005), no. 25, 5731–5749.
[141] , Matrix valued orthogonal polynomials on the unit circle: some extensions
of the classical theory, Canad. Math. Bull. 52 (2009), no. 1, 95–104.
[142] P.M. Morse and H. Feshbach, Methods of theoretical physics, McGraw-Hill Book Co.,
Inc., New York-Toronto-London, 1953.
[143] J. Moser, Finitely many mass points on the line under the influence of an exponential
potential–an integrable system, (1975), 467–497. Lecture Notes in Phys., Vol. 38.
[145] A.F. Nikiforov and V.B. Uvarov, Special functions of mathematical physics, Birkhäuser Verlag, Basel, 1988.
[146] E.M. Nikishin, The discrete Sturm-Liouville operator and some problems of function theory (Russian), Trudy Sem. Petrovsk. 10 (1984), 3–77.
[147] E.M. Nikishin and V.N. Sorokin, Rational approximations and orthogonality, vol. 92,
American Mathematical Society, Providence, RI, 1991.
[148] F. Peherstorfer, A special class of polynomials orthogonal on the unit circle including
the associated polynomials, Constr. Approx. 12 (1996), no. 2, 161–185.
[149] E.A. Rakhmanov, Asymptotic properties of orthogonal polynomials on the real axis,
Mat. Sb. (N.S.) 119(161) (1982), no. 2, 163–203, 303.
[150] , On the asymptotics of the ratio of orthogonal polynomials, II, USSR Sb.
(1983), no. 46, 105–117.
[152] L. Rowen, Ring theory, vol. I, Academic Press, San Diego, 1988.
[153] W. Schoutens, Stochastic processes and orthogonal polynomials, vol. 146, Springer-
Verlag, New York, 2000.
[154] B. Schwarz and A. Zaks, Matrix Möbius transformations, Comm. Algebra 9 (1981),
no. 19, 1913–1968.
[155] B. Simon, The classical moment problem as a self-adjoint finite difference operator,
Adv. Math. 137 (1998), no. 1, 82–203.
[156] , Orthogonal polynomials on the unit circle. Part 1. Classical theory, vol. 54,
American Mathematical Society Colloquium Publications, American Mathematical
Society, Providence, RI, 2005.
[157] , Orthogonal polynomials on the unit circle. Part 2. Spectral theory, vol. 54,
American Mathematical Society Colloquium Publications, American Mathematical
Society, Providence, RI, 2005.
[158] , CMV matrices: Five years after, J. Comput. Appl. Math. 208 (2007), 120–
154.
[159] , Szegő's theorem and its descendants: Spectral theory for L2 perturbations of orthogonal polynomials, Princeton University Press, Princeton, NJ, 2011.
[160] A. Sinap, Gaussian quadrature for matrix valued functions on the real line, Pro-
ceedings of the International Conference on Orthogonality, Moment Problems and
Continued Fractions (Delft, 1994), vol. 65, 1995, pp. 369–385.
[161] , Gaussian quadrature for matrix valued functions on the unit circle, Electron.
Trans. Numer. Anal. 3 (1995), no. Sept., 96–115.
[162] A. Sinap and W. Van Assche, Polynomial interpolation and Gaussian quadrature for
matrix-valued functions, Linear Algebra Appl. 207 (1994), 71–114.
[164] T.J. Stieltjes, Quelques recherches sur la théorie des quadratures dites mécaniques, Ann. Sci. École Norm. Sup. 3 (1884), no. 1, 409–426.
[165] , Recherches sur les fractions continues, Ann. Fac. Sci. Toulouse Sci. Math.
Sci. Phys. 8 (1894), no. 4, 1–122.
[167] V. Totik, Weighted approximation with varying weight, Lecture Notes in Mathematics, vol. 1569, Springer-Verlag, Berlin, 1994.
[168] W. Van Assche, Rakhmanov’s theorem for orthogonal matrix polynomials on the unit
circle, J. Approx. Theory 146 (2007), no. 2, 227–242.
[169] L. Vinet and A. Zhedanov, An integrable system connected with the Uvarov-Chihara
problem for orthogonal polynomials, J. Phys. A 31 (1998), no. 47, 9579–9591.
[170] D. Watkins, Some perspectives on the eigenvalue problem, SIAM Rev. 35 (1993),
430–471.
[172] H.O. Yakhlef and F. Marcellán, Orthogonal matrix polynomials, connection between
recurrences on the unit circle and on a finite interval, Approximation, optimization
and mathematical economics (Pointe-à-Pitre, 1999), vol. 18, Physica, Heidelberg,
2001, pp. 369–382.
[173] , Relative asymptotics for orthogonal matrix polynomials with respect to per-
turbed matrix measure on the unit circle, Approx. Theory & its Appl. 18 (2002),
no. 4, 1–19.
[174] H.O. Yakhlef, F. Marcellán, and M.A. Piñar, Perturbations in the Nevai class of
orthogonal matrix polynomials, Linear Algebra Appl. 336 (2001), 231–254.
[175] , Relative asymptotics for orthogonal matrix polynomials with convergent recurrence coefficients, J. Approx. Theory 111 (2001), 1–30.
[177] D.C. Youla and N.N. Kazanjian, Bauer-type factorization of positive matrices and
the theory of matrix polynomials orthogonal on the unit circle, IEEE Trans. Circuits
and Systems CAS-25 (1978), no. 2, 57–69.
[178] Fuzhen Zhang (ed.), The Schur complement and its applications, Numerical Methods
and Algorithms, vol. 4, Springer-Verlag, New York, 2005.