Scientific Computation

Roland Glowinski, Stanley J. Osher, Wotao Yin, Editors

Splitting Methods in Communication, Imaging, Science, and Engineering
Scientific Computation
Editorial Board
J.-J. Chattot, Davis, CA, USA
P. Colella, Berkeley, CA, USA
R. Glowinski, Houston, TX, USA
P. Joly, Le Chesnay, France
D.I. Meiron, Pasadena, CA, USA
O. Pironneau, Paris, France
A. Quarteroni, Lausanne, Switzerland
and Politecnico of Milan, Italy
J. Rappaz, Lausanne, Switzerland
R. Rosner, Chicago, IL, USA
P. Sagaut, Paris, France
J.H. Seinfeld, Pasadena, CA, USA
A. Szepessy, Stockholm, Sweden
M.F. Wheeler, Austin, TX, USA
M.Y. Hussaini, Tallahassee, FL, USA
Editors

Roland Glowinski
Department of Mathematics
University of Houston
Houston, TX, USA

Stanley J. Osher
Department of Mathematics
UCLA
Los Angeles, CA, USA

Wotao Yin
Department of Mathematics
UCLA
Los Angeles, CA, USA
Preface

Operator-splitting methods have been around for more than a century, starting with their common ancestor, the Lie scheme, introduced by Sophus Lie in the mid-1870s. It seems, however, that one had to wait until after World War II to see these methods join the computational and applied mathematics mainstream, the driving force being their applicability to the solution of challenging problems from science and engineering modeled by partial differential equations. The main actors of this renewed interest in operator-splitting methods were Douglas, Peaceman, Rachford, and Wachspress in the USA, with the alternating direction implicit (ADI) methods, and Dyakonov, Marchuk, and Yanenko in the USSR, with the fractional step methods. These basic methodologies have given rise to many variants and improvements and generated a substantial literature consisting of many articles and a few books, of both theoretical and applied natures, with computational mechanics and physics being the main sources of applications. In the mid-1970s, tight relationships between the augmented Lagrangian methods of Hestenes and Powell and ADI methods were identified, leading to the alternating direction methods of multipliers (ADMM). Albeit originating from problems in continuum mechanics modeled by partial differential equations and inequalities, it was quickly realized that the ADMM methodology applies to problems outside the realm of partial differential equations and inequalities, in the information sciences in particular, an area where ADMM has enjoyed fast-growing popularity. The main reason for this popularity is that large-scale optimization problems most often have decomposition properties that ADMM can take advantage of, leading to modular algorithms well suited to parallelization. Another factor explaining ADMM's growing popularity during the last decade was the discovery, around 2007, of its many commonalities with the split Bregman algorithm, widely used first in image processing and then in compressed sensing, among other applications.
In late 2012, the three editors of this book were participating in a conference in Hong Kong, the main conference topics being scientific computing, image processing, and optimization. Since most lectures at the conference had some relation to operator splitting, ADMM, and split Bregman algorithms, the idea of a book dedicated to these topics was explored, considering the following facts: (i) The practitioners of the above methods have become quite specialized, forming subcommunities with very few interactions between them. (ii) New applications of operator-splitting and related algorithms appear on an almost daily basis. (iii) The diversification of the algorithms and their applications has become so large that a volume containing the contributions of a relatively large number of experts is necessary in order to interest a large audience; indeed, the last review publications on the above topics being quite specialized (as shown in Chapter 1), the editors did their very best to produce a broad-spectrum volume.
Following an agreement with Springer to publish a book on operator splitting, ADMM, split Bregman, and related algorithms, covering both theory and applications, experts were approached to contribute to this volume. We are pleased to say that most of them enthusiastically agreed to be part of the project.
This book is divided into chapters covering the history, foundations, applications, and recent developments of operator splitting, ADMM, split Bregman, and related algorithms. Due to size and time constraints, much relevant information could not be included in the book; the editors apologize to those authors whose contributions have been partially or totally overlooked.
Contents

1 Introduction
Roland Glowinski, Stanley J. Osher, and Wotao Yin
   1 Motivation and Background
   2 Lie's Schemes
   3 On the Strang Symmetrized Operator-Splitting Scheme
   4 On the Solution of the Sub-initial Value Problems
   5 Further Comments on Multiplicative Operator-Splitting Schemes
   6 On ADI Methods
   7 Operator Splitting in Optimization
   8 Bregman Methods and Operator Splitting
   References

Index
Chapter 1
Introduction
Abstract The main goal of this chapter is to present a brief overview of operator-splitting methods and algorithms applied to the solution of initial value problems and optimization problems, topics addressed in much more detail in the following chapters of this book. The various splitting algorithms, methods, and schemes to be considered and discussed include: the Lie scheme, the Strang symmetrized scheme, the Douglas-Rachford and Peaceman-Rachford alternating direction methods, the alternating direction method of multipliers (ADMM), and the split Bregman method. This chapter also contains a brief description of (most of) the following chapters of this book.
1 Motivation and Background

In December 2012, the three editors of this volume were together in Hong Kong, attending a conference honoring one of them (SO). A most striking fact during this event was the large number of lectures involving, if not fully dedicated to, operator-splitting methods, split Bregman and ADMM (Alternating Direction Methods of Multipliers) algorithms, and their applications. This was not surprising, considering that the title of the conference was Advances in Scientific Computing, Imaging Science and Optimization. Indeed, for many years, operator splitting has provided efficient methods for the numerical solution of a large variety of problems from
R. Glowinski ()
Department of Mathematics, University of Houston, Houston, TX 77204, USA
e-mail: roland@math.uh.edu
S.J. Osher • W. Yin
Department of Mathematics, UCLA, Los Angeles, CA 90095, USA
e-mail: sjo@math.ucla.edu; wotaoyin@math.ucla.edu
Mechanics, Physics, Finance, etc., modeled by linear and nonlinear partial differential equations and inequalities, with new applications appearing on an almost daily basis. Actually, there are situations where the only working solution methods are based on operator splitting (a dramatic example being provided by the numerical simulation of supernova explosions). Similarly, split Bregman and ADMM algorithms now enjoy very high popularity as efficient tools for the fast solution of problems from the information sciences. Finally, ADMM algorithms have found applications in the solution of large-scale optimization problems, due to their ability to take advantage of those special structures and associated decomposition properties which are common in large-scale optimization.
The three editors also observed a lack of communication between the communities and cultures associated with the 'vertices' of the 'triangle' PDE-oriented scientific computing - information sciences - optimization (some bridges fortunately do exist, but it is our opinion that more are needed).
From the various facts above, the three editors came to the conclusion that the time had come for a multi-author book, with chapters dedicated to the theory, practice, and applications of operator-splitting, ADMM, and Bregman algorithms and methods, one of the many goals of this book being to show the existing commonalities between these methods. Another justification of the present volume is that publications on the above topics with a review flavor are scarce, the most visible ones being, in chronological order:
1. The 1990 article by G.I. Marchuk in Volume I of the Handbook of Numerical Analysis [23]. This book-sized article (266 pages) is dedicated mostly to the numerical solution of linear and nonlinear time-dependent partial differential equations from Mechanics and Physics by operator-splitting and alternating direction methods, making it quite focused.
2. The (94-page) article by R.I. McLachlan and G. Reinout W. Quispel in Acta Numerica 2002 [24]. Despite the broad appeal of its title, namely Splitting methods, this long article is mostly focused on the time-discretization of dynamical systems, modeled by ordinary differential equations, by (symplectic) operator-splitting schemes. It ignores, for example, the contributions of Douglas, Marchuk, Peaceman, Rachford, and many other contributors to splitting methods.
3. Distributed optimization and statistical learning via the alternating direction method of multipliers [1]. This large article (122 pages) appeared in Foundations and Trends in Machine Learning; since its publication in 2011 it has been the most cited article on ADMM methods (2,678 citations as of November 26, 2015), evidence of both the popularity of ADMM and the importance of this publication. Actually, this article is mostly concerned with finite-dimensional convex problems and ignores applications involving differential equations and inequalities.
It seemed to us, at the time of the above Hong Kong conference (and also now), that a book (necessarily multi-authored), less focused than the above publications, and blending as much as possible the methods and points of view of the various splitting subcommunities, would be a useful and welcome tool for the splitting community at large. It would also be useful for those in search of efficient, modular, and relatively easy-to-implement solution methods for complex problems. We are glad that Springer shared this point of view and that outstanding contributors to this type of methods agreed to contribute to this volume.
In the following sections of this chapter we will give a relatively short commented description of operator-splitting methods and related algorithms, and provide the historical facts we are aware of. As expected, references will be made to the following chapters and to many related publications.
2 Lie’s Schemes
According to [5], it is Sophus Lie himself who introduced (in 1875) the first operator-splitting scheme recorded in history ([21], a reprint of the 1888 edition), in order to solve the following initial value problem (a flow, in the dynamical-systems community):
  dX/dt + (A + B)X = 0 in (0, T),
  X(0) = X_0,        (1.1)
where, in (1.1), A and B are two d × d time-independent matrices, X_0 ∈ R^d (or C^d), and 0 < T ≤ +∞. The solution of problem (1.1) reads as:
  X(t) = e^{-(A+B)t} X_0.        (1.2)

From the relation lim_{n→+∞} (e^{-Bt/n} e^{-At/n})^n = e^{-(A+B)t}, Lie suggested the following scheme for the approximate solution of problem (1.1) (with Δt > 0, t^n = nΔt, and X^n an approximation of X(t^n)):

  X^0 = X_0,
  X^{n+1} = e^{-BΔt} e^{-AΔt} X^n, ∀n ≥ 0.        (1.3)
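Scheme (1.3) is easy to experiment with numerically. The sketch below (an illustration added here, not part of the original text, assuming NumPy and SciPy are available) compares a splitting trajectory against the exact solution (1.2) for a pair of non-commuting matrices:

```python
import numpy as np
from scipy.linalg import expm

def lie_solve(A, B, X0, T, n_steps):
    """Apply scheme (1.3): X^{n+1} = e^{-B dt} e^{-A dt} X^n, n_steps times."""
    dt = T / n_steps
    eB, eA = expm(-B * dt), expm(-A * dt)
    X = X0.copy()
    for _ in range(n_steps):
        X = eB @ (eA @ X)
    return X

# Two non-commuting matrices; the exact solution is given by (1.2).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
X0 = np.array([1.0, 0.5])
T = 1.0
exact = expm(-(A + B) * T) @ X0

err_coarse = np.linalg.norm(lie_solve(A, B, X0, T, 50) - exact)
err_fine = np.linalg.norm(lie_solve(A, B, X0, T, 100) - exact)
# For a first-order scheme, halving dt roughly halves the error.
```

Halving the step size roughly halves the error, consistent with the first-order accuracy of the Lie scheme discussed below.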
Since computing the exponential of a matrix is a nontrivial operation, particularly for large values of d, splitting practitioners prefer to use the following (matrix-exponential-free) equivalent formulation of scheme (1.3):
  X^0 = X_0.        (1.4)

For n ≥ 0, X^n → X^{n+1/2} → X^{n+1} via:

• Solve
    dX_1/dt + A X_1 = 0 in (t^n, t^{n+1}),  X_1(t^n) = X^n,        (1.5)
• and set
    X^{n+1/2} = X_1(t^{n+1}).        (1.6)
• Similarly, solve
    dX_2/dt + B X_2 = 0 in (t^n, t^{n+1}),  X_2(t^n) = X^{n+1/2},        (1.7)
• and set
    X^{n+1} = X_2(t^{n+1}).        (1.8)
From its equivalent formulation (1.4)–(1.8), one understands better why the Lie scheme (1.3) is known as a fractional-step time discretization scheme. It is in fact the common ancestor of all the fractional-step and operator-splitting schemes. One can easily show that the Lie scheme (1.3), (1.4)–(1.8) is generically first order accurate, that is, X^n − X(t^n) = O(Δt). If A and B commute, the above scheme is obviously exact. Its generalization to splitting with more than two operators is discussed in Chapter 2. The generalization to nonlinear and/or non-autonomous initial value problems is very simple (formally at least). Indeed, let us consider the following initial value problem:
  dX/dt + A(X, t) = 0 in (0, T),
  X(0) = X_0,        (1.9)
and assume a decomposition A(X, t) = Σ_{j=1}^J A_j(X, t). The Lie scheme then generalizes as follows:

  X^0 = X_0.        (1.10)

For n ≥ 0, X^n → X^{n+1} via: for j = 1, …, J, solve
    dX_j/dt + A_j(X_j, t) = 0 in (t^n, t^{n+1}),  X_j(t^n) = X^{n+(j-1)/J},        (1.11)
and set
    X^{n+j/J} = X_j(t^{n+1}).        (1.12)
Assuming that the operators A_j are smooth enough, the generalized Lie scheme (1.10)–(1.12) is generically first order accurate at best. As shown in the following chapters, problem (1.9) is, despite its simplicity, a model for a large number of important applications: for example, it models the flow (in the dynamical system sense) associated with the optimality conditions of an optimization problem (= 0 being possibly replaced by ∋ 0).
3 On the Strang Symmetrized Operator-Splitting Scheme

The Strang symmetrized scheme

  X^0 = X_0,
  X^{n+1} = e^{-AΔt/2} e^{-BΔt} e^{-AΔt/2} X^n, ∀n ≥ 0,        (1.13)

is second order accurate (and exact if AB = BA). Scheme (1.13) can be generalized as follows in order to solve the initial value problem (1.9) (with t^{n+1/2} = (n + 1/2)Δt):
  X^0 = X_0.        (1.14)

For n ≥ 0, X^n → X^{n+1/2} → X̂^{n+1/2} → X^{n+1} via:

• Solve
    dX_1/dt + A_1(X_1, t) = 0 in (t^n, t^{n+1/2}),  X_1(t^n) = X^n,        (1.15)
• and set
    X^{n+1/2} = X_1(t^{n+1/2}).        (1.16)
• Similarly, solve
    dX_2/dt + A_2(X_2, t^{n+1/2}) = 0 in (0, Δt),  X_2(0) = X^{n+1/2},        (1.17)
• and set
    X̂^{n+1/2} = X_2(Δt).        (1.18)
• Finally, solve
    dX_1/dt + A_1(X_1, t) = 0 in (t^{n+1/2}, t^{n+1}),  X_1(t^{n+1/2}) = X̂^{n+1/2},        (1.19)
• and set
    X^{n+1} = X_1(t^{n+1}).        (1.20)
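In the linear, two-operator case, the second-order accuracy of the symmetrized scheme (1.13) can be checked numerically. A minimal sketch (an added illustration, assuming NumPy and SciPy; the matrices are arbitrary non-commuting examples):

```python
import numpy as np
from scipy.linalg import expm

def strang_solve(A, B, X0, T, n_steps):
    """Symmetrized scheme (1.13): X^{n+1} = e^{-A dt/2} e^{-B dt} e^{-A dt/2} X^n."""
    dt = T / n_steps
    eA2, eB = expm(-A * dt / 2), expm(-B * dt)
    X = X0.copy()
    for _ in range(n_steps):
        X = eA2 @ (eB @ (eA2 @ X))
    return X

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # a non-commuting pair
B = np.array([[1.0, 0.0], [1.0, 1.0]])
X0 = np.array([1.0, 0.5])
T = 1.0
exact = expm(-(A + B) * T) @ X0

err_coarse = np.linalg.norm(strang_solve(A, B, X0, T, 50) - exact)
err_fine = np.linalg.norm(strang_solve(A, B, X0, T, 100) - exact)
# For a second-order scheme, halving dt divides the error by about 4.
```

The error ratio near 4 under step halving distinguishes the symmetrized scheme from the first-order Lie scheme.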
4 On the Solution of the Sub-initial Value Problems

The various splitting schemes we have encountered so far are generically known (for obvious reasons) as multiplicative splitting schemes. Actually, these schemes are only semi-constructive, in the sense that one still has to specify how to solve in practice the various sub-initial value problems they produce. For a low order scheme like the Lie scheme, a very popular way to proceed is to solve the sub-initial value problems using just one step of the backward Euler scheme; applying this strategy to problem (1.9), with A(X, t) = Σ_{j=1}^J A_j(X, t), one obtains the following scheme:
  X^0 = X_0.        (1.21)

For n ≥ 0, X^n → X^{n+1/J} → … → X^{n+j/J} → … → X^{n+(J-1)/J} → X^{n+1} via: for j = 1, …, J, solve

  (X^{n+j/J} − X^{n+(j-1)/J})/Δt + A_j(X^{n+j/J}, t^{n+1}) = 0.        (1.22)
Scheme (1.21)–(1.22) (known by some practitioners as the Marchuk-Yanenko operator-splitting scheme) is at most first order accurate; however, its robustness and simplicity make it popular for the solution of complicated problems with poorly differentiable solutions involving a large number of operators. Actually, scheme (1.21)–(1.22) can easily accommodate non-smooth, possibly multi-valued, operators.
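For linear, time-independent operators A_j, each fractional step (1.22) reduces to one linear solve. A minimal sketch of scheme (1.21)–(1.22) (an added illustration, assuming NumPy; the operators below are arbitrary examples):

```python
import numpy as np

def marchuk_yanenko_step(ops, X, dt):
    """One step of (1.21)-(1.22) for linear, time-independent A_j:
    each substep solves (Y - X)/dt + A_j Y = 0, i.e. (I + dt A_j) Y = X."""
    I = np.eye(len(X))
    for A_j in ops:
        X = np.linalg.solve(I + dt * A_j, X)
    return X

def marchuk_yanenko_solve(ops, X0, T, n_steps):
    dt = T / n_steps
    X = X0.copy()
    for _ in range(n_steps):
        X = marchuk_yanenko_step(ops, X, dt)
    return X

# Three operators splitting A = A_1 + A_2 + A_3.
A1 = np.array([[1.0, 0.5], [0.0, 1.0]])
A2 = np.array([[0.5, 0.0], [0.5, 0.5]])
A3 = np.array([[0.5, 0.0], [0.0, 0.5]])
X0 = np.array([1.0, -1.0])
X_T = marchuk_yanenko_solve([A1, A2, A3], X0, 1.0, 200)
```

Since every substep is backward Euler, the overall accuracy is first order, but each substep only involves the single operator A_j, which is the point of the method.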
6 On ADI Methods
The Peaceman-Rachford scheme computes X^{n+1/2}, and then X^{n+1}, using a scheme of the backward (resp., forward) Euler type with respect to A_1 (resp., A_2) on the time interval [t^n, t^{n+1/2}]; then, one switches the roles of A_1 and A_2 on the time interval [t^{n+1/2}, t^{n+1}]. The following scheme realizes this program:
  X^0 = X_0.        (1.24)

For n ≥ 0, X^n → X^{n+1/2} → X^{n+1} as follows: solve

  (X^{n+1/2} − X^n)/(Δt/2) + A_1(X^{n+1/2}, t^{n+1/2}) + A_2(X^n, t^n) = 0,        (1.25)

and

  (X^{n+1} − X^{n+1/2})/(Δt/2) + A_1(X^{n+1/2}, t^{n+1/2}) + A_2(X^{n+1}, t^{n+1}) = 0.        (1.26)
In the particular case where A_1 and A_2 are linear, time-independent operators which commute, the Peaceman-Rachford scheme (1.24)–(1.26) is second order accurate; it is at best first order accurate in general. The convergence of the above scheme has been proved in [22] and [18] under quite general monotonicity hypotheses concerning the operators A_1 and A_2 (see also [10, 11], and [20]); indeed, A_1 and A_2 can be nonlinear, unbounded, and even multivalued (if this is the case, ∋ 0 has to replace = 0 in (1.25) and/or (1.26)).
Remark 1. For those fairly common situations where A_2 is a smooth univalued operator, but operator A_1 is a 'nasty' one (discontinuous and/or multivalued, etc.), one should use the equivalent formulation of the Peaceman-Rachford scheme obtained by replacing (1.26) by

  (X^{n+1} − 2X^{n+1/2} + X^n)/(Δt/2) + A_2(X^{n+1}, t^{n+1}) = A_2(X^n, t^n).        (1.27)
The Douglas-Rachford scheme reads as follows:

  X^0 = X_0.        (1.28)

For n ≥ 0, X^n → X̂^{n+1} → X^{n+1} via: solve

  (X̂^{n+1} − X^n)/Δt + A_1(X̂^{n+1}, t^{n+1}) + A_2(X^n, t^n) = 0,        (1.29)

then

  (X^{n+1} − X^n)/Δt + A_1(X̂^{n+1}, t^{n+1}) + A_2(X^{n+1}, t^{n+1}) = 0.        (1.30)

When A_2 is a smooth univalued operator, (1.30) can be replaced by the equivalent relation

  (X^{n+1} − X̂^{n+1})/Δt + A_2(X^{n+1}, t^{n+1}) = A_2(X^n, t^n).        (1.31)
Remark 5. To those wondering how to choose between the Peaceman-Rachford and Douglas-Rachford schemes, we will say that, on the basis of many numerical experiments, the second scheme seems more robust and faster in those situations where one of the operators is non-smooth (multivalued or singular, for example), particularly if one is interested in capturing steady state solutions. We will give a (kind of) justification in Chapter 2, based on the inspection of some simple particular cases.
Remark 6. Optimization algorithms and ADI methods did not interact much for many years. The situation started changing when, in the mid-1970s, unexpected relationships between some augmented Lagrangian algorithms and the Douglas-Rachford scheme (1.28)–(1.30) were identified (they are reported in Chapter 2, Section 3, and Chapter 8, Section 1, of this volume). This discovery led to what is now called the Alternating Direction Methods of Multipliers (ADMM), known as ALG2 at the time. Although the problems leading to ADMM were related to partial differential equations or inequalities, this family of algorithms has found applications outside the realm of differential operators, as shown by [1] and several chapters of this book. Further details on the 'birth' of ADMM can be found in [15] and Chapter 4 of [16].
The above remark is a natural introduction to the more detailed discussion, below, of the role of operator-splitting methods in optimization.

7 Operator Splitting in Optimization

Several chapters of this book study operator splitting methods for solving optimization problems in (non-PDE-related) signal processing, imaging, and statistical and machine learning, as well as problems defined on a network that require decentralized computing. They cover a large variety of problems and applications. The optimization problems considered in these chapters include constrained and unconstrained problems, smooth and nonsmooth functionals, and convex and nonconvex functionals.
Operator splitting methods are remarkably powerful because they break numerically inconvenient combinations in a problem, such as smooth + nonsmooth functionals, smooth functionals + constraints, functionals of local variables + functionals of a shared global variable, and convex + nonconvex functionals, into separate subproblems. They also break a sum of functionals involving different parts of the input data into subproblems that can be solved with less memory. These features are especially important for modern applications that routinely process large amounts of data.
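As a concrete illustration of this decoupling (an added sketch, not taken from the book, assuming NumPy), ADMM splits the lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁ into a smooth quadratic subproblem and a simple soft-thresholding step:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=300):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.
    The smooth term and the l1 term are handled in separate subproblems."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached for the x-update
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                            # scaled dual variable
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))          # smooth quadratic subproblem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u += x - z                             # dual update on the constraint
    return z

# With A the identity, the minimizer is the soft-thresholding of b.
z = admm_lasso(np.eye(3), np.array([2.0, 0.5, -1.5]), lam=1.0)
# z ≈ [1.0, 0.0, -0.5]
```

Neither subproblem ever sees the "inconvenient combination" of the quadratic and the l1 term together, which is precisely the modularity discussed above.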
Operator splitting methods appear in optimization in a variety of special forms, and thus under different names, such as gradient-projection, proximal-gradient, alternating-direction, split Bregman [19], and primal-dual splitting methods. All of them have strong connections to (often as special cases of) the forward-backward, Douglas-Rachford, and Peaceman-Rachford splitting methods. Recently, their applications in optimization have increased significantly because of the emerging need to analyze massive amounts of data in a fast, distributed, and even streaming manner.
A common technique in data analysis is sparse optimization, a new subfield of optimization. Sparse here means simple structures in the solutions, a generalization from its literal meaning of having very few nonzero elements. Owing much