OPTIMAL CONTROL and FORECASTING of COMPLEX DYNAMICAL SYSTEMS


Ilya Grigorenko
World Scientific



OPTIMAL CONTROL and FORECASTING of COMPLEX DYNAMICAL SYSTEMS

Ilya Grigorenko
University of Southern California

USA

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI

Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

OPTIMAL CONTROL AND FORECASTING OF COMPLEX DYNAMICAL SYSTEMS

Copyright © 2006 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-256-660-0

Printed in Singapore by World Scientific Printers (S) Pte Ltd

To my beautiful wife Elena


Preface

Chance, however, is the governess of life.
— Palladas, 5th Century A.D., Anthologia Palatina 10.65

This book has appeared by choice, but also by some chance. In the summer of 2003 I gave a talk at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, where I was kindly invited by Prof. Dr. Jan-Michael Rost. It happened that among the people who attended my talk was a representative of World Scientific. One month later I received an invitation to write this book.

The purpose of this text is to summarize and share with the reader the author's experience in the field of complex systems. The title of this book was constructed to cover the different topics of the author's recent research as fully as is possible in a few words. The main aim of this book is to show the variety of problems in modern physics which can be formulated in terms of optimization and optimal control. This idea is not new: since the 18th century it has been known that almost any physical (or mechanical) problem can be formulated as an extremum problem. Such an approach is called the Lagrangian formalism, after the great French mathematician Lagrange. The text is written in such a way that all the chapters are logically coupled to each other, so the reader should be ready to be referred to different parts of the book.

In this book the author has tried to adopt a naive division of the complexity hierarchy. The simplest case is control and forecasting of systems which one can describe with the help of linear differential equations, where the control fields enter these equations as additive inhomogeneous terms. The situation becomes more complicated when the control appears not additively but multiplicatively: this leads to a nonlinear problem for the search of the control fields. A typical example is control of a quantum system, where a control field enters the Schrödinger equation as a product with the system's wavefunction. The next level of complexity appears when the nonlinearity of the controlled system itself is taken into account. Such problems are still tractable; as an example one can consider control of a Bose-Einstein condensate (BEC), whose dynamics is described by the Gross-Pitaevskii equation. Note that it is still assumed here that we know the explicit form of the mathematical equations governing the system's dynamics; the dynamics of the controlled system could, however, be very complicated (chaotic). Additional complexity arises if the dynamics of the system becomes non-deterministic, with the addition of some stochastic component. The most difficult situation occurs when we need to control a black-box system (such as a biological, financial or social system), for which ab initio evolution equations are unknown.

Chapter 1 provides an introduction to the long mathematical history of the calculus of variations, starting from Fermat's variational principle, the famous Bernoulli brachistochrone problem and the beginnings of the calculus of variations. Despite the limited applicability of analytical methods, the calculus of variations remains a vital instrument for solving various variational problems. The author could go deeper into ancient times and start his story with princess Dido's problem, but he has the feeling that the brachistochrone problem belongs to scientific history, while Dido's problem is just a beautiful ancient legend based on Virgil's Aeneid, without clear proof of Dido's priority.

In chapter 2 we discuss different aspects of numerical optimization, including the effectiveness of optimization algorithms and multiobjective optimization.
We make a brief review of some popular numerical methods which can be useful for the solution of various problems in optimization, control and forecasting. We give a broader review of the so-called "Quantum Genetic Algorithm", which operates on smooth differentiable functions with a limited absolute value of their gradient. As an example, we demonstrate its ability to solve few-body quantum statistical problems in 1D and 2D, posed as minimization of the ground-state energy or maximization of the partition function, and we consider different scenarios of the formation and melting of a "Wigner molecule" in a quantum dot.

Chapter 3 outlines some elements of chaos theory and a deep connection between nonlinearity and complexity in different systems. In this chapter we give a generalization of the Lorenz system using fractional derivatives, and we show how the "effective dimension" of the system controls its dynamical behavior, including a transition from chaos to regular motion.

In chapter 4 we discuss a problem of optimal control in application to nanoscale quantum systems. We introduce a novel approach which permits us to obtain new analytical solutions of different optimal control problems. We also solve a problem of optimal control of the induced photo-current between two quantum dots using a genetic algorithm.

In chapter 5 we continue to consider control of quantum systems, with particular application to quantum computing. We analyze how decoherence processes, which result in non-unitary evolution of a quantum system, in general change the optimal control fields. We show that an optimal design of artificial quantum bits can decrease by an order of magnitude the number of errors due to quantum decoherence processes and leads to a faster performance of basic quantum logical operations. This question is very significant for the future design of nanoscale devices, since decoherence significantly limits optimal control.

In chapter 6 we briefly discuss different aspects of forecasting and its connection with optimization and chaos theory, which we discuss in the previous chapters.

I would like to conclude this introduction with an acknowledgment of my teachers and colleagues, who helped and guided my research over the last years. I would like to thank my scientific supervisors and colleagues: Prof. B. G. Matisov, Dr. I. E. Mazets, Prof. K. H. Bennemann, Dr. D. V. Khveshchenko, Prof. M. E. Garcia, Prof. A. F. J. Levi and Prof. S. Haas. Most of the results presented or mentioned in this book were obtained in close collaboration with these nice people.

Ilya A. Grigorenko


Contents

Preface vii

1. Analytical methods in control and optimization 1
1.1 Calculus of variations 1
1.1.1 The beginning: Fermat's variational principle 2
1.1.2 The "beautiful" Brachistochrone Problem 4
1.1.3 Euler-Lagrange equation 6
1.1.4 A word about distance between two functions 11
1.1.5 The Brachistochrone problem revisited 12
1.1.6 Generalizations of the Euler-Lagrange equation 14
1.1.7 Transversality conditions 16
1.1.8 Conditional extremum: Lagrange multipliers method 16
1.1.9 Mixed Optimal problem 19
1.1.10 Approximate methods of solution: Ritz's method 20
1.2 Optimal control theory 21
1.2.1 Sensitivity analysis 25
1.2.2 Null controllability 26
1.2.3 Problems with constrained control 26
1.3 Summary 27

2. Numerical optimization 29
2.1 Sensitivity to numerical errors 29
2.2 Global Optimization: searching for the deepest hole on a golf field in the darkness using a cheap laser pointer 30
2.2.1 The halting problem and No Free Lunch Theorem 33
2.3 Multiobjective optimization 34
2.3.1 Pareto front 35
2.3.2 The weighted-sum method 37
2.4 Simplex method 38
2.5 Simulated annealing: "crystallizing" solutions 42
2.6 Introduction to genetic algorithms 44
2.7 GA for a class of smooth (differentiable) functions 49
2.7.1 The ground state problem in one and two dimensions 57
2.7.2 Extension of the QGA to quantum statistical problems 58
2.7.3 Formation of a "Wigner molecule" and its "melting" 66
2.8 Application of the GA to the eigenproblem 69
2.9 Evolutionary gradient search and Lamarckianism 74
2.10 Summary 76

3. Chaos in complex systems 77
3.1 Lorenz attractor 80
3.2 Control of chaotic dynamics of the fractional Lorenz system 83
3.3 Summary 91

4. Optimal control of quantum systems 93
4.1 Density matrix formalism 95
4.2 Liouville equation for the reduced density matrix 96
4.3 Modern variational approach to optimal control of quantum systems 99
4.3.1 An alternative analytical theory 100
4.4 An approximate analytical solution for the case of a two level system 105
4.5 Optimal control of a time averaged occupation of the excited level in a two-level system 109
4.5.1 Analytical solution for optimal control field 114
4.5.2 Optimal control at a given time 117
4.5.3 Estimation of the absolute bound for the control due to decoherence 119
4.6 Optimal control of nanostructures: double quantum dot 121
4.6.1 The optimal field for the control of the photon assisted tunnelling between quantum dots 124
4.7 Analytical theory for control of multi-photon transitions 138
4.8 Summary 145

5. Optimal control and quantum computing 147
5.1 Robust two-qubit quantum registers 147
5.2 Optimal design of universal two-qubit gates 157
5.3 Entanglement of a pair of qubits 166
5.4 Summary 168

6. Forecasting of complex dynamical systems 171
6.1 Forecasting of financial markets 171
6.2 Autoregressive models 172
6.3 Chaos theory embedding dimensions 174
6.4 Modelling of economic "agents" and El Farol bar problem 175
6.5 Forecasting of the solar activity 176
6.6 Noise reduction and Wavelets 177
6.7 Finance and random matrix theory 179
6.8 Neural Networks 180
6.9 Summary 181

Bibliography 183
Index 197

Chapter 1

Analytical methods in control and optimization

1.1 Calculus of variations

Physics, Chemistry, Engineering and Finance often pose problems which have in common that one has to choose the best (in a certain sense) solution among a huge set of possible ones. Such problems involve optimization, in particular optimal control, optimal design, optimal decision, long-term and short-term forecasting, etc. All these optimization problems, which involve nonlinearity and multi-dimensionality, can rarely be solved analytically: some numerical methods must be used. However, one usually cannot learn much from a particular numerical solution of a complex optimization problem; it can be too complicated for analysis. Fortunately, one can get some insight from studying relatively simple problems that can be solved analytically. That is why this book begins with an introduction to analytical techniques. In this chapter we are going to discuss methods for the analytical solution of optimal control problems which were developed by many outstanding mathematicians during the last 300 years. One of these techniques, based on the variational approach, is provided by the Variational Calculus: a method based on the Euler-Lagrange theory. A functional is a correspondence which assigns a definite real number to each function belonging to some class; thus, one might say that a functional is a kind of function where the independent variable is itself a function. The Euler-Lagrange method can be considered as a generalization of the condition for a local extremum of functions of a real variable, f'(x) = 0, to problems of functional analysis. Although it is useful for obtaining analytical solutions of optimization or optimal control problems only in some simple cases, it demonstrates very transparently the complexity and richness of optimal control theory.
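To make the notion of a functional concrete, here is a minimal numerical sketch (illustrative Python, not from the book): the arc-length functional assigns a single real number to each curve y(x) on [0, 1].

```python
import math

def arc_length(y, x0=0.0, x1=1.0, n=10000):
    """A functional: maps a function y(x) to the number ∫ sqrt(1 + y'(x)^2) dx,
    approximated here by summing the lengths of n straight segments."""
    h = (x1 - x0) / n
    total = 0.0
    for i in range(n):
        dy = y(x0 + (i + 1) * h) - y(x0 + i * h)
        total += math.hypot(h, dy)
    return total

print(round(arc_length(lambda x: 0.0), 6))    # ≈ 1.0       (horizontal segment)
print(round(arc_length(lambda x: x), 6))      # ≈ 1.414214  (diagonal, sqrt(2))
print(round(arc_length(lambda x: x * x), 4))  # ≈ 1.4789    (parabola y = x^2)
```

Different input functions give different output numbers — the "independent variable" of arc_length is itself a function, exactly as in the definition above.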

In the following section we give a short historical survey of the most significant discoveries in the theory of the variational calculus.

1.1.1 The beginning: Fermat's variational principle

The name of the great French mathematician Pierre Fermat (1601–1665) is connected to many famous and even intriguing pages of Mathematics. Perhaps everyone has heard of the great "Fermat's Last Theorem", which challenged mathematicians for more than 300 years. Many mathematicians questioned Fermat's claim that he knew a proof of this theorem; finally the theorem was resolved by Andrew Wiles, a professor of mathematics at Princeton, in 1993. It is less known that Fermat made the first formulation of a variational principle for a physical problem. To his letter to a colleague, Marin Cureau de la Chambre, dated 1st January 1662, Fermat attached two papers: "The analysis of refractions" and "The synthesis of refractions", in which he considered a problem of light propagation in optical media, which gave rise to many branches of functional analysis and optimal control theory. Fermat's basic idea was that a ray of light goes along the trajectory (among all possible trajectories joining the same points) for which the time of travel is minimal. This is named Fermat's variational principle in geometrical optics. In a homogeneous medium (for example, in the air), where the speed of light is constant at all points and in all directions, the time of travel along a trajectory is proportional to its length. Therefore, the minimum-time trajectory connecting two points A and B is simply the straight line.

By means of his principle Fermat was able to give a completely clear derivation of Snell's law of light refraction, which was originally established experimentally by the Dutch mathematician Willebrord van Roijen Snell (1580–1626). Here we would also like to mention that Snell's law had probably been discovered first by the 10th-century Islamic scholar Abu Said al-Ala Ibn Sahl in his work "On the Burning Instruments", and was later rediscovered independently by several European scientists, including another great French mathematician and philosopher, René Descartes (1596–1650). Let us give a brief description of Snell's law, since we are going to use it in the next subsection. Let us consider a parallel beam of rays of light incident on a horizontal plane

interface S between two homogeneous optical media (see Fig. 1.1).

Fig. 1.1 Snell's law.

Snell's law says:

sin(α)/v₁ = sin(β)/v₂,

where α and β are the angle of incidence and the angle of refraction (reckoned from the normal N to S), and v₁ and v₂ are the speeds of light above and below S.

Quite remarkably, in one of his above-mentioned letters Fermat cited the great Italian scientist Galileo Galilei (1564–1642) and his work titled "Two New Sciences", published in 1638. In this work Galileo apparently first considered the problem of finding the path of the fastest descent under the action of gravity. For Fermat it was very significant that Galileo's solution (which was incorrect) was not just a straight line representing the shortest way.

After Fermat's discovery of the first variational principle, many others were proposed by different scientists. Variational formulations became known in Mechanics, Electrodynamics, Quantum Mechanics, Quantum Field Theory, etc. Nowadays it is common knowledge in science that any natural law can be formulated as an extremal problem. Here we can quote the great Swiss mathematician Leonhard Euler (1707–1783): "In everything that occurs in the world the meaning of a maximum or a minimum can be seen". The variational principle also inspired metaphysically inclined thinkers, who understood it as a concrete mathematical expression of the idea of the great German philosopher and scientist Gottfried Wilhelm Leibniz (1646–1716) that the actual world is the best possible world.

1.1.2 The "beautiful" Brachistochrone Problem

After Fermat's papers of 1662 there was not much progress on the subject until June 1696, when an article by the great Swiss mathematician Johann Bernoulli (1667–1748) was published with an intriguing title: "A New Problem to Whose Solution Mathematicians are Invited". He stated there the following problem: "Suppose that we are given two points A and B in a vertical plane. Determine the path ACB along which a body C which starts moving from the point A under the action of its own gravity reaches the point B in the shortest time."

Fig. 1.2 The line of the shortest descent.

The statement of the problem was followed by a paragraph in which Johann Bernoulli explained to his readers that the problem is very useful in mechanics.

Some time after the publication of his article, Johann Bernoulli gave the solution of the problem himself. Other solutions were provided independently by Leibniz, Jacob Bernoulli (the brother of Johann Bernoulli), and an anonymous author. Experts immediately guessed "Ex Ungue Leonem" ("to judge a lion from its claw"), and now we know exactly that it was the great British mathematician and physicist Sir Isaac Newton (1642–1727). Leibniz recognized this problem as "so beautiful and until recently unknown". Apparently, neither Leibniz nor Johann Bernoulli was aware of Galileo's work of 1638. However, the solution is not the straight line AB: the curve of the shortest descent, or the brachistochrone ("brachistochrone" means "shortest time" in Greek), turned out to be a cycloid. Leibniz's solution was based on the approximation of curves with broken lines. A very interesting solution was given by Jacob Bernoulli; it was based on an adoption of Huygens' principle and the concept of the "wave front". However, the most frequently mentioned solution belongs to Johann Bernoulli himself, and it is the following.

First of all, let us set a coordinate system (x, y) in the vertical plane so that the x axis is horizontal and the y axis is directed downward (see Fig. 1.2). The velocity of the body C does not depend on the shape of the curve y(x), because the body moves without friction. It depends only on the current height y(x) and, according to Galileo's law, is equal to v = √(2gy(x)), where g is the acceleration of gravity. The total time of descent along the path y(x) from the point A to the point B is equal to:

T = ∫_AB ds/v = ∫_AB ds/√(2gy(x)),   (1.2)

where ds is the differential of the arc length. The problem is to find the optimal path y(x) in order to minimize the total time T. Bernoulli's original idea was to apply Fermat's principle to the brachistochrone problem. He noted that one can formulate the original problem as the problem of light propagation in a nonhomogeneous medium, where the speed of light at a point (x, y(x)) is assumed to be equal to √(2gy(x)). In order to obtain the solution in analytical form, we can split the medium into many thin parallel layers with local

speed of light vᵢ, i = 1, 2, …, N. Applying Snell's law at the interfaces between the layers, we obtain

sin(α₁)/v₁ = sin(α₂)/v₂ = … = sin(α_N)/v_N = const,   (1.3)

where αᵢ are the angles of incidence of the ray. Going to the limit of infinitely thin optical layers, we conclude that

sin(α(x))/v(x) = const,   (1.4)

where v(x) = √(2gy(x)) and α(x) is the angle between the tangent to the curve y(x) at the point (x, y(x)) and the y axis:

sin(α(x)) = 1/√(1 + (dy(x)/dx)²).   (1.5)

Thus, the equation of the brachistochrone y(x) can be rewritten in the following form:

√(1 + (dy(x)/dx)²) · √(y(x)) = const.   (1.6)

This equation can easily be integrated, and we obtain the solution as the equation of a cycloid:

x = x₀ + x₁(t − sin(t))/2,  y = y₀(1 − cos(t))/2.   (1.7)

1.1.3 Euler-Lagrange equation

It was already clear at that time that the elegant method proposed by Bernoulli cannot be applied to an arbitrary problem, so some general method of solving problems of this type was very much desired. In 1732, when Euler was only 25, he published his first work on variational calculus, and in 1744 he published a manuscript under the title "A method of finding curves possessing the properties of a maximum or a minimum or the solution of the isoperimetric problem taken in the broadest sense". In this systematic study Euler treated 100(!) special problems, and not only solved them but also set up the beginnings of a real general theory. Nowadays we can say that in this work he established the theoretical foundations of a new branch of mathematical analysis.
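Before moving on, the cycloid solution can be checked numerically against the brachistochrone relation: along the curve (1.7), the quantity √(1 + (dy/dx)²)·√(y) from Eq. (1.6) must be constant. A minimal sketch (illustrative Python, not from the book; it takes x₀ = 0 and x₁ = y₀ = 1 in Eq. (1.7)):

```python
import math

# Cycloid (1.7) with x0 = 0 and x1 = y0 = 1: x = (t - sin t)/2, y = (1 - cos t)/2.
def invariant(t):
    y = (1 - math.cos(t)) / 2
    dydx = math.sin(t) / (1 - math.cos(t))          # dy/dx = (dy/dt)/(dx/dt)
    return math.sqrt(1 + dydx ** 2) * math.sqrt(y)  # left-hand side of Eq. (1.6)

vals = [invariant(t) for t in (0.5, 1.0, 2.0, 3.0)]
print(all(abs(v - vals[0]) < 1e-9 for v in vals))   # True: Eq. (1.6) holds on the cycloid
```

For this normalization the constant is exactly 1, since y = sin²(t/2) and dy/dx = cot(t/2).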

On the other hand. (1. His main idea was to consider a "variation" of the curve.T]. Let us suppose that a{t) ^ 0 at a point r G [0. It can be easily be verified. by the Mean Value Theorem one can show that exists 7} £ [0.9) condition: F(t.3). Thus x(t) satisfies conditions of the Fundamental Lemma. and which can be formulated as the problem to find unknown function x(t). Let a continues function a(t) has a property: / a(t)x(t)dt = 0 Jo for any continuously differentiable function x(0) = x(T) = 0. This is the method that we shall use to derive the famous Euler equation. Let construct a function x(t) = (t — n ) 2 ( i — T 2 ) 2 .x(t). which later Lagrange called the Euler equation.Analytical methods in control and optimization 7 In his work. which plays an essential role in the variational calculus. so the functional J(x) achieves its extremum: J(x)= J to The function F is problem-specific and it is usually called functional density. t € A (see Fig. similar to the brachistochrone problem. But before we start to derive the Euler-Lagrange equation let us proof first the socalled Fundamental Lemma. leta(t) > a > 0. 19 years old Joseph Louis Lagrange (1736 — 1813) sent a letter to Euler where he described his "method of variations". D . which is assumed to be an extremal. Eleven years later. that x(t) is continuously differentiable function with x(0) = x(T) = 0. and arrive at a contradiction which proves the Lemma.8) Proof. 0. otherwise. For definiteness. T2] on which a(t) does not vanish. In 1759 Lagrange published the work where he had elaborated his new methods of the calculus of variations. With this method Euler derived a second-order differential equation for the extremals. The Fundamental Lemma. The Euler's method was based on the approximation curves with broken lines. t e A. Then the continuity ofa(t) implies that there is a closed interval A = [ri. Then a(t) = 0.1. x(t) satisfying (1. in 1755. Euler considered the problems.T]. 
such that J0 a(t)x(t)dt = a(ri) f0 x{t)dt > 0.x(t))dt~*extr. Later on the Lagrange method became generally accepted by other mathematicians.
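The contradiction used in the proof is easy to see numerically: for a continuous a(t) that is positive somewhere, the bump function from the proof is an admissible test function and gives a strictly positive integral. A small sketch (illustrative Python; the choices a(t) = sin(2πt) and the interval [0.1, 0.4] are arbitrary, not from the book):

```python
import math

T = 1.0
a = lambda t: math.sin(2 * math.pi * t)    # continuous, NOT identically zero

# Bump from the proof: x(t) = (t - tau1)^2 (t - tau2)^2 on [tau1, tau2], 0 elsewhere.
tau1, tau2 = 0.1, 0.4                       # an interval on which a(t) > 0
def x(t):
    return (t - tau1) ** 2 * (t - tau2) ** 2 if tau1 <= t <= tau2 else 0.0

# x is continuously differentiable with x(0) = x(T) = 0, yet the integral is positive,
# so this a(t) cannot satisfy the hypothesis of the Lemma.
n = 100000
h = T / n
integral = sum(a((i + 0.5) * h) * x((i + 0.5) * h) for i in range(n)) * h
print(x(0.0) == 0.0 and x(T) == 0.0)   # True: admissible test function
print(integral > 0)                     # True: the integral detects a(t) != 0
```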

Fig. 1.3 The Fundamental Lemma: the function a(t) (see the text).

Now let us start the derivation of the Euler-Lagrange equation. Let us modestly restrict ourselves to the class of functions that satisfy the boundary conditions and have their first and second derivatives continuous on t ∈ (t₀, t₁); these functions we will call admissible functions. We suppose that x* = x*(t) is an optimal solution of the variational problem given by Eq.(1.8), and let μ(t) be a function which satisfies the boundary conditions μ(t₀) = μ(t₁) = 0. For each real number α, let us define a new function x(t) by

x(t) = x*(t) + αμ(t),   (1.10)

see Fig. 1.4. Note that if α is small, the function x(t) is "near" the function x*(t) in some sense (the more precise meaning of "nearness" of functions we shall consider later). If we keep the function μ(t) fixed, the integral J(x* + αμ) becomes a function of α alone. Putting I(α) = J(x* + αμ), we have

I(α) = ∫_{t₀}^{t₁} F(t, x* + αμ, ẋ* + αμ̇) dt.   (1.11)

Here I(0) = J(x*). Clearly, if we assume x* to be a global maximum, then for the optimal x*: J(x*) ≥ J(x* + αμ) for all α, so I(α) ≤ I(0) for all α. This condition can be formulated as

dI/dα|_{α=0} = 0.   (1.12)

This is the condition we use to derive the Euler equation.

Fig. 1.4 "Weak" variation of the optimal solution x*.

Fig. 1.5 "Strong" variation of the optimal solution x*.

Now, looking

at Eq.(1.11), we see that to calculate I'(0) we must differentiate the integral with respect to the parameter α appearing in the integrand. The result is as follows:

dI/dα|_{α=0} = ∫_{t₀}^{t₁} [ (∂F/∂x) μ(t) + (∂F/∂ẋ) μ̇(t) ] dt.   (1.13)

Let us now integrate the second term in Eq.(1.13) by parts:

∫_{t₀}^{t₁} (∂F/∂ẋ) μ̇(t) dt = [ (∂F/∂ẋ) μ(t) ]_{t₀}^{t₁} − ∫_{t₀}^{t₁} (d/dt)(∂F/∂ẋ) μ(t) dt.   (1.14)

Now recall that μ(t₀) = μ(t₁) = 0, so the boundary term vanishes; using Eq.(1.14) in Eq.(1.13) and rearranging, we get:

dI/dα|_{α=0} = ∫_{t₀}^{t₁} [ ∂F/∂x − (d/dt)(∂F/∂ẋ) ] μ(t) dt.   (1.15)

The condition Eq.(1.12) therefore reduces to:

∫_{t₀}^{t₁} [ ∂F/∂x − (d/dt)(∂F/∂ẋ) ] μ(t) dt = 0.   (1.16)

In the argument leading to this result, μ(t) was a fixed function. However, Eq.(1.16) holds for all functions μ(t) which are 0 at t₀ and t₁. According to the Fundamental Lemma, it follows that the term in the brackets must vanish for all t ∈ [t₀, t₁]:

∂F/∂x − (d/dt)(∂F/∂ẋ) = 0.   (1.17)

It is worth noting here that the Euler-Lagrange equation is a first-order condition for a "local" optimum, analogous to the condition that the partial derivatives are 0 at a local extreme point in "static" optimization. Let us mention two special (simple) forms of the Euler-Lagrange equation:

1) If the Lagrangian F does not depend on x explicitly, the Euler equation becomes:

∂F/∂ẋ = p(t) = const.   (1.18)

It is called the momentum integral.

2) If the Lagrangian F does not depend on t explicitly, the Euler equation possesses the first integral:

ẋ (∂F/∂ẋ) − F = E(t) = const.   (1.19)

It is called the energy integral (both names originate from classical mechanics).

The Legendre necessary condition. Now let us suppose that the condition of Eq.(1.17) is satisfied. A necessary condition for J, given by Eq.(1.8), to have a maximum at x* = x*(t) was first obtained by the great French mathematician Adrien-Marie Legendre (1752–1833):

F_{ẋẋ}(t, x*(t), ẋ*(t)) ≤ 0   (1.20)

for all t ∈ [t₀, t₁]. The corresponding condition for a minimum is obtained by reversing the inequality in Eq.(1.20).

1.1.4 A word about distance between two functions

In seeking an extremum we must compare the values of a functional for two "nearby" solutions. In the derivation of the Euler-Lagrange equation we just assumed that the perturbed solution x in Eq.(1.10) is somewhere "near" the optimal one x*. Let us hence make the concept of "nearness" or "closeness" of two functions precise, since it will be useful further. Let two functions be given by the equations:

y = y(x),  y₁ = y₁(x).   (1.21)

Let the maximum of the absolute value of the difference, |y₁(x) − y(x)| < ε, designate the distance between these functions. However, the functional

J = ∫_{x₀}^{x₁} F(x, y, dy/dx) dx   (1.22)

depends not only on the function itself, but also on its derivative dy/dx. Hence, the values of the functional for functions between which the distance

is small (less than any previously assigned number ε) may differ radically (see, for example, such a "strong" variation in Fig. 1.5). Let us present an example: take the functional

J = ∫₀^π (dy/dx)² dx   (1.23)

and consider the functions

y = (1/n) sin(nx),   (1.24)

whose distance from the horizontal axis (i.e. from the function y = 0) is 1/n and tends to zero as n increases without limit. Meanwhile, the value of the functional Eq.(1.23) for the functions Eq.(1.24) is independent of n and equals π/2, while the value of the functional for y = 0 is zero. Hence, for "nearby" functions the values of the functional may differ substantially. In order to avoid this, it is necessary to generalize the definition of "nearness" of functions, for example by bounding the distance between the nth-order derivatives of the corresponding functions. Let us bound the greatest of the maximums of the differences:

|y₁ − y|, |dy₁/dx − dy/dx|, |d²y₁/dx² − d²y/dx²|, …, |dⁿy₁/dxⁿ − dⁿy/dxⁿ| < εₙ.   (1.25)

These types of conditions we will call the nth-order distance between two functions.

1.1.5 The Brachistochrone problem revisited

In subsection 1.1.3 we derived the Euler-Lagrange equation for the extremals. Now we can use it in order to solve Bernoulli's brachistochrone problem. Let us choose the y axis to be vertical and the x axis horizontal. According to Galileo's hypothesis, the velocity at the height y is given by the relation:

v² = 2gy(x),   (1.26)

where g is the constant of gravity. The time T to fall from point A to point B is given in parametric form:

T = ∫ √(ẋ² + ẏ²) / √(2gy) dt,   (1.27)

where ẋ = dx/dt, ẏ = dy/dt. Notice that the integrand function in Eq.(1.27) does not contain x explicitly. One of the Euler equations for this problem is therefore:

(d/dt) [ ẋ / ( √(ẋ² + ẏ²) √(2gy) ) ] = 0,   (1.28)

or, equivalently, F_ẋ = const is a first integral; writing the constant as 1/√(4ga), after some manipulation this condition reduces to the nonparametric equation:

y (1 + (dy/dx)²) = 2a.   (1.29)

Experience has shown that it is convenient to introduce as a parameter the angle τ that the tangent to the curve makes with the y axis; thus

√(y/(2a)) = cos(τ),   (1.30)

hence the relation given by Eq.(1.30) is equivalent to:

y = 2a cos²(τ) = a(1 + cos(2τ)),   (1.31)

and from Eq.(1.29) and Eq.(1.30) we obtain

x + c = ±a(2τ + sin(2τ)).   (1.32)

We have again obtained the cycloid solution, given by Eq.(1.31) and Eq.(1.32). Note that this curve had been known and of interest since about 1500. Johann Bernoulli in his paper remarked that "...One will be astounded when I say that the Cycloid, the tautochrone of Huygens, is the sought Brachistochrone." One of its most remarkable properties, that of tautochronism, was discovered in 1673 by the famous Dutch mathematician and physicist Christiaan Huygens (1629–1695). It is known that for very small oscillations the period of a simple pendulum moving on an arc of a circle is nearly independent of its amplitude; for a finite-sized oscillation, however, this is no longer so. Huygens showed that when the pendulum bob moves along an arc of a cycloid, its period is precisely independent of the amplitude of its excursion (i.e., the motion is exactly harmonic).
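Both claims — the cycloid beats the straight chute, and the descent time to the lowest point is independent of the release point — can be checked numerically. A sketch (illustrative Python, not from the book; it uses the standard cycloid parameterization x = a(t − sin t), y = a(1 − cos t), equivalent to Eqs. (1.31)–(1.32) up to a shift of parameter and origin, with arbitrary a = 1 and g = 9.81):

```python
import math

g, a = 9.81, 1.0   # arbitrary units; the lowest point of the arch is at t = pi

def time_to_bottom(t0, n=20000):
    """Descent time from rest at cycloid parameter t0 down to the bottom t = pi.
    The substitution t = t0 + (pi - t0) * s**2 removes the v = 0 singularity
    at the release point, so a plain midpoint rule works."""
    total, ds = 0.0, 1.0 / n
    for i in range(n):
        s = (i + 0.5) * ds
        t = t0 + (math.pi - t0) * s * s
        darc = 2 * a * math.sin(t / 2) * 2 * (math.pi - t0) * s   # d(arc length)/ds
        # v = sqrt(2 g (y - y0)); cos(t0) - cos(t) = 2 sin((t+t0)/2) sin((t-t0)/2)
        v = math.sqrt(4 * g * a * math.sin((t + t0) / 2) * math.sin((t - t0) / 2))
        total += darc / v * ds
    return total

# Brachistochrone: the full cycloid arc vs the straight chute to the same endpoint.
t_cycloid = time_to_bottom(0.0)          # release at the cusp A = (0, 0)
L = math.hypot(a * math.pi, 2 * a)       # straight-line distance from A to B
t_line = L * math.sqrt(2 / (g * 2 * a))  # closed form for sliding down a straight chute
print(t_cycloid < t_line)                # True: the cycloid is faster

# Tautochronism: the time to the bottom does not depend on the release point.
print(abs(time_to_bottom(0.5) - time_to_bottom(2.5)) < 1e-3)   # True
print(abs(t_cycloid - math.pi * math.sqrt(a / g)) < 1e-3)      # True: T = pi*sqrt(a/g)
```

The product-of-sines identity is used instead of cos(t0) − cos(t) to avoid catastrophic cancellation near the release point.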

1.1.6 Generalizations of the Euler-Lagrange equation

In subsection 1.1.3 the Euler-Lagrange equation was derived using the following assumptions: 1) the integrand depends only on one function; 2) that function depends only on one variable; 3) the functional consists of only one term, having the integral representation (see, for example, Eq.(1.8)). In practice, many applications have more complicated formulations in which one cannot meet these simplifying assumptions. Now we show some possible generalizations of Euler's approach for more complicated problems.

The integrand depends on higher-order derivatives: Consider the problem of finding a function x(t) that maximizes or minimizes the functional

J = ∫_{t₀}^{t₁} F(t, x(t), ẋ(t), …, x⁽ⁿ⁾(t)) dt → extr,   (1.33)

where x(t) and its first n − 1 derivatives have preassigned values at t₀ and t₁, and F is a given function. One can show that a necessary condition for x(t) to solve this problem is that the following generalized (Euler-Poisson) equation is satisfied:

∂F/∂x − (d/dt)(∂F/∂ẋ) + (d²/dt²)(∂F/∂ẍ) − … + (−1)ⁿ (dⁿ/dtⁿ)(∂F/∂x⁽ⁿ⁾) = 0.   (1.34)

See Gelfand and Fomin [Gelfand (2000)] for more details. We are going to use this equation in application to optimal control of quantum systems in chapter 4.

The integrand depends on two variables: Another type of generalization that we shall briefly mention is the one in which the variable function depends on more than one variable. With suitable restrictions on J, we pose the problem of maximizing or minimizing the functional

J = ∫_{x₀}^{x₁} ∫_{y₀}^{y₁} F(x, y, z, ∂z/∂x, ∂z/∂y) dx dy → extr,   (1.35)

defined on a surface z = z(x, y), with proper boundary conditions. The necessary condition for the extremum of the functional Eq.(1.35) is given by the Euler-Ostrogradskii equation:

\frac{\partial F}{\partial z} - \frac{\partial}{\partial x}\frac{\partial F}{\partial z_x} - \frac{\partial}{\partial y}\frac{\partial F}{\partial z_y} = 0.  (1.36)

For completeness, the solution of Eq.(1.36) should satisfy properly defined boundary conditions on the 2D domain.

Several unknown functions. In the variational problems discussed so far there has been only one unknown function. Of course, there are many situations in which several unknown functions are involved. Let us briefly consider the following problem:

J = \int_{t_0}^{t_1} F(t, \mathbf{x}(t), \dot{\mathbf{x}}(t))\, dt \to \mathrm{extr},  (1.37)

where \mathbf{x}(t) = (x_1(t), \ldots, x_n(t)), with the initial condition \mathbf{x}(t_0) = \mathbf{x}^0. In order to solve the optimization problem, the unknown functions must satisfy a system of Euler-Lagrange equations:

\frac{\partial F}{\partial x_i} - \frac{d}{dt}\frac{\partial F}{\partial \dot{x}_i} = 0, \quad i = 1, \ldots, n,  (1.38)

with proper boundary conditions. Obviously, it is very difficult to obtain an analytical solution of such a system in the general case.

Degenerate functionals. Let us now consider the case when F_{y'y'} = 0 identically (compare also with the Legendre condition described above). Such functionals are called degenerate functionals. In this case we can write the integrand as linearly dependent on dy/dt:

J = \int_{t_0}^{t_1} \left[ A(t, y) + B(t, y)\frac{dy}{dt} \right] dt.  (1.39)

In this special case the Euler equation is not a differential equation. It represents a finite (algebraic) equation which has to be satisfied by the unknown

function. There are two possible cases. The first one is when Eq.(1.40) is satisfied identically for any function y. In this case the functional can be written as an integral of a total differential: it has a "potential" form and is independent of the path of integration. Its value is fully determined by the difference between the final and initial points; thus J is independent of y. The second case is when Eq.(1.40) is not satisfied identically, and one can determine one or several extremals y that optimize Eq.(1.39).

1.1.7 Transversality conditions

Sometimes it is necessary to solve a more complicated variational problem, when the endpoints of the solution are not fixed. Without proof (which we leave as an exercise), the variation of the functional with free endpoints is given by:

\delta J = \int_{x_0}^{x_1} \left( F_y - \frac{d}{dx} F_{y'} \right) \delta y\, dx + \delta y\, F_{y'} \Big|_{x=x_1}.  (1.41)

In a similar way one can impose the transversality condition for the other endpoint. The situation becomes even more complicated if we would like to impose the condition that the free endpoints should lie on certain curves, described by two different functions y = \varphi(x) and y = \psi(x). Suppose that the right endpoint (x_1, y_1) must lie on a certain curve given by the equation

y_1 = \varphi(x_1).  (1.42)

It follows then that the transversality condition is:

\left( F + (\varphi' - y') F_{y'} \right) \Big|_{x=x_1} = 0.  (1.43)

1.1.8 Conditional extremum: Lagrange multipliers method

Now we turn to consider a class of variational problems in which the functions yielding the extremum of the functional are themselves subject to some

additional conditions (coupling equations). These constraints can be given in algebraic or integral form, or by a differential equation. Such problems appear very frequently in different applications and are called problems with a conditional extremum. Let us consider as an example the problem of finding the shortest distance between two points, i.e., the minimum of the integral

S = \int \sqrt{1 + y'^2 + z'^2}\, dx,  (1.44)

under the condition that the curve connecting these points lies on a certain surface, for example, the sphere with the radius R:

x^2 + y^2 + z^2 = R^2.  (1.45)

In principle, the variable z may be expressed in terms of y and x from the coupling equation Eq.(1.45) and substituted into the integral Eq.(1.44), and the minimum of the usual functional of one variable might then be sought by the methods described in the previous sections. However, such elimination of variables often leads to very complex computations, and it is more convenient to utilize another method of solution, such as the Lagrange method of undetermined multipliers. The following simple mnemonic rule exists: in order to find the extremum of the functional

J = \int_{t_0}^{t_1} F(x, y, z, y', z')\, dx  (1.46)

under the condition

\phi(x, y, z) = 0,  (1.47)

it is necessary to introduce an intermediate function

H = F + \lambda(x)\phi(x, y, z),  (1.48)

where \lambda(x) is a function of x, as yet unknown, and to seek the extremum of the functional

\int_{t_0}^{t_1} H\, dx.  (1.49)

In all we must determine three unknown functions: y(x), z(x), and \lambda(x). Such kind of approach will be demonstrated in chapter 4.

We have three equations for their determination: the two Euler-Lagrange equations for the functional Eq.(1.49),

H_y - \frac{d}{dx} H_{y'} = F_y - \frac{d}{dx} F_{y'} + \lambda(x)\phi_y = 0,
H_z - \frac{d}{dx} H_{z'} = F_z - \frac{d}{dx} F_{z'} + \lambda(x)\phi_z = 0,  (1.50)

and the coupling equation Eq.(1.47). Here we denote partial derivatives by the corresponding subscripts. From these equations we can indeed find the desired functions. They are written exactly as for the simplest problem, except that the intermediate function H = F + \lambda(x)\phi plays the part of the function F. The function \lambda(x) is designated the Lagrange multiplier. This rule can be extended to the case when several conditions of the type Eq.(1.47) are given. In this case the intermediate function has the form:

H = F(x, y, y') + \sum_{j=1}^{n} \lambda_j(x)\phi_j(x, y, y').  (1.51)

The same rule is retained if the coupling equation contains derivatives of the desired functions, i.e., is a differential equation:

\phi(x, y, z, y', z') = 0.  (1.52)

This problem (when derivatives enter into the coupling equation) is called the general Lagrange problem. The mnemonic rule is extended also to seeking the transversality conditions.

Let us now mention such a significant optimization task as the isoperimetric problem (in Greek iso = equal, peri = around). It was already known in ancient times that the circle is the shape that encloses the maximum area for a given length of perimeter. In fact, this is an example of a constrained problem. For such kind of problems one should use the Lagrange multipliers formalism. One can express the area and the perimeter in integral form:

J = \int_{t_0}^{t_1} F(x, y, y')\, dx \to \max, \qquad I = \int_{t_0}^{t_1} G(x, y, y')\, dx = L,  (1.53)

where L is fixed. Such a class of problems is usually called isoperimetric problems. The isoperimetric problem can be reduced to the general Lagrange problem.
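The classical isoperimetric statement is easy to illustrate numerically. The following sketch (our illustration, not from the book; the helper names are ours) compares, for a fixed perimeter L, the area of regular n-gons with the area of the circle of the same perimeter:

```python
import math

# For a fixed perimeter L, a regular n-gon encloses the area
#   A(n) = L^2 / (4 n tan(pi/n)),
# which grows with n and approaches the circle's area L^2 / (4 pi).

def ngon_area(L, n):
    return L ** 2 / (4 * n * math.tan(math.pi / n))

def circle_area(L):
    return L ** 2 / (4 * math.pi)

L = 1.0
areas = [ngon_area(L, n) for n in range(3, 200)]
assert all(a < b for a, b in zip(areas, areas[1:]))  # more sides, more area
assert all(a < circle_area(L) for a in areas)        # the circle encloses the most
print(areas[0], circle_area(L))                      # triangle vs circle
```

Among all closed curves of perimeter L, no polygon beats the circle, in agreement with the constrained-extremum formulation above.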

To carry out this reduction, let us introduce a new function \Psi(t):

\Psi(t) = \int_{t_0}^{t} G(x, y, y')\, dx.  (1.54)

The problem Eq.(1.53) is equivalent then to the problem of finding the functions y and \Psi, with the fixed endpoint values \Psi(t_0) = 0 and \Psi(t_1) = L. The Euler-Lagrange equations in this case are:

H_y - \frac{d}{dt} H_{y'} = 0, \qquad H_{\Psi} - \frac{d}{dt} H_{\Psi'} = -\dot{\lambda} = 0,  (1.55)

where H = F + \lambda(x)(\Psi' - G). The second equation shows that the multiplier \lambda is a constant.

1.1.9 Mixed optimal problem

In many variational problems it is natural to include in the maximization problem a function which measures the value assigned to the terminal state. For example, in a quantum control problem (see chapter 4), S(x(t_1)) can represent the occupation of a certain state of a quantum system at a given time. This problem can be formulated as:

\int_{t_0}^{t_1} F(t, x(t), \dot{x}(t))\, dt + S(x(t_1)) \to \mathrm{extr}, \qquad x(t_0) = x_0,  (1.56)

where S is a given function, while x(t_1) is to be determined by the maximization. We start from the assumption that we know a solution of the problem Eq.(1.56): x^* = x^*(t). We also know about x^* that it has an extremal value compared with all admissible functions whose graphs join (t_0, x_0) and (t_1, x^*(t_1)). Obviously, all these functions contribute the same value to the second term of Eq.(1.56). Thus, x^* must satisfy the Euler-Lagrange equation Eq.(1.17) in [t_0, t_1]. The first constraint for the solution of the above formulated problem is determined by the initial condition x^*(t_0) = x_0. The second constraint can be obtained from the appropriate transversality condition, an additional boundary condition that must be satisfied by the optimal solution when the ends are not fixed. If we assume differentiability of the function S, the problem can be

transformed into the standard form. Indeed,

S(x(t_1)) - S(x(t_0)) = \int_{t_0}^{t_1} \frac{d}{dt} S(x(t))\, dt = \int_{t_0}^{t_1} S'(x(t))\,\dot{x}(t)\, dt.  (1.57)

Thus, we can reformulate the problem Eq.(1.56) as:

\max \int_{t_0}^{t_1} \left[ F(t, x, \dot{x}) + S'(x)\dot{x} \right] dt,  (1.58)

with the boundary condition x(t_0) = x_0 and x(t_1) free. Note that this is a standard variational problem with a "free" right-hand side. Combining Eq.(1.58) with Eq.(1.17) and the transversality condition for a free endpoint, we can conclude that the transversality condition for this problem is given by

\frac{\partial}{\partial \dot{x}} \left( F(t, x, \dot{x}) + S'(x)\dot{x} \right) \Big|_{t=t_1} = 0, \quad \text{or} \quad \frac{\partial F}{\partial \dot{x}} \Big|_{t=t_1} = -S'(x(t_1)).  (1.59)

Thus we have the following result: if x^*(t) solves problem Eq.(1.56), then x^*(t) satisfies the Euler-Lagrange equation Eq.(1.17) and the transversality condition Eq.(1.59).

1.1.10 Approximate methods of solution: Ritz's method

Usually the determination of the extremals is a hard task, because one has to solve nonlinear differential equations of a high order. Fortunately, it is often enough to know only an approximate solution of the extremum problem. One of the best approximate methods is Ritz's method. It is based on the reduction of the original problem, the extremum of a function of an infinite number of variables, to the solution of a finite number of equations with a finite number of unknowns. To be specific, let us consider the problem of the extremum of the functional:

J = \int F(x, y, y')\, dx.  (1.60)

The main idea of Ritz's method is to represent the solution y(x) as a series:

y^*(x) = \sum_{i=1}^{N} a_i \psi_i(x),  (1.61)

where \psi_i(x) are certain chosen functions (a basis). The reduced problem now is to find the optimal parameters \{a_i\} which lead to the extremum of Eq.(1.60). In order to do this one has to solve a system of N equations in N variables:

\frac{\partial J(y^*)}{\partial a_i} = 0, \quad i = 1, \ldots, N.  (1.62)

In general, the greater the number of terms N taken into account, the more exact the Ritz method will be, and one can even make an estimate of the error made.

1.2 Optimal control theory

Now we are going to discuss some aspects of optimal control theory. The general theory applies to problems where the dynamics of the controlled object have a state space realization in the form of a finite set of simultaneous first-order dynamic equations (state equations):

\dot{x} = f(x, u, t),  (1.63)

where f can be any vector function of suitable dimensionality. An initial state is assumed to be x(t_0) = x^0, and a terminal state reached at some time t_1 is given by one of the following: a) x(t_1) = x^1; b) x(t_1) \ge x^1; c) x(t_1) free. The "performance" of the control is measured by an integral criterion I (a cost functional) of a general form:

I = \int_{t_0}^{t_1} h(x, u, t)\, dt \to \max.  (1.64)

Let us assume that there is a set of possible solutions of the optimal control problem that transfer the system from the initial state x^0 to the final one x^1, and we are looking for a solution that maximizes the integral cost functional Eq.(1.64). This solution will be called an optimal control. In general, there is no reason to suppose that the optimal control always exists. However, let us not dwell on existence problems and simply suppose that there is such a control. We are going to establish a condition that distinguishes the optimal control from the other controls that simply transfer the system from x^0 to x^1 and do not maximize the functional I.
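Before moving on, the Ritz reduction of subsection 1.1.10, Eq.(1.62), is easy to try on a computer. The sketch below (our illustration, not the book's; it assumes NumPy is available) minimizes J[y] = \int_0^1 (y'^2/2 - y)\, dx with y(0) = y(1) = 0, whose Euler-Lagrange equation y'' = -1 has the exact extremal y(x) = x(1 - x)/2, using the sine basis \psi_i(x) = \sin(i\pi x). Because this J is quadratic in the coefficients a_i, the conditions \partial J / \partial a_i = 0 form a linear system:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
N = 15

def integral(v):
    # composite trapezoid rule on the uniform grid
    return h * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)

# Basis psi_i(x) = sin(i pi x) and its derivatives, i = 1..N
psi = np.array([np.sin(i * np.pi * x) for i in range(1, N + 1)])
dpsi = np.array([i * np.pi * np.cos(i * np.pi * x) for i in range(1, N + 1)])

# dJ/da = K a - b = 0, with K_ij = int psi_i' psi_j' dx and b_i = int psi_i dx
K = np.array([[integral(dpsi[i] * dpsi[j]) for j in range(N)] for i in range(N)])
b = np.array([integral(psi[i]) for i in range(N)])
a = np.linalg.solve(K, b)

y_ritz = a @ psi
y_exact = x * (1.0 - x) / 2.0
err = np.abs(y_ritz - y_exact).max()
print(err)  # shrinks as N grows
assert err < 1e-3
```

The accuracy improves systematically with the number of basis functions N, exactly as stated above.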

Thus, it will be useful to find a necessary condition that is an analogue of the Euler-Lagrange equation. If it is possible to find such a condition, any control u that fails it cannot be optimal; in the opposite case, any control that satisfies the condition could be optimal and needs to be checked carefully. Such a useful necessary condition was found by the great Russian mathematician Lev Pontryagin (1908-1988) and published in a series of papers that appeared between 1955 and 1961; it is known as the Pontryagin Maximum Principle (PMP). The maximum principle is particularly convenient for the investigation of systems where the desired cost functionals and coupling equations are linear and the constraints are imposed only on the controls, for example: \max|u| \le 1.

In the optimal control problem the task is to maximize an integral subject to a constraint in the form of the differential equation given by Eq.(1.63). Before we formulate the PMP, let us recall the Lagrange problem of maximizing a function subject to a constraint in the form of an equation. Associated with the constraint in the Lagrangian problem there is a constant, a Lagrangian multiplier. As to the problem above, the constraint is a differential equation on the interval [t_0, t_1], and it can therefore be regarded as an infinite number of ordinary equations. Hence, to the equation in Eq.(1.63) we associate a number p(t) for each t \in [t_0, t_1], one for each t. The function p(t) is usually called an adjoint function (or variable) associated with the differential equation. In addition, we associate a number p_0 to the function h in Eq.(1.64). In the Lagrangian multiplier theorem the Lagrangian function plays an important role; the comparable function in the present case is the Hamiltonian function, defined by (we assume for simplicity that the dimension of the problem is N = 1):

H(x, u, p, t) = p_0 h(x, u, t) + p f(x, u, t).  (1.65)

The Maximum Principle transfers the problem of finding a u(t) which maximizes the integral in Eq.(1.64) subject to the given constraints to the problem of maximizing the Hamiltonian function with respect to u. In addition it tells us how to determine the p-function. Precisely formulated, the PMP is as follows. Let u^* be a piecewise continuous control defined on [t_0, t_1] which solves

problem Eq.(1.64) with terminal state either a), b) or c), and let x^*(t) be the associated optimal path. Then there exists a constant p_0 and a continuous and piecewise continuously differentiable function p(t) such that for all t \in [t_0, t_1]

(p_0, p(t)) \ne (0, 0),  (1.66)

and u^* maximizes the Hamiltonian, that is,

H(x^*(t), u^*(t), p(t), t) \ge H(x^*(t), u, p(t), t)  (1.67)

for all admissible controls u. Except at the points of discontinuity of u^*(t),

\dot{p}(t) = -H_x(x^*(t), u^*(t), p(t), t),  (1.68)

where H_x = \partial H / \partial x. The constant p_0 is equal to 0 or 1. To each of the terminal conditions (a), (b), (c) there corresponds a transversality condition: a) no condition on p(t_1); b) p(t_1) \ge 0; c) p(t_1) = 0. Note that the inequality in Eq.(1.67) is valid for all t \in [t_0, t_1]. Now we have to take a small break and make a further note: Pontryagin's equations can be extremely difficult to solve analytically, because the boundary conditions on the adjoint equations Eq.(1.68) are usually specified at the terminal time t_1, whereas the boundary conditions on the state equations are usually specified at the initial time t_0. Such two-point boundary value problems are notoriously intractable; in general, no standard procedure guarantees a solution. As we will see later, the manner in which the PMP can be used differs significantly from one problem to another. Let us now consider two examples in which the Maximum Principle in each case produces a unique candidate for optimality. Our first example is a purely mathematical problem, and it is almost as easy as can be:

\max \int_0^1 x(t)\, dt  (1.69)

subject to

\dot{x}(t) = x(t) + u(t),  (1.70)

where x(0) = 0, x(1) is free, and with a restriction on the control variable u(t):

|u(t)| \le 1.  (1.71)

Let us now make use of the PMP. The Hamiltonian of this problem reads:

H = H(x, u, p, t) = p_0 x + p(x + u) = (p_0 + p)x + pu.  (1.72)

Let us now assume that the pair (x^*(t), u^*(t)) solves the problem Eq.(1.69)-(1.71). According to the PMP there must exist a constant p_0 and a continuous function p(t) such that (p_0, p(t)) \ne (0, 0) for all t \in [0, 1], and (except at the points of discontinuity of u^*(t)) u^*(t) is the value of u \in [-1, 1] which maximizes

H(x^*(t), u, p(t), t) = (p_0 + p(t))x^*(t) + p(t)u.  (1.73)

Since only the term p(t)u in the Hamiltonian depends on u, u^*(t) is the value of u \in [-1, 1] which maximizes p(t)u. The adjoint function p(t) satisfies, according to Eq.(1.68),

\dot{p}(t) = -\frac{\partial H^*}{\partial x} = -p_0 - p(t).  (1.74)

Since x(1) is free, we apply the transversality condition given by c):

p(1) = 0.  (1.75)

From Eq.(1.66) we obtain in particular that p_0 and p(1) cannot both be 0. Since p(1) = 0, p_0 \ne 0, and thus p_0 = 1. We now have to put down all the information supplied by the PMP. To proceed, notice that the differential equation for p(t) in Eq.(1.74) is particularly simple: it is a linear equation with constant coefficients, so the solution is

p(t) = Ae^{-t} - p_0 = Ae^{-t} - 1.  (1.76)

To determine the constant A, we use Eq.(1.75): 0 = p(1) = Ae^{-1} - 1, so A = e and therefore

p(t) = e^{1-t} - 1.  (1.77)

Thus the adjoint function is fully determined. From Eq.(1.77) we see moreover that \dot{p}(t) = -e^{1-t} < 0 for all t, and since p(1) = 0 we conclude that p(t) > 0 in the interval [0, 1). Now let us turn to condition Eq.(1.73). When t \in [0, 1), p(t) > 0, so the maximum value of p(t)u is attained for u = 1. For t = 1, p(t) = 0, and Eq.(1.73) does not determine u^*(1). However, we have previously decided to choose u(t) as a continuous

function at the endpoints of its domain of definition. Hence we must put u^*(1) = 1, and so our proposal for an optimal control is:

u^*(t) = 1, \quad t \in [0, 1].  (1.78)

The associated path must satisfy Eq.(1.70). Solving this linear differential equation we get x^*(t) = Be^t - 1, and by the initial condition x^*(0) = 0, B = 1, so

x^*(t) = e^t - 1, \quad t \in [0, 1].  (1.79)

We have now proved that if the given problem has a solution, the optimal control is given by Eq.(1.78) and the associated optimal path is given by Eq.(1.79). Actually, our effort was not necessary for solving this problem: from Eq.(1.70) and Eq.(1.71) we see that u^*(t) = 1 produces the highest value of \dot{x}(t) for any t \in [0, 1], and hence u^*(t) = 1 must maximize \int_0^1 x(t)\, dt. The corresponding value of the criterion function is

\int_0^1 (e^t - 1)\, dt = e - 2.  (1.80)

Note: the control law that requires every control variable to take one or the other of two limiting values is called bang-bang control. We have illustrated above how the PMP works in a simple case. However, most control problems, unfortunately, are much more complicated and have no simple analytical solutions. That is why we need to develop numerical methods to find the optimal control.

1.2.1 Sensitivity analysis

Naturally, a very important question is to what degree the existence of an error \epsilon in the control affects the cost functional. If the value of the cost functional changes substantially for small \epsilon, then it is necessary to utilize a very accurate control assuring optimality of the cost functional. If small deviations from optimality are not dangerous, then an essentially simpler control may be used. In order to quantify the effect of a small perturbation of the optimal control field, one has to do a sensitivity analysis. It is possible to estimate approximately the change in the functional for deviations of the desired function from the extremal by means of the magnitude of the second variation.
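The worked example of Eqs.(1.69)-(1.80) also makes a convenient numerical sanity check and a crude sensitivity probe in the spirit of this subsection. The sketch below (ours, not the book's) integrates the state equation with a Runge-Kutta step and compares the cost of the candidate u^* = 1 against perturbed admissible controls:

```python
import math

def simulate(u, n=10000):
    """Integrate x' = x + u(t), x(0) = 0, by classical RK4 on [0, 1],
    accumulating the cost J = int_0^1 x dt with the trapezoid rule."""
    h = 1.0 / n
    x, J = 0.0, 0.0
    rhs = lambda t, x: x + u(t)
    for i in range(n):
        t = i * h
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x_new = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        J += h * (x + x_new) / 2
        x = x_new
    return J

J_opt = simulate(lambda t: 1.0)
assert abs(J_opt - (math.e - 2)) < 1e-6     # matches Eq.(1.80)
assert simulate(lambda t: 0.9) < J_opt      # any admissible deviation
assert simulate(lambda t: -1.0) < J_opt     # lowers the cost
print(J_opt)
```

Here the control enters linearly, so the cost degrades in first order in the perturbation, which is typical for bang-bang solutions lying on the constraint boundary.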

1.2.2 Null controllability

Let us consider a linear control process described by a system of differential equations (A, B are matrices):

\dot{x} = f(x, u) = Ax + Bu.  (1.82)

It is useful to introduce the definition of null controllability: the domain C \subseteq R^n is defined as the set of points, each of which can be steered to x_1 = 0 with some bounded control |u(t)| < +\infty in finite time. It is possible to prove a theorem that such a system is null controllable if and only if

\mathrm{rank}[B, AB, A^2B, \ldots, A^{n-1}B] = n,  (1.83)

where n is the dimension of the system. Note that the above condition is not necessary when nonlinear phenomena are involved! To illustrate this, let us consider the nonlinear system:

\dot{x} = -x + u(t), \qquad \dot{y} = -y - x^3,  (1.84)

where we assume that u(t) is bounded. The linear approximation of the system near the point (0, 0) is

\dot{x}(t) = Ax + Bu(t),  (1.85)

with A = [[-1, 0], [0, -1]] and B = [1, 0]^T. Note that the linear approximation is non-degenerate (A is nonsingular), but the algebraic criterion of controllability Eq.(1.83) is not fulfilled, since

\mathrm{rank}[B, AB] = \mathrm{rank}[[1, -1], [0, 0]] = 1 < 2.  (1.86)

However, it is possible to demonstrate that the complete nonlinear system Eq.(1.84) is null controllable.
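The failure of the rank test Eq.(1.83) for the linearization Eq.(1.85) can be checked directly (a NumPy sketch of ours, not the book's code):

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [0.0, -1.0]])
B = np.array([[1.0],
              [0.0]])

# Kalman controllability matrix [B, AB] for the n = 2 linearized system
C = np.hstack([B, A @ B])
print(C)                                   # rows (1, -1) and (0, 0)
assert np.linalg.matrix_rank(C) == 1       # 1 < 2: the algebraic test fails
```

The second row of the controllability matrix vanishes because the control never enters the y-equation of the linearization, yet the cubic coupling of the full system Eq.(1.84) still allows steering to the origin.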

1.2.3 Problems with constrained control

One example of the use of the PMP in control theory has been in connection with continuous-time problems having realistic constraints on the range of values accessible to the controls u. One possible representation of such a constraint is to introduce a cost-of-control criterion having a quadratic form:

\int_0^T u^2\, dt \le E.  (1.87)

Another, and sometimes more realistic, representation is to introduce a saturation inequality:

u_{\min} \le u \le u_{\max}.  (1.88)

In the case of symmetric constraints, the condition \max|u| \le 1 is equivalent to the condition:

\int_0^T u^{2m}\, dt \le T, \quad m \to \infty.  (1.89)

Indeed, if the condition Eq.(1.88) is satisfied, then u^{2m} \le 1 and the value of the integral will not exceed T. Conversely, if u < u_{\min} or u > u_{\max} on at least a small section, then u^{2m} \to +\infty as m \to \infty, and the integral Eq.(1.89) tends to infinity. Hence, the problem of the extremum of the functional Eq.(1.64) under a constraint like Eq.(1.88) can be formulated as an isoperimetric problem: find the extremum of the functional under the condition that the integral Eq.(1.89) takes a given value T.

1.3 Summary

In this chapter we have briefly reviewed the history of the calculus of variations. As a first example we considered the "Brachistochrone problem", which stimulated the birth of the calculus of variations at the end of the XVII century. Then we demonstrated the derivation of the Euler equation, which is actually a cornerstone of the calculus of variations and which we shall use in order to derive analytical solutions for some optimal control problems in the next chapters. Further generalizations and developments, including the Lagrange multipliers technique, were discussed. Then we made a jump to the XX century and formulated Pontryagin's variational principle for optimal control, with some examples. However, we have to note that there are not so many control problems having analytical solutions, and that stimulates us to open the next chapter, where some numerical techniques will be discussed.


Chapter 2

Numerical optimization

"With my two algorithms, one can solve all problems, without error, if God will!" Al-Khorezmi (780-850), (in Science Focus [IBM], 1, no. 1 (1981)).

2.1 The halting problem and No Free Lunch Theorem

In mathematics and computer science an algorithm (the word is a derivation from the name of the Persian mathematician Al-Khorezmi (or, alternatively, Al-Khwarizmi)) is a finite set of well-defined instructions for accomplishing some task which, given an initial state, will terminate in a corresponding recognizable end-state. In computability theory the halting problem is a decision problem which can be informally stated as follows: given a description of an algorithm and its initial input, determine whether the algorithm, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting. The great British mathematician Alan Turing (1912-1954) proved in 1936 that a general algorithm to solve the halting problem for all possible inputs cannot exist. We say that the halting problem is undecidable. A quite similar situation occurs with a general optimization problem: since there is no recipe to determine whether any numerical optimization algorithm will find a global extremum, or to prove that one does not exist, we have simply to run the algorithm on our computer for a reasonable time, wait, and hope that the algorithm will find a solution and stop before a deadline.

Another interesting issue is the effectiveness of different optimization algorithms. There were many attempts in the past few decades to answer the question: if one considers the whole Universe of optimization problems, can one find an algorithm which is superior compared to another one? Finally, in 1997 a work by Wolpert and Macready was published, presenting a formulation and a proof of the so-called No Free Lunch Theorem (NFT) [Wolpert (1997)]. The main result of the NFT is that if one takes an average over the space of all possible optimization problems, all optimization procedures are equally efficient! Of course, some of the algorithms may outperform others on a certain set of problems, but this superiority must be compensated by poorer performance on other problems. This theorem caused a wide response in the scientific computation community. We are not going to consider the NFT here in detail; for further information one can read a nice overview [Schumacher (2001)]. In this chapter we are going to discuss some issues of Global Optimization, the complexity and scaling of the most common optimization problems, Multi-objective Optimization, and some useful numerical optimization algorithms.

2.2 Global Optimization: searching for the deepest hole on a golf field in the darkness using a cheap laser pointer

In a very general form, the main goal of the global optimization problem is summarized in the following definition. Let us consider a region A in N-dimensional space. For a given function f(x), the point x^* is a global minimum point if for any x \in A: f(x^*) \le f(x); the value f^* = f(x^*) > -\infty is called the global minimum. Here f is called the objective function, and the set A is called the feasible region. The problem of determining a global minimum point is called the global optimization problem. In the same way one can formulate the problem of the search for a global maximum. The definition of a local minimum of an objective function f(x) is similar: the value f^* is called a local minimum if there exists an \epsilon-environment U(x_0): |x_0 - x| < \epsilon, such that f^* is the smallest feasible objective function value within this environment. In practice, the number of local minima is quite large; see Fig. 2.1.

Fig. 2.1 Example of a 1D objective function; note that it has many local and one global minimum.

Convexity is a geometrical property of a curve. A convex function is a real-valued function f defined on an interval [a, b] such that for any two points x and y lying in [a, b] and any \alpha \in [0, 1] the following inequality holds: f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y). It can be proved that any local minimum of a convex function is also a global minimum, and a strictly convex function will have at most one global minimum. Taking this fact into account, we will call an objective function f convex if it has exactly one local minimum (which is then also the global minimum); otherwise we will call it non-convex. If one can prove that a given objective function has the property of convexity, then the optimization problem becomes an easier task. Most real-life optimization problems, however, are non-convex, and one is usually interested in finding a global optimum of a given optimization problem instead of only a local one. Unfortunately, in global optimization no general criterion exists for identification

of the global minimum. As an example of a frequent optimization problem we can mention the global optimization problem caused by the least-square estimation of model parameters. Let us consider an observed data sequence (x_i, y_i), i = 1, \ldots, N, which describes the dependence between "effect" y_i and "cause" x_i. Here the y_i are dependent variables ("effect") and the x_i are independent variables ("cause"). Assuming the existence of a model, or at least a hypothesis y(a, x_i), the model parameter vector a must be estimated in order to minimize the sum of squared differences between the measured reality and the model predictions:

F(a) = \sum_{i=1}^{N} (y_i - y(a, x_i))^2.  (2.1)

Note that the "effect" y_i could in principle depend on all x, and that makes the problem strongly non-local. As a general rule, almost all practically interesting optimization problems contain a huge number of local minima. Another example is from multidimensional optimization: let f(x_1, \ldots, x_n) be defined as

f(x_1, \ldots, x_n) = 10\sin(\pi x_1)^2 + \sum_{i=1}^{n-1} (x_i - 1)^2 \left( 1 + 10\sin(\pi x_{i+1})^2 \right) + (x_n - 1)^2.  (2.2)

This function has an incredible number of local minima (for instance, 10^{10} local minima when n = 10), but only a single global minimum. In a very natural way, the discussion of global optimization turns from continuous variables to discrete ones by introducing the idea of a grid search technique. Let us imagine an optimization problem where a solution can be represented as a bit (1 or 0) string of length N. One can easily calculate that there are 2^N possible variants of the unknown solution. Generally, optimization problems with discrete object variables are called combinatorial optimization problems. As the dimension grows, the search space grows according to the laws of combinatorics, which often can be nicely approximated by some exponential function of N. These two examples illustrate how fast the complexity of global optimization grows with the dimension of the problem. Clearly, to find the global optimum by a full (exhaustive) search in these examples is unrealistic.

One example of such a combinatorial problem is the travelling salesman problem: find a path through a weighted graph which starts and ends at the same vertex, includes every other vertex exactly once, and minimizes the total cost of edges. Given the exponential growth of the search space mentioned above, it is not surprising that even the decision problem of checking whether a given feasible solution of a smooth, non-convex nonlinear optimization problem is not a local minimum could be an extremely hard problem.

2.4.1 Sensitivity to numerical errors

Today's computers and computers of the near future can manipulate and store only a finite amount of information. Computer methods for solving nonlinear problems typically use floating-point numbers to approximate real numbers. Since there are only finitely many floating-point numbers, these methods are bound to make numerical errors. These errors, although probably small considered in isolation, may have fundamental implications for the results. Moreover, since the solution of a nonlinear problem may be a real number that cannot be represented in finite space or displayed on a screen in finite time, the best we can hope for in general is a point close to a solution (preferably with some guarantee on its proximity to the solution) or an interval enclosing a solution. Consider, for instance, Wilkinson's problem, which consists in finding all solutions to the equation

\prod_{i=1}^{20} (x + i) + p\, x^{19} = 0  (2.3)

in the interval [-20.4, -9.4]. When p = 0, the equation obviously has 11 solutions. When p = 2^{-23} \approx 10^{-7}, it has no solution! Wilkinson's problem clearly indicates that a small numerical error (e.g., assume that p is the output of some numerical computation) can have fundamental implications for the results of an application. These numerical issues require users of numerical software to exercise great care when interpreting their results.
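Wilkinson's example can be reproduced with a few lines of NumPy (a sketch of ours, not the book's code; the root counts are themselves computed in floating point and so inherit the very sensitivity being discussed):

```python
import numpy as np

# Roots of prod_{i=1..20} (x + i) + p * x^19, counted inside [-20.4, -9.4].

def real_roots_in_interval(p, lo=-20.4, hi=-9.4, imag_tol=0.3):
    coeffs = np.poly(-np.arange(1, 21))   # coefficients of prod (x + i)
    coeffs[1] += p                        # perturb the x^19 coefficient
    r = np.roots(coeffs)
    real = r[np.abs(r.imag) < imag_tol].real
    return int(np.sum((real >= lo) & (real <= hi)))

n0 = real_roots_in_interval(0.0)          # unperturbed: roots -10, ..., -20
n1 = real_roots_in_interval(2.0 ** -23)   # perturbed: they split into complex pairs
print(n0, n1)
assert n0 == 11 and n1 == 0
```

The tiny perturbation, far below the size of the polynomial's coefficients, pushes eleven real roots off the real axis or out of the interval, which is exactly the pathology Wilkinson described.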


2.3 Multi-objective optimization

Most real-world engineering and scientific optimization problems (including optimal control, forecasting, least-square estimation of model parameters, etc.) are multi-objective, since they usually have several objectives that may conflict with each other and at the same time must all be satisfied. For example, if we refer to the optimal control of a complex chemical reaction, where the multiple products of this reaction can be controlled, we will usually want first to maximize some final products, but at the same time, as a second task, we would like to minimize the outcome of some other products, which could be in contradiction with the first goal. Another example: we may want to find the best fit to the data using the simplest possible hypothesis y(a, x_i) in Eq.(2.1). Obviously, these two objectives (best fitting and simplicity of the hypothesis) conflict with each other. For illustration let us consider a classical example, Schaffer's function [Schaffer (2001)]:

F(x) = (f_1(x), f_2(x)) = (-x^2, -(x - 1)^2).  (2.4)

Obviously, there is no single point that maximizes all the components of F; see Fig. 2.2. Multi-objective optimization (also called multi-criteria optimization) can be defined as the problem of finding an n-dimensional vector of variables, called optimization parameters, x = (x_1, ..., x_n), which satisfies all imposed constraints and optimizes an m-dimensional vector objective function f(x) = (f_1(x), ..., f_m(x)). Each component of f(x) corresponds to a particular performance criterion, and it is no surprise that these criteria are usually in conflict with each other. To solve the multi-objective optimization problem means to find an optimal solution x* which would minimize the values of all the components of the objective function.

Fig. 2.2 Example of Schaffer's function.

As we have mentioned in the previous chapter, many optimization problems come with restrictions imposed by the particular characteristics of the environment or by the resources available. For example, in the optimal control of molecules using laser pulses, the energy of the laser pulse or the minimum possible pulse duration is bounded, and as we will see, the optimal solution will depend on this bound. Such restrictions must be satisfied in order for a certain solution to be considered acceptable. These constraints can be expressed in the form of mathematical inequalities:

G_i(x) ≤ 0, i = 1, ..., k, (2.5)

or equalities:

H_i(x) = 0, i = 1, ..., k. (2.6)

Note that k, the number of imposed constraints, must be less than n, the number of decision variables, because if k > n the problem is said to be over-constrained: there are no degrees of freedom left for optimization.

2.3.1 Pareto front

As we have seen, a multi-objective optimization problem usually has no unique, perfect (or "Utopian") solution. However, one can introduce a set of nondominated, alternative solutions, known as the Pareto-optimal set, named after the brilliant Italian economist, sociologist and philosopher Vilfredo Pareto (1848-1923). Pareto-optimal solutions are also called efficient, non-dominated, or non-inferior solutions. We say that x* is Pareto optimal if there exists no feasible vector x which would decrease some criterion without causing a simultaneous increase in at least one other criterion. A Pareto optimum almost always yields multiple solutions, called non-inferior or non-dominated solutions. In other words, for problems having more than one objective function (say F_j, j = 1, 2, ..., M with M > 1), any two solutions x_1 and x_2 (having P decision variables each) relate in one of two ways: one dominates the other, or neither dominates. A solution x_1 is said to dominate a solution x_2 if both of the following conditions are true [Deb (1999)]:

1. The solution x_1 is no worse than x_2 in all objectives.
2. The solution x_1 is strictly better than x_2 in at least one objective.

The set of all non-dominated solutions constitutes the Pareto front. These solutions lie on the boundary of the design region, or on the locus of the tangent points of the objective functions. In general, it is not easy to find an analytical expression for the line or surface that contains these points; the normal procedure is to compute the points belonging to the Pareto-optimal front and their corresponding function values for each of the objectives, and when we have a sufficient number of them, to take the final decision. More formally, one can give the following definition of Pareto optimality. For an arbitrary minimization problem, dominance is defined as follows:

Pareto dominance: A vector u = (u_1, ..., u_n) is said to dominate v = (v_1, ..., v_n) if and only if u is partially less than v, i.e., u_i ≤ v_i for all i ∈ {1, ..., n}, and there exists some i ∈ {1, ..., n} such that u_i < v_i.


Pareto optimality: A solution x_u ∈ U is said to be Pareto-optimal if and only if there is no x_v ∈ U for which v = f(x_v) = (v_1, ..., v_n) dominates u = f(x_u) = (u_1, ..., u_n). Pareto optimality can be illustrated graphically (see Fig. 2.3) by considering the set of all feasible objective values, i.e., the set of all points in the objective space corresponding to feasible choices of the degrees of freedom.

Fig. 2.3 Example of the Pareto front.
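The dominance test and the extraction of the Pareto front from a finite set of candidate solutions can be sketched in a few lines (a Python illustration; Schaffer's problem is recast here for minimization, and the sampling grid is an arbitrary choice):

```python
def dominates(u, v):
    """u dominates v (minimization): u is no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Schaffer's problem recast for minimization: minimize (x^2, (x - 1)^2).
xs = [i / 10 for i in range(-10, 21)]          # sample x in [-1, 2]
front = pareto_front([(x * x, (x - 1) ** 2) for x in xs])
```

For this problem exactly the samples with 0 ≤ x ≤ 1 are non-dominated, which is the expected Pareto-optimal set.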

2.3.2 The weighted-sum method

In order to characterize the "quality" of a certain solution, we need some criteria to evaluate it. These criteria are expressed as functions of the decision variables, called objective functions. In our case, some of them will be in conflict with others, and some will have to be minimized while others are maximized. The objective functions may be commensurable (measured in the same units) or noncommensurable (measured in different units); in general, the objective functions we deal with in engineering optimization are noncommensurable. We will designate the objective functions as f_1(x), f_2(x), ..., f_n(x). Together they form a vector objective function f(x) defined by:

f(x) = (f_1(x), f_2(x), ..., f_n(x)). (2.7)

The easiest and perhaps most widely used method to handle a multi-objective optimization problem is the weighted-sum approach. In this approach the cost function is formulated as a weighted sum of the objectives:

F(x) = Σ_{i=1}^{N} r_i f_i(x). (2.8)

The weighting coefficients r_i are real values which express the relative "importance" of the objectives and control their contribution to the cost functional. However, this method has its disadvantages: depending on the problem, the weighted-sum approach can be particularly sensitive to the setting of the weights. An alternative way to determine the Pareto front and solve the multiobjective optimization problem is to use a multiobjective genetic algorithm [Zitzler (1999)]. In conclusion, multi-objective optimization can be identified as one of the most challenging classes of optimization problems.

2.4 Simplex method

In 1965 Nelder and Mead [Nelder (1965)] developed a simplex method for function minimization, which rapidly became one of the most popular optimization methods. It was based on an earlier numerical optimization algorithm by Spendley, Hext, and Himsworth [Spendley (1962)]. A simplex is the convex hull of N + 1 points in the N-dimensional search space; it is assumed that the points satisfy the non-degeneracy condition (i.e., the volume of the hull is not zero). In every iteration of the algorithm, the vertices of the current simplex are arranged in ascending order according to their objective function values (we consider a minimization problem):

f(x_1) ≤ f(x_2) ≤ ... ≤ f(x_{N+1}). (2.9)

Here we are going to describe the operations in more detail. We refer to x_1 as the best point and to x_{N+1} as the worst point. Let us introduce the mean of all points except the worst one:

<x> = (1/N) Σ_{i=1}^{N} x_i. (2.10)

The simplex method attempts to replace the current worst vertex by a new one that is generated by the following three operations: 1) "reflection", 2) "expansion", 3) "contraction". Only in the case these fail, a 4) "shrink" step is carried out. Let us first try to construct a reflection point as follows:

x_reflect = <x> + α(<x> - x_{N+1}). (2.11)

Fig. 2.4 Reflection scheme for the Simplex method.

If f(x_1) ≤ f(x_reflect) < f(x_N), i.e., if the reflected point improves on the worst point but is not better than the best point so far, then we replace x_{N+1} by x_reflect and the iteration is terminated.

Fig. 2.5 Expansion scheme for the Simplex method.

If f(x_reflect) < f(x_1), i.e., if the reflected point is better than the best point so far, then we create an expansion point

x_expand = <x> + γ(x_reflect - <x>), (2.12)

and replace x_{N+1} by the better of x_reflect and x_expand. If instead the reflected point would still be the worst if it replaced the worst point so far, i.e., f(x_reflect) ≥ f(x_N), then a contraction is attempted. Depending on whether the reflected point is better or worse than the worst point so far, two types of contraction are possible:

(1) If f(x_reflect) < f(x_{N+1}), then an outside contraction point is computed:

x_contract = <x> + β(x_reflect - <x>). (2.13)

(2) If f(x_reflect) ≥ f(x_{N+1}), then an inside contraction point is computed:

x_contract = <x> + β(x_{N+1} - <x>). (2.14)

In either case, if f(x_contract) < min(f(x_{N+1}), f(x_reflect)), then the worst point x_{N+1} is replaced by x_contract and the iteration is terminated.

Fig. 2.6 Contraction scheme for the Simplex method.

If all of the above have failed to generate a point that is better than the second worst, then all the vertices except the best are replaced by new points:

x_i' = x_1 + δ(x_i - x_1), i = 2, ..., N + 1. (2.15)

Fig. 2.7 Shrink scheme for the Simplex method.

According to Nelder and Mead, the purpose of these operations is that "the simplex adapts itself to the local landscape, elongating down inclined planes, changing direction on encountering a valley at an angle, and contracting in the neighborhood of a minimum". Clearly, the behavior of the search depends on the quality of the new points that are generated.
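The complete iteration can be sketched as follows (a Python illustration of the scheme described above, using the commonly recommended parameter values; the Rosenbrock test function and the starting simplex are arbitrary choices, not from the text):

```python
# Minimal Nelder-Mead sketch: sort, reflect, expand, contract, shrink.
def nelder_mead(f, simplex, alpha=1.0, beta=0.5, gamma=2.0, delta=0.5, iters=500):
    """Minimize f over an (N+1)-point simplex given as a list of lists."""
    n = len(simplex) - 1
    for _ in range(iters):
        simplex.sort(key=f)                    # f(x_1) <= ... <= f(x_{N+1})
        best, worst, second_worst = simplex[0], simplex[-1], simplex[-2]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        reflect = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
        if f(best) <= f(reflect) < f(second_worst):
            simplex[-1] = reflect              # plain reflection
        elif f(reflect) < f(best):             # try to expand
            expand = [c + gamma * (r - c) for c, r in zip(centroid, reflect)]
            simplex[-1] = expand if f(expand) < f(reflect) else reflect
        else:                                  # contraction (outside or inside)
            pivot = reflect if f(reflect) < f(worst) else worst
            contract = [c + beta * (p - c) for c, p in zip(centroid, pivot)]
            if f(contract) < min(f(worst), f(reflect)):
                simplex[-1] = contract
            else:                              # shrink all vertices toward the best
                simplex = [best] + [[b + delta * (v - b) for b, v in zip(best, p)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

# Minimize the Rosenbrock function from a crude starting simplex.
rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
x = nelder_mead(rosen, [[0.0, 0.0], [1.2, 0.0], [0.0, 0.8]])
```

The result converges to the vicinity of the minimum (1, 1) of the Rosenbrock function.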

The almost universally recommended values for the parameters are α = 1, β = δ = 0.5, and γ = 2. We illustrate the "reflection", "expansion", "contraction" and "shrink" steps for the case of two dimensions in Figs. 2.4-2.7. Depending on the step taken, the method requires either 1, 2, or N + 2 objective function evaluations per iteration. While still a rather popular optimization method, the simplex search is known to be frequently ineffective, in particular in cases of searching over multi-modal landscapes, and this method can easily be trapped in a local minimum.

2.5 Simulated annealing: "crystallizing" solutions

Stochastic optimization techniques are optimization algorithms that utilize some random elements. Unlike deterministic search algorithms, which locate optima by systematically searching the solution space using, for example, gradient information (like Newton's method), stochastic methods apply a degree of randomness to the decision-making process. Real engineering and scientific optimization problems are often characterized by a highly multimodal search space containing numerous local optima; under these conditions deterministic rules-based algorithms have difficulty, while stochastic methods persevere. The random nature of stochastic algorithms has discouraged their use in some applications, especially in the area of trajectory optimization, and there are problems, like the Travelling Salesman Problem (TSP), where deterministic algorithms work better. Nevertheless, the probabilistic elements in stochastic algorithms provide them with a capability not possessed by deterministic methods: the ability to escape from a local minimum, together with stability in the presence of noise. Stochastic methods like simulated annealing or the genetic algorithm are inspired by natural systems in physics and biology, are in general more suitable for global nonlinear optimization than the simplex method, and are also more robust. In the next pages we are going to discuss the two popular stochastic methods which we will use further in this book: simulated annealing and the genetic algorithm. In simulated annealing, the process by which atoms in molten metal cool and crystallize into their solid state is modelled. Genetic algorithms implement the Darwinian concept of natural selection to evolve a randomly generated set of mediocre initial guesses into a population of optimal solutions.

Simulated annealing is a technique first developed by Scott Kirkpatrick et al. [Kirkpatrick (1983)]. It can be seen as a generalization of a Monte Carlo method for examining the equations of state and frozen states of n-body systems. The algorithm derives its name because it emulates the metallurgical process of annealing, in which the metal reaches its most stable, minimum-energy crystalline structure. The objective of the annealing process is to cool the metal so slowly that all of its atoms align in the same direction, thus achieving a perfectly ordered crystal and the lowest possible energy state. Nowadays simulated annealing is one of the most highly regarded, well-understood and widely applied stochastic algorithms.

At the heart of the simulation lies the Metropolis algorithm [Metropolis (1953)], which models the behavior of large systems of particles in equilibrium at a given temperature and essentially consists of random perturbations of the system's parameters aimed at minimizing the total energy of the system. A perturbation that lowers the total energy is always accepted, and if the change in the total energy dE (or objective function) is positive, it is accepted with a probability given by the Boltzmann factor exp(-dE/T). This component is fully responsible for the stochastic nature of the algorithm and gives it its capability to avoid local minima. The crucial link between this simulation and the optimization process is the analogy drawn between the energy of a given configuration and the value of the objective function.

The simulation of annealing is conducted by extending the Metropolis algorithm with a temperature schedule, in an attempt to simulate the physical behavior of atoms as they cool from their liquid state into a solid state with minimum energy. The atomic structure, i.e., the set of problem parameters, is first initialized to a randomly determined state. The Metropolis algorithm is then run at the initial user-supplied simulated temperature for a number of iterations large enough to consider the system near thermal equilibrium. The system temperature is then decreased following a user-defined temperature schedule, and the Metropolis simulation is conducted at the new temperature. These temperature reductions continue until the simulated temperature of the system has reached (effectively) absolute zero. Conducted in this way, simulated annealing is able to search for a global optimum.
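A minimal sketch of the procedure (a Python illustration; the geometric cooling schedule, the starting temperature, and the one-dimensional multimodal test function are arbitrary choices, not prescribed by the text):

```python
# Simulated annealing with a Metropolis acceptance rule and geometric cooling.
import math, random

def simulated_annealing(energy, state, step, t0=25.0, cooling=0.95,
                        sweeps=100, moves=100):
    """Minimize `energy` starting from `state`; `step` proposes a random move."""
    random.seed(0)                          # fixed seed for reproducibility
    t = t0
    best, best_e = state, energy(state)
    for _ in range(sweeps):                 # one temperature level per sweep
        for _ in range(moves):              # Metropolis iterations at fixed T
            cand = step(state)
            dE = energy(cand) - energy(state)
            # Accept downhill moves always; uphill with Boltzmann probability.
            if dE <= 0 or random.random() < math.exp(-dE / t):
                state = cand
            if energy(state) < best_e:
                best, best_e = state, energy(state)
        t *= cooling                        # user-defined temperature schedule
    return best

# A multimodal test "energy" with many local minima and global minimum at x = 0.
e = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
move = lambda x: x + random.uniform(-0.5, 0.5)
x_opt = simulated_annealing(e, 5.0, move)
```

Started at the local minimum x = 5, the hot phase lets the walker hop over the barriers, and the cooling phase settles it near the global minimum at x = 0.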

While convergence of the algorithm to a globally optimal solution can statistically be guaranteed in the theoretical realm (it converges to a globally optimal solution with probability 1), application of the algorithm to real-world problems brings to light the impracticality of many of the algorithm's criteria for convergence. Claims of guaranteed convergence might thus be somewhat inflated: the conditions which must be in place for the simulated annealing algorithm to ensure an optimal solution greatly constrain the number of problems for which convergence can be guaranteed. Moreover, the guarantee requires that an exponential number of trials be performed, and for some problems (e.g., the TSP) even approximating an optimal solution arbitrarily closely requires more computation than a complete enumeration of the search space! Simulated annealing is thus not a panacea, but rather a powerful alternative in the family of stochastic optimization algorithms: as can be seen from the previous discussion, it remains a powerful optimization algorithm for multi-dimensional and multi-modal problems due to the probabilistic hill-climbing capability it possesses.

2.6 Introduction to genetic algorithms

Now let us discuss the genetic algorithm, which we are going to use as a tool in the following chapters. The genetic algorithm is a member of the class of stochastic search methods (in the previous section we discussed another stochastic search method, simulated annealing). It is based on a model of natural, biological evolution and belongs to an interdisciplinary research field with strong connections to biology, artificial intelligence, and decision support in almost any engineering discipline. The Darwinian theory of evolution explains the adaptive change of species by the principle of natural selection, which favors for survival and further evolution those species that are best adapted to their environmental conditions; it was formulated for the first time by the great British naturalist Charles Darwin (1809-1882) in 1859 in his book "The Origin of Species". The genetic algorithm became widely known after it was formally introduced in the 1970s by John Holland at the University of Michigan [Holland (1975), Holland (1978)]. Whereas most stochastic search methods operate on a single solution to the problem at hand, the genetic algorithm operates on a population of solutions.

In addition, the genetic algorithm exhibits a large degree of parallelism, making it possible to effectively exploit the computing power made available through parallel processing. To use genetic algorithms, one must represent a solution to the problem as a genome (or chromosome). The genetic algorithm then creates an initial population of solutions and applies genetic operators such as mutation and crossover [Sutton (1994)] to evolve the solutions in order to find the best one(s). Let us outline some of the basics of genetic algorithms. The three most important aspects of using genetic algorithms are:

1. definition and implementation of the genetic representation;
2. definition and implementation of the genetic operators;
3. definition of the objective (fitness) function.

Fig. 2.8 A general scheme for the genetic algorithm: parents are selected in proportion to their fitness.

Once these three steps have been performed, the genetic algorithm should work well. Beyond that, one can try many different variations to improve performance, find multiple optima, or parallelize the algorithm. The genetic algorithm is very simple, yet because of its flexibility it performs well on many different types of problems. There are many ways to modify the basic algorithm, and many parameters that can be adjusted.

Fig. 2.9 Mutation operation on a bit string.

Fig. 2.10 Crossover operation between two bit strings.

One can use different representations for the individual genomes in the genetic algorithm. Holland worked primarily with strings of bits [Holland (1975)], but one can use arrays, trees, lists, or any other object. One has to remember that each individual must represent a complete solution to the problem one is trying to optimize. Since the algorithm is separated from the representation of the problem, searches over mixed continuous/discrete variables are just as easy as searches over entirely discrete or entirely continuous variables. Whatever representation one decides to use, one must define the genetic operators (initialization, mutation, crossover, copy) for it. Basically, if one gets the objective function, the representation, and the operators right, the genetic algorithm will work well, and further variations on the algorithm and its parameters will result in only minor improvements.

After the seminal work of Holland, the most common representation for the individual genomes in the genetic algorithm is a string of bits. The reason is that the definition of the genetic operators in this case is very simple. The disadvantage of the bit-string representation is that some of the bits carry exponentially bigger weights than others; the result is that a random flip of a high-order bit can change the solution dramatically. As an alternative one can still use a bit string but employ a Gray code, an ordering of the 2^n binary numbers such that only one bit changes from one entry to the next. In this case a small perturbation (mutation) of the higher-order bits will not change the encoded number dramatically. However, Gray codes for 4 or more bits are not unique, even allowing for permutation or inversion of bits, and one also needs an algorithm for coding and decoding. That is why we are going to explain the definition of the genetic operators using the bit representation first.

Let us now discuss the main genetic operators: copy, crossover, and mutation. The copy or reproduction operator merely transfers the information of the "parent" to an individual of the next generation without any changes. The mutation operator introduces a certain amount of randomness to the search; usually the mutation is the application of the logical "NOT" operation to a single bit of a "gene" at a random position, see Fig. 2.9. It can help the search find solutions that crossover alone might not encounter, but it may also place the "offspring" far away from its "parent", with a poor probability of improving the fitness value.
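The binary-reflected Gray code mentioned above admits a particularly compact implementation (a standard construction; the sketch below is our own illustration):

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse transformation: recover the integer from its Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Neighbouring integers always differ in exactly one bit of their Gray encoding,
# so a single-bit mutation of a Gray-coded gene changes the value only gradually.
assert all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1 for i in range(100))
assert all(from_gray(to_gray(i)) == i for i in range(256))
```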

The primary purpose of the crossover operator is to pass "genetic material" from the previous generation to the subsequent generation. Typically the crossover operation is defined so that two individuals (the "parents") combine to produce two more individuals (the "children"). In a simple crossover, a random position is chosen at which each partner in a particular pair is divided into two pieces; each "parent" then exchanges a subsection of itself with its partner (see Fig. 2.10). Note that application of the crossover operation between identical "parents" leads to identical "children".

The selection method determines how individuals are chosen for mating. If one uses a selection method that picks only the best individual, the population will quickly converge to that individual. So the selector should be biased toward better individuals, but should also pick some that are not quite as good (but hopefully contain some good "genetic material"). Some of the most common methods include "roulette-wheel selection" (the likelihood of picking an individual is proportional to the individual's score), "tournament selection" (a number of individuals are picked using roulette-wheel selection, and then the best of these are chosen for mating), and "rank selection" (pick the best individuals every time).

Often the objective scores must be transformed in order to help the genetic algorithm maintain diversity or differentiate between very similar individuals. The transformation from raw objective scores to scaled fitness scores is called scaling. There are many different scaling algorithms; some of the most common are linear (fitness-proportionate) scaling, sigma truncation scaling, and sharing. Linear scaling transforms the objective score based on a linear relationship, using the maximum and minimum scores in the population as the transformation metric. Sigma truncation scaling uses the population's standard deviation to perform a similar transformation, but it truncates the poor performers to zero. Sharing derates the score of individuals that are similar to other individuals in the population. For a complete description of each of these methods, see Goldberg's book [Goldberg (1989)].

There are different implementations of the schedule of the genetic operations. Two of the most common genetic algorithm implementations are "simple" and "steady state". The simple genetic algorithm, described by Goldberg [Goldberg (1989)], is a generational algorithm in which the entire population is replaced each generation. In the steady-state genetic algorithm only a few individuals are replaced each "generation"; this type of replacement is often referred to as "overlapping populations".
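Putting the pieces together, a generational bit-string genetic algorithm with roulette-wheel selection, one-point crossover and single-bit mutation can be sketched as follows (a Python illustration; the "OneMax" fitness and all parameter values are toy choices, not from the text):

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60, p_mut=0.05):
    """Generational bit-string GA; returns the fittest genome found."""
    random.seed(1)                          # fixed seed for reproducibility
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(g) for g in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            # Roulette-wheel selection of two parents, then one-point crossover.
            p1, p2 = random.choices(pop, weights=scores, k=2)
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            if random.random() < p_mut:     # mutation: flip one random bit (NOT)
                i = random.randrange(n_bits)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# "OneMax" toy problem: maximize the number of 1-bits in the genome.
best = evolve(lambda g: sum(g) + 1)         # +1 keeps roulette weights positive
```

After a few dozen generations the population converges to genomes consisting almost entirely of ones.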

Sometimes the crossover operator and the selection method lead to fast convergence to a population of individuals that are almost exactly the same. When the population consists of similar individuals, the likelihood of finding new solutions typically decreases. On one hand, it is desired that the genetic algorithm finds "good" individuals; on the other hand, one needs to maintain diversity. As we have mentioned, the genetic algorithm can be used for both discrete and continuous optimization, and it is much more robust than other search methods in the case of a noisy environment, or if the search space has many local optima. The genetic search method has recently been applied to a great variety of optimization problems in science, for example to optimize the atomic structures of small clusters [Judson (1992); Deaven (1995); Michaelian (1998); Garzon (1998)]. In these works the global minimum of the cohesive energy was obtained for different cluster species using Lennard-Jones potentials [Judson (1992)], ionic potentials [Michaelian (1998)], or interaction potentials derived from a tight-binding Hamiltonian [Deaven (1995); Garzon (1998)]. Especially successful applications of the genetic algorithm have been performed in optimal control theory [Vajda (2001)].

However, if one is going to apply a "classical" version of the genetic algorithm to search for an optimal continuous and differentiable function, one has to take additional care about the smoothness and differentiability of the obtained solutions. The direct application of the mutation and crossover rules leads to the generation of "children" with discontinuities at the positions of the crossover or mutation operations, and such solutions do not belong to the class of our interest. In the next section we present an extension of the genetic algorithm that searches for optimal solutions in the class of smooth functions.

2.7 GA for a class of smooth (differentiable) functions

The following extension of the GA was originally developed to demonstrate how to obtain ground-state wave-functions of a quantum system confined in an external potential. As applications of this new technique we will consider the search for a ground-state function for a given system Hamiltonian, and an optimal shape of the electric field to control a nanoscale device. Although the ground-state problem belongs to the class of quadratic optimization, for which the GA is definitely not the fastest method, the technique can easily be applied to optimal control problems to obtain realistic solutions, continuous and with a limited absolute value of the local gradient (which corresponds to a finite time resolution of the control field).

Since this method was originally applied to quantum systems, we call it the Quantum Genetic Algorithm (QGA). In the case of a few-body interacting quantum system treated under the mean-field approximation (using Hartree-Fock or Density Functional theory), the corresponding minimization problem becomes nonlinear, and the GA can serve better, because it avoids local minima more easily than gradient-based methods do. Let us start with the description of a quantum system. We assume H to be the Hermitian Hamiltonian operator of an N-body quantum-mechanical system:

H = H_kin + H_pot + H_int, (2.16)

where (throughout this section we use atomic units, ħ = m = e = 1)

H_kin = -(1/2) Σ_{i=1}^{N} ∇_i²,  H_pot = Σ_{i=1}^{N} U(x_i),  H_int = Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} V(x_i - x_j). (2.17)

The operators H_kin, H_pot and H_int refer to the kinetic, potential and interaction energy, respectively. Let us first consider the quantum-mechanical ground-state problem for a system described by the Hamiltonian of Eq. (2.16). Let Ψ(x_1, x_2, ..., x_N) be an arbitrary N-body wavefunction, which we assume to be normalized, ⟨Ψ|Ψ⟩ = 1. One can write an inequality for the ground-state energy E_0 in this case:

E_0 ≤ ⟨Ψ|H|Ψ⟩. (2.18)

Starting with a population of trial wavefunctions, one can run the evolutionary procedure (GA) until the global minimum of the energy functional given by Eq. (2.18) is attained. For simplicity, let us first consider the ground-state problem for one particle in one dimension. In our approach a wavefunction Ψ(x) is discretized in real space.
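For one particle in one dimension, the fitness E[Ψ] = ⟨Ψ|H|Ψ⟩ can be evaluated on a mesh with a finite-difference kinetic term (a Python sketch in the stated atomic units; the infinite-well test case and grid size are illustrative choices):

```python
import math

def energy(psi, u, dx):
    """Discretized <Psi|H|Psi> for a real wavefunction on a uniform mesh,
    with Psi = 0 at the box walls (atomic units)."""
    norm = sum(p * p for p in psi) * dx
    psi = [p / math.sqrt(norm) for p in psi]
    # Forward differences, including the two wall segments where Psi = 0.
    diffs = [psi[0]] + [psi[i + 1] - psi[i] for i in range(len(psi) - 1)] + [-psi[-1]]
    kin = sum(d * d for d in diffs) / (2 * dx)      # (1/2) * integral |Psi'|^2
    pot = sum(ui * p * p for ui, p in zip(u, psi)) * dx
    return kin + pot

# Infinite well of width 1 (U = 0 inside): exact ground state sin(pi x).
n = 2000
dx = 1.0 / (n + 1)
xs = [(i + 1) * dx for i in range(n)]
e_exact = energy([math.sin(math.pi * x) for x in xs], [0.0] * n, dx)
e_trial = energy([x * (1 - x) for x in xs], [0.0] * n, dx)
```

The exact ground state gives E_0 = π²/2 ≈ 4.93, while any other trial function, e.g. x(1 - x), gives a larger value of the functional, in agreement with the variational inequality (2.18).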

The wavefunction is represented on the mesh {x_i}, i = 1, ..., L, where L is the number of discretization points, by the "genetic code" vector Ψ_j = Ψ(x_i) (see Fig. 2.11).

Fig. 2.11 Representation of the wavefunction Ψ(x), discretized in real space, as a genetic code vector.

The genetic algorithm we propose for obtaining the ground state of a quantum system can be described as follows:
(i) We create a random initial population {Ψ_j(x)} consisting of N_pop trial wave functions.
(ii) The fitness E[Ψ_j] of all individuals is determined.
(iii) A new population {Ψ_j(x)} is created through application of the genetic operators; it replaces the old one.
(iv) The fitness of the new generation is evaluated.
(v) Steps (iii) and (iv) are repeated for successive generations until convergence is achieved and the ground-state wave function is found.

Fig. 2.12 Two randomly chosen wavefunctions for the crossover operation. The vertical dashed line shows the position of the crossover.

Fig. 2.13 Result of the direct application of the "classical" crossover operation. Note the discontinuity of the function Ψ at the position of the crossover operation, which leads to an extremely high kinetic energy of the "offspring".

Usually, real-space calculations deal with boundary conditions on a box.

Therefore, in order to describe a wave function within a given interval a ≤ x ≤ b in one dimension, we have to choose boundary conditions for Ψ(a) and Ψ(b). For simplicity we set Ψ(a) = Ψ(b) = 0, i.e., we consider a well with infinite walls at x = a and x = b. Inside the box one can simulate different kinds of external potentials; one can also employ, for example, periodic boundary conditions for a "ring" system. If the size of the box is large enough, boundary effects on the results of our calculations can be minimized. As the initial population of wave functions satisfying the boundary conditions Ψ_j(a) = Ψ_j(b) = 0 we choose Gaussian-like functions of the form

Ψ_j(x) = A_j exp(-(x - X_j)²/σ_j²) (x - a)(b - x), j = 1, ..., N_pop, (2.19)

with random values of the peak position X_j ∈ [a, b] and of the width σ_j ∈ (0, b - a], whereas the amplitude A_j is determined from the normalization condition ∫|Ψ(x)|² dx = 1 for given values of X_j and σ_j. If any approximate form of the solution is known, one can significantly reduce computational costs by choosing a wise guess for the initial wavefunctions, generating the random initial population "near" this solution; after a few iterations successful offsprings will converge to the improved solution. It is also useful to generate the initial population with symmetry properties reflecting the symmetry of the Hamiltonian.

Fig. 2.14 Example of the smooth step function St(x) used in the "uncertain" crossover operation (see text).

As we mentioned above, we have to define three kinds of operations on the individuals: the copy, the mutation of a wavefunction, and the crossover between two wavefunctions (see Fig. 2.12). While the copy operation has the same meaning as in previous applications of the GA, the crossover and mutation operations have to be redefined for the quantum-mechanical case. The reason is that a straightforward application of the crossover operation between two "parents" unavoidably produces "children" with a discontinuity at the position of the crossover (see Fig. 2.13). This means that the "offsprings" have an infinitely (practically, very large) kinetic energy, and therefore cannot be considered good candidates for the ground-state wavefunction. To avoid this problem we suggested a new modification of the genetic operations applicable to smooth and differentiable wavefunctions. We shall show that the QGA successfully finds solutions starting from a random population defined by Eq. (2.19).

The smooth or "uncertain" crossover is defined as follows. Let us take two randomly chosen "parent" functions Ψ_1^(s)(x) and Ψ_2^(s)(x) (see Fig. 2.12). We construct two new functions Ψ_1^(s+1)(x) and Ψ_2^(s+1)(x) as

Ψ_1^(s+1)(x) = Ψ_1^(s)(x) St(x) + Ψ_2^(s)(x) (1 - St(x)),
Ψ_2^(s+1)(x) = Ψ_2^(s)(x) St(x) + Ψ_1^(s)(x) (1 - St(x)), (2.20)

Fig. 2.15 Result of the application of the "smooth" crossover operation. The vertical dashed line shows the position of the crossover operation.

where St(x) is a smooth step function involved in the crossover operation. We consider St(x) = (1 + tanh((x − x_0)/k_c))/2, where x_0 is chosen randomly (x_0 ∈ (a, b)) and k_c is a parameter which allows one to control the sharpness of the crossover. An example of the function St(x) is shown in Fig. 2.14. In the limit k_c → 0 one obtains the usual Heaviside step function St(x) = θ(x − x_0), and the transformation Eq. (2.20) becomes the "classical" crossover operation. Note that the crossover operation does not violate the boundary conditions, and that application of the crossover between identical wavefunctions generates the same wavefunctions. The result of the smooth crossover is presented in Fig. 2.15.

The mutation operation in the quantum case must also take care of the smoothness of the generated "offsprings". In the "classical" GA it is not possible to change the value of the wave function randomly at a given point without producing dramatic changes in the kinetic energy of the state. To avoid this problem we define a new mutation operation as

Ψ^(s+1)(x) = Ψ^(s)(x) + Ψ_r(x),   (2.21)

where Ψ_r(x) is a random mutation function. For simplicity we choose Ψ_r(x) as a Gaussian-like function Ψ_r(x) = B exp(−(x_r − x)²/k_m)(x − a)(b − x) with a random center x_r ∈ (a, b), a random width k_m ∈ (0, b − a), and a small amplitude B that can be either positive or negative. Note that the mutation defined in this way also does not violate the boundary conditions. Thus, for each step of the QGA iteration we randomly perform copy, crossover and mutation operations. After each application of a genetic operation (except copying) the newly created functions are normalized.
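A matching sketch of the smooth mutation of Eq. (2.21), again with illustrative parameters: a small random Gaussian bump, multiplied by the (x − a)(b − x) envelope so the boundary values stay zero, is added to the parent, and the offspring is then re-normalized as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]

def normalize(psi):
    """Rescale psi so that the discretized norm integral equals one."""
    return psi / np.sqrt(np.sum(psi**2) * dx)

def mutate(psi, a=0.0, b=1.0, amp=0.05):
    """Smooth mutation of Eq. (2.21): add a boundary-respecting Gaussian bump."""
    xr = rng.uniform(a, b)                 # random centre x_r of the bump
    km = rng.uniform(0.01, b - a)          # random width parameter k_m
    B = amp * rng.choice([-1.0, 1.0])      # small amplitude of random sign
    bump = B * np.exp(-(xr - x)**2 / km) * (x - a) * (b - x)
    return psi + bump

psi = normalize(np.sin(np.pi * x))         # ground state of the unit infinite well
child = normalize(mutate(psi))
```

The offspring is still smooth, still vanishes at the walls, and is exactly normalized after the final step.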

It is easy to extend the quantum genetic algorithm to treat quantum systems of a few interacting particles in two dimensions. For a simplified study let us consider a two-particle system described by a wave function Ψ_HF(r_1, r_2), with r = (x, y), which is a Slater determinant built of orthogonal and normalized one-particle wave functions ψ_ν(r), ν = 1, 2. This means that the optimized Ψ_HF will represent the exact ground-state wave function for the case of noninteracting particles, whereas for the interacting case Ψ_HF will correspond to the Hartree-Fock approximation to the ground-state wave function. We perform calculations on a finite region Ω = {(x, y), 0 ≤ x ≤ d, 0 ≤ y ≤ d}, where we discretize the real space, and we assume again that the external potential outside Ω is infinitely high.

For this purpose we construct an initial population of N_pop trial two-body wave functions {Ψ_i}, i = 1, ..., N_pop, using Gaussian-like one-particle wave functions of the form

ψ_ν(x, y) = A_ν exp(−(x − x_ν)²/σ_x − (y − y_ν)²/σ_y) x(d − x) y(d − y),   (2.22)

with ν = 1, 2 and random values of x_ν, y_ν and σ_x, σ_y for each wave function. The amplitude A_ν is calculated from the normalization condition ∫∫|ψ_ν(x, y)|² dxdy = 1. Note that the wave functions ψ_ν(x, y) defined in this way fulfill the zero condition on the boundary ∂Ω:

ψ_ν(x, y)|_∂Ω = 0,  i = 1, ..., N_pop.   (2.23)

The so constructed initial population {Ψ_i} corresponds to the initial generation, and we apply the QGA to minimize the energy. By virtue of the variational principle, the fitness of each individual Ψ_i of the population is determined by evaluating the energy functional

E_i = E[Ψ_i] = ∫ Ψ_i*(r_1, r_2) H(r_1, r_2) Ψ_i(r_1, r_2) dr_1 dr_2,   (2.24)

where H is the Hamiltonian of the corresponding problem. This means that the expectation value of the energy for a given individual is a measure of its fitness. As in the one-dimensional case, when the QGA finds the global minimum, it corresponds to the ground state of H.

Now we define the smooth crossover in two dimensions. Given two randomly chosen single-particle "parent" functions ψ_1^(old)(x, y) and ψ_2^(old)(x, y), one can construct two new functions ψ_1^(new)(x, y), ψ_2^(new)(x, y) as

ψ_1^(new)(x, y) = ψ_1^(old)(x, y) St(x, y) + ψ_2^(old)(x, y)(1 − St(x, y)),
ψ_2^(new)(x, y) = ψ_2^(old)(x, y) St(x, y) + ψ_1^(old)(x, y)(1 − St(x, y)),   (2.25)

where St(x, y) is a 2D smooth step function which produces the crossover operation. We define St(x, y) = (1 + tanh((ax + by + c)/k_s))/2, where a, b, c are chosen randomly, so that the line ax + by + c = 0 cuts Ω into two pieces, and k_s is a parameter which allows one to control the sharpness of the crossover operation.
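The fitness evaluation of Eq. (2.24) reduces, on a grid, to a Rayleigh quotient. A minimal one-dimensional sketch follows; the book uses a high-order finite-difference formula for the kinetic term, while here a simple three-point Laplacian is enough to illustrate the idea (grid size and potential are illustrative).

```python
import numpy as np

def energy(psi, x, V):
    """Rayleigh quotient <psi|H|psi>/<psi|psi> for H = -1/2 d^2/dx^2 + V(x)
    (atomic units), with a three-point finite difference for the Laplacian."""
    dx = x[1] - x[0]
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    kinetic = -0.5 * np.sum(psi * lap) * dx
    potential = np.sum(V * psi**2) * dx
    norm = np.sum(psi**2) * dx
    return (kinetic + potential) / norm

x = np.linspace(0.0, 1.0, 2001)
V = np.zeros_like(x)                    # infinite well: the walls are the grid edges
E0 = energy(np.sin(np.pi * x), x, V)    # exact continuum value: pi^2/2 = 4.9348...
```

Evaluating the functional on the exact ground state of the unit infinite well reproduces π²/2 to high accuracy, which is a convenient sanity check for any implementation of the fitness function.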

In the same manner we modify the mutation operation: for a random "parent" ψ^(old)(x, y),

ψ^(new)(x, y) = ψ^(old)(x, y) + ψ_r(x, y),   (2.26)

where ψ_r(x, y) is a random mutation function. We choose ψ_r(x, y) as a Gaussian-like function ψ_r(x, y) = A_r exp(−(x_r − x)²/R_x² − (y_r − y)²/R_y²) x(d − x) y(d − y) with random values of x_r, y_r, R_x, R_y and A_r, whose sign is chosen randomly. Note that application of the mutation operation defined above does not violate the boundary conditions. Inside the box Ω one can simulate different kinds of external potentials; if the size of the box is large enough, boundary effects become negligible.

As discussed before, for each iteration of the QGA procedure we randomly perform copy, crossover and mutation operations. Concerning our choice of the GA parameters, for the following examples we have used P_m = 0.015 for the probability of a mutation and P_c = 0.485 for the probability of a crossover operation; in the rest of the cases we perform the copy operation. After each application of a genetic operation the newly created functions should be normalized and orthogonalized. Then the fitness of the individuals is evaluated and the fittest individuals are selected. The procedure is repeated until convergence of the fitness function (the energy of the system) to a minimal value is reached. During our calculations we set different sizes of the population, up to N_pop = 1000; however, a population size of only 200 "parents" usually guarantees good convergence of the algorithm.

2.8 Application of the GA to the eigenproblem

In this section we present results of ground and excited state calculations for interacting particles in a confined quantum system (like a quantum dot) using the quantum genetic algorithm. First we perform calculations of the ground state for different simple one- and two-dimensional systems and compare the obtained results with known analytical solutions. This serves as a good test for the method developed in this work. With the help of the QGA we then investigate the formation of a "Wigner molecule" in systems of a few confined electrons, and we study two different mechanisms for the so-called "melting of the Wigner molecule", namely melting due to thermal and due to quantum fluctuations. Finally, we compute the partition function and the excitation spectra of strongly interacting few-body systems.

2.8.1 The ground state problem in one and two dimensions

With the purpose of testing the QGA, we first apply it to calculate the ground state wave function Ψ(x) for different external potentials in one and two dimensions. In the figures presented below we show the probability density of the ground state and the behavior of the fitness function during the iterative GA procedure. For each iteration of the QGA we evaluate the fitness function for the different individuals of the population, E_j = E[Ψ_j] = ⟨Ψ_j|H|Ψ_j⟩, and then follow the steps described above. This process is repeated until the values of the fitness function converge to the minimal value of the energy. The mean population energy is defined using the calculated energies of all population members.

Let us start from the ground state problem for one particle captured in the region [0, L] of the infinite square well. The analytical solution gives the lowest energy state with energy E = π²/2L², corresponding to the ground state wavefunction

Ψ(x) = √(2/L) sin(πx/L).   (2.27)

In Fig. 2.16 we show the calculated ground state particle density |Ψ(x)|² for a potential well with infinite walls at x = 0 and x = 1 (throughout this section we use atomic units ħ = e = m = 1). In the inset of Fig. 2.16 we show the evolution of the mean energy of the population. It is clear that the QGA converges rapidly to the ground state: it converges after only 30 iterations, and the ground-state energy calculated with our method is very close to the exact value E = π²/2 = 4.9348, up to an error of 10⁻⁵ % already after 20 iterations.

We also performed calculations for other analytically solvable problems, namely the harmonic potential U(x) = ½ω²(x − 0.5)². In this case the ground state energy is E = ω/2, and the ground state wavefunction is given by

Ψ(x) = (ω/π)^(1/4) exp(−ω(x − 0.5)²/2).   (2.28)

The value of ω is chosen to be rather large, ω = 2√10 · 10², therefore one can neglect the influence of the walls on the final result. In Fig. 2.17 the calculated ground state density is shown for this value of ω. In the inset of Fig. 2.17 we show the evolution of the mean energy of the population. It converges more slowly than in the case of the infinite well, because the QGA has to find a rather localized solution. For the ground-state energy the QGA yields E^QGA = 316.29, while the analytical result is E = 316.22, which represents a discrepancy of less than 0.02% after 30 iterations.
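To make the whole procedure concrete, the following self-contained sketch runs a miniature version of the QGA for the infinite-well test above. The population size, operation parameters and the three-point kinetic-energy formula are deliberately simple and illustrative rather than those used in the book; the best fitness should nevertheless approach the exact value π²/2 ≈ 4.9348.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
env = x * (1.0 - x)                        # enforces psi = 0 at the walls

def normalize(psi):
    return psi / np.sqrt(np.sum(psi**2) * dx)

def energy(psi):
    """Rayleigh quotient for H = -1/2 d^2/dx^2 (infinite well, atomic units)."""
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * np.sum(psi * lap) * dx / (np.sum(psi**2) * dx)

def random_individual():
    """Gaussian-like trial wavefunction, Eq. (2.19), with random centre/width."""
    X, sigma = rng.uniform(0.0, 1.0), rng.uniform(0.05, 1.0)
    return normalize(np.exp(-(x - X)**2 / sigma) * env)

def crossover(p1, p2):
    """Smooth crossover, Eq. (2.20), at a random position."""
    st = 0.5 * (1.0 + np.tanh((x - rng.uniform(0.0, 1.0)) / 0.05))
    return p1 * st + p2 * (1.0 - st)

def mutate(psi):
    """Smooth mutation, Eq. (2.21): add a small boundary-respecting bump."""
    bump = 0.2 * rng.choice([-1.0, 1.0]) * np.exp(-(x - rng.uniform(0, 1))**2 / 0.05) * env
    return psi + bump

pop = [random_individual() for _ in range(20)]
for generation in range(100):
    pop.sort(key=energy)
    survivors = pop[:10]                   # elitist selection: keep the fittest half
    children = []
    while len(children) < 10:
        r = rng.uniform()
        if r < 0.485:                      # crossover with probability Pc = 0.485
            i, j = rng.integers(0, 10, size=2)
            child = crossover(survivors[i], survivors[j])
        elif r < 0.5:                      # mutation with probability Pm = 0.015
            child = mutate(survivors[rng.integers(0, 10)])
        else:                              # otherwise copy a survivor
            child = survivors[rng.integers(0, 10)].copy()
        children.append(normalize(child))
    pop = survivors + children

E_best = energy(min(pop, key=energy))      # exact continuum value: pi^2/2 = 4.9348...
```

Because the fittest individuals are always kept, the best energy decreases monotonically toward the variational minimum of the discretized Hamiltonian.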

In order to check whether the QGA finds a solution in more complicated cases, in Fig. 2.18 we present the calculated ground state density for an anharmonic potential with two distinct minima, of the form

U(x) = k_0 − k_2 x² + k_3 x³ + k_4 x⁴,   (2.29)

with k_0 = −137.5, k_2 = 7, k_3 = 0.7074997, and k_4 = 1. We use these values of the parameters in order to compare with existing calculations performed using the spectral method [Feit (1982)]. For this potential the spatial distribution of the electron density has not one but two maxima. In the inset of Fig. 2.18 we also show the evolution of the mean energy of the population. It converges more slowly than in the previous two cases; the reason is that the QGA operates in a more complicated space of possible solutions. Nevertheless, the algorithm converges after 200 iterations. Our calculated ground-state energy is E^QGA = −144.87, whereas the value obtained by using the spectral method is E = −144.96, i.e., the discrepancy is less than 0.06% after 200 iterations.

Fig. 2.16 Ground state spatial density distribution of an electron |Ψ(x)|² in a one-dimensional infinite well (defined on the interval [0, 1]) calculated using the QGA. Inset: evolution of the fitness as a function of the number of iterations.

Fig. 2.17 Calculated spatial distribution of the electron density |Ψ(x)|² (solid line) for an electron in a 1D harmonic potential (dotted line). The inset shows the evolution of the fitness as a function of the number of iterations.

Our next example deals with the ground state of an electron subject to a 1D potential produced by a chain of 5 positively charged ions, which is given by

U(x) = Σ_i Q/√((x − X_i)² + δ),   (2.30)

where Q is the charge of each ion and X_i its position; here δ is a cutoff parameter for the Coulomb potential in 1D. This smooth 1D ionic potential has been used, for instance, in the context of the Coulomb explosion of small clusters induced by intense femtosecond laser pulses [Garcia (2000)]. In the QGA calculations for this potential, and in order to speed up the convergence process, we use for the initial population trial functions of the form

Ψ_j(x) = Σ_i A_i exp(−(x − X_i)²/σ_i²)(x − a)(b − x),   (2.31)

where the amplitudes A_i, the widths 0 < σ_i < b − a and the peak positions X_i ∈ (a, b) are random numbers. In our calculations we have used Q = 5, ion positions X_i = 13, 19, 25, 31, 37, a = 0, b = 50, and δ = 0.25. Note that the calculated probability distribution, shown in Fig. 2.19, has the same symmetry properties as the external potential U(x), having 5 peaks. In this example the genetic process converges after only 20 iterations; in the inset of Fig. 2.19 we plot the evolution of the average fitness of the population. It is clear that using an appropriate initial guess for the form of the wave function can considerably accelerate the convergence of the QGA.

Fig. 2.18 Calculated spatial distribution of the electron density |Ψ(x)|² (solid line) for an electron in an anharmonic potential of the fourth order (dotted line). The inset shows the evolution of the fitness as a function of the number of iterations.

Now we study the simplest case of the few-body problem. For simplicity we first consider a system of two noninteracting fermions trapped in a harmonic potential V(x) = ½ω²(x − 0.5)². We assume that the wavefunction of the electrons has a symmetric spin part, and therefore we search for antisymmetric spatial solutions. The triplet-state wave function of two noninteracting electrons having the lowest energy can be written as Ψ(x, x') = [φ_1(x)φ_2(x') − φ_2(x)φ_1(x')]/√2, where φ_1(x) and φ_2(x) are the ground state and the first excited state of the single-particle Hamiltonian.

With the help of the QGA we have determined φ_1(x) and φ_2(x), and consequently Ψ(x, x'), for the harmonic potential described above. For these calculations the individuals of the successive populations were the pairs {φ_1(x), φ_2(x)}. In Fig. 2.20 the densities |φ_1(x)|² and |φ_2(x)|² are shown for ω = √20 · 10². For the ground-state energy the QGA yields E^QGA = 894.90, while the analytical result is E = 2ω = 894.43. Note that this procedure yields both the two-particle triplet state with the lowest energy and the first two eigenstates of the single-particle Hamiltonian.

Fig. 2.19 Calculated spatial distribution of the electron density |Ψ(x)|² (solid line) for an electron in a potential produced by a chain of positive ions (dotted line). The inset shows the convergence behavior of the fitness function.

Let us now test the QGA by determining the ground state wavefunction in two dimensions under different external potentials. To calculate the kinetic energy term in the Hamiltonian given by Eq. (2.16) we use a high-order finite-difference formula [Bickley (1941)]. The region of discretization is chosen as follows: for practical purposes we consider a rectangular box Ω = {(x, y), 0 ≤ x ≤ d, 0 ≤ y ≤ d} in two dimensions, and we assume that the external potential outside Ω is infinitely high. The value of the length d is 1 atomic unit, and in all 2D examples presented here we use a lattice with 100 by 100 grid points.

In our first example we evaluate the ground state of a single electron confined in the infinite well. The calculated ground state density is shown in Fig. 2.21; we found excellent agreement with the analytical solution, Ψ(x, y) = 2 sin(πx) sin(πy).

In our second example we evaluate the ground state of two noninteracting particles (in a triplet state) confined in the infinite well. The ground state of this system is degenerate, and the wave functions corresponding to the different degenerate states are antisymmetric. One possible solution is given by

Ψ(x_1, y_1, x_2, y_2) = 4[sin(πx_1) sin(πy_1) sin(2πx_2) sin(πy_2) − sin(πx_2) sin(πy_2) sin(2πx_1) sin(πy_1)].   (2.32)

The QGA procedure converges rapidly to a solution having the same symmetry as the function given by Eq. (2.32). In Fig. 2.22 we show the ground state spatial density ρ_QGA(r) = ∫|Ψ_QGA(r, r')|² dr' obtained from the QGA. The overall shape of the solution and its symmetry are in good agreement with the exact result. The calculated value of the ground energy (E^QGA = 34.543619) is also in very good agreement with the analytical result (E = 7π²/2 = 34.543615), the relative error being less than 10⁻⁵ %.

Fig. 2.20 Calculated densities |φ_1(x)|² (solid line) and |φ_2(x)|² (dotted line) of the two orbitals which build the first triplet-state wave function for two noninteracting electrons in a 1D harmonic potential (dashed line).

Fig. 2.21 Ground state spatial density distribution of an electron |Ψ(x)|² in an infinite well calculated using the QGA.

In the next example we determine the ground state of two noninteracting particles (in a triplet state) in a 2D harmonic potential described by

U(x, y) = ½ω²((x − 0.5)² + (y − 0.5)²).   (2.33)

The analytical solution for one of the degenerate triplet states reads

Ψ(x_1, y_1, x_2, y_2) = (ω^(3/2)/π) exp(−(ω/2)((x_1 − 0.5)² + (y_1 − 0.5)² + (x_2 − 0.5)² + (y_2 − 0.5)²)) (x_2 − x_1).   (2.34)

In Fig. 2.23 we present the calculated ground state density ρ_QGA(x, y) for this problem, using ω = 10². In this case there is also good agreement between the result obtained from the QGA and the exact analytical solution. The calculated value of the ground-state energy is E^QGA = 300.0024, which compares well with the exact one (E = 3ω = 300).

Fig. 2.22 Density distribution ρ_QGA(x, y) for the ground state of two noninteracting fermionic particles (triplet state) in a square infinite well.

Note.V) f ° r the ground state of two noninteracting fermionic particles (triplet state) in an external harmonic potential.2 Extension lems of the QGA to quantum statistical prob- Now we are starting to discuss a possible generalization of the QGA for the case of quantum statistical problems.35) . that ^>i corresponds to the ground state.. It can be shown that the partition function Z of the quantum system satisfies the following inequality [Peierls (1936)]: Z>Z' = ^e-W*\A\*k\ k (2. one needs only a small modification of the QGA. Let {vPfc} be an arbitrary orthonormal set of N body wave functions (\J/fc = $>k(x1). that we count eigenstates in such way.xiy)). 2..66 Optimal control and forecasting of complex dynamical systems Fig.23 Density distribution PQGA{X. 2. but also exited states of a few body quantum system. In order to compute not only a ground state. For this purpose we use a variational formulation for the partition function Z of a many body quantum system.8.

Here the Hamiltonian H is defined by Eq. (2.16) and the dimensionless parameter β is proportional to the inverse temperature, β = 1/(k_B T), where k_B is the Boltzmann constant. The equality in Eq. (2.35) holds only if {Ψ_k} is the complete set of eigenfunctions of the Hamiltonian H. Therefore, one can obtain a good approximation to the eigenfunctions and eigenvalues of the Hamiltonian H, and also to the partition function Z, by running an evolutionary procedure until the sum in Eq. (2.35) attains its maximum possible value. In the limit when the temperature goes to zero (T → 0, or β → +∞) Eq. (2.35) becomes equivalent to the variational principle for the ground state energy E_0:

E_0 ≤ ⟨Ψ_1|H|Ψ_1⟩.   (2.36)

In practice, for finite temperature calculations using Eq. (2.35) one can take into account only the lowest M levels of the system, neglecting the occupation of the levels with higher energies. The number of levels M can be chosen in such a way that the occupation of the neglected levels does not exceed a certain value at the given temperature T.

Full ab initio quantum mechanical calculations of the excitation spectrum are quite difficult even for very few interacting particles, because the amount of data rapidly increases with the number of particles in the system [Bruce (2000)]. Quantum mechanical calculations with exact many-body wave functions are limited to about 3−4 particles in 2D, which corresponds to 6- or 8-dimensional wavefunctions in configuration space. For a simplified study of the problem we therefore reduce the dimension of the many-particle wavefunction Ψ using the Hartree-Fock approximation. This is the simplest way to account for electron-electron interactions within the quantum system [Yannouleas (1999)]. The Hartree-Fock level is implemented across nuclear and atomic physics as a first step towards the solution of the quantum many-body problem, and it suffices for a qualitative discussion of phase transitions and thermal behavior [Staroverov (2002)].

Let us consider a system of N spinless particles occupying K single-particle states (N < K). For this system one can construct M = K!/(N!(K − N)!) N-particle wavefunctions Ψ_k(x_1, ..., x_N), corresponding to all possible configurations. Following the spirit of the QGA, we assume that each "parent's" gene represents a set of M orthonormal wavefunctions {Ψ_k}, 1 ≤ k ≤ M.
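The bound of Eq. (2.35) is easy to verify numerically in a finite-dimensional toy model. In the sketch below a random symmetric matrix stands in for H and a random orthonormal set stands in for {Ψ_k}; this only illustrates the Peierls inequality itself, not the QGA.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 8, 0.7

# A random symmetric matrix plays the role of the Hamiltonian H.
A = rng.normal(size=(n, n))
H = (A + A.T) / 2.0

# Exact partition function Z = sum_k exp(-beta * E_k) from the eigenvalues.
Z_exact = np.sum(np.exp(-beta * np.linalg.eigvalsh(H)))

# Peierls bound, Eq. (2.35): any orthonormal set {phi_k} gives Z' <= Z.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))     # random orthonormal columns
diag = np.sum(Q * (H @ Q), axis=0)               # <phi_k|H|phi_k> for each column
Z_prime = np.sum(np.exp(-beta * diag))

# Using the true eigenvectors instead saturates the bound: Z' = Z.
evecs = np.linalg.eigh(H)[1]
Z_saturated = np.sum(np.exp(-beta * np.sum(evecs * (H @ evecs), axis=0)))
```

Maximizing Z' over orthonormal sets, which is what the QGA does, therefore drives the trial set toward the true eigenbasis.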

Each N-particle wavefunction is represented by a Slater determinant,

Ψ_k(x_1, ..., x_N) = (1/√N!) det|ψ_{i_1}(x_1) ... ψ_{i_N}(x_N)|,   (2.37)

where the indexes 1 ≤ i_1, ..., i_N ≤ K count the single-particle states. The algorithm is implemented as follows. For simplicity let us consider one-dimensional problems; the generalization of the presented algorithm to higher dimensions is straightforward. As in the previous calculations, we work on a finite region [a, b] where we discretize the real space. An initial population of N_pop trial many-body states {Ψ_k^j}, j = 1, ..., N_pop, is chosen randomly. For this purpose we construct K × N_pop single-particle wavefunctions ψ_i^j using a Gaussian-like form

ψ_i^j(x) = A_i^j exp(−(x − x_i^j)²/σ_i^j)(x − a)(b − x),   (2.38)

where 1 ≤ i ≤ K and the index j denotes that the single-particle wavefunction ψ_i^j(x) belongs to the "parent" with index j. We generate random values x_i^j ∈ (a, b) and σ_i^j ∈ (0, b − a] for each single-particle wavefunction. The amplitude A_i^j is calculated from the normalization condition ∫_a^b |ψ_i^j(x)|² dx = 1. Note that the wavefunctions ψ_i^j(x) defined in this way fulfill the zero conditions on the boundaries. The initial population {ψ_i^j} is orthogonalized for each "parent".

As in the case of searching for the ground state, the "offsprings" of the initial generation are formed through the application of genetic operators on the genetic codes: we use the "quantum" ("uncertain") analogies of the genetic operations defined above, i.e., the "smooth" mutation and crossover applied to the single-particle wavefunctions ψ_i^j(x). Note that crossover operations are applied between randomly chosen single-particle wave functions ψ_i^{j1}(x) and ψ_i^{j2}(x) ("parents" j_1 and j_2, respectively) corresponding to the same single-particle excitation state i. For each iteration of the QGA procedure we randomly perform copy, crossover and mutation operations, and the fitness function to be maximized by the QGA is the sum Z' defined in Eq. (2.35). After each application of a genetic operation the newly created single-particle wavefunctions are normalized and orthogonalized.
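The re-orthogonalization step applied after each genetic operation can be sketched as a grid-based Gram-Schmidt procedure; the implementation below is illustrative, and the Gaussian centres and widths are arbitrary.

```python
import numpy as np

def orthonormalize(states, dx):
    """Gram-Schmidt on grid-sampled real wavefunctions: subtract the projections
    onto the previously processed states, then normalize each result."""
    out = []
    for psi in states:
        phi = psi.copy()
        for prev in out:
            phi = phi - np.sum(prev * phi) * dx * prev
        out.append(phi / np.sqrt(np.sum(phi**2) * dx))
    return out

x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
env = x * (1.0 - x)                              # zero values at the boundaries
raw = [np.exp(-(x - c)**2 / 0.05) * env for c in (0.3, 0.5, 0.7)]
states = orthonormalize(raw, dx)
```

The resulting set is orthonormal with respect to the discretized scalar product, while each state still vanishes at the walls.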

Then the fitness of the individuals is evaluated, the fittest individuals are selected, and the procedure is repeated until convergence of the fitness function to a maximal value is reached. By maximizing Z', one obtains the "best" available set {Ψ_k} approximating the excitation spectrum of the given system. One can then use this set to compute any kind of quantum statistical quantity, for example the density of particles ρ(x) at any value of the temperature parameter β (see [Militzer (2000)]):

ρ(x) = ρ(β, x_N = x) = ∫ ρ(β, x) dx_1 ... dx_{N−1},   (2.39)

where ρ(β, x) is given by

ρ(β, x) = [Σ_k exp(−β⟨Ψ_k|H|Ψ_k⟩) |Ψ_k(x)|²] / [Σ_k exp(−β⟨Ψ_k|H|Ψ_k⟩)],   (2.40)

where we use the notation x = {x_1, ..., x_N} and the summation is carried out over the many-body states.
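As a sketch of how Eqs. (2.39)-(2.40) are used once a spectrum is available, the fragment below evaluates the Boltzmann-weighted density for a single particle in the unit infinite well, whose eigenstates and energies are known analytically, so no QGA run is needed for the illustration; the grid and the choice M = 10 are arbitrary.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
M = 10                                           # number of retained lowest levels

# Analytic spectrum of one particle in the unit infinite well (atomic units).
n = np.arange(1, M + 1)
E = 0.5 * (np.pi * n)**2                         # E_n = n^2 pi^2 / 2
phi2 = 2.0 * np.sin(np.pi * np.outer(n, x))**2   # |phi_n(x)|^2 on the grid

def density(beta):
    """Boltzmann-weighted density of Eq. (2.40) for a single particle."""
    w = np.exp(-beta * (E - E[0]))               # shift by E_1 for numerical stability
    w /= w.sum()
    return w @ phi2

rho_cold = density(beta=10.0)    # low temperature: essentially the ground-state density
rho_hot = density(beta=0.01)     # high temperature: several levels populated, flatter
```

Lowering β populates higher levels and visibly flattens the density, which is exactly the thermal "melting" behavior discussed below.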

2.8.3 Formation of a "Wigner molecule" and its "melting"

The fact that for noninteracting quantum problems the QGA yields good convergence to the exact results motivated us to test its efficiency for interacting systems. First of all, we seek the Hartree-Fock approximation of the ground state at T = 0. We study two interacting particles (in a triplet state) in 1D using an artificially smoothed Coulomb-like interaction potential between them,

V(x_1, x_2) = Q/√((x_1 − x_2)² + δ),   (2.41)

where Q is the strength of the inter-particle interaction and δ is a smoothing coefficient; in our calculations we set δ = 3. We construct an initial population {Ψ_k(x_1, x_2)} of antisymmetric functions of the form Ψ(x_1, x_2) = ψ_1(x_1)ψ_2(x_2) − ψ_2(x_1)ψ_1(x_2), appropriate for particles in a triplet state. After convergence of the QGA we determine the spatial density of the particles, ρ_QGA(x) = ∫|Ψ_QGA(x, x')|² dx'.

In Fig. 2.24 we show the calculated ρ_QGA(x), corresponding to the Hartree-Fock approximation of the ground state, for different values of the interaction strength Q. For Q = 0 (the noninteracting case) the solution is built up from the ground state and the first excited state of the single-particle problem; the particles are well delocalized, which minimizes the kinetic energy. In contrast, in the interacting case the repulsive term of the Hamiltonian forces the particles to stay far away from each other and leads to a localization near the opposite walls. This effect increases with increasing interaction strength: for Q = 100 the overlap between the one-particle wavefunctions becomes negligible. This corresponds to the low-density case.

Fig. 2.24 Density distribution ρ_QGA(x) of two interacting particles in a triplet state in a one-dimensional infinite well. The different curves refer to different strengths Q of the interaction V(x_1, x_2) = Q/√((x_1 − x_2)² + δ) between the particles.

Now let us determine the ground state of two particles strongly interacting via the repulsive Coulomb potential in 2D, a common problem in the physics of quantum dots [Serra (1999), Creffield (1999), Akman (1999), Yannouleas (1999)]. In this case the repulsive interaction potential is given by

U(r_1, r_2) = Q/|r_1 − r_2|.   (2.42)

Instead of scaling the strength of the inter-particle interaction Q, for this example we have re-scaled the size of the box by setting d = 1000. In Fig. 2.25 we show the calculated and

symmetrized density distribution ρ_QGA(x, y) of two interacting electrons in a triplet state at zero temperature, T = 0. One observes a strong localization of the particles in opposite corners of the square box, which minimizes the energy of the system. This effect can be attributed to the well-known Wigner crystallization in solids: in his pioneering work, Wigner pointed out that the electron gas would crystallize at sufficiently low temperature and density [Wigner (1934)]. For low electronic densities the contribution of the Coulomb interaction, which scales as d⁻¹, is more important than the contribution of the kinetic energy of the particles, which scales as d⁻². Note that for the two-particle 1D system discussed above this effect also takes place upon increase of the interaction strength Q (see Fig. 2.24). The Wigner crystallization in quantum dots, also described as the formation of a "Wigner molecule", has been obtained recently using other theoretical approaches [Creffield (1999), Jauregui (1993), Yannouleas (1999)]. In fact, our calculated density shown in Fig. 2.25 is in good agreement with other methods [Creffield (1999)].

Fig. 2.25 Electron density distribution ρ_QGA(x, y) of two interacting electrons in a triplet state in a square infinite well in the low density limit (large box size d = 1000). The spatial coordinates are shown in units of d. Note the localization of the electron density, which corresponds to the formation of a "Wigner molecule".

When thermal fluctuations (in energy units) become comparable with the energy of the inter-particle interaction, the transition opposite to the Wigner crystallization occurs, i.e., melting of the Wigner crystal. Next we present the results of calculations of the partition function Z together with the excitation spectrum. Once calculated, the excitation spectrum can be used to derive any kind of quantum statistical quantity; for instance, we can compute the particle density at different temperatures.

First we present the results of our calculations of the melting of a "Wigner molecule" in one dimension due to increasing thermal fluctuations. We consider particles in a triplet state in a 1D infinite well of width L, interacting via the repulsive Coulomb potential. In order to eliminate the logarithmic singularity which is typical for the Coulomb interaction in one dimension, we again use the smoothed interaction potential given by Eq. (2.41), with δ = 1/10. In these calculations we use L = 100, which guarantees a low electron density, and set Q = 1.

Fig. 2.26 Particle density ρ(x) of two interacting particles in an infinite well of width L = 100, for β = 4·10⁷ (dotted line), 2·10⁶ (dash-dotted line), 1·10⁶ (dashed line) and 1·10⁵ (solid line).

In Fig. 2.26 we show the electron density for the case of a two-

particle system. In these calculations we set the parameter β = 4·10⁷, 2·10⁶, 1·10⁶ and 1·10⁵, and choose the size of the well L = 100. The density of particles ρ(x) clearly becomes more delocalized with increasing temperature (decreasing parameter β), and for β = 1·10⁵ it is almost uniform, i.e., the Wigner crystal has melted. Because of the small number of particles in the system, this transition is quite smooth.

In Fig. 2.27 we plot the particle density for three particles with parallel spins, using β = 6·10⁷, 3·10⁶, 1.5·10⁶ and 1.5·10⁵, for the same size of the well. As in Fig. 2.26, the initially localized particle density is smeared out with increasing temperature.

Fig. 2.27 Particle density ρ(x) of three interacting particles in an infinite well of width L = 100, for β = 6·10⁷ (dotted line), 3·10⁶ (dash-dotted line), 1.5·10⁶ (dashed line) and 1.5·10⁵ (solid line).

Now let us investigate the behavior of the finite quantum system at low temperatures, i.e., in the quantum regime. In contrast to the thermal effect demonstrated above, a new "Wigner molecule melting" scenario, which is caused by quantum fluctuations and exists even at zero temperature, has been studied recently [Filinov (2001)]. It was shown that delocalization of the electron density occurs when the ratio of the kinetic energy to the Coulomb energy exceeds a certain threshold [Filinov (2001)]. As we mentioned above, the kinetic energy E_kin of a quantum system with a fixed

number of particles and variable size L scales as E_kin ~ L⁻², while the Coulomb energy E_C scales only as E_C ~ L⁻¹. Therefore, with decreasing size of the system, quantum fluctuations become more and more significant (due to Heisenberg's uncertainty principle), and a quantum phase transition should occur, i.e., a kind of "Wigner molecule melting" takes place in the system.

In Fig. 2.28 we present the results of the calculation of the particle density ρ(x) for different sizes of the well, L = 500, 100 and 1, at zero temperature. In order to visualize these results we re-scale the calculated ρ(x) onto the interval [0, 1]. From Fig. 2.28 it becomes clear that with decreasing size of the system the electron density tends to be more delocalized; note that this looks quite different from the "melting" due to thermal fluctuations.

Fig. 2.28 Rescaled particle density ρ(x) of four interacting particles in an infinite well at zero temperature, using well widths L = 500 (dotted line), L = 100 (dashed line) and L = 1 (solid line).

2.9 Evolutionary gradient search and Lamarckianism

Now let us discuss possible combinations of global GA optimization with local search techniques. The evolutionary gradient search algorithm of Salomon [Salomon (1998)] is a search method that employs yet another, "evolutionarily inspired" estimation of the local gradient of the objective function. The state of the evolutionary gradient search algorithm at time

t is described by a base point x and a step length h. In every iteration, λ offspring candidate solutions x + h·z_i, i = 1, ..., λ, are generated, where the z_i are random vectors with independent components distributed according to the Gauss normal distribution. Rather than performing selection the way an evolution strategy does, the evolutionary gradient search method computes

d_h(x) = (1/λ) Σ_{i=1}^{λ} [f(x + h·z_i) − f(x)] z_i    (2.43)

as an estimation of the local gradient of the function. In effect, the evolutionary gradient search method assigns weights that are proportional to the fitness advantages of all offspring, with offspring with a negative fitness advantage receiving negative weights. The motivation for this is the wish not to discard the information carried by the offspring that are rejected, but to interpret an offspring candidate solution with a negative fitness advantage over the best point as evidence that a step should be taken in the opposite direction. The method then proceeds by taking two test steps from the base point in the negative direction of d_h(x). The base point is then updated by performing the test step with the higher (measured) fitness advantage, and the step length h is multiplied by k if the longer of the two test steps was more successful, and it is divided by k if the shorter of the two test steps prevailed (k > 1 is a fixed factor). One test step has length h·√N·k,
the other one has length h·√N/k. Clearly, an iteration of the evolutionary gradient search procedure requires λ + 2 evaluations of the objective function. It is interesting to compare the direction given by Eq.(2.43), in which the evolutionary gradient search method proceeds, with that of other strategies. Salomon has shown that for small h and sufficiently large λ, the direction given by Eq.(2.43) agrees closely with the local gradient direction at x. While the evolutionary gradient search method has not yet been studied in great detail, it seems conceivable that the "genetic repair" effect may be present in evolutionary gradient searches as well.
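To make the procedure concrete, here is a minimal Python sketch of one iteration of the scheme described above, written for minimization. The function names (`egs_step`, `sphere`), the quadratic test objective, the population size λ = 20 and the adaptation factor k = 1.8 are illustrative assumptions, not values taken from the text or from [Salomon (1998)].

```python
import numpy as np

rng = np.random.default_rng(1)

def egs_step(f, x, h, lam=20, k=1.8):
    """One evolutionary-gradient-search-style iteration (minimization):
    lam offspring probe the neighborhood, their objective differences
    weight the directions z_i as in Eq. (2.43), and two test steps of
    lengths h*sqrt(N)*k and h*sqrt(N)/k adapt the step length h."""
    n = len(x)
    fx = f(x)
    z = rng.standard_normal((lam, n))
    diffs = np.array([f(x + h * zi) - fx for zi in z])
    d = (diffs[:, None] * z).sum(axis=0) / lam      # Eq. (2.43)-style estimate
    d /= np.linalg.norm(d) + 1e-15                  # keep the direction only
    long_x = x - h * np.sqrt(n) * k * d             # steps against the gradient
    short_x = x - h * np.sqrt(n) / k * d
    if f(long_x) < f(short_x):
        return long_x, h * k                        # longer step won: grow h
    return short_x, h / k                           # shorter step won: shrink h

sphere = lambda v: float(v @ v)                     # illustrative objective
x, h = np.ones(5), 0.1
for _ in range(100):
    x, h = egs_step(sphere, x, h)
print(sphere(x))                                    # much smaller than the start value 5.0
```

Note the λ + 2 objective evaluations per iteration mentioned above: λ for the offspring, plus the two test steps.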

Another way to improve the search of a genetic algorithm is to move from Darwin's to Lamarck's formulation of evolution. (Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck, was a famous French naturalist, 1744-1829.) An implementation of "Lamarckianism" uses a hybrid strategy, in which an individual's fitness and genotype are returned after execution of a local gradient search for some small percentage of the time. This has the effect of slowly moving the search to beneficial areas of the search space. Such a gradual movement may provide faster algorithm execution, since an increasing number of solutions in the genetic population will represent good initial guesses, limiting the amount of inefficient search by the localized search technique in poor areas of the search space. In this case it could be a good idea to introduce an effective rate which characterizes the ratio between the Darwinian and Lamarckian methodology, and which decreases during the search. As an example, local optimization can help at the later stage of the whole optimization procedure, when the area of the global optimum is already localized by the genetic algorithm.

2.10 Summary

In this chapter we have discussed some problems which arise in the context of global optimization search. In particular, we learned a bit about the NFT and its main consequences. We made a brief review of some popular numerical methods of optimization, including the Simplex method and Simulated Annealing. We gave a more detailed introduction into genetic algorithms, and we also discussed multi-objective optimization. We gave some examples of how to use GA to solve stationary few-body quantum mechanical eigenproblems, including quantum statistical calculations for strongly interacting systems. In particular, we considered the formation and melting of a "Wigner molecule" in a system of a few electrons confined in a quantum dot.

Chapter 3

Chaos in complex systems

The Universe presents us with an infinite variety of complex dynamical systems. Their spatial and temporal scales vary from intergalactic distances and millions of years to atomic and electron motion on femto- and even attosecond scales. Chaotic behavior can be observed in most fields of science: subatomic and molecular physics, chemistry, fluid dynamics, plasma physics, molecular biology, and most of the environmental, social and economic phenomena. All these physical systems can be described within the framework of well-established and experimentally proven theories, like celestial mechanics or quantum theory. A much higher level of complexity is associated with the description of dynamics in human society or financial networks (the stock market), since there is no rigorous mathematical description of the rational (or irrational) behavior of a human being. However, one can try to develop some interdisciplinary ideas which are common for different systems and may provide some insight into complex phenomena in general. One of such ideas is the concept of chaos. Thus we can hope that the knowledge obtained from the studies of chaos in the natural sciences can then be applied to more complex examples, like social systems.

In general, all chaotic systems are nonlinear; however, not every nonlinear system is chaotic. Perhaps the ultimate test for chaos is the accuracy of short-term predictions. Predictability thus precludes complete randomness and signifies determinism, although randomness and determinism often coexist. With truly random data, prediction is impossible; conversely, with chaotic data there is absolute determinism, and prediction is possible in principle, at least on short-time scales. The story of multiple attempts to understand chaos

begins 300 years ago, when Sir Isaac Newton derived analytically Kepler's elliptic orbits and set up the equations of motion of the two-body problem, which can be integrated exactly. However, if one starts to consider the motion of the Moon in more detail, one immediately realizes that it cannot be well approximated by a two-body problem: one should solve the much more complicated three-body problem for the motion of the Moon, the Earth and the Sun, because at least the gravitation of the Sun has to be taken into account. The failure to solve the three-body problem analytically motivated the most brilliant mathematicians of that time to search for a general solution. Although a few analytical solutions for some particular cases of the problem were given in the XIX century, the surprising result was obtained by the great French mathematician Henri Poincare (1854-1912), who rigorously proved the impossibility of integrating the three-body problem in the general case. One may still be able to obtain a partial analytical solution under special conditions (for example, for a planar motion), but the series will converge extremely slowly.
The impossibility of integrating the equations of motion of a physical system is tightly connected with the unpredictability of its time evolution. In a deterministic time evolution, if we know the initial state x(0) of the system at time t = 0 exactly, we can predict its state x(t) at time t = T with complete precision. But if there is an imprecision Δx(0) in the initial definition of x(0), this (even extremely small) error may grow exponentially with t, and that will virtually destroy our predictions, at least after a while. Sensitivity to initial conditions means that nearby points in phase space typically "repel" each other. The way to characterize this exponential growth is to calculate the Lyapunov exponents. Technically, to obtain the Lyapunov spectrum, imagine an infinitesimally small sphere (in the case of three dimensions) with radius dr sitting on the initial state of a trajectory. The flow will deform this sphere into an ellipsoid, so that after a finite time t all orbits which started in that sphere will be in the ellipsoid. The i-th Lyapunov exponent is defined by

λ_i = lim_{t→∞} (1/t) ln( dl_i(t) / dr ),    (3.1)

where dl_i(t) is the radius of the ellipsoid along its i-th principal axis.
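As an aside, for a discrete map the analogue of Eq. (3.1) is particularly easy to compute: the exponent is the long-time average of ln|f'(x)| along a trajectory. The sketch below uses the logistic map x → r·x·(1 − x) at r = 4 as a standard illustration (this example and the function name are not from the text); the exact largest exponent in this case is known to be ln 2.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=200_000, burn=1_000):
    """Average ln|f'(x)| along a logistic-map orbit: a discrete-map
    analogue of the Lyapunov exponent of Eq. (3.1)."""
    x = x0
    for _ in range(burn):                            # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))    # ln|f'(x)|
        x = r * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic())   # close to ln 2 = 0.6931...
```

A positive value signals exponential divergence of nearby orbits, i.e. chaos; for a regular orbit the same average would come out non-positive.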

We are not interested here in the case of unbounded dynamics, because the trajectories would eventually move apart to the opposite ends of the Universe. Actually, we are looking for the opposite case, when the motion in phase space is bounded and any two points will eventually reach a maximum separation and then begin to approach each other again. The situation when the distance between two initially nearby points diverges exponentially is usually referred to as chaotic behavior, and the system undergoes irregular oscillations (if they were regular, they would be predictable). As a result, the volume of the accessible phase space can monotonically decrease until it becomes a set of measure zero, and the trajectory of the system will be captured in a so-called strange attractor. In fact, this strange attractor will have a fractional dimension! Such sensitive dependence on initial conditions often appears in our life, for example in the case of someone playing dice, attempting to reproduce a certain combination, and failing.

The mathematical possibility of chaos was well understood 100 years ago by Hadamard and Poincare, but the theory and applications were developed only six decades later. The reason is that only in 1950-1960 did the growing computational power of computers permit scientists to perform numerical integration of systems of nonlinear equations and to carry out numerical experiments. Now, in the era of digital computers, most of what we know about the properties of nonlinear differential equations is based on numerical integration techniques. Any numerical method is based on approximations, and chaotic systems are characterized by a high sensitivity to approximations. Obviously, the evolution in time of a chaotic system has limited predictability, of the order of t ∝ 1/λ, where λ is the largest positive Lyapunov exponent. How to do prediction for a chaotic system one can learn from mathematics, using the so-called Takens' delay embedding theorem.
This theorem is a result of Floris Takens on the embedding dimension of nonlinear (chaotic) systems; it states that a dynamical system can be reconstructed from a sequence of observations of the state of the dynamical system. We will talk about it in more detail in Chapter 6.
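The reconstruction idea itself fits in a few lines: from a single observed scalar series s one forms delay vectors (s(t), s(t − τ), ..., s(t − (m−1)τ)). The sketch below is illustrative: the sine signal stands in for a real observable, and the embedding dimension m = 2 and delay τ = 25 samples are assumed choices, not values from the text.

```python
import numpy as np

def delay_embed(s, m, tau):
    """Takens-style delay embedding: each row is the vector
    (s[i], s[i+tau], ..., s[i+(m-1)*tau])."""
    n = len(s) - (m - 1) * tau
    return np.column_stack([s[j * tau : j * tau + n] for j in range(m)])

t = np.linspace(0.0, 20.0 * np.pi, 2000)
s = np.sin(t)                     # a stand-in for an observed scalar signal
X = delay_embed(s, m=2, tau=25)
print(X.shape)                    # (1975, 2): the delay vectors trace a closed
                                  # curve, recovering the periodic orbit's geometry
```

For a chaotic signal the same construction, with suitable m and τ, unfolds the strange attractor from one measured coordinate.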

Shadowing is a branch of chaotic systems theory that tries to show that, even in the face of the exponential magnification of small errors, numerical solutions have some validity; this is known as the problem of efficient shadowing of the system. It does this by trying to show that, for any particular computed solution (the "noisy" solution), there exists a true solution with slightly different initial conditions that stays uniformly close to the computed solution. If such a solution exists, it is called a true shadow of the computed solution. An approximation to true shadowing is numerical shadowing, whereby an iterative refinement algorithm is applied to a noisy solution to produce a nearby solution with less noise [Maddox (1989)]. If this iterative process converges to a solution with noise close to the machine precision, the resulting solution is called a numerical shadow.

There are some important consequences of the idea of chaos. One of them was pointed out by Poincare, who believed that the uncertainty of our predictions justifies a probabilistic description of the world. After the discovery of the chaos phenomenon, many scientists started to believe that the concept of chaos would explode the naive determinism of Newton's mechanics. By projecting the concept of chaos onto the natural world of earthquakes or the activity of the Sun, it is now widely accepted that chaos is a generic property of all multidimensional nonlinear systems, including biological evolution and even human society.

One has to distinguish between chaos and randomness. Chaos usually assumes that there is always a finite horizon of the reliable forecast of the system's dynamics, while randomness usually means that the dynamics is completely unpredictable. Up to a certain precision one can simulate randomness with some deterministic pseudorandom procedure. However, it is not the poor pseudorandomness of dice and other gambling devices, nor a strong chaotic process with a large positive Lyapunov exponent that makes any (even short-term) forecast very difficult: it seems that true randomness exists only in quantum mechanics,
where there is a breakdown of determinism, of cause and effect, given initial (and final) states and trajectories. If we accept the postulates of quantum mechanics, this randomness is true, pure and unavoidable. One should notice, however, that the Schrodinger equation, which describes the quantum evolution of a very specific dynamical variable, a wavefunction, is deterministic just like the common heat transfer equation.

3.1 Lorenz attractor

The Lorenz attractor dates from 1963, when the meteorologist Edward Lorenz, working at MIT, published an analysis of a simple system of three

differential equations. His interest was in predicting atmospheric dynamics, with the ultimate goal of developing long-term weather prediction tools. He developed the equations as a simple model for the so-called Rayleigh-Benard convection problem, which governs the formation of convection rolls between two parallel surfaces at different temperatures. In general, convection is described using the Navier-Stokes equations in partial derivatives, which makes the analysis quite complicated. It is frequently possible to expand the dependent variables of a partial differential equation in an infinite discrete set of coupled ordinary differential equations for the time dependence of the Fourier coefficients; conversely, partial differential equations may often be thought of as infinite systems of ordinary differential equations. The Lorenz model was derived from the Navier-Stokes equations by making such simplifications, and it results in a system of three coupled nonlinear first-order ordinary differential equations:

dx/dt = σ(y − x),
dy/dt = ρx − y − xz,    (3.2)
dz/dt = xy − bz.

The dependent variables x(t), y(t) and z(t) essentially contain the time dependence of the stream-function and temperature distributions expressed as Fourier expansions. Here σ is the Prandtl number, i.e. the ratio of the kinematic viscosity to the thermal diffusivity, b is related to the shape of the convection rolls,
and ρ is related to the Rayleigh number (or the temperature difference between the surfaces). Note that the equations are nonlinear due to the xz and xy terms in the second and third equations, respectively. For the derivation of the Lorenz equations, see [Schuster (1988)]. What Lorenz found through obtaining numerical solutions of these seemingly simple equations had a deep impact on our understanding of the possible behaviors of nonlinear equations. Lorenz pointed out that Eq.(3.2) possesses some surprising features. In particular, the equations are "sensitive to initial conditions", meaning that tiny differences at the start become amplified exponentially as time passes; this type of unpredictability is a characteristic feature of chaos. Fortunately, there is also some magic "order" in the system: numerical solutions of the equations, plotted in three dimensions, consist of curves which approach a curious two-sheeted surface, later named the Lorenz attractor.
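The sensitivity to initial conditions is easy to reproduce numerically. The sketch below integrates Eq. (3.2) with a standard fourth-order Runge-Kutta step and watches two trajectories that start 10^-8 apart; the step size, horizon and perturbation are illustrative choices, not values from the text.

```python
import numpy as np

def lorenz_rhs(v, sigma=10.0, rho=28.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz system, Eq. (3.2)."""
    x, y, z = v
    return np.array([sigma * (y - x), rho * x - y - x * z, x * y - b * z])

def rk4_step(v, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(v)
    k2 = lorenz_rhs(v + 0.5 * dt * k1)
    k3 = lorenz_rhs(v + 0.5 * dt * k2)
    k4 = lorenz_rhs(v + dt * k3)
    return v + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

dt, steps = 0.01, 2500                  # integrate to t = 25
a = np.array([1.0, 1.0, 1.0])
b2 = a + np.array([1.0e-8, 0.0, 0.0])   # perturbed copy
for _ in range(steps):
    a, b2 = rk4_step(a, dt), rk4_step(b2, dt)
sep = np.linalg.norm(a - b2)
print(sep)   # the 1e-8 perturbation has grown by many orders of magnitude
```

With the largest Lyapunov exponent near 0.9, the initial 10^-8 separation is amplified by roughly e^(0.9 t) until it saturates at the size of the attractor, so at t = 25 the two solutions bear no pointwise resemblance to each other.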

The geometry of the attractor is closely related to the "flow" of the equations, i.e. to the curves corresponding to solutions of the differential equations. There is an unstable equilibrium, a saddle point, at the origin. The trajectory repeatedly passes this point; adjacent curves are pulled apart (this is how the unpredictability is created) and can end up on either side of the saddle, being pushed away to the left or right. As they loop back, they circle round to pass back by the saddle. The result is an apparently random sequence of loops to the left and right.

Fig. 3.1 The famous Lorenz attractor, which symbolizes an order within chaos.

The Lorenz attractor became, more than 40 years ago, one of the well-recognized symbols of modern nonlinear dynamics and chaos theory. One can find it in practically every textbook on the theory of nonlinear dynamical systems, where it is associated with the appearance of an order within chaos (see Fig. 3.1). At first its existence was established only on the basis of the numerical integration of the Lorenz equations on a computer. However, for a long time mathematicians lacked a rigorous proof that exact solutions of the Lorenz equations resemble the shape generated on a computer by numerical approximations, and they

could not even prove that its dynamics are genuinely chaotic. Only in 1998 Warwick Tucker proved that the Lorenz system indeed defines a robust chaotic attractor [Tucker (1999)]. This result is of great importance, because it builds a bridge between empirical laws from numerical experiment and mathematically rigorous proof.

3.2 Control of chaotic dynamics of the fractional Lorenz system

Complex systems can consist of an enormous number of simple units whose interactions can lead to unexpected collective behavior. The dream of complexity theory is to discover the laws governing such complex systems, whether they be ecosystems, the weather or corporations in the marketplace. It is easily accepted by our intuition that if a system consists of many interacting subsystems, having many degrees of freedom and hence governed by a set of complex multidimensional differential equations, then its behavior might be in practice impossible to predict. What is really hard to imagine is that sometimes even very simple systems, described by simple nonlinear equations, can have very complicated chaotic solutions.

The "power law" is a distinctive experimental signature seen in a wide variety of complex systems. In Physics it is referred to as "critical fluctuations" or "universality"; in Econometrics it goes under the name of distributions with "fat tails"; in Computer Science and Biology it is "the edge of chaos"; and in Demographics and Linguistics it is called "Zipf's law". As was shown recently, many of these general "power law" dependencies can be naturally introduced as solutions of differential equations with fractional derivatives [Hilfer (2000)]. Thus, fractional derivatives could be a significant ingredient in the modelling of complex dynamical systems. Another example is the success in the modelling of financial series using Autoregressive Fractionally Integrated Moving Average (ARFIMA) models [Doornik (1994)]. Another striking observation is that increasing the dimensionality of some types of dynamical systems can lead to a decrease of the probability that the system is chaotic! [Sprott (2003)]

In this section we introduce a generalization of the Lorenz dynamical system using fractional derivatives. In this case the system can have an effective non-integer dimension Σ, defined as a sum of the orders of all involved derivatives.
We found that the system with Σ < 3 can exhibit chaotic

behavior. An interesting finding is that there is a critical value of the effective dimension Σ_cr under which the system undergoes a transition from chaotic dynamics to regular dynamics.

According to the Poincare-Bendixson theorem (see, for example, [Hirsch (1965)]), chaos cannot occur in two-dimensional systems of autonomous ordinary differential equations; the main consequence of this limitation is that the dynamics of such systems cannot be chaotic. One has to stress that this theorem is applicable to continuous-time dynamical systems and not to discrete maps: discrete-time dynamical systems, which can be represented as X_{n+1} = f(X_n), where X_n is a discrete state variable, can exhibit chaotic behavior even in one dimension (see, for example, the famous logistic map [Ott (1994)]). As we mentioned before, the most famous example of a continuous-time three-dimensional system which exhibits chaos is the Lorenz model [Lorenz (1963)]. By using fractional derivatives of orders 0 < α, β, γ ≤ 1 it is possible to obtain a system with an effective non-integer dimension Σ < 3. The dimension Σ of such a system can be defined as a sum of the orders of all involved derivatives, although one should remember that this definition is not rigorous. A natural question arises whether such a system can exhibit chaotic behavior.

Although fractional derivatives have a long mathematical history, for many years they were not used in physics. One possible explanation of such unpopularity could be that there are multiple nonequivalent definitions of fractional derivatives [Hilfer (2000)]. Another difficulty is that fractional derivatives have no evident geometrical interpretation because of their nonlocal character, although one should mention recent attempts to introduce a local formulation of fractional derivatives [Kolwankar (1998)] and to give some geometrical interpretations [Podlubny (2001)]. However, during the last ten years fractional calculus has started to attract much more attention from physicists and engineers. It was found that various, especially interdisciplinary, applications can be elegantly described with the help of fractional derivatives, although most of the studies mentioned were performed on the basis of linear differential equations containing fractional derivatives. As examples, one can mention studies on viscoelastic bodies, polymer physics, phase transitions of fractional order,
anomalous diffusion, and the description of the fractional kinetics of chaotic systems (for a review see [Hilfer (2000)]). The usefulness of fractional derivatives in quantitative finance [Scalas (2000)] and in the quantum evolution of complex systems [Kusnezov (1999)] was also recently demonstrated. In connection with the question of chaos in fractional systems one should

also mention the work of Hartley et al., where the authors studied the chaotic motion of the Chua-Hartley system of a fractional order [Hartley (1995)].

In this section we are going to investigate the dynamics of the fractional Lorenz system, and we find that it can be chaotic with Σ < 3. We estimate the largest Lyapunov exponent in this case. Moreover, we determine a critical value Σ_cr such that for Σ < Σ_cr the dynamics of the considered system becomes regular.

There are several definitions of fractional derivatives [Hilfer (2000)]. Probably the best known is the Riemann-Liouville formulation. The Riemann-Liouville derivative of order α and with the lower limit a is defined as

aD_t^α f(t) = (1/Γ(n − α)) (d^n/dt^n) ∫_a^t f(τ) (t − τ)^(n−α−1) dτ,    (3.3)

where Γ(·) is the gamma function and n is an integer number chosen in such a way that n − 1 < α < n. An alternative definition of fractional derivatives was introduced by Caputo [Caputo (1967)]. The Caputo derivative of order α is a sort of regularization of the Riemann-Liouville derivative, and it is defined through

aD_t^α f(t) = (1/Γ(n − α)) ∫_a^t f^(n)(τ) (t − τ)^(n−α−1) dτ.    (3.4)

The main advantage of the definition Eq.(3.4) is that the Caputo derivative of a constant is equal to zero, which is not the case for the Riemann-Liouville derivative. Substantially, the Caputo fractional derivative is a formal generalization of the integer derivative under the Laplace transformation [Scalas (2000)].

Now let us introduce a fractional generalization of the Lorenz system:

d^α x/dt^α = σ(y − x),
d^β y/dt^β = ρx − y − xz,    (3.5)
d^γ z/dt^γ = xy − bz.

Here we assume 0 < α, β, γ ≤ 1, r > 1, and the time derivatives are understood in the Caputo sense. In our calculations we use the following values of the parameters: σ = 10, ρ = 28, b = 8/3. The effective dimension Σ of the system Eq.(3.5) we define as the sum of the orders of the involved derivatives, α + β + γ = Σ.
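Numerically, fractional derivatives are often approximated by the Grünwald-Letnikov sum over the whole past of the trajectory, which makes the nonlocal ("memory") character concrete. The check below is purely illustrative and is not the book's integration scheme: for f(t) = t^2, which satisfies f(0) = f'(0) = 0, the Caputo and Riemann-Liouville derivatives coincide and equal 2 t^(2−α)/Γ(3 − α), so the sum can be compared against a known answer.

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1.0e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative at t:
    D^alpha f(t) ~ h^(-alpha) * sum_j w_j f(t - j h), with the binomial
    weights generated recursively: w_0 = 1, w_j = w_{j-1} (1 - (alpha+1)/j)."""
    n = int(round(t / h))
    acc, w = 0.0, 1.0
    for j in range(n + 1):
        acc += w * f(t - j * h)
        w *= 1.0 - (alpha + 1.0) / (j + 1)
    return acc / h**alpha

alpha, t = 0.5, 1.0
approx = gl_fractional_derivative(lambda u: u * u, t, alpha)
exact = 2.0 * t**(2.0 - alpha) / math.gamma(3.0 - alpha)   # Caputo/RL result
print(abs(approx - exact))   # small: the scheme is first-order accurate in h
```

The sum at time t runs over all n = t/h earlier points, so advancing n steps costs O(n^2) operations in total, which is the computational price of the long memory.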

In the case α = β = γ = r = 1 the system Eq.(3.5) reduces to the common Lorenz dynamical system exhibiting chaotic behavior. Such a generalization of dynamical equations using fractional derivatives could be useful in the phenomenological description of viscoelastic liquids, for example human blood [Hilfer (2000); Thurston (1972)].

Let us start from the analytical solution of a linear fractional differential equation:

d^α x/dt^α = Ax + f(t),  x(0) = x_0.    (3.6)

With the help of the Laplace transformation [Podlubny (1997)] one can easily obtain the solution of Eq.(3.6) in the form:

x(t) = x_0 E_α(A t^α) + ∫_0^t (t − τ)^(α−1) E_{α,α}(A (t − τ)^α) f(τ) dτ,    (3.7)

where E_α is the one-parameter Mittag-Leffler function [Ederlyi (1955)] defined by

E_α(z) = Σ_{k=0}^{∞} z^k / Γ(αk + 1),  α > 0,    (3.8)

and E_{α,β} is the two-parameter Mittag-Leffler function [Ederlyi (1955)] defined by

E_{α,β}(z) = Σ_{k=0}^{∞} z^k / Γ(αk + β),  α, β > 0.    (3.9)

For α = 1, E_α and E_{α,α} both reduce to the usual exponential function.

The system Eq.(3.5) is in fact a system of coupled nonlinear integro-differential equations with a weakly singular kernel. The numerical scheme we implemented in our calculations is based on the linearization of the system Eq.(3.5) on each step of integration and the iterative application of Eq.(3.7). This is a computationally expensive problem, since the numerical integration requires O(n^2) operation counts, where n is the number of sampling points [Diethelm (1997)]. We have checked our numerical scheme by comparing the results for Eq.(3.5) obtained using Eq.(3.7) with those of the standard fourth-order Runge-Kutta method for the case α = β = γ = r = 1.

We integrated Eq.(3.5) for different values of the parameters α, β, γ, r and different initial conditions. The first finding is that the fractional Lorenz system can exhibit chaotic behavior with the effective dimension Σ < 3.
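For moderate arguments the series (3.8)-(3.9) can simply be summed term by term; the sketch below does exactly that (plain truncation is adequate only for small |z|, and robust evaluation would need more careful algorithms). The function name and truncation length are illustrative assumptions.

```python
import math

def mittag_leffler(z, alpha, beta=1.0, kmax=80):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta);  beta = 1
    gives the one-parameter function E_alpha of Eq. (3.8)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(kmax))

# Sanity checks against known special cases:
# E_1(z) = e^z, and E_2(z^2) = cosh(z).
print(abs(mittag_leffler(1.5, 1.0) - math.exp(1.5)))    # ~ 0
print(abs(mittag_leffler(4.0, 2.0) - math.cosh(2.0)))   # ~ 0
```

The α = 1 check is exactly the statement in the text that E_α and E_{α,α} reduce to the usual exponential function.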


In Fig. 3.2 we show the dynamical portrait of the system {x(t), y(t), z(t)} using the parameters r = 1, α = β = γ = 0.99. Thus, the effective dimension of the system is Σ = 2.97 < 3. We set the initial conditions at t = 0 as {x_0, y_0, z_0} = {10, 0, 10}. Note that the system exhibits chaotic dynamics similar to the case of the common Lorenz system. Moreover, one probably can also define the set of points which could be characterized as a strange attractor; however, this set is slightly deformed compared to the "classical" Lorenz attractor. We have to stress that it is rather time consuming to define Lyapunov exponents for a nonlocal system like Eq.(3.5). In order to resolve this difficulty we define the largest positive Lyapunov exponent λ using an implicit procedure developed for time-series data. With the help of the freeware package TISEAN [Hegger (1998)] we estimated the largest Lyapunov exponent λ for the case shown in Fig. 3.2. We found λ ≈ 0.85, which corresponds to the chaotic regime. Note that for the common Lorenz system λ ≈ 0.906 [Sparrow (1982)]. We conclude that decreasing the effective dimension Σ induces some effective damping in the system. By decreasing the parameters α, β, γ one obtains a further decrease of the largest Lyapunov exponent.

Fig. 3.2 Dynamical portrait of the fractional Lorenz system using the parameters α = β = γ = 0.99 and having the effective dimension Σ = 2.97. Note the formation of an attractor similar to the Lorenz strange attractor.

At a certain critical dimension Σ_cr the dynamics of the system undergoes qualitative changes and becomes regular for any initial condition. This is a new and interesting result which, to our knowledge, was not described before.



Fig. 3.3 Dynamical portrait of the fractional Lorenz system using the parameters α = β = γ = 0.96 and having the effective dimension Σ = 2.94. Note that the strange attractor does not exist and the system is attracted by one of the two focuses: (3√8, 3√8, 26) and (−3√8, −3√8, 26).

We obtain the lowest value of the system's effective dimension, Σ_cr ≈ 2.91, for which the chaotic regime is still possible. This corresponds to the case α ≈ 0.91, β = γ = 1. The obtained critical values of the parameters α, β, γ reflect the fact that the first, linear differential equation in the system Eq.(3.5) seems to be "less sensitive" to the damping introduced by the fractional derivative than the other, nonlinear equations. If we restrict ourselves to the case of equal derivative orders α = β = γ, the effective critical dimension for this symmetric case is even higher: Σ_cr^sym ≈ 2.94. In Fig. 3.3 we show the dynamical portrait of the system setting the parameters r = 1, α = β = γ = 0.97, with the corresponding effective dimension Σ = 2.91 < Σ_cr^sym. We use the same initial conditions as for the previous examples. Note that in this case the system exhibits a strong damping of the oscillations. Depending on the initial conditions, the trajectory of the system is attracted by one of two centers given by

(x, y, z) = ( ±√(b(r − 1)), ±√(b(r − 1)), r − 1 ).    (3.10)

These points can be easily defined from the stationarity condition:

d^α x/dt^α = d^β y/dt^β = d^γ z/dt^γ = 0.    (3.11)
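As a quick illustrative check (assuming that the r of Eq. (3.10) plays the role of the Rayleigh parameter ρ = 28 in the classical right-hand sides of the Lorenz equations), substituting the points of Eq. (3.10) indeed makes all three right-hand sides vanish, which is exactly the stationarity condition Eq. (3.11) for Caputo derivatives:

```python
import math

sigma, rho, b = 10.0, 28.0, 8.0 / 3.0        # parameter values used in the text
for sign in (1.0, -1.0):                     # the two symmetric fixed points
    x = y = sign * math.sqrt(b * (rho - 1.0))
    z = rho - 1.0
    rhs = (sigma * (y - x), rho * x - y - x * z, x * y - b * z)
    print(all(abs(v) < 1e-9 for v in rhs))   # True for both signs
```

With ρ = 28 this gives the two focuses at approximately (±8.49, ±8.49, 27), close to the values quoted in the caption of Fig. 3.3.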

Note that the stationarity condition Eq.(3.11) has the usual form because


we use Caputo's fractional derivatives; it is not applicable if one uses the Riemann-Liouville formulation Eq.(3.3). However, the obtained critical value Σ_cr ≈ 2.91 is not a "universal threshold" for any continuous-time chaotic system of fractional dimension: we found that it is a value that characterizes a particular dynamical system. In order to illustrate this, we repeat the simulations shown in Fig. 3.3 with the same initial conditions, but with the changed parameter r = 3. In Fig. 3.4 we show the dynamics of the variable z(t). The system Eq.(3.5) in this case exhibits a "stronger" nonlinearity, which possibly compensates the damping effect described above, and one again obtains chaotic behavior.

Fig. 3.4 Evolution of the variable z(t) of the fractional Lorenz system. We use the parameters α = β = γ = 0.96, r = 3. The effective dimension Σ = 2.94. Note that, unlike in Fig. 3.3, the dynamics of the system is chaotic.

We also found that under certain conditions the system Eq.(3.5) can exhibit quasi-periodic oscillations with stable periodic orbits. In Fig. 3.5 we show an example of such quasi-periodic dynamics of the variable z(t), using the parameter values r = 1, α = β = 1 and γ = 0.98, which corresponds to the effective dimension Σ = 2.98. We used the same initial conditions as for the previous Figs. 3.2-3.4. Note that after some time of transient behavior


Fig. 3.5 A quasi-periodic evolution of the variable z(t) of the fractional Lorenz system. We use the parameters α = β = 1, γ = 0.98, r = 1.

the system evolves quasi-periodically. For different initial conditions the dynamics of the system shows different limiting cycles having the same two symmetric centers (fixed points) given by Eq.(3.10).

One can understand the results shown in Figs. 3.2-3.5 in the following way. Any chaotic system is characterized by a strong sensitivity to its initial conditions, and the "memory" time of the system can be estimated as t_mem ≈ λ^(-1), where λ is the largest positive Lyapunov exponent. On the other hand, the introduction of fractional derivatives leads to a non-locality in the time domain (see the definition Eq.(3.4)), which can be interpreted as the presence of a long "memory". The competition between these two tendencies was the subject of the presented investigations.

Now we discuss the question whether the introduction of fractional derivatives should always lead to stabilization and damping of chaos in a dynamical system. Let us consider Eqs.(3.6),(3.7) in more detail. In the case A < 0 and in the limit t → +∞, E_α(At^α) ∝ t^(-α) [Scalas (2000)]. Therefore, one obtains only a power-law convergence of two close trajectories, instead of the exponential one for α = 1. Thus, even small changes of

the orders of the derivatives could lead to dramatic changes of the Lyapunov spectrum and of the whole dynamics. If λ > 0, in the limit t → +∞, E_α(λt^α) ∝ exp(λ^(1/α) t), and one obtains the exponential divergence of two close trajectories, as in the case of α = 1. One can imagine a situation when a small decrease of the order of the derivative α could lead to a stronger sensitivity of the whole nonlinear system to the initial conditions.

3.3 Summary

In this section we discussed an example of chaotic phenomena in Nature, characterized by a power-law divergence of close trajectories. We have introduced a fractional generalization of the Lorenz model, which could be useful in the phenomenological description of viscoelastic liquids and other dynamical systems with a long "memory". We studied how the dynamics of the system depends on the effective dimension Σ. We demonstrated that the dynamics of the system is strongly sensitive to the values of the orders of the involved derivatives α, β, γ, which now can take non-integer values. We found that the fractional Lorenz system exhibits rich dynamical properties and can be chaotic with an effective dimension Σ less than 3. We discovered that below a certain critical dimensionality Σ_cr < 2.91 chaotic motion of the system is not possible: in general, decreasing the parameters α, β, γ leads to a damping in the system. To discriminate between chaotic and ordered orbits, we also estimated the largest Lyapunov exponent in particular cases.

There are some interesting questions which are still open. Does a lowest universal bound Σ_univ exist, below which any nonlinear system cannot be chaotic? And how far could it be from the value predicted by the Poincaré-Bendixson theorem (Σ_univ = 2)? One should be aware, though, that fractional dynamics does not represent a group f(t + s) = f(t) * f(s) in the time domain, so one cannot apply the Poincaré-Bendixson theorem rigorously. Another interesting question is whether the introduction of fractional derivatives of distributed order [Chechkin (2002)] in nonlinear systems could help in the description of the "edge of chaos".
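The fractional dynamics summarized above can be explored numerically. Below is a minimal sketch (not from the book) that integrates a fractional-order Lorenz-type system with the explicit Grünwald-Letnikov scheme; since Eq.(3.5) is not reproduced in this excerpt, the classical Lorenz right-hand side with σ = 10, b = 8/3 is an assumption, with α, β, γ the (possibly non-integer) derivative orders and r the Rayleigh-type parameter:

```python
import numpy as np

def gl_weights(order, n):
    # Grunwald-Letnikov binomial weights c_j = (-1)^j * C(order, j),
    # generated by the recurrence c_j = c_{j-1} * (1 - (order + 1)/j).
    c = np.ones(n + 1)
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (order + 1.0) / j)
    return c

def fractional_lorenz(alpha, beta, gamma, r, x0, h=0.005, steps=1000,
                      sigma=10.0, b=8.0 / 3.0):
    """Explicit Grunwald-Letnikov integration of a fractional Lorenz-type
    system (a sketch: sigma, b and the right-hand side are assumptions)."""
    ca, cb, cg = (gl_weights(o, steps) for o in (alpha, beta, gamma))
    x = np.zeros((steps + 1, 3))
    x[0] = x0
    for n in range(1, steps + 1):
        xp, yp, zp = x[n - 1]
        # discrete memory sums over the whole trajectory history
        mx = ca[1:n + 1] @ x[n - 1::-1, 0]
        my = cb[1:n + 1] @ x[n - 1::-1, 1]
        mz = cg[1:n + 1] @ x[n - 1::-1, 2]
        x[n, 0] = h**alpha * (sigma * (yp - xp)) - mx
        x[n, 1] = h**beta * (xp * (r - zp) - yp) - my
        x[n, 2] = h**gamma * (xp * yp - b * zp) - mz
    return x

# Example (integer orders, chaotic classical regime):
# traj = fractional_lorenz(1.0, 1.0, 1.0, 28.0, np.array([1.0, 1.0, 1.0]))
```

For α = β = γ = 1 the weights collapse to (1, −1, 0, …) and the scheme reduces to the ordinary forward Euler method; lowering the orders below one switches on the long "memory" tail discussed above.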


Chapter 4

Optimal control of quantum systems

In the last decade the development of laser systems opened the way for the creation of ultrashort femtosecond pulses with controlled shape, spectrum and polarization (see, for example, [Brixner (2001)]). Ultrashort laser pulses can be used as an ideal tool to manipulate quantum objects. After the seminal work of Judson and Rabitz [Judson (1992)], many theoretical and experimental investigations were devoted to the problem of how "to teach" lasers to drive molecular reactions in real time [Bardeen (1997), de Vivie-Riedle (2000), Hornung (2000)]. The idea consists in using pulse-shaping techniques to design pulses or sequences of pulses having a given optimal shape (and phase), so that the desired atomic wave packet dynamics is induced. Optimal control of the internal motions of a molecule is achieved by exploiting a variety of interference effects associated with the quantum mechanics of molecular or electron motion. For instance, the population of a certain vibrational state, which may be responsible for the yield of a chemical reaction, can be controlled. Furthermore, with the help of an optimal control field one can induce chemical reactions which are otherwise not possible or very difficult to carry out [Judson (1992)].

Using a variational formulation of the control problem, it was shown how to construct optimal external fields (e.g. laser pulses) to drive a certain physical quantity, like the population of a given state, to reach a desired value at a given time [Ohtsuki (1999), Peirce (1988), Apalategui (2001), Vajda (2001), de Araujo (1998)], where the authors suggested a variational formulation of optimal control in quantum systems and a procedure to solve this problem. However, even for the simplest control problems the obtained fields usually have a rather complex nature and cannot be easily interpreted [Zhu (1998)], since the optimal field arises in the formalism [Judson (1992)] as a solution of a system of coupled nonlinear differential equations.

These equations are treated numerically by the application of iterative methods, developed, for example, in [Ohtsuki (1999), Peirce (1988)]. Furthermore, there is also a problem related to the multiplicity of the obtained optimal solutions, which are local extrema of the control problem. Therefore, it is necessary to develop a new theory to guarantee their uniqueness, at least for simple problems.

Although the maximization of a given objective at a certain moment (as it was considered in the earlier works mentioned above) is relevant for many purposes, a more detailed manipulation of real systems may require the control of physical quantities during a finite time interval. An interesting example of such optimal control was recently performed on a system of shallow donors in semiconductors [Cole (2001)]. Using pulses of various shapes and duration one can control the photo-current in the system, while the total transferred charge is proportional to a time integral over the occupation of a certain excited state. The search for optimal fields able to perform such control of quantum systems is a vital problem, for which no analytical description has been given so far.

There is another point worth noting. In many situations the controlled system cannot be treated as isolated, due to coupling to the environment (thermal bath) or due to contact with measuring devices. Therefore dissipative and relaxation processes could play a significant role. In this case some limits of the optimal control of the system should exist [Schirmer (2000)]. The question is how to estimate these limits quantitatively. An interesting problem, which is usually not mentioned, is: "If the optimal control field for a quantum system without relaxation is known, how should it be modified in the presence of relaxation in the system?"

In this chapter we present a new alternative approach, which permits us to obtain analytical solutions for some simple problems. First we will give a short introduction to the density matrix formalism, which is useful in the description of realistic quantum mechanical systems in contact with an environment. Then we give a brief overview of the modern variational formulation of the optimal control problem. After that, we develop a new formulation of the optimal control problem in quantum systems, which permits us to derive analytical solutions for the optimal fields. This approach also permits us to investigate optimal control of simple quantum systems with relaxation. Using the new analytical approach we derive a differential equation for the optimal control field, which we solve analytically in some limiting cases, and we use this theory to consider and solve such optimal control problems. Our formulation allows us to describe optimal control of

a quantum system over a finite [0, T] time interval. Optimal control of the system at a given time T is only a special case of our general theory. We also introduce a new type of constraint on the control field, which limits the minimal width of the envelope of the resulting field. This constraint naturally arises if one tries to find optimal pulses with an experimentally achievable modulation of the control fields.

4.1 Density matrix formalism

Here we would like to outline the main ideas behind the density matrix formalism, which permits us to give a quantum mechanical description of a system embedded in some environment, usually described as a thermal bath. Let us consider a mixture of independently prepared quantum states |ψ_i>, (i = 1,...,n), with the statistical weights w_i. These states are not necessarily orthonormal to each other. The statistical operator ρ, or density matrix operator, is defined as:

ρ = Σ_{i=1}^{n} w_i |ψ_i><ψ_i|.   (4.1)

In order to present Eq.(4.1) in a matrix form, we choose a convenient orthonormal basis |φ_1>, ..., |φ_n>, which is connected with |ψ_i> through the relationship:

|ψ_i> = Σ_{j=1}^{n} a_{ij} |φ_j>,   <ψ_i| = Σ_{k=1}^{n} a*_{ik} <φ_k|.   (4.2)

Then the expression given by Eq.(4.1) can be rewritten as

ρ = Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{n} w_i a_{ij} a*_{ik} |φ_j><φ_k|,   (4.3)

or, in an equivalent way,

<φ_j|ρ|φ_k> = Σ_{i=1}^{n} w_i a_{ij} a*_{ik}.   (4.4)
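Equations (4.1)-(4.4) are straightforward to verify with a few lines of linear algebra. The sketch below (an illustration, not from the book) builds ρ from Eq.(4.1) for a random mixture of non-orthonormal states and checks that the matrix-element formula Eq.(4.4) reproduces it, together with the standard density-matrix properties:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

# Random, not necessarily orthonormal, pure states |psi_i> with expansion
# coefficients a[i, j] in an orthonormal basis |phi_j>, and weights w_i.
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
a /= np.linalg.norm(a, axis=1, keepdims=True)   # normalize each |psi_i>
w = rng.random(n)
w /= w.sum()                                    # statistical weights sum to 1

# Definition (4.1): rho = sum_i w_i |psi_i><psi_i|
rho = sum(w[i] * np.outer(a[i], a[i].conj()) for i in range(n))

# Matrix elements (4.4): <phi_j|rho|phi_k> = sum_i w_i a_ij a*_ik
rho_elems = np.einsum('i,ij,ik->jk', w, a, a.conj())

assert np.allclose(rho, rho_elems)               # (4.1) and (4.4) agree
assert np.allclose(rho, rho.conj().T)            # rho is Hermitian
assert np.isclose(np.trace(rho).real, 1.0)       # unit trace
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)  # positive semi-definite
```

Note that the check works precisely because the |ψ_i> need not be orthogonal: the weights w_i are not the eigenvalues of ρ, yet ρ is still Hermitian, positive and of unit trace.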

4.2 Liouville equation for the reduced density matrix

In this section we are going to derive, using the projector technique, the Liouville equation for the reduced density matrix, which describes a quantum system in contact with an environment. As we have shown in the previous section, the density matrix contains all significant information about a quantum mechanical system, and we will use it in our further calculations. The density matrix of a quantum system satisfies the quantum Liouville equation. In order to derive it, we recall the expression for the density matrix operator:

(d/dt) ρ(t) = (d/dt) Σ_i w_i |ψ_i(t)><ψ_i(t)| = Σ_i w_i ((d/dt)|ψ_i(t)>)<ψ_i(t)| + Σ_i w_i |ψ_i(t)>((d/dt)<ψ_i(t)|).   (4.5)

Using the time-dependent Schrödinger equation and its Hermitian conjugate,

iħ (d/dt)|ψ_i(t)> = H |ψ_i(t)>,   −iħ (d/dt)<ψ_i(t)| = <ψ_i(t)| H,   (4.6)

one easily obtains the quantum Liouville equation, which is very useful to describe the evolution of quantum systems interacting with a decohering environment:

iħ dρ/dt = Hρ − ρH = [H, ρ].   (4.7)

Note that the Hamiltonian of the system H can explicitly contain an external control field. Let us now consider a quantum system A with the Hamiltonian H_A in contact with another system B (heat bath) with the Hamiltonian H_B. One can think about systems A and B as two subsystems interacting with each other. We assume a weak interaction between the subsystems, in order to justify the use of perturbation theory. Thus, the density matrix of the

whole system ρ obeys the quantum Liouville equation:

iħ dρ/dt = [H_A + H_B + H_AB, ρ],   (4.8)

where H_AB is the interaction between the systems A and B. Our task is to derive from Eq.(4.8) the corresponding equation of motion for the reduced density matrix σ(t). Let us introduce the reduced density matrix σ, which describes the system A and is given by

σ(t) = Tr_B[ρ(t)].   (4.9)

The trace operation Tr_B means that we take the diagonal sum of the corresponding operator in the Hilbert space of the system B [Toda (1983)]. In order to make further progress, we assume that the density matrix of the system B is at thermal equilibrium at temperature T, and it is given by

ρ_B = exp(−βH_B)/Tr_B[exp(−βH_B)],   (4.10)

with β = 1/(k_B T). Let us introduce a projector operator:

P ρ(t) = ρ_B Tr_B[ρ(t)] = ρ_B σ(t).   (4.11)

One can immediately check that from the definition Eq.(4.11) it follows:

P² = P,   (4.12)

thus P is indeed a projector. Let us also introduce, for brevity of notation, the corresponding Liouville operators:

L = L_A + L_B + L_AB.   (4.13)

For a Liouville operator L and an operator F we can always write:

exp(iLt) F = exp(−iHt/ħ) F exp(iHt/ħ).   (4.14)

We start from the separation of the Liouville equation into two equations:

iħ d(Pρ)/dt = P L P ρ + P L (1 − P) ρ,
iħ d((1 − P)ρ)/dt = (1 − P) L P ρ + (1 − P) L (1 − P) ρ.   (4.15)

It is easy, by summation of both equations, to obtain again the Liouville equation Eq.(4.8). Let us rewrite Eq.(4.15) using the notation (1 − P)ρ(t) = θ(t) and (1 − P) = P':

iħ d(Pρ)/dt = P L P ρ + P L θ,
iħ dθ/dt = P' L P ρ + P' L θ.   (4.16)

Taking the trace over the subsystem B of the first equation in Eq.(4.16), we can write

iħ dσ/dt = L_A σ + Tr_B[L_AB θ(t)].   (4.17)

Now we formally solve the second equation in Eq.(4.16) and obtain

θ(t) = Z(t) θ(0) − (i/ħ) ∫_0^t dτ Z(t − τ) L_AB ρ_B σ(τ),   (4.18)

where Z(t) = exp(−i(L_A + L_B + P' L_AB) t/ħ). Substituting the solution Eq.(4.18) into Eq.(4.17) gives

iħ dσ/dt = L_A σ + Tr_B[L_AB Z(t) θ(0)] − (i/ħ) ∫_0^t dτ Tr_B[L_AB Z(t − τ) L_AB ρ_B σ(τ)].   (4.19)

Note that this equation is still exact. In order to deal with it, let us consider the second-order perturbation, in which Z(t − τ) is replaced by the free evolution:

iħ dσ/dt = L_A σ − (i/ħ) ∫_0^t dτ Tr_B[L_AB exp(−i(L_A + L_B)(t − τ)) L_AB ρ_B σ(τ)].   (4.20)

In many situations it is enough to consider the interaction between the systems A and B in the bilinear form:

H_AB = A(t) B(t),   (4.21)

where the operator A(t) belongs to the system A, and the operator B(t) corresponds to the system B. Then we can rewrite Eq.(4.20) in the interaction representation as

iħ dσ/dt = −(1/ħ²) ∫_0^t dτ Tr_B[[A(t)B(t), [A(τ)B(τ), ρ_B σ(t)]]].   (4.22)

Note that while the original equation Eq.(4.8) for the full density matrix represents a unitary evolution, Eq.(4.22) for the reduced density matrix σ is

non-unitary. This is due to the irreversibility introduced by the trace operation over the subsystem B and the partial loss of information. In our further calculations we will traditionally denote the reduced density matrix by ρ, instead of the σ used in this section.

4.3 Modern variational approach to optimal control of quantum systems

Using the Lagrangian formalism for the control problem, it was shown [Ohtsuki (1999), Peirce (1988)] how to construct optimal external fields to drive a certain physical quantity, like the population of a given quantum state, to reach a desired value at a given time. Let us briefly describe this method. Following [Peirce (1988), Ohtsuki (1999)], let us consider the problem of finding an optimal control pulse shape that transfers a quantum system with dissipation from the initial state into some "target" state. The optimal control should be performed in the shortest possible time for a given pulse energy. Let the Hamiltonian of the controlled system interacting with an external laser field E(t) be given by

H = H_0 + μE(t),   (4.23)

where H_0 is the Hamiltonian of the unperturbed system (for example, a molecular Hamiltonian), μ is the dipole moment operator and E(t) is a control field. Since the system is in contact with an environment, it can be characterized in terms of the reduced density matrix ρ(t), and the time evolution of the system is described by the Liouville equation:

iħ dρ/dt = L_tot ρ = [H, ρ] + iΓρ.   (4.24)

Here the operator Γ describes relaxation processes in the system. Let us assume that the initial density matrix is given by ρ(0) = ρ_0, and the target state is given by ρ(T) = ρ_T. The main idea is that the optimal control problem can be formulated in terms of the maximization of the following functional:

S = <ρ_T|ρ> + λ ∫ E²(t) dt + ∫ ξ(t) (iħ d/dt − L_tot) ρ(t) dt,   (4.25)

where λ is a Lagrange multiplier and ξ is a Lagrange multiplier density.

The first term in Eq.(4.25) maximizes the overlap with the target state, the second term bounds the energy of the control pulse, and the third term ensures that the evolution of the system obeys the Liouville equation. The variation with respect to the Lagrange multiplier density ξ(t) gives the Euler-Lagrange equation:

iħ dξ/dt = L†_tot ξ,   (4.26)

where † denotes Hermitian conjugation. The terminal condition is ξ(T) = ρ_T. After some manipulations, for the control field we have

E(t) = (1/2λ) <ξ|M|ρ>,   (4.27)

where the operator M is given by the commutator with the dipole moment operator, Mf = [μ, f]. By substituting Eq.(4.27) into Eqs.(4.24),(4.26) we finally obtain a closed system of equations:

iħ dρ/dt = Lρ − (1/2λ) <ξ|M|ρ> Mρ,
iħ dξ/dt = L†ξ − (1/2λ) <ξ|M|ρ> Mξ,   (4.28)

with the initial state ρ(0) = ρ_0 and the terminal state ξ(T) = ρ_T. Here we use the notation Lf = [H_0, f] + iΓf. In order to solve these equations, sophisticated numerical procedures are necessary. Thus, by using the above described procedure it is not possible to obtain an analytical solution even for the simplest control problems.

4.3.1 An alternative analytical theory

Here we develop an alternative new approach, which permits us to obtain solutions in analytical form in some simple cases. The obtained solutions represent global extrema of the considered control problems within certain approximations. In order to solve a problem analytically, one has to make some reasonable assumptions. Let us limit ourselves to the case of monochromatic control fields only: E(t) = V(t) sin(ωt). This means that we fix the carrier frequency ω and search for the optimal shape (envelope) V(t) of the field. The modulation of the field amplitude should be slow enough, so that it does not affect the assumption of monochromaticity. In the following pages we are going to discuss when such an assumption is valid. We are not

going to consider any frequency modulation in our theory. For simplicity we consider in this section only the case of one-photon resonance; however, as we show later, this theory can also be applied when a multi-photon resonance takes place in the system.

Let us consider a quantum-mechanical system which is in contact with an environment and interacts with an external field E(t) = V(t) cos(ωt). Here V(t) is a pulse shape (or "envelope") and ω is a carrier frequency. The evolution of such a system obeys the quantum Liouville equation for the density matrix with dissipative terms. Our method consists of two steps: (1) under certain conditions we derive an approximate analytical solution for the density matrix ρ(V(t), t) of the controlled quantum system that satisfies the Liouville equation with dissipative terms; this solution has a relatively simple functional dependence on the shape of the control field; (2) with the help of this solution we derive an explicit ordinary differential equation, which is in fact the Euler-Lagrange equation, for the optimal control field.

Since the memory effects are expected to be important, one should search for a differential equation containing both the pulse area θ [Shore (1989)], defined as

θ(t) = μ ∫ dt' V(t'),   (4.29)

and its time derivatives, θ'(t) = μV(t), with μ being the dipole matrix element of the system. Note that one can guess the minimum order of the Euler-Lagrange equation from some general physical arguments. For the case of optimal control of dynamical quantities at a given time t_0, the differential equation satisfied by θ(t) must be of at least second order, to fulfill the initial conditions θ(t_0) and θ'(t_0) = μV(t_0). In the same way, the control of time averaged quantities over a finite time interval [t_0, t_0 + T] with boundary conditions requires a differential equation of at least fourth order for θ(t), due to the boundary conditions for θ(t) and θ'(t) at t_0 and t_0 + T. It will be shown below that for certain quantum systems with decoherence a fourth order differential equation for the control field arises naturally as the Euler-Lagrange equation of the variational approach.

The control of a time averaged dynamical quantity requires the search for an optimal shape V(t) of the external field. Thus, in order to obtain the optimal shape V(t) on a finite control time interval [0, T], we propose the following Lagrangian (which is different from the Lagrangian proposed, for example, in [Judson (1992)]):

L = ∫_0^T [ C_1(θ(t), V(t), dV(t)/dt) + β Ω(t) (i dρ(t)/dt − Z(t)ρ(t)) ] dt,   (4.30)

where β is a Lagrange multiplier and Ω(t) is a Lagrange multiplier density. The first term in Eq.(4.30) explicitly includes the description of the optimal control problem. The second term in Eq.(4.30) ensures that the density matrix satisfies the quantum Liouville equation with the corresponding Liouville operator Z(t):

i dρ(t)/dt = Z(t) ρ(t).   (4.31)

We assume that ρ(t = 0) = ρ_0 is the density matrix corresponding to the initial conditions. Note that atomic units ħ = m = e = 1 are used. The functional density C_1 is given by

C_1(θ(t), V(t), dV(t)/dt) = C_ob(ρ(t)) + λV²(t) + λ_1 (dV(t)/dt)²,   (4.32)

where λ and λ_1 are Lagrange multipliers. The first term in Eq.(4.32), C_ob(ρ(t)), refers to a physical quantity to be maximized during the control time interval. Control at a given time T can be obtained as a special limit:

∫_0^T C_ob(ρ(t)) δ(t − T) dt = C_ob(ρ(T)),   (4.33)

where δ(t) is the Dirac delta function. The second term in Eq.(4.32) imposes a constraint on the total energy E_0 of the control field:

2 ∫_0^T E²(t) dt ≈ ∫_0^T V²(t) dt = ∫_0^T (θ'(t)/μ)² dt = E_0.   (4.34)

The third term in Eq.(4.32) represents a further constraint on the properties of the pulse envelope. The requirement

|dV(t)/dt| ≤ R,   (4.35)

where R is a positive constant, bounds the absolute value of the time derivative of the pulse envelope dV(t)/dt, and therefore excludes infinitely narrow or very abrupt, step-like solutions, which cannot be achieved experimentally. Let Δ be a minimal experimentally achievable duration of the pulse; then Δ⁻¹ ∝ R (see Fig.4.1).

Fig. 4.1 Constraint on the minimal width Δ of the envelope V(t) of the optimal pulse (see Eq.(4.35)).

The above formulated control problem is rather complicated, due to the nonlinearity in the functional C_ob(ρ(t)) and the time dependence of the operator Z(t) in the Liouville equation Eq.(4.31), which leads to a nontrivial dependence of the density matrix ρ(t) on the field shape V(t). However, we shall show that under certain conditions it is possible to obtain an analytical solution for V(t). A formal solution of the Liouville equation Eq.(4.31) can be written in the time-ordered form:

ρ(t) = T̂ exp(−i ∫_0^t Z(t') dt') ρ_0,   (4.36)

where T̂ is the time ordering operator. Let us assume that the applied field is in resonance with the controlled system; then one can eliminate fast-oscillating terms like exp(iωt) and exp(−iωt) in the operator Z(t). This approximation is usually called the Rotating Wave Approximation (RWA), and one can learn more about its validity in [Shore (1989)]. So, in the new variables (the so-called rotating basis), the evolution operator Z(V(t)) depends explicitly on time solely through the field envelope V(t).

Then, using the adiabatic approximation, one can neglect the time ordering in Eq.(4.36), and so obtain an explicit expression for the functional Eq.(4.32). The essence of the adiabatic approximation is as follows [Akulin (1992)]. Let us suppose that we need to solve a system of linear differential equations:

(d/dt) p(t) = Z(t) p(t),   (4.37)

where p(t) is an n-dimensional vector, while Z(t) is a time-dependent n × n matrix. We choose the initial conditions p(t)|_{t_0} = p(t_0). If the eigenvectors a_n(t) and eigenfrequencies ω_n(t) of the matrix Z(t) vary slowly with t,

|dω_n(t)/dt| ≪ (ω_n(t))²,   (4.38)

then the adiabatic approximation of the solution of the set of linear equations Eq.(4.37) may be represented in the form:

p_n = a_n exp(∫_{t_0}^t ω_n(t') dt').   (4.39)

Indeed, let us substitute Eq.(4.39) into Eq.(4.37) and get

(da_n(t)/dt) exp(∫_{t_0}^t ω_n(t')dt') + ω_n(t) a_n(t) exp(∫_{t_0}^t ω_n(t')dt') = Z(t) a_n exp(∫_{t_0}^t ω_n(t')dt').   (4.40)

Given the slowness condition

|da_n(t)/dt| ≪ |ω_n(t)| |a_n(t)|,   (4.41)

the first term on the left-hand side of Eq.(4.40) is small compared to the second one and may be omitted. Then Eq.(4.40) only keeps the terms yielding the identity that determines the eigenfrequency ω_n. Under this approximation the density matrix ρ(t) depends only on the pulse area θ(t) and the time t, so one can obtain an explicit expression for the functional Eq.(4.32) in terms of the functional dependence ρ = ρ(θ(t), t).
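The adiabatic ansatz Eq.(4.39) is easy to test on a concrete system. In the sketch below (my own example, not from the book) the matrix Z(t) = −iH(t) comes from a slowly swept two-level Hamiltonian; because the sweep satisfies the slowness conditions Eqs.(4.38) and (4.41), the numerically exact state stays locked to the instantaneous eigenvector:

```python
import numpy as np

def instantaneous_ground_state(H):
    # Eigenvector of the lowest instantaneous eigenvalue (eigh sorts ascending).
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def H_of(t, T, x=1.0):
    # Slow linear sweep of the diagonal splitting from -5 to +5.
    z = -5.0 + 10.0 * t / T
    return np.array([[z, x], [x, -z]], dtype=complex)

def evolve(T=200.0, dt=0.01):
    """RK4 integration of i d|psi>/dt = H(t)|psi> (hbar = 1)."""
    psi = instantaneous_ground_state(H_of(0.0, T)).astype(complex)
    f = lambda t, p: -1j * H_of(t, T) @ p
    t = 0.0
    while t < T - 1e-12:
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return psi

psi_T = evolve()
gs_T = instantaneous_ground_state(H_of(200.0, 200.0))
# For a slow sweep the exact state follows the instantaneous eigenvector,
# as predicted by the adiabatic ansatz Eq.(4.39) (overlap close to one).
assert abs(np.vdot(gs_T, psi_T)) > 0.99
```

Shortening the sweep time T violates Eq.(4.38) and the overlap drops, which is precisely the regime where the time ordering in Eq.(4.36) can no longer be neglected.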

The corresponding extremum condition δL_1 = 0 yields the high-order Euler-Lagrange equation

∂C_1/∂θ − (d/dt)(∂C_1/∂θ') + (d²/dt²)(∂C_1/∂θ'') = 0,   (4.42)

or, using Eq.(4.32),

(2λ_1/μ²) d⁴θ/dt⁴ − (2λ/μ²) d²θ/dt² + ∂C_ob(ρ)/∂θ = 0.   (4.43)

Equation (4.43) is the kernel result of the presented theory and provides an explicit ordinary differential equation for the pulse area θ(t). Note that this equation is only applicable if one is able to determine an explicit expression for the density matrix ρ = ρ(θ(t), t); then the second term in Eq.(4.30) is satisfied automatically, and δL ∝ δL_1. In general, the constants θ_T, R and E_0 can also be subjects of the optimization. Equation (4.43) is highly nonlinear with respect to the pulse area θ(t), and usually can be solved only numerically. In order to solve Eq.(4.43) one can assume natural boundary conditions θ(0) = θ'(0) = θ'(T) = 0 and θ(T) = θ_T, which also ensure that the control field is zero at the beginning and at the end of the control interval: V(0) = V(T) = 0. The choice of the constant θ_T depends on the control problem. In order to show that Eq.(4.43) can describe optimal control in real physical situations, in the next paragraphs we are going to demonstrate how to obtain analytical solutions for some simple optimal control problems.

4.4 An approximate analytical solution for the case of a two level system

To illustrate the above described method, let us consider optimal control of a two level quantum system. We apply our theory to a two level quantum system with relaxation (see Fig.4.2). We will treat the quantum evolution of the system under the RWA. We also include decoherence effects phenomenologically, using the relaxation and dephasing rates γ_1, γ_2. This description is equivalent to the Markovian approximation of the decoherence dynamics [Weiss (1981)]. One can represent the system Hamiltonian H_0, which interacts with an

This means that we measure the field amplitude in energy units. 4. /Z12 = M21 is the dipole matrix element.(4. 72 (see text) interacting with resonant external field E(t). ei 0 0 e2 + 0 -H2iE'{t) -iii2E(t) 0 (4. Evolution of the system is described by the Liouville equation for the density matrix p(t) [Allen (1975)]: .45) are widely used for the description of different quantum effects. or response of donor impurities in semiconductors . dt i (4.2i = M12 = 1.45) ~QT = w0Pl2 + E(t)(p22 .t) in the dipole approximation: H = HQ + H^. dt . The occupations of the levels are described by the diagonal density matrix elements p n and p22- external electric field E(t) — V(t) cos(o. P21 = P*2.9p22 E(t)pi2 -E*(t)p2l-illP22. where OJQ = (. Eqs.dp i-^~ 11 = E*(t)p2i .2 A two level system with energy levels ei. like for instance.H2P12.2 — £1 and 71 and 72 are phenomenologically introduced relaxation and dephasing rates.106 Optimal control and forecasting of complex dynamical systems 22 E(t) 11 Fig./Oil) . and the sign * denotes complex conjugation. 62 a n d with relaxation constants 71.E(t)p12 + H1P22. interaction of atoms or molecules with resonant laser field.44) where ej and €2 are the energy levels of the quantum system. Without loosing of generality one can set p.

49) 4 Z(si).46) have the form idp(t)/dt = Z(t)p(t) and still difficult to integrate analytically. P21 =P2iexp(-iwt). that p n + P22 = 1 and ^21 = P\2. where (4. To make a further progress in integration of Eqs.36) for the Liouville operator Z(i) as: p(t) = exp(-ifi(*))/3 0 .(4. and the excited level is empty. We write the time ordered product Eq.47) Note. However. .t ^ t'. one can note.(4. Let us assume that the carrier frequency of the control field u is chosen to be the resonant frequency u = UIQ.p2l) + niP22.72 = 0. then [Z(t). [ Jo X L \z(s2).72 ^ 0. or excitation of surface.48) fi(t) = J Z{Sl)dsx + 1 J [Z(S1).i72/5i2. That corresponds to the case when the level with the lowest energy (ground level) is initially occupied. = V(t)(p22 . since the operator Z(t) explicitly depends on time. the commutators [Z(t).46) let us use the Magnus series [Burrage (1998)].(4.dp l i = V{t)(pU . P22 = P12 — P21 — 0. and the system Eqs.t ^ t'. (4..pn) . Using the Rotating Wave Approximation one can eliminate fast oscillating terms exp(iu.(4.. (4. that if 71.46) is integrable analytically.46) m .Z(t')] = 0. and for 71.We set the initial conditions as Pu = 1.into image charge states at noble metal surfaces [Hertel (1996)]. After the application of the RWA Eqs.£) or exp(—iujt) and derive the following equations: .Z(t')} ^ 0.Optimal control of quantum systems 107 to Terahertz radiation [Cole (2001)].dpu = V[t){p2\ ~ P12) ~ HlP22. p Z(s2)dsds\ 2 1 + (4. where we use the following notation: P12 = pi2exp(icot). [ Jo 2 Z(s3)ds3 dsi + .

43). b): f(b)-f(a) = f'(c)(b-a).(4. Applying the theorem to the term JQSl Z{s2)ds2 in Eq. if one can define f'(x) on the interval (a.c))ai] (4.51) where c € (a.%))l dt J\ Than.2. (4. that gives: \\Z(Sl)\\ » J [Z(S1). ! .(4.(4.(4. (Z(*i) + —^(si dZ(c1)i .52) where c € (0.55) Here t.T).50) with si S [0.53) where c\ G (c. The condition that this will be a good approximation is \\Z(Sl)\\>\\[z(Sl).(4. t) is equivalent that we neglect all the terms in the series Eq. our assumption about the special form p = p(9. This is the condition.50) we obtain: |Z(Sl)||» Z(Sl). For a two level system under the RWA the Liouville operator Z reads (see Eq.46)) as 0 0 -V(t) V(t) i7l -iji V(t) -V(t) -V(t) V(t) -i72 0 V(t) \ -V{t) 0 -t72 / Z(t) = (4. under which it is possible to integrate Eqs.Z(c)Sl (4.49) except the first one.pZ(s2)ds2 (4.46) and use the solution for the consequent substitution into the Euler-Lagrange equation Eq. s{). using the explicit form of the operator Z(t) we get : |^)| > "^ " ^ 7 . Let us know apply the Mean Value Theorem to the term Z(c) again.54) (4.(4.56) . Z(s\) = 0 we rewrite Eq.53) as ^ 1 )|»THfz( Sl) .T]. that for a function f{x). si). = 1. dt (4.ti £ (0. .108 Optimal control and forecasting of complex dynamical systems Thus. Replacing the terms s\ — c and si by their maximum possible value T and taking into account that Z(s{). b). Now we use the Mean Value Theorem.

The influence of boundary conditions on the shape of the control field is also investigated.56) and truncating the series Eq. where (4.Optimal control of quantum systems 109 While the initial conditions for the density matrix p(to) we set as /pn(to)\ P22{to) n(t \ _ _ I Pmto) \p2l(to)J (4.(4.58) becomes exact when 71 = 72 = 0 or for a constant amplitude of the control field V(t) = Vo. Then we are going to present both numerical and analytical studies of the control problem using a simplified Lagrangian. let us consider optimal control of a two level system. that the approximate solution Eq. The expression of Eq.58) has the form p — p(0(i). it is easy to obtain an approximate analytical solution for the occupation P22(t)' P22(t) = 262(t)F~1 ( l . We assume that initially the ground state of this system is occupied and the excited level is empty.1 ) .( 7 i + 7 2 ) t / 2 ) # . 4.(4.( 7 l + l2)t/2) +(7i + I2)t sinh(F) e x p ( .(4.(4. We are looking for the optimal shape of the field V(t) that maximizes the .49) after the first term.cosh(ff) e x p ( . pu = 1 and P22 — 0.5 Optimal control of a time averaged occupation of the excited level in a two-level system In this section we are going to study the influence of relaxation in the controlled system on the shape of the optimal field.55).t) and therefore we can apply Eq. Note.58) ^=\/((7i-72)2i2-16^2W)A and ^ = 7i72*2 + 46>2(0.57) Using Eq. (4. In order to make a systematic study of the problem of control over a finite time interval.(4.43)) for various parameters of relaxation. as we already have mentioned above.(4. that is reflected by Eq.43) to solve optimal control problem. First we perform numerical integration of the Euler-Lagrange (see Eq.

(4. or in other words. in Eq.62) Using the boundary conditions 0(0) = 0(0) = 0(T) = 0 and 6(T) = 7r/2. correspondingly.(4.59) Note.58) and then solve the corresponding Euler-Lagrange equation by standard numerical techniques using the fourth order Runge-Kutta method and "shooting" method for the boundary problem. (4. the optimal pulse shape V(t) for different values of the relaxation constants 71.110 Optimal control and forecasting of complex dynamical systems mean (time-averaged) occupation of the excited state P22 over time interval t € [0. In this limit the corresponding Euler-Lagrange equation for the optimal pulse envelope 6{t) (see Eq. that guarantee that the field V(t) is zero at the beginning and at the end of the control interval. For simplicity we choose the duration of the control interval T = 1.60) In the limit of a weak relaxation in the system.(4. (4. (4.32) we consider the objective functional density: Cob{p{t)) = P22(t). to maximize the population P22. the expression for the occupation of the exited state P22 obtains very simple form P22 = sin 2 (0(i)). Therefore.A i ^ + A^-^sin(20)=O. .(4.(4. the value «2 is proportional to the measured photo-charge. In the presence of decoherence (71T.61) within the Rotating Wave Approximation. The resonant tunnelling photo-charge through an array of coupled quantum dots is also proportional to such an expression [Stoof (1996)].61) we use the explicit formula for the P2i{t) given by Eq.43)) reads . We also considered different values of the pulse energy EQ and the pulse curvature R of the control fields. and the pulse area is 7r/2. 72.Eq. 72T ^ 0) instead of the solution Eq. We have calculated the optimal pulse area 0(t) and.62) is known as a limiting case of the dispersive sine-Gordon equation and it is exactly integrable [Bogdan (2001)]. T]. (4. since P22W is proportional to the observed photo-current in the Terahertz experiments on semiconductors [Cole (2001)]. that maximizes the functional: n 2 = f [ P22(t)dt.

Fig. 4.3  The optimal control field V(t) which maximizes the value of n2 (see text) over the control time interval [0, 1]. Solid line: solution of the Euler-Lagrange Eq.(4.62) for a system without relaxation (γ1T = γ2T = 0); the pulse energy is E0 = 4.8 and the pulse curvature R = 182. Dashed line: optimal field for a system with relaxation (γ1T = 2γ2T = 0.2), for the same value of the pulse energy E0 and the curvature R = 134.

We found that the optimal pulse obtained for the system with relaxation improves the value of n2 by 50% with respect to a Gaussian-like pulse, and by 25% compared to a square pulse of the same energy. One can suggest a simple physical interpretation of this result. Using Eq.(4.58), we can estimate that in the presence of the external field the occupation of the upper level P22(t) decays exponentially with the rate (γ1 + γ2)/2 = 3γ1/4, which is slower than the free decay of the system with the rate γ1.

In Fig. 4.3 we show the optimal field V(t) for a two-level system without relaxation (γ1T = γ2T = 0) and with relaxation (γ1T = 2γ2T = 0.2). Note that in both cases the pulse maximum occurs near the beginning of the control interval. This solution leads to a rapid increase of the population P22(t), and therefore to a maximization of the time-averaged occupation n2. We also find that in the case of a system without relaxation the pulse vanishes faster once the population inversion has been achieved, whereas for a system with relaxation the optimal pulse amplitude decreases more slowly. Therefore, relaxation effects can be partially compensated by a

longer application of the field during the control interval. We choose a relatively large pulse energy E0 of the control pulse in the case with relaxation, in order to compensate the rapid decay of the excited level. In Fig. 4.4 we show the corresponding dynamics of the population of the excited level ρ22(t) for both cases. As we mentioned before, the expression given by Eq.(4.58) is exact for a system without relaxation within the RWA. However, for a system with relaxation it also agrees well with the numerical solution of the Liouville equation Eq.(4.46). This indicates that V(t) fulfills the condition given by Eq.(4.55) on the control interval [0, T], and therefore our approach is self-consistent.

Fig. 4.4  Dynamics of the population P22(t) corresponding to the fields shown in Fig. 4.3, for a system without relaxation (thick solid line) and with relaxation (dash-dotted line, using the approximate formula given by Eq.(4.58); thin solid line: numerical solution of the Liouville equation Eq.(4.46)).

Let us now examine the case of strong decoherence, in which control is not coherent within the whole control interval: Tγ1,2 > 1. In Fig. 4.5 we compare the optimal field V(t) for a two-level system without relaxation (γ1T = γ2T = 0) and with relaxation (γ1T = 2γ2T = 5). Since the optimal pulse amplitude does not change significantly over the time interval, one can conclude that in the strong relaxation regime a pulse with a constant envelope V(t) = V0, for which the controlled system is close to the "saturation" regime ρ11 = ρ22 = 1/2, will be a good approximation for the optimal field.

Surprisingly, the analytical expression for P22(t) works well even for relatively large relaxation rates γ1, γ2. The explanation of this result is that the analytical expression for ρ22 given by Eq.(4.58) is exact for a pulse with a constant envelope V(t) = V0. Because the optimal field envelope is almost constant (except for intervals near the boundaries), this expression works well here too. In Fig. 4.6 we show the corresponding dynamics of the population of the excited level P22(t).

Fig. 4.5  Optimal control field V(t) for a two-level system without relaxation (γ1T = γ2T = 0, solid line); the pulse energy is E0 = 4.57 and the pulse curvature R = 128. Dashed line: the optimal pulse for a system with relaxation (γ1T = 2γ2T = 5), with the energy E0 = 53.54 and the curvature R = 808.

We found that the value of the averaged occupation n2 increases monotonously with the energy E0 of the optimal control field, both for a system with and without relaxation. In order to illustrate this, we plot in Fig. 4.7 the averaged occupation n2 as a function of the energy E0 and the curvature R of the optimal fields for a system without relaxation (γ1T = γ2T = 0). Note that pulses of a fixed shape (for instance, Gaussian) would show an oscillating behavior for increasing energy, due to Rabi oscillations [Speer (2000)]. The monotonous increase of n2 with E0 is thus a feature which characterizes the optimal pulses.

Fig. 4.6  Dynamics of the population P22(t) for a system without relaxation (thick solid line) and with relaxation (dash-dotted line, using formula (4.58); thin solid line: numerical solution of the Liouville equation).

4.5.1 Analytical solution for optimal control field

In order to obtain an analytical solution for the optimal control field, we analyze the problem in certain limiting cases. For instance, if γ1,2T ≪ 1, one can neglect decoherence within the control interval, and Eq.(4.58) becomes simply P22(t) = sin²(θ(t)). In order to make the problem analytically solvable, we exclude the constraint on the derivative of the field envelope (see Eq.(4.35)), and thus reduce the order of the Euler-Lagrange equation Eq.(4.43). The Lagrangian density C1 for the optimal control problem becomes

C1 = P22(t)/T + λ (θ'(t))²/μ²,   (4.63)

while the corresponding Euler-Lagrange equation is given by

2λ θ''(t) − μ² sin(2θ(t))/T = 0.   (4.64)

The second-order differential Eq.(4.64) requires two boundary conditions, for which we choose θ(0) = 0 and θ(T) = π/2 (which ensure the population inversion). Eq.(4.64) is similar to the equation for a mathematical pendulum, and can be solved analytically in terms of special functions.

The first integral of Eq.(4.64) reads:

−λ (θ'(t))²/μ² + sin²(θ(t))/T = C,   (4.65)

where C = −λV²(0) is a constant of integration and V(0) corresponds to the initial amplitude of the control field at t = 0. After the second integration we obtain

∫0^θ dθ' / (1 − k² sin²(θ'))^(1/2) = μV(0) t,   (4.66)

and, finally,

θ = am(μV(0)t, k).   (4.67)

Here am is the amplitude of the Jacobian elliptic function [Abramovitz (1972)]. The shape of the optimal control field V(t) is obtained through differentiation:

V(t) = θ'(t)/μ = V(0) dn(μV(0)t, k),   (4.68)

Fig. 4.7  Dependence of the time-averaged occupation n2 on the pulse energy E0 and the pulse curvature R (see Eq.(4.35)) of the optimal pulses.

Note,
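The elliptic-function solution is easy to evaluate with SciPy, whose `ellipj` routine returns (sn, cn, dn, am) and takes the parameter m = k². The numerical values of μ, V(0) and k below are illustrative assumptions (the text leaves them symbolic); the check confirms that the envelope of Eq.(4.68) is indeed θ'(t)/μ for the pulse area of Eq.(4.67).

```python
import numpy as np
from scipy.special import ellipj

# Illustrative parameters; mu, V(0) and k are left symbolic in the text.
mu, V0, k = 1.0, 2.0, 0.9
t = np.linspace(0.0, 1.0, 4001)
u = mu*V0*t

sn, cn, dn, am = ellipj(u, k**2)    # note: ellipj takes the parameter m = k**2
theta = am                          # pulse area, Eq.(4.67)
V = V0*dn                           # optimal envelope, Eq.(4.68)

# consistency check: the envelope must satisfy V(t) = theta'(t)/mu
dtheta = np.gradient(theta, t)
err = np.max(np.abs(dtheta/mu - V))
print(err)    # small finite-difference error only
```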

that the control field must be nonzero at the beginning: V(0) ≠ 0. Here dn is the Jacobian elliptic function [Abramovitz (1972)], and the parameter is k = 1/(V(0)(−λT)^(1/2)). With the help of the condition on the pulse energy E0 (see Eq.(4.34)) we can determine the Lagrange multiplier λ. If we consider the limit C → 1, then we find that V(T) → 0. In this case, Eq.(4.68) can be significantly simplified to

V(t) = (1/μ) (d/dt) arccos[ 2 exp(μV(0)t) / (1 + exp(2μV(0)t)) ].   (4.69)

Fig. 4.8  Solid line: optimal control field V(t) which minimizes the value of n2 (see text). Dashed line: corresponding dynamics of the occupation of the upper level P22(t).

Let us now consider a control problem with the opposite goal, namely, the problem of minimizing the time-averaged occupation of the excited level P22. For the sake of an analytical solution, we consider an infinite control time interval t ∈ (−∞, +∞), with the natural boundary conditions V(−∞) = V(+∞) = 0 and θ(+∞) = π, which mean that the system returns back to the ground state after the interaction with the control field. Integrating the corresponding Euler-Lagrange equation, one can easily obtain that the optimal field in this case is described by a soliton solution
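A quick numerical check of these return-to-ground-state boundary conditions: a sech-shaped envelope of the soliton type discussed here carries a pulse area of exactly π, so that P22 = sin²θ rises to 1 at the pulse centre and falls back to 0 afterwards. The width τ and coupling μ below are arbitrary illustrative values.

```python
import numpy as np

# Illustrative scales: tau is the soliton width, mu the coupling constant.
mu, tau = 1.0, 0.5
t = np.linspace(-20*tau, 20*tau, 20001)
dt = t[1] - t[0]

V = 1.0/(mu*tau*np.cosh(t/tau))          # sech-shaped soliton envelope
# accumulated pulse area theta(t) = mu * int_{-inf}^{t} V dt'
theta = mu*np.concatenate(([0.0], np.cumsum(0.5*(V[1:] + V[:-1])*dt)))
P22 = np.sin(theta)**2

area = theta[-1]
print(area, np.pi)    # the total area approaches pi: full return to the ground state
```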

(see Fig. 4.8):

V(t) = 1 / ( μ (λ/μ²)^(1/2) cosh( t/(λ/μ²)^(1/2) ) ).   (4.70)

Using the normalization condition for the pulse energy E0 given by Eq.(4.34), we obtain the corresponding value of the Lagrange multiplier λ. In order to make a comparison with the solution given by Eq.(4.68), we set γ1 = γ2 = 0. This result means that the soliton Eq.(4.70) is not only one possible solution that propagates without losses (since Eq.(4.64) does not have spatial variables, we can treat it as the sine-Gordon equation in the limit of an optically thin medium), but it also minimizes the possible energy losses, which are proportional to n2 = ∫ P22(t)dt, in the limit of weak relaxation. Using other asymptotic values of the pulse area, θ(+∞) = Nπ, where N is an integer, N = 2, 3, ..., one can immediately reproduce the 2π, 3π, ... soliton solutions. All of them are optimal pulses corresponding to different values of the applied pulse energy E0.

4.5.2 Optimal control at a given time

Now we demonstrate that there is an essential difference between the optimal control problem whose goal is to maximize an objective at a given time T, and the problem of maximizing the same objective averaged over the control interval [0, T]. For this purpose, instead of Eq.(4.63), we consider the Lagrangian density:

C1 = ρ22(T) + λ (θ'(t))²/μ²,   (4.72)

which corresponds to the problem of maximizing the occupation of the upper level ρ22(T) at a given time T. Using the same procedure as for the derivation of Eq.(4.64), we obtain the corresponding Euler-Lagrange equation

2λ θ''(t) − μ² δ(T − t) sin(2θ(t)) = 0,   (4.73)

where δ(t) is the Dirac delta function. One can check that the solution of Eq.(4.73) is an envelope V(t) with a constant amplitude:

V(t) = π/(2μT).   (4.74)
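A one-line consistency check of Eq.(4.74): the constant envelope has pulse area μVT = π/2, drives ρ22(T) = sin²(π/2) = 1, and carries the pulse energy π²/(4μ²T). The units below (μ = T = 1) are illustrative.

```python
import numpy as np

mu, T = 1.0, 1.0                    # illustrative units
V = np.pi/(2.0*mu*T)                # constant envelope of Eq.(4.74)
theta_T = mu*V*T                    # pulse area accumulated by t = T
P22_T = np.sin(theta_T)**2          # occupation of the upper level at T
E0 = V**2*T                         # pulse energy of this unique solution
print(theta_T, P22_T, E0)           # pi/2, 1.0, pi**2/4
```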

This reflects the fact that there is only one optimal pulse, with the energy E0 = π²/(4Tμ²), which drives the occupation ρ22 to 1 at a given time T. From the comparison between the solutions Eq.(4.68) and Eq.(4.74), one can conclude that the formulation of control over a time interval is a nontrivial generalization of optimal control theory, which permits one to perform a more detailed coherent control of quantum systems.

It is important to point out that a Lagrangian density of the form of Eq.(4.63) or Eq.(4.72) always leads to a second-order differential equation for the control fields, as long as the condition ρ = ρ(θ(t), t) is satisfied. Therefore, one cannot demand extra boundary conditions for the control field, for instance V(0) = V(T) = 0; otherwise one would obtain the trivial solution V(t) = 0, which is not consistent with the condition on the pulse energy (see Eq.(4.34)). If conditions on V(0) and V(T) have to be imposed, a Lagrangian leading to at least a fourth-order differential equation is necessary. The boundary conditions V(0) = V(T) = 0 change dramatically the shape of the optimal fields, but they do not affect significantly the dynamics (e.g. ρ22(t)) of the controlled system. This means that the essential physics of the optimal control is already contained in the second-order differential equation. This analytical result can be compared with a numerical solution for a similar problem obtained by Wusheng Zhu, Jair Botina, and Herschel Rabitz using an iterative numerical technique [Zhu (1998)].

Now let us investigate the question of how decoherence affects the shape of the optimal control fields. In Fig. 4.9 we plot the optimal control field V(t) which maximizes the simplified Lagrangian given by Eq.(4.63) for a two-level system with (γ1T = 2γ2T = 0.2) and without (γ1 = γ2 = 0) relaxation; for the system with relaxation we use the analytical solution for the optimal control field. In both cases the field has its maximum value at t = 0 and exhibits a monotonous decay. As in the case of the solutions of Eq.(4.43), the control field is "broader" for the system with relaxation, in order to compensate the decay of the excited state. In Fig. 4.10 we plot the corresponding dynamics of the population P22(t); the overall behavior of ρ22(t) is similar to that of the populations shown in Fig. 4.4. In Fig. 4.11 we also show results for the case of large decoherence, γ1T = 2γ2T = 5. As in Fig. 4.5, the control field is close to constant in the strong decoherence regime. In Fig. 4.12 we plot the corresponding

population P22(t) dynamics.

Fig. 4.9  Optimal control field V(t) which maximizes the time-averaged occupation n2 in a two-level system. Solid line: system without decoherence (γ1T = γ2T = 0). Dashed line: the optimal pulse for the system with relaxation (γ1T = 2γ2T = 0.2) with the same energy. The pulse energy is E0 = 4.6 in both cases.

4.5.3 Estimation of the absolute bound for the control due to decoherence

It is known that decoherence in a system creates obstacles for optimal control [Ohtsuki (1999)]. With the purpose of estimating quantitatively how relaxation and dephasing processes limit the control of the time average n2 of the occupation of the upper level, we analyze the occupation ρ22(t) (see Eq.(4.58)) in more detail. For a strong control field, satisfying

γ1,2 / θ'(t) ≪ 1,   (4.75)

the occupation ρ22(t) always lies under the curve

ρ22max(t) = (1 + exp(−(γ1 + γ2)t/2))/2,  t ∈ [0, T].   (4.76)

This means that it exhibits an absolute upper bound. In order to illustrate this bound, we plot in Fig. 4.13 the dynamics of the occupation ρ22(t) for

40 randomly generated control pulses applied to a two-level system with the relaxation parameters γ1T = 2γ2T = 1 and setting T = 1. It is not possible to overcome the limit given by Eq.(4.77).

Fig. 4.10  Dynamics of the occupation ρ22(t) for the system without decoherence (thick solid line) and with relaxation (dash-dotted line, using the approximate analytical solution given by Eq.(4.58); thin solid line: numerical solution of the Liouville equation Eq.(4.46)).

Using the expression Eq.(4.58), one can investigate two limiting cases. For a system in the weak decoherence regime (γ1,2T ≪ 1), the population can be fully inverted and remain in this state during the control time interval, so that ρ22(t) → 1 and the maximum possible value of the controlled objective is achieved: n2 = 1. In the strong relaxation limit, γ1,2T ≫ 1, the system is in the saturation regime, so that the ground and excited levels are approximately equally occupied within the control interval: ρ11 ≈ ρ22 ≈ 0.5 and n2 ≈ 0.5. Therefore, due to dissipative processes, the following inequality holds for the controlled averaged value of ρ22:

n2 = (1/T) ∫0^T P22(t)dt ≤ 1/2 + (1 − exp(−(γ1 + γ2)T/2)) / ((γ1 + γ2)T),   (4.77)

which is the absolute limit for the optimally controlled averaged upper occupation in a two-level system with relaxation, within the considered model (the Markovian approximation for the decoherence effects, the RWA, and the adiabatic approximation).

With Eq.(4.77) we can, for example, determine the maximal possible lifetime of an image state at a Cu(111) surface which can be achieved by pulse shaping. According to Hertel et al. [Hertel (1996)], those states are characterized by γ1 = 5·10¹³ s⁻¹ and γ2 = γ1/2. Thus, the theory predicts the minimum possible limit for the effective decay rate, γeff ≥ (γ1 + γ2)/2 = 3.75·10¹³ s⁻¹. Moreover, the optimal fields obtained in our calculations lead to a value of n2 that is up to 50% higher in comparison with pulses having a square amplitude or Gaussian pulses. Thus, under the described assumptions, the optimal pulse that satisfies Eq.(4.43) provides the highest possible value of the objective for a given pulse energy.

Fig. 4.11  The optimal control field V(t) for a two-level system without decoherence (γ1 = γ2 = 0, solid line); the pulse energy is E0 = 20.5. Dashed line: optimal field for a system with decoherence (γ1 = 2γ2 = 5), with a pulse energy E0 = 89.

4.6 Optimal control of nanostructures: double quantum dot

Within the same time period when the design and control of femtosecond laser fields exhibited its great success (the 1980s-90s), mesoscopic science
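The right-hand side of Eq.(4.77) is simply the time average of the bound (4.76), which is easy to confirm numerically. The sketch below uses the parameters of Fig. 4.13 (γ1T = 2γ2T = 1).

```python
import numpy as np

g1, g2, T = 1.0, 0.5, 1.0        # gamma1*T = 2*gamma2*T = 1, the case of Fig. 4.13
g = g1 + g2
t = np.linspace(0.0, T, 200001)
dt = t[1] - t[0]

p22_max = 0.5*(1.0 + np.exp(-0.5*g*t))            # bound of Eq.(4.76)
# trapezoidal time average of the bound over [0, T]
n2_bound = dt*(0.5*p22_max[0] + p22_max[1:-1].sum() + 0.5*p22_max[-1])/T
bound_rhs = 0.5 + (1.0 - np.exp(-0.5*g*T))/(g*T)  # right-hand side of Eq.(4.77)
print(n2_bound, bound_rhs)        # the two agree
```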

advanced as a new era in physics and technology. The fabrication of mesoscopic systems and nanostructures (like clusters, quantum dots, nanowires, etc.) and the further investigation of their properties promise a technological breakthrough in many areas. The ultimate goal of mesoscopic science is to create useful electronic and optical nano-materials that have been quantum-mechanically engineered by tailoring the shape, size, composition and position of various quantum dots and other nanostructures. For example, the further development of quantum dots (QD) provides a possibility to develop a new generation of semiconductor lasers.

Fig. 4.12  Dynamics of the occupation P22(t) for the system without decoherence (thick solid line) and with decoherence (dash-dotted line, using Eq.(4.58); thin solid line: numerical solution of the Liouville equation Eq.(4.46)).

In recent studies the semiconductor dots and rings are made from indium arsenide embedded in gallium arsenide [Groom 2002]. They were grown using techniques, developed within the past decade, that allow much smaller nanostructures to be created. Such studies provide new perspectives on the internal quantum-mechanical workings of quantum dots, which are often described as artificial solid-state atoms. Excited electronic states in quantum dots usually persist for a relatively long time, because they interact in a very restricted way with their environment. Normally, such interactions lead to decoherence or destruction

of the quantum state. An important requirement for optimal control of quantum dynamics is the existence of phase coherence over a time range comparable to the duration of the pulse sequence; in terms of the decoherence rates and the control interval T:

γ1,2 T < 1.   (4.78)

As a result, quantum dots may provide an excellent solid-state system for exploring advanced technologies based on quantum coherence, such as those involving quantum computing. For example, it may be possible to create and control superimposed or even entangled quantum states using highly coherent laser stimulation [Oliver (2002)]. External control of the full quantum wavefunction in a semiconductor nanostructure may even lead to revolutionary new applications, making computation orders of magnitude faster than is possible today. For such systems the electronic degrees of freedom might offer the possibility of control by pulse shaping. Because the underlying principles of optimal control of quantum dynamics are broadly applicable, it is a very attractive idea to combine femtosecond dynamics and mesoscopic physics and to perform coherent control on mesoscopic objects.

Fig. 4.13  Dynamics of the occupation P22(t) for 40 randomly generated control pulses, using γ1T = 2γ2T = 1 (thin solid lines). The thick solid line represents the bound for the possible values, ρ22max(t) = (1 + exp(−(γ1 + γ2)t/2))/2.

This condition can be fulfilled by mesoscopic systems like quantum dots. Coherent control of carrier dynamics in mesoscopic systems using external ultrashort time-dependent fields has become a subject of active research in recent years [Cole (2001); Heberle (1999); Bonadeo (1998); Dupont (1995)]. Sequences of control pulses of different durations affect such systems in different ways, and permit some restricted manipulation of physical quantities, like the photon-induced current [Cole (2001); Heberle (1999)]. One of the primary interests of scientists is the investigation of the currents in such systems resulting from the interaction with external coherent fields. In particular, the study of photon-induced and photon-suppressed quantum dynamical tunnelling [Grifoni (1998)] has attracted much attention, due to the potential applications of these effects in quantum computing. Many of the quantum-dot systems currently being studied have the potential to be combined into molecular complexes with one- or two-dimensional structures, perhaps all connected together by quantum wires. One can imagine growing these solid-state "atoms" or "molecules" within structures containing electronic or magnetic gates and optical cavities.

In the next section we are going to study the possibility of optimal control in nanostructures [Grigorenko (2002)]. First, we give a physical picture and the equations of motion in terms of the reduced density matrix. Then we formulate the control problem, which is reduced to the control of a time-averaged current between two quantum dots. Using a double quantum dot system as a model device, we determine the optimal control fields which maximize the transferred charge in the system. The optimal solutions were obtained numerically using a genetic algorithm, as described in chapter 2; we also briefly describe the method that we employed for the numerical solution. Finally, we investigate the influence of decoherence in the system on the shape of the optimal fields.

4.6.1 The optimal field for the control of the photon-assisted tunnelling between quantum dots

Let us now investigate the optimal control of carrier dynamics in nanostructures. As a model device we consider an electron pump based on resonant photon-assisted tunnelling through a double quantum dot [Stafford (1996)]. We search for the shape of the optimal control field that maximizes the transferred charge in the system. We consider a device that is a double quantum dot coupled to two metallic contacts (reservoirs) and configured as an electron pump, as described in [Stafford (1996)]. This device is illustrated in Fig. 4.14. The left quantum dot ("1") is connected to the reservoir on the left, and the right quantum dot ("2") is coupled to the right reservoir. The applied voltage is biased in such a way that the chemical potential μL of the left reservoir is lower than the chemical potential μR of the right reservoir.

The double quantum dot can be modeled by only two non-degenerate and weakly coupled electron levels, with energies ε1 and ε2. In the absence of external perturbation, the level "1" is occupied whereas level "2" is empty. Since we also assume that the coupling between the quantum dots is very weak, no current flows in the absence of external fields. The Hamiltonian of the double quantum dot coupled to the external field F(t) = V(t)cos(ωt) can be expressed as

H_DQD = Σ_{i=1,2} εi(t) c†i ci + d (c†1 c2 + c†2 c1),   (4.79)

where d is the tunnel coupling between the two dots, c†i (ci) is the creation (annihilation) operator for an electron on dot i, and the diagonal matrix elements are given by

εi(t) = (−1)^i (Δε + F(t))/2,   (4.80)

so that the external field causes the on-site energies to oscillate against each other. The intradot interactions are absorbed in the on-site energies. By sweeping the gate voltages, one can vary Δε = ε2 − ε1. The bonding and antibonding states, which are superpositions of the wavefunctions corresponding to an electron in the left or in the right dot, have an energy splitting of ΔE = E_antibonding − E_bonding = (Δε² + 4d²)^(1/2). If an external resonant gate voltage is applied to the system, the double quantum dot works as an "electron pump": Rabi oscillations of the electron occupations occur between the levels 1 and 2, and electrons can tunnel from the left to the right reservoir [Stafford (1996); Speer (2000)]. We are going to find the time-dependent optimal amplitude V(t) which maximizes the transferred charge in the system. The Hamiltonian for the metallic reservoirs and the tunnel barriers is
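The splitting ΔE = (Δε² + 4d²)^(1/2) follows from diagonalizing the static 2×2 Hamiltonian of the isolated double dot. A short check, using Δε = 24d (the value adopted later in this section):

```python
import numpy as np

d, deps = 1.0, 24.0                  # Delta-eps = 24 d, as chosen in the text
H = np.array([[-deps/2.0, d],
              [d, +deps/2.0]])       # static two-level Hamiltonian (F = 0)
E_bond, E_anti = np.linalg.eigvalsh(H)   # eigvalsh returns ascending eigenvalues
dE = E_anti - E_bond
print(dE, np.sqrt(deps**2 + 4.0*d**2))   # both equal sqrt(580) ~ 24.08
```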

given by [Stafford (1996)]:

H_RT = Σ_{k; ℓ=L,R} ε_kℓ c†_kℓ c_kℓ + Σ_k W_kL (c†_kL c1 + c†1 c_kL) + Σ_k W_kR (c†_kR c2 + c†2 c_kR) + U n1 n2.   (4.81)

Here c†_kℓ creates an electron of momentum k in reservoir ℓ = L, R; the quantities W_kℓ, with ℓ = L, R, represent the tunnel matrix elements between the reservoirs and the quantum dots; U is the magnitude of the interdot electron-electron repulsion; and the occupation operators n1 and n2 are given by n1 = c†1 c1 and n2 = c†2 c2. For simplicity, spin is neglected.

Fig. 4.14  Illustration of the "electron pump" device: a double quantum dot coupled to contacts.

In the derivation of the equations of motion we have used the following approximation. We assume that the reservoir on the right has a broad band of unoccupied states, so that once an electron has jumped from the second quantum dot to the reservoir, it cannot jump back any more. An electron can tunnel from the left contact to the left quantum dot; if the resonant control field is applied, the electron can jump to the right quantum dot, and it can further tunnel to the right contact. The time scale for the tunnelling process between the second dot and the reservoir on the right is determined by a transfer rate Γ2 = 2πρ_R(ε)|W_kR|², where ρ_R is the density of states in the right reservoir. Similarly, the transfer rate

to the left contact, Γ1, is given by Γ1 = 2πρ_L(ε)|W_kL|². In order to describe the electron dynamics, we use a density matrix approach similar to that derived in [Stoof (1996)]. For the Hamiltonian given above, the master equation for the density matrix ρ(t), which describes the evolution of the system, reads:

iħ ∂ρ11/∂t = iΓ1 ρ0 + d(ρ21 − ρ12),
iħ ∂ρ22/∂t = −iΓ2 ρ22 + d(ρ12 − ρ21),
iħ ∂ρ12/∂t = −i(Γ2/2) ρ12 + 2ε1(t) ρ12 + d(ρ22 − ρ11),
iħ ∂ρ21/∂t = −i(Γ2/2) ρ21 + 2ε2(t) ρ21 + d(ρ11 − ρ22).   (4.82)

Equations (4.82) allow us to investigate the cases of zero and infinite interdot Coulomb repulsion U by choosing the proper expression for the quantity ρ0. For U = 0 we write ρ0 = 1 − ρ11, whereas the case U → ∞ requires ρ0 = 1 − ρ11 − ρ22, which projects out double occupancies [Stoof (1996)]. The initial conditions are set as ρ11 = 1, ρ22 = 0. For simplicity, we can assume symmetric coupling to the reservoirs: Γ2 = Γ1.

We are going to consider photon-assisted tunnelling when the one-photon resonance condition

ħω = (Δε² + 4d²)^(1/2)   (4.83)

is satisfied, as can be inferred from Fig. 4.14. Equations (4.82) can be solved analytically in some limiting cases. For instance, considering only an isolated system of two quantum dots (Γ1 = Γ2 = 0) in an electric field periodic in time and with a constant amplitude V(t) = V0, an electron placed on one of the dots will oscillate back and forth between the dots with the Rabi frequency

Ω_R = (2d/ħ) J_N(V0/ħω),   (4.84)

where J_N is the Bessel function of order N, and N refers to the number of photons absorbed by the system in order to fulfill the resonance condition

Nħω = (Δε² + 4d²)^(1/2).   (4.85)

From the numerical integration of Eqs.(4.82) one can obtain the charge QT transferred from the left to the right reservoir due to the action of the
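A direct numerical integration of Eqs.(4.82) can be sketched as follows, for U = 0 and symmetric coupling, using the parameter values adopted later in the text (Δε = 24d, Γ = 0.01d, a control interval of 100 ħ/d). The transferred charge is accumulated as the current Γ2ρ22/ħ flowing into the right reservoir plus the charge remaining on dot 2. The drive amplitude V0 is an assumed value, chosen near the first maximum of J1.

```python
import numpy as np

# Units: hbar = e = 1; energies in units of the tunnel coupling d.
d = 1.0
deps = 24.0*d                          # level offset Delta-eps
omega = np.sqrt(deps**2 + 4.0*d**2)    # one-photon resonance, Eq.(4.83)
G1 = G2 = 0.01*d                       # symmetric reservoir couplings
V0 = 1.84*omega                        # assumed drive amplitude, near first max of J1
T, dt = 100.0, 0.002
n = int(T/dt)

def rhs(t, y):
    """Right-hand side of Eqs.(4.82) for U = 0 (rho0 = 1 - rho11)."""
    p11, p22, p12 = y
    F = V0*np.cos(omega*t)
    dp11 = G1*(1.0 - p11.real) - 2.0*d*p12.imag
    dp22 = -G2*p22.real + 2.0*d*p12.imag
    dp12 = (-0.5*G2 + 1j*(deps + F))*p12 - 1j*d*(p22 - p11)
    return np.array([dp11, dp22, dp12])

y = np.array([1.0, 0.0, 0.0], dtype=complex)   # start with dot 1 occupied
Q, pmax = 0.0, 0.0
for i in range(n):                     # fourth-order Runge-Kutta stepping
    t = i*dt
    k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2*k1)
    k3 = rhs(t + dt/2, y + dt/2*k2); k4 = rhs(t + dt, y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    Q += G2*y[1].real*dt               # running integral G2 * int rho22 dt
    pmax = max(pmax, y[1].real)
QT = Q + y[1].real                     # transferred charge, cf. Eq.(4.87) below
print(QT, pmax)
```

At the one-photon resonance the drive restores nearly full Rabi transfer between the strongly detuned dots, so ρ22 periodically approaches 1 and a finite charge is pumped to the right reservoir.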

external field over a finite time interval [0, T]. For that purpose we write the current operator J = (id/ħ)(c†1 c2 − c†2 c1), which leads, in combination with Eqs.(4.82), to the time-dependent average current

⟨I(t)⟩ = e Tr{ρ J},   (4.86)

where e is the electron charge and Tr is the trace operation. The net transferred charge QT from the left to the right reservoir is obtained as

QT = ∫0^T dt ⟨I(t)⟩ = (eΓ2/ħ) ∫0^T dt ρ22(t) + e ρ22(T).   (4.87)

Obviously, QT only represents the charge transferred to the right reservoir when Γ2 ≠ 0; otherwise it represents the charge in the right quantum dot. We assume that the control field is active (non-zero) within the time interval [0, T], and that it satisfies the boundary conditions V(0) = V(T) = 0. After the field is switched off (t > T), the charge remaining in the second quantum dot, e ρ22(T), will be completely transferred to the right reservoir. Note that the fitness function is in fact a combination of two parts: the first one is an integral (a time average), the second one is the value of a function at a given time. Thus we have a hybrid control problem.

It is important to point out that QT = QT[V(t)] is a nonlinear functional of the field amplitude V(t), and can exhibit different types of behavior depending on the form of V(t). For instance, if the external field has a Gaussian shape V(t) = V0 exp(−t²/2τ²) of duration τ, then QT shows Stückelberg-like oscillations as a function of τ [Speer (2000)]. However, the Gaussian shape of V(t) does not necessarily maximize the transferred charge. Our goal is to find the optimal pulse shape Vopt(t) which maximizes QT, i.e. which satisfies QTmax = QT[Vopt(t)]. The problem of finding Vopt(t) is complicated because of the nonlinearity contained in ρ22(t). Therefore, we are going to determine the shape of the optimal control fields numerically, using the genetic algorithm as a global search method. In our present approach, the vector representing the "genetic code" is just the pulse shape V(t), discretized on the time interval t ∈ [0, T]. The fitness function, i.e. the functional to be maximized by the successive generations, is the transferred charge QT[V(t)] (see Eq.(4.87)). As the initial population of field amplitudes satisfying the boundary conditions

The parameters used in the calculations are given in terms of the tunnelling matrix element d. The energy difference Δε = ε2 − ε1 must be much larger than d to ensure that the ground state of the double quantum dot is localized on the left side and the excited state on the right side; therefore we set Δε = 24d. This also leads to a sharper resonance behavior. We choose the control interval T = 100ħ/d, which is large enough to allow back and forth motion of the electrons between the quantum dots and is of the order of ħ/Γ. In the calculations we use symmetric coupling of the quantum dots to the reservoirs, Γ1 = Γ2 = Γ, and compute the optimal field shape for different values of the coupling constant, Γ = 0.01d and 0.05d. It is important to point out that Γ must be smaller than d, so that the Rabi oscillations do not become over-damped: if Γ is large, the system saturates very rapidly to ρ11 = ρ22 = 1/2, and no interesting transient dynamics can be observed.

Let us first discuss the results for U = 0, i.e., neglecting the inter-dot Coulomb repulsion. Although the control problem formulated above is applied to a rather simple quantum system, it is, as we shall see, a rich problem: the optimal control fields have a nontrivial shape and induce complicated dynamics of the electron occupations in the system. For the initial population of the genetic algorithm we choose Gaussian-like functions of the form

V_j^(0)(t) = I_j^0 exp(−(t − t_j)²/τ_j²),  (4.88)

with random values of the position of the maximum, t_j ∈ [0, T], and of the duration, τ_j ∈ (0, 0.1T]. The peak amplitude I_j^0 of each pulse is calculated from the condition that all pulses must carry the same energy,

E = ∫₀ᵀ V²(t) dt.  (4.89)

Equation (4.89) represents an integral isoperimetric constraint on a possible solution; the value of the pulse energy E can itself be considered as a parameter to be optimized. In the calculations we also set an implicit constraint on the minimal width of the pulses, in order to describe pulses which can be achieved experimentally. This minimal width is naturally determined by the discretization of the time interval and by the smoothness parameters kc and km of the crossover and mutation operations (see Eqs. (2.20), (2.21)). From the elementary analysis of Eq. (4.82) it is clear that the optimal pulse should first transfer an electron from the left to the right quantum dot (inversion of the occupations) and then keep this situation as long as possible.
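As an illustration, the initial population described above — Gaussian envelopes with random centers and widths, rescaled so that every pulse carries the same energy E — can be sketched as follows (Python; the function names and grid parameters are hypothetical, not the author's code):

```python
import numpy as np

def gaussian_pulse(t, amp, t0, tau):
    """Gaussian envelope V(t) = amp * exp(-(t - t0)^2 / tau^2), cf. Eq. (4.88)."""
    return amp * np.exp(-(t - t0) ** 2 / tau ** 2)

def random_population(n_pulses, T, E, n_grid=2000, rng=None):
    """Draw random maxima t_j in [0, T] and durations tau_j in (0, 0.1*T],
    then fix each amplitude so the pulse carries energy E = integral of V^2 dt
    (the isoperimetric constraint of Eq. (4.89))."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, T, n_grid)
    dt = t[1] - t[0]
    population = []
    for _ in range(n_pulses):
        t0 = rng.uniform(0.0, T)
        tau = rng.uniform(0.01 * T, 0.1 * T)
        shape = gaussian_pulse(t, 1.0, t0, tau)
        amp = np.sqrt(E / (np.sum(shape ** 2) * dt))  # energy normalization
        population.append(amp * shape)
    return t, population
```

All members of such a population differ in position and width but are strictly comparable, since the fitness then ranks pulses of equal energy.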

However, there are many different pulse shapes able to achieve this inversion, and it is a priori not clear which one maximizes QT. Equation (4.82) can be solved analytically in some limiting cases. There are three basic time scales involved in the problem: the control time interval T, the period of the Rabi oscillations tR = ΩR⁻¹, and the decoherence time tΓ = Γ⁻¹. For Γ → 0, an electron placed on one of the dots will oscillate back and forth between the dots.

Fig. 4.15 Optimal control field for the isolated double quantum dot (Γ = 0). (a) Solid line: reference square pulse of duration τ = πΩmax⁻¹, energy E0, and intensity V0 yielding the first maximum of J1(V0/ħω) (see text). Dashed line: optimal pulse shape for the maximization of the charge transferred from the left to the right quantum dot; the pulse energy is E0. (b) Corresponding time dependence of the occupation ρ22(t) of the second dot for the pulses shown in (a).

If, for example, the external field is periodic in time, V(t) = V0 cos ωt, with a constant amplitude V0, the electron oscillates between the dots with the Rabi frequency [Zeldovich (1967); Tien (1963)]:

Ω = (2d/ħ) J1(V0/ħω),  (4.90)

J1 being the Bessel function of order 1. Note that ω must fulfill the resonance condition ħω = √(Δε² + 4d²) if the system absorbs one photon. Thus, for pulses of constant amplitude there is an upper limit Ωmax of the Rabi frequency, which is obtained when the ratio x = V0/ħω sits at the first maximum of the function J1(x). Using this property we construct a reference pulse of square shape (V(t) = V0 for 0 < t < τ and V(t) = 0 otherwise), with the intensity V0 defined above and the duration τ = πΩmax⁻¹. In the following we will use the energy E0 of this reference pulse as the unit of pulse energy. In principle one would expect that the reference pulse defined above achieves an inversion of the occupation in the double quantum dot within the shortest time (assuming only one-photon absorption). However, as we show below, such a pulse shape is not absolutely the best one: the description of the tunnelling dynamics for pulses of varying intensity is more complicated, because the Rabi frequency changes in time, and one can obtain slightly better results beyond the adiabatic approximation, which was assumed valid in deriving Eq. (4.90).

Fig. 4.16 Optimal pulse shape for the maximization of the charge transferred from the left to the right quantum dot, Γ = 10⁻⁶d. The pulse energy is E = 0.57E0. Inset: the corresponding time dependence of the occupation ρ22(t) of the second dot.
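The reference-pulse construction is easy to check numerically: the first maximum of J1(x) fixes the drive amplitude, and τ = πΩmax⁻¹ then follows from Eq. (4.90). A sketch in Python (the units d = ħ = ω = 1 are an assumption made for illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import j1

def rabi_frequency(V0, d=1.0, hbar=1.0, omega=1.0):
    """Rabi frequency Omega = (2d/hbar) * J1(V0/(hbar*omega)) for a
    periodic drive of constant amplitude V0, cf. Eq. (4.90)."""
    return (2.0 * d / hbar) * j1(V0 / (hbar * omega))

# Locate the first maximum of J1(x); the tabulated value is x ~ 1.8412,
# where J1 ~ 0.5819.  This fixes the optimal constant amplitude V0.
res = minimize_scalar(lambda x: -j1(x), bounds=(0.5, 3.0), method="bounded")
x_max = res.x

omega_max = rabi_frequency(x_max)   # maximal Rabi frequency (units: d/hbar)
tau_ref = np.pi / omega_max         # reference square-pulse duration
```

With these units the reference pulse performs exactly half a Rabi period, i.e., a full inversion of the occupations.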

In Fig. 4.15 we compare the effect induced on the isolated double quantum dot (Γ = 0) by the reference square pulse with that induced by the optimal pulse calculated using the genetic algorithm and having the same energy E0. As one can see in Fig. 4.15(b), the genetic algorithm finds a pulse shape which induces a slightly faster transfer of the charge. This result inspires us to perform calculations for more complicated problems. Note that if no constraints were imposed on the width of the pulses, a pulse of infinitely small width (a delta-like pulse) would yield the maximal QT: such a pulse would produce ρ22(t) = 1 over the whole control interval, leading to the maximum possible value QT_max = eΓT/ħ + e. However, the energy of such a pulse would diverge, and pulses of zero width and infinite energy cannot describe a realistic situation. Moreover, for such pulses the whole model would break down, since a very narrow pulse has a very large spectral width and would excite many levels in each quantum dot. In the following examples we therefore also search for the minimal pulse energy E.

Fig. 4.17 (a) Optimal pulse shape, which induces the maximal current, for Γ = 0.01d. The pulse energy is E = 4.26E0. (b) Corresponding behavior of ρ22(t).

4. leads to more symmetric and smooth optimal solutions.18 (a) Optimal pulse shape for T = 0. Note. In Fig. 4.17 we show the optimal field envelope and the induced occupation P22W dynamics in the case of weak coupling to reservoirs with coupling constant T = O.4. that the optimal field is structured as a sequence . that we employed for this calculations.16 we show the optimized pulse shape for the maximization of the charge transfer in the almost isolated double quantum dot (T = 10~6d).Old. this occupation remains constant in time. Pulse energy is E = 121E0 (b) The corresponding behavior of f>2i{t)- spectral width and would excite many levels in each quantum dot.16. inducing the inversion of the occupations.4. The corresponding evolution of the occupation of the second quantum dot is shown in the inset of Fig. As a consequence QT is maximized.15 and Fig.4. The optimal pulse excites the system at the beginning of the control time interval.Optimal control of quantum systems 133 40 60 time [h/d\ Fig. From the comparison between Fig.16 we see that a limitation of the minimal pulse width.4. P22(t) reaches the value 1 when the pulse goes to zero. In Fig.05d. Since TT/hbar is very small.

Fig. 4.19 Illustration of the optimization process using genetic algorithms. Evolution of the "fittest" pulse shape for the maximization of the current for Γ = 0.05d.

The first pulse acts at the beginning of the control interval and has the proper shape to bring the occupation of the second quantum dot to a value close to 1, whereas the second pulse acts to increase eρ22(T). The structure of the optimal pulse can be easily interpreted with the help of the expression for QT as a functional of ρ22(t) (see Eq. (4.82)): the first pulse tends to keep the term (eΓ/ħ)∫₀ᵀ dt ρ22(t) as large as possible, but ρ22(t) starts to decrease as soon as the first pulse goes to zero. Shortly before the end of the control time interval the second pulse brings the occupation ρ22(t) again to a high value (Fig. 4.17(b)); as a consequence, QT is maximized. Figure 4.18 shows the results for the same system but with a larger coupling constant, namely Γ = 0.05d. As can be seen in Fig. 4.18(a), in this case the optimal solution also exhibits pulses at the beginning and at the end of the control interval, but in addition a complicated sequence of pulses between them: since ΓT/ħ = 1, according to Eqs. (4.82) and (4.87) the field structure in the middle of the time interval becomes more important.

Fig. 4.20 Evolution of the transferred charge for an increasing number of iterations of the GA for Γ = 0.05d.

This structure prevents ρ22(t) from decaying to zero and "stabilizes" it around the saturation value 1/2, i.e., at the state where both dots are equally occupied (Fig. 4.18(b)). If the coupling constant Γ is increased further, the system becomes overdamped and goes into the saturation regime. In the limit of large Γ we expect a square pulse to maximize QT, since in this case the pulse only needs to transfer charge at some constant rate of the order of Γ. In order to illustrate the progress achieved by the genetic algorithm during the optimization process, we show in Fig. 4.19 the shapes (envelopes) of the fittest pulses at different stages of the genetic evolution. In Fig. 4.19 (1st iteration) one of the pulses of the initial population ("parents") is plotted; like all other "parents", this pulse is Gaussian-like (see Eq. (2.19)). The successive application of the genetic operations improves the pulse shape and transforms the initial Gaussians into more complicated pulse sequences. For instance, the envelope of the fittest pulse of the tenth generation already exhibits several peaks, and "the best representative" induces an excitation of the system at the very beginning of the control time interval. After just 30 iterations the pulse shape already exhibits most of the features of the optimal pulse, and after 50 iterations it converges to the optimal one.
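The generation cycle illustrated in Figs. 4.19 and 4.20 — ranking by fitness, discarding the worst envelopes, and refilling the population by crossover plus mutation — can be caricatured by a toy loop of the following kind (Python; this is a schematic stand-in, not the smoothed crossover/mutation operators of Eqs. (2.20)–(2.21)):

```python
import numpy as np

def evolve(fitness, population, n_iter=50, mutation_scale=0.05, rng=None):
    """Toy genetic loop: keep the better half of the population unchanged
    ("parents"), create "offspring" as weighted mixtures of two random
    survivors with an additive Gaussian mutation, and iterate.
    `fitness` maps a discretized pulse array to a scalar (e.g. Q_T)."""
    rng = np.random.default_rng(rng)
    pop = [p.copy() for p in population]
    for _ in range(n_iter):
        pop.sort(key=fitness, reverse=True)       # rank by fitness
        survivors = pop[: len(pop) // 2]          # discard worst half
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = rng.choice(len(survivors), 2, replace=False)
            w = rng.uniform(0.0, 1.0)             # crossover weight
            child = w * survivors[a] + (1.0 - w) * survivors[b]
            child = child + mutation_scale * rng.standard_normal(child.size)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because the survivors pass to the next generation unchanged, the best fitness in the population is non-decreasing, which mirrors the monotone curves of Fig. 4.20.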

To illustrate this convergence, we show in Fig. 4.20 the evolution of the transferred charge QT[V(t)] as a function of the number of iterations of the genetic algorithm. It is clear that after 50 iterations the pulse induces a transferred charge very close to the optimal one.

Fig. 4.21 Optimized pulse form for Γ = 0.01d (solid line, pulse energy E = 13E0) and Γ = 0.05d (dashed line, pulse energy E = 48.4E0) for the maximization of the average occupation ρ̄22 of the second quantum dot (see text).

For the sake of comparison, in Fig. 4.21 we show the optimal pulse shapes which maximize solely the mean occupation of the second quantum dot, using as fitness function the time average of the occupation of the second quantum dot:

ρ̄22 = (1/T) ∫₀ᵀ ρ22(t) dt,  (4.91)

for Γ = 0.01d and Γ = 0.05d. Interestingly, as in the case of optimal control of a two-level quantum system, the optimal pulses here also have a two-peak structure: the first pulse pumps one electron from the left to the right quantum dot, and the second pulse acts at the end of the control interval. The reason for the position of the second pulse is that its efficiency in increasing ρ̄22 is maximal when ρ22(t) reaches its minimum. One should also mention that the optimal control fields have a tendency to be "broader" in the case of larger decoherence in the system; this is a direct consequence of the nonlinear character of the considered control problem. For the larger coupling there is some sequence of pulses not only at the beginning and at the end of the control interval, but also some structure in between.

From this example one can learn that this is a general property of the control fields for systems with decoherence, and one finds this property for various functionals to be optimized. In order to investigate the influence of the inter-dot Coulomb repulsion U, we performed calculations similar to those described above, but for the case U → ∞, using the same set of coupling parameters Γ. We found that the repulsion between the quantum dots leads to a relatively smaller net transferred charge (see Fig. 4.22). This is due to the fact that U → ∞ prevents double occupancies in the system: an electron from the left reservoir can jump into the double quantum dot only when the previous electron has already left the system and been transferred to the right reservoir.

Fig. 4.22 Dependence of the total transferred charge QT (diamonds) induced by the optimal pulse as a function of the coupling Γ to the leads for U = 0. The circles represent the value of QT for U → ∞.

Pulse shape          QT
optimal pulse        1.28
rectangular pulse    0.85
Gaussian pulse       0.74
constant pulse       0.29

Finally, to emphasize our results and to show that pulse shaping can indeed lead to a remarkable enhancement of the photon-assisted current through double quantum dots, we indicate in the Table the values of the transferred charge QT for the coupling constant Γ = 0.01d and pulses having different shapes V(t) but carrying the same energy E. It is important to point out that the rectangular and Gaussian pulses mentioned in the Table are the fittest ones among rectangular and Gaussian pulses, respectively. As expected, the optimal pulse found by the genetic algorithm (already shown in Fig. 4.17) induces clearly more transferred charge than pulses of other shapes: for example, 1.5 times more charge than the best rectangular pulse, and 1.74 times more charge than the best Gaussian pulse. Thus, optimal solutions for the case of control over a time interval were presented, obtained using the direct numerical integration of the equations of motion for the double quantum dot system.

4.7 Analytical theory for control of multi-photon transitions

Using the Floquet formalism and the adiabatic approximation, we develop a theory for optimal control of a quantum system interacting with an external electric field under multi-photon resonance. We investigate how the order of the photon resonance affects the shape of the optimal control field. The corresponding Euler-Lagrange equations determining the optimal control field are derived in closed and integrable form.

In this chapter we have already demonstrated that some analytical solutions of optimal control problems can be obtained using the adiabatic approximation together with the variational formalism; the effect of weak decoherence on the shape of the optimal control fields can also be taken into account. All these analytical and semi-analytical results were obtained under the assumption of a weak control field and one-photon resonance. However, the weak response limit is not restricted to single-photon processes. For example, one can consider optical control of an excited state which is not accessible via a direct electric dipole transition from the initial ground state. In this case two- or even multi-photon processes will be responsible for the population transfer. For example, in [Speer (2000)] it was shown that under two-photon resonance the population transfer can be more effective than in the one-photon case.

Since multi-photon interaction usually assumes a significant nonlinearity of the corresponding system's dynamics, the integration is usually done using numerical procedures, even if one restricts oneself to simple Gaussian control pulses. That is why it is significant and highly desirable to develop an analytical theory which helps to gain more insight into optimal control of quantum systems under multi-photon resonance. Our approach to the optimal control problem is based on the derivation of analytical solutions for the occupations of the quantum levels using the adiabatic approximation and the subsequent application of the variational principle, so that we can determine, qualitatively and quantitatively, how the order of the multi-photon resonance changes the shape of the optimal control field.

We consider an N+1 level quantum system, described by the Hamiltonian H0 with eigenstates |i⟩ (i = 1, ..., N+1), which interacts with a classical laser pulse. The total Hamiltonian of the system is then of the form

H(t) = H0 + μ V(t) cos(ωt),  (4.92)

where μ is the dipole matrix and V(t) describes the envelope of the control field. In order to obtain analytical solutions in closed form it is useful to assume that the field envelope V(t) changes adiabatically slowly with time; thus we restrict ourselves to optimal control fields whose envelope modulation does not affect the carrier frequency ω. Let us first focus on the case of a three-level system and two-photon resonance. Let E1, E2, E3 be the corresponding energy levels, with amplitudes a1, a2, a3 describing the states interacting with the external control field E(t) = V(t)cos(ωt). We assume the so-called "cascade" scheme of levels with the dipole matrix elements μ12 and μ23: we take into account transitions between levels |1⟩ and |2⟩ and between levels |2⟩ and |3⟩, and neglect the direct transition between |1⟩ and |3⟩ (μ13 ≈ 0). We choose the initial condition a1(0) = 1, a2(0) = a3(0) = 0. The dynamics of the three-level quantum system interacting with the external field can then be described as

iħ ȧ1 = E1 a1 + μ12 V(t) a2 cos(ωt),
iħ ȧ2 = E2 a2 + μ21 V(t) a1 cos(ωt) + μ23 V(t) a3 cos(ωt),  (4.93)
iħ ȧ3 = E3 a3 + μ32 V(t) a2 cos(ωt).
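For reference, Eqs. (4.93) can also be integrated directly; a minimal sketch (Python/SciPy) with illustrative level energies and couplings (the numerical values are assumptions for demonstration, not the book's parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

def three_level_rhs(t, a, E, mu12, mu23, envelope, omega, hbar=1.0):
    """Right-hand side of the cascade equations (4.93): the complex
    amplitudes a = (a1, a2, a3) are coupled only through the 1-2 and
    2-3 dipole matrix elements (mu13 is neglected)."""
    V = envelope(t) * np.cos(omega * t)
    da1 = E[0] * a[0] + mu12 * V * a[1]
    da2 = E[1] * a[1] + mu12 * V * a[0] + mu23 * V * a[2]
    da3 = E[2] * a[2] + mu23 * V * a[1]
    return np.array([da1, da2, da3]) / (1j * hbar)

# Illustrative cascade with a detuned intermediate level and the carrier
# on exact two-photon resonance, 2*omega = E3 - E1 (hbar = 1).
E = (0.0, 1.3, 2.0)
omega = (E[2] - E[0]) / 2.0
envelope = lambda t: 0.4 * np.exp(-((t - 20.0) / 8.0) ** 2)
sol = solve_ivp(three_level_rhs, (0.0, 40.0),
                np.array([1.0, 0.0, 0.0], dtype=complex),
                args=(E, 0.2, 0.2, envelope, omega),
                rtol=1e-8, atol=1e-10)
populations = np.abs(sol.y[:, -1]) ** 2   # |a_i(T)|^2
```

Unitarity of the dynamics provides a useful sanity check: the three populations must sum to one at all times, up to the integrator tolerance.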

In Eqs. (4.93) we use the notation ω1 = E2 − E1 and ω2 = E3 − E2 for the transition frequencies. We also assume the case of two-photon resonance:

2ω = ω1 + ω2.  (4.94)

Because the Hamiltonian of the system is almost periodic in time (for a field envelope V(t) changing slowly on the time scale of ω⁻¹), we can use the Floquet approach (see, for example, the nice review [Grifoni (1998)]) in order to obtain analytical solutions for the amplitudes a1, a2, a3. We represent the amplitude ai (i = 1, 2, 3) as an infinite series of quasienergetic functions:

ai(t) = Σ_{l = −∞}^{∞} a_{i,l}(t) exp(ilωt).  (4.95)

By substituting this expansion into Eq. (4.93) and equating the terms with the same frequencies, one obtains an infinite system of coupled equations. By neglecting rapidly oscillating quantities and keeping only the equations containing the resonant terms a_{1,0}, a_{2,−1}, a_{3,−2}, we obtain:

iħ ȧ_{1,0} = E1 a_{1,0} + μ21 V(t) a_{2,−1}/2,
iħ ȧ_{2,−1} = (E2 − ħω) a_{2,−1} + μ21 V(t) a_{1,0}/2 + μ23 V(t) a_{3,−2}/2,  (4.96)
iħ ȧ_{3,−2} = (E3 − 2ħω) a_{3,−2} + μ32 V(t) a_{2,−1}/2.

For a slowly changing field amplitude V(t), and under the condition of exact two-photon resonance 2ω = E3 − E1, one can apply the adiabatic approximation and obtain an analytical expression for the occupation of the third level:

ρ33(t) = |a3(t)|² ≈ |a_{3,−2}(t)|² = sin²( (μ21 μ23)/(4ħ²Δ2) ∫₀ᵗ V²(t′) dt′ ),  (4.97)

where Δ2 = ω1 − ω is the (non-zero) detuning of the field with respect to the intermediate level. Within our approximations the occupation of the intermediate state, ρ22 ≈ |a_{2,−1}|², remains approximately zero. Note that the occupation of the system oscillates between levels 1 and 3 (similarly to Rabi oscillations) with a frequency proportional to the square of the field amplitude, which is characteristic of the two-photon process.
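The characteristic feature of Eq. (4.97) — the final occupation depends on the pulse energy ∫V²dt but not on the pulse area ∫Vdt — can be verified with two envelopes of equal energy but different area (Python sketch; the prefactor g is a hypothetical placeholder for μ21μ23/(4ħ²Δ2)):

```python
import numpy as np

t = np.linspace(0.0, 100.0, 5001)
dt = t[1] - t[0]

# Two different envelopes, normalized to carry the same energy  E = sum V^2 dt.
V1 = np.exp(-((t - 50.0) / 10.0) ** 2)                 # single pulse
V2 = (1.5 * np.exp(-((t - 30.0) / 5.0) ** 2)
      + 0.5 * np.exp(-((t - 70.0) / 5.0) ** 2))        # asymmetric double pulse
V1 /= np.sqrt(np.sum(V1 ** 2) * dt)
V2 /= np.sqrt(np.sum(V2 ** 2) * dt)

energy1, energy2 = np.sum(V1 ** 2) * dt, np.sum(V2 ** 2) * dt
area1, area2 = np.sum(V1) * dt, np.sum(V2) * dt        # pulse areas differ

g = 0.7  # hypothetical stand-in for mu21*mu23/(4*hbar^2*Delta2)
p33_1 = np.sin(g * energy1) ** 2                       # final occupation, Eq. (4.97)
p33_2 = np.sin(g * energy2) ** 2
```

Equal energies give identical final occupations of level 3 even though the two pulses have clearly different areas — the degeneracy analyzed in the next paragraphs.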

Let us now analyze the result Eq. (4.97). The first observation is that the final value of the occupation ρ33 depends only on the pulse energy E ∝ ∫₀ᵀ V²(t′)dt′ and does not depend on the pulse area Θ ∝ ∫₀ᵀ V(t′)dt′ of the control field. This result is in dramatic contrast with the case of one-photon resonance, which we considered earlier [Grigorenko (2002)], where, by fixing the final state and the pulse energy, one can uniquely determine the shape of the driving field V(t). Using the expression for ρ33(t), Eq. (4.97), it is easy to see that the population of the third level becomes maximal if the energy of the pulse is

E_tot = (4ħ²Δ2/(μ21 μ23)) (π/2 + πn), n = 0, 1, 2, ...  (4.98)

Fixing the final state of the system (for example, ρ33(T) = 1), one restricts the pulse energy E to certain values. But at the same time one can find an infinite set of field envelopes V(t) with the same energy E but with different final pulse areas ΘT ∝ ∫₀ᵀ V(t′)dt′.

It is possible to generalize the solution given by Eq. (4.97) to the case of an N+1 level system with the cascade scheme of transitions (assuming transitions only between neighboring states |l⟩ ↔ |l+1⟩, l = 1, ..., N) under the N-photon resonance condition ħNω = E_{N+1} − E1. Here μ_{l,l+1} is the dipole matrix element between levels l and l+1. By using the same quasi-energy method in the resonant approximation, together with the adiabatic approximation, we can obtain the occupation of the upper level ρ_{N+1,N+1}(t) in closed form:

ρ_{N+1,N+1}(t) = sin²( ∫₀ᵗ Ω_N(t′) dt′ ),  (4.99)

where the effective N-photon Rabi frequency Ω_N(t) ∝ V^N(t) contains the product of the dipole matrix elements μ_{l,l+1} and of the inverse detunings; here we use the notation Δ_{l+1} = ω_{l+1} − ω for the detunings and ω_l = E_{l+1} − E_l for the transition frequencies. Note that Eq. (4.99) is applicable only if all the Δ_l, l = 1, ..., N, are non-zero.

Again, using the formalism based on the variational principle that we presented in the previous sections of this chapter, we can set up the Lagrangian of the optimal control problem and derive the corresponding Euler-Lagrange equation for the optimal control field amplitude V(t). The description of optimal control can be determined by a functional density L:

L = L_ob + λ V²(t) + λ1 (∂V(t)/∂t)²,  (4.100)

where λ, λ1 are the Lagrange multipliers. L_ob refers to a physical quantity P to be maximized during the control time interval [0, T]:

P = ∫₀ᵀ L_ob dt.  (4.101)

The control at a given time T can be obtained as a special case:

∫₀ᵀ L_ob(t) δ(t − T) dt = L_ob(T),  (4.102)

where δ(t) is the Dirac delta function. The second term in Eq. (4.100) is a constraint on the total energy E0 of the control field:

∫ E²(t) dt ≈ (1/2) ∫₀ᵀ V²(t) dt = E0.  (4.103)

The third term is a constraint on the minimal experimentally achievable duration (time resolution) Δ of the control pulse; this condition excludes infinitely narrow or abrupt step-like solutions.

As an example, we can consider optimal control, over the finite time interval t ∈ [0, T], of the time-averaged occupation of the upper level, (1/T)∫₀ᵀ ρ_{N+1,N+1}(t′)dt′, of the N+1-level quantum system under the condition of the N-photon resonance (E_{N+1} − E1 = ħNω). One can substitute the analytical solution Eq. (4.99) into the Lagrangian Eq. (4.100); for simplicity of the analysis, let us neglect the constraint on the time resolution of the optimal pulse in Eq. (4.100). Then the description of optimal control is equivalent to finding an extremum of the quantity

S = ∫₀ᵀ L dt = λ ∫₀ᵀ Θ̇^(2/N) dt + ∫₀ᵀ ρ_{N+1,N+1}(αΘ) dt.  (4.106)

In Eq. (4.106) we have introduced the notation Θ(t) = ∫₀ᵗ V^N(t′) dt′ (up to the constant prefactor entering Ω_N, with λ redefined accordingly), and α denotes the constant that converts Θ into the argument of ρ_{N+1,N+1} = sin²(αΘ). The corresponding Euler-Lagrange equation reads:

(2λ/N)(2/N − 1) Θ̈ Θ̇^(2/N − 2) + α sin(2αΘ) = 0.  (4.107)

Note that, since the Lagrangian does not depend explicitly on time, we can write immediately the first integral:

L − Θ̇ (∂L/∂Θ̇) = C.  (4.108)

Integrating, we obtain:

t + C1 = [λ(1 − 2/N)]^(N/2) ∫₀^Θ dΘ′ / (C − sin²(αΘ′))^(N/2),  (4.109)

where the constant C is determined by the boundary conditions.

Fig. 4.23 Optimal control fields maximizing a time-averaged occupation of the highest level in the case of N = 1 (solid line), N = 3 (thin solid line), N = 5 (dashed line), and N = 10-photon resonance (dotted line).
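In a discretized setting, the role of the two constraint terms of Eq. (4.100) can be mimicked by penalties on the pulse energy and on the envelope's derivative. The sketch below (Python, with hypothetical penalty weights standing in for the Lagrange multipliers) shows that, at equal energy and equal physical objective, the smoothness term disfavors abrupt step-like envelopes, exactly as intended by the time-resolution constraint:

```python
import numpy as np

def action(V, t, objective, lam_energy=50.0, lam_smooth=0.05, E0=1.0):
    """Discretized analogue of the constrained functional of Eq. (4.100):
    a physical objective, an energy penalty pinning the pulse energy to E0
    (the isoperimetric constraint), and a smoothness penalty ~ integral of
    (dV/dt)^2 that excludes step-like envelopes.  The weights are
    hypothetical stand-ins for the Lagrange multipliers."""
    dt = t[1] - t[0]
    energy = np.sum(V ** 2) * dt
    smooth = np.sum(np.diff(V) ** 2) / dt
    return objective(V) - lam_energy * (energy - E0) ** 2 - lam_smooth * smooth

# A smooth and a step-like pulse normalized to identical energy.
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
smooth_pulse = np.sin(np.pi * t / 10.0)
step_pulse = np.where((t > 2.5) & (t < 7.5), 1.0, 0.0)
smooth_pulse /= np.sqrt(np.sum(smooth_pulse ** 2) * dt)
step_pulse /= np.sqrt(np.sum(step_pulse ** 2) * dt)
flat = lambda V: 0.0   # objective held fixed for the comparison
```

With the physical objective held fixed, the smooth envelope scores a higher action than the step pulse of the same energy.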

In Fig. 4.23 we show the solutions V(t) of the optimal control problem of maximizing the time-averaged occupation of the N+1 (upper) level, for the cases of N = 1, 3, 5, and 10-photon resonance. This solution can be expressed in terms of the hypergeometric function in some limiting cases. One observes that the optimal control pulse has a tendency to be more localized in time for larger N. As we said, the two-photon control is degenerate under the assumption of adiabaticity of the control field. In Fig. 4.24 we plot the corresponding population dynamics ρ_{N+1,N+1}(t) of the controlled N+1 level, driven by the optimal fields shown in the previous figure; the control pulses all carry the same pulse energy E. It is quite interesting that the overall behavior of ρ_{N+1,N+1} is very similar for the different cases N = 1, 3, 5, and 10.

Fig. 4.24 The corresponding population dynamics of the occupation of the highest level ρ_{N+1,N+1}, driven by the optimal fields, in the case of N = 1 (solid line), N = 3 (thin solid line), N = 5 (dashed line), and N = 10-photon resonance (dotted line).

The main advantage of our approach is that one can obtain some insight regarding shape optimization from the comparison with numerical results.

When the approximations do a good job, this approach yields the global extremum; thus our approach avoids the multiplicity of solutions. One can also consider the problem not of maximizing the occupation of a certain state but, for example, of inducing some specific coherence between certain states, which is the goal in some applications.

4.8 Summary

In this chapter we reviewed some modern applications of optimal control theory to quantum mechanical systems and nanoscale devices. We discussed what kinds of algorithms (both numerical and analytical) can be used to calculate the optimal control fields, and we obtained the shapes of the control fields for some simple optimal control problems in analytical form. In particular, we investigated the effect of the uncontrolled interaction with the environment, which causes decoherence in quantum systems, on the shapes of the optimal control fields. We found that in the presence of decoherence the energy stored in the control field is distributed more uniformly, compared to the case without relaxation. We also generalized our theory to the case of multi-photon resonance, and we found that the two-photon resonance should be treated as a special case, because of its sensitivity to the adiabatic approximation.


Chapter 5

Optimal control and quantum computing

In this chapter we continue our discussion of the optimal control of quantum systems, with emphasis on applications to quantum computing. We are going to carry out a systematic analysis of a system of two coupled qubits (quantum bits), each of which is subject to its own dissipative environment. Applying an optimization method based on the genetic algorithm, we find the combination of inter-qubit couplings that provides for the lowest possible decoherence rates of the two-qubit register. Our method is then further generalized to the case of time-dependent processes (gates), sequences of which constitute various quantum computing protocols.

5.1 Robust two-qubit quantum registers

The rapidly advancing field of quantum computing continues to bring about a deeper understanding of basic quantum physics, alongside prospects of ground-breaking potential technological applications. However, one of the main obstacles in the way of implementing quantum protocols has so far been, and still remains, the virtually unavoidable environmentally induced decoherence. In the last decade there has been considerable progress in resolving the decoherence problem. The list of discussed decoherence suppression/avoidance techniques includes such proposals as error correction (including automatic correction through dissipative evolution) [DiVincenzo, Zurek (1996), Deutsch (1997)], encoding into decoherence-free subspaces [Palma (1996), Lidar (2000), Duan (1997)], dynamical decoupling/recoupling schemes (like "bang-bang" pulse sequences) [Viola (1998)], and the use of the quantum Zeno effect [Hwang (2000)]. Unfortunately, none of these recipes is entirely universal, and their

actual implementation may be hindered by such factors as a significant encoding overhead that puts a strain on the available quantum computing resources (error correction), a rather stringent requirement of a completely symmetrical coupling of the qubits to the dissipative environment (decoherence-free subspaces), or the need for the frequency of the control pulses to be well in excess of the environment's bandwidth (dynamical decoupling/recoupling). Albeit more feasible in some implementations (most notably, liquid-state NMR and trapped-ion ones) as compared to other proposed implementations of quantum computing protocols, overcoming the above hurdles might be particularly challenging in solid-state architectures. It is for this reason that a physics-conscious engineering of robust multi-qubit systems, and a systematic approach to choosing the optimal (coherence-wise) values of their microscopic parameters, appear highly desirable.

In contrast, we explore yet another possibility of thwarting decoherence, by virtue of permanent (yet tunable) inter-qubit couplings. We find that, despite its being often thought of as a nuisance to be rid of, the qubits' "cross-talk" may indeed provide for an additional layer of protection against decoherence. The previous research on the problem in question [Governale (2001), Thorwart (2001)] has been largely limited to the range of parameter values corresponding to the available experimental setups [Mooij (1999), Nakamura (2003), Vion (2002)], and has not systematically addressed any issues related to a possible role of the inter-qubit couplings in reducing decoherence. In what follows, we first focus on the problem of preserving an arbitrary initial state ("quantum memory") of a basic two-qubit register. Then, in the next section, we show how our optimization method can be further extended to the case of arbitrary time-dependent two-qubit gates, which can alone suffice to perform universal quantum operations (including those involving single-qubit rotations).

Our discussion pertains to the generic two-qubit Hamiltonian

Ĥ = Σ_{i=1,2} (B_i + h_i(t)) · ŝ_i + Σ_{α=x,y,z} J_α ŝ_1^α ŝ_2^α,  (5.1)

where each qubit is exposed to a local magnetic field comprised of the constant B_i = (Δ_i, 0, ε_i) and fluctuating h_i(t) = (0, 0, h_i(t)) components, the latter representing two independent dissipative reservoirs. To this end, we switch to the conventional singlet/triplet basis formed by the states:

|↑↑⟩ = (1,0,0,0)ᵀ, 1/√2(|↑↓⟩ + |↓↑⟩) = (0,1,1,0)ᵀ/√2, |↓↓⟩ = (0,0,0,1)ᵀ, and 1/√2(|↑↓⟩ − |↓↑⟩) = (0,1,−1,0)ᵀ/√2. In this basis, the non-dissipative part of the Hamiltonian of two identical qubits (Δ1 = Δ2 and ε1 = ε2) reads:

H0 = ( Jz + ε      Δ              Jx − Jy     0
       Δ           Jx + Jy − Jz   Δ           0
       Jx − Jy     Δ              Jz − ε      0
       0           0              0           −Jx − Jy − Jz ),  (5.2)

where Δ = (Δ1 + Δ2)/√2 and ε = ε1 + ε2. At the co-resonance point ε = 0 the spectrum of Eq. (5.2) is given by

E1 = Jx + √((Jy − Jz)² + 2Δ²),  E2 = −(Jx + Jy + Jz),
E3 = −Jx + Jy + Jz,  E4 = Jx − √((Jy − Jz)² + 2Δ²),  (5.3)

and the corresponding (unnormalized) eigenvectors are given by the expressions

(1, −(−Jy + Jz − √((Jy − Jz)² + 2Δ²))/Δ, 1, 0)ᵀ,  (0, 0, 0, 1)ᵀ,
(1, 0, −1, 0)ᵀ,  (1, −(−Jy + Jz + √((Jy − Jz)² + 2Δ²))/Δ, 1, 0)ᵀ,  (5.4)

respectively. In the following discussion, we concentrate on the vicinity of the "co-resonance" point ε = 0 which, in agreement with the experimental data for the Josephson charge-phase qubits [Vion (2002)], appears to be the most coherence-friendly. As in the earlier work [Governale (2001)], we treat the effect of the reservoirs in the standard Bloch-Redfield (hence, weak-coupling and Markovian) approximation. Our resorting to an approximation which is known to become unreliable in the (arguably, most important for quantum computing-related applications) short-time limit (short compared to the pertinent decoherence times) is largely due to the simplicity of the ensuing optimization procedure. Nevertheless, our main conclusions do not critically depend on the approximation employed, and we expect any more sophisticated analysis (e.g., akin to that of Ref. [Loss (1998)]) to deliver qualitatively similar results.

Under the assumption of the Born approximation, the standard Bloch-Redfield equations for the reduced density matrix ρ read [Weiss (1981)]:

ρ̇nm(t) = −i ωnm ρnm(t) − Σ_{k,l} Rnmkl ρkl(t),  (5.5)

where ωnm = (En − Em)/ħ are the transition frequencies, and the elements of the relaxation tensor Rnmkl are given by the linear combinations (5.6) of the partial decoherence rates (5.7) [Weiss (1981)], computed from the spectral density S(ωnk) and the matrix elements σ1,nk, σ2,nk of the Pauli matrices in the eigenbasis of the Hamiltonian H0; here δnm is the Kronecker symbol, and the principal-value integral entering Eq. (5.7) accounts for the environment-induced energy shifts. Here

S(ω) = α ω coth(ω/2T) θ(ωc − ω)  (5.8)

is the spectral density of the reservoirs of bandwidth ωc (θ(t) is the step function). By analogy with the previous work on the subject [Governale (2001)], the reservoirs are chosen to be Ohmic in order to facilitate the use of Fermi's Golden rule for computing Eq. (5.7). Throughout this section we use the units ħ = kB = 1 and a fixed value of the qubit-bath coupling α.

In order to quantify the quality of the register's performance for the broadest range of initial states, we made use of the purity [Poyatos (1997)]:

P(t) = (1/16) Σ_{j=1}^{16} Tr[ρj(t)²].  (5.9)

The density matrix ρj = ρj(t) is evaluated for all 16 initial conditions ρj(0) = |Ψin⟩⟨Ψin|, using the product states Ψin = |Ψa⟩1 ⊗ |Ψb⟩2, a, b = 1, ..., 4, built from the single-qubit states |Ψ1⟩ = |↓⟩, |Ψ2⟩ = |↑⟩, |Ψ3⟩ = 1/√2(|↓⟩ + |↑⟩), and |Ψ4⟩ = 1/√2(|↓⟩ + i|↑⟩), which evolve in time in the presence of decoherence.

$(|\downarrow\rangle + i|\uparrow\rangle)/\sqrt{2}$, which evolve in time in the presence of decoherence. We have also checked that the alternate use of the fidelity

$$F = \frac{1}{16}\sum_{j=1}^{16}\mathrm{Tr}\left[\rho_j(t)\,\rho_j^0(t)\right], \quad (5.10)$$

where $\rho^0(t)$ represents the unitary evolution of an initial state, gives rise to the identical results for the optimal inter-qubit couplings (see below).

Computing the matrix elements Eq. (5.7) one finds that the decohering effect of the environment tends to decrease with an increasing length

$$J = \sqrt{J_x^2 + J_y^2 + J_z^2} \quad (5.11)$$

of the vector $\mathbf{J} = (J_x, J_y, J_z)$. However, in all the practically important cases where the qubits emerge as effective (as opposed to genuine) two-level systems, an unlimited increase of J is not possible without leaving the designated "qubit subspace" of the full Hilbert space. Besides, the coupling strength errors that are likely to be present in any realistic setup tend to increase with the overall magnitude J of the inter-qubit coupling. Therefore, in what follows we fix the length of the vector J and look for an optimal configuration of its components, allowing for either sign of the latter.

Adjustment of the optimal interaction configuration is done by application of a genetic algorithm, which belongs to the class of the so-called intelligent global optimization techniques [Holland (1975)] that we discussed intensively in the previous chapters. The search for the optimal set of parameters begins with a number of random sets of parameters {J_a} which, in the language of [Holland (1975)], constitute an initial "population". The latter then undergoes a generation cycle after which the worst configurations are discarded and certain recombination procedures ("mutations") are performed on the rest of the population. The new solutions ("offsprings") are selected on the basis of the corresponding purity or another chosen fitness function. The iterations continue until the set of parameters J converges to a stable solution representing the optimal configuration J^opt.

In Fig. 5.2 we contrast the purity Eq. (5.9) computed for this optimal configuration against the results pertaining to the Ising (J_z = J, J_x = J_y = 0), XY (J_x = J_y = J/√2, J_z = 0), Heisenberg (J_{x,y,z} = J/√3), J_y = J (J_x = J_z = 0), and non-interacting (J_{x,y,z} = 0) cases.
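The generation cycle described above can be sketched as follows: a minimal elitist genetic algorithm over coupling vectors of fixed length. The fitness function below is a placeholder that merely rewards proximity to the double-degeneracy manifold $J_x = 0$, $2J_yJ_z = \Delta^2$ identified later in the text, not the actual purity computation:

```python
import numpy as np

rng = np.random.default_rng(0)
J = 1.0   # fixed overall coupling strength |J|

def random_population(n):
    """n random coupling vectors (Jx, Jy, Jz) of fixed length J."""
    v = rng.normal(size=(n, 3))
    return J * v / np.linalg.norm(v, axis=1, keepdims=True)

def fitness(j, delta=0.5):
    """Placeholder fitness: rewards proximity to the double-degeneracy
    manifold Jx = 0, 2*Jy*Jz = Delta^2 (the real computation would score
    the purity of Eq. (5.9) instead)."""
    return -(j[0] ** 2 + (2 * j[1] * j[2] - delta ** 2) ** 2)

pop = random_population(40)
for generation in range(200):
    scores = np.array([fitness(j) for j in pop])
    best = pop[np.argsort(scores)[-20:]]                 # discard the worst half
    mutants = best + 0.05 * rng.normal(size=best.shape)  # "mutations"
    mutants *= J / np.linalg.norm(mutants, axis=1, keepdims=True)
    pop = np.vstack([best, mutants])                     # next generation

j_opt = pop[np.argmax([fitness(j) for j in pop])]
print(np.round(j_opt, 3))
```

The elitist selection guarantees a monotone improvement of the best configuration from one generation to the next.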

A closer analysis of the optimal configuration reveals that it corresponds to an incidence of a double degeneracy amongst the eigenvalues of the Hamiltonian Eq. (5.1): $E_1 = E_3$, $E_2 = E_4$, or $E_2 = E_3$, $E_1 = E_4$. These conditions are satisfied provided that $J_x^{opt} = 0$ and $2 J_y^{opt} J_z^{opt} = \Delta^2$. Notably, the double degeneracy condition can only be fulfilled for a sufficiently strong (non-perturbative) inter-qubit coupling, $J > \Delta$. Taking into account the constraint imposed on the overall length J we then obtain the optimal inter-qubit couplings

$$J_y^{opt} = \pm\frac{1}{2}\left(\sqrt{J^2+\Delta^2} + \sqrt{J^2-\Delta^2}\right), \quad J_z^{opt} = \pm\frac{1}{2}\left(\sqrt{J^2+\Delta^2} - \sqrt{J^2-\Delta^2}\right), \quad (5.12)$$

thus restricting the possible optimal solutions to a hyperbola in the $J_y - J_z$ plane. This result can be read off from the two-dimensional plot of Fig. 5.1 representing the rate of the purity decay |dP/dt| as a function of $J_y$ and $J_z$ for $J_x = 0$, while T is relatively low.

In Fig. 5.3 we show some non-zero elements of the tensor $R_{nmkl}$ as a function of a single parameter w introduced through the relations $J_x = 0$, $J_y = 0.25\Delta w$, $J_z = 1.98\Delta w$, that corresponds to the overall length $J = 2\Delta$. This choice of the parameters leads to the double degeneracy for w = 1. It can be readily seen that under the double degeneracy condition many of the partial decoherence rates $\Lambda_{nmkl}$ drop to their lowest possible values ~ αT.

This observation suggests a systematic way of constructing a class of input states that undergo a slow decay governed by pure dephasing (no relaxation). One example of such a robust state is offered by the initial density matrix with the only nonzero entries $\rho_{22} = \rho_{24} = \rho_{42} = \rho_{44} = 1/2$ that correspond to the pair of degenerate states with lower energy. In Fig. 5.4 we plot the absolute value of the purity decay rate |dP(t)/dt| as a function of the parameter w for the above initial state. Near the double degeneracy point (w = 1) the rate is reduced by orders of magnitude. Rationalizing these findings, we surmise that the high stability of the doubly degenerate ground state of the two-qubit system is reminiscent of the notion of "supercoherent" decoherence-free subsystems introduced in [Bacon (2001)]. Conversely, on the basis of the results depicted in Fig. 5.4 one can also identify those initial states whose decay can only be made worse by increas-
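The double degeneracy behind Eq. (5.12) can be verified numerically. Below is a minimal check, assuming the underlying spin-Hamiltonian form $H = \sum_i \Delta_i\sigma_i^x + \sum_a J_a\sigma_1^a\sigma_2^a$ at the co-resonance point, with the per-qubit tunneling taken as $\Delta/\sqrt{2}$ so that $\Delta$ matches the combination $(\Delta_1+\Delta_2)/\sqrt{2}$ used in the text:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def h0(jx, jy, jz, delta):
    """Two identical qubits at the co-resonance point (eps = 0); the
    per-qubit tunneling is delta/sqrt(2), so `delta` plays the role of
    Delta = (Delta_1 + Delta_2)/sqrt(2) of the text (an assumption)."""
    h = (delta / np.sqrt(2)) * (np.kron(sx, id2) + np.kron(id2, sx))
    for j, s in ((jx, sx), (jy, sy), (jz, sz)):
        h = h + j * np.kron(s, s)
    return h

# Optimal couplings of Eq. (5.12) for J = 2*Delta:
delta, J = 1.0, 2.0
jy = 0.5 * (np.sqrt(J**2 + delta**2) + np.sqrt(J**2 - delta**2))
jz = 0.5 * (np.sqrt(J**2 + delta**2) - np.sqrt(J**2 - delta**2))

E = np.sort(np.linalg.eigvalsh(h0(0.0, jy, jz, delta)))
print(np.round(E, 6))   # two doubly degenerate pairs at +/- sqrt(5)
```

For these parameters the four eigenvalues collapse pairwise onto $\pm\sqrt{5}\,\Delta$, exhibiting exactly the double degeneracy discussed above.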

Fig. 5.1 Purity decay rate |dP/dt| as a function of $J_y$ and $J_z$ for $J_x = J_x^{opt} = 0$ and $T = 10^{-5}\Delta$.

ing J. One example of such a fragile state is given by the initial density matrix $\rho_{11} = \rho_{13} = \rho_{31} = \rho_{33} = 1/2$ which represents the upper pair of degenerate levels.

A word of caution is in order, however. In principle, any degeneracy may invalidate the use of the Golden rule expressions Eq. (5.7), whose (sufficient) condition of applicability reads

$$\max|R_{mnkl}| \ll \min|\omega_{mn}|. \quad (5.13)$$

It turns out, however, that in the problem at hand the solution of the master equations Eq. (5.5) remains stable even close to the double degeneracy points, and the optimal configuration shows no abrupt changes upon varying the strength of the coupling α from its lowest values that do comply with

Fig. 5.2 Purity as a function of time for $J = 2\Delta$, $T = 10^{-7}\Delta$ and different inter-qubit couplings: non-interacting $J_a = 0$ (thin solid line), Ising $J_z = J$, $J_x = J_y = 0$ (dotted line), XY $J_x = J_y = J/\sqrt{2}$, $J_z = 0$ (dash-double dotted line), Heisenberg $J_a = J/\sqrt{3}$ (dash-dotted line), $J_y = J$, $J_x = J_z = 0$ (dashed line), and the optimal configuration $J_a = J^{opt}$ (solid line).

the above conditions all the way up to the regime

$$\min|\omega_{mn}| < \max|R_{mnkl}| < \max|\omega_{mn}|. \quad (5.14)$$

It is also worthwhile mentioning that beyond the lowest order in the qubit-bath couplings all the degeneracies will generally appear to be lifted, thereby resulting in the standard avoided level crossing situation, unless they are protected by the symmetries of the full Hamiltonian Eq. (5.1). We also would like to stress that the optimization of the inter-qubit couplings can be carried out for arbitrary values of the parameters ε and Δ. Notably, for Δ = 0 and ε ≠ 0 the optimal configuration of the couplings, $J^{opt} = (0, 0, J)$, yields the Hamiltonian where all the terms commute with each other and there is again no relaxation.

To summarize, in this section we demonstrated that permanent inter-qubit couplings might serve as a new way of protecting quantum registers

Fig. 5.3 Upper figure: non-zero matrix elements of the relaxation tensor $R_{nmkl}$ computed for $T = 10^{-5}\Delta$, ε = 0, $J_x = 0$, $J_y = 0.25\Delta w$, and $J_z = 1.98\Delta w$ as a function of the parameter w. Note a dramatic drop of some of the matrix elements at w = 1. Bottom figure: for $J_x = J_y = 0$, $J_z = 2\Delta w$ the matrix elements vary monotonously with w.

against decoherence. While still remaining relatively unexplored in the mainstream quantum information proposals, this possibility has already been discussed in the context of constructing logical qubits and performing fault-tolerant computations in various strongly correlated spin systems, e.g. chiral spin liquids [Kitaev (2003)]. In the existing proposals the prototype logical qubits are envisioned as topological ground and/or quasi-particle bulk/edge states of rather exotic

Fig. 5.4 Purity decay rate |dP/dt| as a function of the parameter w (see Fig. 5.3) for $T = 10^{-7}\Delta$ (solid line) and $T = \Delta$ (dashed line).

spin systems. Nonetheless, albeit enjoying an exceptionally high degree of coherence, the (topo)logical qubits require an enormous overhead in encoding, since creating only a handful of such qubits takes a macroscopically large number of interacting physical ones. Besides, the nearly perfect isolation of (topo)logical qubits from the environment can also make initialization of and read-out from such qubits rather challenging. Thus, the results we have presented in this section suggest that a much more modest idea of augmenting the other decoherence-suppression techniques with properly tailored permanent inter-qubit couplings might still result in a substantial improvement of the quantum register's performance. Moreover, the optimization method employed in this section can be rather straightforwardly extended to the case of time-dependent gates and other quality quantifiers (quantum degree, entanglement capability, etc.) as well as beyond the Bloch-Redfield approximation and the assumption of the Ohmic dissipative environments.

The rapid pace of technological progress in solid-state quantum com-

puting (particularly, the phase [Mooij (1999)], charge [Nakamura (2003)] and charge-phase [Vion (2002)] superconducting qubit architectures) provides one with a strong hope that the specific prescriptions towards building robust qubits and their assemblies discussed in this section could be implemented before long. In the practical implementations the latter are crucially important, since any realistic qubit system would always suffer a detrimental effect of its dissipative environment.

5.2 Optimal design of universal two-qubit gates

Now let us construct optimized implementations of some universal two-qubit gates. The new protocols require tunable inter-qubit couplings but, unlike in most examples of the previously proposed protocols, are carried out in a single step and, in return, show significant improvements in the quality of gate operations. Our optimization procedure can be further extended to combinations of elementary two-qubit as well as irreducible many-qubit gates [Grigorenko (2005)].

According to one of the central results obtained in quantum information theory, an arbitrarily complex quantum protocol can be decomposed into a sequence of single-qubit rotations and two-qubit gates [DiVincenzo (1995)]. However, despite its providing a convenient means of designing logical circuits, in practice such a decomposition may not necessarily achieve the shortest possible times of operation and, concomitantly, the lowest possible decoherence rates. Recently, there have been various attempts to improve on the performance of universal quantum gates by searching for their optimal implementations among the entire class of the two-qubit Hamiltonians with the most general time-dependent coefficients. In the search of a more sophisticated analytical approach, a number of authors applied the optimal control theory with the goal of implementing an arbitrary unitary transformation independently of the initial state [Ramakrishna (2002)]. The resulting complex system of nonlinear integral-differential equations can be solved numerically with the help of the Krotov's or similar iterative algorithms [Krotov (1996)]. However, the outcome of a typical tour-de-force variational search such as that of [Niskanen (2003)] appears to be a complicated sequence of highly irregular pulses whose physical content might still remain largely obscure. Conceivably, a significantly simpler alternative to the above approaches

would be a straightforward implementation of a desired unitary transformation in the smallest possible number of steps, during which the parameters of the qubit Hamiltonian remain constant. One such example is given by the two-qubit SWAP gate which can be readily implemented (up to a global phase) with the use of the Heisenberg-type inter-qubit coupling that remains constant for the duration of the gate operation (see, e.g., [Zhang (2003)]). We are going to construct one-step implementations of some universal gates (e.g., the CNOT gate).

The problem of implementing a given unitary transformation in the course of a quantum mechanical evolution of a generic two-qubit system can be formulated as the condition of the minimum deviation

$$\left\| X - T\exp\left(-i\int_0^{t_0} H(x_i(t'))\,dt'\right)\right\| \to \min \quad (5.15)$$

between the time-ordered evolution operator governed by the Hamiltonian H and the target unitary transformation X. Here $|x_i(t)| \le a_i$ represent the tunable control parameters whose physically attainable values are generally bounded, and the Frobenius trace norm is defined as $\|Y\| = \sqrt{\mathrm{Tr}[Y^{\dagger}Y]}$.

In contrast with the previous works, where a constant decoherence rate was assumed and, therefore, the overall loss of coherence accumulated during a gate operation was evaluated on the basis of its total time, we quantify the adverse effect of the environment by actually solving the corresponding master equation for the density matrix of the coupled qubits. In this way, we account for the fact that the decoherence rates depend on (and vary with) the changing (in a step-wise manner) parameters of the time-dependent two-qubit Hamiltonian. On the basis of our general conclusions, we also make specific predictions for such a viable candidate to the role of a robust two-qubit gate as a pair of the charge-phase Josephson junction qubits (dubbed "quantronium" in [Vion (2002)]) which are tuned to their optimal points and coupled (both inductively and capacitively) to each other.

In the standard basis where $\sigma^z_{1,2}$ are diagonal, the noiseless part of Eq. (5.1) takes the form

$$H_0 = \begin{pmatrix} J_z+\epsilon_1+\epsilon_2 & \Delta_2 & \Delta_1 & J_x-J_y \\ \Delta_2 & \epsilon_1-\epsilon_2-J_z & J_x+J_y & \Delta_1 \\ \Delta_1 & J_x+J_y & \epsilon_2-\epsilon_1-J_z & \Delta_2 \\ J_x-J_y & \Delta_1 & \Delta_2 & J_z-\epsilon_1-\epsilon_2 \end{pmatrix}. \quad (5.16)$$

In order to facilitate our analysis of the decohering effect of the environ-
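For a piecewise-constant Hamiltonian the time-ordered exponential in Eq. (5.15) collapses to an ordinary matrix exponential, so the deviation can be evaluated directly. A sketch with ħ = 1; the single-qubit NOT example at the end is only a sanity check:

```python
import numpy as np
from scipy.linalg import expm

def deviation(h, x_target, t0=1.0):
    """Frobenius-norm deviation of Eq. (5.15) for a constant Hamiltonian
    (hbar = 1): || X - exp(-i H t0) ||, with ||Y|| = sqrt(Tr[Y^dag Y])."""
    d = x_target - expm(-1j * h * t0)
    return np.sqrt(np.trace(d.conj().T @ d).real)

# Sanity check on a single qubit: exp(-i (pi/2) sigma_x) = -i sigma_x,
# i.e. a NOT gate up to a global phase.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
print(deviation((np.pi / 2) * sx, -1j * sx))   # ~0
```

Minimizing this quantity over the bounded control parameters is then an ordinary finite-dimensional optimization problem.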

ment, in what follows we again, as in the previous section, assume the Ohmic nature of the dissipative reservoirs $\langle h_i(t) h_i(t')\rangle$ described by the spectral function $S(\omega) = \alpha\omega\coth(\omega/2T)\,\theta(\omega_c-\omega)$ with the bandwidth $\omega_c$; $\theta(\omega)$ is the step function. Below, we restrict our analysis to the Hamiltonians that remain constant for the entire duration of the gate operation ($H(t) = H_0\,\theta(t)\theta(t_{end}-t)$) and, therefore, commute at different times ($[H(t), H(t')] = 0$). This assumption allows one to treat the environment in the standard Bloch-Redfield (i.e., weak-coupling and Markovian) approximation, as we just have shown above. We then demonstrate that within such a class of quasi-stationary Hamiltonians, the problem of constructing an optimized (coherence-wise) implementation of a given universal gate allows for a rather simple and physically transparent solution.

To that end, we invoke a direct relationship between a possibility of decoherence suppression and a spectral degeneracy, as revealed by the analysis done in the previous section, where we considered the problem of preserving an initial state ("quantum memory") of an idling pair of coupled qubits (see also the later works [You (2004)] for similar results). Namely, a contribution to the overall decoherence factor stemming from the relaxation processes can be significantly reduced by tuning the Hamiltonian parameters towards the point where a pair of the lowest eigenvalues of the quasi-stationary Hamiltonian Eq. (5.16) becomes degenerate. The underlying (energy exchange-based) mechanism of the suppression of relaxation can be explained by the fact that near a degeneracy point and at low temperatures the partial relaxation rates (5.7) vary with the transition frequencies $\omega_{ij}$ as a sum $\sum_{i>j} c_{ij}|\omega_{ij}|$, where the coefficients $c_{ij}$ are essentially independent of $\omega_{ij}$. Obviously, the relaxation rates attain their minimum values at those points in the parameter space where the largest possible number of the transition frequencies vanish as a result of the onset of degeneracy. Notably, a contribution due to the other, pure dephasing, processes is generally unavoidable and can only be suppressed by lowering the temperature of the reservoirs.

In order to further illustrate the above point, in Fig. 5.5 we plot the decoherence rate |dP/dt| (note that in the Markovian approximation and in the short time limit the purity decays as a linear function of time) as a function of $J_y$ and $J_z$. And in Fig. 5.6 we plot the function which quantifies the degree of the degeneracy, $Q = |E_1 - E_4| + |E_2 - E_3|$, that helps us to iden-

tify the above described effect. Obviously, its minima will correspond to the double degeneracy.

Again, as in obtaining Figs. 5.5 and 5.6, we kept the length of the vector J constant, since in the case of realistic Josephson qubits an unlimited increase of J would have resulted in an unwanted leakage from the designated two-qubit Hilbert subspace. In Figs. 5.5 and 5.6 we put the parameters of both (chosen to be identical, $\Delta_1 = \Delta_2 = \Delta$) qubits into the coherence-friendly "quantronium regime" of Ref. [Vion (2002)]: $\epsilon_1 = \epsilon_2 = 0$. In this case, the eigenvalues read

$$E_{1,2} = J_x \mp \sqrt{(\Delta_1+\Delta_2)^2 + (J_y-J_z)^2}, \quad E_{3,4} = -J_x \mp \sqrt{(\Delta_1-\Delta_2)^2 + (J_y+J_z)^2}. \quad (5.17)$$

Figure 5.5 demonstrates that |dP/dt| has the absolute minimum at the point characterized by the incidence of the double degeneracy ($E_1 = E_4$, $E_2 = E_3$) between the eigenvalues given by the expressions (5.17). We have already shown in the previous section that the latter occurs for the inter-qubit coupling $J^{opt}$ satisfying the conditions

$$J_x^{opt} = 0, \quad 2 J_y^{opt} J_z^{opt} = \Delta^2. \quad (5.18)$$

Note that in Fig. 5.6 there are two points corresponding to the double degeneracy. The first one, where $J_y > J_z$, is not coherence-friendly, while the second one, with $J_z > J_y$, is coherence-friendly, because the resulting Hamiltonian will approximately commute with the coupling operator; since we assume the coupling to the reservoirs along the z axis, that breaks the symmetry between these two points. Besides, the two-dimensional degenerate subspace is (partially) protected by the energy gap separating it from the rest of the spectrum, which leads to an additional stability of the two-qubit system, thereby giving rise to the exponential suppression of some relaxation rates at low temperatures.

Having identified the conditions providing a suppression of decoherence, we can now try to satisfy them in the case of various universal gates. For a starter, for the sake of physical clarity, we impose the less stringent condition of a single degeneracy between the lowest pair of energy levels. In this case, the emergence of even a single degenerate pair of the two lowest eigenvalues appears to provide a relative improvement as compared to the non-degenerate case, albeit being less effective than the onset of double degeneracy. This more relaxed constraint on the Hamiltonian parameters may provide an extra freedom in improved implementations of the logic gates. However, while attempting to find a one-
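The closed-form spectrum of Eq. (5.17) is easy to check against direct diagonalization (again assuming the underlying spin-Hamiltonian form, with $\epsilon_1 = \epsilon_2 = 0$ and arbitrary illustrative parameter values):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def spectrum(jx, jy, jz, d1, d2):
    """Numerical spectrum of the two-qubit Hamiltonian at eps1 = eps2 = 0."""
    h = d1 * np.kron(sx, id2) + d2 * np.kron(id2, sx)
    for j, s in ((jx, sx), (jy, sy), (jz, sz)):
        h = h + j * np.kron(s, s)
    return np.sort(np.linalg.eigvalsh(h))

jx, jy, jz, d1, d2 = 0.3, 1.1, 0.7, 0.9, 0.6
# Closed-form eigenvalues, Eq. (5.17):
e_sym = [jx - np.hypot(d1 + d2, jy - jz), jx + np.hypot(d1 + d2, jy - jz)]
e_asym = [-jx - np.hypot(d1 - d2, jy + jz), -jx + np.hypot(d1 - d2, jy + jz)]
print(np.allclose(spectrum(jx, jy, jz, d1, d2), np.sort(e_sym + e_asym)))  # True
```

The two square roots correspond to the symmetric and antisymmetric sectors under the exchange of the two qubits.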

step implementation of the standard CNOT gate

$$X_{CNOT} = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{pmatrix}, \quad (5.19)$$

the latter appears to feature a substantial degree of ambiguity. In the noiseless case, this goal can be accomplished by virtue of computing a matrix logarithm of $X_{CNOT}$ directly:

$$H_{CNOT} = \frac{i\hbar}{t_0}\,\log(X_{CNOT}) = A\,(C+B)\,A^{\dagger}, \quad (5.20)$$

where $t_0$ is the protocol duration.

Fig. 5.5 The purity decay rate |dP/dt| as a function of the interaction coefficients $J_x$ and $J_y$, with $J_z = \sqrt{J^2 - J_x^2 - J_y^2}$, $J \sim 2\Delta$, $T = 0.001\Delta$.
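The principal branch of the matrix logarithm in Eq. (5.20) can be computed numerically, e.g. with scipy (a sketch with ħ = t_0 = 1; the branch ambiguities parameterized by the integers entering Eqs. (5.21) and (5.22) correspond to adding commuting 2πn terms):

```python
import numpy as np
from scipy.linalg import expm, logm

x_cnot = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=complex)

t0 = 1.0
h_cnot = 1j / t0 * logm(x_cnot)   # principal branch of Eq. (5.20), hbar = 1

print(np.allclose(h_cnot, h_cnot.conj().T))           # True: Hermitian generator
print(np.allclose(expm(-1j * h_cnot * t0), x_cnot))   # True: reproduces CNOT
```

Since $X_{CNOT}$ is unitary, the resulting generator is Hermitian and can be read as a candidate two-qubit Hamiltonian.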

Fig. 5.6 The measure of the double degeneracy $|E_1 - E_4| + |E_2 - E_3|$ as a function of the interaction coefficients $J_x$ and $J_y$, with $J_z = \sqrt{J^2 - J_x^2 - J_y^2}$ and $J = 2\Delta$ (compare with the previous figure).

Here we also use the notation

$$C = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&-\pi/2&\pi/2 \\ 0&0&\pi/2&-\pi/2 \end{pmatrix} + 2\pi n_0 E, \quad (5.21)$$

$$B = 2\pi n_1\begin{pmatrix} 0&0&1&1 \\ 0&0&1&1 \\ 1&1&0&0 \\ 1&1&0&0 \end{pmatrix} + 2\pi n_2\begin{pmatrix} 1&1&0&0 \\ 1&1&0&0 \\ 0&0&1&1 \\ 0&0&1&1 \end{pmatrix}, \quad (5.22)$$

where E is the unit matrix, and A is the block-diagonal matrix

$$A = \begin{pmatrix} e^{i\phi_1}\,\hat 1 & 0 \\ 0 & e^{i\phi_2}\,\hat 1 \end{pmatrix}, \quad (5.23)$$

with $\hat 1$ the 2×2 unit matrix,

parameterized by the arbitrary integers $n_i$ and the continuous phase variables $\phi_1$ and $\phi_2$ (observe that $[B, C] = 0$). A straightforward analysis reveals that one does reproduce the CNOT gate up to a global phase $\phi_0 = 1/4$ (so that $X_{CNOT} = \exp(-i\pi\phi_0)\exp(-i t_0 H_0/\hbar)$) while achieving a single spectral degeneracy ($E_2 = E_3$) with a suitable choice of the parameters $\epsilon_{1,2}$, $\Delta_{1,2}$ and $J_{x,y,z}$ in Eq. (5.16), all the energies being measured in units of $\hbar/t_0$. The high dimension of the invariant subspace of the CNOT's equivalence class (see below) is a rather unique property of this particular gate.

In Fig. 5.7 we contrast the resulting optimized gate against the standard (multi-step) CNOT protocol (see, e.g., [Zhang (2003)])

$$X_{CNOT} \propto \exp\left(-i\frac{\pi}{4}\sigma_1^z\right)\exp\left(-i\frac{\pi}{4}\sigma_2^x\right)\exp\left(i\frac{\pi}{4}\sigma_1^z\sigma_2^x\right), \quad (5.24)$$

which combines one-qubit rotations with an Ising-type two-qubit operation. Notably, the new one-step implementation takes only about 15% of the time required for implementing the standard one, and it results in approximately 10 times better performance.

As the next step, one might attempt to impose the double degeneracy condition Eq. (5.18) that is expected to result in an even better performance. It is, however, conceivable that this stringent condition may not allow for a construction of an arbitrary two-qubit gate. In such a case, one could then settle for a more readily attainable goal of constructing a gate X' which is only equivalent, albeit not necessarily identical, to a target gate X ($X' = U_1 \otimes U_2\, X\, U'_1 \otimes U'_2$, where $U_{1,2}$ and $U'_{1,2}$ are single-qubit rotations). It should be noted, however, that such a substitute solution would only be acceptable if any one-qubit rotation can be performed quickly and, therefore, contributes negligibly towards the overall loss of coherence. Formally, the equivalence between different gates is established on the basis of their Makhlin's invariants (see, e.g., [Zhang (2003)])

$$G_1 = \frac{\mathrm{tr}^2[m(X)]}{16\,\mathrm{Det}[X]}, \qquad G_2 = \frac{\mathrm{tr}^2[m(X)] - \mathrm{tr}[m^2(X)]}{4\,\mathrm{Det}[X]}, \quad (5.25)$$

where $m(X) = X_B^T X_B$, $X_B = Q^{\dagger} X Q$, and

$$Q = \frac{1}{\sqrt{2}}\begin{pmatrix} 1&0&0&i \\ 0&i&1&0 \\ 0&i&-1&0 \\ 1&0&0&-i \end{pmatrix}. \quad (5.26)$$

It is worth mentioning that in the case of the double degeneracy condition Eq. (5.18) $G_1$ appears to be a real number, which severely restricts the set of logic gates that can be constructed this way. An example of a gate incompatible with the double degeneracy condition is given by the before mentioned √SWAP gate which has $G_1 = -i/4$. By contrast, the CNOT equivalence class, which has the invariants $G_1 = 0$, $G_2 = 1$, can be attained with the use of the Ising coupling along the z axis. Namely, we find that in the case of the equivalence class of CNOT the double degeneracy condition can be met with the following choice of the coupling parameters: $J_x^{opt} = 0$, $J_y^{opt} = 0.52\Delta$, $J_z^{opt} = 1.90\Delta$ at $J = 2\Delta$. This result is consistent with the CNOT implementation given by Eq. (5.24).

For a large part, the CNOT gate owes its popularity to the fact that a generic two-qubit gate requires no more than three applications of the CNOT complemented by local one-qubit rotations [Zhou (2004)]. With the aim of a further improvement, recently an alternative gate (dubbed by the authors the "B" gate) with the invariants $G_1 = G_2 = 0$ was proposed [Zhang (2003)]. Unlike CNOT, the B-gate only needs to be used twice in order to implement an arbitrary two-qubit gate (up to single-qubit rotations). We find that our approach, which takes a full advantage of the double degeneracy condition, offers a superior realization of the B-gate's universality class. By choosing $\Delta_1 = \Delta_2 = \Delta$ and a double-degenerate coupling configuration, we arrive at the one-step implementation of an equivalent of the B-gate that takes only ≈ 42% of the duration of the protocol of [Zhang (2003)] and results in a much improved (by a factor of 6) gate purity. We also found that this protocol outperforms the CNOT-equivalent gate obtained by using the Heisenberg inter-qubit coupling [Romero (2004)]: our optimized protocol appears to be ≈ 2.3 times shorter and suffers ≈ 25 times less decoherence, as compared to that of [Romero (2004)].

As far as practical recommendations are concerned, the above theoretical predictions can be applied to coupled Josephson junction qubits where the typical value of Δ is of order 1 GHz [Vion (2002)]. Assuming the strength of the inter-qubit coupling $J = 2\Delta \sim 2$ GHz,

Fig. 5.7 Comparison between the 5-step implementation of the CNOT gate and the optimal one-step implementation. The time is normalized on the longer duration of the two protocols. Note that the maximal allowed amplitudes of $J_a$ in both cases are limited to 2Δ.

we obtain the optimal couplings

$$J^{opt} = \left(0,\ \frac{\sqrt{J^2+\Delta^2}+\sqrt{J^2-\Delta^2}}{2},\ \frac{\sqrt{J^2+\Delta^2}-\sqrt{J^2-\Delta^2}}{2}\right) = (0,\ 1.98,\ 0.25)\ \mathrm{GHz}.$$

For temperatures T ~ 0.1 K the purity degradation is of order $1 - P \approx 4\times 10^{-4}$, which corresponds to a characteristic relaxation time of ≈ 2500 ns. These estimates have to be contrasted with the experimentally measured single-qubit relaxation and dephasing times which were found in Ref. [Vion (2002)] to be 1000 ns and 20 ns, respectively. As regards the largest acceptable error in the gate's parameters, we find that in the case of the overall purity decay $\delta P(T) \sim 10^{-4}$ the duration of the control pulse may not deviate by more than 0.3%, and the same allowed deviation applies to its amplitude.

As we have mentioned above, the one-step implementation permits one to easily construct any fractional power $X^a$ of a gate. Using the described approach, we also can perform a one-step implementation of the RNOT and FFT2 gates; in these cases, we were able to find control parameters which correspond only to a single degeneracy of the unperturbed Hamiltonian.
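The fractional-power property follows immediately from the constant-Hamiltonian form $X = \exp(-iHt_0)$: rescaling the gate duration by a factor a yields $X^a$. A minimal illustration (ħ = t_0 = 1):

```python
import numpy as np
from scipy.linalg import expm, logm

x_cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                   [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

h = 1j * logm(x_cnot)            # constant Hamiltonian with t0 = 1, hbar = 1
sqrt_cnot = expm(-1j * 0.5 * h)  # run the very same Hamiltonian for t0/2

print(np.allclose(sqrt_cnot @ sqrt_cnot, x_cnot))   # True
```

No re-optimization is needed: the fractional gate is implemented by the same couplings applied for a fraction of the original duration.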

To summarize, in this section we present a systematic approach to a one-step implementation of highly robust two-qubit universal gates. The increased stability against decoherence is achieved thanks to the special choice of the Hamiltonian parameters which provides for an incidence of degeneracies in the instantaneous energy spectrum, thus suppressing the effect of the relaxation processes. Notably, this approach differs from the previous proposals for constructing "supercoherent" qubits which, albeit capable of providing an exponential suppression of decoherence, require at least four physical qubits governed by a Hamiltonian that conserves the total spin [Bacon (2001)]. The latter condition, however, is unlikely to be fulfilled in the case of effective (e.g., Josephson junction based) pseudo-spin Hamiltonians due to the almost immanent presence of permanent (non-spin-rotationally invariant) local terms (in our notation, the single-qubit ε- and Δ-fields) in any many-qubit counterparts of Eq. (5.1). It should be possible to further generalize our results to the case of a sequence of several two-qubit as well as irreducible multi-qubit gates.

5.3 Entanglement of a pair of qubits

Besides fidelity and purity there is another interesting quantity that characterizes a quantum system: entanglement. Entanglement of a pair of qubits is a quantity that characterizes the amount of information one can learn about the first qubit by measuring the second one. For example, if a two-qubit system is in the singlet state (like a spin singlet), then the measurement of one qubit is enough to completely determine the state of the second qubit in the pair. And quite the opposite: any product state has zero entanglement; in other words, the two qubits are uncorrelated. Entanglement may be an important resource for quantum computing.

Let us consider a qubit which is in contact with a primitive bath, namely another two-level quantum system. This situation might be realized experimentally as a one-qubit device which is coupled to one (the nearest) trapped charge that also can be treated as a two-level system. Although our model is quite simple, we can show that it can exhibit rich behavior in time that is strongly dependent on the type of coupling between these two subsystems. One can systematically investigate different regimes and even obtain some analytical solutions.

For a system consisting of two interacting qubits the von Neumann

entropy of the subsystem can be used as a measure of entanglement, which is a deep result linking quantum mechanics to information theory and thermodynamics. The entropy $S_1$ of the first qubit is defined as:

$$S_1 = -k\,\mathrm{Tr}\left(\rho_1\log(\rho_1)\right), \quad (5.27)$$

where k is the Boltzmann constant, and the marginal density matrix $\rho_1$ is obtained by partial tracing over the second qubit: $\rho_1 = \mathrm{Tr}_2(\rho)$. The entropy of any pure state is zero, which is unsurprising since there is no uncertainty about the state of the system. Note that in the case when the whole system is in a pure state $|\psi\rangle\langle\psi|$, we have $\rho = \rho^2$. It also can be shown that unitary operators acting on a state of the whole system (such as the time evolution operator obtained from the Schrodinger equation) leave the entropy of the whole system unchanged. This associates the reversibility of a process with its resulting entropy change. In the case when the observed subsystem is initially in a pure state, the rate of increase of the subsystem's entropy reflects the leakage of the

Fig. 5.8 Plot of the entropy of the first qubit after $t = 30.17\pi$ of unitary evolution of the whole two-qubit system, as a function of the parameters φ and θ (see text).
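The computation behind Eq. (5.27) amounts to a partial trace followed by an eigenvalue problem. A minimal sketch (k = 1, standard Kronecker ordering of the two-qubit basis assumed):

```python
import numpy as np

def entropy_first_qubit(rho, k=1.0):
    """Eq. (5.27): S1 = -k Tr[rho1 log rho1], where rho1 is obtained by
    tracing out the second qubit (standard |q1 q2> Kronecker ordering)."""
    r = rho.reshape(2, 2, 2, 2)         # indices (i1, i2, j1, j2)
    rho1 = np.einsum('ikjk->ij', r)     # partial trace over the second qubit
    p = np.linalg.eigvalsh(rho1)
    p = p[p > 1e-12]                    # drop numerical zeros
    return float(max(0.0, -k * np.sum(p * np.log(p))))

# A pure product state carries zero entanglement (zero subsystem entropy) ...
psi = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])
print(entropy_first_qubit(np.outer(psi, np.conj(psi))))          # 0.0

# ... while the singlet is maximally entangled: S1 = k log 2.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
print(entropy_first_qubit(np.outer(singlet, np.conj(singlet))))  # ~0.693
```

The two checks reproduce the limiting cases discussed above: zero entropy for a product state and the maximal value $k\log 2$ for the singlet.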

information due to coupling to the unobservable subsystem (the second qubit in our language). Explicitly, the marginal density matrix for the first qubit can be written as:

$$\rho_1 = \begin{pmatrix} \rho_{11}+\rho_{33} & \rho_{34}+\rho_{12} \\ \rho_{43}+\rho_{21} & \rho_{22}+\rho_{44} \end{pmatrix}, \quad (5.28)$$

where $\rho_{ij}$, $i, j = 1, \ldots, 4$ are the density matrix elements of the total system.

As we already did before, let us consider a quantum system of two interacting qubits with the Hamiltonian H:

$$H = \begin{pmatrix} J_z+\epsilon_1+\epsilon_2 & \Delta_2 & \Delta_1 & J_x-J_y \\ \Delta_2 & \epsilon_1-\epsilon_2-J_z & J_x+J_y & \Delta_1 \\ \Delta_1 & J_x+J_y & \epsilon_2-\epsilon_1-J_z & \Delta_2 \\ J_x-J_y & \Delta_1 & \Delta_2 & J_z-\epsilon_1-\epsilon_2 \end{pmatrix}. \quad (5.29)$$

The system's density matrix ρ(t) satisfies the quantum Liouville equation:

$$i\hbar\,\frac{\partial\rho}{\partial t} = L\rho = [H, \rho]. \quad (5.30)$$

By parameterizing the interaction vector $(J_x, J_y, J_z)$ as $J_x = J\cos(\phi)\sin(\theta)$, $J_y = J\sin(\phi)\sin(\theta)$, $J_z = J\cos(\theta)$, we can plot the entropy of the subsystem (which also characterizes the entanglement in the whole system), $S_1 = S_1(t, \phi, \theta)$. In Fig. 5.8 we plot the entropy $S_1$ as a function of the parameters φ and θ at the time $t = 30.17\pi$, for a particular choice of the Hamiltonian's parameters: $\epsilon_1 = \epsilon_2 = 0.5$, $\Delta_1 = \Delta_2 = \Delta = 0.0001$, and the interaction strength J = 1. We set the initial pure state as ρ(0) = diag(1, 0, 0, 0). We have restricted φ ∈ [0, π] and θ ∈ [0, π/2] because of the symmetry of the map. The lighter areas in the plot correspond to the lower values of the entropy, and the darker areas to the higher ones. In Fig. 5.8 we can see that the entropy map $S_1(\phi, \theta)$ becomes rather complicated; for even larger times it looks more fractal-like. This pattern reflects the sensitivity of the system to small perturbations of the coupling parameters on big time scales.

5.4 Summary

In this chapter we considered an application of the optimal control and optimal design approach to the problem of quantum computing. It was shown that one can use the tunable parameters of the basic elements of a quantum computer in order to optimize its performance and stability against decoherence effects.
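A single point of the entropy map of Fig. 5.8 can be reproduced along these lines; the sketch below implements the unitary Liouville evolution of Eq. (5.30) via the propagator $\exp(-iHt)$ (ħ = 1), with the illustrative parameter values quoted in this section:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def entropy_map_point(phi, theta, t, J=1.0, eps=0.5, delta=0.0001):
    """Entropy S1(t, phi, theta) of the first qubit after unitary evolution
    under H = sum_i (eps sz_i + delta sx_i) + sum_a Ja sa sa (hbar = k = 1)."""
    jx = J * np.cos(phi) * np.sin(theta)
    jy = J * np.sin(phi) * np.sin(theta)
    jz = J * np.cos(theta)
    h = eps * (np.kron(sz, id2) + np.kron(id2, sz)) \
        + delta * (np.kron(sx, id2) + np.kron(id2, sx))
    for j, s in ((jx, sx), (jy, sy), (jz, sz)):
        h = h + j * np.kron(s, s)
    u = expm(-1j * h * t)                 # propagator solving Eq. (5.30)
    rho0 = np.zeros((4, 4), dtype=complex)
    rho0[0, 0] = 1.0                      # rho(0) = diag(1, 0, 0, 0)
    rho = u @ rho0 @ u.conj().T
    rho1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))  # trace out qubit 2
    p = np.linalg.eigvalsh(rho1)
    p = p[p > 1e-12]
    return float(max(0.0, -np.sum(p * np.log(p))))

s = entropy_map_point(0.3, 0.8, t=30.17 * np.pi)
print(0.0 <= s <= np.log(2) + 1e-9)   # True: bounded by the maximal value log 2
```

Scanning `phi` and `theta` over the ranges quoted in the text then produces the full two-dimensional entropy map.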

We have obtained that even in the case of a simple two-qubit system one can improve the performance and stability of the system by approximately a factor of 10. One could also expect that the difference in performance between the optimal and a non-optimal realization should increase with the increasing size of the object. But one has to take into account that for larger systems the sensitivity to the optimal design implementation could also increase, because the spectrum of interacting three- and multi-qubit systems becomes closer and closer to the continuum spectrum of a macroscopic system. Since for large subsystems a "plain" design has even less chances to be the optimal realization, the significance of the optimal control becomes obvious. We also have calculated the entropy of a pair of interacting qubits, and have shown its complex dependence on the interaction parameters.


Chapter 6

Forecasting of complex dynamical systems

Es gibt Zeit für die Märchen,
Für das goldhaarige Mädchen,
Für den Honigaugen-Traum,
Für den Zauberbaum.
I.G.

(There is time for fairy tales, for the golden-haired girl, for the dream of honey eyes, for the magic tree.)

As we have discussed in the previous chapters, complex systems are usually associated with situations in which it is extremely difficult to deduce a system's behavior, the way how it works, from first principles. In fact, on the current level of our knowledge, we cannot formulate "behavioral" variational principles (analogous to the universal principle, mentioned in the first chapter, that we have, for example, in quantum mechanics). It is simply impossible to predict, for example, the behavior of a human being purely from a priori principles. This means that without such a "behavioral Lagrangian" one cannot derive the corresponding Euler-Lagrange equations, and that makes forecasting a very nontrivial problem. In addition, complex systems are characterized by multiple temporal and spatial scales. The approximation and forecasting of dynamical systems seems to be a field in which many open problems must be studied in the near future. In particular, more needs to be said on the actual relationship between the required accuracy and the computational effort.

6.1 Forecasting of financial markets

Financial markets can be regarded as model complex systems: they are systems composed of many agents which interact with each other in a highly complicated way. Financial markets are continuously monitored; the data exist down to the scale of each single communication of a bid or ask of a financial asset (quotes) and at the level of each transaction (trade). The availability of this enormous amount of data allows a detailed statistical description of several aspects of the dynamics of asset prices in a financial market. The results of these studies show the existence of several levels of complexity in the price dynamics of financial assets.

Financial markets provide us with high-frequency time series, which are usually multivariate (multi-dimensional), highly correlated, and highly volatile, and which have interesting scaling properties. Besides that, the stochastic component is an essential part of them: they have no simple statistics (like Gaussian random numbers), and, finally, they are non-stationary, i.e., their statistical properties are changing in time! One cannot treat such systems simply as deterministic. All this makes the forecasting of financial markets an extremely complicated and challenging problem. As a consequence, there is a philosophical question: is it possible at all to predict the future, and if the answer is positive, how does one quantify predictability? Let me leave this question open.

Let us mention some of the most common and frequently used forecasting methods: different autoregressive models, including fractionally integrated ones (we gave a short introduction to fractional derivatives in chapter 3), such as AR, ARMA, ARIMA, ARFIMA, GARCH, FGARCH, and VAR [Hendry (2001)]; Neural Networks; and forecasting techniques based on Chaos Theory and econometric models, like multi-agent forecasting programs and packages. In the following paragraphs we are going to talk about some of these methods. It was shown (see, for example, [Hegger (1998)]) that the implicit assumption of noise-free input for a time series can lead to systematically wrong results of the model estimation: as a consequence, one obtains biased, that is, of course, incorrect estimates of the model parameters. This effect has a greater impact the higher the actual level of the noise is. Thus, one has to use an approach that tackles noisy time series.

6.2 Autoregressive models

A category of models that is commonly used for forecasting is the class of autoregressive models. The main idea is to predict future values of the time series with the help of the past (lagged) values. One can write an autoregressive model as:

Y_t = \sum_{i=1}^{N} a_i Y_{t-i} + Z_t,   (6.1)

where Z_t is some stochastic process, for example a white noise sequence. Intuitively, one can understand white noise as a completely random sequence without systematic structure. In terms of autocorrelation it is "delta"-correlated in time,

<Z_t Z_{t'}> = \sigma^2 \delta(t - t'),   (6.2)

with variance \sigma^2, and we assume that it has zero mean. By introducing a weighted average of Z_t, one can formulate a moving average (MA) time series model:

Y_t = \sum_{i=1}^{N} b_i Z_{t-i}.   (6.3)

Let us assume that we have observed a time series {Y_1, ..., Y_n} and that we are interested in forecasting a future value Y_{n+m}. The forecast is h steps ahead, where h is known as the lead time or horizon. It will be useful to distinguish between two types of forecast, the ex post and the ex ante forecast. The ex post forecast predicts observations for a period in which the "future" observations are known for certain; it is used as a means to check against known data, so that the forecasting model can be evaluated. The ex ante forecast, on the other hand, predicts observations beyond the present, and in this case the future observations are not available for checking. For example, we may use {Y_1, ..., Y_T} (T < n) to estimate a model and then use the estimated model to forecast {Y_{T+1}, ..., Y_n}; the estimation period in this case is T. These are ex post forecasts, since we can compare them against the observed {Y_{T+1}, ..., Y_n}. On the other hand, when we forecast {Y_{n+1}, ..., Y_{n+m}}, m = 1, 2, ..., we are doing ex ante forecasts: after fitting a model, we estimate a future value Y_{n+m} at time n based on the fitted model, while the actual value of Y_{n+m} is unknown.
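As an illustration of fitting and forecasting with model (6.1), here is a minimal numpy sketch (not from the book; the AR(2) coefficients, sample size, and noise level are arbitrary choices). The coefficients are estimated by least squares on the lagged values, and an ex ante forecast is produced by iterating the fitted model with the future innovations replaced by their zero mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process Y_t = a1*Y_{t-1} + a2*Y_{t-2} + Z_t
# with white-noise innovations Z_t (zero mean, variance sigma^2).
a_true = np.array([0.6, -0.3])
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = a_true[0] * y[t-1] + a_true[1] * y[t-2] + rng.normal(scale=0.5)

# Fit AR(N) by ordinary least squares on the lagged values.
N = 2
X = np.column_stack([y[N - i - 1:n - i - 1] for i in range(N)])  # Y_{t-1}, Y_{t-2}
target = y[N:]
a_hat, *_ = np.linalg.lstsq(X, target, rcond=None)

# Ex ante forecast h steps ahead: iterate the fitted model,
# replacing the unknown future innovations by their zero mean.
h = 5
hist = list(y[-N:])
for _ in range(h):
    hist.append(a_hat @ np.array(hist[-1:-N-1:-1]))
forecast = hist[N:]

print(a_hat, forecast)
```

With a few hundred observations the least-squares estimates land close to the true coefficients; the same loop with nonzero simulated innovations would give an ex post check instead.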

6.3 Chaos theory embedding dimensions

If we construct a forecasting model, we would like to identify the order of the autoregressive model; at the same time, we have to take care that this model will not be overcomplicated. One can do this with the help of chaos theory and the so-called embedding theorem. Suppose we have a scalar time series x_1, x_2, ..., x_N. We make a time-delay reconstruction of the phase space with the reconstructed vectors:

V_n = (x_n, x_{n-t}, ..., x_{n-(d-1)t}),   (6.4)

where t is the time delay and n = (d-1)t + 1, ..., N. The time delay (time lag) t represents the time interval between sampled observations used in constructing the d-dimensional embedding vectors, and d, the so-called embedding dimension, represents the dimension of the state space of the underlying system. If the time series is generated by a deterministic system, then by the embedding theorems [Takens (1981)], if the observation function of the time series is smooth and d is sufficiently large, there generically exists a function F such that

V_{n+1} = F(V_n),   (6.5)

and this mapping has the same dynamical behavior as that of the original unknown system in the sense of topological equivalence. From the Takens theorem, it does not matter what time delay is selected in a "generic" sense. But in practice, because we have only a finite number of data points available, with finite measurement precision, a good choice of t is deemed to be important in phase-space reconstructions. Then the remaining problem is how to choose t and d. In addition, determining a good embedding dimension d depends on the actual choice of t, such that the above mapping exists.

Another interesting issue is the choice of the embedding dimension from a time series. Generally, there are three basic methods used in the published literature: computing some invariant (e.g., Lyapunov exponents, correlation dimension) on the attractor (see, e.g., [Ott (1984)], [Grassberger (1983)]), singular value decomposition [Broomhead (1986)], and the method of false neighborhoods [Kennel (1992)].
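The delay reconstruction (6.4) is straightforward to implement. A small sketch (the sine-wave series and the values d = 3, t = 5 are arbitrary illustrations):

```python
import numpy as np

def delay_embed(x, d, t):
    """Build the delay vectors V_n = (x_n, x_{n-t}, ..., x_{n-(d-1)t})."""
    n0 = (d - 1) * t  # first index with a full delayed history
    return np.column_stack([x[n0 - i * t: len(x) - i * t] for i in range(d)])

# Example: embed a noiseless sine wave with d = 3, t = 5.
x = np.sin(0.2 * np.arange(200))
V = delay_embed(x, d=3, t=5)
print(V.shape)  # (190, 3)
```

Each row of V is one reconstructed state vector; feeding consecutive rows to a regression model is one way to approximate the mapping F of Eq. (6.5).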

6.4 Modelling of economic "agents" and El Farol bar problem

As we have mentioned, economic modelling is one of the most challenging problems. Some interesting insights can be gained from the modelling of economic "agents" with bounded computational skills and/or resources. One can use artificial agents to simulate individual investors. The ultimate goal of each agent is to maximize its individual wealth. To accomplish this goal, each agent must choose between investing in either a risky or a risk-free asset, by developing a forecast of the price of the risky security and then determining the position in that security or in the risk-free asset to be taken. One of the most efficient ways to generate a forecast, in our case a market forecast, is to use a genetic algorithm to attempt to learn the behavior of the market. As we have described in the previous chapters, a genetic algorithm is a heuristic search mechanism based on the notion of biological evolution. A pool of potential solutions is evaluated against some objective function, and those solutions that produce the best results are kept, while potential solutions that produce inferior results are discarded from the pool. Genetic operators are then applied to the remaining potential solutions to replenish the pool.

One of the first very simple, but rich, multi-agent models was developed to solve the El Farol bar problem. The El Farol bar problem is a problem in game theory that was created in 1994 by Brian Arthur [Arthur (1992)]. It got its name from a bar in Santa Fe, New Mexico, which offers Irish music on Thursday nights. The problem is as follows: there is a finite population of people (let's say 100) in a small town. On Thursday night, all of these people want to go to the popular El Farol Bar to hear lovely Irish music. However, since the El Farol is quite small (a limited resource), it's no fun to go there if it's too crowded (or fully occupied). So we have a "game" with the following rules: if fewer than 60% of the population (in our case 60 people) decide to go to the bar this evening, they'll all have a better time than if they stayed at home; if more than 60 people make this decision, they'll all have a worse time than if they stayed at home. Everyone has to decide at the same time whether they will go to the bar or not; they cannot wait and see how many others go before deciding to go themselves. In some variants of the problem, persons are allowed to communicate with each other before deciding to go to the bar; however, they are not required to tell the truth.
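The evaluate-select-replenish loop described above can be sketched as follows. This is a toy example, not the market model from the literature: the "genes" are the two weights of a linear one-step forecaster, fitness is the negative mean squared forecast error, crossover is parent averaging, and all population sizes and rates are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective: forecast a noisy AR(1) series one step ahead.
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t-1] + rng.normal(scale=0.1)

def fitness(w):
    pred = w[0] + w[1] * y[:-1]           # linear one-step forecast
    return -np.mean((y[1:] - pred) ** 2)  # higher is better

pop = rng.normal(size=(40, 2))            # pool of candidate solutions
for generation in range(60):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the best, discard the rest
    parents = elite[rng.integers(0, 10, size=(30, 2))]
    children = parents.mean(axis=1)                  # crossover: average two parents
    children += rng.normal(scale=0.05, size=children.shape)  # mutation
    pop = np.vstack([elite, children])               # replenish the pool

best = pop[np.argmax([fitness(w) for w in pop])]
print(best)
```

For this quadratic objective the pool quickly concentrates near the least-squares answer (intercept near 0, slope near 0.8); the same skeleton applies when the genome encodes a richer forecasting rule for an agent.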

The problem is constructed in such a way that, no matter what strategy each person uses to decide if they will go to the bar or not, if everyone uses the same strategy it is guaranteed to fail: if that strategy suggests that the bar will not be crowded, everyone will go, and thus it will be crowded; likewise, if that method suggests that the bar will be crowded, nobody will go, and thus it will not be crowded. Now you can appreciate the significance and non-triviality of this problem.

These applications can be particularly interesting, because this kind of language allows us to simulate effects of collective behavior, such as herding, cooperation, clustering, etc. With these models one can capture the workings of the processes stage by stage, as they are observed, and reproduce the known outcomes. Nevertheless, one should be aware of recent works which stress that investors are not fully rational, or have at most bounded rationality, and that behavioral and psychological mechanisms, such as herding, may be important in the shaping of market prices. The extension of the theoretical results applied to economic multi-agent models seems promising and certainly deserves further work.

6.5 Forecasting of the solar activity

Solar activity forecasting is another important topic for various scientific and technological areas, like space activities related to the operation of low-Earth orbiting satellites, electric power transmission lines, geophysical applications, and high-frequency radio communications. The particles and electromagnetic radiation flowing from solar activity outbursts are also important for long-term climate variations, and thus it is very important to know in advance the phase and amplitude of the next solar and geomagnetic cycles.

Many attempts to predict the future behavior of the solar activity are well documented in the literature, but with limited success. The solar cycle is very difficult to predict on the basis of time series of various proposed indicators, due to their high frequency content, noise contamination, high dispersion level, and high variability in phase and amplitude. This topic is also complicated by the lack of a quantitative theoretical model of the Sun's magnetic cycle. Numerous forecasting techniques have been developed to predict accurately the phase and amplitude of the future solar cycles.

6.6 Noise reduction and Wavelets

As we have already mentioned, real-life forecasting is usually done on the basis of multi-frequency, noisy data, for example some currency exchange rate during the next month. In this case, before making any forecasting model, it can be useful to filter the high-frequency components from the original time series. There are many ways to do it, and one of the most effective is based on wavelet analysis.

Wavelet analysis is a relatively new development in the area of applied mathematics that is just now receiving the attention of many scientists. By design, the usefulness of wavelets rests in their ability to localize a process in time-frequency space, unlike the Fourier transform, which is localized only in frequency and never gives any information about where in space or time a given frequency happens. By moving from high to low levels of frequency, a wavelet is able to zoom in on a process's behavior at a point in time and identify singularities, or alternatively zoom out and reveal the long and smooth features of a signal. At high frequency levels the wavelet is tight in shape (a small time interval) and is able to focus on short-lived phenomena like singularity points, while at low frequencies the wavelet is stretched out in shape, making it well suited to identifying long periodic processes.

Wavelets can be thought of as the derivative at any order k of a smoothing kernel, under the assumption that the smoothing kernel has at least k-ordered derivatives. Like any smoothing kernel, the kernel from which wavelets are formed is well localized in time space. But unlike normal unimodular smoothing kernels, the smoothing kernel used in deriving a wavelet can take on negative values. This feature enables the wavelet to be well localized in frequency space, improving the decorrelation between the wavelet coefficients and enabling the wavelet's bandwidth to be increased (decreased) to capture the long and smooth (short and discontinuous) characteristics of a time series.

A continuous wavelet is determined as follows:

\psi_{a,b}(t) = |a|^{-1/2} \psi((t - b)/a),   (6.6)

where a > 0 and b is any real number. The function \psi_{a,b} is simply the dilation (by a) and translation (by b) of the function \psi. If a > 1, \psi flattens out horizontally, while 0 < a < 1 tightens \psi. For this reason a is referred to as the scaling parameter.

Similarly, b is referred to as the translation parameter: changing the value of b shifts the time argument, allowing \psi_{a,b} to be well localized around the translation point b. The |a|^{-1/2} term is a normalizing constant that insures that \psi_{a,b} has an inner product equal to one; when a > 1 it causes the vertical height of \psi to be scaled down, while when 0 < a < 1 the vertical height is increased.

The function \psi(t) is commonly referred to as the "mother" wavelet. In order for a function to qualify as a "mother" wavelet, it must satisfy the admissibility condition

\int \psi(t) dt = 0.   (6.7)

This is a necessary condition insuring smoothness and localization in frequency and time space. The admissibility condition can also be interpreted as requiring \psi(t) to be non-unimodular: if \psi is well localized around zero and integrates to zero, it must oscillate, hence the name wavelets. The wavelet coefficients are given by the inner products

<x(t), \psi_{a,b}> = |a|^{-1/2} \int x(t) \psi((t - b)/a) dt,   (6.8)

and they represent the details of the signal x(t) at the scale a. It can be shown that

\int\int |<x(t), \psi_{a,b}>|^2 a^{-2} da db = C_\psi \int |x(t)|^2 dt,

and the coefficients can be used to reconstruct x by

x(t) = (1/C_\psi) \int\int a^{-2} <x(t), \psi_{a,b}> \psi_{a,b}(t) da db,   (6.9)

where C_\psi = 2\pi \int |\hat\psi(\omega)|^2 / |\omega| d\omega < \infty. Note that the admissibility condition is implied by C_\psi < \infty if \psi(t) has sufficient decay.

The wavelet transform represents an efficient technique for the processing of signals with time-varying spectra; it can be viewed as a decomposition of a signal in the time-scale plane [Daubechies (1992)], [Chui (1992)], [Mallat (1989)]. There are many application areas of the wavelet transform, such as subband coding, data compression, and noise reduction. For noise reduction, many techniques are available, such as filtering, adaptive methods, and the wavelet transform. For reducing the noise of a measured signal, the wavelet transform employing thresholding in the wavelet domain was proposed by Donoho as a powerful method [Donoho (1994)]. The major interests of the recent papers on noise reduction using the wavelet transform are the determination of the wavelet transform and the choice of the thresholding parameters. It has been proved that the method for noise reduction works well for a wide class of one-dimensional and two-dimensional signals.
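A minimal sketch of wavelet-domain denoising follows. It is an illustration only: it uses a single level of the Haar wavelet, implemented by hand, and an arbitrary threshold, rather than the full thresholding methodology of [Donoho (1994)].

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_step(x):
    """One level of the orthonormal Haar transform: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, thr):
    """Soft thresholding: shrink coefficients toward zero by thr."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

# Noisy slow signal: thresholding the detail coefficients removes much of
# the high-frequency noise while keeping the smooth trend.
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + rng.normal(scale=0.3, size=t.size)

a, d = haar_step(noisy)
denoised = haar_inverse(a, soft_threshold(d, thr=0.5))

print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Hard thresholding would instead set small coefficients exactly to zero and leave the rest untouched; multi-level transforms simply repeat haar_step on the approximation coefficients.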

Thresholding in the wavelet domain smooths the signal by removing or shrinking some coefficients of the wavelet transform of the measured signal; through the thresholding operation, the noise content of the signal is reduced effectively, even in a non-stationary environment. The two well-known thresholding techniques are soft thresholding and hard thresholding [Donoho (1994)].

6.7 Finance and random matrix theory

It is often necessary to describe the behavior of several (as many as 1000) firms in different sectors of the economy. Common sense suggests that their time evolution can be correlated (or anti-correlated), and a possible measure of these correlations one can derive from the correlations of their stock prices. The problem is that, although every pair of companies should interact either directly or indirectly, the precise nature of the interaction is unknown or extremely complicated. In economic systems, unlike in physical systems, there are fundamental difficulties in quantifying any kind of correlation between two stocks: there is no formalism or theory to calculate the "interaction" between two companies i, j, and it is not clear in which units this "interaction" could be measured. In physics, by contrast, there are examples even of "indirect" interactions, like "superexchange" or the RKKY interaction; the RKKY interaction, which is the dominant exchange interaction in metals where there is little or no direct overlap between neighboring magnetic electrons, acts through an intermediary, which in metals are the conduction electrons (itinerant electrons) [Ruderman(1954)]. Moreover, in economic systems correlations need not be just pairwise, but may rather involve clusters of stocks, and the correlations C_ij between any two pairs i, j of stocks change with time (non-stationarity).

The correlation matrix C has the elements

C_ij = (<Y_i Y_j> - <Y_i><Y_j>) / (\sigma_i \sigma_j),   (6.10)

where \sigma_i = \sqrt{<Y_i^2> - <Y_i>^2} is the standard deviation of the price changes of company i, and <...> denotes a time average over the period studied. The matrix C can be studied using Random Matrix Theory (RMT) [Wigner (1951)], which predicts the eigenspectrum of perfectly uncorrelated Gaussian random matrices. The deviations of the eigenspectrum of the matrix C from the universal predictions of RMT identify correlations and non-random properties of the specific system. There are also generalizations of Random Matrix Theory to non-Gaussian (Levy statistics) random variables, which exhibit so-called "heavy tails" and are more suitable for the description of econometric variables; see, for example, [Plerou (2000)].

6.8 Neural Networks

Artificial neural networks (ANNs), originally developed to mimic basic biological neural systems, the human brain particularly, are composed of a number of interconnected simple processing elements called neurons or nodes. Each node receives an input signal, which is the total information from other nodes or external stimuli, processes it locally through an activation or transfer function, and produces a transformed output signal to other nodes or external outputs. Many different ANN models have been proposed since the 1980s. Perhaps the most influential models are the multi-layer perceptrons (MLP); one can also mention other types of ANNs, such as radial-basis-function networks, ridge polynomial networks, Hopfield networks, Kohonen's self-organizing networks, and wavelet networks. For an explanatory or causal forecasting problem, the inputs to an ANN are usually the past variables,

Y_{n+1} = f(Y_n, Y_{n-1}, ..., Y_{n-d}),   (6.11)

so the ANN is equivalent to a nonlinear autoregressive model for time series forecasting problems. Before an ANN can be used to perform any desired task, it must be trained to do so. Basically, training is the process of determining the arc weights, which are the key elements of an ANN. The training algorithm is used to find the weights that minimize some overall error measure, such as the sum of squared errors (SSE) or the mean squared errors (MSE). For further details, one can read a nice book [Nrgaard (2000)].
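A one-hidden-layer perceptron trained as the nonlinear autoregressive model (6.11) can be sketched as follows. This is an illustration, not the book's setup: the data-generating map, network size, learning rate, and epoch count are arbitrary choices, and the training is plain full-batch gradient descent on the MSE.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nonlinear AR data: Y_{n+1} depends nonlinearly on Y_n, plus small noise.
y = np.zeros(400)
for n in range(399):
    y[n + 1] = np.sin(2.5 * y[n]) + 0.05 * rng.normal()

d = 2                                    # lags fed into the network
X = np.column_stack([y[d - i - 1:-1 - i] for i in range(d)])
target = y[d:]

# One-hidden-layer perceptron, trained by gradient descent on the MSE.
h = 8
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0
lr = 0.02
for epoch in range(3000):
    z = np.tanh(X @ W1 + b1)             # hidden activations
    pred = z @ W2 + b2
    err = pred - target
    mse = np.mean(err ** 2)
    # backpropagation of the error through the two layers
    gW2 = z.T @ err / len(err); gb2 = err.mean()
    gz = np.outer(err, W2) * (1 - z ** 2)
    gW1 = X.T @ gz / len(err); gb1 = gz.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print(mse)
```

The weights (W1, b1, W2, b2) play the role of the "arc weights" above; note that minimizing the MSE over all of them is exactly a multidimensional optimization problem.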
For more examples and possible applications of neural networks in identification, modelling, and control of dynamic systems, one can consult the same book [Nrgaard (2000)]. Note that, since training means minimizing an error measure over the weights, we again arrive at a multidimensional optimization problem!
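The comparison with the RMT prediction described in section 6.7 can be sketched for simulated, perfectly uncorrelated returns. The bounds below are the Marchenko-Pastur edges of the eigenvalue density for such a correlation matrix; the values of N and T are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

N, T = 100, 1000                        # N stocks, T observations; Q = T/N = 10
returns = rng.normal(size=(N, T))       # perfectly uncorrelated Gaussian "price changes"

# Correlation matrix, Eq. (6.10): C_ij = (<Y_i Y_j> - <Y_i><Y_j>) / (sigma_i sigma_j)
Y = returns - returns.mean(axis=1, keepdims=True)
sigma = Y.std(axis=1)
C = (Y @ Y.T) / T / np.outer(sigma, sigma)

eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur edges for uncorrelated series with Q = T/N:
Q = T / N
lam_min = (1 - np.sqrt(1 / Q)) ** 2
lam_max = (1 + np.sqrt(1 / Q)) ** 2
inside = np.mean((eigs > lam_min) & (eigs < lam_max))
print(lam_min, lam_max, inside)
```

For real market data, eigenvalues escaping well above lam_max signal genuine correlations (the largest one is usually interpreted as the collective "market mode"), which is exactly the deviation-from-RMT diagnostic mentioned in the text.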

6.9 Summary

In this chapter we gave a brief review of the general formulation of different forecasting problems and of different types of approaches for obtaining a satisfactory solution. We described how chaos theory can help us in the effective reduction of the model's dimensions. We outlined the main ideas behind the wavelet transform, which can be useful for filtering noisy data before making a forecast. We also mentioned some philosophical aspects of forecasting.


Webb. Rev. S. Dover Publications. 151 (1997). A Wiley- B. J. Phys. P. de Araujo. Phys. L. "Electrons in artificial atoms". Rev. K. Mesoscopic Phenomena in Solids. (1992). Carpenter. M. Inc. V. D. 413 (1996). Lett. "On Learning and Adaptation in the Economy. K. 8 1 . Karlov. A. "The Wigner molecule in a 2D quantum dot". Allen. "Handbook of Mathematical Functions". Abramovitz. LA. N. I. Stroud. W. Altshuler. C. (1975). Phys.. Wilson. Amsterdam. Ashoori. 280. D. Brown. Akman and M. P. 247902 (2001). 955 (1998). "Intence Resonant Interactions in Quantum Electronics". Bardeen. Lett. W. Walmsley. H. Eberly.Bibliography M. Lett. "Feedback quantum control of molecular electronic population transfer". L.. (1991). 277 (1999). Interscience Publication. 183 . S. Jr." Santa Fe Institute Paper 92-07-038. Weberand. and C. Arthur. Bacon. (1992). R. R. J. Whaley. Yakovlev. R. M. Physica E 4. N. E. "Optical resonance and two-level atoms". Warren. Chem. Springer-Verlag. V. New York. A. Akulin. V. "Analytic Solution for Strong-Field Quantum Control of Atomic Wave Packets". "Coherence-Preserving Quantum Bits". B. R. Stegun. and K B. Eds. L. Elsevier. V. Lee and R. Nature 379. 87. (1972). Tomak. E.

Haug. Eberl. R. D. S. G. D. Phys. S. "Coherent Optical Control of the Quantum State of a Single Quantum Dot". Blum. J. J. D. Christov. A. Rev. N. G. Lett. A 63. Rev. Bulatov. N. Phys. W. (1998). Bonadeo. Bogdan. V. A. 217 (1986). King. "Single-electron tunneling through a double quantum dot: The artificial molecule". "Density matrix theory and applications". A 60. B 5 3 . and H. 26. Phys. 19 (1941). (1981). L. "Shaped-pulse optimization of coherent emission of high-harmonic soft X-rays". Chem.Theory and experiment". Math. "Extracting qualitative dynamics from experimental data" Physica D 20. Phys. T. 13. H. Maugin. Burrage. S Wallentowitz. 4875 (1999). 4718 (2000). M. G. (2001). J. S. Rabitz. G. M.Conference on Uncertainty. Geophys. Astron. A. Erland. Girard. Nature 406.184 Optimal control and forecasting of complex dynamical systems R. Science 282. Walmsley. "Nonadiabatic control of BoseEinstein condensation in optical traps". Roy. Vugmeister. 4862 (1998). "High Strong Order Methods for Noncommutative Stochastic Ordinary Differential Equation Systems and the Magnus Formula" . Rev. B. Pfannkuche. J. Caputo. Murnane. Kosevich. 108. K. D. 1 (2001). H. Rabitz. J. "Quantum states of interacting electrons in a real quantum dot". Phys. C. Vdovin. M. 34. Gaz. Blick. M. "Formulae for numerical differentiation". E. A. Soc. R. 164 (2000). Katzer and D. A. Kapteyn. Plenum. Klitzing and K. Bickley. Brixner and G. 7899 (1996). 557 (2001). Gammon. M. G. H. 529 (1967). Zeek. Broomhead. Bartels. M. Backus. Blanchet. "Decoherence of molecular vibrational wave packets: Observable manifestations and control criteria". M. LA. A. . D. "Femtosecond polarization pulse shaping". "Linear models of dissipation whose Q is almost frequency independent". Misoguti. H. P. 25. B 61. E. "Temporal coherent control in the photoionization of Cs2'. 063404. Physica D. P. Rev. C. K. Park. K. "Soliton complex dynamics in strongly dispersive medium". Weis. Steel. Burrage and P. M. Maksym. 1473 (1998). 
Opt. v. Bruce and P. I. Wave Motion. New York. Brif. Bouchene and B. Gerber.

K. Phys. Donoho. M. 77. C. Studies in Nonlinear Dynamics and Econometrics. 74. B. Sokolov. 288 (1995).and Accelerating Super-Diffusion Governed by Distributed Order Fractional Diffusion Equations". W. Anal. (1994). J. Gorenflo. B. Chui. J. (1992). Microscopic simulations in physics". Mod. "Phasecontrolled currents in semiconductors". H. Phys. J. Sarkar. Deb.Bibliography 185 D. P. Liu. 198 (1996). Rev. "Fault-Tolerant Error Correction with Efficient Quantum Codes". 5. King. M. Cole. W. 8. H. P. art. Lett. (1992). H. M. 1015 (1995). (2004). Phys. Phys. "Molecular geometry optimization with a genetic algorithm". 046129 (2002). Williams. M. Sherwin and C. P. Zurek. Theory. Phys. Ooms. B. Rev. C. Miquel. 3260 (1996). "Inference and Forecasting for ARFIMA Models With an Application to US and UK Inflation".. IEEE Trans. DiVincenzo. 7 1 . Wasilewski. "Coherent Manipulation of Semiconductor Quantum Bits with Terahertz Radiation". 14. Deaven and K. "Ten Lectures on Wavelets". K. "De-Noising by Soft Thresholding". K. Rev. A. D. Phys. Rev. Elect. C. Ho. A. E 66. Daubechies. M. 60 (2001). Inform. 438 (1999). "Interacting electrons in polygonal quantum dots". Dupont. Shor. "Two-bit gates are universal for quantum computation". and I. Rev. Rev. R. Nature 410. Trans. R. Laflamme. SIAM. 205 (1999). D. T. Paz. An Introduction to Wavelets. DiVincenzo and P. E. B. E. M. Stanley. Academic Press Inc. 1 (1997). "Multi-objective genetic algorithms: Problem difficulties and construction of test problems". Doornik. J. 10719 (1999). 3596 (1995). D. . V. Rev. Lett. Chechkin. L. "An Algorithm for the Numerical Solution of Differential Equations of Fractional Order". R. Phys. 7. Phys. "Retarding Sub. 75. E.. Jefferson and S. 77. Num. Ceperley. Evolutionary Computation Journal. Hausler. P. M. D. "Perfect Quantum Error Correcting Code". Lett. and W. B 59. Buchanan and Z. Lett. I. Corkum. A 5 1 . Rev. S. Creffield. R. "Quantum Monte Carlo simulations. Diethelm. C.

"An evolutionary algorithm to calculate the ground state of a quantum system". "Lowest Energy Structures of Gold Nanoclusters". Lozovik. T. "Higher Transcendental Functions". Rev. Heinzel. Grifoni. Garcia. Governale. V. Phys. Feit. Fornberg and D. A. V.186 Optimal control and forecasting of complex dynamical systems A. Jr. and S. J. A. "Calculus of variations". Iserles. Sanchez-Portal and J. I. Bennemann. Lett.. R. P. Steiger. Beltran. Grigorenko. E. Speer. Phys. T. M. Garcia and K. J. 413. Fomin. K. "Two-Particle Systems Determined Using Quantum Genetic Algorithms". 203 (1994). A. Grigorenko and M. Bonitz. D.M. Lett. M. "Solution of the Schrodinger equation by a spectral method". in Acta Numerica 1994. Ederlyi. Rev. Khveshchenko. "Robust Two-Qubit Quantum Registers". E. "Theory for the Optimal Control of Time-Averaged Quantities in Quantum Systems". Comput. Garcia. M. "Single-Step Implementation of Uni- . Phys. B. 47. Nature. J. M. Grigorenko and D. "Numerical investigations on the switching behavior of magnetic tunnel junctions in the quasi-static and dynamic regime". 233003 (2002). D. Khveshchenko. Fuhrer. Rev. Posada-Amarillas. McGrau-Hill. 1600 (1998). 040506 (2005). L. D. and Yu. 268. I. M. 131 (2000). W. A. "Energy spectra of quantum rings". Soler. 81. Lett. K. "Wigner crystallization in mesoscopic 2D electron systems". "Decoherence and dephasing in coupled Josephson-junction qubits". I. Gelfand and S. Ensslin. E. E. I. Fleck. Grigorenko. 235309 (2002). Phys. Phys. B 65. Phys. 439 (2001). S. Garzon. I. 55. Lett. Rev. (2000). Cambridge University Press. A. Grigorenko and M. 273 (2001). Luscher. Phys. and G. I. 94. I. Dover Publications. O. New York. Bauer. Cambridge. Rev. Sloan. 86. Filinov. E. 119 (2001).V. H. Europhys. Schon. M. (1955). and M. Ihn. "Coherent control of photon-assisted tunneling between quantum dots: A theoretical approach using genetic algorithms". Chem. Michaelian. 89. Wegscheider and M. A. 822 (2001). Artacho.V. Physica A 291. Bichler. M. 
Fassbender and M. E. Lett. Physica A 284. 412 (1982). 3851 (2001). I. Ordejon. edited by A. Garcia. Grigorenko.


Index

adiabatic approximation
artificial agents
autoregressive models
B-gate
Bloch-Redfield equations
brachistochrone problem
Caputo derivative
chaos theory
chaotic behavior
CNOT gate
complex systems
convexity
crossover operation
decoherence problem
decoherence-free subspaces
degenerate functionals
density matrix
Dirac delta function
distance between functions
double quantum dot
economical modelling
El Farol bar problem
electron pump
Euler-Lagrange equation
Euler-Ostrogradskii equation
evolutionary gradient search
Fermat's variational principle
fidelity
financial markets
Floquet formalism
forecast
fractional derivative
Frobenius trace norm
Fundamental Lemma
genetic algorithm
genetic operators
global optimization
Gray code
ground state problem
halting problem
Heisenberg coupling
high-frequency time series
inter-qubit coupling
Ising coupling
isoperimetric problem
Jacobian elliptic function
Josephson qubits
Lagrange multiplier
Lagrangian
least square estimation
Lorenz attractor
Lyapunov exponents
Magnus series
Makhlin's invariants
Markovian approximation
mathematical pendulum
Mittag-Leffler function
multi-agent models
multi-photon resonance
multiobjective optimization
mutation operation
nanostructures
neural networks
No Free Lunch Theorem
noise reduction
null controllability
objective function
optimal control
optimal control theory
Pareto front
Pareto optimality
partition function
photon-assisted tunnelling
Poincaré-Bendixson theorem
Pontryagin Maximum Principle
purity
quantronium
quantum fluctuations
quantum gate
Quantum Genetic Algorithm
quantum Liouville equation
qubit
Rabi oscillations
random matrix theory
realistic constraints
relaxation
Riemann-Liouville derivative
Ritz's method
rotating wave approximation
sensitivity analysis
short-term predictions
simplex method
simulated annealing
sine-Gordon equation
Snell's law
solar activity
soliton
stochastic optimization
strange attractor
Stückelberg oscillations
supercoherent
SWAP gate
Takens's embedding theorem
tautochronism
thermal fluctuations
three-body problem
time ordering operator
time-delay reconstruction
topological qubits
transversality condition
travelling salesman problem
Tucker's theorem
two level system
two-qubit register
universal gates
variational calculus
wavelet analysis
weighted-sum method
Wigner molecule
Wilkinson's problem

OPTIMAL CONTROL and FORECASTING of COMPLEX DYNAMICAL SYSTEMS

This important book reviews applications of optimization and optimal control theory to modern problems in physics, nano-science and finance. Starting from a brief review of the history of variational calculus, the book discusses optimal control theory and global optimization using modern numerical techniques. Key elements of chaos theory and basics of fractional derivatives, which are useful in control and forecast of complex dynamical systems, are presented, and different methods of forecasting complex dynamics are discussed. The coverage includes several interdisciplinary problems to demonstrate the efficiency of the presented algorithms. The theory presented here can be efficiently applied to various problems, such as the determination of the optimal shape of a laser pulse to induce certain excitations in quantum systems, the optimal design of nanostructured materials and devices, or the control of chaotic systems and minimization of the forecast error for a given forecasting model (for example, artificial neural networks).

ISBN 981-256-660-0
World Scientific
www.worldscientific.com
